tensorflow/tensorflow
Conv2D feeding into LSTM breaks model for inference
Bug
System information:
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.4.0
- Python version: 3.8

Describe the current behavior:
A Conv2D layer with a filter size (1, F), where F is the number of features, delivers inconsistent results while feeding a chunked stream to a model. I identified this Conv2D layer as the breaking point in my model for speech recognition. The model works fine without the Conv2D layer, using chunked and non-chunked input; however, if the Conv2D is present, chunking does not work anymore.

Describe the expected behavior:
The error between the chunked and non-chunked model output should always be exactly 0.0, or at least very close to it.

Standalone code to reproduce the issue:
The code below generates plots as shown below to illustrate the potential issue. Essentially, it generates a toy model which feeds input of the form (batch, time, feature, channel) into the Conv2D layer and projects this down to (batch, time, 1, filters). Note the tf.concat before the Conv2D layer, which artificially creates this input in the example. After the convolution we just drop the third axis and feed the result into the LSTM layers.

```python
import string
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


def get_model(vocab_size: int, embed_size: int, hidden_size: int, conv2d_kwargs: dict = None):
    input_ = layers.Input(shape=(None,), dtype=tf.int32)
    embed = layers.Embedding(vocab_size + 1, embed_size, mask_zero=True)
    x = embed(input_)
    if conv2d_kwargs is not None:
        x = layers.Lambda(lambda l: tf.expand_dims(l, axis=1))(x)
        x = layers.Lambda(lambda t: tf.concat([t, t, t], axis=1))(x)
        x = layers.Conv2D(**conv2d_kwargs)(x)
        x = layers.Lambda(lambda l: tf.squeeze(l, axis=2))(x)
    lstm_out = layers.LSTM(hidden_size, return_sequences=True, return_state=True)(x)
    x = lstm_out[0]
    lstm_out = layers.LSTM(hidden_size, return_sequences=False, return_state=True)(x)
    x = lstm_out[0]
    output = layers.Dense(vocab_size, activation='softmax')(x)
    return keras.Model(inputs=input_, outputs=output)


def infer(model, input_, states=None):
    x = input_
    new_states = []
    # We're just interested in the output of the Conv2D layer
    # and the succeeding LSTM layers
    layers_of_interest = model.layers[2:]
    for layer in layers_of_interest:
        if isinstance(layer, layers.LSTM):
            idx = len(new_states)
            output = layer(x, initial_state=states[idx] if states is not None else None)
            x, new_state = output[0], output[1:]
            new_states.append(new_state)
        else:
            x = layer(x)
    return x, new_states


def eval_plot(model, sample_text: str, char2idx: dict, chunk_size: int):
    enc_chunks = []
    states_train = None
    nb_chunks = int(np.ceil(len(sample_text) / chunk_size))
    for i in range(nb_chunks):
        s = i * chunk_size
        e = s + chunk_size
        text_chunk = sample_text[s:e]
        test_input = list(map(lambda c: char2idx[c], text_chunk))
        test_input = tf.constant([test_input], dtype=tf.int32)
        encoded, states_train = infer(model, test_input, states=states_train)
        enc_chunks.append(encoded)

    test_input = list(map(lambda c: char2idx[c], sample_text))
    test_input = tf.constant([test_input], dtype=tf.int32)
    enc_full, _ = infer(model, test_input)
    enc_full = enc_full.numpy()[0]

    chunk_concat = tf.concat(enc_chunks, axis=1).numpy()[0]
    diff = chunk_concat - enc_full
    rmspe = np.sqrt(np.mean(np.square(diff / enc_full))) * 100

    import matplotlib.pyplot as plt
    fig, axes = plt.subplots(1, 3, figsize=(16, 9), sharex='all', sharey='all')
    ax1, ax2, ax3 = axes
    ax1.matshow(chunk_concat.T)
    ax1.set_title('chunked')
    ax2.matshow(enc_full.T)
    ax2.set_title('full')
    ax3.matshow(diff.T)
    ax3.set_title(f'diff (RMSPE: {rmspe:.8f})')
    for ax in axes:
        ax.set_xlabel('time t')
    ax1.set_ylabel('feature f')
    plt.tight_layout()
    plt.show()
    return rmspe


def main():
    vocab = set(string.ascii_lowercase + ' ')
    vocab_size = len(vocab)
    idx2char = {idx + 1: char for idx, char in enumerate(vocab)}
    char2idx = {char: idx for idx, char in idx2char.items()}

    units = 32
    embed_size = 16
    filters = units * 2

    model_conv_params = dict(
        vocab_size=vocab_size,
        embed_size=embed_size,
        hidden_size=units,
        conv2d_kwargs=dict(filters=filters, kernel_size=(1, embed_size), use_bias=False),
    )
    model_conv = get_model(**model_conv_params)

    sample_text = 'how are you buddy'
    for chunk_size in range(1, 11):
        print(f'creating plot for chunk size {chunk_size}')
        rmspe = eval_plot(model_conv, sample_text, char2idx, chunk_size)
        print(f'- RMSPE: {rmspe:.4f}')
    print('all done')


if __name__ == '__main__':
    main()
```

Toy example plots: [image] [image]

Real model plots:
The following are plots from a speech recognition model which uses this simple Conv2D operation before it is sent into LSTM layers. One plot is after 1000 steps of training, the other after 200. As you can see, the MSE computed for each frame in the last image increases over time (note the y-axis range). This model breaks when using chunking and does not produce any meaningful output anymore after training it until early stopping. It does work without chunking, though. [image] [image]

On the other hand, using the same model but removing the Conv2D layer gives an MSE for each frame somewhere below 6e-14, which is basically zero. [image]
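A Conv2D with kernel size (1, E) acts independently on every time step: it is just a matrix multiply over the feature axis, so chunked and full inference should agree (the reported expectation). A minimal NumPy sketch of that property, with hypothetical shapes rather than the reporter's model:

```python
import numpy as np

rng = np.random.default_rng(0)
T, E, F = 12, 16, 8          # time steps, features (embed size), filters
x = rng.normal(size=(T, E))  # one sequence: (time, feature)
w = rng.normal(size=(E, F))  # a (1, E) conv kernel is just a per-timestep matmul

full = x @ w                                       # non-chunked pass
chunks = [x[s:s + 4] @ w for s in range(0, T, 4)]  # chunked passes, no carried state
chunked = np.concatenate(chunks, axis=0)

print(np.abs(chunked - full).max())  # ~0: the conv carries no state across time
```

Any divergence between chunked and full output therefore has to come from state handling around the conv (e.g., the tf.concat/squeeze plumbing or masking), not from the convolution arithmetic itself.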
tensorflow/tensorflow
Gradient computation fails in graph mode with numpy_function
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: -
- TensorFlow installed from (source or binary): binary (conda-forge)
- TensorFlow version (use command below): v2.3.0-rc2-23-gb36436b087, v2.4.0-49-g85c8b2a817f
- Python version: 3.7.7
- Bazel version (if compiling from source): -
- GCC/compiler version (if compiling from source): -
- CUDA/cuDNN version: 10.1
- GPU model and memory: Titan RTX, 24 GB

Describe the current behavior:
I'm trying to use a function implemented in Cython in a loss function. The int64 output of this function is used as indices for tf.gather when computing the loss. In eager execution mode, gradients are computed fine, but when running in graph mode an exception is raised.

Describe the expected behavior:
I would expect the same result in graph execution mode.

Standalone code to reproduce the issue:
A tiny sample where the Cython function simply implements argmin:

```python
import tensorflow as tf
import numpy as np
import assign

model = tf.keras.applications.ResNet50(False, weights=None, input_shape=(224, 224, 3))
data = tf.random.normal((8, 224, 224, 3))


def f(x):
    y = model(x)
    idx = tf.numpy_function(assign.assign_object_to_cell, [y], np.int64)
    # idx = tf.argmin(y, 1)
    # idx = tf.numpy_function(np.argmin, [y, 1], np.int64)
    values = tf.gather(y, idx, batch_dims=3)
    return tf.reduce_sum(values)


def grads(x):
    with tf.GradientTape() as tape:
        y = f(x)
    return tape.gradient(y, model.trainable_variables)


tf_grads = tf.function(grads)

print('eager execution')
g = grads(data)
print('graph execution')
g = tf_grads(data)
```

And the assign.pyx:

```cython
import numpy as np
cimport numpy as np
cimport cython

@cython.boundscheck(False)  # turn off bounds-checking for entire function
@cython.wraparound(False)   # turn off negative index wrapping for entire function
cdef void assign_object_to_cell(float[:, :, :, :] error, np.int64_t[:, :, :] argmax) nogil:
    cdef int N = error.shape[0], H = error.shape[1], W = error.shape[2], C = error.shape[3]
    cdef int y, x, c, best_id
    cdef float best, e
    for n in range(N):
        for y in range(H):
            for x in range(W):
                best_id = 0
                best = error[n, y, x, 0]
                for c in range(1, C):
                    e = error[n, y, x, c]
                    if e
```

Other info / logs (traceback abridged; frames reconstructed from the report):

```
  line 25, in <module>: g = tf_grads(data)
  D:\conda\envs\tf2\lib\site-packages\tensorflow\python\eager\def_function.py:780 __call__
  ...\eager\def_function.py:823 _call
  ...\eager\def_function.py:697 _initialize
  ...\eager\function.py:2855 _get_concrete_function_internal_garbage_collected
  ...\eager\function.py:3213 _maybe_define_function
  ...\eager\function.py:3075 _create_graph_function
  ...\framework\func_graph.py:986 func_graph_from_py_func
  ...\framework\func_graph.py:973 wrapper
    raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:

    :19 grads  ->  return tape.gradient(y, model.trainable_variables)
    ...\eager\backprop.py:1073 gradient
    ...\eager\imperative_grad.py:77 imperative_grad
    ...\eager\backprop.py:162 _gradient_function
    ...\ops\array_grad.py:678 _GatherV2Grad
    ...\ops\array_grad.py:603 _BatchGatherGrad
    ...\ops\array_grad.py:582 _GetBatchIndices

    TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
```

Note that both eager and graph mode work if I replace tf.numpy_function with tf.argmin, and the results are the same up to rounding errors. Graph execution also fails when using np.argmin.
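The report states that the Cython helper "simply implements argmin" over the channel axis. That equivalence can be checked in plain NumPy (a sketch of the semantics, not the compiled code):

```python
import numpy as np

rng = np.random.default_rng(1)
error = rng.normal(size=(2, 4, 4, 5)).astype(np.float32)

# Loop version mirroring the Cython kernel: per (n, y, x),
# find the index of the smallest channel value
out = np.empty(error.shape[:3], dtype=np.int64)
for n in range(error.shape[0]):
    for y in range(error.shape[1]):
        for x in range(error.shape[2]):
            best_id, best = 0, error[n, y, x, 0]
            for c in range(1, error.shape[3]):
                if error[n, y, x, c] < best:
                    best, best_id = error[n, y, x, c], c
            out[n, y, x] = best_id

print(np.array_equal(out, np.argmin(error, axis=3)))  # True
```

This is consistent with the reporter's observation that swapping in tf.argmin gives the same values: the difference in graph mode is not the values but that tf.numpy_function outputs carry no static shape, which the GatherV2 gradient then trips over.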
tensorflow/tensorflow
TPU model.fit 2 GB of RAM limit (TF version 2.4)
Bug
Describe the current behavior:
There is an error (session crashes for an unknown reason) when trying to use a dataset with a size of more than 2 GB of RAM. The error is reproducible for different step sizes (256, 1024, 8192).

Describe the expected behavior:
No error when feeding a dataset which is more than 2 GB of RAM.

Code to reproduce the issue:
To reproduce, just copy the following code to Colab with TPU enabled. The bug can happen due to the 2 GB limit in protobuf, since TensorFlow relies on it.

```python
import tensorflow as tf
import numpy as np
import distutils

if distutils.version.LooseVersion(tf.__version__) < '1.14':
    raise Exception(
        'This notebook is compatible with TensorFlow 1.14 or higher; '
        'for TensorFlow 1.13 or lower please use the previous version.')

import os
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
# This is the TPU initialization code that has to be at the beginning.
tf.tpu.experimental.initialize_tpu_system(resolver)
print('All devices:', tf.config.list_logical_devices('TPU'))

# optimizer = tf.tpu.CrossShardOptimizer(tf.train.GradientDescentOptimizer(0.01))

strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    # create model
    model = tf.keras.applications.VGG16(input_shape=(32, 32, 3), classes=10, weights=None)
    optimizer = tf.keras.optimizers.Adam()
    model.compile(optimizer=optimizer, loss='mse', metrics=['mse'])

training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
    'training_accuracy', dtype=tf.float32)

# 1024 * 192 * 32 * 32 * 3 * 4 bytes = 2,415,919,104 bytes > 2 GB
# -> session crashes for unknown reason
x = np.zeros((1024 * 192, 32, 32, 3), dtype=np.float32)
y = np.ones((1024 * 192, 10), dtype=np.float32)

# 1 GB: works
# x = np.zeros((1024 * 192 // 2, 32, 32, 3), dtype=np.float32)
# y = np.ones((1024 * 192 // 2, 10), dtype=np.float32)

model.fit(x, y, epochs=10, steps_per_epoch=1024)  # batch size is 192 per TPU

print(tf.version.VERSION)      # 2.4.1
print(tf.version.GIT_VERSION)  # v2.4.1-0-g85c8b2a817f
```

Logs: 106828116-a4958500-66dd-11eb-9ae6-06606e561bc4
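The arithmetic behind the reported threshold, and the signed 32-bit protobuf message limit it runs into, checks out in plain Python (no TensorFlow needed):

```python
n_samples = 1024 * 192  # steps * per-TPU batch size, as in the report

big = n_samples * 32 * 32 * 3 * 4         # the float32 input array, in bytes
small = n_samples // 2 * 32 * 32 * 3 * 4  # the halved array that works

print(big)              # 2415919104, matching the number in the report
print(big > 2**31 - 1)  # True: exceeds a signed-32-bit protobuf message limit
print(small > 2**31 - 1)  # False: the halved (~1.2 GB) case stays under it
```

This is why feeding the same data via a tf.data pipeline (which streams batches instead of embedding the whole array as one constant) sidesteps the crash.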
tensorflow/tensorflow
Some optimizers don't work on GPU with Embedding layers
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04.5 LTS
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.4.0-rc4-71-g582c8d236cb 2.4.0
- Python version: 3.7.9
- CUDA/cuDNN version: CUDA Version 11.0
- GPU model and memory: Tesla T4, 15 GB

Describe the current behavior:
With some optimizers I get this error consistently:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation ReadVariableOp: Could not satisfy explicit device specification because the node (colocation node ReadVariableOp) was colocated with a group of nodes that required incompatible device '/job:localhost/replica:0/task:0/device:GPU:0'. All available devices: /job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:GPU:0.
```

The full error message is in this gist. It looks like these three optimizers have an issue with embeddings: Adadelta, Adagrad, Ftrl. None of the other optimizers have this issue (I tested all eight of them). Code to reproduce this is in the gist.

Describe the expected behavior:
All optimizers should work with embeddings, or the documentation needs to reflect which ones are not compatible.

Standalone code to reproduce the issue: code to reproduce this is in the gist.

Other info / logs: any logs or source code that would be helpful to diagnose are in the gist.
tensorflow/tensorflow
Error in Keras Tokenizer texts_to_sequences
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag: bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: No
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.4.0
- Python version: 3.7.4
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior:
When a few texts are given to the Keras Tokenizer, texts_to_sequences produces the right sequences, but when we have a large number of texts it produces wrong sequences.

Describe the expected behavior:
Expected the same number of non-zero values for few texts and large texts. In the example below, for few_texts, rows 0 and 5 have 2 and 4 non-zero values respectively, but for texts (where we have many rows, as shown below) those rows have 1 and 3 non-zero values. The first 8 rows in texts are the same as in few_texts. These are just two rows; there are many rows like this.

Standalone code to reproduce the issue (sample.txt attached):

```python
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np
import pickle

few_texts = ['product market', 'business marketing', 'entrepreneur business',
             'investing money', 'money investing', 'business money investment investing',
             'strategy market', 'marketing investment']

tokenizer = Tokenizer(num_words=25)
tokenizer.fit_on_texts(few_texts)
sequences = tokenizer.texts_to_sequences(few_texts)
print('sequences:', sequences)
word_index = tokenizer.word_index
data = pad_sequences(sequences, maxlen=25, padding='post')
print('Shape of data tensor:', data.shape)
data
```

```
sequences: [[7, 3], [1, 4], [8, 1], [5, 2], [2, 5], [1, 2, 6, 9], [10, 3], [4, 6]]
Shape of data tensor: (8, 25)
array([[7, 3, 0, ..., 0],
       [1, 4, 0, ..., 0],
       [8, 1, 0, ..., 0],
       [5, 2, 0, ..., 0],
       [2, 5, 0, ..., 0],
       [1, 2, 6, 9, 0, ..., 0],
       [10, 3, 0, ..., 0],
       [4, 6, 0, ..., 0]])
```

```python
with open('sample.txt', 'rb') as fp:
    seed = pickle.load(fp)
texts = seed
len(texts)   # 1000
texts[0:8]   # same 8 rows as few_texts above

tokenizer = Tokenizer(num_words=25)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
data = pad_sequences(sequences, maxlen=25, padding='post')
print(data)
print('Shape of data tensor:', data.shape)
data[0], data[5]
```

```
Shape of data tensor: (1000, 25)
array([5, 0, 0, ..., 0])      <- row 0: one non-zero value
array([1, 3, 10, 0, ..., 0])  <- row 5: three non-zero values
```
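One plausible explanation for the dropped tokens is the num_words cutoff: the Tokenizer keeps every word in word_index, but texts_to_sequences emits only words whose frequency rank is below num_words, and with 1000 texts the ranks shift so some of the original words fall out of the top 25. A simplified pure-Python model of that behavior (not the actual Keras implementation):

```python
from collections import Counter

def to_sequences(texts, num_words):
    # Simplified model of Keras' Tokenizer: rank words by frequency
    # (most frequent word gets index 1), then keep only indices < num_words.
    counts = Counter(w for t in texts for w in t.split())
    rank = {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}
    return [[rank[w] for w in t.split() if rank[w] < num_words] for t in texts]

few = ["product market", "business money investment investing"]
# 30 distinct filler words, each repeated 20 times, outrank the original words
extra = ["extra%d extra%d" % (i, i) for i in range(30) for _ in range(10)]
many = few + extra

print(to_sequences(few, num_words=25))       # [[1, 2], [3, 4, 5, 6]]
print(to_sequences(many, num_words=25)[:2])  # [[], []]: same rows, words dropped
```

The same input rows produce shorter sequences once higher-frequency words push the originals past the cutoff, which matches the observed row-0 and row-5 behavior.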
tensorflow/tensorflow
Update pooling.py
Bug
Updates the value of the strides argument in the example snippet. Fixes the change as suggested in #46998.
tensorflow/tensorflow
Documentation mistake in tf.keras.layers.MaxPool2D Python section
Bug
URL(s) with the issue: (link to the tf.keras.layers.MaxPool2D documentation)

Description of issue (what needs changing):
A documentation mistake in the tf.keras.layers.MaxPool2D Python section example.

Clear description:
There is a documentation mistake in the tf.keras.layers.MaxPool2D Python section example. In the description part, there are a few examples given with their respective code snippets. Under the text "For example, for strides=(2, 2) and padding='valid'", the code snippet of the example writes strides=(1, 1) instead of strides=(2, 2).
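The two stride settings are easy to tell apart by their output shapes, which is why the mismatch in the docs is misleading. A NumPy sketch of 2x2 max pooling with 'valid' padding (hypothetical helper, not the Keras implementation):

```python
import numpy as np

def max_pool2d(x, pool=(2, 2), strides=(1, 1)):
    # x: (H, W); returns the 'valid'-padded max-pooling output
    H, W = x.shape
    ph, pw = pool
    sh, sw = strides
    out_h = (H - ph) // sh + 1
    out_w = (W - pw) // sw + 1
    return np.array([[x[i * sh:i * sh + ph, j * sw:j * sw + pw].max()
                      for j in range(out_w)] for i in range(out_h)])

x = np.arange(16.0).reshape(4, 4)
print(max_pool2d(x, strides=(1, 1)).shape)  # (3, 3)
print(max_pool2d(x, strides=(2, 2)).shape)  # (2, 2)
print(max_pool2d(x, strides=(2, 2)))        # [[ 5.  7.] [13. 15.]]
```

So a snippet captioned strides=(2, 2) but written with strides=(1, 1) would print a 3x3 result where the text promises 2x2.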
tensorflow/tensorflow
TF Lite Micro: EXPECT_NEAR(inf, inf) gives incorrect result for some platforms
Bug
micro/kernels/exp_test.cc checks that two inf values are near each other (L67-L72). This works OK for all the CI targets, but breaks the Xtensa build:

```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=fusion_f1 XTENSA_CORE=F1_190305_swupgrade test_kernel_exp_test -j8
```

fails with:

```
Testing SingleDim
expected_output_data[i] (inf) near output_data[i] (inf) failed at tensorflow/lite/micro/kernels/exp_test.cc:54
0/1 tests passed
~~~SOME TESTS FAILED~~~
```

The underlying issue is that the EXPECT_NEAR macro takes a difference of two infinities, which (at least with the Xtensa toolchain) can give a NaN, which in turn results in the check failing even though inf == inf is true (L153-L165).
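The failure mode is not Xtensa-specific arithmetic so much as IEEE-754: inf - inf is NaN, and every comparison with NaN is false, so a naive |a - b| <= tol check fails for two equal infinities. A Python sketch of the same logic (the macro itself is C++):

```python
import math

a = b = math.inf
# A naive "near" check like EXPECT_NEAR: fabs(a - b) <= tolerance
diff = a - b
print(diff)               # nan: IEEE-754 defines inf - inf as NaN
print(abs(diff) <= 1e-5)  # False: any comparison with NaN is false, check fails
print(a == b)             # True: an equality pre-check would have passed
```

This suggests the fix shape: short-circuit on exact equality (which handles matching infinities) before taking the difference.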
tensorflow/tensorflow
Makefile fails when setting CO_PROCESSOR=ethos_u
Bug
```
tensorflow/micro: make: No rule to make target 'tensorflow/lite/micro/tools/make/ext_libs/ethos_u.inc'. Stop.
```

System information:
- Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS
- TensorFlow: from most recent commit

Describe the problem:
make: No rule to make target 'tensorflow/lite/micro/tools/make/ext_libs/ethos_u.inc'. Stop.

Please provide the exact sequence of commands/steps when you ran into the problem:

```
make -f tensorflow/lite/micro/tools/make/Makefile -j8 TARGET=cortex_m_generic TARGET_ARCH=cortex-m55 OPTIMIZED_KERNEL_DIR=cmsis_nn CO_PROCESSOR=ethos_u microlite
```
tensorflow/tensorflow
Initialization of global pointer variable seems suspect with our current use of Renode
Bug
tensorflow/micro: I hit this issue while working on another PR. If I create a global pointer variable explicitly initialized to nullptr and check whether the variable is nullptr in my factory function, the check always returns false. AFAICT this behavior is specific to our use of Renode; I have not been able to reproduce it on Linux or with the Xtensa simulator. I have a workaround that should allow #46904 to be merged, and I will then update this issue with a clean way to reproduce this error.
tensorflow/tensorflow
Quantization: post-training quantization using TFLiteConverter isn't working in TF 2.3
Bug
Hello there 👋

System information:
- Have I written custom code: not one bit
- OS Platform and Distribution: Linux Ubuntu 20.04
- TensorFlow installed from: pip
- TensorFlow version: 2.3.1
- Python version: 3.8.5
- CUDA/cuDNN version: CUDA 11.0, cuDNN 8.0.5
- GPU model and memory: GeForce RTX 2070 Max-Q, 8 GB RAM

Describe the current behavior:
There seems to be an issue in TF 2.3 where post-training quantization does not seem to actually work: the quantized version is no smaller than its fp16 counterpart (using the post-training conversion tutorial from the documentation). This seems to have been fixed in TF 2.4, but I didn't see any related issue or mention of a fix; I may have missed a few things 😅

Describe the expected behavior:
Using the same TFLite converter, the bare TFLite model should be strictly bigger than its fp16 counterpart, which in turn should be strictly bigger than its int8 counterpart.

Standalone code to reproduce the issue:
The following code raises an error in TF 2.3.1, but no longer in TF 2.4:

```python
import sys
import tensorflow as tf
import numpy as np
from tensorflow.keras import layers, Sequential

layers_ = [
    layers.Conv2D(8, 3, activation='relu', padding='same', input_shape=(224, 224, 3)),
    layers.GlobalAveragePooling2D(),
    layers.Flatten(),
    layers.Dense(10),
]
mock_model = Sequential(layers_)


def convert_to_fp16(tf_model):
    converter = tf.lite.TFLiteConverter.from_keras_model(tf_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]
    return converter.convert()


def quantize_model(tf_model, input_shape):
    converter = tf.lite.TFLiteConverter.from_keras_model(tf_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    # Float fallback for operators that do not have an integer implementation
    def representative_dataset():
        for _ in range(100):
            data = np.random.rand(1, *input_shape)
            yield [data.astype(np.float32)]

    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()


assert sys.getsizeof(convert_to_fp16(mock_model)) > \
    sys.getsizeof(quantize_model(mock_model, (224, 224, 3)))
```

Other info / logs:
No error is actually thrown during the execution, but the behaviour is not as expected. The same test passes between a bare TFLite conversion in fp32 and the fp16 conversion, but fails between fp16 and int8.
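The size ordering the assert expects is plain dtype arithmetic: the same weights stored at fp32, fp16, and int8 should shrink roughly 4:2:1, serialization overhead aside. A NumPy sketch of that expectation (a stand-in weight buffer, not a real flatbuffer):

```python
import numpy as np

weights = np.random.rand(1000).astype(np.float32)  # stand-in for model weights
fp32 = weights.nbytes                      # 4 bytes per value
fp16 = weights.astype(np.float16).nbytes   # 2 bytes per value
int8 = weights.size * 1                    # 1 byte per value after int8 quantization

print(fp32, fp16, int8)    # 4000 2000 1000
print(fp32 > fp16 > int8)  # True: the ordering the assert above expects
```

Note that sys.getsizeof on the converted bytes object is only a proxy (it adds a fixed Python object overhead to len()); comparing len() of the flatbuffers directly gives the same ordering with less noise.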
tensorflow/tensorflow
Custom "best" metric not tracking best accuracy
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom code in which the bug manifests, but the test code is slightly edited stock example code from the TensorFlow docs
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS and Linux
- TensorFlow installed from (source or binary): via pip
- TensorFlow version (use command below): 2.4.0
- Python version: 3.8.2

Describe the current behavior:
I've created a custom metric which tracks the maximum achieved result of a metric during a training run, so, for example, it should reflect the best model accuracy seen so far at any given point in time. When I apply this to the simple MNIST code from the docs, where the accuracy continuously moves up at the epoch level, this custom metric does not exactly match the raw accuracy, though it roughly tracks with it.

Describe the expected behavior:
The expected behaviour in this case would be for the custom metric to exactly match the accuracy, since the accuracy improves monotonically at the epoch level. It seems like a bug that it does not; however, I admit that it could be a hole in my understanding of TensorFlow. Perhaps there's some kind of off-by-one error here in how I'm interpreting the metric's state, maybe?

Another comment which may or may not be related: in working with multiple class instances of TensorFlow metric objects in Jupyter notebooks, I noticed that sometimes I have to be very liberal with my usage of reset_state, even though I have separate class instances of the same metric. This makes me suspect that at the root of both this and the bug I outlined in this issue there's some kind of unintentionally shared state between multiple instances of the same metric class. If I can reproduce this someday, I'll post it in a separate issue; I just want to include full context in case this provides clues that tie to other posted issues in TF's GitHub repo.

Standalone code to reproduce the issue: reproducible in this Colab notebook.
tensorflow/tensorflow
Add missing exception to Model.fit docstring
Bug
The shuffle argument to the fit method in the Model class gets ignored if the input x to the method is a tf.data.Dataset object, but this isn't documented in the docstring of the method.

Signed-off-by: Suraj Upadhyay
Closes #46492
tensorflow/tensorflow
tf.transpose crashes (aborts) if a is complex
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.1.0
- Python version: 3.7.6
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior:
tf.transpose crashes (aborts) if a is complex and conjugate=True.

Describe the expected behavior:
Expected no crash.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
tf.transpose(conjugate=True, a=complex(1))
```

Output:

```
2021-02-03 17:58:05.565680: F tensorflow/core/kernels/transpose_functor.h:169] Check failed: in dims 2 (0 vs. 2)
Aborted (core dumped)
```
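For comparison, NumPy handles the degenerate 0-d case gracefully rather than aborting: the conjugate transpose of a scalar is just its conjugate, since there are no axes to permute. A sketch of the expected (non-crashing) semantics:

```python
import numpy as np

a = complex(1, 2)
# Conjugate transpose: for a 0-d input there is nothing to permute,
# so the result should simply be the complex conjugate.
result = np.conj(np.transpose(a))
print(result)  # (1-2j)
```

A graceful tf.transpose would either return the conjugated scalar the same way or raise a Python-level InvalidArgumentError, instead of failing a CHECK and taking down the process.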
tensorflow/tensorflow
TensorFlow Lite returns zeros from the third run onwards
Bug
Hello everyone. I built a TensorFlow Lite (1.15.0 / 1.15.5) C++ lib for Android. The first and second times, the interpreter output is normal, but after that the interpreter returns all zeros. I'm reading frames from the camera video. [image] [image] [image]

Here is my code:

```cpp
int init() {
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromBuffer(model_buffer, model_size);
    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (interpreter->AllocateTensors() != kTfLiteOk) {
        return 1;
    } else {
        input = interpreter->inputs()[0];
        return 0;
    }
}

int detect() {
    // android camera frame, processed image
    memcpy(interpreter->typed_tensor<float>(input), detect_mat.data,
           256 * 256 * 3 * sizeof(float));
    TfLiteTensor* predict_tensor = interpreter->tensor(interpreter->outputs()[0]);
    float* detect_out_data = predict_tensor->data.f;
    for (int i = 0; i < 20; i++) {
        printf("detect_out_data[%d]: %f\n", i, detect_out_data[i]);
    }
}
```

init() is an initialization function that only runs once, and the frames of the Android camera are always processed by detect(). In the results, only the first and second returns are correct; after that, the results are all zero or NaN.

Here are my compile options:

```
bazel build //tensorflow/lite:libtensorflowlite.so --crosstool_top=//external:android/crosstool --cpu=arm64-v8a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt="-std=c++11"
```

TensorFlow versions are 1.15.0 and 1.15.5.
tensorflow/tensorflow
Update nn_impl.py
Bug
Removes the redundant name argument. Fixes the change suggested in issue #40592.
tensorflow/tensorflow
Crash when Dense 2D without bias on Android GPU delegate
Bug
System information:
- Have I written custom code: Yes
- OS Platform and Distribution: Ubuntu 20.04 and Win10
- Mobile device: Huawei Honor V30 Pro (OXF-AN10)
- TensorFlow installed from: pip
- TensorFlow version: 2.4.1
- Python version: 3.8
- CUDA/cuDNN version: 11.0 / 8.0.5
- GPU model and memory: GTX 1080 Ti

You can collect some of this information using our environment capture script: v2.4.0-49-g85c8b2a817f 2.4.1

Describe the current behavior:

```
A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0
```

Describe the expected behavior:
Works fine, as when running the model on the mobile CPU.

Standalone code to reproduce the issue:

```python
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow as tf

x = layers.Input(shape=(256, 64, 3))
y = layers.Dense(1, use_bias=False)(x)  # here use_bias=False is the key point
y = layers.GlobalMaxPool2D()(y)
model = keras.Model(inputs=x, outputs=y)
model.summary()

converter = tf.lite.TFLiteConverter.from_keras_model(model)
model_input = model.inputs[0]
input_shape = model_input.shape
model_input.set_shape([1, *input_shape[1:]])
lite_model = converter.convert()
with open('issue_dense_2dmustbias_android_gpu.tflite', 'wb') as fp:
    fp.write(lite_model)
```

Attachment: issue_dense_2dmustbias_android_gpu.zip
tensorflow/tensorflow
TFLM: keyword benchmark breaks when using generated Makefile project
Bug
tensorflow/micro

System information:
- Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu
- TensorFlow installed from (source or binary): source
- TensorFlow version: (commit SHA if from source)
- Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): -

Describe the problem:
When building the keyword benchmark project like this:

```
make -f tensorflow/lite/micro/tools/make/Makefile generate_keyword_benchmark_make_project
```

I get 2 errors. One is due to micro_benchmark.h not being copied into the generated project. The other is a duplicate object error for g_keyword_scrambled_model_data, which happens because keyword_scrambled_model_data.cc somehow appears twice in the generated Makefile. I will open a PR with a fix shortly.

Please provide the exact sequence of commands/steps when you ran into the problem: (see above)
tensorflow/tensorflow
LinearRegression example in ForwardAccumulator docstring has an error
Bug
URL(s) with the issue: (ForwardAccumulator docstring, L234)

Description of issue (what needs changing):
I was trying to understand precisely how ForwardAccumulator/JVPs work, and was working through the example to which I have linked: computing by hand what I believe to be the correct JVP formula and comparing it to the linear regression code. I find that in the line

```python
loss = tf.reduce_sum((dense(x) - tf.constant([1., -1.])) ** 2.)
```

the target tf.constant([1., -1.]) (which I would call y) should be reshaped, for example replaced by tf.constant([[1.], [-1.]]). The code as it stands introduces a multiplicative factor of n in the loss function, where n is the number of samples. This same problem occurs in other places in this docstring.

Would it be possible to include somewhere in this docstring a formula for the Jacobian-vector product?
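The inflated loss the reporter describes comes from broadcasting: subtracting a shape-(2,) target from a shape-(2, 1) prediction silently yields a (2, 2) residual matrix, so the sum runs over n*n terms instead of n. A NumPy sketch of the two shapes:

```python
import numpy as np

# dense(x) for n = 2 samples, perfectly predicting the targets: shape (2, 1)
pred = np.array([[1.], [-1.]])
y_flat = np.array([1., -1.])     # shape (2,), as written in the docstring
y_col = np.array([[1.], [-1.]])  # shape (2, 1), the proposed fix

print((pred - y_flat).shape)     # (2, 2): broadcasting produces n*n residuals
bad = np.sum((pred - y_flat) ** 2)
good = np.sum((pred - y_col) ** 2)
print(bad, good)  # 8.0 0.0: a "perfect" fit has nonzero loss with the flat target
```

The column-shaped target makes the subtraction elementwise, so a model that matches the targets exactly gets the zero loss one would expect.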
tensorflow/tensorflow
GPT2 int8 quantization: op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 || op_context.input->type == kTfLiteInt16 || op_context.input->type == kTfLiteFloat16 was not true. Node number 15 (DEQUANTIZE) failed to prepare
Bug
TF 2.4.1, Huggingface transformers 4.2.2, Python 3.8

Describe the current behavior:
The int8 quantization during the export has no error or problem, but it throws an error when it is invoked:

```
RuntimeError: tensorflow/lite/kernels/dequantize.cc:61 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 || op_context.input->type == kTfLiteInt16 || op_context.input->type == kTfLiteFloat16 was not true. Node number 15 (DEQUANTIZE) failed to prepare.
```

Note: if we only use last_hidden_state for TFLite, there is no problem. It seems like tf.matmul is the problem.

Describe the expected behavior: -

Standalone code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to Colab/Jupyter or any notebook):

```python
import random
import numpy as np
import tensorflow as tf
from transformers import TFGPT2Model, GPT2LMHeadModel

rng = random.Random()
gpt2_model = TFGPT2Model.from_pretrained('distilgpt2')


def get_tf_lm_head_tensor():
    gpt2_lm_pt_model = GPT2LMHeadModel.from_pretrained('distilgpt2')
    np_tensor = gpt2_lm_pt_model.lm_head.weight.detach().numpy()
    np_tensor = np.transpose(np_tensor)
    tf_lm_head_tensor = tf.convert_to_tensor(np_tensor)
    return tf_lm_head_tensor


tf_lm_head = get_tf_lm_head_tensor()


@tf.function(input_signature=[tf.TensorSpec(shape=[1, None], dtype=tf.int32, name='input_ids')])
def serving_func(input_ids):
    output = gpt2_model(input_ids, training=False)
    last_hidden_state = output[0][0][-1]
    next_token_logits = tf.matmul([last_hidden_state], tf_lm_head)
    next_token = tf.math.argmax(next_token_logits, axis=-1, output_type=tf.int32)
    log_probs = tf.math.reduce_max(tf.nn.log_softmax(next_token_logits))
    return {'decoded_ids': next_token, 'log_probs': log_probs}


tensors = []
for _ in range(100):
    values = [rng.randint(0, 30000) for _ in range(8)]
    tensors.append(np.array([values], dtype=np.int32))


def representative_dataset_gen():
    for t in tensors:
        yield [t]


converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [serving_func.get_concrete_function(
        tf.TensorSpec(shape=[1, None], dtype=tf.int32, name='input_ids'))])
converter.experimental_new_converter = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
    tf.lite.OpsSet.TFLITE_BUILTINS_INT8,
]
tflite_quant_model = converter.convert()
with open('/tmp/model.tflite', 'wb') as f:
    f.write(tflite_quant_model)
```

Invoking the model:

```python
import tensorflow as tf

# Load the TFLite model and allocate tensors
interpreter = tf.lite.Interpreter(model_path='/tmp/model.tflite')
interpreter.allocate_tensors()

# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

import numpy as np

# Test the TensorFlow Lite model on random input data
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.int32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
```

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
Bug (or is it me)? Using a generator with a Dataset, it is impossible to correctly specify shapes
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 / Ubuntu 18.04 / macOS
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.4.0-rc4-71-g582c8d236cb 2.4.0
- Python version: 3.8.5
- Bazel version (if compiling from source): -
- GCC/Compiler version (if compiling from source): -
- CUDA/cuDNN version: 11.1
- GPU model and memory: 1080Ti, 8 GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

**Describe the current behavior**
Creating a functional model with multiple outputs seems to work fine if all of the data is in memory or is sourced from files, but with a generator it seems to be impossible to properly define the shapes. I've tried both the deprecated output options and the current `output_signature` option, to no avail. It is entirely possible it is just me, but if so, it is possible the documentation might be a tad incorrect.

**Describe the expected behavior**
I would expect that the shapes of the data can be properly defined and used for `fit`.

**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook. This code will compile fine but fails to fit with an error concerning the data shapes. I've redefined the data to be any of a variety of shapes; as soon as I switch to multiple outputs this fails.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, models

def generate_sample():
    x = list("123456789")
    y = list("2345")
    while 1:
        yield (np.array(x).astype(np.float32),
               (np.array(y).astype(np.float32), np.array(y).astype(np.float32)))

dataset = tf.data.Dataset.from_generator(
    generate_sample,
    output_signature=(tf.TensorSpec(shape=(9,), dtype=tf.float32),
                      tf.TensorSpec(shape=(2, 4), dtype=tf.float32)))
dataset = dataset.batch(batch_size=32)

inputs = keras.Input(shape=next(generate_sample())[0].shape)
x = layers.Dense(512, activation="relu")(inputs)
output_x = layers.Dense(4, activation="relu", name="output")(x)
output_y = layers.Dense(4, activation="relu", name="output2")(x)
model = keras.Model(inputs=inputs, outputs=[output_x, output_y])
model.compile(loss="mse", optimizer="adam", metrics=["accuracy"])
history = model.fit(dataset, epochs=1, steps_per_epoch=10,
                    validation_data=dataset, validation_steps=5)
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

```
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>
----> 1 history = model.fit(dataset, epochs=1, steps_per_epoch=10, validation_data=dataset, validation_steps=5)

.../tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
-> 1100               tmp_logs = self.train_function(iterator)

.../tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
--> 828       result = self._call(*args, **kwds)

.../tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
--> 888       return self._stateless_fn(*args, **kwds)

.../tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
-> 2942     return graph_function._call_flat(
            filtered_flat_args, captured_inputs=graph_function.captured_inputs)

.../tensorflow/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
-> 1918       return self._build_call_outputs(self._inference_function.call(
            ctx, args, cancellation_manager=cancellation_manager))

.../tensorflow/python/eager/function.py in call(self, ctx, args, cancellation_manager)
--> 555           outputs = execute.execute(...)

.../tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
---> 59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
                                               inputs, attrs, num_outputs)

InvalidArgumentError:  Incompatible shapes: [32,2,4] vs. [32,4]
	 [[node mean_squared_error/SquaredDifference]] [Op:__inference_train_function_12605]

Function call stack:
train_function
```
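The `[32,2,4]` vs `[32,4]` error arises because the model has two `(4,)`-shaped outputs while the declared signature describes the labels as one stacked `(2, 4)` tensor. A toy pure-Python structure matcher (illustrative only; `structures_match` is my own name, not a TF API) shows why a nested tuple of specs is needed when the generator yields a tuple of label arrays:

```python
def structures_match(value, spec):
    """Toy version of the structure check behind from_generator's
    output_signature: shapes are lists of ints, nested structures are tuples."""
    if isinstance(spec, list):
        # Leaf: the shape must match exactly.
        return isinstance(value, list) and value == spec
    # Nested structure: same arity, every element matches recursively.
    return (isinstance(value, tuple) and len(value) == len(spec)
            and all(structures_match(v, s) for v, s in zip(value, spec)))

sig_flat   = ([9], [2, 4])          # what the code above declares
sig_nested = ([9], ([4], [4]))      # what a two-output model expects
sample     = ([9], ([4], [4]))      # what the generator actually yields

assert structures_match(sample, sig_nested)
assert not structures_match(sample, sig_flat)
```

If that is the cause, the likely fix (my assumption, not verified against this exact report) is `output_signature=(tf.TensorSpec(shape=(9,)), (tf.TensorSpec(shape=(4,)), tf.TensorSpec(shape=(4,))))` — a nested tuple mirroring the generator's yield structure.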
tensorflow/tensorflow
Missing CI for OPTIMIZED_KERNEL_DIR=cmsis_nn with the MVEI extension
Bug
TensorFlow Micro

**System information**
- Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary):
- TensorFlow version (commit SHA if source):
- Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.):

**Describe the problem**
The CI script `tensorflow/lite/micro/tools/ci_build/test_stm32f4.sh` tests `OPTIMIZED_KERNEL_DIR=cmsis_nn` with the DSP extension. However, there is no equivalent test for the MVEI extension (i.e. Cortex-M55).

**Please provide the exact sequence of commands / steps when you ran into the problem**
tensorflow/tensorflow
To save subclassed Keras models, the from_config method is mandatory
Bug
**URL(s) with the issue**: (the "custom objects" section of the saving guide)

**Description of issue (what needs changing)**: The guide states that to save/load custom layers or a subclassed model, the `get_config` and (optionally) `from_config` methods should be overridden. However, in the case of subclassed models, the definition of `from_config` is needed and not optional.

**Clear description**: Overriding `from_config` is needed because the method of the base class `Model` calls `functional_from_config`, which looks for the key `layers` in the config and therefore raises an exception in the case of a subclassed model.

**Submit a pull request?** Submitted a PR to update the guide.
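A minimal pure-Python analogue of the failure the report describes (class and key names are made up for illustration; this is not the Keras source): the inherited `from_config` assumes a functional-style config with a `layers` key, so a subclass storing a different config must override `from_config` itself.

```python
class BaseModel:
    @classmethod
    def from_config(cls, config):
        # Base behaviour: assumes a functional-model config with a
        # "layers" key; raises KeyError for a subclassed model's config.
        _layers = config["layers"]
        return cls()

class SubclassedModel(BaseModel):
    def __init__(self, units=8):
        self.units = units

    def get_config(self):
        return {"units": self.units}

    # Without this override, BaseModel.from_config(config) fails
    # because the config has no "layers" key.
    @classmethod
    def from_config(cls, config):
        return cls(**config)

m = SubclassedModel(units=16)
restored = SubclassedModel.from_config(m.get_config())
assert restored.units == 16
```

Calling `BaseModel.from_config({"units": 16})` directly raises `KeyError: 'layers'`, which mirrors why the guide's "optionally" wording is misleading for subclassed models.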
tensorflow/tensorflow
Problematic args description table for axis in tf.nn.softmax
Bug
**URL(s) with the issue**: the args documentation, version 2.4.1

**Description of issue (what needs changing)**: In the args description table it says `axis` defaults to -1, while the code below "View aliases" uses `axis=None`, and according to its equivalent code, `axis` defaults to `None` in the call to `reduce_sum`. So which one is right?
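For reference, a tiny pure-Python softmax over the last axis of a 2-D input (an illustrative sketch, not the TF source) shows what the documented `axis=-1` behavior means; whether the default is literally `-1` or `None` later resolved to the last axis is exactly what this report asks:

```python
import math

def softmax_last_axis(rows):
    """Softmax over the last axis of a 2-D list of lists."""
    out = []
    for row in rows:
        m = max(row)                          # subtract max for numerical stability
        exps = [math.exp(v - m) for v in row]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out

probs = softmax_last_axis([[1.0, 2.0, 3.0]])
assert abs(sum(probs[0]) - 1.0) < 1e-12       # each row sums to 1
assert probs[0][2] > probs[0][1] > probs[0][0]  # order of logits is preserved
```

Reducing over the last axis means each row of the input independently becomes a probability distribution, regardless of how many leading batch dimensions there are.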
tensorflow/tensorflow
Multi-GPU training works on an Intel CPU but fails on an AMD CPU
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary, and source with -march=native
- TensorFlow version (use command below): 2.4
- Python version: 3.6.9
- Bazel version (if compiling from source): 3.1.0
- GCC/Compiler version (if compiling from source): 7.5.0
- CUDA/cuDNN version: 11.0 / 8.0.4
- GPU model and memory: 2x GTX 1080 / Titan (Pascal), 12 GB

**Describe the current behavior**
With the standard tensorflow-gpu 2.4 distribution installed using `pip3 install tensorflow-gpu==2.4`, the same script was executed on the following two systems:
- System 1: Intel CPU i7-6850K, 2x Titan (Pascal) 12 GB, Ubuntu 18.04
- System 2: AMD Threadripper 1950X, 2x Titan (Pascal) 12 GB, Ubuntu 18.04

The script works on System 1 but gets stuck on System 2. The script is stuck on the following line, with both GPUs at 100% utilization:

```python
model.fit(train_dataset, epochs=12, callbacks=callbacks)
```

**Describe the expected behavior**
The multi-GPU training should work on the system with the AMD CPU. The system can run the same script successfully with only one GPU assigned to the runtime (`export CUDA_VISIBLE_DEVICES=0`). The problem might be CPU-related. I have also tried TensorFlow compiled from source (version 2.4, GPU, with -march=native) and the result is the same (stuck).

**Standalone code to reproduce the issue**

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

System 1 (Intel CPU) runs the entire script successfully:

```
Epoch 1/12
INFO:tensorflow:batch_all_reduce: 6 all-reduces with algorithm = nccl, num_packs = 1
INFO:tensorflow:batch_all_reduce: 6 all-reduces with algorithm = nccl, num_packs = 1
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
(the Reduce/broadcast INFO line repeats several more times)
INFO:tensorflow:batch_all_reduce: 6 all-reduces with algorithm = nccl, num_packs = 1
3/469 - ETA: 2:40 - loss: 2.2512 - accuracy: 0.1276
WARNING:tensorflow:Callback method `on_train_batch_begin` is slow compared to the batch time (batch time: 0.0065s vs `on_train_batch_begin` time: 0.0727s). Check your callbacks.
WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0065s vs `on_train_batch_end` time: 0.0402s). Check your callbacks.
```

System 2 (AMD Threadripper CPU) gets stuck on the `model.fit` line:

```
Epoch 1/12
INFO:tensorflow:batch_all_reduce: 6 all-reduces with algorithm = nccl, num_packs = 1
INFO:tensorflow:batch_all_reduce: 6 all-reduces with algorithm = nccl, num_packs = 1
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
(the Reduce/broadcast INFO line repeats several more times, then no further output)
```
tensorflow/tensorflow
micro: port op FAKE_QUANT from lite
Bug
TensorFlow Micro

**System information**
- Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): source
- TensorFlow version (commit SHA if source): master
- Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): SparkFun Edge

**Describe the problem**
I am about to port the TF Lite kernel op FAKE_QUANT to TFLite Micro.

**Please provide the exact sequence of commands / steps when you ran into the problem**
- PR 1: refactor the flatbuffer conversion/parsing functions
- PR 2: refactor the reference implementation from `lite/kernels/internal/reference/reference_ops.h` into its own header, with only the changes needed to pass internal CI build checks
- PR 3: copy the kernel from lite to micro and make the micro op and its testing code work
tensorflow/tensorflow
TFLite converter version incompatibility issue
Bug
2. Code: please find this notebook (link).
3. Failure after conversion: model conversion works with TensorFlow 2.3.0 and the tf-nightly version, and does not work with TensorFlow 2.4 (the default Colab version). Similarly, model inference works only with TensorFlow 2.3.0, and does not work with both TensorFlow 2.4 and tf-nightly.

cc @abattery @khanhlvg
tensorflow/tensorflow
PoseNet model README points to the wrong paper reference
Bug
**URL(s) with the issue**: please provide a link to the documentation entry, for example: (link)

**Description of issue (what needs changing)**: As far as I read, the reference is pointing to the wrong PoseNet paper. It points to a paper which is also called PoseNet, but that one is about camera relocalization, not body pose detection.

**Correct links**: according to this blog post, I think the correct papers are here and here.
tensorflow/tensorflow
tf.linalg.triangular_solve adjoint option description is wrong
Bug
Link: (tf.linalg.triangular_solve documentation)

The following description about `adjoint` looks the other way around to me:

> If `adjoint` is `True` then the innermost matrices in output satisfy matrix equations `sum_k matrix[..., i, k] * output[..., k, j] = rhs[..., i, j]`.
> If `adjoint` is `False` then the innermost matrices in output satisfy matrix equations `sum_k adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j]`.
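To see which wording is right, here is a pure-Python forward-substitution solver for a lower-triangular system (an illustrative sketch, not TF code). With `adjoint=False`, the solution satisfies `sum_k matrix[i][k] * output[k][j] = rhs[i][j]` — i.e. the plain (non-adjoint) equation, which is the one the docs quoted above attach to the `adjoint=True` case:

```python
def solve_lower_triangular(matrix, rhs):
    """Solve L x = b for lower-triangular L by forward substitution
    (the adjoint=False case: the matrix itself multiplies the output)."""
    n = len(matrix)
    x = [0.0] * n
    for i in range(n):
        s = sum(matrix[i][k] * x[k] for k in range(i))
        x[i] = (rhs[i] - s) / matrix[i][i]
    return x

L = [[2.0, 0.0],
     [1.0, 3.0]]
b = [4.0, 5.0]
x = solve_lower_triangular(L, b)
# Verify sum_k L[i][k] * x[k] == b[i]: matrix times output, not adjoint(matrix).
for i in range(2):
    assert abs(sum(L[i][k] * x[k] for k in range(2)) - b[i]) < 1e-12
```

The adjoint case would instead satisfy `adjoint(L) x = b`, confirming the report: the two sentences in the documentation appear swapped.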
tensorflow/tensorflow
error: 'tfl.max_pool_2d' op quantization parameters violate the same scale constraint
Bug
1. **System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installation (pip package or built from source): pip
- TensorFlow library version (if pip package) or GitHub SHA (if built from source): tf-nightly 2.5.0-dev20210112

2. **Code**: (Colab notebook, #scrollTo=bos2yevkhrmx)

3. **Failure after conversion**
If the conversion is successful but the generated model is wrong, then state what is wrong:
- Model produces wrong results and/or has lesser accuracy
- Model produces correct results, but it is slower than expected

4. **(optional) RNN conversion support**
If converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title.

5. **(optional) Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

```
Exception                                 Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    216                                                  debug_info_str,
    217                                                  enable_mlir_converter)
--> 218       return model_str

Exception: /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:206:0: error: 'tfl.max_pool_2d' op quantization parameters violate the same scale constraint: !quant.uniform<...> vs. !quant.uniform<...>
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/pooling.py:300:0: note: called from
/usr/local/lib/python3.6/dist-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize_wrapper.py:167:0: note: called from
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py:625:0: note: called from
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:1032:0: note: called from
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py:563:0: note: called from
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py:428:0: note: called from
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:1032:0: note: called from
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/tflite_keras_util.py:184:0: note: called from
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:612:0: note: called from

During handling of the above exception, another exception occurred:

ConverterError                            Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    218       return model_str
    219     except Exception as e:
--> 220       raise ConverterError(str(e))
    221
    222   if distutils.spawn.find_executable(_toco_from_proto_bin) is None:

ConverterError: (same MLIR diagnostics as above)
```
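Background on the constraint itself (my understanding, not stated in this report): quantized MAX_POOL_2D simply takes the max of the int8 values and requires input and output to share the same scale/zero-point, because only then does the integer-domain max decode to the quantized float-domain max. A pure-Python sketch:

```python
def quantize(x, scale, zero_point):
    """Affine int8 quantization with clamping."""
    return max(-128, min(127, round(x / scale) + zero_point))

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

xs = [0.1, 0.7, -0.3]
scale, zp = 0.01, 0

# With a shared scale/zero-point, max in the integer domain equals
# the quantized max in the float domain:
q_max = max(quantize(x, scale, zp) for x in xs)
assert dequantize(q_max, scale, zp) == dequantize(quantize(max(xs), scale, zp), scale, zp)

# With a different output scale, copying the integer max straight through
# would decode to the wrong float value:
out_scale = 0.02
assert dequantize(q_max, out_scale, zp) != dequantize(q_max, scale, zp)
```

This is why the converter refuses mismatched scales around a pooling op rather than silently producing a model that decodes incorrectly.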
tensorflow/tensorflow
BQML k-means TFLite: CALL_ONCE op doesn't support multiple subgraphs with input
Bug
Filed on request of the TFLite group.

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 11.0.1
- TensorFlow installed from (source or binary): pip (tf-nightly)
- TensorFlow version (use command below): 2.5.0-dev20210123 (git v1.12.1-49539-g18d8bcbe72b)
- Python version: 3.7

**Describe the current behavior**
- Exported the SavedModel from BQML to GCS
- Loaded the SavedModel and tested it (Screen Shot 2021-01-26 at 12.21.34 PM)
- Created the TFLite model using `from_saved_model` (Screen Shot 2021-01-26 at 12.26.28 PM)
- Loaded the TFLite model with the Interpreter to test (Screen Shot 2021-01-26 at 12.29.29 PM)

**Describe the expected behavior**
Load the TFLite model and make a prediction using the Interpreter to validate a successful conversion.

**Standalone code to reproduce the issue**
A toy SavedModel from the BQML k-means notebook to repro, and the TFLite model, can be found here (link).
tensorflow/tensorflow
Crash on tflite::Interpreter::AllocateTensors()
Bug
I'm trying to make an audio classifier (an Android YAMNet) in C++ over JNI. However, I get a crash on `tflite::Interpreter::AllocateTensors()`; my TFLite version is (maybe) 2.3. Here is my code snippet:

```cpp
bool YamNet::Classify(const std::vector<float>& wave_data, const int sample_rate,
                      const int top_k, std::vector<Recognition>* recognitions) {
  static const int kDefaultSampleRate = 16000;  // Hz
  if (kDefaultSampleRate != sample_rate) {
    TRACE_ERR("Error: YAMNet input must be 16kHz");
    assert(false);
    return false;
  }
  std::vector<int> input_tensor_indices = interpreter_->inputs();
  interpreter_->ResizeInputTensor(
      input_tensor_indices[/* input index of wave */ 1],
      {(int)wave_data.size()});
  interpreter_->AllocateTensors();
  // ...
```

And then I get the crash as follows:

```
2021-01-27 18:15:36.478 20012-20012/com.tomato.ketchup A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x28 in tid 20012 (tomato.ketchup), pid 20012 (tomato.ketchup)
2021-01-27 18:15:36.839 20012-20055/com.tomato.ketchup E/ketchup: CSeedUdp ProcessSend sendto failed, errno = 101 (Network is unreachable), fd = 58
2021-01-27 18:15:36.859 20105-20105/? A/DEBUG: *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
2021-01-27 18:15:36.859 20105-20105/? A/DEBUG: Build fingerprint: 'Xiaomi/wayne/wayne:9/PKQ1.180904.001/V11.0.5.0.PDCCNXM:user/release-keys'
2021-01-27 18:15:36.859 20105-20105/? A/DEBUG: Revision: '0'
2021-01-27 18:15:36.859 20105-20105/? A/DEBUG: ABI: 'arm64'
2021-01-27 18:15:36.859 20105-20105/? A/DEBUG: pid: 20012, tid: 20012, name: tomato.ketchup  >>> com.tomato.ketchup <<<
2021-01-27 18:15:36.859 20105-20105/? A/DEBUG: signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x28
2021-01-27 18:15:36.859 20105-20105/? A/DEBUG: Cause: null pointer dereference
2021-01-27 18:15:36.859 20105-20105/? A/DEBUG:     x0 0000000000000000  x1 0000000000000000  x2 0000007c4b800000  x3 0000000000000004
2021-01-27 18:15:36.859 20105-20105/? A/DEBUG:     (full register dump x4–x28, x29, sp, lr, pc follows)
2021-01-27 18:15:37.182 20105-20105/? A/DEBUG: backtrace:
2021-01-27 18:15:37.182 20105-20105/? A/DEBUG:     #00 pc 0000000000235018  /data/app/com.tomato.ketchup-v1z97zn_3brgxudk3rsqea==/base.apk (offset 0xedbb000) (tflite::Subgraph::ModifyGraphWithDelegate(TfLiteDelegate*)+252)
2021-01-27 18:15:37.182 20105-20105/? A/DEBUG:     #01 pc 0000000000238b98  /data/app/com.tomato.ketchup-v1z97zn_3brgxudk3rsqea==/base.apk (offset 0xedbb000) (tflite::Interpreter::AllocateTensors()+220)
```

Does anyone have any idea on this?
tensorflow/tensorflow
tf.matmul and tf.tensordot behave differently in converted concrete functions in TensorFlow Lite
Bug
1. **System information**
- OS Platform and Distribution: macOS 10.15 and Ubuntu 18.04 LTS (on Colab machines)
- TensorFlow installation (pip package or built from source): pip
- TensorFlow library version (if pip package) or GitHub SHA (if built from source): tensorflow 2.4.0

2. **Code**: this notebook demonstrates the bug with the simplest example I came up with.

3. **Failure after conversion**: I implemented RFFT for TFLite using `tf.matmul` and saved the module (concrete function), but invoking the saved TFLite model repeatedly returns different results. However, replacing `tf.matmul` with `tf.tensordot` fixes the strange behavior. Therefore I have prepared the notebook above to demonstrate the bug. I have noticed some interesting cases which change the behavior:
- if the negative sign is removed from the output returned from `DummyMatmul` or `DummyTensordot`, the resulting outputs are the same
- if we use `tf.Module` directly, the outputs are the same (Colab demo shows it)
- somehow the size of the right-hand-side matrix matters (Colab demo shows it)
- the difference occurs after the first iteration (Colab demo shows it), and for some inputs it gets larger with every iteration
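For reference, `tf.tensordot` with `axes=1` and `tf.matmul` compute the same product for 2-D operands, so the discrepancy the notebook shows must come from how the converter lowers the two ops, not from their math. A pure-Python check (illustrative only, with independent implementations of each):

```python
def matmul(a, b):
    """Plain 2-D matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def tensordot_axes1(a, b):
    """tensordot with axes=1: contract a's last axis with b's first axis.
    For 2-D inputs this is exactly a matrix product."""
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for k in range(inner):
            for j in range(cols):
                out[i][j] += a[i][k] * b[k][j]
    return out

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
assert matmul(a, b) == tensordot_axes1(a, b) == [[19.0, 22.0], [43.0, 50.0]]
```

Since the two are mathematically identical on 2-D inputs, divergent TFLite results between them point at the converted graphs (e.g. different fused kernels or state handling), which is consistent with the report.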
tensorflow/tensorflow
The TAGS command line option is no longer supported in the TFLM Makefile
Bug
TensorFlow Micro

I was following the guide "Build an Arm Cortex-M voice assistant with Google TensorFlow Lite" (link below). When I ran the command below:

```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=mbed TAGS="cmsis-nn disco_f746ng" generate_micro_speech_mbed_project
```

this error popped out:

```
The TAGS command line option is no longer supported in the TFLM Makefile.
```

I tried to remove TAGS, but after it compiled it did not work as expected. I also tried OPTIMIZED_KERNEL_DIR; it returned the error below:

```
tensorflow/lite/micro/tools/make/Makefile:552: disco_f746ng.inc: No such file or directory
make: *** No rule to make target 'disco_f746ng.inc'.  Stop.
```

Correct me if I am wrong: based on my understanding, `cmsis` and `disco_f746ng` are folders specified under `tensorflow/tensorflow/lite/micro/examples/micro_speech`, and the original TAGS option was used to search out the Makefile.inc inside both folders and build the files. So the question may be: what can I use to replace TAGS, or is there any other way to use OPTIMIZED_KERNEL_DIR? Thank you in advance for the help.
tensorflow/tensorflow
Typo: Recognize Flowers with TensorFlow Lite on Android
Bug
**URL(s) with the issue**: (codelab step 4)

**Description of issue (what needs changing)**:

> Copy the TensorFlow Lite model model.tflite and labels.txt that you trained earlier to the assets folder at lite/codelabs/flower_classification/start/app/src/main/assets/.

The path used above is incorrect and needs to be updated to `lite/codelabs/flower_classification/android/start/app/src/main/assets/`.

Thanks, George
tensorflow/tensorflow
Converting a model with int8 fake-quant nodes from a SavedModel file fails
Bug
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.15.6
- TensorFlow installed from (source or binary): tensorflow 2.4.1 PyPI pip package
- TensorFlow version (or GitHub SHA if from source): 2.4.1

**Command used to run the converter or code if you're using the Python API**
Suppose `keras_model` is a Keras model containing QAT fake-quantize ops like `tf.quantization.fake_quant_with_min_max_vars`. Then the following conversion code using `from_saved_model` fails to output a valid int8 model and throws an error:

```python
with open('converted_saved_model.tflite', 'wb') as f:
    tf.keras.models.save_model(keras_model, '/tmp/saved_model', save_format='tf')
    converter = tf.lite.TFLiteConverter.from_saved_model('/tmp/saved_model')
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    f.write(converter.convert())
```

Note that `from_keras_model` works correctly; see a minimal reproduction in the linked Colab notebook.

**The output from the converter invocation**
The above snippet fails with `ValueError: The inference_input_type and inference_output_type must be tf.float32.` For more details, see the linked Colab notebook.

**Also, please include a link to the saved model or GraphDef**
Can be found in the linked Colab notebook.

**Failure details**
Attempting to convert and set the inference input/output types to int8 causes the error above. Attempting to convert without setting those types (such that they default to float) doesn't cause an error, but does yield a broken int8 model with lots of dangling quant/dequant op pairs.
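For context on what the fake-quant nodes in such a model do: `fake_quant_with_min_max_vars` simulates quantization in float by snapping each value onto the integer grid implied by `[min, max]` and mapping it back. A simplified pure-Python sketch of that round trip (illustrative only; the real TF op additionally "nudges" the zero point so that 0.0 is exactly representable, which this simplification skips):

```python
def fake_quant(x, min_val, max_val, num_bits=8):
    """Quantize-then-dequantize x onto the grid implied by [min_val, max_val]."""
    levels = 2 ** num_bits - 1
    scale = (max_val - min_val) / levels
    x = max(min_val, min(max_val, x))     # clamp into the representable range
    q = round((x - min_val) / scale)      # nearest grid index in [0, levels]
    return min_val + q * scale            # back to float

# Values snap to the nearest representable level; the error is at most scale/2:
scale = 2.0 / 255
y = fake_quant(0.5, -1.0, 1.0)
assert abs(y - 0.5) <= scale / 2
# Out-of-range inputs are clamped to the range endpoints:
assert abs(fake_quant(5.0, -1.0, 1.0) - 1.0) < 1e-9
```

During conversion these nodes carry the learned `[min, max]` ranges; the report's broken output (dangling quant/dequant pairs) suggests the `from_saved_model` path fails to fold them into real integer tensors the way `from_keras_model` does.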
tensorflow/tensorflow
tf.keras.backend.reshape aborts when shape contains large values
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.1.0
- Python version: 3.7.6
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**
`tf.keras.backend.reshape` aborts when `shape` contains large values.

**Describe the expected behavior**
Expect an exception message if the input is not as expected, instead of a crash.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf
import numpy as np
tf.keras.backend.reshape(x=1, shape=np.array([21943, 45817, 30516, 61760, 38987], dtype=np.uint16))
```

Output:

```
2021-01-26 15:32:50.289333: F tensorflow/core/framework/tensor_shape.cc:405] Check failed: 0 <= new_num_elements (0 vs. -1)
Aborted (core dumped)
```
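The CHECK-fail looks like overflow while multiplying the requested dimensions (their product is astronomically larger than the one-element input). A pure-Python sketch of the kind of validation one would expect instead of a crash (illustrative, not the actual TF fix; `validate_reshape` is a made-up name):

```python
def validate_reshape(num_elements, shape):
    """Raise ValueError (instead of aborting) when the requested shape
    cannot hold exactly num_elements values."""
    product = 1
    for dim in shape:
        if dim < 0:
            raise ValueError("negative dimension: %d" % dim)
        product *= dim  # Python ints cannot overflow, unlike C++ int64
    if product != num_elements:
        raise ValueError(
            "cannot reshape %d elements into shape %r (needs %d)"
            % (num_elements, tuple(shape), product))
    return tuple(shape)

# The shape from the report asks for ~1.8e23 elements for a 1-element input:
try:
    validate_reshape(1, [21943, 45817, 30516, 61760, 38987])
    raise AssertionError("should have raised")
except ValueError:
    pass

assert validate_reshape(6, [2, 3]) == (2, 3)
```

In C++ the dimension product can silently wrap a signed 64-bit integer into a negative number, which matches the `(0 vs. -1)` in the CHECK message; validating (or using overflow-checked multiplication) before constructing the shape turns the abort into a catchable error.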
tensorflow/tensorflow
Fix inference equation and make it more readable
Bug
Replaced the denominator in the equation with its square root, and made the multiplication by gamma a little more readable.

Signed-off-by: Suraj Upadhyay
Fixes #46522
tensorflow/tensorflow
Model's allocation of nodes fails after conversion to TFLite
Bug
system information os platform and distribution e g linux ubuntu 16 04 amazon linux 2 tensorflow instal from source or binary pip tensorflow version or github sha if from source tf nightly 2 5 0 command use to run the converter or code if you re use the python api if possible please share a link to colab jupyter any notebook model concrete function model inference decode get concrete function converter tf lite tfliteconverter from concrete function model concrete function converter target spec support op tf lite opsset tflite builtin if args quantize converter optimization tf lite optimize default output file name os path join args outdir model quant tflite else output file name os path join args outdir model tflite tflite model converter convert the output from the converter invocation 2021 01 26 09 30 17 656931 w tensorflow stream executor platform default dso loader cc 60 could not load dynamic library libcudart so 11 0 dlerror libcudart so 11 0 can not open share object file no such file or directory 2021 01 26 09 30 17 656958 I tensorflow stream executor cuda cudart stub cc 29 ignore above cudart dlerror if you do not have a gpu set up on your machine home dmmatwic anaconda3 envs tflite x86 lib python3 8 site package tensorflow addon util ensure tf install py 37 userwarne you be currently use a nightly version of tensorflow 2 5 0 dev20210125 tensorflow addon offer no support for the nightly version of tensorflow some thing might work some other might not if you encounter a bug do not file an issue on github warning warn 2021 01 26 09 30 20 220470 w tensorflow stream executor platform default dso loader cc 60 could not load dynamic library libcuda so 1 dlerror libcuda so 1 can not open share object file no such file or directory 2021 01 26 09 30 20 220498 w tensorflow stream executor cuda cuda driver cc 326 fail call to cuinit unknown error 303 2021 01 26 09 30 20 220517 I tensorflow stream executor cuda cuda diagnostic cc 156 kernel driver do not appear to be 
Run on this host: dev-dsk-dmmatwic-1b-a5a0da5a.eu-west-1.amazon.com

/proc/driver/nvidia/version does not exist
2021-01-26 09:30:20.220718: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING:tensorflow:From /home/dmmatwic/anaconda3/envs/tflite_x86/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:5039: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version.
Instructions for updating: The validate_indices argument has no effect. Indices are always validated on CPU and never validated on GPU.
2021-01-26 09:30:22,167 - deprecation:528 - WARNING - From /home/dmmatwic/anaconda3/envs/tflite_x86/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:5039: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version.
Instructions for updating: The validate_indices argument has no effect. Indices are always validated on CPU and never validated on GPU.
2021-01-26 09:30:22.977251: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-01-26 09:30:22.977359: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2021-01-26 09:30:22.995718: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2500000000 Hz
2021-01-26 09:30:23.040599: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:935] Optimization results for grappler item: graph_to_optimize
  function_optimizer: Graph size after: 900 nodes (122), 1564 edges (136), time = 10.878ms.
  function_optimizer: Graph size after: 900 nodes (0), 1564 edges (0), time = 9.301ms.
Optimization results for grappler item: while_body_3883
  function_optimizer: function_optimizer did nothing. time = 0.006ms.
  function_optimizer: function_optimizer did nothing. time = 0.002ms.
Optimization results for grappler item: while_body_4324
  function_optimizer: function_optimizer did nothing. time = 0.004ms.
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: while_cond_3882
  function_optimizer: function_optimizer did nothing. time = 0.004ms.
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: while_cond_4323
  function_optimizer: function_optimizer did nothing. time = 0.003ms.
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
2021-01-26 09:30:24,125 - lite:659 - INFO - Using new converter: If you encounter a problem please file a bug. You can opt out by setting experimental_new_converter=False
2021-01-26 09:30:24.216881: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:332] Ignored output_format.
2021-01-26 09:30:24.216915: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:335] Ignored drop_control_dependency.
2021-01-26 09:30:24.305204: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:210] Disabling MLIR crash reproducer, set env var MLIR_CRASH_REPRODUCER_DIRECTORY to enable.
2021-01-26 09:30:25.025058: I tensorflow/lite/tools/optimize/quantize_weights.cc:233] Skipping quantization of tensor arg5 because it has no allocated buffer.
2021-01-26 09:30:25.026738: I tensorflow/lite/tools/optimize/quantize_weights.cc:233] Skipping quantization of tensor arg5 because it has no allocated buffer.
2021-01-26 09:30:25,077 - convert_model:102 - INFO - Model of size 10.273849 MB saved to model_tflite/model_quant.tflite

Also, please include a link to the saved model or GraphDef: no link, because it is an internal company model.

Failure details (if the conversion is successful but the generated model is wrong, state what is wrong): I get the error

tensorflow/lite/kernels/kernel_util.cc:404 d1 == d2 || d1 == 1 || d2 == 1 was not true.
ERROR: Node number 3 (ADD) failed to prepare.

when running TfLiteInterpreterAllocateTensors(interpreter) in the TFLite C API. The conversion to the TFLite model itself runs fine. The node looks like this in Netron: (screenshot of the ADD node)

I'm not sure what exactly this error means. As far as I can see, these conditions are met with (1, 128) and (1, 128, 128): d1 is in fact equal to d2, while d1 and d2 also equal 1, unless I misunderstood how it works. This happens during tensor allocation, so I understand that it has nothing to do with the input data shape. Thanks!
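For context, the check named in the error message is a per-dimension broadcast-compatibility test. A plain-Python sketch of that documented condition (a simplification for illustration, not TFLite's actual kernel_util code) shows that the reported shapes do satisfy it when right-aligned, which is what makes the failure surprising:

```python
def broadcast_compatible(shape_a, shape_b):
    """NumPy-style broadcast check: right-align the two shapes and require,
    per dimension, d1 == d2 or d1 == 1 or d2 == 1 (the condition quoted in
    the TFLite error message); missing leading dims are treated as 1."""
    for d1, d2 in zip(reversed(shape_a), reversed(shape_b)):
        if not (d1 == d2 or d1 == 1 or d2 == 1):
            return False
    return True

# The shapes from the failing ADD node both satisfy the stated condition.
print(broadcast_compatible((1, 128), (1, 128, 128)))   # True
print(broadcast_compatible((1, 128), (1, 128, 129)))   # False
```

If the shapes pass this documented test, the ADD node's failure to prepare presumably comes from a stricter, undocumented constraint in the kernel rather than from the condition as printed.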
tensorflow/tensorflow
Getting Tensor("args_0:0", shape=(), dtype=string) when using tf.data TextLineDataset map
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux version 3.16.0-7-amd64, gcc version 4.9.2 (Debian 4.9.2-10+deb8u1) #1 SMP Debian 3.16.59-1 (2018-10-03)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): unknown 1.15.3
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: no GPU used
- GPU model and memory: no GPU used

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
I am reading data from an HDFS path using TextLineDataset. When I use a `for` loop to iterate over the dataset, everything goes well, but when it comes to `map`, all I get is Tensor("args_0:0", shape=(), dtype=string). I don't know where it comes from. The issue can also be reproduced with TF version 2.4.1. Here is the code:

```python
path = "hdfs://path/to/file"
dataset = tf.data.Dataset.list_files(path)
dataset = tf.data.TextLineDataset(dataset)
for line in dataset:
    print(line)
```

Output using `for`:

tf.Tensor(b'1\t1 15 1907 190706 19070605 161 nan nan nan 2 7 37 nan nan 1 0 nan nan 1819 181903 18190301 nan nan [long run of nan values] 0 1 1 1 0 0 201 2.486379972076975 1.0 0 0 0 [mostly zeros] 0 1 0', shape=(), dtype=string)
tf.Tensor(b'0\t1 13 1909 190911 19091101 195 nan nan nan 2 nan nan nan nan 1 0 nan nan 1909 190901 19090101 nan nan [long run of nan values] 0 0 0 [mostly zeros, one 1] 0', shape=(), dtype=string)

```python
def parse_line(line):
    print(line)
    return line

path = "hdfs://path/to/file"
dataset = tf.data.Dataset.list_files(path)
dataset = tf.data.TextLineDataset(dataset)
dataset.map(lambda x: parse_line(x))
```

Output using `map` (this is all I get):

Tensor("args_0:0", shape=(), dtype=string)

Describe the expected behavior
map_func should get the elements of the dataset, rather than Tensor("args_0:0", shape=(), dtype=string).

Standalone code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook: provided above.

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached: no logs.
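What the reporter is seeing is consistent with graph tracing: Dataset.map traces map_func once with a symbolic placeholder tensor, so a Python print fires a single time at trace time with Tensor("args_0:0", ...) rather than once per element (per-element side effects need tf.print or tf.py_function). A TensorFlow-free sketch of that trace-once behavior, where SymbolicTensor and trace are illustrative stand-ins, not TensorFlow APIs:

```python
class SymbolicTensor:
    """Stand-in for the placeholder tensor tf.data passes while tracing."""
    def __repr__(self):
        return 'Tensor("args_0:0", shape=(), dtype=string)'


def trace(fn):
    """Mimic tracing: call fn exactly once with a symbolic value to record
    the computation; Python-level side effects happen only at this point."""
    printed = []
    fn(SymbolicTensor(), printed)
    return printed


def parse_line(line, log):
    log.append(repr(line))   # a Python print() would fire here, once
    return line


log = trace(parse_line)
print(log)   # one symbolic placeholder, regardless of dataset size
```

Under this model the single `Tensor("args_0:0", ...)` line is not the mapped data at all; it is the placeholder captured during the one-time trace.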
tensorflow/tensorflow
tf.raw_ops.PopulationCount for uint32 not supported, but documented
Bug
System information
- OS platform and distribution: Intel Linux Ubuntu 20.04
- TensorFlow installed from (source or binary): python pip
- TensorFlow version: 2.4.1
- Python version: Python 3.8.6, CPU

Describe the current behavior
tf.raw_ops.PopulationCount on an array of dtype uint32 fails.

Describe the expected behavior
The API documentation for raw_ops.PopulationCount says, for arg x: "A Tensor. Must be one of the following types: int8, int16, int32, int64, uint8, uint16, uint32, uint64." So this is either a documentation error or, more likely, a bug, because the feature is important on uint32.

Standalone code to reproduce the issue

```python
a = numpy.array([3], dtype=numpy.uint32)
tf.raw_ops.PopulationCount(x=a)
```

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/rst/python/ng3py/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py", line 404, in wrapper
    return f(**kwargs)
  File "/home/rst/python/ng3py/lib/python3.8/site-packages/tensorflow/python/ops/gen_bitwise_ops.py", line 547, in population_count
    return population_count_eager_fallback(...)
  File "/home/rst/python/ng3py/lib/python3.8/site-packages/tensorflow/python/ops/gen_bitwise_ops.py", line 570, in population_count_eager_fallback
    _result = _execute.execute(b"PopulationCount", 1, inputs=_inputs_flat, ...)
  File "/home/rst/python/ng3py/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ...)
tensorflow.python.framework.errors_impl.NotFoundError: Could not find device for node: {{node PopulationCount}} = PopulationCount[T=DT_UINT32]
All kernels registered for op PopulationCount:
  device='GPU'; T in [DT_INT64]
  device='GPU'; T in [DT_INT32]
  device='GPU'; T in [DT_INT16]
  device='GPU'; T in [DT_UINT16]
  device='GPU'; T in [DT_INT8]
  device='GPU'; T in [DT_UINT8]
  device='CPU'; T in [DT_INT64]
  device='CPU'; T in [DT_INT32]
  device='CPU'; T in [DT_INT16]
  device='CPU'; T in [DT_UINT16]
  device='CPU'; T in [DT_INT8]
  device='CPU'; T in [DT_UINT8]
 [Op:PopulationCount]
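Until a uint32 kernel is registered, one possible workaround (an assumption, not an officially documented path) is to bit-cast the uint32 tensor to int32 before calling PopulationCount, since the population count depends only on the bit pattern, not on signedness. A TensorFlow-free sketch of why the reinterpretation is safe:

```python
import struct


def popcount(n: int) -> int:
    """Count set bits in the low 32 bits of n (handles negative int32 too)."""
    return bin(n & 0xFFFFFFFF).count("1")


def reinterpret_u32_as_i32(u: int) -> int:
    """Bit-cast a uint32 value to int32, as a tf.bitcast to tf.int32 would."""
    return struct.unpack("<i", struct.pack("<I", u))[0]


for u in (3, 0x80000000, 0xFFFFFFFF):
    i = reinterpret_u32_as_i32(u)
    # The bit pattern, and hence the population count, is unchanged.
    assert popcount(u) == popcount(i)

print(popcount(3), popcount(0xFFFFFFFF))  # 2 32
```

In TensorFlow terms the equivalent would be casting with tf.bitcast to tf.int32, running the op, and treating the result as the count for the original uint32 values.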
tensorflow/tensorflow
Issue with the TFLite converter in tf-nightly: "tf.If op then branch input type tensor is incompatible with input type tensor at index: 0"
Bug
System information
- OS platform and distribution: Linux Ubuntu 18.04.5
- Python version: 3.7.4
- TensorFlow version: tf-nightly 2.5.0.dev20210124

Hello, I have a new issue with the TFLite converter which happens only when using nightly; it isn't happening when using the TensorFlow main branch. I tried as much as I could to create minimal code to reproduce it:

pip install tf-nightly

```python
import tensorflow as tf
import os
import numpy as np

class Example(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(shape=[100], dtype=tf.float32)])
    def calculate(self, x):
        maxima_ind = tf.where(x > 0.8)
        maxima_ind = tf.gather(maxima_ind, 0, axis=1)
        maxima_ind = tf.cast(maxima_ind, dtype=tf.float32)
        if len(maxima_ind) < 10:
            maxima_ind = tf.cast(maxima_ind, dtype=tf.float32)
        return maxima_ind

to_export = Example()
np.random.seed(54)
buffer_size = 100
x1 = tf.convert_to_tensor(np.random.rand(buffer_size).astype('float32'))
solution = to_export.calculate(x1)
print(solution)

model_dir = "/content/model_example"
tf.saved_model.save(to_export, model_dir)
imported = tf.saved_model.load(model_dir)

# Path to the SavedModel directory
converter = tf.lite.TFLiteConverter.from_saved_model(model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

with open(model_dir + "/model_example.tflite", "wb") as f:
    f.write(tflite_model)
```

When running it, I'm getting this error:

```
Exception                                 Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    216           debug_info_str=debug_info_str,
    217           enable_mlir_converter=enable_mlir_converter)
--> 218       return model_str

(4 frames)

Exception: 0: error: loc("cond@__inference_calculate_3155"): 'tf.If' op then branch input type tensor is incompatible with input type tensor at index: 0

During handling of the above exception, another exception occurred:

ConverterError                            Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    218       return model_str
    219     except Exception as e:
    220       raise ConverterError(str(e))
    221
    222   if distutils.spawn.find_executable(_toco_from_proto_bin) is None:

ConverterError: 0: error: loc("cond@__inference_calculate_3155"): 'tf.If' op then branch input type tensor is incompatible with input type tensor at index: 0
```

You can also use this gist, which includes the above code. I understand there is a thing with the types, but I'm using a cast before the if statement, and inside of it I'm actually doing nothing; and it isn't happening on the main branch. Any help will be appreciated. Thank you!
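For background, tf.If requires its two branches to return matching dtypes and compatible shapes, and AutoGraph lowers the Python `if` above into such a conditional. A plain-Python sketch of that invariant (check_cond_branches is an illustrative stand-in, not converter code) shows why the no-op cast should be legal, which supports the reading of this as a nightly regression rather than a user error:

```python
def check_cond_branches(then_out, else_out):
    """Mimic tf.If's constraint: the then/else branch outputs must agree in
    dtype and rank, or lowering fails with an 'incompatible type' error."""
    dtype_t, shape_t = then_out
    dtype_e, shape_e = else_out
    if dtype_t != dtype_e:
        raise TypeError(f"branch dtypes differ: {dtype_t} vs {dtype_e}")
    if len(shape_t) != len(shape_e):
        raise TypeError(f"branch ranks differ: {shape_t} vs {shape_e}")
    return (dtype_t, shape_t)


# The no-op cast keeps both branches float32 / rank-1, so the check passes.
print(check_cond_branches(("float32", ("?",)), ("float32", ("?",))))
```

Since both branches of the reported model yield a rank-1 float32 tensor, a converter that still rejects them is applying a stricter (possibly shape-exact) comparison than this documented contract.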
tensorflow/tensorflow
Operator SOFTPLUS is not supported by the standard TensorFlow Lite runtime
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): a GNU/Linux system with Linux kernel 4.15.0, on a 6-core 3.60GHz Intel Core i7-6850K CPU with 64 GB RAM, equipped with an NVIDIA Corporation GP102 GPU
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): tensorflow 2.1.0-gpu
- Python version: 3.6

Describe the current behavior
When I convert the trained HDF5 model to tflite, the following operator is reported as unsupported: SOFTPLUS.

Describe the expected behavior
The HDF5 model should be successfully converted to the tflite format.

Standalone code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

```python
batch_size = 122
epochs = 148
num_classes = 10

import os
save_dir = "model"
model_name = "trained_model.h5"

import keras as keras
(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
img_rows, img_cols = x_train.shape[1], x_train.shape[2]
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = keras.models.Sequential()
model.add(keras.layers.ThresholdedReLU(theta=0.3597445834106594))
model.add(keras.layers.MaxPooling2D(pool_size=(1, 1), strides=(1, 1), padding='valid'))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(num_classes, activation='softplus'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
```

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a GitHub issue at ... and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them, you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: CAST, FULLY_CONNECTED, GREATER, MAX_POOL_2D, MUL. Here is a list of operators for which you will need custom implementations: Softplus.
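Since Softplus(x) = log(1 + exp(x)), one possible workaround (an assumption sketched in plain Python, not a tested conversion path) is to rebuild the activation from primitives the TFLite runtime does support, for example via a Lambda layer over exp/log/maximum. The overflow-safe decomposition is max(x, 0) + log1p(exp(-|x|)):

```python
import math


def softplus_naive(x):
    """Textbook definition; overflows for large positive x."""
    return math.log(1.0 + math.exp(x))


def softplus_stable(x):
    """softplus(x) = max(x, 0) + log1p(exp(-|x|)), safe for large |x|."""
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))


# Both forms agree where the naive one does not overflow.
for x in (-3.0, 0.0, 2.5):
    assert abs(softplus_naive(x) - softplus_stable(x)) < 1e-12

print(round(softplus_stable(0.0), 6))   # 0.693147, i.e. log(2)
print(softplus_stable(1000.0))          # 1000.0, where the naive form overflows
```

Whether the resulting graph converts cleanly depends on the converter version; the math itself is exact, and the stable form avoids the overflow the naive definition hits for large inputs.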
tensorflow/tensorflow
XLA convolution causes segmentation fault
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): 20.04
- TensorFlow installed from (source or binary): from source, commit 78ba23b6bcd7fe416aad1da4fe47b2b6036e09ad
- TensorFlow version (use command below): commit 78ba23b6bcd7fe416aad1da4fe47b2b6036e09ad
- Python version: 3.8.5
- Bazel version (if compiling from source): 3.7.2
- GCC/compiler version (if compiling from source): 9.3.0
- CUDA/cuDNN version: 11.0 / 8.0.4
- GPU model and memory: GeForce RTX 2070, driver 460.32.03

Describe the current behavior
I'm currently working on writing XLA support for another language. Recently I upgraded to CUDA 11.0 and cuDNN 8.0.4; previously we were using CUDA 10.2 and cuDNN 7. After the switch, all of our convolution tests started producing segfaults. We also get some warnings during execution:

WARNING: Linking two modules of different target triples: /usr/local/cuda-11.0/nvvm/libdevice/libdevice.10.bc is 'nvptx64-nvidia-gpulib' whereas ... is 'nvptx64-nvidia-cuda'

Occasionally it will also warn about failing to find an optimal convolution algorithm and falling back to the default before segfaulting. I suspect it has something to do with a version mismatch. I was originally running off an old commit (prior to 2.4 being released) and upgraded to the most recent commit to see if that would fix it, but the issue persists.

Standalone code to reproduce the issue
An XLA computation similar to this should reproduce it (I'm working in a separate language to build up the computation):

```cpp
xla::XlaBuilder* builder = new xla::XlaBuilder("conv");
xla::Shape input_shape =
    xla::ShapeUtil::MakeShape(xla::PrimitiveType::F32, {32, 3, 120, 120});
xla::Shape kernel_shape =
    xla::ShapeUtil::MakeShape(xla::PrimitiveType::F32, {32, 3, 3, 3});
xla::XlaOp inp = xla::RngUniform(..., builder, input_shape);
xla::XlaOp kernel = xla::RngUniform(..., builder, kernel_shape);

xla::ConvolutionDimensionNumbers dimension_numbers;
dimension_numbers.set_input_batch_dimension(0);
dimension_numbers.set_input_feature_dimension(1);
dimension_numbers.set_kernel_output_feature_dimension(0);
dimension_numbers.set_kernel_input_feature_dimension(1);
dimension_numbers.set_output_batch_dimension(0);
dimension_numbers.set_output_feature_dimension(1);

xla::XlaOp result = xla::ConvGeneralDilated(inp, kernel, ..., padding,
                                            lhs_dilation, rhs_dilation,
                                            /*conv_dimnos=*/dimension_numbers);
```

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

Core dump backtrace:

0 0x00007fae997adc84 in from lib x86 64 linux gnu libcuda so 1
1 0x00007fae9974144b in from lib x86 64 linux gnu libcuda so 1
2 0x00007fae997417b8 in from lib x86 64 linux gnu libcuda so 1
3 0x00007fae99998697 in from lib x86 64 linux gnu libcuda so 1
4 0x00007fae99998c02 in from lib x86 64 linux gnu libcuda so 1
5 0x00007fae99759d8a in from lib x86 64 linux gnu libcuda so 1
6 0x00007fae9999e3e0 in from lib x86 64 linux gnu libcuda so 1
7 0x00007fae99713243 in from lib x86 64 linux gnu libcuda so 1
8 0x00007fae99714555 in from lib x86 64 linux gnu libcuda so 1
9 0x00007fae997bab93 in culaunchkernel from lib x86 64 linux gnu libcuda so 1
10 0x00007faa1d62f8bb in from lib x86 64 linux gnu libcudnn cnn infer so 8
11 0x00007faa1d671686 in from lib x86 64 linux gnu libcudnn cnn infer so 8
12 0x00007faa1a926124 in cudnn cnn cudnnim2col4d cudnncontext cudnntensor4dstruct void const cudnnfilter4dstruct cudnnconvolutionstruct void from lib x86 64 linux gnu libcudnn cnn infer so 8
13 0x00007faa1a8e05a0 in cudnn cnn gemmconvolveengine false 9 2 execute internal impl cudnn backend variantpack const custream st from lib x86 64 linux gnu libcudnn cnn infer so 8
14 0x00007faa1a537033 in cudnn cnn engineinterface execute cudnn backend variantpack const custream st from lib x86 64 linux gnu libcudnn cnn infer so 8
15 0x00007faa1a563960 in cudnn cnn enginecontainer cudnnbackendenginename t 2 4096ul execute internal impl cudnn backend variantpack const custream st from lib x86 64 linux gnu libcudnn cnn infer so 8
16 0x00007faa1a537033 in
cudnn cnn engineinterface execute cudnn backend variantpack const custream st from lib x86 64 linux gnu libcudnn cnn infer so 8 17 0x00007faa1a5be18c in cudnn cnn autotransformationexecutor execute pipeline cudnn cnn convolutionengine cudnn backend variantpack const custream st const from lib x86 64 linux gnu libcudnn cnn infer so 8 18 0x00007faa1a5d9871 in cudnn cnn generalizedconvolutionengine execute internal impl cudnn backend variantpack const cus type for more q to quit c to continue without page tream st from lib x86 64 linux gnu libcudnn cnn infer so 8 19 0x00007faa1a537033 in cudnn cnn engineinterface execute cudnn backend variantpack const custream st from lib x86 64 linux gnu libcudnn cnn infer so 8 20 0x00007faa1a53e730 in cudnn backend execute cudnncontext cudnn backend executionplan cudnn backend variantpack from lib x86 64 linux gnu libcudnn cnn infer so 8 21 0x00007faa1a63f40c in cudnn backend enginesalgomap execute wrapper cudnncontext cudnnconvolutionfwdalgo t cudnn backend executionplan cudnn backend variantpack from lib x86 64 linux gnu libcudnn cnn infer so 8 22 0x00007faa1a638ad9 in cudnn backend convolutionforward cudnncontext void const cudnntensorstruct const void const cudnnfilterstruct const void const cudnnconvolutionstruct const cudnnconvolutionfwdalgo t void unsigned long bool void const void const void const cudnnactivationstruct const cudnntensorstruct const void from lib x86 64 linux gnu libcudnn cnn infer so 8 23 0x00007faa1a73ae96 in cudnn cnn convolutionforward cudnncontext void const cudnntensorstruct void const cudnnfilterstruct void const cudnnconvolutionstruct cudnnconvolutionfwdalgo t void unsigned long void const cudnntensorstruct void from lib x86 64 linux gnu libcudnn cnn infer so 8 24 0x00007faa1a73b94c in cudnnconvolutionforward from lib x86 64 linux gnu libcudnn cnn infer so 8 25 0x00007fadf91ab772 in cudnnconvolutionforward from home sean project exla build test lib exla priv libexla so 26 0x00007fadf9191c6d in stream 
executor gpu cudnnsupport doconvolve stream executor dnn convolutionkind stream executor dnn datatype stream executor dnn datatype stream executor stream stream executor dnn batchdescriptor const stream executor devicememorybase stream executor dnn filterdescriptor const stream executor devicememorybase stream executor dnn batchdescriptor const stream executor devicememorybase stream executor dnn convolutiondescriptor const stream executor dnn algorithmdesc stream executor devicememory stream executor dnn profileresult from home sean project exla build test lib exla priv libexla so 27 0x00007fadf72a1735 in tensorflow status xla gpu anonymous namespace rungpuconvimpl xla gpu gpuconvparam const stream executor scratchallocator stream executor stream xla gpu runconvoption from home sean project exla build test lib exla priv libexla so 28 0x00007fadf72a4a2e in xla gpu rungpuconv xla gpu gpuconvconfig const absl lts 2020 02 25 span stream executor devicememorybase stream executor scratchallocator stream executor stream xla gpu runconvoption from home sean project exla build test lib exla priv libexla so 29 0x00007fadf72a70e4 in xla gpu rungpuconv xla gpu gpuconvconfig const absl lts 2020 02 25 span stream executor devicememorybase stream executor devicememorybase stream executor stream xla gpu runconvoption from home sean project exla build test lib exla priv libexla so 30 0x00007fadf72473fb in xla gpu convolutionthunk executeonstream xla gpu thunk executeparam const from home sean project exla build test lib exla priv libexla so type for more q to quit c to continue without page 31 0x00007fadf72573bb in xla gpu gpuexecutable executethunk xla serviceexecutablerunoption const xla gpu bufferallocation const bool xla hloexecutionprofile from home sean project exla build test lib exla priv libexla so 32 0x00007fadf725c2cc in xla gpu gpuexecutable executeasynconstreamimpl xla serviceexecutablerunoption const absl lts 2020 02 25 variant absl lts 2020 02 25 span xla 
hloexecutionprofile from home sean project exla build test lib exla priv libexla so
33 0x00007fadf725c8f8 in xla gpu gpuexecutable executeasynconstream xla serviceexecutablerunoption const std vector xla hloexecutionprofile from home sean project exla build test lib exla priv libexla so
34 0x00007fadfd69b286 in xla executable executeasynconstreamwrapper xla serviceexecutablerunoption const std vector from home sean project exla build test lib exla priv libexla so
35 0x00007fadf90c0df7 in xla localexecutable runasync absl lts 2020 02 25 span std vector xla executablerunoption from home sean project exla build test lib exla priv libexla so
36 0x00007fadf90c17c8 in xla localexecutable runasync std vector xla executablerunoption

Forgive me if this is a CUDA version issue, or if this is a question better asked in the XLA dev group.
tensorflow/tensorflow
The example code in tf.feature_column.categorical_column_with_identity is not executable / complete / self-sufficient
Bug
URL(s) with the issue
Please provide a link to the documentation entry; for example: linear model

Description of issue (what needs changing)
The example code cannot be executed as it is; we have to add the namespace by searching the tensorflow.org site. Complete, self-sufficient, standalone example code would be very helpful, especially for new developers. A similar issue was raised in #46203. I was waiting for it to be resolved, but it has been closed now.
tensorflow/tensorflow
Broken links in docs
Bug
Link to the doc: 2021-Jan-22-008
Dead link; name of link: "contribute five minutes of your own voice": 2021-Jan-22-010
tensorflow/tensorflow
Issue template .md link broken
Bug
URL(s) with the issue
Most files under ...

Description of issue (what needs changing)
The link to the GitHub policy is broken.

Clear description (for example, why should someone use this method? How is it useful?)
I don't know why there is a separate GitHub policy aside from CONTRIBUTING.md, but it seems important, and people are supposed to read it.

Correct links
Is the link to the source code correct? No.

Parameters defined
Are all parameters defined and formatted correctly? N/A

Returns defined
Are return values defined? N/A

Raises listed and defined
Are the errors defined? N/A

Usage example
Is there a usage example? N/A

Request visuals, if applicable
Are there currently visuals? If not, will they clarify the content? N/A

Submit a pull request?
Are you planning to also submit a pull request to fix the issue? No
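Audits like this one are easy to mechanize: inline Markdown links can be extracted from each file and their targets checked for existence. A minimal sketch, where the regex covers only the plain [text](target) form and the file names in the sample document are made up for illustration:

```python
import re

# Matches inline Markdown links of the form [link text](target).
LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)\s]+)\)")


def extract_links(markdown: str):
    """Return (text, target) pairs for every inline Markdown link."""
    return LINK_RE.findall(markdown)


doc = (
    "Please read our [GitHub policy](ISSUES.md) before filing; "
    "see also [contributing](CONTRIBUTING.md)."
)
print(extract_links(doc))
# [('GitHub policy', 'ISSUES.md'), ('contributing', 'CONTRIBUTING.md')]
```

Running this over a repository and testing each relative target with os.path.exists would flag dead links like the one reported here before they reach users.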
tensorflow/tensorflow
Mixed precision Conv3D error: No algorithm worked!
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, I wrote custom code
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux CentOS 7
- TensorFlow installed from (source or binary): installed using pip
- TensorFlow version (use command below): v2.4.0-rc4-71-g582c8d236cb 2.4.0
- Python version: 3.8.0
- CUDA/cuDNN version: CUDA 11.1.1, cuDNN 8.0.4.30 (CUDA 11.1.1)
- GPU model and memory: 2x NVIDIA A100-PCIE-40GB (2 GPUs on the same computer)

Describe the current behavior
When running the code with double precision, the code does not generate errors. When running with mixed precision, it launches an error.

Describe the expected behavior
Not to have errors when running with mixed precision.

Standalone code to reproduce the issue
I tried to create a Google Colab for this code; there is no error there, but I could not get a Google Colab with 2 GPUs like in my environment.

```python
import tensorflow as tf
from tensorflow.keras.mixed_precision import experimental as mixed_precision
from tensorflow.keras import layers, models, optimizers
import numpy as np

# If you comment out the following 2 lines, the code runs
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)

mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    x_input = layers.Input(shape=(60, 60, 2, 32))
    x = layers.Conv3D(filters=32, kernel_size=(3, 3, 1),
                      strides=(1, 1, 1), padding='same')(x_input)
    x = layers.BatchNormalization()(x)
    x_out = layers.ReLU()(x)
    test_model = models.Model(inputs=x_input, outputs=x_out, name='test_model')
    adam_optimizer = optimizers.Adam(learning_rate=1e-4)
    test_model.compile(optimizer=adam_optimizer, loss='mse', metrics=['mae'])

input_data = np.random.random((1000, 60, 60, 2, 32))
target_data = np.random.random((1000, 60, 60, 2, 32))
ds_tuple = tf.data.Dataset.from_tensor_slices((input_data, target_data))
ds_tuple = ds_tuple.shuffle(1000).batch(100)
history = test_model.fit(ds_tuple, epochs=10, verbose=1)
```

Other info / logs
This is the log of the run when I try running with mixed precision:

2021-01-21 15:37:22.318957: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic
library libcudart so 11 0 2021 01 21 15 37 36 119468 I tensorflow compiler jit xla cpu device cc 41 not create xla device tf xla enable xla device not set 2021 01 21 15 37 36 123358 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcuda so 1 2021 01 21 15 37 36 201391 I tensorflow core common runtime gpu gpu device cc 1720 find device 0 with property pcibusid 0000 3b 00 0 name a100 pcie 40 gb computecapability 8 0 coreclock 1 41ghz corecount 108 devicememorysize 39 59gib devicememorybandwidth 1 41tib s 2021 01 21 15 37 36 204229 I tensorflow core common runtime gpu gpu device cc 1720 find device 1 with property pcibusid 0000 d8 00 0 name a100 pcie 40 gb computecapability 8 0 coreclock 1 41ghz corecount 108 devicememorysize 39 59gib devicememorybandwidth 1 41tib s 2021 01 21 15 37 36 204423 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2021 01 21 15 37 37 945043 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 11 2021 01 21 15 37 37 945235 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 11 2021 01 21 15 37 37 998423 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcufft so 10 2021 01 21 15 37 38 072705 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcurand so 10 2021 01 21 15 37 38 207114 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusolver so 10 2021 01 21 15 37 38 574293 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusparse so 11 2021 01 21 15 37 38 612582 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudnn so 8 2021 01 21 15 37 38 624584 I tensorflow core common 
runtime gpu gpu device cc 1862 add visible gpu device 0 1 2021 01 21 15 37 38 624772 I tensorflow compiler jit xla cpu device cc 41 not create xla device tf xla enable xla device not set 2021 01 21 15 37 38 626778 I tensorflow compiler jit xla cpu device cc 41 not create xla device tf xla enable xla device not set warn tensorflow from scratch user eee conda myenvs tf 2 4 lib python3 8 site package tensorflow python keras mix precision loss scale py 56 dynamiclossscale init from tensorflow python train experimental loss scale be deprecate and will be remove in a future version instruction for update use tf keras mixed precision lossscaleoptimizer instead lossscaleoptimizer now have all the functionality of dynamiclossscale 2021 01 21 15 37 38 638220 I tensorflow core platform cpu feature guard cc 142 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx512f to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2021 01 21 15 37 38 638847 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2021 01 21 15 37 38 840057 I tensorflow core common runtime gpu gpu device cc 1720 find device 0 with property pcibusid 0000 3b 00 0 name a100 pcie 40 gb computecapability 8 0 coreclock 1 41ghz corecount 108 devicememorysize 39 59gib devicememorybandwidth 1 41tib s 2021 01 21 15 37 38 842228 I tensorflow core common runtime gpu gpu device cc 1720 find device 1 with property pcibusid 0000 d8 00 0 name a100 pcie 40 gb computecapability 8 0 coreclock 1 41ghz corecount 108 devicememorysize 39 59gib devicememorybandwidth 1 41tib s 2021 01 21 15 37 38 842375 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2021 01 21 15 37 38 842436 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 11 
2021 01 21 15 37 38 842486 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 11 2021 01 21 15 37 38 842536 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcufft so 10 2021 01 21 15 37 38 842585 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcurand so 10 2021 01 21 15 37 38 842635 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusolver so 10 2021 01 21 15 37 38 842683 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusparse so 11 2021 01 21 15 37 38 842733 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudnn so 8 2021 01 21 15 37 38 850268 I tensorflow core common runtime gpu gpu device cc 1862 add visible gpu device 0 1 2021 01 21 15 37 38 850360 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2021 01 21 15 37 40 396681 I tensorflow core common runtime gpu gpu device cc 1261 device interconnect streamexecutor with strength 1 edge matrix 2021 01 21 15 37 40 396829 I tensorflow core common runtime gpu gpu device cc 1267 0 1 2021 01 21 15 37 40 396881 I tensorflow core common runtime gpu gpu device cc 1280 0 n y 2021 01 21 15 37 40 396925 I tensorflow core common runtime gpu gpu device cc 1280 1 y n 2021 01 21 15 37 40 407031 I tensorflow core common runtime gpu gpu device cc 1406 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 37570 mb memory physical gpu device 0 name a100 pcie 40 gb pci bus i d 0000 3b 00 0 compute capability 8 0 2021 01 21 15 37 40 414936 I tensorflow core common runtime gpu gpu device cc 1406 create tensorflow device job localhost replica 0 task 0 device gpu 1 with 37570 mb memory physical gpu device 1 name a100 pcie 40 gb pci bus i d 
0000 d8 00 0 compute capability 8 0 run tensorflow in mixed precision warn tensorflow tf keras mixed precision experimental lossscaleoptimizer be deprecate please use tf keras mixed precision lossscaleoptimizer instead note that the non experimental lossscaleoptimizer do not take a dynamiclossscale but instead take the dynamic configuration directly in the constructor for example opt tf keras mixed precision experimental lossscaleoptimizer opt 2021 01 21 15 37 47 583748 w tensorflow core grappler optimizer data auto shard cc 656 in auto mode and switch to datum base sharding instead of file base sharding as we can not find appropriate reader dataset op s to shard error find an unshardable source dataset name tensorslicedataset 2 op tensorslicedataset input placeholder 0 input placeholder 1 attr key toutput type value list type dt double type dt double attr key output shape value list shape dim size 60 dim size 60 dim size 2 dim size 32 shape dim size 60 dim size 60 dim size 2 dim size 32 2021 01 21 15 37 48 602974 I tensorflow compiler mlir mlir graph optimization pass cc 116 none of the mlir optimization pass be enable register 2 2021 01 21 15 37 48 606716 I tensorflow core platform profile util cpu util cc 112 cpu frequency 3000000000 hz epoch 1 10 2021 01 21 15 37 56 331798 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudnn so 8 2021 01 21 15 39 18 436971 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 11 2021 01 21 15 39 28 779314 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 11 2021 01 21 15 39 31 933180 w tensorflow core framework op kernel cc 1763 op require fail at conv grad op 3d cc 1994 not find no algorithm work traceback most recent call last file main py line 31 in history test model fit ds tuple epoch 10 verbose 1 file scratch user eee conda myenvs tf 2 4 lib python3 8 
site-packages/tensorflow/python/keras/engine/training.py, line 1100, in fit: tmp_logs = self.train_function(iterator). File /scratch/user/eee/conda/myenvs/tf-2.4/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py, line 828, in __call__: result = self._call(*args, **kwds). File /scratch/user/eee/conda/myenvs/tf-2.4/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py, line 888, in _call: return self._stateless_fn(*args, **kwds). File /scratch/user/eee/conda/myenvs/tf-2.4/lib/python3.8/site-packages/tensorflow/python/eager/function.py, line 2942, in __call__: return graph_function._call_flat. File /scratch/user/eee/conda/myenvs/tf-2.4/lib/python3.8/site-packages/tensorflow/python/eager/function.py, line 1918, in _call_flat: return self._build_call_outputs(self._inference_function.call. File /scratch/user/eee/conda/myenvs/tf-2.4/lib/python3.8/site-packages/tensorflow/python/eager/function.py, line 555, in call: outputs = execute.execute. File /scratch/user/eee/conda/myenvs/tf-2.4/lib/python3.8/site-packages/tensorflow/python/eager/execute.py, line 59, in quick_execute: tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name. tensorflow.python.framework.errors_impl.NotFoundError: 3 root error(s) found. (0) Not found: No algorithm worked! [[node gradient_tape/test_model/conv3d/Conv3D/Conv3DBackpropFilterV2 (defined at threading.py:932)]] (1) Not found: No algorithm worked! [[node gradient_tape/test_model/conv3d/Conv3D/Conv3DBackpropFilterV2 (defined at threading.py:932)]] [[cond_4/then/_40/cond_4/cond_pivot_t/_256/_187]] (2) Not found: No algorithm worked! [[node gradient_tape/test_model/conv3d/Conv3D/Conv3DBackpropFilterV2 (defined at threading.py:932)]] [[div_no_nan/ReadVariableOp_3/_108]] 0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_2870] Function call stack: train_function -> train_function -> train_function
tensorflow/tensorflow
can not convert tf savedmodel to onnx
Bug
system information: os platform and distribution (e.g. linux ubuntu 16.04): linux ubuntu 18.04; tensorflow installed from (source or binary): binary; tensorflow version (use command below): tf-nightly-gpu 2.5.0.dev20210119; python version: 3.6 (anaconda); tensorflow-onnx version: 1.8.0, built from source. my command line: python -m tf2onnx.convert --saved-model model_savedmodel --output fea.onnx --custom-ops Bucketize,AsString,StringToHashBucketFast --signature_def serving_default --tag serve --opset 12. but I get the following error: 2021-01-21 11:29:41,413 ERROR could not find table resource to replace placeholder unknown 172; 2021-01-21 11:29:41,415 ERROR could not find table resource to replace placeholder unknown 174; 2021-01-21 11:29:41,416 ERROR could not find table resource to replace placeholder unknown 176; 2021-01-21 11:29:41,417 ERROR could not find table resource to replace placeholder unknown 178; 2021-01-21 11:29:41,418 ERROR could not find table resource to replace placeholder unknown 180; 2021-01-21 11:29:41,418 ERROR could not find table resource to replace placeholder unknown 183; 2021-01-21 11:29:41,418 ERROR could not find table resource to replace placeholder unknown 185; 2021-01-21 11:29:41,418 ERROR could not find table resource to replace placeholder unknown 187; 2021-01-21 11:29:41,418 ERROR could not find table resource to replace placeholder unknown 189; 2021-01-21 11:29:41,418 ERROR could not find table resource to replace placeholder unknown 193; 2021-01-21 11:29:41,418 ERROR could not find table resource to replace placeholder unknown 195; 2021-01-21 11:29:41,419 ERROR could not find table resource to replace placeholder unknown 197. tensorflow.python.framework.errors_impl.InvalidArgumentError: 'func' argument to TF_GraphCopyFunction cannot be null. Exception ignored in: Traceback (most recent call last): File /usr/local/anaconda3/envs/tf2_2_n/lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py, line 208, in __del__: self._destroy_resource() File /usr/local/anaconda3/envs/tf2_2_n
lib python3 6 site package tensorflow python eager def function py line 797 in call result self call args kwd file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python eager def function py line 841 in call self initialize args kwd add initializer to initializer file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python eager def function py line 695 in initialize args kwd file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python eager function py line 2981 in get concrete function internal garbage collect graph function self maybe define function args kwargs file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python eager function py line 3373 in maybe define function graph function self create graph function args kwargs file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python eager function py line 3218 in create graph function capture by value self capture by value file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python framework func graph py line 998 in func graph from py func func output python func func args func kwargs file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python eager def function py line 603 in wrap fn out weak wrap fn wrap args kwd file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python save model function deserialization py line 257 in restore function body return call concrete function function input file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python save model function deserialization py line 75 in call concrete function result function call flat tensor input function capture input pylint disable protect access file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python save model load py line 116 in call flat cancellation manager file usr local anaconda3 envs tf2 2 n lib python3 6 
site package tensorflow python eager function py line 1944 in call flat flat output forward function call ctx args with tangent file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python eager function py line 590 in call executor type executor type file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python op functional op py line 1206 in partition call f add to graph graph file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python eager function py line 506 in add to graph g add function self file usr local anaconda3 envs tf2 2 n lib python3 6 site package tensorflow python framework op py line 3403 in add function gradient I want to get the onnx model desperate for some advice thank you very much
tensorflow/tensorflow
typo
Bug
there are multiple typos in tensorflow/python/keras/layers/merge.py: "use in a functiona model" should be "used in a functional model"
tensorflow/tensorflow
tf.Variable throws TypeError on conversion to typed numpy ndarray
Bug
the __array__ method is recognized by numpy to allow objects to be conveniently converted to a numpy ndarray. this doesn't work consistently for tf.Variable, because its definition of the method uses an incorrect signature that accepts no argument other than self (l469). this breaks numpy usage which expects to be able to pass a dtype argument when converting to an explicitly typed array, as in np.array(tf.Variable(0), dtype=np.int64). the expected signature is documented a bit sparsely but can be seen here (scroll to PyArray_FromArrayAttr) or by example of how ndarray itself defines its __array__ method here. tested against the latest nightly and latest numpy: numpy 1.19.5, tf-nightly 2.5.0.dev20210119. expected behavior: converting a tf.Variable with a dtype succeeds, the same as tf.constant with a dtype or tf.Variable without a dtype: python -c "import tensorflow as tf; import numpy as np; np.array(tf.constant(0), dtype=np.int32)" and python -c "import tensorflow as tf; import numpy as np; np.array(tf.Variable(0))". actual behavior: it raises a TypeError: python -c "import tensorflow as tf; import numpy as np; np.array(tf.Variable(0), dtype=np.int32)" gives Traceback (most recent call last): File "<string>", line 1, in <module> TypeError: __array__() takes 1 positional argument but 2 were given. note that there are various other single-argument __array__(self) definitions within tf that should probably also be updated (l276, l852, l1030). on that note, I'm a bit confused why tf.constant(0) can be converted above, because the EagerTensorBase __array__ definition has the same problem as tf.Variable; indeed this can be confirmed by trying to explicitly call it like this: python -c "import tensorflow as tf; import numpy as np; tf.constant(0).__array__(np.int32)" gives Traceback (most recent call last): File "<string>", line 1, in <module> TypeError: __array__() takes 1 positional argument but 2 were given. my best guess is that there's some other path through the convert-python-object-to-numpy-array flowchart that is being taken, e.g. perhaps it's because EagerTensorBase implements __len__ while ResourceVariable does not, and so the constant is interpreted as a sequence and gets converted via that path. however, it'd still be good if __array__ were consistently defined with the dtype argument within tf code, even for types where that path isn't reachable in normal usage today
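the protocol at issue can be demonstrated without tensorflow at all; the following is a minimal numpy-only sketch (GoodScalar and BadScalar are made-up stand-ins, not tf code) in which GoodScalar accepts the optional dtype (and copy, for newer numpy) argument that numpy may pass to __array__, while BadScalar mimics the reported self-only signature, so calling its __array__ with a dtype reproduces the reported TypeError:

```python
import numpy as np

class GoodScalar:
    """Stand-in whose __array__ accepts the optional dtype (and copy) numpy may pass."""
    def __init__(self, value):
        self.value = value

    def __array__(self, dtype=None, copy=None):
        # copy is accepted for forward compatibility but delegated to numpy here
        return np.asarray(self.value, dtype=dtype)

class BadScalar:
    """Mimics the reported signature: no argument besides self."""
    def __init__(self, value):
        self.value = value

    def __array__(self):
        return np.asarray(self.value)

print(np.array(GoodScalar(0), dtype=np.int64).dtype)  # int64
try:
    BadScalar(0).__array__(np.int64)  # explicit call, as in the report
except TypeError as e:
    print(e)  # e.g. __array__() takes 1 positional argument but 2 were given
```

the explicit __array__ call fails regardless of numpy version, since it is a plain python signature mismatch.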
tensorflow/tensorflow
issue when using tf.vectorized_map on a function with a tf.while_loop
Bug
please make sure that this is a bug. as per our github policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on github. tag: bug_template. system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes; os platform and distribution (e.g. linux ubuntu 16.04): ubuntu 20.04; mobile device (e.g. iphone 8, pixel 2, samsung galaxy) if the issue happens on mobile device: n/a; tensorflow installed from (source or binary): conda install tensorflow-gpu in a conda env; tensorflow version (use command below): 2.2; python version: 3.8.3; bazel version (if compiling from source): no; gcc/compiler version (if compiling from source): no; cuda/cudnn version: 10.2; gpu model and memory: geforce rtx 2060 super, computeCapability 7.5, memory 8 gb. you can collect some of this information using our environment capture script. you can also obtain the tensorflow version with: 1. tf 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)" 2. tf 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)". describe the current behavior: the included code tries to vectorize a function which adds 10.0 to the input, and does so through a while loop which adds 1.0 each time, for 10 times. the function runs perfectly when using tf.map_fn and fails when using tf.vectorized_map. describe the expected behavior: the function would not run when using vectorized_map, and the error points towards "either add a converter or set --op_conversion_fallback_to_while_loop=True, which may run slower". standalone code to reproduce the issue: if __name__ == "__main__": import tensorflow as tf; @tf.function def add(a): i = tf.constant(0, dtype=tf.int32); c = tf.constant(1, dtype=tf.float32); loop_index = lambda i, c, a: i < 10; def body(i, c, a): a = a + c; i = i + 1; return i, c, a; i, c, a = tf.while_loop(loop_index, body, [i, c, a], shape_invariants=[tf.TensorShape([]), tf.TensorShape([]), tf.TensorShape([1])], back_prop=False, parallel_iterations=1); return a; counter = tf.reshape(tf.range(0, 40, delta=1, dtype=tf.float32), shape=(40, 1)); all = tf.vectorized_map(add,
counter do not work all tf map fn add counter print all should be 40 1 float32 tensor with element 10 11 49 other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach 2021 01 20 17 26 53 074085 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcuda so 1 2021 01 20 17 26 53 101801 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 01 20 17 26 53 102114 I tensorflow core common runtime gpu gpu device cc 1561 find device 0 with property pcibusid 0000 01 00 0 name geforce rtx 2060 super computecapability 7 5 coreclock 1 71ghz corecount 34 devicememorysize 7 79gib devicememorybandwidth 417 29gib s 2021 01 20 17 26 53 102265 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2021 01 20 17 26 53 103295 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2021 01 20 17 26 53 104241 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 2021 01 20 17 26 53 104386 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 2021 01 20 17 26 53 105247 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 2021 01 20 17 26 53 105734 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 2021 01 20 17 26 53 107658 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2021 01 20 17 26 53 107742 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa 
node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 01 20 17 26 53 108011 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 01 20 17 26 53 108212 I tensorflow core common runtime gpu gpu device cc 1703 add visible gpu device 0 2021 01 20 17 26 53 108397 I tensorflow core platform cpu feature guard cc 143 your cpu support instruction that this tensorflow binary be not compile to use sse4 1 sse4 2 avx avx2 fma 2021 01 20 17 26 53 112400 I tensorflow core platform profile util cpu util cc 102 cpu frequency 3600000000 hz 2021 01 20 17 26 53 112694 I tensorflow compiler xla service service cc 168 xla service 0x55853e36c7b0 initialize for platform host this do not guarantee that xla will be use device 2021 01 20 17 26 53 112704 I tensorflow compiler xla service service cc 176 streamexecutor device 0 host default version 2021 01 20 17 26 53 112820 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 01 20 17 26 53 113110 I tensorflow core common runtime gpu gpu device cc 1561 find device 0 with property pcibusid 0000 01 00 0 name geforce rtx 2060 super computecapability 7 5 coreclock 1 71ghz corecount 34 devicememorysize 7 79gib devicememorybandwidth 417 29gib s 2021 01 20 17 26 53 113143 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2021 01 20 17 26 53 113154 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2021 01 20 17 26 53 113163 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 2021 01 20 17 26 53 113171 I tensorflow stream executor platform 
default dso loader cc 44 successfully open dynamic library libcurand so 10 2021 01 20 17 26 53 113180 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 2021 01 20 17 26 53 113188 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 2021 01 20 17 26 53 113197 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2021 01 20 17 26 53 113232 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 01 20 17 26 53 113508 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 01 20 17 26 53 113760 I tensorflow core common runtime gpu gpu device cc 1703 add visible gpu device 0 2021 01 20 17 26 53 113779 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2021 01 20 17 26 53 182683 I tensorflow core common runtime gpu gpu device cc 1102 device interconnect streamexecutor with strength 1 edge matrix 2021 01 20 17 26 53 182703 I tensorflow core common runtime gpu gpu device cc 1108 0 2021 01 20 17 26 53 182707 I tensorflow core common runtime gpu gpu device cc 1121 0 n 2021 01 20 17 26 53 182837 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 01 20 17 26 53 183090 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 01 20 17 26 53 183343 I tensorflow stream executor cuda cuda gpu executor cc 981 
successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 01 20 17 26 53 183550 I tensorflow core common runtime gpu gpu device cc 1247 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 6620 mb memory physical gpu device 0 name geforce rtx 2060 super pci bus i d 0000 01 00 0 compute capability 7 5 2021 01 20 17 26 53 184629 I tensorflow compiler xla service service cc 168 xla service 0x55853ec63ff0 initialize for platform cuda this do not guarantee that xla will be use device 2021 01 20 17 26 53 184638 I tensorflow compiler xla service service cc 176 streamexecutor device 0 geforce rtx 2060 super compute capability 7 5 warning tensorflow from distranstest py 266 call while loop v2 from tensorflow python op control flow op with back prop false be deprecate and will be remove in a future version instruction for update back prop false be deprecate consider use tf stop gradient instead instead of result tf while loop c b var back prop false use result tf nest map structure tf stop gradient tf while loop c b var error tensorflow get error while pfor be convert op name loop body partitionedcall op partitionedcall input loop body gatherv2 attr key tin value list type dt float attr key tout value list type dt float attr key read only resource input value list attr key config value s attr key config proto value s n 007 n 003cpu 020 001 n 007 n 003gpu 020 0012 005 0010j 0008 001 attr key executor type value s attr key f value func name inference add 57 with input convert input wrappedtensor t be stack true be sparse stack false in user code home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op parallel for pfor py 3600 f converter convert helper x t for x in func func graph output home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op parallel for pfor py 1460 convert helper raise valueerror no converter define for s n s 
ninput s valueerror no converter define for statelesswhile name while op statelesswhile input while loop counter input while maximum iteration input const input const 1 input a attr key t value list type dt int32 type dt int32 type dt int32 type dt float type dt float attr key low use switch merge value b true attr key num original output value I 5 attr key read only resource input value list attr key body value func name while body 20 attr key cond value func name while cond 19 attr key output shape value list shape shape shape shape shape dim size 1 attr key parallel iteration value I 1 input wrappedtensor t be stack false be sparse stack false wrappedtensor t be stack false be sparse stack false wrappedtensor t be stack false be sparse stack false wrappedtensor t be stack false be sparse stack false wrappedtensor t be stack true be sparse stack false either add a converter or set op conversion fallback to while loop true which may run slow here be the pfor conversion stack trace error tensorflow name loop body partitionedcall op partitionedcall input loop body gatherv2 attr key tin value list type dt float attr key tout value list type dt float attr key read only resource input value list attr key config value s attr key config proto value s n 007 n 003cpu 020 001 n 007 n 003gpu 020 0012 005 0010j 0008 001 attr key executor type value s attr key f value func name inference add 57 create at file distranst py line 273 in all tf vectorized map add counter do not work file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op parallel for control flow op py line 407 in vectorized map return pfor loop fn batch size file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op parallel for control flow op py line 198 in pfor output f file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager def function py line 580 in call result self call args kwd file home arka anaconda3 envs 
tensorflow lib python3 8 site package tensorflow python eager def function py line 627 in call self initialize args kwd add initializer to initializer file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager def function py line 505 in initialize self stateful fn get concrete function internal garbage collect pylint disable protect access file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager function py line 2446 in get concrete function internal garbage collect graph function self maybe define function args kwargs file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager function py line 2777 in maybe define function graph function self create graph function args kwargs file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager function py line 2657 in create graph function func graph module func graph from py func file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python framework func graph py line 981 in func graph from py func func output python func func args func kwargs file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager def function py line 441 in wrap fn return weak wrap fn wrap args kwd file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python framework func graph py line 957 in wrapper return autograph convert call file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op parallel for control flow op py line 183 in f return pfor impl loop fn iter parallel iteration parallel iteration file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op parallel for control flow op py line 237 in pfor impl loop fn outputs loop fn loop var file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op parallel for control flow op py line 400 in loop 
fn return fn gather elem file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager def function py line 580 in call result self call args kwd file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager def function py line 650 in call return self concrete stateful fn filter call canon args canon kwd pylint disable protect access file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager function py line 1661 in filter call return self call flat file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager function py line 1760 in call flat flat output forward function call ctx args with tangent file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager function py line 621 in call output functional op partition call file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op functional op py line 1180 in partition call op graph create op op name args tout name op name attrs op attrs file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python util deprecation py line 507 in new func return func args kwargs file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python framework op py line 3257 in create op return self create op internal op type input dtype input type name file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python framework func graph py line 593 in create op internal return super funcgraph self create op internal pylint disable protect access file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python framework op py line 3319 in create op internal ret operation file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python framework op py line 1791 in init self traceback tf stack extract stack traceback most recent call last file 
distranst py line 273 in all tf vectorized map add counter do not work file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op parallel for control flow op py line 407 in vectorized map return pfor loop fn batch size file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op parallel for control flow op py line 198 in pfor output f file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager def function py line 580 in call result self call args kwd file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager def function py line 627 in call self initialize args kwd add initializer to initializer file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager def function py line 505 in initialize self stateful fn get concrete function internal garbage collect pylint disable protect access file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager function py line 2446 in get concrete function internal garbage collect graph function self maybe define function args kwargs file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager function py line 2777 in maybe define function graph function self create graph function args kwargs file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager function py line 2657 in create graph function func graph module func graph from py func file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python framework func graph py line 981 in func graph from py func func output python func func args func kwargs file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python eager def function py line 441 in wrap fn return weak wrap fn wrap args kwd file home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python framework 
func graph py line 968 in wrapper raise e ag error metadata to exception e valueerror in user code home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op parallel for control flow op py 183 f return pfor impl loop fn iter parallel iteration parallel iteration home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op parallel for pfor py 3600 f converter convert helper x t for x in func func graph output home arka anaconda3 envs tensorflow lib python3 8 site package tensorflow python op parallel for pfor py 1460 convert helper raise valueerror no converter define for s n s ninput s valueerror no converter define for statelesswhile name while op statelesswhile input while loop counter input while maximum iteration input const input const 1 input a attr key t value list type dt int32 type dt int32 type dt int32 type dt float type dt float attr key low use switch merge value b true attr key num original output value I 5 attr key read only resource input value list attr key body value func name while body 20 attr key cond value func name while cond 19 attr key output shape value list shape shape shape shape shape dim size 1 attr key parallel iteration value I 1 input wrappedtensor t be stack false be sparse stack false wrappedtensor t be stack false be sparse stack false wrappedtensor t be stack false be sparse stack false wrappedtensor t be stack false be sparse stack false wrappedtensor t be stack true be sparse stack false either add a converter or set op conversion fallback to while loop true which may run slow
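independent of tensorflow, the equivalence the reporter expects between the looped and vectorized computations can be checked with a plain numpy analogue (add_ten is an illustrative stand-in for the issue's while-loop body, not the tf code itself): mapping a per-row loop that adds 1.0 ten times, as tf.map_fn does, must agree with the single vectorized add of 10.0 that a successful pfor conversion would compute.

```python
import numpy as np

def add_ten(row):
    # analogue of the issue's while_loop body: add 1.0, ten times
    a = row.copy()
    i = 0
    while i < 10:
        a = a + 1.0
        i += 1
    return a

counter = np.arange(0, 40, dtype=np.float32).reshape(40, 1)

looped = np.stack([add_ten(r) for r in counter])  # what the tf.map_fn path computes
vectorized = counter + 10.0                       # what a pfor conversion should compute

print(np.allclose(looped, vectorized))  # True
print(looped[0, 0], looped[-1, 0])      # 10.0 49.0
```

the (40, 1) result holds the elements 10, 11, ..., 49, matching the "should be" comment in the report.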
tensorflow/tensorflow
TypeError: can't pickle _thread.lock objects in tensorflow 2.4
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Debian 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.4.0-0-g582c8d236cb 2.4.0
- Python version: 3.7.9
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
Running a simple training process with MultiWorkerMirroredStrategy fails with TypeError: can't pickle _thread.lock objects.

Describe the expected behavior
The training should proceed without errors.

Standalone code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

The example needs to run in a distributed environment to reproduce the issue, so save the script in a file and run it in 3 different terminals:

    TF_CONFIG='{"cluster": {"chief": ["localhost:2222"], "worker": ["localhost:2223", "localhost:2224"]}, "task": {"type": "chief", "index": 0}}' python script.py
    TF_CONFIG='{"cluster": {"chief": ["localhost:2222"], "worker": ["localhost:2223", "localhost:2224"]}, "task": {"type": "worker", "index": 0}}' python script.py
    TF_CONFIG='{"cluster": {"chief": ["localhost:2222"], "worker": ["localhost:2223", "localhost:2224"]}, "task": {"type": "worker", "index": 1}}' python script.py

script.py:

    import tensorflow as tf
    import tensorflow_datasets as tfds

    BUFFER_SIZE = 10000
    BATCH_SIZE = 64
    LEARNING_RATE = 1e-4

    def input_fn(mode, input_context=None):
        tfds.disable_progress_bar()
        datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
        mnist_dataset = (datasets['train'] if mode == tf.estimator.ModeKeys.TRAIN
                         else datasets['test'])

        def scale(image, label):
            image = tf.cast(image, tf.float32)
            image /= 255
            return image, label

        if input_context:
            mnist_dataset = mnist_dataset.shard(input_context.num_input_pipelines,
                                                input_context.input_pipeline_id)
        return mnist_dataset.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)

    def model_fn(features, labels, mode):
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation='relu'),
            tf.keras.layers.Dense(10)
        ])
        logits = model(features, training=False)

        if mode == tf.estimator.ModeKeys.PREDICT:
            predictions = {'logits': logits}
            return tf.estimator.EstimatorSpec(labels=labels, predictions=predictions)

        optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=LEARNING_RATE)
        loss = tf.keras.losses.SparseCategoricalCrossentropy(
            from_logits=True, reduction=tf.keras.losses.Reduction.NONE)(labels, logits)
        loss = tf.reduce_sum(loss) * (1. / BATCH_SIZE)
        if mode == tf.estimator.ModeKeys.EVAL:
            return tf.estimator.EstimatorSpec(mode, loss=loss)

        logging_hook = tf.estimator.LoggingTensorHook({'loss': loss}, every_n_iter=10)
        return tf.estimator.EstimatorSpec(
            mode=mode,
            loss=loss,
            training_hooks=[logging_hook],
            train_op=optimizer.minimize(loss, tf.compat.v1.train.get_or_create_global_step()))

    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
    config = tf.estimator.RunConfig(train_distribute=strategy)

    classifier = tf.estimator.Estimator(
        model_fn=model_fn, model_dir='/tmp/multiworker', config=config)
    tf.estimator.train_and_evaluate(
        classifier,
        train_spec=tf.estimator.TrainSpec(input_fn=input_fn),
        eval_spec=tf.estimator.EvalSpec(input_fn=input_fn))

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

Full logs:

    TF_CONFIG='{"cluster": {"chief": ["localhost:2222"], "worker": ["localhost:2223", "localhost:2224"]}, "task": {"type": "worker", "index": 1}}' python script.py
    WARNING:tensorflow:From script.py:68: _CollectiveAllReduceStrategyExperimental.__init__ (from tensorflow.python.distribute.collective_all_reduce_strategy) is deprecated and will be removed in a future version.
    Instructions for updating:
    use distribute.MultiWorkerMirroredStrategy instead
    2021-01-20 18:24:44.477611: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
    2021-01-20 18:24:44.479538: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
    2021-01-20 18:24:44.491607: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job chief -> {0 -> localhost:2222}
    2021-01-20 18:24:44.491654: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -> {0 -> localhost:2223, 1 -> localhost:2224}
    2021-01-20 18:24:44.492211: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:411] Started server with target: grpc://localhost:2224
    Traceback (most recent call last):
      File "script.py", line 73, in <module>
        model_fn=model_fn, model_dir='/tmp/multiworker', config=config)
      File "/opt/conda/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 183, in __init__
        config, model_dir)
      File "/opt/conda/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1832, in maybe_overwrite_model_dir_and_session_config
        config = run_config.RunConfig.replace(config, session_config=session_config)
      File "/opt/conda/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/run_config.py", line 923, in replace
        copy.deepcopy(self),
      File "/opt/conda/lib/python3.7/copy.py", line 180, in deepcopy
        y = _reconstruct(x, memo, *rv)
      File "/opt/conda/lib/python3.7/copy.py", line 281, in _reconstruct
        state = deepcopy(state, memo)
      File "/opt/conda/lib/python3.7/copy.py", line 150, in deepcopy
        y = copier(x, memo)
      File "/opt/conda/lib/python3.7/copy.py", line 241, in _deepcopy_dict
        y[deepcopy(key, memo)] = deepcopy(value, memo)
      File "/opt/conda/lib/python3.7/copy.py", line 161, in deepcopy
        y = copier(memo)
      File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py", line 1542, in __deepcopy__
        setattr(result, k, copy.deepcopy(v, memo))
      (the deepcopy / _reconstruct / _deepcopy_dict frames above repeat three more times)
      File "/opt/conda/lib/python3.7/copy.py", line 169, in deepcopy
        rv = reductor(4)
    TypeError: can't pickle _thread.lock objects
tensorflowtensorflow
libtensorflow-gpu-windows-x86_64 2.3.1 does not support GPUs with compute capability 5.0
Bug
TensorFlow 2.3.1 crashes on my GPU with compute capability (CC) 5.0 with the error "no kernel image is available for execution on the device". I used cuobjdump to inspect tensorflow.dll to see which binary and PTX code is included. I found binary code for CC 3.5, 3.7, 5.2, 6.0, 6.1 and 7.0, as well as PTX code for CC 7.0. PTX code is forward compatible, but 7.0 PTX cannot run on a 5.0 GPU. Binary code is also forward compatible, but only across minor version updates. This means that neither the 3.7 nor the 5.2 binary can run on a 5.0 GPU. In fact, the current configuration means that TensorFlow 2.3.1 can run on any GPU with CC 3.5 and up, except for CC 5.0. I suggest building for CC 5.0 instead of CC 5.2, so that TensorFlow 2.3.x can run on any GPU with CC 3.5 and up.
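The compatibility rules above can be encoded as a small check. This is only a sketch of the CUDA SASS/PTX rules, not an inspection of the actual DLL:

```python
# Sketch of the CUDA compatibility rules described above: SASS (binary)
# code needs the same major version and an equal-or-lower minor version
# than the GPU; PTX can be JIT-compiled on any equal-or-newer architecture.
def can_run(binary_ccs, ptx_ccs, gpu_cc):
    sass_ok = any(b[0] == gpu_cc[0] and b[1] <= gpu_cc[1] for b in binary_ccs)
    ptx_ok = any(p <= gpu_cc for p in ptx_ccs)
    return sass_ok or ptx_ok

binaries = [(3, 5), (3, 7), (5, 2), (6, 0), (6, 1), (7, 0)]  # from cuobjdump
ptx = [(7, 0)]

assert can_run(binaries, ptx, (3, 5))       # exact SASS match
assert not can_run(binaries, ptx, (5, 0))   # the gap this issue reports
assert can_run(binaries, ptx, (7, 5))       # covered by the 7.0 PTX
```

Swapping the (5, 2) entry for (5, 0) in the list makes every CC from 3.5 upward pass, which is the fix proposed above.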
tensorflowtensorflow
Example code bug in documentation related to tf.data.Dataset.interleave
Bug
URL with the issue: (link)

Description of issue
The following has been tested with TF version v2.4.0-0-g582c8d236cb. On the above-mentioned documentation page, it defines def __new__(cls, num_samples=3), which is OK up to a point, but results in a bug for the variant using interleave(), since internally a tensor is then provided as the second argument, so that num_samples won't be 3 — in fact it alternates between 0 and 1 (in the generator). One can easily check this by using print() both in __new__ and in the generator. Assuming the person writing this part of the documentation wasn't aware of this (and this problem is not version specific), it is very likely that the timing results are wrong accordingly when using interleave(). It is also worth mentioning that when using tf.data.Dataset.range(2).interleave(...), the generated data in total is doubled in size (after fixing the above-mentioned bug), so the timing results cannot be compared 1:1; when using tf.data.Dataset.range(2).interleave(...), the timing results should be divided by two to allow for a fair comparison, which isn't mentioned in the documentation. See my gist here.
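The alternation described above can be sketched without TensorFlow; make_generator below is a hypothetical stand-in for the dataset factory in the guide:

```python
# Pure-Python sketch of the behaviour described above; `make_generator`
# stands in for the guide's factory with its `num_samples=3` default.
def make_generator(num_samples=3):
    return list(range(num_samples))

# Called standalone, the default applies:
assert make_generator() == [0, 1, 2]

# But Dataset.range(2).interleave(...) calls the factory once per element,
# passing the element value as the argument -- so num_samples is never 3:
interleaved = [make_generator(n) for n in range(2)]
assert interleaved == [[], [0]]
```

In the real tf.data pipeline the argument additionally arrives as a scalar tensor rather than a Python int, which is why the print-based check in the report shows it alternating between 0 and 1.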
tensorflowtensorflow
BatchNormalization inference equation in docs is incorrect
Bug
URL(s) with the issue: (link)

Description of issue (what needs changing)
This documentation states the following: "During inference (i.e. when using evaluate() or predict() or when calling the layer/model with the argument training=False, which is the default), the layer normalizes its output using a moving average of the mean and standard deviation of the batches it has seen during training. That is to say, it returns (batch - self.moving_mean) / (self.moving_var + epsilon) * gamma + beta."

This equation is incorrect; testing against the source code shows it gives the wrong output. The correct equation, which gives the right output and matches the equation from the literature, is

    (batch - self.moving_mean) / sqrt(self.moving_var + epsilon) * gamma + beta

i.e. it is missing a square root. Further, for more clarity (to avoid confusion for some), it would be better to write it as

    gamma * (batch - self.moving_mean) / sqrt(self.moving_var + epsilon) + beta
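A quick scalar check of the two formulas, using made-up values, shows how far apart they are:

```python
import math

# Scalar check of the two formulas above with hypothetical values.
x, moving_mean, moving_var = 2.0, 1.0, 4.0
gamma, beta, eps = 1.0, 0.0, 0.0

docs_version = gamma * (x - moving_mean) / (moving_var + eps) + beta
corrected = gamma * (x - moving_mean) / math.sqrt(moving_var + eps) + beta

assert docs_version == 0.25  # divides by the variance -- wrong
assert corrected == 0.5      # divides by the std dev, sqrt(4) = 2 -- right
```

The two agree only when the moving variance happens to be 1, which is presumably why the typo is easy to miss.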
tensorflowtensorflow
Unsafe conversion from pointer to uint64_t in Ethos-U kernel
Bug
TensorFlow Micro

System information
- Host OS platform and distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary):
- TensorFlow version (commit SHA if source):
- Target platform (e.g. Arm Mbed OS, Arduino Nano 33 etc.): Ethos-U

Describe the problem
Example:

    reinterpret_cast<void*>(0x78000000)  ->  0x0000000078000000
    reinterpret_cast<void*>(0x80000000)  ->  0xffffffff80000000
    reinterpret_cast<void*>(0x88000000)  ->  0xffffffff88000000

This happens specifically for GCC and prevents using addresses at 0x80000000 or above.

Please provide the exact sequence of commands/steps when you ran into the problem
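What the three examples show is plain 32-to-64-bit sign extension. The following is a pure-Python model of the widening the compiler performs, not the actual kernel code:

```python
# Model of sign-extending a 32-bit value to 64 bits: any address with
# bit 31 set gets 0xffffffff prepended, which is exactly the corruption
# observed above for 0x80000000 and 0x88000000.
def sign_extend_32_to_64(value):
    value &= 0xFFFFFFFF
    if value & 0x80000000:
        return value | 0xFFFFFFFF00000000
    return value

assert sign_extend_32_to_64(0x78000000) == 0x0000000078000000
assert sign_extend_32_to_64(0x80000000) == 0xFFFFFFFF80000000
assert sign_extend_32_to_64(0x88000000) == 0xFFFFFFFF88000000
```

In C++ the usual cure is to widen through an unsigned type first (e.g. cast the pointer to uintptr_t before converting to uint64_t), so that zero extension is used instead.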
tensorflowtensorflow
Instead of x2, variable name should be x0
Bug
Instead of x2, the variable name should be x0 in the last line:

    x0 = tf.Variable(3.0)
    x1 = tf.Variable(0.0)

    with tf.GradientTape() as tape:
        # Update x1 = x1 + x0.
        x1.assign_add(x0)
        # The tape starts recording from x1.
        y = x1**2   # y = (x1 + x0)**2

    # This doesn't work.
    print(tape.gradient(y, x0))   # dy/dx0 = 2*x1

URL
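The arithmetic behind the comment can be checked without TensorFlow, using the snippet's own values (TensorFlow itself returns None for tape.gradient(y, x0) here, because assign_add is not recorded on the tape):

```python
# TF-free arithmetic check of the snippet above.
x0, x1 = 3.0, 0.0
x1_after = x1 + x0            # what x1.assign_add(x0) leaves in x1
y = x1_after ** 2
would_be_grad = 2 * x1_after  # the 2*x1 from the comment, with updated x1

assert y == 9.0
assert would_be_grad == 6.0
```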
tensorflowtensorflow
micro: port op BATCH_MATMUL from lite
Bug
TensorFlow Micro

This issue tracks my work porting the operator BATCH_MATMUL from lite to micro. The port will be submitted in a number of PRs. Here's a rough flight plan, per @advaitjain and @petewarden:

- PR 1: extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite() in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver.
- PR 2: extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header, which can be included without dragging in reference_ops.h's dependencies.
- PR 3: copy the operator from lite to micro, making minimal changes, and not included in the build.
- PR 4: delete extra code from the micro copy of the operator.
- PR 5: port the micro copy of the operator as necessary and add a corresponding test.
tensorflowtensorflow
TFLite converter crashes when quantization is enabled
Bug
System information
- Have I written custom code: Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.3.1
- Python version: 3.8.6
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

I have a TF model in SavedModel format. When converting to TFLite without quantization, everything works and I'm able to run inference, no problem. When converting using full integer quantization with a representative dataset, I get the following exception:

    Traceback (most recent call last):
      File "c:\repositories\poolnet2tfv2\tfliteinf.py", line 93, in <module>
        quantize('poolnet_640_tf')
      File "c:\repositories\poolnet2tfv2\tfliteinf.py", line 35, in quantize
        quant_model = converter.convert()
      File "c:\repositories\poolnet2tfv2\venv38local\lib\site-packages\tensorflow\lite\python\lite.py", line 1076, in convert
        return super(TFLiteConverterV2, self).convert()
      File "c:\repositories\poolnet2tfv2\venv38local\lib\site-packages\tensorflow\lite\python\lite.py", line 899, in convert
        return super(TFLiteFrozenGraphConverterV2,
      File "c:\repositories\poolnet2tfv2\venv38local\lib\site-packages\tensorflow\lite\python\lite.py", line 638, in convert
        result = self._calibrate_quantize_model(result, **flags)
      File "c:\repositories\poolnet2tfv2\venv38local\lib\site-packages\tensorflow\lite\python\lite.py", line 450, in _calibrate_quantize_model
        return calibrate_quantize.calibrate_and_quantize(
      File "c:\repositories\poolnet2tfv2\venv38local\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py", line 95, in calibrate_and_quantize
        return self._calibrator.QuantizeModel(
    RuntimeError: Quantization not yet supported for op: ...

See here a zip file containing the TF model (poolnet_640_tf), a folder with 3 images for the representative dataset (norm_images), and quantize.py (my conversion code). Thank you!
tensorflowtensorflow
Documentation missing: tf.keras.Model.fit shuffle argument is ignored when passing a tf.data.Dataset
Bug
URL(s) with the issue: fit

Description of issue (what needs changing)
There's an issue with the shuffle argument description: it doesn't state that the shuffle argument is ignored when the argument x is a tf.data.Dataset in tf.keras.Model.fit.

Clear description
I was inspecting the source code of tf.keras.Model.fit and found that when the input arg x to the data handler (L1050) is a tf.data.Dataset, it ends up using the DatasetAdapter (L671), which silently ignores the shuffle argument. This issue is not clearly explained in the description of the shuffle argument below:

    Boolean (whether to shuffle the training data before each epoch) or str (for 'batch'). This argument is ignored when x is a generator. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.

I would recommend writing something like this instead: "This argument is ignored when x is a generator or a tf.data.Dataset." I can contribute by fixing the docs if you agree that this should be done. Thank you in advance for your review!
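The dispatch described above can be sketched with an illustrative stand-in (the function and return values below are NOT Keras' real internals, just a model of the behaviour):

```python
# Illustrative stand-in for the data-handler dispatch: the Dataset branch
# never consults `shuffle`.
def select_adapter(x, shuffle=True):
    if isinstance(x, (list, tuple)):      # array-like input
        return ("ArrayAdapter", shuffle)
    return ("DatasetAdapter", None)       # `shuffle` silently dropped

assert select_adapter([1, 2, 3], shuffle=True) == ("ArrayAdapter", True)
assert select_adapter("a tf.data.Dataset", shuffle=True) == ("DatasetAdapter", None)
```

The practical consequence: with a tf.data.Dataset input, shuffling has to happen in the pipeline itself (e.g. dataset.shuffle(buffer_size) before batching); the fit(shuffle=...) flag has no effect.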
tensorflowtensorflow
Donkey Car epoch failure during train command: Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes, using the Donkey Car train function
- OS platform and distribution (e.g., Linux Ubuntu 16.04): I am not sure, using the Anaconda prompt
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below): 2.4.0 (have also tried 2.3.0 and 2.3.2)
- Python version: 3.7.9
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 10.2
- GPU model and memory: GTX 1080, … GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
When executing the train command, it eventually fails with this error:

    W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.
         [[{{node PyFunc}}]]

Describe the expected behavior
100 epochs should be run through.

Standalone code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

    (donkey) C:\Users\Kelvin> python C:\Users\Kelvin\mycar\train.py --tub C:\Users\Kelvin\mycar\data --model C:\Users\Kelvin\mycar\models\mypilot.h5

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

Sorry if anything is unclear, I am fairly new to this. This is in reference to autorope/donkeycar#742. Basically we are trying to train a neural network and go through 100 epochs, but we run into this error. I have tried with 2.3.2 and 2.4.0, updated CUDA 10.2, and updated my graphics driver, and have had no luck. Here is the full log from when I ran into the issue: console_with_2.4.0.txt
tensorflowtensorflow
Keras saved model returns different results than original model using Batch Normalization with multi-GPU distributed training
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary, via pip3
- TensorFlow version (use command below): 2.3.1
- Python version: 3.6.9
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 10.1
- GPU model and memory: GeForce GTX 1080 Ti, 4x 11 GB

I get a different result when using the evaluate() function on a saved model, compared with the original model. This only happens when Batch Normalization is included in the model and when training on multiple GPUs with MirroredStrategy.

Here is my model:

    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation='relu'),
            tf.keras.layers.Dense(10)
        ])
        model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                      optimizer=tf.keras.optimizers.Adam(),
                      metrics=['accuracy'])

Multiple GPUs with MirroredStrategy:

    strategy = tf.distribute.MirroredStrategy()
    print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
    # Output: Number of devices: 2

Evaluate after training:

    eval_loss, eval_acc = model.evaluate(eval_dataset)
    print('Eval loss: {}, Eval accuracy: {}'.format(eval_loss, eval_acc))
    # Output:
    # 79/79 - 0s 5ms/step - loss: 0.0424 - accuracy: 0.9884
    # Eval loss: 0.04239395260810852, Eval accuracy: 0.9883999824523926

Save the model and evaluate again:

    path = 'saved_model'
    model.save(path, save_format='tf')
    with strategy.scope():
        replicated_model = tf.keras.models.load_model(path)
        replicated_model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                                 optimizer=tf.keras.optimizers.Adam(),
                                 metrics=['accuracy'])
        eval_loss, eval_acc = replicated_model.evaluate(eval_dataset)
        print('Eval loss: {}, Eval accuracy: {}'.format(eval_loss, eval_acc))
    # Output:
    # 79/79 - 0s 6ms/step - loss: 0.0424 - accuracy: 0.9883
    # Eval loss: 0.04239019751548767, Eval accuracy: 0.9883000254631042

Without BN:

Output from evaluate():

    79/79 - 0s 6ms/step - loss: 0.0450 - accuracy: 0.9837
    Eval loss: 0.04498908668756485, Eval accuracy: 0.9836999773979187

Save the model and repeat evaluate():

    79/79 - 0s 4ms/step - loss: 0.0450 - accuracy: 0.9837
    Eval loss: 0.04498908668756485, Eval accuracy: 0.9836999773979187

I have searched for similar issues and think this is because BN computes differently during training and inference, but when I tried with one GPU this issue didn't occur.

Code for reproducing the results:
tensorflowtensorflow
Typo: missing space in RuntimeError
Bug
I just signed up here to report this, sorry if I'm doing everything wrong or something. I just got the following RuntimeError:

    in user code:

        ...:22 fitting
            grads = tape.gradient(value, model.trainable_weights)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/backprop.py:1027 gradient
            raise RuntimeError("A non-persistent GradientTape can only be used to"

    RuntimeError: A non-persistent GradientTape can only be used tocompute one set of gradients (or jacobians)

and I think there is a space missing in the last line's "tocompute".
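The run-on word comes from two adjacent Python string literals being concatenated implicitly, with the first lacking a trailing space. A small reproduction of the mechanism (the literals are quoted from the error message):

```python
# Adjacent string literals are concatenated implicitly; without a
# trailing space on the first literal, the words fuse together.
broken = ("A non-persistent GradientTape can only be used to"
          "compute one set of gradients (or jacobians)")
fixed = ("A non-persistent GradientTape can only be used to "
         "compute one set of gradients (or jacobians)")

assert "tocompute" in broken
assert "to compute" in fixed
```

So the fix is a one-character change: add the trailing space inside the first literal in backprop.py.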
tensorflowtensorflow
Docs: keras.utils.plot_model prints the shapes as (None, n), but the outdated (n) is given in the docs
Bug
URL(s) with the issue: (link)

Description of issue (what needs changing)
keras.utils.plot_model prints the shapes as (None, n), but they are given in the docs as (n).

Clear description
Look at the line keras.utils.plot_model(model, "my_first_model_with_shape_info.png", show_shapes=True) in the docs (link provided above); the output shown there is outdated.

Correct links
Gist here to show the change.

TensorFlow version: 2.4
Submit a pull request?: Yes
tensorflowtensorflow
TypeError: An op outside of the function building code is being passed a "Graph" tensor
Bug
Following the documentation, I rearranged the functions into a class, changed the model, and made a few other modifications. However, the code fails to work if tf.function is enabled; otherwise it works perfectly fine. By commenting out line 97, the error is gone:

    self.model.optimizer.apply_gradients(zip(grads, self.model.trainable_variables))

Error:

    Traceback (most recent call last):
      File "/Users/emadboctor/Desktop/code/drl_algos/a2c.py", line 109, in <module>
        agn.fit()
      File "/Users/emadboctor/Desktop/code/drl_algos/a2c.py", line 103, in fit
        self.train_step()
      File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
        result = self._call(*args, **kwds)
      File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 888, in _call
        return self._stateless_fn(*args, **kwds)
      File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2942, in __call__
        return graph_function._call_flat(
      File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1918, in _call_flat
        return self._build_call_outputs(self._inference_function.call(
      File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 555, in call
        outputs = execute.execute(
      File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 75, in quick_execute
        raise e
      File "/usr/local/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
        tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
    TypeError: An op outside of the function building code is being passed a "Graph" tensor. It is possible to have Graph tensors leak out of the function building context by including a tf.init_scope in your function building code. For example, the following function will fail:
      @tf.function
      def has_init_scope():
        my_constant = tf.constant(1.)
        with tf.init_scope():
          added = my_constant * 2
    The graph tensor has name: while...:4

Code:

    import gym
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.layers import Conv2D, Dense, Flatten, Input
    from tensorflow.keras.losses import Huber
    from tensorflow.keras.models import Model
    from tensorflow.keras.optimizers import Adam


    class A2C:
        def __init__(self, env, gamma=0.99, fc_units=512):
            self.env = env
            self.available_actions = env.action_space.n
            self.model = self.create_model(fc_units)
            self.state = tf.cast(self.env.reset(), tf.float32)
            self.gamma = gamma
            self.division_eps = np.finfo(np.float32).eps.item()
            self.loss = Huber(reduction=tf.keras.losses.Reduction.SUM)

        def create_model(self, fc_units):
            x0 = Input(self.env.observation_space.shape)
            x = Conv2D(32, 8, 4, activation='relu')(x0)
            x = Conv2D(64, 4, 2, activation='relu')(x)
            x = Conv2D(32, 3, 1, activation='relu')(x)
            x = Flatten()(x)
            x = Dense(fc_units, activation='relu')(x)
            actor = Dense(self.available_actions)(x)
            critic = Dense(1)(x)
            model = Model(x0, [actor, critic])
            model.call = tf.function(model.call)
            return model

        def env_step(self, action):
            state, reward, done, _ = self.env.step(action)
            return (state.astype(np.float32),
                    np.array(reward, np.int32),
                    np.array(done, np.int32))

        def tf_env_step(self, action):
            return tf.numpy_function(self.env_step, [action],
                                     [tf.float32, tf.int32, tf.int32])

        def get_returns(self, rewards, standardize=True):
            n = tf.shape(rewards)[0]
            returns = tf.TensorArray(dtype=tf.float32, size=n)
            rewards = tf.cast(rewards[::-1], dtype=tf.float32)
            discounted_sum = tf.constant(0.0)
            discounted_sum_shape = discounted_sum.shape
            for i in tf.range(n):
                reward = rewards[i]
                discounted_sum = reward + self.gamma * discounted_sum
                discounted_sum.set_shape(discounted_sum_shape)
                returns = returns.write(i, discounted_sum)
            returns = returns.stack()[::-1]
            if standardize:
                returns = ((returns - tf.math.reduce_mean(returns)) /
                           (tf.math.reduce_std(returns) + self.division_eps))
            return returns

        def play_episode(self, max_steps=10000):
            action_probs = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)
            values = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)
            rewards = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True)
            initial_shape = self.state.shape
            for i in tf.range(max_steps):
                actor_out, value = self.model(tf.expand_dims(self.state, 0))
                action = tf.random.categorical(actor_out, 1)[0, 0]
                action_probs_t = tf.nn.softmax(actor_out)
                self.state, reward, done = self.tf_env_step(action)
                self.state.set_shape(initial_shape)
                action_probs = action_probs.write(i, action_probs_t[0, action])
                values = values.write(i, tf.squeeze(value))
                rewards = rewards.write(i, reward)
                if tf.cast(done, tf.bool):
                    self.state = tf.cast(self.env.reset(), tf.float32)
                    break
            return [item.stack() for item in [action_probs, values, rewards]]

        def compute_loss(self, returns, values, action_probs):
            advantage = returns - values
            action_log_probs = tf.math.log(action_probs)
            actor_loss = -tf.math.reduce_sum(action_log_probs * advantage)
            critic_loss = self.loss(values, returns)
            return actor_loss + critic_loss

        @tf.function
        def train_step(self):
            with tf.GradientTape() as tape:
                action_probs, values, rewards = self.play_episode()
                returns = self.get_returns(rewards)
                loss = self.compute_loss(returns, values, action_probs)
            grads = tape.gradient(loss, self.model.trainable_variables)
            self.model.optimizer.apply_gradients(
                zip(grads, self.model.trainable_variables))
            episode_reward = tf.math.reduce_sum(rewards)
            return episode_reward

        def fit(self, learning_rate=7e-4):
            self.model.compile(optimizer=Adam(learning_rate))
            self.train_step()


    if __name__ == '__main__':
        gym_env = gym.make('PongNoFrameskip-v4')
        agn = A2C(gym_env)
        agn.fit()
tensorflowtensorflow
TensorRT converter fails for CombinedNonMaxSuppression
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tf-nightly 2.5.0.dev20210114
- Python version: 3.7
- CUDA/cuDNN version: 11.0 / 8.0.4
- GPU model and memory: 1060

Describe the current behavior
The TensorRT converter crashes with a segmentation fault when I try to export my saved model. Interestingly, if I set minimum_segment_size=10 it works, because it skips "Replaced segment 5 consisting of 7 nodes by StatefulPartitionedCall/decode_predictions/TRTEngineOp_0_5":

    2021-01-15 15:21:38.915310: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:858] Segment consists of nodes: StatefulPartitionedCall/decode_predictions/combined_non_max_suppression/CombinedNonMaxSuppression, StatefulPartitionedCall/decode_predictions/combined_non_max_suppression/CombinedNonMaxSuppression/max_output_size_per_class, StatefulPartitionedCall/decode_predictions/combined_non_max_suppression/Const, StatefulPartitionedCall/decode_predictions/combined_non_max_suppression/iou_threshold, StatefulPartitionedCall/decode_predictions/combined_non_max_suppression/score_threshold, StatefulPartitionedCall/decode_predictions/transpose_1, StatefulPartitionedCall/decode_predictions/transpose_1/perm

I have attached the full log after running with these flags:

    TF_CPP_VMODULE=trt_engine_op=2,convert_nodes=2,convert_graph=2,segment=2,trt_shape_optimization_profiles=2,trt_engine_resource_ops=2 python trt.py

Standalone code to reproduce the issue

    import os
    import tensorflow as tf

    # Download and extract the zip (url)

    params = tf.experimental.tensorrt.ConversionParams(
        precision_mode='FP32',
        maximum_cached_engines=1,
        minimum_segment_size=5)
    converter = tf.experimental.tensorrt.Converter(
        input_saved_model_dir='retinanet-18-640-30x-64-tpu',
        conversion_params=params)
    converter.convert()

    def input_fn():
        steps = 1
        for i in range(steps):
            yield [tf.random.uniform((640, 640, 3)), tf.constant(1, dtype=tf.int32)]

    converter.build(input_fn=input_fn)
    converter.save('trt')

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached: trt_log.txt
tensorflowtensorflow
Unable to load weights from a directory other than the working directory
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tf-nightly-gpu v2.5.0.dev20201214
- Python version: 3.8.5
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA v11.0
- GPU model and memory: Tesla T4, 15109 MiB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
If I use model.load_weights() and specify a directory, similarly to how it's done in the docs:

    checkpoint_filepath = './checkpoints/training_checkpoints_test/cp.ckpt'
    checkpoint_dir = os.path.dirname(checkpoint_filepath)
    if os.path.exists(checkpoint_dir):
        model.load_weights(checkpoint_filepath)
        print('Checkpoint loaded')
    else:
        os.makedirs('./checkpoints/training_checkpoints_test')
        print('No checkpoint found')

the script never actually reads from the directory I'm specifying. If I instead load checkpoints directly from the working directory:

    checkpoint_filepath = 'training_checkpoints_test/cp.ckpt'
    checkpoint_dir = os.path.dirname(checkpoint_filepath)
    if os.path.exists(checkpoint_dir):
        model.load_weights(checkpoint_filepath)
        print('Checkpoint loaded')
    else:
        os.makedirs('training_checkpoints_test')
        print('No checkpoint found')

it works, no problem. This second approach is fine if you have few models or checkpoints, but the directory quickly fills with folders and folders of checkpoints from different models and runs, and it'd be better to keep all checkpoints in one folder.

Describe the expected behavior
Load checkpoints from sub-directories.

Standalone code to reproduce the issue
See above.
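A sketch of the intended sub-directory layout (all paths here are hypothetical); with TensorFlow available, tf.train.latest_checkpoint is the usual way to resolve the checkpoint prefix before loading:

```python
import os
import tempfile

# Hypothetical layout: each run keeps its checkpoints in its own
# sub-directory, and the prefix is built with os.path before loading.
root = tempfile.mkdtemp()
checkpoint_filepath = os.path.join(root, "checkpoints", "run_1", "cp.ckpt")
checkpoint_dir = os.path.dirname(checkpoint_filepath)
os.makedirs(checkpoint_dir, exist_ok=True)

assert os.path.isdir(checkpoint_dir)
assert os.path.basename(checkpoint_filepath) == "cp.ckpt"

# With TensorFlow available, the robust lookup would be:
#   latest = tf.train.latest_checkpoint(checkpoint_dir)
#   if latest is not None:
#       model.load_weights(latest)
```

tf.train.latest_checkpoint reads the `checkpoint` bookkeeping file in the directory, so it resolves the correct prefix regardless of where the directory lives relative to the working directory.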
tensorflowtensorflow
An issue on cross-building TensorFlow Lite for Python
Bug
TensorFlow Micro

System information
- Host OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04 x86_64 (amd64) PC
- TensorFlow installed from (source or binary): v2.4
- TensorFlow version (commit SHA if source): 582c8d2
- Target platform (e.g. Arm Mbed OS, Arduino Nano 33 etc.): RK3399, Ubuntu 18.04 aarch64

Describe the problem
I wanted to get a tflite .whl for Python 3 on aarch64, so I tried to cross-build the tflite .whl on an Ubuntu 18.04 x86_64 host PC, but while compiling the sources I ran into the following issue:

[image]

Host PC environment:
- OS: Ubuntu 18.04 x86_64 (amd64)
- Native gcc/g++: 7.5.0
- Cross compilers: aarch64-linux-gnu-gcc / aarch64-linux-gnu-g++ 7.5.0
- Bazel: 3.1.0
- Python virtual env: Python 3.7

Please provide the exact sequence of commands/steps when you ran into the problem
I followed these steps on my host PC:

    sudo apt update
    sudo apt-get install software-properties-common
    sudo apt update
    sudo apt install git curl
    sudo apt install python3.7 python3.7-dev python3.7-venv python3.7-distutils
    sudo apt install mesa-common-dev libegl1-mesa-dev libgles2-mesa-dev
    cd ~
    python3.7 -m venv py37
    source py37/bin/activate
    pip install cython
    pip install wheel
    pip install numpy
    git clone -b r2.4 https://github.com/tensorflow/tensorflow.git tensorflow_r2.4
    cd tensorflow_r2.4
    python configure.py

[image]

    tensorflow/lite/tools/pip_package/build_pip_package_with_bazel.sh aarch64
tensorflowtensorflow
RNN model cannot run in TFLite with delegates
Bug
system information: os platform and distribution: Qualcomm Snapdragon 865; tensorflow installed from (source or binary): source; tensorflow version (or github SHA if from source): nightly.

command used to run the converter, or code if you're using the Python API:

```python
tf_model = tf.keras.models.load_model('my_model.h5')
for i in range(4):
    tf_model.inputs[i].shape.dims[0] = tensorflow.python.framework.tensor_shape.Dimension(1)
model_func = tf.function(lambda *a: tf_model(a))
concrete_func = model_func.get_concrete_function([
    tf.TensorSpec(tf_model.inputs[0].shape, tf_model.inputs[0].dtype),
    tf.TensorSpec(tf_model.inputs[1].shape, tf_model.inputs[1].dtype),
    tf.TensorSpec(tf_model.inputs[2].shape, tf_model.inputs[2].dtype),
    tf.TensorSpec(tf_model.inputs[3].shape, tf_model.inputs[3].dtype)])
out = concrete_func(input_1, gru0_1_h_in, gru1_1_h_in)  # outputs: out, quat, gru1_h_in
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()
```

the output from the converter invocation:

```
2021-01-14 11:16:28.325520: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:933] Optimization results for grappler item: graph_to_optimize
  function_optimizer: Graph size after: 389 nodes (195), 590 edges (225), time = 6.75ms.
  function_optimizer: Graph size after: 389 nodes (0), 590 edges (0), time = 3.64ms.
Optimization results for grappler item: while_body_2339
  function_optimizer: function_optimizer did nothing. time = 0.004ms.
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: while_cond_2338
  function_optimizer: function_optimizer did nothing. time = 0.004ms.
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: while_cond_2708
  function_optimizer: function_optimizer did nothing. time = 0.003ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: while_body_2709
  function_optimizer: function_optimizer did nothing. time = 0.003ms.
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: while_body_1969
  function_optimizer: function_optimizer did nothing. time = 0.003ms.
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
Optimization results for grappler item: while_cond_1968
  function_optimizer: function_optimizer did nothing. time = 0.003ms.
  function_optimizer: function_optimizer did nothing. time = 0.002ms.
2021-01-14 11:16:28.851727: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:332] Ignored output_format.
2021-01-14 11:16:28.851756: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:335] Ignored drop_control_dependency.
2021-01-14 11:16:28.895692: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] Disabled MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
```

also please include a link to the saved model or graphdef: my keras model contains a GRU layer; here is what the keras layer converts to in tflite (Screenshot from 2021-01-14 06:12:45).

failure details: model conversion is successful and the model runs on CPU. however, when I try to initialize an OpenCL delegate, I get the following run-time error:

```
ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
Interpreter::ModifyGraphWithDelegate failed
```

I suspect the problem is with those While ops generated by the converter. my GRU layer is stateless: I manually manage the layer state. tf.keras.layers.GRU is defined with arguments return_sequences=True, stateful=False, unroll=False; it is called with one of the model's inputs passed to the initial_state argument, and with a sequence dimension of 1. is it possible to tell the tflite converter to generate a graph that doesn't contain a While loop, since in my situation I don't actually need a loop in my GRU?

RNN conversion support: if converting TF RNN to tflite fused RNN ops, please prefix [RNN] in the title. any other info / logs: include any logs or source code that would be helpful to diagnose the problem. if including tracebacks, please include the full traceback. large logs and files should be attached.
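for background on why no loop should be needed here: with a sequence length of 1, a GRU layer reduces to a single cell update on the externally supplied state. the following is a NumPy sketch of one GRU step (standard GRU equations; the weight names and the omission of biases are illustrative, not the converter's actual graph):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    # One GRU cell update. With seq_len == 1 this is the whole layer's
    # work per call, so no While loop over time steps is required.
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1.0 - z) * h + z * h_cand         # new hidden state

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4))   # one timestep of input
h = np.zeros((1, 3))              # externally managed hidden state
Wz, Wr, Wh = (rng.standard_normal((4, 3)) for _ in range(3))
Uz, Ur, Uh = (rng.standard_normal((3, 3)) for _ in range(3))
h_next = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
print(h_next.shape)
```

a statically unrolled single step like this has only static shapes, which is exactly what a static-tensors-only delegate needs.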
tensorflowtensorflow
writer_test of serialization fails for SqueezeNet
Bug
system information: os platform and distribution: Linux Ubuntu 16.04; tensorflow installed from: source; python version: Python 3.5.2; bazel version (if compiling from source): build label 3.7.2; gcc compiler version (if compiling from source): gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609.

describe the current behavior: download SqueezeNet (squeezenet.tflite), then build and run the serialization writer test:

```
bazel build -c opt tensorflow/lite/tools/serialization:writer_test
bazel-bin/tensorflow/lite/tools/serialization/writer_test model_folder/squeezenet.tflite
```

error:

```
tensorflow/lite/kernels/reshape.cc:69 num_input_elements != num_output_elements (1001 != 1)
ERROR: Node number 38 (RESHAPE) failed to prepare.
AllocateTensors failed on the round-trip model
```

describe the expected behavior: the writer test passes.

standalone code to reproduce the issue: same steps as above. a proposal of a fix is at:
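for context, the failing check is RESHAPE's prepare-time invariant that the element counts match. NumPy enforces the same rule, so the analogous reshape of a 1001-element tensor to 1 element fails the same way (a sketch, independent of the writer test's code):

```python
import numpy as np

# SqueezeNet's classifier output has 1001 elements; RESHAPE's prepare
# step (reshape.cc:69) requires num_input_elements == num_output_elements.
logits = np.zeros(1001)
try:
    logits.reshape((1,))
    result = "reshape succeeded"
except ValueError:
    result = "num_input_elements != num_output_elements (1001 != 1)"
print(result)
```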
tensorflowtensorflow
TF 2.4 XLA: assertion in ELECTRA model when deferring compilation on GPU
Bug
attached are gzipped versions of: 1. git.patch, 2. dummy_p1.tfrecord, 3. run_electra_bug.sh.

environment: 1. Google TF 2.4 container (tensorflow/tensorflow:2.4.0-gpu), 2. single GV100, 32 GB.

reproduction steps:
1. git clone https://github.com/NVIDIA/DeepLearningExamples.git
2. cd DeepLearningExamples/TensorFlow2/LanguageModeling/ELECTRA
3. copy the attached git.patch file to this directory
4. cp the attached run_electra_bug.sh file to scripts/
5. cp the attached dummy_p1.tfrecord file to data/
6. git apply git.patch
7. bash scripts/docker/build.sh
8. bash scripts/docker/launch.sh
9. TF_XLA_FLAGS=--tf_xla_always_defer_compilation=true bash scripts/run_electra_bug.sh 1.6e-3 2>&1 (run this within the docker container started in step 8)

the resulting error:

```
[0] Traceback (most recent call last):
[0]   File "/workspace/electra/run_pretraining.py", line 493, in <module>
[0]     args, main_start_time
[0]   File "/workspace/electra/run_pretraining.py", line 427, in main
[0]     local_step == 1, take_step=local_step % args.gradient_accumulation_steps == 0
[0]   File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
[0]     result = self._call(*args, **kwds)
[0]   File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 888, in _call
[0]     return self._stateless_fn(*args, **kwds)
[0]   File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 2943, in __call__
[0]     filtered_flat_args, captured_inputs=graph_function.captured_inputs)  # pylint: disable=protected-access
[0]   File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 1919, in _call_flat
[0]     ctx, args, cancellation_manager=cancellation_manager))
[0]   File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 560, in call
[0]     ctx=ctx)
[0]   File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
[0]     inputs, attrs, num_outputs)
[0] tensorflow.python.framework.errors_impl.CancelledError: [Derived] RecvAsync is cancelled.
[0]   [[node cluster_11_1/merge_oidx_69_148]] [Op:__inference_train_one_step_88467]
[0]
[0] Function call stack:
[0] train_one_step
[0]
Process 0 exited with status code 1
Traceback (most recent call last):
  File "/usr/local/bin/horovodrun", line 8, in <module>
    sys.exit(run_commandline())
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/launch.py", line 768, in run_commandline
    _run(args)
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/launch.py", line 758, in _run
    return _run_static(args)
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/launch.py", line 615, in _run_static
    _launch_job(args, settings, nics, command)
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/launch.py", line 731, in _launch_job
    args.verbose)
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/launch.py", line 704, in run_controller
    gloo_run()
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/launch.py", line 720, in gloo_run_fn
    gloo_run(settings, nics, env, driver_ip, command)
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/gloo_run.py", line 284, in gloo_run
    launch_gloo(command, exec_command, settings, nics, env, server_ip)
  File "/usr/local/lib/python3.6/dist-packages/horovod/runner/gloo_run.py", line 271, in launch_gloo
    .format(name=name, code=exit_code))
```
tensorflowtensorflow
kernel dies in fit when using a custom loss on GPU
Bug
please make sure that this is a bug. as per our github policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on github. tag: bug_template.

system information: os platform and distribution: Windows 10; tensorflow version: tf-nightly-gpu 2.5.0 (210112); python version: 3.8; CUDA/cuDNN version: 11.1 / maybe 8; GPU model and memory: RTX 3090, 24 GB. tf.version (tf 2.0): v1.12.1-48890-g670cc3fa48f 2.5.0-dev20210113.

my custom loss function looks like this:

```python
import tensorflow as tf
from keras import backend as K
from tensorflow.keras.losses import Loss

@tf.function
def mase(y_true, y_pred, seasonality=1):
    def naive_forecasting(actual, seasonality=1):
        return actual[:-seasonality]

    def error(actual, predicted):
        return actual - predicted

    def mae(actual, predicted):
        return K.mean(K.abs(error(actual, predicted)))

    K.print_tensor(y_true, message='\ny_true')
    K.print_tensor(y_pred, message='\ny_pred')
    K.print_tensor(mae(y_true, y_pred) / mae(y_true[seasonality:], naive_forecasting(y_true, seasonality)),
                   message='\nminus')
    print(y_true, y_pred)
    return mae(y_true, y_pred) / mae(y_true[seasonality:], naive_forecasting(y_true, seasonality))
```

and it is used like this:

```python
model.compile(loss=mase, optimizer=Adam(lr=0.001))
model.fit(x_concat_data_train, y_concat_data_train,
          batch_size=batch_size, epochs=epochs, verbose=2, shuffle=True,
          callbacks=[early_stop])
model.evaluate(x_concat_data_validation, y_concat_data_validation, batch_size=batch_size)
```

I use an RTX 3090 and want to train on the GPU. CPU training works fine, but when I use the GPU the Python kernel dies. the kernel output looks like this:

```
2021-01-14 09:33:55.296929: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:127] None of the MLIR optimization passes are enabled (registered 2)
Epoch 1/1000
```

what is my problem?
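for reference, the quantity the loss computes is the standard MASE ratio: the model's MAE scaled by the MAE of a seasonal naive forecast. a NumPy sketch of the same math, independent of TF/Keras (names here are illustrative):

```python
import numpy as np

def mase(y_true, y_pred, seasonality=1):
    # Mean absolute error of the model's forecast...
    mae_model = np.mean(np.abs(y_true - y_pred))
    # ...scaled by the MAE of the seasonal naive forecast, which simply
    # predicts the value observed one season earlier.
    naive = y_true[:-seasonality]
    mae_naive = np.mean(np.abs(y_true[seasonality:] - naive))
    return mae_model / mae_naive

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.5, 2.5, 3.5, 4.5])
print(mase(y_true, y_pred))  # 0.5: model MAE 0.5 vs naive MAE 1.0
```

values below 1 mean the model beats the naive forecast; this also shows the loss itself is plain elementwise arithmetic, so the crash is unlikely to be in the math.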
tensorflowtensorflow
MultiWorkerMirroredStrategy documentation shows old communication=tf.distribute.experimental.CollectiveCommunication.NCCL
Bug
url(s) with the issue: description of issue (what needs changing): the page says "to override the automatic choice, specify a valid value to the communication parameter of MultiWorkerMirroredStrategy's constructor, e.g. communication=tf.distribute.experimental.CollectiveCommunication.NCCL". however, MultiWorkerMirroredStrategy's constructor has changed to expect a communication_options parameter instead. this page should describe how to actually change the distribution implementation with current TF 2.4. note that the deprecated tf.distribute.experimental.MultiWorkerMirroredStrategy still has that parameter, but TF warns about usage of this class, and the tutorial page doesn't mention that old class.
tensorflowtensorflow
kernel silently dies while generating an image
Bug
system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): no; os platform and distribution: Windows 10 Pro; mobile device: no; tensorflow installed from (source or binary): binary; tensorflow version: 2.5.0-dev20210111; python version: 3.8.6; bazel version (if compiling from source): n/a; gcc compiler version (if compiling from source): n/a; CUDA/cuDNN version: 11.1 / 8.0.5; GPU model and memory: RTX 3070, 8 GB.

describe the current behavior: the kernel silently dies when it tries to generate the images made via the generator.

describe the expected behavior: it should generate the images and continue with the code.

standalone code to reproduce the issue: use the example here on the tensorflow site for GANs. while executing it, up to the point where we generate the image via the untrained generator, the notebook just stops; looking at the logs shows that the kernel then gets restarted.

other info / logs:

```
[I 2021-01-12 16:02:30.618 ServerApp] Kernel started: 0227f971-d1fe-44a3-8db7-a948217de0bb
2021-01-12 16:02:51.057848: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-01-12 16:02:54.205477: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2021-01-12 16:02:54.226746: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1760] Found device 0 with properties: pciBusID: 0000:0a:00.0 name: GeForce RTX 3070 computeCapability: 8.6 coreClock: 1.77GHz coreCount: 46 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2021-01-12 16:02:54.226918: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-01-12 16:02:54.258395: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-01-12 16:02:54.258550: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-01-12 16:02:54.279408: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2021-01-12 16:02:54.284138: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2021-01-12 16:02:54.345037: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2021-01-12 16:02:54.348822: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2021-01-12 16:02:54.350356: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2021-01-12 16:02:54.350512: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1902] Adding visible gpu devices: 0
2021-01-12 16:02:57.436724: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-01-12 16:02:57.437783: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1760] Found device 0 with properties: pciBusID: 0000:0a:00.0 name: GeForce RTX 3070 computeCapability: 8.6 coreClock: 1.77GHz coreCount: 46 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2021-01-12 16:02:57.438036: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1902] Adding visible gpu devices: 0
2021-01-12 16:02:57.821516: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1300] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-01-12 16:02:57.821707: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306]      0
2021-01-12 16:02:57.821818: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1319] 0:   N
2021-01-12 16:02:57.822077: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1446] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5474 MB memory) -> physical GPU (device: 0, name: GeForce RTX 3070, pci bus id: 0000:0a:00.0, compute capability: 8.6)
2021-01-12 16:03:00.249533: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-01-12 16:03:00.804621: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-01-12 16:03:00.805722: I tensorflow/stream_executor/cuda/cuda_blas.cc:1838] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
2021-01-12 16:03:00.810208: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2021-01-12 16:03:01.400017: I tensorflow/stream_executor/cuda/cuda_dnn.cc:334] Loaded cuDNN version 8005
```

the kernel then crashes after this, and jupyter restarts it:

```
[I 2021-01-12 16:03:39.603 ServerApp] KernelRestarter: restarting kernel (1/5), keep random ports
kernel 0227f971-d1fe-44a3-8db7-a948217de0bb restarted
kernel 0227f971-d1fe-44a3-8db7-a948217de0bb restarted
[I 2021-01-12 16:03:39.626 ServerApp] Starting buffering for 0227f971-d1fe-44a3-8db7-a948217de0bb:c751d222-0788-4d6b-8fb6-b564f4dd14f5
```
tensorflowtensorflow
layering_check mismatch between internal and open-source TFLM bazel builds
Bug
tensorflow micro: the Google-internal bazel build has layering_check turned on, while the open-source build does not. this results in PRs passing the external checks but then failing internally (see as an example). if we can have the same behavior in the OSS bazel build, there will be one less discrepancy between the internal and open-source builds.
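one conceivable way to match the internal behavior is bazel's layering_check C++ feature; a sketch, assuming a clang-based toolchain that supports the feature (this line is a guess at a fix, not something from the issue):

```
# .bazelrc (hypothetical): enable the same strict header/layering
# checking that the internal build uses. Requires a clang toolchain.
build --features=layering_check
```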
tensorflowtensorflow
error when compiling a fully-integer quantized model for the EdgeTPU
Bug
system information: os platform and distribution: Ubuntu 16.04; tensorflow installed from (source or binary): binary; tensorflow version (or github SHA if from source): tf-nightly.

command used to run the converter, or code if you're using the Python API:

```python
import cv2
import tensorflow as tf
import numpy as np

def representative_dataset_gen():
    for path in raw_image_paths:
        image = cv2.imread(path)
        image = cv2.resize(image, (128, 256))
        image = image[:, :, ::-1]
        image = image / 255.
        image[:, :, 0] = (image[:, :, 0] - 0.485) / 0.229
        image[:, :, 1] = (image[:, :, 1] - 0.456) / 0.224
        image[:, :, 2] = (image[:, :, 2] - 0.406) / 0.225
        image = tf.expand_dims(tf.convert_to_tensor(image, dtype=tf.float32), axis=0)
        yield [image]

with open('select_file.txt') as f:
    raw_image_paths = f.read().split('\n')[:-1]

# full integer quantization, input/output float32
converter = tf.lite.TFLiteConverter.from_saved_model('/home/parth/internship/clutterbot/trial_4/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.allow_custom_ops = True
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()
```

the output from the converter invocation: no issues here.

also please include a link to the saved model or graphdef: the zip file contains the saved model as well as the fully-integer-quantized tflite model: osnet_x0_25_msmt17_batch_1.zip

failure details: the conversion is successful; the issue arises when trying to compile the tflite model for the EdgeTPU:

```
Edge TPU Compiler version 15.0.340273435
loc("model/depthwise_conv2d_3/depthwise"): error: 'invalid argument': quantized tensor must have non-zero scale
Error: could not translate function: quantized tensor must have non-zero scale
Internal compiler error. Aborting!
```

any other info / logs: I have also mentioned the error in the issue here.
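for context on the compiler error: int8 post-training quantization maps floats through q = round(x / scale) + zero_point, so a tensor whose calibrated range collapses gets scale == 0 and the mapping is undefined, which is what "quantized tensor must have non-zero scale" guards against. a NumPy sketch of that affine mapping (illustrative only, not the converter's or compiler's code):

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Affine int8 quantization; undefined when scale == 0, which is the
    # condition the EdgeTPU compiler rejects.
    if scale == 0:
        raise ValueError("quantized tensor must have non-zero scale")
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
print(quantize(x, scale=1.0 / 127, zero_point=0))
try:
    quantize(x, scale=0.0, zero_point=0)
except ValueError as e:
    print("error:", e)
```

a zero scale typically points at a weight or activation tensor whose calibration produced a degenerate (all-equal) value range.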
tensorflowtensorflow
micro: port op ELU from lite
Bug
tensorflow micro: this issue tracks my work porting operator ELU from lite to micro. the port will be submitted in a number of PRs. here's a rough flight plan (per @advaitjain and @petewarden):
PR 1: extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite() in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver.
PR 2: extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header, which can be included without dragging in reference_ops.h's dependencies.
PR 3: copy the operator from lite to micro, making minimal changes and not including it in the build.
PR 4: delete extra code from the micro copy of the operator.
PR 5: port the micro copy of the operator as necessary and add a corresponding test.
PR 6: extract common activation code into activations.cc and activation_utils.h files; extract common test code into an activations_test_utils.h file.
tensorflowtensorflow
in different TF2 versions, the weight naming rules of created keras models are different
Bug
system information: os platform and distribution: Linux Ubuntu 18.04; tensorflow installed from (source or binary): binary (pip install tensorflow-cpu); tensorflow version: 2.1 / 2.2 / 2.3 / 2.4; python version: 3.6 / 3.8.

describe the current behavior: I create a keras model using the same code but get different results (different weight names) in different TF versions. this prevents me from loading network weights based on variable names across TF versions.

describe the expected behavior: I can get the same result in different TF versions.

standalone code to reproduce the issue: I create a keras model:

```python
from tensorflow.keras import layers, Model, Sequential


class ConvBNReLU(layers.Layer):
    def __init__(self, out_channel, kernel_size=3, strides=1, **kwargs):
        super(ConvBNReLU, self).__init__(**kwargs)
        layer_list = [layers.Conv2D(filters=out_channel,
                                    kernel_size=kernel_size,
                                    strides=strides,
                                    padding='SAME',
                                    use_bias=False,
                                    name='conv2d'),
                      layers.BatchNormalization(momentum=0.9, epsilon=1e-5, name='batchnorm'),
                      layers.ReLU(max_value=6.0)]
        self.combine_layer = Sequential(layer_list, name='combine')

    def call(self, inputs, training=False, **kwargs):
        x = self.combine_layer(inputs, training=training)
        return x


def main():
    input_image = layers.Input(shape=(224, 224, 3), dtype='float32')
    # conv1
    x = ConvBNReLU(32, strides=2)(input_image)
    output = ConvBNReLU(64, strides=2)(x)
    model = Model(inputs=input_image, outputs=output)
    for i in model.weights:
        print(i.name)


if __name__ == '__main__':
    main()
```

in TF 2.0, 2.1 and 2.2, the printed weight names are as follows:

```
conv_bn_re_lu/combine/conv2d/kernel:0
conv_bn_re_lu/combine/batchnorm/gamma:0
conv_bn_re_lu/combine/batchnorm/beta:0
conv_bn_re_lu/combine/batchnorm/moving_mean:0
conv_bn_re_lu/combine/batchnorm/moving_variance:0
conv_bn_re_lu_1/combine/conv2d/kernel:0
conv_bn_re_lu_1/combine/batchnorm/gamma:0
conv_bn_re_lu_1/combine/batchnorm/beta:0
conv_bn_re_lu_1/combine/batchnorm/moving_mean:0
conv_bn_re_lu_1/combine/batchnorm/moving_variance:0
```

but in TF 2.3 and 2.4 I get a different result:

```
conv2d/kernel:0
batchnorm/gamma:0
batchnorm/beta:0
batchnorm/moving_mean:0
batchnorm/moving_variance:0
conv2d/kernel:0
batchnorm/gamma:0
batchnorm/beta:0
batchnorm/moving_mean:0
batchnorm/moving_variance:0
```
tensorflowtensorflow
gtest headers result in conflicting clang-format requirements
Bug
tensorflow micro: while porting ops from lite to micro, one of the intermediate steps results in test code copied over from lite that has gtest headers. while this code is not (and cannot be) compiled for TFLM, it still trips up the format check, as described in discussion r553568396. deleting these includes (the workaround in discussion r553568396) works just fine, and it would be nice to give pull request authors this feedback directly via the TF micro CI, instead of waiting for the change to be imported internally before the error is detected. the overarching goal is to get to a place where, if a pull request passes the external CI, it also passes the internal CI, unless the code that a PR is breaking is internal-only.
tensorflowtensorflow
cannot create tf.constant inside tf.function from an integer tensor
Bug
please make sure that this is a bug. as per our github policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on github. tag: bug_template.

system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes; os platform and distribution: Linux Ubuntu 18.04; mobile device: n/a; tensorflow installed from (source or binary): binary (Google Colab); tensorflow version: v2.4.0-0-g582c8d236cb 2.4.0; python version: Python 3.6.9; bazel version: n/a; gcc compiler version: n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a.

describe the current behavior: it is impossible to create a tf.constant inside a function wrapped by tf.function if the argument to tf.constant is an integer tensor.

describe the expected behavior: it is expected that such an operation does not raise an error, for example in the case of slightly more advanced postprocessing. unless this behaviour is desired, in which case this issue can be closed; I would, however, greatly appreciate an explanation.

standalone code to reproduce the issue: the following snippet works with eager execution:

```python
def function():
    a = int(tf.random.normal(shape=()))
    tf.print(a)
    constant = tf.constant(a)
    tf.print(constant)

function()  # will output: 1 1
```

however, after wrapping in tf.function, an error is raised:

```python
wrapped = tf.function(function)
wrapped()  # raises
```

```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    975     except Exception as e:  # pylint: disable=broad-except
    976       if hasattr(e, "ag_error_metadata"):
--> 977         raise e.ag_error_metadata.to_exception(e)
    978       else:
    979         raise

TypeError: in user code:

    <ipython-input>:5 function
        constant = tf.constant(a)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:265 constant
        allow_broadcast=True)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:283 _constant_impl
        allow_broadcast=allow_broadcast)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py:457 make_tensor_proto
        _AssertCompatible(values, dtype)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py:334 _AssertCompatible
        raise TypeError("Expected any non-tensor type, got a tensor instead.")

    TypeError: Expected any non-tensor type, got a tensor instead.
```

other info / logs: n/a.
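for illustration of what the traceback's final check is doing: tf.constant embeds a concrete value into the graph, so during tracing a symbolic tensor is rejected. below is a pure-Python mimic of that check (the SymbolicTensor class and constant function here are hypothetical stand-ins, not TF's real implementation):

```python
class SymbolicTensor:
    """Hypothetical stand-in for a graph-mode tf.Tensor (not the real class)."""

def constant(value):
    # Mimics the shape of the _AssertCompatible check the traceback ends
    # in: concrete (eager) values are accepted, but a symbolic tensor
    # produced during tracing cannot be embedded as a constant.
    if isinstance(value, SymbolicTensor):
        raise TypeError("Expected any non-tensor type, got a tensor instead.")
    return value

print(constant(1))        # plain Python values are fine (eager behaviour)
try:
    constant(SymbolicTensor())
except TypeError as e:
    print("raised:", e)
```

inside tf.function, converting an existing tensor typically calls for tf.identity or tf.cast rather than tf.constant.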
tensorflowtensorflow
tf.io.gfile.GFile does not raise an error when given a directory
Bug
python's open() does not accept directories; the following results in an IsADirectoryError:

```python
with open(path_to_directory, 'r') as f:
    print(f.readlines())
```

however, if you use tf.io.gfile.GFile instead, you will get an empty list and no error:

```python
with tf.io.gfile.GFile(path_to_directory, 'r') as f:
    print(f.readlines())
```

not sure if this is a feature or a bug, but it is not the behavior I would expect, given that the documentation says that tf.io.gfile is meant to provide an API close to python's file I/O objects.
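for reference, a minimal self-contained reproduction of the built-in behavior GFile deviates from (on POSIX systems open() raises IsADirectoryError; Windows raises PermissionError instead, so both are caught here):

```python
import tempfile

# Create a fresh directory and try to read it like a file; Python's
# built-in open() refuses, which is the behavior GFile deviates from.
d = tempfile.mkdtemp()
try:
    with open(d, "r") as f:
        f.readlines()
    outcome = "no error"
except (IsADirectoryError, PermissionError):
    outcome = "error raised"
print(outcome)
```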
tensorflowtensorflow
different TFLM builds use the same output directory
Bug
tensorflow micro: in discussion r553049656, I suggested that the linker was not correctly dropping unused symbols. in fact, what is very likely happening is that I did not do a make clean between switching to BUILD_TYPE=release, and since the TFLM makefile currently uses the same directory for all build types, only the modified files were being rebuilt with the small release build.

we can reproduce this with the following sequence of commands. first, check what the binary size is for the release build:

```
make -f tensorflow/lite/micro/tools/make/Makefile clean
make -f tensorflow/lite/micro/tools/make/Makefile -j8 TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=hifimini XTENSA_CORE=mini1m1_m_rg BUILD_TYPE=release keyword_benchmark

xt-size tensorflow/lite/micro/tools/make/gen/xtensa_hifimini/bin/keyword_benchmark
   text    data     bss     dec     hex filename
  46080   40204   24952  111236   1b284 tensorflow/lite/micro/tools/make/gen/xtensa_hifimini/bin/keyword_benchmark
```

next, have some intermediate non-release objects and then do a release build:

```
make -f tensorflow/lite/micro/tools/make/Makefile clean

# build non-release
make -f tensorflow/lite/micro/tools/make/Makefile -j8 TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=hifimini XTENSA_CORE=mini1m1_m_rg keyword_benchmark

touch tensorflow/lite/micro/kernels/xtensa/fully_connected.cc

# build for release
make -f tensorflow/lite/micro/tools/make/Makefile -j8 TARGET=xtensa OPTIMIZED_KERNEL_DIR=xtensa TARGET_ARCH=hifimini XTENSA_CORE=mini1m1_m_rg BUILD_TYPE=release keyword_benchmark

xt-size tensorflow/lite/micro/tools/make/gen/xtensa_hifimini/bin/keyword_benchmark
   text    data     bss     dec     hex filename
  54736   48168   25032  127936   1f3c0 tensorflow/lite/micro/tools/make/gen/xtensa_hifimini/bin/keyword_benchmark
```

what we really should be doing is changing the output directory based on the build type.
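one possible shape for the fix, sketched as a Makefile fragment (the variable names here are assumptions for illustration, not the TFLM Makefile's actual contents): suffix the generated-output tree with the build type, so objects from different build types can never be mixed.

```make
# Hypothetical sketch: incorporate BUILD_TYPE into the output tree so
# that switching build types cannot silently reuse stale objects.
BUILD_TYPE ?= default
GENDIR := $(MAKEFILE_DIR)/gen/$(TARGET)_$(TARGET_ARCH)_$(BUILD_TYPE)
OBJDIR := $(GENDIR)/obj
BINDIR := $(GENDIR)/bin
```

with this layout, a release build writes to gen/xtensa_hifimini_release/ and can no longer pick up objects left over from a default build.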
tensorflowtensorflow
XLA: dense_layer_test.py throws internal error in fallback path
Bug
the failure happens in master as well as r2.4 (these are the 2 branches that I have tested). even with lazy compilation turned on via TF_XLA_FLAGS=--tf_xla_enable_lazy_compilation=true, the first execution always compiles, as per the current implementation. if we tweak this behaviour such that the first execution doesn't compile and uses the fallback path, or use --tf_xla_always_defer_compilation=true to force the fallback path, the test tensorflow/compiler/tests/dense_layer_test.py fails with the following signature:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: [function node cluster_3] [function node cluster_3]
Trying to assign variable with wrong dtype. Expected <invalid> got float
[[node dense/kernel/Assign]] [[cluster_3_1/partitioned_call]]
```

there are 3 test points in the test. based on the description, they test that the dense layer nodes are properly compiled in jit scope. I am not sure if this test is supposed to be used for the fallback path; however, the failure is not merely a test failure but an internal error ("Trying to assign variable with wrong dtype. Expected <invalid> got float"), which leads me to think there might be a bug. the error comes from the handling of resource variables (lines 397-401).
tensorflowtensorflow
TypeError when using label smoothing with categorical cross-entropy
Bug
please make sure that this is a bug. as per our github policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on github. tag: bug_template.

system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes; os platform and distribution: Colab; tensorflow installed from (source or binary): binary; tensorflow version: 2.3.1; python version: 3.7; CUDA/cuDNN version: Colab default; GPU model and memory: V100-SXM2.

describe the current behavior: I am using label smoothing with categorical cross-entropy. when I turn off label smoothing, the model works fine, but when I turn it on, the model gives the following error:

```
TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type float16 of argument 'x'.
```

I am not able to understand what the problem with my code is. here is my code. I have tried setting the dtype to float16 in the image data generator, but that also did not work. can somebody help me? I have not gotten any answer on Stack Overflow either.
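for context, label smoothing is just a mixing of the one-hot targets with a uniform distribution, and the Mul in the error is the multiply of the labels by the (1 - smoothing) factor. a NumPy sketch of the computation, and of why the factor's dtype has to match the labels' dtype (standard smoothing formula; the function name is illustrative, not Keras's internals):

```python
import numpy as np

def smooth_labels(y_true, smoothing=0.1):
    # Standard label smoothing: y * (1 - s) + s / num_classes.
    # The multiply is the 'Mul' op from the error message: if y_true is
    # float16 while the (1 - s) factor stays float32, TF refuses to mix
    # the dtypes. Casting the factor to the labels' dtype avoids that.
    y_true = np.asarray(y_true)
    num_classes = y_true.shape[-1]
    factor = np.asarray(1.0 - smoothing, dtype=y_true.dtype)
    offset = np.asarray(smoothing / num_classes, dtype=y_true.dtype)
    return y_true * factor + offset

y = np.array([[0.0, 1.0, 0.0, 0.0]], dtype=np.float16)
smoothed = smooth_labels(y, smoothing=0.1)
print(smoothed, smoothed.dtype)
```

the mixed-precision symptom in the error suggests the labels and the loss internals disagree on float16 vs float32 somewhere in the pipeline.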
tensorflowtensorflow
many errors in the example of tf.feature_column.categorical_column_with_vocabulary_file
Bug
url(s) with the issue: please provide a link to the documentation entry, for example. description of issue (what needs changing): in the sentence "use either (but not both) of num_oov_buckets and default_value to specify how to include out-of-vocabulary values", since "either" is used, "or" should be used instead of "and". when running the example code, it results in the errors mentioned below:
1. NameError: name 'categorical_column_with_vocabulary_file' is not defined
2. NameError: name 'linear_model' is not defined
3. ValueError: All feature_columns must be FeatureColumn instances. Given: Ellipsis
4. NameError: name 'input_layer' is not defined
the code in the documentation should be modified to fix all the above errors. please find the github gist demonstrating the errors.
tensorflowtensorflow
many errors in the example of tf.feature_column.categorical_column_with_identity
Bug
url(s) with the issue: please provide a link to the documentation entry, for example: linear_model. description of issue (what needs changing): when running the example code, it results in the errors mentioned below:
1. NameError: name 'categorical_column_with_identity' is not defined
2. NameError: name 'linear_model' is not defined
3. ValueError: All feature_columns must be FeatureColumn instances. Given: Ellipsis
4. NameError: name 'input_layer' is not defined
the code in the documentation should be modified to fix all the above errors. please find the github gist demonstrating the errors.
tensorflowtensorflow
many errors in the example of tf.feature_column.categorical_column_with_hash_bucket
Bug
url(s) with the issue: please provide a link to the documentation entry, for example. description of issue (what needs changing): when running the example code, it results in the errors mentioned below:
1. File "<string>", line 3: keyword = categorical_column_with_hash_bucket("keyword", 10K) -> SyntaxError: invalid syntax
2. NameError: name 'categorical_column_with_hash_bucket' is not defined
3. NameError: name 'linear_model' is not defined
4. NameError: name 'input_layer' is not defined
5. ValueError: All feature_columns must be FeatureColumn instances. Given: Ellipsis
the code in the documentation should be modified to fix all the above errors. please find the github gist demonstrating the errors.
tensorflowtensorflow
errors in example because of incomplete API names for tf.keras.experimental.SequenceFeatures
Bug
url(s) with the issue: please provide a link to the documentation entry, for example. description of issue (what needs changing): the APIs sequence_numeric_column, sequence_categorical_column_with_identity, embedding_column, etc. are incomplete; consequently, the example results in the error NameError: name 'sequence_numeric_column' is not defined. it should be tf.feature_column.sequence_numeric_column, tf.feature_column.sequence_categorical_column_with_identity, tf.feature_column.embedding_column instead. please find the github gist. (there is an error in the above gist because of Ellipsis; it is being tracked in #46128.)
tensorflowtensorflow
running a single test with renode is broken
Bug
tensorflow micro: while the following command passes:

```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=bluepill test
```

running a single test with renode, for example:

```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=bluepill test_kernel_add_test
```

fails with:

```
tensorflow/lite/micro/testing/test_with_renode.sh tensorflow/lite/micro/tools/make/gen/bluepill_cortex-m3/bin/kernel_add_test '~~~ALL TESTS PASSED~~~'
tensorflow/lite/micro/testing/test_with_renode.sh: line 69: ${ROBOT_SCRIPT}: ambiguous redirect
make: *** [tensorflow/lite/micro/tools/make/Makefile:663: test_kernel_add_test] Error 1
```

the reason is that the referenced change is incompatible with how the Makefile calls the test script when running an individual test (as opposed to make test).
tensorflow/tensorflow
micro: port op ADD_N from lite
Bug
tensorflow/micro: This issue tracks my work porting operator ADD_N from lite to micro. The port will be submitted in a number of PRs. Here's a rough flight plan, per @advaitjain and @petewarden:
PR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite() in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver.
PR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header, which can be included without dragging in reference_ops.h's dependencies.
PR 3: Copy the operator from lite to micro, making minimal changes and not including it in the build.
PR 4: Delete extra code from the micro copy of the operator.
PR 5: Port the micro copy of the operator as necessary and add a corresponding test.
tensorflow/tensorflow
micro: port op LEAKY_RELU from lite
Bug
tensorflow/micro: This issue tracks my work porting operator LEAKY_RELU from lite to micro. The port will be submitted in a number of PRs. Here's a rough flight plan, per @advaitjain and @petewarden:
PR 1: Extract the code for parsing the op from a flatbuffer out of ParseOpDataTfLite() in tensorflow/lite/core/api/flatbuffer_conversions.cc into a standalone function that can be called from micro's op resolver.
PR 2: Extract the reference implementation out of tensorflow/lite/kernels/internal/reference/reference_ops.h into its own header, which can be included without dragging in reference_ops.h's dependencies.
PR 3: Copy the operator from lite to micro, making minimal changes and not including it in the build.
PR 4: Delete extra code from the micro copy of the operator.
PR 5: Port the micro copy of the operator as necessary and add a corresponding test.
tensorflow/tensorflow
micro: in CONTRIBUTING.md, the path to a test script is wrong
Bug
tensorflow/micro: TensorFlow version (commit SHA if source): 556fa126. In micro's CONTRIBUTING.md, the path to a script to run tests prior to submitting a PR uses what is now, at least, an invalid path (L204-L208). The fix is trivial and on its way.
tensorflow/tensorflow
pandas pct_change function results in NaN loss when training
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.4.0-rc4-71-g582c8d236cb 2.4.0
- Python version: Python 3.7.6
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: running on CPU
- GPU model and memory:

Describe the current behavior: When training a model with data from a pandas DataFrame, the model trains fine unless I use the `df.pct_change()` pandas function on it. If I do so, the loss is always NaN. I specifically made sure to remove any NaN or inf values from the dataset, but the problem persists. I don't know if I'm just missing something obvious and the issue is on my end. I created a simplified Jupyter notebook to demonstrate the issue.

Describe the expected behavior: The loss should be a real number in the last cell of the notebook, after I remove all NaN and inf values, but it is still NaN.

Standalone code to reproduce the issue: Here is the notebook.

Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
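As background on the failure mode described above, a minimal pandas sketch (toy data, not the reporter's dataset) of how `pct_change` introduces NaN and inf values that must be stripped before training:

```python
import numpy as np
import pandas as pd

# Toy series: pct_change always yields NaN in the first row, and a zero
# in the previous row yields inf -- either one poisons a training loss.
prices = pd.Series([0.0, 1.0, 2.0, 4.0])
returns = prices.pct_change()
# The cleanup step the report describes: map +/-inf to NaN, then drop NaN.
clean = returns.replace([np.inf, -np.inf], np.nan).dropna()

print(clean.tolist())
```

Only the two finite returns survive the cleanup, so if the loss is still NaN after this step, the remaining bad values must enter the pipeline somewhere else.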
tensorflow/tensorflow
TensorFlow and TensorFlow Lite models produce different results
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): simple test script included
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colaboratory
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below): v2.4.0-0-g582c8d236cb 2.4.0
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
- Exact command to reproduce: test script attached

Describe the problem: I have a TensorFlow model (attached as a zip file) which was converted to TensorFlow Lite using the attached script. Both models were evaluated using a single test exemplar.
Expected: the two models should produce similar, if not identical, predictions.
Actual: the two models produce completely different predictions. The TFLite model produces the same wrong prediction when run on a mobile device, but the test can be entirely reproduced in Colaboratory using the attached files (model.zip, tflite_model_test.ipynb.txt).

Source code / logs: attached model.zip and tflite_model_test.ipynb.
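When triaging reports like this one, a tolerance check separates small conversion drift from a genuinely different prediction; a minimal numpy sketch with made-up outputs (not the attached model):

```python
import numpy as np

# Hypothetical softmax outputs from the two runtimes.
tf_pred = np.array([0.10, 0.70, 0.20])
tflite_pred = np.array([0.1001, 0.6999, 0.2000])

# Small quantization/conversion drift passes a loose tolerance...
drift_ok = np.allclose(tf_pred, tflite_pred, atol=1e-3)

# ...while a completely different prediction, as reported here, does not.
wrong_pred = np.array([0.80, 0.10, 0.10])
mismatch = np.allclose(tf_pred, wrong_pred, atol=1e-3)
```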
tensorflow/tensorflow
This throws an error: features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 20.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): latest, from this week
- Python version: 3.8.x
- Bazel version (if compiling from source): 3.1.0
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: 11.2
- GPU model and memory:

Describe the current behavior: I was trying this code, but it throws the exception:

```
return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Attempt to convert a value (Ellipsis) with an unsupported type to a Tensor.
```

Here is the full code from that page:

```python
# Behavior of some cells or feature columns may depend on whether we are in
# training or inference mode, e.g. applying dropout.
training = True
rating = sequence_numeric_column('rating')
watches = sequence_categorical_column_with_identity('watches', num_buckets=1000)
watches_embedding = embedding_column(watches, dimension=10)
columns = [rating, watches_embedding]

sequence_input_layer = SequenceFeatures(columns)
features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
sequence_input, sequence_length = sequence_input_layer(features, training=training)
sequence_length_mask = tf.sequence_mask(sequence_length)
rnn_cell = tf.keras.layers.SimpleRNNCell(hidden_size, training=training)
rnn_layer = tf.keras.layers.RNN(rnn_cell, training=training)
outputs, state = rnn_layer(sequence_input, mask=sequence_length_mask)
```
tensorflow/tensorflow
Wrong output dimension calculation of StridedSlice operation on TensorFlow Lite
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): CentOS 8
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): both pip and source build
- TensorFlow version (use command below): v2.4.0-rc4-71-g582c8d236cb 2.4.0
- Python version: 3.6.8
- Bazel version (if compiling from source): 3.7.1
- GCC/Compiler version (if compiling from source): 8.3.1
- CUDA/cuDNN version: CUDA 11.1, cuDNN 8
- GPU model and memory: RTX3090 24 GB

Describe the current behavior: The StridedSlice operation on TensorFlow Lite calculates output dimensions incorrectly. I'm developing some custom operations for both TensorFlow and TensorFlow Lite. While I was debugging my custom operation (inference fails only on TensorFlow Lite), I found that the StridedSlice operation of TensorFlow Lite calculates output dimensions incorrectly. I added some printf calls to tflite's op builtin strided_slice Eval (L185):

```c
TfLiteStatus Eval(TfLiteContext* context, TfLiteNode* node) {
  StridedSliceContext op_context(context, node);

  printf("stride_slice input ");
  for (int i = 0; i < NumDimensions(op_context.input); i++)
    printf("%d ", SizeOfDimension(op_context.input, i));
  printf("\n");

  printf("stride_slice begin ");
  for (int i = 0; i < SizeOfDimension(op_context.begin, 0); i++)
    printf("%d ", GetTensorData<int32_t>(op_context.begin)[i]);
  printf("\n");

  printf("stride_slice end ");
  for (int i = 0; i < SizeOfDimension(op_context.end, 0); i++)
    printf("%d ", GetTensorData<int32_t>(op_context.end)[i]);
  printf("\n");

  printf("stride_slice stride ");
  for (int i = 0; i < SizeOfDimension(op_context.strides, 0); i++)
    printf("%d ", GetTensorData<int32_t>(op_context.strides)[i]);
  printf("\n");

  if (IsDynamicTensor(op_context.output)) {
    TF_LITE_ENSURE_OK(context, ResizeOutputTensor(context, &op_context));
  }
  StridedSliceParams op_params = BuildStridedSliceParams(&op_context);

  printf("stride_slice output ");
  for (int i = 0; i < NumDimensions(op_context.output); i++)
    printf("%d ", SizeOfDimension(op_context.output, i));
  printf("\n");
```

and I get the following log:

```
stride_slice input 1656 8 32
stride_slice begin 0 0 0
stride_slice end 893
stride_slice stride 1 1 1
stride_slice output 893 8 0
```

Describe the expected behavior: After strided slicing, the output dimensions should be [893, 8, 32] in the above example.

Standalone code to reproduce the issue: I wrote a small reproducible code. The SimpleLayer Keras layer creates a read-only tensor with a size of [128, 8, 32]. After ten model call()s, I save the model into a tflite model file. On TensorFlow Python (no code modification, installed by pip: tensorflow-2.4.0-cp36-cp36m-manylinux2010_x86_64.whl), it calculates output dimensions correctly:

```
Test: input [128 8 32] slice_to 53 output [53 8 32]
Test: input [128 8 32] slice_to 38 output [38 8 32]
Test: input [128 8 32] slice_to 64 output [64 8 32]
Test: input [128 8 32] slice_to 100 output [100 8 32]
Test: input [128 8 32] slice_to 106 output [106 8 32]
Test: input [128 8 32] slice_to 90 output [90 8 32]
Test: input [128 8 32] slice_to 126 output [126 8 32]
Test: input [128 8 32] slice_to 122 output [122 8 32]
Test: input [128 8 32] slice_to 62 output [62 8 32]
Test: input [128 8 32] slice_to 99 output [99 8 32]
```

I checked the stored model with Netron (image) and it shows that StridedSlice has an input tensor of size [128, 8, 32] and the output tensor has size [unknown, 8, 32].

To test on TensorFlow Lite, I wrote simple inference code with the TensorFlow Lite C++ API. It randomly selects the input tensor slice index and prints the output dimensions. I used the source code at 582c8d23 (v2.4.0) with the above StridedSlice printf modification. I built libtensorflowlite.so with the following command:

```bash
bazel build --verbose_failures -c opt \
  --define=no_aws_support=true --define=no_gcp_support=true \
  --define=no_hdfs_support=true --define=no_nccl_support=true \
  --define=build_with_mkl=false --config=monolithic \
  //tensorflow/lite:libtensorflowlite.so
```

and the contents of .tf_configure.bazelrc were:

```
build --action_env PYTHON_BIN_PATH=/home/kukdh1/virtualenvs/tf_develop/bin/python3
build --action_env PYTHON_LIB_PATH=/home/kukdh1/virtualenvs/tf_develop/lib/python3.6/site-packages
build --python_path=/home/kukdh1/virtualenvs/tf_develop/bin/python3
build --config=xla
build --action_env CUDA_TOOLKIT_PATH=/usr/local/cuda-11.1
build --action_env TF_CUDA_COMPUTE_CAPABILITIES=7.5,8.6
build --action_env LD_LIBRARY_PATH=/usr/local/cuda-11.1/lib64
build --action_env GCC_HOST_COMPILER_PATH=/usr/bin/gcc
build --config=cuda
build:opt --copt=-march=native
build:opt --copt=-Wno-sign-compare
build:opt --host_copt=-march=native
build:opt --define with_default_optimizations=true
test --flaky_test_attempts=3
test --test_size_filters=small,medium
test --test_env=LD_LIBRARY_PATH
test:v1 --test_tag_filters=-benchmark-test,-no_oss,-no_gpu,-oss_serial
test:v1 --build_tag_filters=-benchmark-test,-no_oss,-no_gpu
test:v2 --test_tag_filters=-benchmark-test,-no_oss,-no_gpu,-oss_serial,-v1only
test:v2 --build_tag_filters=-benchmark-test,-no_oss,-no_gpu,-v1only
build --action_env TF_CONFIGURE_IOS=0
```

I built the test program with `mkdir build && cmake -DTF_SOURCE_DIR=... CMakeLists.txt`. When I run the test program, I get the following result:

```
CUDA_VISIBLE_DEVICES=-1 ./tflite_strided_slice stride_slice.tflite
2020-12-31 19:47:18.088431: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
stride_slice input 128 8 32
stride_slice begin 0 0 0
stride_slice end 103
stride_slice stride 1 1 1
stride_slice output 103 0 32
Input 103, output [103 0 32]
--- Print interpreter state begin ---
Interpreter has 6 tensors and 2 nodes
Inputs: 0
Outputs: 5
Tensor 0 input kTfLiteInt32 kTfLiteCustom 4 bytes (0.0 MB)
Tensor 1 simple_layer/ReadVariableOp/resource kTfLiteFloat32 kTfLiteMmapRo 131072 bytes (0.1 MB) 128 8 32
Tensor 2 simple_layer/strided_slice kTfLiteInt32 kTfLiteMmapRo 12 bytes (0.0 MB) 3
Tensor 3 simple_layer/strided_slice_1 kTfLiteInt32 kTfLiteMmapRo 12 bytes (0.0 MB) 3
Tensor 4 simple_layer/strided_slice/stack_1 kTfLiteInt32 kTfLiteArenaRw 4 bytes (0.0 MB) 1
Tensor 5 Identity kTfLiteFloat32 kTfLiteDynamic 0 bytes (0.0 MB) 103 0 32
Node 0 Operator Builtin Code 83 PACK: inputs [0], outputs [4]
Node 1 Operator Builtin Code 45 STRIDED_SLICE: inputs [1, 2, 4, 3], outputs [5]
--- Print interpreter state end ---
```

The output tensor has a size of [103, 0, 32], not [103, 8, 32]. I think this is a bug in tflite's op builtin strided_slice ResizeOutputTensor (L101). Please let me know if I am doing something wrong.

Other info / logs: You can find all the code snippets I used to reproduce the problem here. Download all dependencies by running download_dependencies.sh at tensorflow/lite/tools/make; you can find the tflite model there too.

P.S. The dimension error is quite random: sometimes it calculates correctly, sometimes the innermost (axis 2) dimension becomes zero, sometimes the middle (axis 1) dimension becomes zero, and sometimes both the axis 1 and axis 2 dimensions become zero.

P.S.2: I double-checked with unmodified TensorFlow source (downloaded with wget) on Ubuntu 18.04.5, a different machine, with GCC 7.5.0, Bazel 3.7.2 and Python 3.6.9. The same command was used to build libtensorflowlite.so, and .tf_configure.bazelrc was:

```
build --action_env PYTHON_BIN_PATH=/home/kukdh1/virtualenvs/tensorflow/bin/python3
build --action_env PYTHON_LIB_PATH=/home/kukdh1/virtualenvs/tensorflow/lib/python3.6/site-packages
build --python_path=/home/kukdh1/virtualenvs/tensorflow/bin/python3
build --config=xla
build:opt --copt=-march=native
build:opt --copt=-Wno-sign-compare
build:opt --host_copt=-march=native
build:opt --define with_default_optimizations=true
test --flaky_test_attempts=3
test --test_size_filters=small,medium
test:v1 --test_tag_filters=-benchmark-test,-no_oss,-gpu,-oss_serial
test:v1 --build_tag_filters=-benchmark-test,-no_oss,-gpu
test:v2 --test_tag_filters=-benchmark-test,-no_oss,-gpu,-oss_serial,-v1only
test:v2 --build_tag_filters=-benchmark-test,-no_oss,-gpu,-v1only
build --action_env TF_CONFIGURE_IOS=0
```

The output on the Ubuntu machine is (one example):

```
Input 41, output [41 0 0]
--- Print interpreter state begin ---
Interpreter has 6 tensors and 2 nodes
Inputs: 0
Outputs: 5
Tensor 0 input kTfLiteInt32 kTfLiteCustom 4 bytes (0.0 MB)
Tensor 1 simple_layer/ReadVariableOp/resource kTfLiteFloat32 kTfLiteMmapRo 131072 bytes (0.1 MB) 128 8 32
Tensor 2 simple_layer/strided_slice kTfLiteInt32 kTfLiteMmapRo 12 bytes (0.0 MB) 3
Tensor 3 simple_layer/strided_slice_1 kTfLiteInt32 kTfLiteMmapRo 12 bytes (0.0 MB) 3
Tensor 4 simple_layer/strided_slice/stack_1 kTfLiteInt32 kTfLiteArenaRw 4 bytes (0.0 MB) 1
Tensor 5 Identity kTfLiteFloat32 kTfLiteDynamic 0 bytes (0.0 MB) 41 0 0
Node 0 Operator Builtin Code 83 PACK: inputs [0], outputs [4]
Node 1 Operator Builtin Code 45 STRIDED_SLICE: inputs [1, 2, 4, 3], outputs [5]
--- Print interpreter state end ---
```
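As a reference point for the shape arithmetic in the report above, a numpy sketch of the same slicing rule (numpy stands in for the TF semantics):

```python
import numpy as np

# Slicing [0:893] along axis 0 of a (1656, 8, 32) tensor must keep the
# trailing axes intact -- the behavior TFLite's StridedSlice violates here.
x = np.zeros((1656, 8, 32), dtype=np.float32)
out = x[0:893]

print(out.shape)
```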
tensorflow/tensorflow
Data input pipeline does not implement batch for from_tensor_slices(dict(df))
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): latest, from source
- TensorFlow version (use command below): latest, from source
- Python version: 3.8
- Bazel version (if compiling from source): 3.6.0
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory: 2070

Describe the current behavior: Consuming CSV data (Titanic):

```python
slices = tf.data.Dataset.from_tensor_slices(dict(df))
```

That code will have dictionary-object slices (element_spec returns a dict). This throws an exception if we use batch from the same documentation:

```python
def make_window_dataset(ds, window_size=5, shift=1, stride=1):
    windows = ds.window(window_size, shift=shift, stride=stride)

    def sub_to_batch(sub):
        return sub.batch(window_size, drop_remainder=True)

    windows = windows.flat_map(sub_to_batch)
    return windows
```
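A pure-Python sketch (toy columns and a hypothetical `batch_dict` helper, not tf.data) of the semantics the report expects from batching a dict-structured dataset: each feature is batched independently while the dict shape is preserved:

```python
# batch_dict is a hypothetical stand-in for Dataset.batch on a dict of
# per-column sequences: every column is sliced at the same boundaries.
def batch_dict(columns, batch_size):
    n = len(next(iter(columns.values())))
    for start in range(0, n - batch_size + 1, batch_size):
        yield {name: values[start:start + batch_size]
               for name, values in columns.items()}

titanic_like = {"age": [22, 38, 26, 35], "fare": [7.25, 71.28, 7.93, 53.1]}
batches = list(batch_dict(titanic_like, batch_size=2))
```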
tensorflow/tensorflow
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'is_fully_defined'
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.4.0
- Python version: 3.7.6
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: 11.0
- GPU model and memory: 24 GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior: cannot reshape a SparseTensor.

Describe the expected behavior: reshape a SparseTensor.

Standalone code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to Colab/Jupyter/any notebook):

```python
import tensorflow as tf

sp = tf.SparseTensor([[1, 1], [2, 2]], [1, 2], [4, 4])
new_sp = tf.sparse.reshape(sp, [8, -1])
```

Other info / logs:

```
Traceback (most recent call last):
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\ops\sparse_ops.py", line 902, in sparse_reshape
    ... and sp_input.shape.is_fully_defined()
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'is_fully_defined'
```
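A dense numpy analogue (toy values mirroring the report) of what the failing sparse reshape should compute: reshaping only relocates the two nonzeros to new coordinates:

```python
import numpy as np

# Dense 4x4 tensor with the report's two nonzeros at (1, 1) and (2, 2);
# reshaping to (8, 2) maps flat index 5 -> (2, 1) and flat index 10 -> (5, 0).
dense = np.zeros((4, 4))
dense[1, 1] = 1.0
dense[2, 2] = 2.0
reshaped = dense.reshape(8, 2)
```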
tensorflow/tensorflow
When using activation='tanh', the training program will crash
Bug

```python
def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(7 * 7 * 256, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Reshape((7, 7, 256)))
    assert model.output_shape == (None, 7, 7, 256)
    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    assert model.output_shape == (None, 14, 14, 64)
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    model.add(layers.LeakyReLU())
    assert model.output_shape == (None, 28, 28, 1)
    return model

generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
```

Issue: when using activation='tanh', the training program will crash; using other activation functions is normal.
TensorFlow version: from tf-nightly 2.5.0.dev20201217 to tf-nightly 2.5.0.dev20201228
CUDA version: 11.0
Python version: 3.8
OS: Windows 10
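Separate from the crash itself, tanh is the conventional output activation for a DCGAN generator because it bounds pixels to (-1, 1); a minimal numpy check of that property:

```python
import numpy as np

# tanh saturates at +/-1 even for extreme inputs, matching images
# normalized to the [-1, 1] range in the standard DCGAN setup.
x = np.array([-100.0, -1.0, 0.0, 1.0, 100.0])
y = np.tanh(x)
in_range = bool(np.all(np.abs(y) <= 1.0))
```

Note that the model in the report also applies a LeakyReLU after the tanh layer, which rescales the negative half of that range.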