Dataset schema: repository (string, 156 classes) · issue title (string, 1–1.01k chars) · labels (string, 8 classes) · body (string, 1–270k chars)
tensorflow/tensorflow
huhuuh
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior

Describe the expected behavior

Contributing
- Do you want to contribute a PR? (yes/no):
- Briefly describe your candidate solution (if contributing):

Standalone code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
AttributeError: 'float' object has no attribute 'dtype' when executing tfp.optimizer.differential_evolution_one_step
Bug
This error is shown whenever tfp.optimizer.differential_evolution_one_step is called. It is caused by line 125 of the optimizer file differential_evolution.py, which reads crossover_prob=0.9. On that line the variable crossover_prob is given as a float, whereas later in the file, on line 652 (dtype = crossover_prob.dtype.base_dtype), it is required to be a TensorFlow variable. Changing crossover_prob to a tf.Variable before line 652 would solve this problem. Alternatively, changing the default value on line 125 from crossover_prob=0.9 to crossover_prob=tf.Variable(0.9) would also solve the problem and allow the function to run. Thank you.
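A minimal sketch of the failure mode and the suggested fix, with NumPy standing in for TensorFlow (an assumption; the real code would use tf.convert_to_tensor or tf.Variable, and the evolution step itself is elided):

```python
import numpy as np

def differential_evolution_one_step(population, crossover_prob=0.9):
    # A bare Python float has no .dtype attribute, which is exactly what
    # line 652's `crossover_prob.dtype.base_dtype` trips over. Converting
    # the argument up front gives it a dtype either way.
    crossover_prob = np.asarray(crossover_prob)
    assert hasattr(crossover_prob, "dtype")  # now safe to read .dtype
    return population  # the actual evolution step is elided
```

The same defensive conversion at the top of the real function would make both float and variable inputs valid.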
tensorflow/tensorflow
tf.histogram_fixed_width_bins missing input check for nbins
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Y
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.7.0
- Python version: 3.8
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Standalone code to reproduce the issue

    value_range = [0.0, 5.0]
    new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]
    indices = tf.histogram_fixed_width_bins(new_values, value_range, nbins=-5)
    indices.numpy()
    # Output: array([0, 0, 0, 0, 0, 0], dtype=int32)

Expected output: nbins is expected to be a positive integer, and an error should be raised if it is negative.
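A hypothetical pure-Python reference (not the TF kernel) illustrating the missing check the report asks for, alongside the usual clamp-to-range binning behaviour:

```python
def histogram_fixed_width_bins(values, value_range, nbins):
    # The check the report asks for: nbins must be a positive integer.
    if nbins <= 0:
        raise ValueError("nbins must be a positive integer, got %d" % nbins)
    lo, hi = value_range
    width = (hi - lo) / nbins
    # Out-of-range values are clamped into the first/last bin.
    return [min(nbins - 1, max(0, int((v - lo) // width))) for v in values]
```

With nbins=5 this reproduces the documented behaviour, while nbins=-5 raises instead of silently returning zeros.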
tensorflow/tensorflow
tf.data.experimental.sample_from_datasets non-deterministic in multi-GPU
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: does not apply
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.4.1
- Python version: 3.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 11.2
- GPU model and memory:
tensorflow/tensorflow
XLA docs for Map are unclear and incomplete
Bug
URL(s) with the issue: [Map]

Description of issue (what needs changing):
- Clear description? No. The section "The map function is an arbitrary computation with the restriction that it has N inputs of scalar type T and a single output with type S. The output has the same dimensions as the operands, except that the element type T is replaced with S." is not clear: does it mean the output of the map function, or the output of Map? I think it means the latter, but it's not worded like that.
- Correct links? Yes.
- Parameters defined? No: it's not clear what `dimensions` is for, and the argument `static_operands` is missing, though it is possibly alluded to in the definition of `computation` (see the mention of M indices).
- Returns defined? Yes; raises listed and defined as much as any other function in these docs.
- Usage example? No. An example would be amazing, should the devs feel like adding one.
- Request visuals, if applicable: N/A.
- Submit a pull request? No.
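A short sketch of the semantics the quoted paragraph describes (a plain-Python analogy, not the XLA implementation): `computation` maps N scalars of type T to one scalar of type S, and it is the result of Map, not of `computation`, that has the same shape as the operands with element type S:

```python
def xla_map(computation, *operands):
    # `computation` produces one scalar per element position; Map applies
    # it elementwise, so the output has the operands' shape with the
    # element type replaced by computation's result type.
    return [computation(*elems) for elems in zip(*operands)]
```

For example, mapping `a > 0` over an int operand yields a same-shaped bool result (T = int, S = bool).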
tensorflow/tensorflow
Descriptions of tf.RaggedTensor's methods __xx__ are unclear
Bug
Documentation bug: descriptions and usage examples are mismatched. tf.RaggedTensor has a number of class methods, for example __abs__ and __add__. The usage examples for these methods are mismatched. For instance, the example code for __add__ does not contain any ragged tensor; instead, the examples are all normal tensors of type tf.Tensor. The example code is:

    a = tf.constant([True])
    b = tf.constant([False])
    tf.math.logical_and(a, b)

That is not what one expects for a tf.RaggedTensor.__add__ method.
tensorflow/tensorflow
File system scheme 's3' not implemented
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): tensorflow 2.7.0
- TensorFlow IO version: tensorflow-io 0.23.1
- Python version: Python 3.8.12
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
tensorflow.python.framework.errors_impl.UnimplementedError: File system scheme 's3' not implemented

Describe the expected behavior
Should be able to connect to the S3 filesystem.

Contributing
- Do you want to contribute a PR? (yes/no): N/A
- Briefly describe your candidate solution (if contributing): rolling back to tensorflow-io 0.17 and tensorflow 2.4.4 seems to resolve the issue.

Standalone code to reproduce the issue

    from tensorflow.python.lib.io import file_io
    print(file_io.stat('s3://bucketname/path'))

    from tensorflow.io import gfile
    print(gfile.exists('s3://bucketname/path'))

Other info / logs
Similar old issue here: 40302
tensorflow/tensorflow
Deprecation message for tf.compat.v1.batch_gather suggests invalid migration
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab
- TensorFlow version (use command below): v2.7.0-0-gc256c071bb2 2.7.0
- Python version: 3.7.12

Describe the current behavior
Using tf.compat.v1.batch_gather triggers the following warning:

    WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1096: batch_gather (from tensorflow.python.ops.array_ops) is deprecated and will be removed after 2017-10-25.
    Instructions for updating:
    `tf.batch_gather` is deprecated, please use `tf.gather` with `batch_dims=-1` instead.

This warning was last edited in this commit. However, the suggested migration from tf.compat.v1.batch_gather to tf.gather(..., batch_dims=-1) is invalid (e.g., see this Colab notebook for an example). From the implementation (L5166) of batch_gather, it seems that the correct migration is from

    tf.compat.v1.batch_gather(data, indices)

to

    tf.gather(data, indices, batch_dims=tf.rank(indices) - 1)

Describe the expected behavior
The warning should be amended.
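A NumPy sketch (an assumption, not the TF source) of what batch_gather actually computes, which is what the corrected migration has to reproduce: for every batch position b, result[b, i] = data[b, indices[b, i]], i.e. a gather along the last axis with batch_dims = rank(indices) - 1:

```python
import numpy as np

def batch_gather(data, indices):
    # Gather along the last axis, batched over all leading axes of
    # `indices` -- the behaviour tf.compat.v1.batch_gather provides.
    return np.take_along_axis(np.asarray(data), np.asarray(indices), axis=-1)
```

For example, gathering columns [2, 0] from the first row and [1, 1] from the second picks per-row elements rather than a single shared index set.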
tensorflow/tensorflow
Update gpu_advanced.md
Bug
Updating this as per 53775.
tensorflow/tensorflow
Using enable_op_determinism with fixed seeds on the same hardware (partial DGX) still gets different results in 2.8
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): CentOS
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.8rc0
- Python version: 3.8.12
- GPU model and memory: 8x V100

After I use enable_op_determinism in TensorFlow 2.8, my model still gets random results (around 0.3 mIoU variation) after each run. Note that I set PYTHONHASHSEED to a fixed value before starting Python, and also set a fixed value as the random seed for TensorFlow, NumPy, etc. I wonder why the results are still random on the same hardware, since an UnimplementedError should be thrown if a nondeterministic op were used.
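A sketch of the seeding discipline the report describes, with the stdlib `random` module standing in for the TF/NumPy generators (an assumption; in TF 2.8 the corresponding calls would be tf.keras.utils.set_random_seed(seed) plus tf.config.experimental.enable_op_determinism()):

```python
import os
import random

def seeded_run(seed=0, n=3):
    # Pin every seed source the report mentions: the hash seed plus the
    # framework RNG. With all of these fixed, two runs with the same
    # seed should produce identical sequences.
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    return [random.random() for _ in range(n)]
```

If results still differ after all seed sources are pinned, the remaining variance comes from elsewhere (e.g. data-pipeline ordering or multi-GPU reduction order), which is what this issue is probing.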
tensorflow/tensorflow
tf.scatter_nd documentation refers to deprecated APIs
Bug
URL(s) with the issue:

Description of issue (what needs changing):
Clear description: there is a paragraph describing the relationship between tf.scatter_nd and tf.tensor_scatter_add: "This operation is similar to tf.tensor_scatter_add, except that the tensor is zero-initialized. Calling tf.scatter_nd(indices, values, shape) is identical to calling tf.tensor_scatter_add(tf.zeros(shape, values.dtype), indices, values)." However, tf.tensor_scatter_add does not exist in TensorFlow 2.7; tf.tensor_scatter_add should be replaced with tf.tensor_scatter_nd_add.
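A NumPy sketch of the identity the paragraph documents, i.e. tf.scatter_nd(i, u, s) equals tf.tensor_scatter_nd_add(tf.zeros(s, u.dtype), i, u): an additive scatter into a zero-initialized tensor (duplicate indices accumulate):

```python
import numpy as np

def scatter_nd(indices, updates, shape):
    # Additive scatter into zeros; np.add.at accumulates duplicates,
    # matching the "add" semantics the docs describe.
    updates = np.asarray(updates)
    out = np.zeros(shape, dtype=updates.dtype)
    np.add.at(out, tuple(np.asarray(indices).T), updates)
    return out
```

Note how the repeated index below accumulates (7 + 1 = 8), which is the behaviour that distinguishes the *_add variant from plain assignment.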
tensorflow/tensorflow
A spelling mistake in the GPU delegate serialization doc
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: GPU delegate serialization

Description of issue (what needs changing):
- Current: "ThThis improvement is being achieved by exchanging disk space for time savings."
- Proposed: "This improvement is being achieved by exchanging disk space for time savings."
tensorflow/tensorflow
Small mathematical typo at
Bug
URL(s) with the issue:

Description of issue (what needs changing):
Clear description:
- Currently: at x = 1.0, y = f(x) = 1 2 3 5 2
- Proposed: at x = 1.0, y = f(x) = 1 2 2 5 2

(Only the third value differs; the exact punctuation of the expression was lost in extraction.)

Submit a pull request? Yes, I can submit a pull request if the issue is approved.
tensorflow/tensorflow
TF-TRT: EfficientDet-D0 pre-building TRT engines fails (TF 2.7.0, TRT 7.2.3)
Bug
Environment: Ubuntu 18.04; TensorFlow 2.7.0; CUDA 11.1.1; TensorRT 7.2.3.4; cuDNN 8.1.1.33; CUDA compute capability 7.5. Hardware: EC2 g4dn.8xlarge (Tesla Turing T4 Tensor Cores). The TF2 model is from the TensorFlow 2 Detection Model Zoo: EfficientDet D0 512x512.

I tried to convert the saved model to a TRT model and then pre-build the TRT engines. The conversion works fine, but the pre-building step fails. Note: other models, such as Faster R-CNN ResNet50 V1 640x640 and SSD MobileNet V2 320x320, work fine, so most probably it is not a cuDNN installation issue as indicated in the error messages below.

    import tensorflow as tf
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    conversion_params = trt.TrtConversionParams()
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=saved_model_dir,
        conversion_params=conversion_params)
    converter.convert()

    import numpy as np

    def my_input_fn():
        input_shapes = [(1, 512, 512, 3)]
        for shape in input_shapes:
            yield np.zeros(shape).astype(np.uint8),

    converter.build(input_fn=my_input_fn)

Error:

    >>> converter.build(input_fn=my_input_fn)
    2022-01-12 05:55:45.849139: I tensorflow/compiler/tf2tensorrt/common/utils.cc:58] Linked TensorRT version: 7.2.3
    2022-01-12 05:55:45.849218: I tensorflow/compiler/tf2tensorrt/common/utils.cc:60] Loaded TensorRT version: 7.2.3
    2022-01-12 05:57:02.621885: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 1 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.630204: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 1 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.630291: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:945] TF-TRT Warning: Engine creation for StatefulPartitionedCall/EfficientDet-D0/bifpn/TRTEngineOp_0_17 failed. The native segment will be used instead. Reason: Internal: Failed to build TensorRT engine
    2022-01-12 05:57:02.630314: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:797] TF-TRT Warning: Engine retrieval for input shapes: [[1,64,64,64], [1,32,32,64]] failed. Running native segment for StatefulPartitionedCall/EfficientDet-D0/bifpn/TRTEngineOp_0_17
    2022-01-12 05:57:02.644741: I tensorflow/stream_executor/cuda/cuda_dnn.cc:366] Loaded cuDNN version 8101
    2022-01-12 05:57:02.647927: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory
    2022-01-12 05:57:02.778160: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 1 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.778302: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 1 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.778392: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:945] TF-TRT Warning: Engine creation for StatefulPartitionedCall/TRTEngineOp_0_19 failed. The native segment will be used instead. Reason: Internal: Failed to build TensorRT engine
    2022-01-12 05:57:02.778412: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:797] TF-TRT Warning: Engine retrieval for input shapes: [[1,64,32,32], [1,64,64,64], [1,64,32,32]] failed. Running native segment for StatefulPartitionedCall/TRTEngineOp_0_19
    2022-01-12 05:57:02.808375: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 1 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.808446: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 1 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.808480: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:945] TF-TRT Warning: Engine creation for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/TRTEngineOp_0_41 failed. The native segment will be used instead. Reason: Internal: Failed to build TensorRT engine
    2022-01-12 05:57:02.808495: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:797] TF-TRT Warning: Engine retrieval for input shapes: [[1,64,64,64]] failed. Running native segment for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/TRTEngineOp_0_41
    2022-01-12 05:57:02.809402: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 1 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.809463: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 1 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.809507: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:945] TF-TRT Warning: Engine creation for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/TRTEngineOp_0_31 failed. The native segment will be used instead. Reason: Internal: Failed to build TensorRT engine
    2022-01-12 05:57:02.809522: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:797] TF-TRT Warning: Engine retrieval for input shapes: [[1,64,64,64]] failed. Running native segment for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/TRTEngineOp_0_31
    2022-01-12 05:57:02.821888: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 1 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.821946: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 1 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.821980: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:945] TF-TRT Warning: Engine creation for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/TRTEngineOp_0_46 failed. The native segment will be used instead. Reason: Internal: Failed to build TensorRT engine
    2022-01-12 05:57:02.821995: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:797] TF-TRT Warning: Engine retrieval for input shapes: [[1,64,64,64]] failed. Running native segment for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/TRTEngineOp_0_46
    2022-01-12 05:57:02.823642: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 4 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.823698: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 4 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.825524: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 4 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.825598: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:40] DefaultLogger safeContext.cpp (124) - Cudnn Error in initializeCommonContext: 4 (Could not initialize cudnn, please check cudnn installation.)
    2022-01-12 05:57:02.832321: E tensorflow/stream_executor/dnn.cc:764] CUDNN_STATUS_EXECUTION_FAILED in tensorflow/stream_executor/cuda/cuda_dnn.cc(5553): 'cudnnPoolingForward(cudnn.handle(), pooling_desc.handle(), &alpha, src_desc.handle(), input_data.opaque(), &beta, dest_desc.handle(), output_data->opaque())'
    2022-01-12 05:57:02.833347: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at trt_engine_op.cc:542 : Internal: Function (node StatefulPartitionedCall/TRTEngineOp_0_21 native segment) dnn PoolForward launch failed : node StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1223, in build
        func(*map(ops.convert_to_tensor, inp))
      File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 1707, in __call__
        return self._call_impl(args, kwargs)
      File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/wrap_function.py", line 249, in _call_impl
        args, kwargs, cancellation_manager)
      File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 1725, in _call_impl
        return self._call_with_flat_signature(args, kwargs, cancellation_manager)
      File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 1774, in _call_with_flat_signature
        return self._call_flat(args, self.captured_inputs, cancellation_manager)
      File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 1960, in _call_flat
        ctx, args, cancellation_manager=cancellation_manager)
      File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 603, in call
        ctx=ctx)
      File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
        inputs, attrs, num_outputs)
    tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
      (0) Internal: dnn PoolForward launch failed : node StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool
             [[StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/TRTEngineOp_0_30/_168]]
      (1) Internal: dnn PoolForward launch failed : node StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool
             [[StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_77/NonMaxSuppressionV5/_584]]
    0 successful operations.
    0 derived errors ignored. [Op:__inference_pruned_135734]
    Function call stack:
    pruned -> pruned
tensorflow/tensorflow
Model subclassing: layers saved as class variables to self.__dict__ for dynamic model definition
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 20.04 LTS
- TensorFlow installed from: binary
- TensorFlow version: 2.5.0
- Python version: 3.7.9
- CUDA/cuDNN version: 11.2
- GPU model and memory: 2x RTX 3090, 24 GB
- Gist with runnable code:

Hi, my goal is to define a dynamically generated FCN model, where I define the number of filters as a list and the model is generated with len(filters) encoder and len(filters) decoder blocks. I defined my blocks and the model according to this TensorFlow guide ("The Model class"). Now, to achieve my dynamic model class, I wrote all class variables into the model's __dict__:

    class FullyConvolutionalNetworkDynamic(Model, ABC):
        def __init__(self,
                     num_filters: List[int],
                     kernel_sizes: List[int],
                     strides: List[int],
                     name: str = "FullyConvolutionalNetwork",
                     activation: str = "relu",
                     data_format: str = "channels_last",
                     *args, **kwargs):
            super(FullyConvolutionalNetworkDynamic, self).__init__(name=name, *args, **kwargs)
            # define symmetric encoder and decoder blocks
            for i, (f, k, s) in enumerate(zip(num_filters, kernel_sizes, strides)):
                self.__dict__[f"encoder_{i}"] = EncoderBlock(num_conv=f, kernel=k, stride=s,
                                                             activation=activation, name=f"enc{i}",
                                                             data_format=data_format)
                self.__dict__[f"decoder_{i}"] = DecoderBlock(num_conv=f, kernel=k, stride=s,
                                                             activation=activation, name=f"dec{i}",
                                                             data_format=data_format)
            # output section
            self.conv1x1 = Conv1D(1, 1, strides=1, padding="same", name="1x1", data_format=data_format)
            self.flatten = Flatten()

Now that my layers are written to the model's __dict__, I tried to add them to my call function (from here):

    @tf.function
    def call(self, x, training=False):
        # encoder path
        for layer in sorted(key for key in self.__dict__ if "encoder" in key):
            x = self.__dict__[layer](x)
        # decoder path
        for layer in sorted((key for key in self.__dict__ if "decoder" in key), reverse=True):
            x = self.__dict__[layer](x)
        # output section
        x = self.conv1x1(x)
        return self.flatten(x)

However, when I now create an instance of my model, the model summary states that only the layers defined in the output section were added to the model:

    model = FullyConvolutionalNetwork(num_filters=[4, 8, 16], kernel_sizes=[3, 3, 3], strides=[1, 1, 1])
    model.build(input_shape=(None, 256, 10))
    model.summary()

    Model: "FullyConvolutionalNetwork"
    Layer (type)       Output Shape    Param #
    1x1 (Conv1D)       multiple        5
    flatten (Flatten)  multiple        0
    Total params: 5
    Trainable params: 5
    Non-trainable params: 0

I have already used the model, but hard-coded to three layers of depth on each path. When I try to recreate my hard-coded model with the dynamic approach above, the model's __dict__ does indeed contain my EncoderBlock and DecoderBlock layers; they are just not added to the model. The hard-coded version works flawlessly. Any help to solve my problem is greatly appreciated. Thank you in advance. PS: I have allowed myself to re-post my issue here, since the tensorflow repository seems much more active than the keras repository.
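A likely explanation, illustrated without TensorFlow (an assumption about the mechanism, but it matches Keras/tf.Module auto-tracking): sublayers are registered inside Layer.__setattr__, so writing straight into self.__dict__ bypasses tracking entirely, which is why summary() only shows conv1x1 and flatten. Using setattr (or a plain list attribute) keeps the blocks registered:

```python
class TrackingLayer:
    # Minimal stand-in for keras.layers.Layer: attribute assignment is
    # intercepted and the name is recorded, mimicking sublayer tracking.
    def __init__(self):
        object.__setattr__(self, "tracked", [])

    def __setattr__(self, name, value):
        self.tracked.append(name)
        object.__setattr__(self, name, value)

m = TrackingLayer()
m.__dict__["encoder_0"] = "EncoderBlock"  # bypasses __setattr__: invisible
setattr(m, "encoder_1", "EncoderBlock")   # goes through __setattr__: tracked
```

So in the dynamic model, `setattr(self, f"encoder_{i}", EncoderBlock(...))` instead of `self.__dict__[f"encoder_{i}"] = ...` should make the blocks appear in the summary.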
tensorflow/tensorflow
Update c_api.h
Bug
Fix.
tensorflow/tensorflow
tf.sparse.softmax lacks support for float16
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Y
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.7.0
- Python version: 3.8
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Standalone code to reproduce the issue

    import tensorflow as tf
    logits = tf.random.uniform([16], 1, 10, dtype=tf.float16)
    r1 = tf.nn.softmax(logits, axis=-1)  # pass
    logits_sp = tf.sparse.from_dense(logits)
    r2 = tf.sparse.softmax(logits_sp)  # InvalidArgumentError

Describe the current behavior
tf.sparse.softmax cannot accept a tensor of type float16; however, tf.nn.softmax does support half. For the above code snippet, the error message is:

    InvalidArgumentError: Value for attr 'T' of half is not in the list of allowed values: float, double
    ; NodeDef: {{node SparseSoftmax}}; Op<name=SparseSoftmax; ...; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE]> [Op:SparseSoftmax]

Describe the expected behavior
According to the documentation for tf.sparse.softmax, it is equivalent to tf.nn.softmax but for sparse tensors, so tf.sparse.softmax should also support float16 input.
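A NumPy sketch of what tf.sparse.softmax computes on one row's *stored* values (implicit zeros are treated as missing, not as zero logits). The point it illustrates is the report's argument: nothing in the math is dtype-specific, so a float16 kernel could mirror the existing float32/float64 ones:

```python
import numpy as np

def sparse_softmax_values(values):
    # Numerically-stable softmax over the explicitly stored values only.
    v = np.asarray(values)
    e = np.exp(v - v.max())
    return e / e.sum()
```

Running this on a float16 array stays in float16 end to end, within half-precision tolerance.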
tensorflow/tensorflow
tf.sparse.to_dense doesn't support complex dtypes
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Y
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.7.0
- Python version: 3.8
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Standalone code to reproduce the issue

    import tensorflow as tf
    x = tf.cast(tf.constant([1.0, 2.0]), tf.complex128)
    x_sparse = tf.sparse.from_dense(x)
    print('from_dense passes:', x_sparse)  # pass
    x_dense = tf.sparse.to_dense(x_sparse)  # fail
    print('to_dense passes:', x_dense)

Describe the current behavior
tf.sparse.from_dense can convert a complex dense tensor to a sparse tensor; however, tf.sparse.to_dense fails to convert the complex sparse tensor back to a dense tensor. For the above code snippet, the output is:

    from_dense passes: SparseTensor(...)
    NotFoundError: Could not find device for node: {{node SparseToDense}} = SparseToDense[T=DT_COMPLEX128, Tindices=DT_INT64, validate_indices=true]
    All kernels registered for op SparseToDense:
      device='CPU'; T in [DT_STRING]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_STRING]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_BOOL]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_BOOL]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_DOUBLE]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_DOUBLE]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_FLOAT]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_FLOAT]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_BFLOAT16]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_BFLOAT16]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_HALF]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_HALF]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_INT32]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_INT32]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_INT8]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_INT8]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_UINT8]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_UINT8]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_INT16]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_INT16]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_UINT16]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_UINT16]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_UINT32]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_UINT32]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_INT64]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_INT64]; Tindices in [DT_INT32]
      device='CPU'; T in [DT_UINT64]; Tindices in [DT_INT64]
      device='CPU'; T in [DT_UINT64]; Tindices in [DT_INT32]
      device='GPU'; T in [DT_BOOL]; Tindices in [DT_INT64]
      device='GPU'; T in [DT_BOOL]; Tindices in [DT_INT32]
      device='GPU'; T in [DT_INT32]; Tindices in [DT_INT64]
      device='GPU'; T in [DT_INT32]; Tindices in [DT_INT32]
      device='GPU'; T in [DT_INT8]; Tindices in [DT_INT64]
      device='GPU'; T in [DT_INT8]; Tindices in [DT_INT32]
      device='GPU'; T in [DT_UINT8]; Tindices in [DT_INT64]
      device='GPU'; T in [DT_UINT8]; Tindices in [DT_INT32]
      device='GPU'; T in [DT_INT16]; Tindices in [DT_INT64]
      device='GPU'; T in [DT_INT16]; Tindices in [DT_INT32]
      device='GPU'; T in [DT_UINT16]; Tindices in [DT_INT64]
      device='GPU'; T in [DT_UINT16]; Tindices in [DT_INT32]
      device='GPU'; T in [DT_UINT32]; Tindices in [DT_INT64]
      device='GPU'; T in [DT_UINT32]; Tindices in [DT_INT32]
      device='GPU'; T in [DT_INT64]; Tindices in [DT_INT64]
      device='GPU'; T in [DT_INT64]; Tindices in [DT_INT32]
      device='GPU'; T in [DT_UINT64]; Tindices in [DT_INT64]
      device='GPU'; T in [DT_UINT64]; Tindices in [DT_INT32]
      device='GPU'; T in [DT_DOUBLE]; Tindices in [DT_INT64]
      device='GPU'; T in [DT_DOUBLE]; Tindices in [DT_INT32]
      device='GPU'; T in [DT_FLOAT]; Tindices in [DT_INT64]
      device='GPU'; T in [DT_FLOAT]; Tindices in [DT_INT32]
      device='GPU'; T in [DT_HALF]; Tindices in [DT_INT64]
      device='GPU'; T in [DT_HALF]; Tindices in [DT_INT32]
     [Op:SparseToDense]

Describe the expected behavior
tf.sparse.to_dense should also support complex dtypes.
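A NumPy sketch of the missing kernel's behaviour: densifying a sparse tensor is just "write values at indices into zeros", and that operation works for complex dtypes exactly as it does for real ones, which supports the expectation that complex64/complex128 kernels could be registered:

```python
import numpy as np

def sparse_to_dense(indices, values, shape):
    # Scatter the stored values into a zero-initialized dense array.
    # The dtype is taken from `values`, so complex works unchanged.
    values = np.asarray(values)
    out = np.zeros(shape, dtype=values.dtype)
    out[tuple(np.asarray(indices).T)] = values
    return out
```

This can also serve as a user-side workaround by densifying via NumPy (or by splitting real and imaginary parts) until the kernel exists.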
tensorflow/tensorflow
How to build TensorFlow 1.15 with docker build? I can only compile the latest TensorFlow 2.9 according to the official tutorial
Bug
[image] I want to switch to branch r1.15, but it didn't work. Thank you.
tensorflow/tensorflow
dataset.batch changes tensor shape and aborts LSTM fit
Bug
System information
- TensorFlow version: v2.4.0-49-g85c8b2a817f 2.4.1
- Platform: Kaggle TPU

Hi, my LSTM code gives this error when using dataset.batch:

    /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:805 train_function  *
        return step_function(self, iterator)
    /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:795 step_function
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/tpu_strategy.py:540 run
        return self.extended.tpu_run(fn, args, kwargs, options)
    /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/tpu_strategy.py:1296 tpu_run
        return func(*args, **kwargs)
    /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/tpu_strategy.py:1364 tpu_function
        xla_options=tpu.XLAOptions(use_spmd_for_xla_partitioning=False))
    /opt/conda/lib/python3.7/site-packages/tensorflow/python/tpu/tpu.py:968 replicate
        xla_options=xla_options)[1]
    /opt/conda/lib/python3.7/site-packages/tensorflow/python/tpu/tpu.py:1439 split_compile_and_replicate
        outputs = computation(*computation_inputs)
    /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/tpu_strategy.py:1325 replicated_fn
        result[0] = fn(*replica_args, **replica_kwargs)
    /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:788 run_step
        outputs = model.train_step(data)
    /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:754 train_step
        y_pred = self(x, training=True)
    /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:998 __call__
        input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
    /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/input_spec.py:223 assert_input_compatibility
        str(tuple(shape)))
    ValueError: Input 0 of layer model is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, None, 3, 13)

Meanwhile, without dataset.batch the code works correctly. I'm using a TensorFlow dataset built from TFRecord files:

    def read_tfrecord(example_proto):
        feature_description = {
            "x": tf.io.FixedLenFeature([], tf.string),
            "y": tf.io.FixedLenFeature([], tf.string),
        }
        example = tf.io.parse_single_example(example_proto, feature_description)
        vect = tf.io.parse_tensor(example["x"], tf.float16)
        label = tf.io.parse_tensor(example["y"], tf.int8)
        vect = tf.reshape(vect, [-1, ws, num_features])
        label = tf.reshape(label, [-1, 1])
        return vect, label

I had to add the reshapes because without them I got this error: TypeError: can't multiply sequence by non-int of type 'NoneType', and here [link 1] it was suggested to add a reshape.

    def load_dataset(filenames):
        ignore_order = tf.data.Options()
        ignore_order.experimental_deterministic = False  # disable order, increase speed
        dataset = tf.data.TFRecordDataset(filenames)  # automatically interleaves reads from multiple files
        dataset = dataset.with_options(ignore_order)  # uses data as soon as it streams in, rather than in its original order
        dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTOTUNE)
        dataset = dataset.batch(bs)
        dataset = dataset.prefetch(buffer_size=AUTOTUNE)
        return dataset

    model = Sequential(name="model")
    model.add(LSTM(units=units, input_shape=(ws, num_features), return_sequences=False, dropout=dropout))
    model.add(Dense(units=num_labels, activation="sigmoid"))
    model.compile(loss="binary_crossentropy", optimizer="nadam",
                  metrics=["BinaryAccuracy", "AUC", "Precision", "Recall"])

Here you can see the dataset, with:

    for parsed_record in parsed_dataset.take(10):
        print(repr(parsed_record))

[microsoftteams-image (1)]
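A plausible reading of the error (a suggestion, not a confirmed diagnosis): read_tfrecord reshapes each element to [-1, ws, num_features], which is already 3-D, and Dataset.batch() always *prepends* a batch dimension rather than replacing one, producing the 4-D shape (None, None, 3, 13) that the LSTM rejects. The shape arithmetic in a pure-Python sketch:

```python
def batched_element_shape(element_shape, batch_size):
    # Dataset.batch prepends a dimension; it never merges or replaces one.
    return (batch_size, *element_shape)
```

Reshaping each element to (ws, num_features) before batching (or calling dataset.unbatch() after the map) would give the 3-D (batch, ws, num_features) input the LSTM expects.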
tensorflow/tensorflow
TensorFlow Lite C API: TfLiteInterpreter misleading documentation
Bug
This is a documentation bug in the TFLite C API (I'm not sure if I chose a proper issue tag).

Description of issue (what needs changing):
Clear description: see the documentation (L146-L158) of TfLiteInterpreterCreate. According to this documentation, the snippet below is valid:

    std::vector<char> model_data;
    // ... fill model_data ...
    TfLiteModel* model = TfLiteModelCreate(model_data.data(), model_data.size());
    // Create the interpreter.
    TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);
    // According to the documentation I can delete the model; hence the
    // lifetime of the model ends:
    TfLiteModelDelete(model);

From now on, according to the documentation (L98-L104) of TfLiteModelCreate, model_data can be modified, like the snippet below:

    // This is just a demo to show that the current or another process
    // may modify the memory region.
    for (char& e : model_data) e = 0;

However, this is not valid and corrupts the interpreter severely. Deleting a TfLiteModel object is OK if one reads it from a file, because then the interpreter relies on the file, which is assumed to stay unmodified during the lifetime of the interpreter. On the other hand, the lifetime of an interpreter must be bounded by the model, which is bounded by the lifetime of the model data. To avoid errors, similar statements can be found in the documentation summary of the C API (see the usage snippet). I think @jdduke is the one who can fix the documentation properly, but my suggestion is to replace the "and can destroy it immediately after creating the interpreter; the interpreter will maintain its own reference to the underlying model data" part with "and the model data must outlive the interpreter", to avoid confusion.
tensorflow/tensorflow
Model.build documentation absent on website
Bug
The `build` method for the `Model` class is not documented and is not present on the page (method 2). Can this be added? Thank you.
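For context, a minimal sketch of the undocumented method in question: `Model.build` creates a model's weights before the model is ever called on real data (the layer sizes here are illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Without build(), the layers have no weights yet because input sizes are unknown.
model.build(input_shape=(None, 8))  # batch dimension left as None
model.summary()
```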
tensorflow/tensorflow
tensorflow/lite/schema/upgrade_schema.py: move `(see schema.fbs)` up to the operator_type arg descriptor
Bug
Closes #53566. I have a custom docstring parser that was reading your codebase to make changes down the track, like automatically inferring types and adding them as annotations, but it hiccups on this line of your codebase, because it is on a new line, has a colon in it, and has low indentation; the line above then indicates the same scope as that arg but doesn't actually refer to a new argument. This PR moves it to the same line as the one above, so it refers to that arg.
tensorflow/tensorflow
AttributeError: 'NoneType' object has no attribute 'outer_context' when building a token classification model
Bug
Hi, I am training a token classification model and I use ELMo embeddings as one layer of my model. I'm using ELMo embeddings from TensorFlow Hub, with TensorFlow 2.4.1. The same model architecture worked when built in TensorFlow 1.x, but I need to migrate it to TF2. I followed the instructions on how to replace `hub.Module` with `hub.KerasLayer` or `hub.load` in order to use ELMo in TF2. The example below uses `hub.load`; when using `hub.KerasLayer`, I cannot use the `tokens` signature. Please note that the same model architecture works in TF1, but I need a solution to make it work in TF2. This is how I'm building the model:

```python
def ElmoEmbedding(x):
    return elmo_model.signatures["tokens"](
        tokens=tf.squeeze(tf.cast(x, tf.string)),
        sequence_len=tf.constant(batch_size * [max_len]),
    )["elmo"]

def build_model(max_len, n_words, n_tags):
    word_input_layer = Input(shape=(max_len, 40))
    elmo_input_layer = Input(shape=(max_len,), dtype=tf.string)
    word_output_layer = Dense(n_tags, activation="softmax")(word_input_layer)
    elmo_output_layer = Lambda(ElmoEmbedding, output_shape=(None, 1024))(elmo_input_layer)
    output_layer = Concatenate()([word_output_layer, elmo_output_layer])
    output_layer = BatchNormalization()(output_layer)
    output_layer = Bidirectional(
        LSTM(units=512, return_sequences=True, recurrent_dropout=0.2, dropout=0.2))(output_layer)
    output_layer = TimeDistributed(Dense(n_tags, activation="softmax"))(output_layer)
    model = Model([elmo_input_layer, word_input_layer], output_layer)
    return model
```

and therefore I want to train the model as follows:

```python
elmo_model = hub.load(...)
model = build_model(max_len, n_words, n_tags)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
history = model.fit(
    [np.array(X1_train), np.array(X2_train).reshape((len(X2_train), max_len, 40))],
    y_train,
    validation_data=(
        [np.array(X1_valid), np.array(X2_valid).reshape((len(X2_valid), max_len, 40))],
        y_valid),
    batch_size=32, epochs=2, verbose=1)
```

To clarify: `X1_train` is a list of tokenized sentences and `X2_train` is a list of hand-picked features for every token in every sentence. The error that I'm getting when calling `model.fit` is the following one:
attributeerror traceback most recent call last in 51 y train 52 validation datum np array x1 valid np array x2 valid reshape len x2 valid max len 40 y valid 53 batch size 32 epoch 2 verbose 1 54 hist pd dataframe history history opt conda lib python3 7 site package tensorflow python keras engine training py in fit self x y batch size epoch verbose callback validation split validation datum shuffle class weight sample weight initial epoch step per epoch validation step validation batch size validation freq max queue size worker use multiprocesse 1098 r 1 1099 callback on train batch begin step 1100 tmp log self train function iterator 1101 if datum handler should sync 1102 context async wait opt conda lib python3 7 site package tensorflow python eager def function py in call self args kwd 826 trace count self experimental get trace count 827 with trace trace self name as tm 828 result self call args kwd 829 compiler xla if self experimental compile else nonxla 830 new tracing count self experimental get trace count opt conda lib python3 7 site package tensorflow python eager def function py in call self args kwd 869 this be the first call of call so we have to initialize 870 initializer 871 self initialize args kwd add initializer to initializer 872 finally 873 at this point we know that the initialization be complete or less opt conda lib python3 7 site package tensorflow python eager def function py in initialize self args kwd add initializer to 724 self concrete stateful fn 725 self stateful fn get concrete function internal garbage collect pylint disable protect access 726 args kwd 727 728 def invalid creator scope unused args unused kwd opt conda lib python3 7 site package tensorflow python eager function py in get concrete function internal garbage collect self args kwargs 2967 args kwargs none none 2968 with self lock 2969 graph function self maybe define function args kwargs 2970 return graph function 2971 opt conda lib python3 7 site package tensorflow 
python eager function py in maybe define function self args kwargs 3359 3360 self function cache miss add call context key 3361 graph function self create graph function args kwargs 3362 self function cache primary cache key graph function 3363 opt conda lib python3 7 site package tensorflow python eager function py in create graph function self args kwargs override flat arg shape 3204 arg name arg name 3205 override flat arg shape override flat arg shape 3206 capture by value self capture by value 3207 self function attribute 3208 function spec self function spec opt conda lib python3 7 site package tensorflow python framework func graph py in func graph from py func name python func args kwargs signature func graph autograph autograph option add control dependency arg name op return value collection capture by value override flat arg shape 988 original func tf decorator unwrap python func 989 990 func output python func func args func kwargs 991 992 invariant func output contain only tensor compositetensor opt conda lib python3 7 site package tensorflow python eager def function py in wrap fn args kwd 632 xla context exit 633 else 634 out weak wrap fn wrap args kwd 635 return out 636 opt conda lib python3 7 site package tensorflow python framework func graph py in wrapper args kwargs 975 except exception as e pylint disable broad except 976 if hasattr e ag error metadata 977 raise e ag error metadata to exception e 978 else 979 raise attributeerror in user code opt conda lib python3 7 site package tensorflow python keras engine training py 805 train function return step function self iterator opt conda lib python3 7 site package tensorflow python keras engine training py 795 step function output model distribute strategy run run step args datum opt conda lib python3 7 site package tensorflow python distribute distribute lib py 1259 run return self extend call for each replica fn args args kwargs kwargs opt conda lib python3 7 site package tensorflow python 
distribute distribute lib py 2730 call for each replica return self call for each replica fn args kwargs opt conda lib python3 7 site package tensorflow python distribute distribute lib py 3417 call for each replica return fn args kwargs opt conda lib python3 7 site package tensorflow python keras engine training py 788 run step output model train step datum opt conda lib python3 7 site package tensorflow python keras engine training py 754 train step y pre self x training true opt conda lib python3 7 site package tensorflow python keras engine base layer py 1012 call output call fn input args kwargs opt conda lib python3 7 site package tensorflow python keras engine functional py 425 call input training training mask mask opt conda lib python3 7 site package tensorflow python keras engine functional py 560 run internal graph output node layer args kwargs opt conda lib python3 7 site package tensorflow python keras engine base layer py 1012 call output call fn input args kwargs opt conda lib python3 7 site package tensorflow python keras layers core py 917 call result self function input kwargs 7 elmoembedde return elmo model signature token token tf squeeze tf cast x tf string sequence len tf constant batch size max len elmo opt conda lib python3 7 site package tensorflow python eager function py 1669 call return self call impl args kwargs opt conda lib python3 7 site package tensorflow python eager wrap function py 247 call impl args kwargs cancellation manager opt conda lib python3 7 site package tensorflow python eager function py 1687 call impl return self call with flat signature args kwargs cancellation manager opt conda lib python3 7 site package tensorflow python eager function py 1736 call with flat signature return self call flat args self capture input cancellation manager opt conda lib python3 7 site package tensorflow python eager function py 1924 call flat forward function args with tangent forward backward forward opt conda lib python3 7 site 
package tensorflow python eager function py 1448 forward self inference args self input tangent opt conda lib python3 7 site package tensorflow python eager function py 1207 forward self forward and backward function inference args input tangent opt conda lib python3 7 site package tensorflow python eager function py 1407 forward and backward function output inference args input tangent opt conda lib python3 7 site package tensorflow python eager function py 910 build function for output src graph self func graph opt conda lib python3 7 site package tensorflow python op gradient util py 552 gradientshelper to op from op colocate gradient with op func graph xs set opt conda lib python3 7 site package tensorflow python op gradient util py 125 pendingcount between op list between op colocate gradient with op opt conda lib python3 7 site package tensorflow python op control flow state py 780 maybecreatecontrolflowstate loop state addwhilecontext op between op list between op opt conda lib python3 7 site package tensorflow python op control flow state py 577 addwhilecontext outer forward ctxt forward ctxt outer context attributeerror nonetype object have no attribute outer context any tip would help
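Separately from the traceback: one commonly suggested restructuring for this kind of report is to wrap the hub signature call in a proper `tf.keras.layers.Layer` subclass instead of a `Lambda`. The sketch below is an untested assumption, not a confirmed fix for the `outer_context` error; `signature_fn` stands in for `elmo_model.signatures["tokens"]`, and the batch size is derived dynamically instead of hard-coding `batch_size`:

```python
import tensorflow as tf

class ElmoLayer(tf.keras.layers.Layer):
    """Wraps a TF-Hub signature call in a Layer instead of a Lambda.

    `signature_fn` is assumed to behave like elmo_model.signatures["tokens"];
    it is injected so the sketch stays self-contained.
    """
    def __init__(self, signature_fn, max_len, **kwargs):
        super().__init__(trainable=False, **kwargs)
        self.signature_fn = signature_fn
        self.max_len = max_len

    def call(self, tokens):
        # Derive sequence_len from the runtime batch size rather than a fixed constant.
        batch_size = tf.shape(tokens)[0]
        seq_len = tf.fill([batch_size], self.max_len)
        return self.signature_fn(tokens=tokens, sequence_len=seq_len)["elmo"]
```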
tensorflow/tensorflow
tensorflow/lite/schema/upgrade_schema.py has a docstring arg out of line
Bug
URL(s) with the issue: (L234)

Description of issue (what needs changing, clear description): the `see schema.fbs` is an odd syntax which my custom docstring parser picks up as an argument. What function does it have? Can it be moved to the line above it?

```python
def RemapOperatorType(operator_type):
  """Remap operator structs from old names to new names.

  Args:
    operator_type: String representing the builtin operator data type string.
  see schema.fbs

  Raises:
    ValueError: When the model has consistency problems.

  Returns:
    Upgraded builtin operator data type as a string.
  """
```

Parameter defined? No; there is a parameter that is "defined" that doesn't exist, namely `see schema.fbs`.

Submit a pull request? Yes, I can send a PR.
tensorflow/tensorflow
TensorBoard fails to load saved_model.pb
Bug
System information: Windows. TensorFlow version: tag v2.6.0.

Describe the current behavior: (image)

Describe the expected behavior: show saved_model.pb on TensorBoard.

Contributing (do you want to contribute a PR?): no.
tensorflow/tensorflow
PluggableDevice: TF resources are not deserializable since 2.7
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 20.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.8.0
- Python version: 3.9

Describe the current behavior: when calling `TF_TensorData` in TF >= 2.7.0 on a tensor containing a `TF_Resource`, the returned data is the resource handle itself instead of being a string serialization of it. This seems to have been caused by this commit, which removed the `TF_Resource` serialization logic from TF.

Describe the expected behavior: calling `TF_TensorData` on a tensor containing a `TF_Resource` should return some kind of cross-ABI serialization of the resource handle (i.e. the resource proto), for example. Reverting this commit produces the expected behavior from a plugin standpoint.

Contributing (do you want to contribute a PR?): I can if needed.

Standalone code to reproduce the issue:

```cpp
void* tensor_data = TF_TensorData(tensor);
size_t tensor_size = TF_TensorByteSize(tensor);
std::string serialized_data(reinterpret_cast<char*>(tensor_data), tensor_size);
auto resource_handle = std::make_shared<...>();  // proto type lost in formatting
if (!resource_handle->ParseFromString(serialized_data)) std::abort();
```
tensorflow/tensorflow
Specify what `ag__` is in the output of `tf.autograph.to_code`
Bug
URL(s) with the issue: ...

Description of issue (what needs changing): I tried to look everywhere, but I could not find what `ag__` is. How could I use the output of `tf.autograph.to_code` with the Python interpreter? I think the name `ag__` is `tf.autograph`, but this module does not have `FunctionScope`, for example. `tf.compat.v1.autograph` also does not work. I do not seem to be able to find online any explanation of this shorthand.

Parameter defined? Could this variable be defined, so we know how to interpret the output source code?
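To make the report concrete, a minimal sketch reproducing the situation: the transformed source references an `ag__` namespace (AutoGraph's internal operator module) which is not importable under any public name:

```python
import tensorflow as tf

def f(x):
    if x > 0:
        x = x * 2
    return x

code = tf.autograph.to_code(f)
print(code)  # the generated source refers to an `ag__` module that is not documented
```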
tensorflow/tensorflow
doc improvement
Bug
(L812) `dz_dy = tape.gradient(z, w)`: the variable `dz_dy` could be more intuitive if renamed to `dz_dw`.
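A minimal sketch of the pattern in question with the suggested name (the values here are illustrative, not taken from the tutorial):

```python
import tensorflow as tf

w = tf.Variable(3.0)
with tf.GradientTape() as tape:
    z = w * w  # z = w^2, so dz/dw = 2w

dz_dw = tape.gradient(z, w)  # clearer than dz_dy: it is the derivative w.r.t. w
print(dz_dw)  # 6.0
```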
tensorflow/tensorflow
Saved model evaluate and validation always give 0.50 accuracy despite reaching 0.96 on the exact same data during training
Bug
System information:
- Have I written custom code: yes
- OS platform and distribution: Windows 10 21H2
- TensorFlow installed from: binary
- TensorFlow version: v2.7.0-rc1-69-gc256c071bb2 2.7.0
- Python version: 3.9.9
- CUDA/cuDNN version: 11.3 / 8.2.1
- GPU model and memory: RTX 3080 Ti, 12 GB

Describe the current behavior: when training a normal CNN model using the horses-vs-humans dataset (binary image classification), the training accuracy reaches 0.96, but validation is forever stuck at 0.5. What's more interesting: after I load the saved model and call `model.evaluate` on the training set, it also gives 0.50, despite the training set being exactly the same one that was yielding 0.96 during the actual training.

Describe the expected behavior: the model performs normally in both training and testing (same model weights).

Contributing (do you want to contribute a PR?): no.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import urllib
import zipfile
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import *


def download_data(train_url, test_url):
    urllib.request.urlretrieve(train_url, 'horse-or-human.zip')
    local_zip = 'horse-or-human.zip'
    zip_ref = zipfile.ZipFile(local_zip, 'r')
    zip_ref.extractall('tmp/horse-or-human/')
    zip_ref.close()
    urllib.request.urlretrieve(test_url, 'testdata.zip')
    local_zip = 'testdata.zip'
    zip_ref = zipfile.ZipFile(local_zip, 'r')
    zip_ref.extractall('tmp/testdata/')
    zip_ref.close()


def solution_model():
    train_datagen = ImageDataGenerator(
        rescale=1 / 255,
        rotation_range=40,
        width_shift_range=0.2,
        height_shift_range=0.2,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True,
        fill_mode='nearest')
    test_datagen = ImageDataGenerator(rescale=1 / 255)
    train_generator = train_datagen.flow_from_directory(
        'tmp/horse-or-human/',  # this is the source directory for training images
        target_size=(300, 300),
        batch_size=16,
        class_mode='binary')
    print("validation data:")
    validation_generator = test_datagen.flow_from_directory(
        'tmp/testdata/',
        target_size=(300, 300),
        batch_size=32)
    opt = tf.keras.optimizers.Adam(learning_rate=1e-3, epsilon=1e-6)
    loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    epochs = 20
    batch_size = 16
    callbacks = [
        tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', min_delta=0,
                                         patience=5, verbose=1, mode='auto',
                                         baseline=None, restore_best_weights=True),
        tf.keras.callbacks.ReduceLROnPlateau(monitor='val_accuracy', factor=0.05,
                                             patience=5, verbose=1),
    ]
    xinput = Input((300, 300, 3))
    x = tf.cast(xinput, tf.float32)
    resnet50 = tf.keras.applications.efficientnet.EfficientNetB0(include_top=False,
                                                                 weights='imagenet')
    x = resnet50(x)
    x = Flatten()(x)
    x = Dropout(0.2)(x)
    x = Dense(256)(x)
    x = BatchNormalization(epsilon=1.001e-5)(x)
    x = Activation('relu')(x)
    xoutput = Dense(1)(x)  # from_logits, so no need for an activation
    model = tf.keras.models.Model(xinput, xoutput)
    model.compile(optimizer=opt, loss=loss, metrics=['accuracy'])
    model.summary()
    model.fit(train_generator,
              validation_data=validation_generator,
              steps_per_epoch=train_generator.samples // batch_size,
              epochs=epochs,
              callbacks=callbacks,
              verbose=1)
    return model


if __name__ == '__main__':
    download_data(train_url, test_url)  # only needs to run once
    model = solution_model()
    model.save("mymodel.h5")
```

Evaluation code:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import load_model

test_datagen = ImageDataGenerator(rescale=1 / 255)
print("validation data:")
validation_generator = test_datagen.flow_from_directory(
    'tmp/horse-or-human/',
    target_size=(300, 300),
    batch_size=32,
    class_mode='categorical')
model = load_model("mymodel.h5")
model.summary()
model.evaluate(validation_generator)  # should be 0.96
```

Other info / logs:

```
33/33 [====] - 13s 125ms/step - loss: 0.7544 - accuracy: 0.5000
```

As you can see, the loss value is actually normal: it started training with ~2 loss, and is now only 0.75.
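One thing worth checking here (an observation, not a confirmed diagnosis): training reads labels with `class_mode='binary'`, while both validation generators omit `class_mode` or use `'categorical'`, so `evaluate` receives one-hot labels of shape `(batch, 2)` against a single-logit output of shape `(batch, 1)`. A self-contained sketch of that shape mismatch (model and data are placeholders, not the script above):

```python
import numpy as np
import tensorflow as tf

# single-logit model, like the training script's final Dense(1)
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])

x = np.random.rand(8, 4).astype('float32')
y_binary = np.array([0, 1] * 4, dtype='float32')       # class_mode='binary' -> shape (8,)
y_onehot = tf.keras.utils.to_categorical(y_binary, 2)  # class_mode='categorical' -> shape (8, 2)

# Only y_binary matches the (8, 1) model output; feeding y_onehot silently
# broadcasts and the 'accuracy' metric stops measuring anything meaningful.
print(y_binary.shape, y_onehot.shape)
```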
tensorflow/tensorflow
Remove references to HCC
Bug
(L739) HCC was removed from ROCm in summer 2020; references to it are likely cruft, removable (mark).
tensorflow/tensorflow
TFLite iOS app crashes with EXC_RESOURCE when built with Xcode 13
Bug
System information:
- Have I written custom code: yes
- OS platform and distribution: iOS 15
- Mobile device (if the issue happens on a mobile device): iPhone
- TensorFlow installed from: source
- TensorFlow version: 2.6.0
- Python version: Python 3.9.7
- Bazel version: 4.2.1 (Homebrew)
- GCC/compiler version: Apple clang version 13.0.0 (clang-1300.0.29.3)
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior: the app crashes with EXC_RESOURCE when running a TFLite model on an iOS device. This problem occurs only when the app is built with Xcode 13.0/13.1/13.2 and uses the Core ML backend. The problem does not occur when the app is built with Xcode 12.

Describe the expected behavior: the app runs without crashing when built with Xcode 13.

Contributing (do you want to contribute a PR?): yes. Briefly describe your candidate solution: @freedomtan has identified the issue. FYI: "I upgraded my macOS, iOS and Xcode to 12.1, 15.2 and Xcode 13.2 yesterday. Then I spent some time profiling and found that the culprit is the Core ML delegate's `Invoke()` function (L207-L230). Adding an `@autoreleasepool` to it resolves the EXC_RESOURCE issue. Tested on iPhone 11 Pro and iPhone 13 running iOS 15.2; supposedly it works for earlier Xcode 13.x and iOS."

Standalone code to reproduce the issue: not possible, since the app needs to be built and run.

Other info / logs: the app used is located here; crash log:

```
flutter: running benchmarks... is float32 in performance mode
LI cpp/flutter/main.cc:252 run_backend4
2021-10-02 14:21:30.769776 I cpp/backend_external.cc:135 using default allocator
enable CoreML delegate: 0x2836b5a00
CoreML delegate: 76 nodes delegated out of 77 nodes with 1 partition
INFO: CoreML delegate: 76 nodes delegated out of 77 nodes with 1 partition
LI cpp/flutter/main.cc:257 run_backend4
2021-10-02 14:21:31.456619 E cpp/dataset_ade20k.cc:76 failed to list all the ground truth files in provided path; only measuring performance
LI cpp/flutter/main.cc:285 run_backend4
LI cpp/flutter/main.cc:289 run_backend4
Thread 26 name: DartWorker, queue = 'com.apple.CoreMLBatchProcessingQueue'
stop reason = EXC_RESOURCE RESOURCE_TYPE_MEMORY (limit = 2098 MB, unused = 0x0)
frame #0: 0x00000001f37d864c libsystem_platform.dylib`_platform_memmove + 76
libsystem_platform.dylib`_platform_memmove:
->  0x1f37d864c <+76>: stnp q0, q1, [x3]
    0x1f37d8650 <+80>: add  x3, x3, #0x20
    0x1f37d8654 <+84>: ldnp q0, q1, [x1]
    0x1f37d8658 <+88>: add  x1, x1, #0x20
Target 0: (Runner) stopped.
135967965 1e284466-a072-4b15-8895-9bbd6001a70a
```
tensorflow/tensorflow
Description error of non_max_suppression_with_scores
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: (please provide a link to the documentation entry, for example: ...)

Description of issue (what needs changing, clear description):
1. In line 3755, I think we should change `tf.image.non_max_suppression_padded` to `tf.image.non_max_suppression_with_scores`.
2. In line 3760, I think we should change `tf.image.non_max_suppression_padded` to `tf.image.non_max_suppression_with_scores`.
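For reference, a minimal usage sketch of the API the docstring belongs to (the box and score values are illustrative):

```python
import tensorflow as tf

boxes = tf.constant([[0., 0., 1., 1.],
                     [0., 0., 1., 1.],    # exact duplicate of the first box
                     [0., 2., 1., 3.]])   # disjoint from the others
scores = tf.constant([0.9, 0.8, 0.7])

# Unlike non_max_suppression, this variant also returns the (possibly
# soft-NMS-adjusted) scores of the selected boxes.
selected_indices, selected_scores = tf.image.non_max_suppression_with_scores(
    boxes, scores, max_output_size=3, iou_threshold=0.5)

print(selected_indices.numpy())  # [0 2]: the duplicate box is suppressed
```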
tensorflow/tensorflow
tf.keras Dense layer outputs NaN with kernel_initializer='random_normal' before fit
Bug
System information:
- Have I written custom code: no
- OS platform and distribution: Linux Ubuntu 20.04
- TensorFlow installed from: pip
- TensorFlow version: v2.7.0-rc1-69-gc256c071bb2 2.7.0
- Python version: 3.7
- CUDA/cuDNN version: 11.2

Describe the current behavior: before `fit`, I use `model.predict` with my train set. It is ensured that the train set is free of NaN values. Next I print the output of each layer; the output is:

```
0 [[[1.000000e+03 1.310000e+01 9.504200e+02 0.000000e+00 0.000000e+00 1.000000e+00]
    [0.000000e+00 1.000000e+02 1.556190e+00 0.000000e+00 0.000000e+00 1.000000e+00]
    [1.000000e+00 5.763000e+00 3.027034e+00 0.000000e+00 0.000000e+00 1.000000e+00]
    [8.930000e+01 5.760000e+01 4.266911e+00 0.000000e+00 0.000000e+00 1.000000e+00]
    [1.000000e+00 5.978000e+00 4.650479e+00 0.000000e+00 0.000000e+00 1.000000e+00]
    [8.500000e+01 8.910000e+01 7.704330e+01 0.000000e+00 0.000000e+00 1.000000e+00]]]
1 [[1.000000e+03 1.310000e+01 9.504200e+02 0.000000e+00 0.000000e+00 1.000000e+00
    0.000000e+00 1.000000e+02 1.556190e+00 0.000000e+00 0.000000e+00 1.000000e+00
    1.000000e+00 5.763000e+00 3.027034e+00 0.000000e+00 0.000000e+00 1.000000e+00
    8.930000e+01 5.760000e+01 4.266911e+00 0.000000e+00 0.000000e+00 1.000000e+00
    1.000000e+00 5.978000e+00 4.650479e+00 0.000000e+00 0.000000e+00 1.000000e+00
    8.500000e+01 8.910000e+01 7.704330e+01 0.000000e+00 0.000000e+00 1.000000e+00]]
2 [[nan nan nan nan nan nan nan nan nan nan nan nan
    nan nan nan nan nan nan nan nan nan nan nan nan]]
```

Standalone code to reproduce the issue:

```python
inputs = Input(shape=data_shape)
x = Flatten()(inputs)
x = Dense(data_classnum, kernel_initializer='random_normal')(x)
x = tf.keras.layers.Softmax()(x)
model = tf.keras.Model(inputs=inputs, outputs=x)

for idx in range(len(model.layers)):
    print(model.layers[idx].name, model.layers[idx].get_weights())
    intermediate_layer_model = tf.keras.Model(inputs=model.input,
                                              outputs=model.layers[idx].output)
    intermediate_output = intermediate_layer_model.predict(data_train_dataset)
    print(idx, intermediate_output)
```
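For comparison, a self-contained version of the layer-by-layer inspection pattern from the report (the shapes, class count and random data are hypothetical). With float32 inputs of moderate magnitude the Dense output stays finite here, which suggests the NaNs in the report depend on the actual input values or dtype:

```python
import numpy as np
import tensorflow as tf

data_shape, num_classes = (6, 6), 10   # hypothetical
x_train = np.random.rand(4, *data_shape).astype('float32')

inputs = tf.keras.Input(shape=data_shape)
h = tf.keras.layers.Flatten()(inputs)
h = tf.keras.layers.Dense(num_classes, kernel_initializer='random_normal')(h)
outputs = tf.keras.layers.Softmax()(h)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# Probe every layer's output and flag NaNs, as in the report.
for idx, layer in enumerate(model.layers):
    probe = tf.keras.Model(inputs=model.input, outputs=layer.output)
    out = probe.predict(x_train, verbose=0)
    print(idx, layer.name, "has NaN:", np.isnan(out).any())
```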
tensorflow/tensorflow
Tuner tutorial rebuilds model rather than saving it
Bug
URL(s) with the issue: ...

Description of issue (what needs changing): the bottom of this tutorial says "Re-instantiate the hypermodel and train it with the optimal number of epochs from above," but there's no reason to train the model twice when you could use a callback to save the model at the best epoch. I and others have speculated here that the call to re-train the model is supposed to be on the full data with no validation split. If that's the intention, the resulting change would be to remove `validation_split=0.2` from the final `fit` call. Otherwise, it seems you should add a `ModelCheckpoint` callback to save the best model when you first fit the model, and then not refit the model.
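A minimal sketch of the checkpoint-based alternative suggested above (the model and data are placeholders, not taken from the tutorial):

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(64, 4).astype('float32')
y = np.random.rand(64, 1).astype('float32')

model = tf.keras.Sequential([tf.keras.layers.Dense(8, activation='relu'),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse')

# Save only the weights from the epoch with the best validation loss,
# instead of refitting a second time with a hand-picked epoch count.
ckpt = tf.keras.callbacks.ModelCheckpoint(
    'best_model.h5', monitor='val_loss', save_best_only=True)

model.fit(x, y, validation_split=0.2, epochs=3, callbacks=[ckpt], verbose=0)
best_model = tf.keras.models.load_model('best_model.h5')
```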
tensorflow/tensorflow
tfjs v3.12.0 and wasm 3.12.0 for tfjs-models pose-detection MoveNet multipose (wasm)
Bug
When I try to run the demo of the tfjs-models pose-detection model with tfjs v3.12.0 and wasm 3.12.0, with configuration model=MoveNet, type=multipose and backend=wasm, I get the following error: `Error: Kernel 'Reciprocal' not registered for backend 'wasm'` (image). The demo works well for the singlepose type (lightning). Regards.
tensorflow/tensorflow
Quantized convolution layer operations in TF Lite
Bug
Hello everyone. For academic and research purposes, I am trying to understand the operations behind a quantized convolution layer in TensorFlow Lite. For this purpose I chose the EfficientNet-Lite0 model, so I downloaded the pretrained efficientnet-lite0 float32 and int8 .tflite files from the official repository and ran inference with these models using a sample .jpg image. First, I checked the models' architecture and some details using the Netron tool and decided to pick the first Conv2D layer as my case study. (netron)

I started with fp32 model inference, as I thought it would be simpler, and used the code below for preprocessing the image and for the inference of the model:

```python
import tensorflow as tf
import numpy as np
import PIL.Image as Image

MEAN_RGB = 127.0
STDDEV_RGB = 128.0
CROP_PADDING = 32
image_size = 224

def decode_and_center_crop(image, image_size, resize_method=Image.BICUBIC):
    """Crops to center of image with padding, then scales to image_size."""
    image_width, image_height = image.size
    padded_center_crop_size = int((image_size / (image_size + CROP_PADDING)) *
                                  min(image_height, image_width))
    offset_height = ((image_height - padded_center_crop_size) + 1) // 2
    offset_width = ((image_width - padded_center_crop_size) + 1) // 2
    crop_window = (offset_width, offset_height,
                   offset_width + padded_center_crop_size,
                   offset_height + padded_center_crop_size)
    resized_image = image.crop(crop_window)
    resized_image = resized_image.resize((image_size, image_size), resize_method)
    return resized_image

with open('imagenet_classes.txt') as f:
    lines = f.readlines()

image = Image.open('beagle.jpg')
resized_image = decode_and_center_crop(image, image_size)
resized_image = np.array(resized_image).astype(np.float32)
resized_image -= MEAN_RGB
resized_image /= STDDEV_RGB
resized_image = np.expand_dims(resized_image, axis=0)

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path='efficientnet-lite0-fp32.tflite',
                                  experimental_preserve_all_tensors=True)
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
interpreter.set_tensor(input_details[0]['index'], resized_image)
interpreter.invoke()

for t in interpreter.get_tensor_details():
    if t['index'] == 102:
        test = interpreter.get_tensor(t['index'])
        print(test.shape)

output_data = interpreter.get_tensor(output_details[0]['index'])
string = str(np.argmax(output_data))
for line in lines:
    if string in line:
        print('The image is a', line)
```

After that, I downloaded the Conv2D layer's parameters (kernel), implemented the fused ReLU6 Conv2D layer in plain Python, and later came back to compare results, and everything was working pretty well. So the next step was to implement the quantized Conv2D layer of efficientnet-lite0-int8, using the code below:

```python
MEAN_RGB = 127.0
STDDEV_RGB = 128.0
image_size = 224
scale = 0.012566016986966133
zero_point = 131

image = Image.open('beagle.jpg')
resized_image = decode_and_center_crop(image, image_size)
resized_image = np.array(resized_image).astype(np.float32)
resized_image -= MEAN_RGB
resized_image /= STDDEV_RGB
resized_image = np.expand_dims(resized_image, axis=0)
resized_image = resized_image / scale + zero_point
resized_image = np.array(resized_image).astype(np.uint8)

interpreter = tf.lite.Interpreter(model_path='efficientnet-lite0-int8.tflite',
                                  experimental_preserve_all_tensors=True)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
interpreter.set_tensor(input_details[0]['index'], resized_image)
interpreter.invoke()

for t in interpreter.get_tensor_details():
    if t['index'] == 102:
        test = interpreter.get_tensor(t['index'])
        print(test.shape)

output_data = interpreter.get_tensor(output_details[0]['index'])
string = str(np.argmax(output_data))
for line in lines:
    if string in line:
        print('The image is a', line)
```

I also studied the paper that provides this conv layer's implementation here (L248-L314). So, in my understanding, the quantized convolution operation is the same as the full-precision one, but you also have to take the offsets and scales into account. I was able to extract the input, output, kernel and bias scales and offsets, and managed to transform the double multiplier to a quantized multiplier and right shift, using the function defined here, to be able to do the needed operation below:

```cpp
acc = MultiplyByQuantizedMultiplierSmallerThanOne(acc, output_multiplier, output_shift);
```

which I assume is a per-axis operation. So my question here is: for this operation, is the multiplier we use a quantized multiplier that equals M0 = (S_input * S_kernel) / S_output, or is it something else?

Afterwards, I wrote a Python implementation of the above quantized Conv2D layer and checked the results using:

```python
for t in interpreter.get_tensor_details():
    if t['index'] == 102:
        test = interpreter.get_tensor(t['index'])
```

and there was a big deviation. I initially thought that the above layer is a fused ReLU6 Conv2D layer, so I was expecting the output tensor's values to be between 0 and 6, but that was not the case. Could you please provide me a more detailed description of a quantized fused ReLU6 Conv2D layer's operations?

Parameters defined: the sample image I used for inference (beagle). I am using Google Colaboratory to run the above code snippets. TensorFlow version: 2.7.0. Python version: 3.7.12. NumPy version: 1.19.5.
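To make the multiplier question concrete, here is a pure-Python sketch (following the fixed-point scheme used by gemmlowp/TFLite, as described in the Jacob et al. quantization paper) of how the real multiplier M = (S_input * S_kernel) / S_output is decomposed into a normalized int32 multiplier plus a right shift. The scale values are hypothetical, not the ones from the model above:

```python
import math

def quantize_multiplier_smaller_than_one(real_multiplier):
    """Decompose 0 < M < 1 into M ~= (q / 2**31) * 2**(-right_shift)."""
    assert 0.0 < real_multiplier < 1.0
    m, exponent = math.frexp(real_multiplier)  # real = m * 2**exponent, m in [0.5, 1)
    right_shift = -exponent
    q = int(round(m * (1 << 31)))              # fixed-point Q0.31 representation
    if q == (1 << 31):                         # handle m rounding up to exactly 1.0
        q //= 2
        right_shift -= 1
    return q, right_shift

# Hypothetical per-layer scales: M = (S_input * S_kernel) / S_output
s_input, s_kernel, s_output = 0.0125, 0.004, 0.023
real_m = s_input * s_kernel / s_output
q, shift = quantize_multiplier_smaller_than_one(real_m)

# Applying it to an int32 accumulator approximates acc * real_m using only
# integer arithmetic (the rounding here is a plain floor, unlike TFLite's
# round-to-nearest, so results can differ by one unit):
acc = 123456
scaled = (acc * q) >> (31 + shift)
```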
tensorflow/tensorflow
interpreter->Invoke() call: segmentation fault
Bug
System information:
- OS platform and distribution: Linux Ubuntu 18.04 with a USB Coral
- TensorFlow installed from (source or binary): source
- TensorFlow version (or GitHub SHA if from source): v2.5.0 / v2.6.0

Description: when I call `initTFLiteInterpreter()`, tflite works correctly: the output has all information (boxes, labels and classes), and the time of `Invoke()` is good (15 ms). But when I call `processingFrame(cv::Mat)` from another class with equivalent code, I get a segfault on `interpreter->Invoke()` from the other class:

```cpp
for (int i = 0; i < 10; i++) {
    cv::Mat testImage = cv::imread("testclass_example_frame.jpg");
    testclass->processingFrame(testImage);
}
```

I get this error with TF 2.5.0/2.6.0. Source of my program:

testclass.h:

```cpp
// build edge interpreter for Coral
void buildEdgeTpuInterpreter(const tflite::FlatBufferModel& model,
                             edgetpu::EdgeTpuContext* edgetpu_context);
// load graph to Coral
void initTFLiteInterpreter();
// process the received frame
void processingFrame(cv::Mat frame);

int num_threads = 1;
std::unique_ptr<tflite::Interpreter> interpreter;
std::shared_ptr<edgetpu::EdgeTpuContext> tpu_context;
TfLiteTensor* input_tensor;
TfLiteTensor* output_locations;
TfLiteTensor* output_classes;
TfLiteTensor* output_scores;
TfLiteTensor* num_detections;
int height;
int width;
int channels;
int row_elems;
```

testclass.cxx:

```cpp
void TestClass::buildEdgeTpuInterpreter(const tflite::FlatBufferModel& model,
                                        edgetpu::EdgeTpuContext* edgetpu_context) {
    tflite::ops::builtin::BuiltinOpResolver resolver;
    resolver.AddCustom(edgetpu::kCustomOp, edgetpu::RegisterCustomOp());
    if (tflite::InterpreterBuilder(model, resolver)(&interpreter) != kTfLiteOk) {
        std::cerr << "Failed to build interpreter." << std::endl;
        return;
    }
    // allocate tensor buffers; bind the given context with the interpreter
    interpreter->SetExternalContext(kTfLiteEdgeTpuContext, edgetpu_context);
    interpreter->SetNumThreads(1);
    if (interpreter->AllocateTensors() != kTfLiteOk) {
        std::cerr << "Failed to allocate tensors." << std::endl;
    }
}

void TestClass::initTFLiteInterpreter(void) {
    auto model = tflite::FlatBufferModel::BuildFromFile(graph.c_str());
    tpu_context = edgetpu::EdgeTpuManager::GetSingleton()->OpenDevice();
    std::cout << "Check readiness of Coral device" << std::endl;
    if (!tpu_context->IsReady()) {
        std::cout << "Coral device is not ready" << std::endl;
        throw 1;
    }
    std::cout << "Edge TPU path: " << tpu_context->GetDeviceEnumRecord().path << std::endl;
    buildEdgeTpuInterpreter(*model, tpu_context.get());

    input_tensor = interpreter->tensor(interpreter->inputs()[0]);
    output_locations = interpreter->tensor(interpreter->outputs()[0]);
    output_classes = interpreter->tensor(interpreter->outputs()[1]);
    output_scores = interpreter->tensor(interpreter->outputs()[2]);
    num_detections = interpreter->tensor(interpreter->outputs()[3]);
    height = input_tensor->dims->data[1];
    width = input_tensor->dims->data[2];
    channels = input_tensor->dims->data[3];
    row_elems = width * channels;

    for (int i = 0; i < 10; i++) {
        cv::Mat testImage = cv::imread("example_frame.jpg");
        processingFrame(testImage);
    }
    util::dual_write("CNN is ready, example frames are processed");
    m_readyFlag.store(true);
}

void TestClass::processingFrame(cv::Mat frame) {
    Q_ASSERT(q_ptr);
    const clock_t begin_time = clock();
    QMutexLocker locker(&m_mutex);
    qDebug() << "cv::Mat size:" << frame.cols << frame.rows;
    // resize for the model input
    cvtColor(frame, frame, cv::COLOR_BGR2RGB);
    cv::resize(frame, frame, cv::Size(width, height));
    if (input_tensor->type != kTfLiteUInt8 ||
        input_tensor->dims->data[0] != 1 ||
        input_tensor->dims->data[1] != height ||
        input_tensor->dims->data[2] != width ||
        input_tensor->dims->data[3] != channels) {
        std::cerr << "Input tensor shape does not match input image" << std::endl;
        return;
    }
    uint8_t* dst = input_tensor->data.uint8;
    for (int row = 0; row < height; row++) {
        memcpy(dst, frame.ptr(row), row_elems);
        dst += row_elems;
    }
    if (interpreter->Invoke() != kTfLiteOk) {
        qDebug() << "Invoke is broken";
    }
    qDebug() << "Invoke is done";
    const float* detection_locations = output_locations->data.f;
    const float* detection_classes = output_classes->data.f;
    const float* detection_scores = output_scores->data.f;
    const int num_detect = *(num_detections->data.f);
    for (int i = 0; i < num_detect; i++) {
        const float score = detection_scores[i];
        const std::string label = std::to_string((uint8_t)detection_classes[i]);
        const float ymin = detection_locations[4 * i + 0];
        const float xmin = detection_locations[4 * i + 1];
        const float ymax = detection_locations[4 * i + 2];
        const float xmax = detection_locations[4 * i + 3];
        if (score > thresholdScore) {
            std::cout << label << " score: " << score << std::endl;
            emit q_ptr->returnBoundingBoxes(frame, ymin, xmin, ymax, xmax, score, label, true);
        }
    }
    std::cout <<
```
time float clock begin time clock per sec std endl emit q ptr finishedcnnprocesse frame log check readiness of coral device edge tpu path sys bus usb device 2 1 cv mat size 640 480 invoke be do 1 score 0 902344 time 0 022215 cv mat size 640 480 invoke be do 1 score 0 902344 time 0 012465 cv mat size 640 480 invoke be do 1 score 0 902344 time 0 011841 cv mat size 640 480 invoke be do 1 score 0 902344 time 0 011659 cv mat size 640 480 invoke be do 1 score 0 902344 time 0 014413 cv mat size 640 480 invoke be do 1 score 0 902344 time 0 011502 cv mat size 640 480 invoke be do 1 score 0 902344 time 0 012496 cv mat size 640 480 invoke be do 1 score 0 902344 time 0 012136 cv mat size 640 480 invoke be do 1 score 0 902344 time 0 012898 cv mat size 640 480 invoke be do 1 score 0 902344 time 0 012129 thu dec 9 11 52 59 2021 cnn be ready example frame be process cv mat size 640 480 segmentation fault core dump gdb out 0x000000000067d47c in tflite op custom detection postprocess decodecentersizeboxe tflitecontext tflitenode tflite op custom detection postprocess opdata
tensorflow/tensorflow
Tutorial for autoencoders might be applying denoising to the wrong input data
Bug
URL(s) with the issue: second example, image denoising.

Description of issue (what needs changing): The model is trained to remove noise using x_train_noisy as input and x_train as the target output. Later, when demonstrating the denoising capability, x_test is used as input, which is the original (not noisy) test data. In the plot it is also presented as "original + noise" (noisy images) vs. the reconstruction from the noisy images. If I am not missing something, right now the autoencoder is used to denoise the already-clean images instead of denoising the noisy images, which are also shown as a comparison beside the reconstructed images.

Clear description: Use the autoencoder to denoise the actually noisy images, in the same way as it is presented below and the same way as the model is trained and validated before. Change this:

    encoded_imgs = autoencoder.encoder(x_test).numpy()
    decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()

to this:

    encoded_imgs = autoencoder.encoder(x_test_noisy).numpy()
    decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
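For reference, the intended evaluation flow can be sketched in plain NumPy (the model here is a stand-in identity function, not the tutorial's trained network; the shapes and the noise/clip step mirror the tutorial): the point is simply that the same noisy array used for training and plotting must also be the one fed to the encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean test images in [0, 1], as in the tutorial.
x_test = rng.random((8, 28, 28)).astype("float32")

# Add Gaussian noise and clip back to [0, 1], mirroring the tutorial's
# x_test_noisy construction.
noise_factor = 0.2
x_test_noisy = np.clip(x_test + noise_factor * rng.normal(size=x_test.shape), 0.0, 1.0)

def denoise(model_fn, images):
    # The demonstration must run on the *noisy* images, not on x_test.
    return model_fn(images)

# Stand-in "model" (identity), just to show which array is fed in.
decoded_imgs = denoise(lambda x: x, x_test_noisy)
```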
tensorflow/tensorflow
TensorFlow TextVectorization produces ragged tensor with no padding after loading it from pickle
Bug
This is my TextVectorization layer:

    strip_chars = string.punctuation
    strip_chars = strip_chars.replace("[", "")
    strip_chars = strip_chars.replace("]", "")

    vocab_size = 15000
    sequence_length = 20
    batch_size = 64

    def custom_standardization(input_string):
        lowercase = tf.strings.lower(input_string)
        return tf.strings.regex_replace(lowercase, "[%s]" % re.escape(strip_chars), "")

    eng_vectorization = TextVectorization(
        max_tokens=vocab_size, output_mode="int",
        output_sequence_length=sequence_length)
    spa_vectorization = TextVectorization(
        max_tokens=vocab_size, output_mode="int",
        output_sequence_length=sequence_length + 1,
        standardize=custom_standardization)

    train_eng_texts = [pair[0] for pair in train_pairs]
    train_spa_texts = [pair[1] for pair in train_pairs]
    eng_vectorization.adapt(train_eng_texts)
    spa_vectorization.adapt(train_spa_texts)

I saved it using pickle:

    pickle.dump({"config": eng_vectorization.get_config(),
                 "weights": eng_vectorization.get_weights()},
                open("english_vocab.pkl", "wb"))

But after loading it again from disk:

    from_disk = pickle.load(open("english_vocab.pkl", "rb"))
    new_eng = TextVectorization.from_config(from_disk["config"])
    new_eng.adapt(tf.data.Dataset.from_tensor_slices(["xyz"]))
    new_eng.set_weights(from_disk["weights"])

it does not behave like the original one: it outputs a RaggedTensor. How can I resolve this? Here is the link to my Google Colab.
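The config-plus-weights pickle round-trip the report relies on can be checked in isolation with a stand-in class (this is only an illustration of the save/restore pattern, not the Keras TextVectorization implementation): if the round-trip preserves both dicts intact, the ragged output must come from how the layer is rebuilt, not from pickling itself.

```python
import pickle

class FakeVectorizer:
    """Stand-in for TextVectorization: stores only a config and weights."""
    def __init__(self, **config):
        self.config = config
        self.weights = []

    def get_config(self):
        return dict(self.config)

    def get_weights(self):
        return list(self.weights)

    def set_weights(self, weights):
        self.weights = list(weights)

    @classmethod
    def from_config(cls, config):
        return cls(**config)

original = FakeVectorizer(max_tokens=15000, output_sequence_length=20)
original.set_weights([["the", "a", "of"]])

# Same pattern as the report: pickle the config and weights together.
blob = pickle.dumps({"config": original.get_config(),
                     "weights": original.get_weights()})

state = pickle.loads(blob)
restored = FakeVectorizer.from_config(state["config"])
restored.set_weights(state["weights"])
```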
tensorflow/tensorflow
Unit test tensorflow/python/ops/ragged:ragged_dispatch_test fails on AArch64
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): git HEAD
- Python version: 3.8.10
- Bazel version (if compiling from source): 3.7.2
- GCC/Compiler version (if compiling from source): 11.2.0
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
The test fails with the following error:

    FAILED testRaggedDispatch18 (op=..., kwargs={'data': tf.RaggedTensorValue(values=array([1, 2, 3, 4, 6]), row_splits=array([0, 2, 5])), 'segment_ids': tf.RaggedTensorValue(values=array([0, 1, 0, 0, 0]), row_splits=array([0, 2, 5])), 'num_segments': 2}, expected=[[7.0], [2.0]])

    Traceback (most recent call last):
      File "/home/builder/.cache/bazel/_bazel_builder/9dc2dbd69dc3512cedb530e1521082e7/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/ops/ragged/ragged_dispatch_test.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py", line 1407, in decorated
        f(self, *args, **kwargs)
      File ".../ragged_dispatch_test.runfiles/absl_py/absl/testing/parameterized.py", line 314, in bound_param_test
        return test_method(self, *testcase_params)
      File ".../ragged_dispatch_test.runfiles/org_tensorflow/tensorflow/python/ops/ragged/ragged_dispatch_test.py", line 892, in testRaggedDispatch
        assert_fn(result, expected)
      File ".../ragged_dispatch_test.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py", line 1447, in decorated
        return f(*args, **kwds)
      File ".../ragged_dispatch_test.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py", line 3163, in assertAllEqual
        np.testing.assert_array_equal(a, b, err_msg="\n".join(msgs))
      File "/usr/local/lib/python3.8/dist-packages/numpy/testing/_private/utils.py", line 930, in assert_array_equal
        assert_array_compare(operator.__eq__, x, y, err_msg=err_msg)
      File "/usr/local/lib/python3.8/dist-packages/numpy/testing/_private/utils.py", line 840, in assert_array_compare
        raise AssertionError(msg)
    AssertionError:
    Arrays are not equal
    not equal where = array([0, 1])
    not equal lhs = array([7., 2.])
    not equal rhs = array([7., 2.])
    Mismatched elements: 2 / 2 (100%)
    Max absolute difference: 8.8817842e-16
    Max relative difference: 1.26882631e-16
     x: array([7., 2.])
     y: array([7., 2.])

Describe the expected behavior
The test passes.

Contributing
- Do you want to contribute a PR? (yes/no): yes
- Briefly describe your candidate solution (if contributing): add a tolerance value to the test to allow minor differences in values.

Standalone code to reproduce the issue

    bazel test --test_timeout 300,500,-1,-1 --flaky_test_attempts=3 --test_output=all --cache_test_results=no --remote_http_cache= --remote_cache_proxy= --noremote_accept_cached --config=nonccl --build_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only --test_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only --copt=-ffp-contract=off --verbose_failures //tensorflow/python/ops/ragged:ragged_dispatch_test

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
Wrong information
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: ...

Description of issue (what needs changing): "Run a prediction on a new sentence: if the prediction is >= 0.0, it is positive; else it is negative."

Clear description: If the prediction is >= 0.5, it is positive; else it is negative.
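The corrected interpretation amounts to a simple 0.5 threshold on the model's sigmoid output (a minimal sketch; `classify` and its argument are illustrative names, not from the tutorial):

```python
def classify(prediction: float) -> str:
    # A sigmoid output is a probability, so 0.5 is the natural decision
    # boundary: >= 0.5 means positive sentiment, otherwise negative.
    return "positive" if prediction >= 0.5 else "negative"

print(classify(0.81))  # positive
print(classify(0.12))  # negative
```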
tensorflow/tensorflow
"Create an op" documentation is outdated
Bug
URL(s) with the issue: ...

Description of issue (what needs changing):

    g++ -std=c++11 ...

The option -std=c++11 is wrong; I think it needs at least -std=c++14 for TensorFlow 2.7. This issue also exists in the custom-op repo. I think C++14 as the default was introduced here; it can be seen here (L320).

"Note on gcc version >= 5: gcc uses the new C++ ABI since version 5. The binary pip packages available on the TensorFlow website are built with gcc4, which uses the older ABI. If you compile your op library with gcc >= 5, add -D_GLIBCXX_USE_CXX11_ABI=0 to the command line to make the library compatible with the older ABI."

This is long outdated: the TF pip packages have used a more recent gcc version for a long time now (I don't know exactly since when).
tensorflow/tensorflow
Unit test tensorflow/compiler/xla/tests:xla_hlo_profile_test_cpu gives illegal instruction on AArch64
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): git HEAD
- Python version: 3.8.10
- Bazel version (if compiling from source): 3.7.2
- GCC/Compiler version (if compiling from source): 11.2.0
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
The test fails: an illegal instruction is thrown.

Describe the expected behavior
The test passes.

Contributing
- Do you want to contribute a PR? (yes/no): no

Standalone code to reproduce the issue

    bazel test --test_timeout 300,500,-1,-1 --flaky_test_attempts=3 --test_output=all --cache_test_results=no --remote_http_cache= --remote_cache_proxy= --noremote_accept_cached --config=nonccl --build_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only --test_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only --copt=-ffp-contract=off --cxxopt=-ffp-contract=off --copt=-Og --copt=-ggdb --verbose_failures //tensorflow/compiler/xla/tests:xla_hlo_profile_test_cpu

Other info / logs

    $ bazel-bin/tensorflow/compiler/xla/tests/xla_hlo_profile_test_cpu
    [==========] Running 2 tests from 1 test suite.
    [----------] Global test environment set-up.
    [----------] 2 tests from HloProfileTest
    [ RUN      ] HloProfileTest.ProfileSingleComputation
    2021-11-24 17:03:03.560320: I tensorflow/compiler/xla/service/service.cc:171] XLA service 0x3c904070 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2021-11-24 17:03:03.560415: I tensorflow/compiler/xla/service/service.cc:179]   StreamExecutor device (0): Host, Default Version
    2021-11-24 17:03:03.560854: I tensorflow/compiler/xla/service/service.cc:171] XLA service 0x3c904d60 initialized for platform Interpreter (this does not guarantee that XLA will be used). Devices:
    2021-11-24 17:03:03.560885: I tensorflow/compiler/xla/service/service.cc:179]   StreamExecutor device (0): Interpreter
    Illegal instruction (core dumped)

When run under gdb:

    Thread 70 "xla_hlo_profile" received signal SIGILL, Illegal instruction.
    [Switching to Thread 0xfffec6ffcd80 (LWP 420853)]
    0x0000fffff7ff8014 in profileSingleComputation.5 ()
    (gdb) disass
    Dump of assembler code for function profileSingleComputation.5:
       0x0000fffff7ff8000 <+0>:    str  d12, [sp, #48]
       0x0000fffff7ff8004 <+4>:    stp  d11, d10, [sp, #16]
       0x0000fffff7ff8008 <+8>:    stp  d9, d8, [sp, #32]
       0x0000fffff7ff800c <+12>:   mov  x10, xzr
       0x0000fffff7ff8010 <+16>:   ldp  x9, x13, [x3, #8]
    => 0x0000fffff7ff8014 <+20>:   mrs  x8, pmccntr_el0
       0x0000fffff7ff8018 <+24>:   add  x11, x9, #0x20
       0x0000fffff7ff801c <+28>:   ldr  x9, [x3]
       0x0000fffff7ff8020 <+32>:   add  x12, x9, #0x30
       0x0000fffff7ff8024 <+36>:   add  x13, x13, #0x20
       0x0000fffff7ff8028 <+40>:   mov  x14, xzr
       0x0000fffff7ff802c <+44>:   add  x15, x11, x14
       0x0000fffff7ff8030 <+48>:   add  x16, x13, x14
       0x0000fffff7ff8034 <+52>:   ldp  q0, q1, [x15, #32]
       0x0000fffff7ff8038 <+56>:   ldp  q2, q3, [x16, #32]
       0x0000fffff7ff803c <+60>:   ldp  q4, q5, [x15]
       0x0000fffff7ff8040 <+64>:   fadd v0.4s, v0.4s, v2.4s
       0x0000fffff7ff8044 <+68>:   fadd v1.4s, v1.4s, v3.4s
       0x0000fffff7ff8048 <+72>:   ldp  q2, q3, [x16]
       0x0000fffff7ff804c <+76>:   fadd v2.4s, v4.4s, v2.4s
       0x0000fffff7ff8050 <+80>:   add  x15, x12, x14
       0x0000fffff7ff8054 <+84>:   stp  q0, q1, [x15, #48]
       0x0000fffff7ff8058 <+88>:   fadd v0.4s, v5.4s, v3.4s
       0x0000fffff7ff805c <+92>:   stp  q2, q0, [x15, #16]
       0x0000fffff7ff8060 <+96>:   add  x14, x14, #0x40
       0x0000fffff7ff8064 <+100>:  cmp  x14, #0x400
       0x0000fffff7ff8068 <+104>:  b.ne 0xfffff7ff802c
       0x0000fffff7ff806c <+108>:  add  x10, x10, #0x1
       0x0000fffff7ff8070 <+112>:  add  x11, x11, #0x400
       0x0000fffff7ff8074 <+116>:  add  x12, x12, #0x400
       0x0000fffff7ff8078 <+120>:  add  x13, x13, #0x400
       0x0000fffff7ff807c <+124>:  cmp  x10, #0x100
       0x0000fffff7ff8080 <+128>:  b.ne 0xfffff7ff8028
       0x0000fffff7ff8084 <+132>:  mov  x10, xzr
       0x0000fffff7ff8088 <+136>:  mrs  x11, pmccntr_el0
       0x0000fffff7ff808c <+140>:  mov  w12, #0xb717   // #46871
       0x0000fffff7ff8090 <+144>:  movk w12, #0x39d1, lsl #16
       0x0000fffff7ff8094 <+148>:  dup  v0.4s, w12
       0x0000fffff7ff8098 <+152>:  mov  w12, #0x25c0   // #9664
       0x0000fffff7ff809c <+156>:  movk w12, #0xa59f, lsl #16
       0x0000fffff7ff80a0 <+160>:  dup  v1.4s, w12
       0x0000fffff7ff80a4 <+164>:  mov  w12, #0x337e   // #13182
       0x0000fffff7ff80a8 <+168>:  movk w12, #0x2a61, lsl #16
       0x0000fffff7ff80ac <+172>:  dup  v2.4s, w12
       0x0000fffff7ff80b0 <+176>:  mov  w12, #0x37ff   // #14335
       0x0000fffff7ff80b4 <+180>:  movk w12, #0xaebd, lsl #16
       0x0000fffff7ff80b8 <+184>:  dup  v3.4s, w12
       0x0000fffff7ff80bc <+188>:  ldr  x12, [x5, #24]
       0x0000fffff7ff80c0 <+192>:  sub  x11, x11, x8
       0x0000fffff7ff80c4 <+196>:  add  x11, x11, x12
       0x0000fffff7ff80c8 <+200>:  str  x11, [x5, #24]
       0x0000fffff7ff80cc <+204>:  mov  w11, #0x41   // #65
       0x0000fffff7ff80d0 <+208>:  movk w11, #0x335c, lsl #16
    --Type <RET> for more, q to quit, c to continue without paging--

So the problem seems to be reading the performance counter register: the illegal instruction flagged is mrs x8, pmccntr_el0.
tensorflow/tensorflow
Unit tests in tensorflow/core/ir fail or crash depending on optimization level
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): RHEL 8.4
- Mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): git HEAD
- Python version: 3.6.8
- Bazel version (if compiling from source): 3.7.2
- GCC/Compiler version (if compiling from source): 10.3.0
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
Tests fail with errors like:

    :4:22: error: custom op 'tfg.AddV2': attribute 'mlir_device' occurs more than once in the attribute list
    %AddV2, %ctl_0 = AddV2(%Placeholder, %Placeholder_1) device("GPU") assigned_device("TPU") mlir_device("GPU") {some_attribute = "some attr"} : (tensor<*xi32>, tensor<*xi32>) -> (tensor<*xi32>)
    FileCheck error: '<stdin>' is empty.
    FileCheck command line: /home/andrew/.cache/bazel/_bazel_andrew/c61c5f84d239689cb19a72cfde16be9f/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/core/ir/tests/ops.mlir.test.runfiles/llvm-project/llvm/FileCheck /home/andrew/.cache/bazel/_bazel_andrew/c61c5f84d239689cb19a72cfde16be9f/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/core/ir/tests/ops.mlir.test.runfiles/org_tensorflow/tensorflow/core/ir/tests/ops.mlir

Describe the expected behavior
All tests pass.

Contributing
- Do you want to contribute a PR? (yes/no): yes
- Briefly describe your candidate solution (if contributing): correct improper uses of ArrayRef and StringRef.

Standalone code to reproduce the issue

    bazel test --test_timeout 300,500,-1,-1 --flaky_test_attempts=3 --test_output=all --cache_test_results=no --remote_http_cache= --remote_cache_proxy= --noremote_accept_cached --config=nonccl --build_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only --test_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only --copt=-ffp-contract=off --copt=-O0 --copt=-ggdb --verbose_failures //tensorflow/core/ir/tests:ops.mlir.test

Other info / logs

    ==================== Test output for //tensorflow/core/ir/tests:ops.mlir.test:
    -- Testing: 1 tests, 1 workers --
    FAIL: MLIR :: tests/ops.mlir (1 of 1)
    ******************** TEST 'MLIR :: tests/ops.mlir' FAILED ********************
    Script:
    --
    : 'RUN: at line 1';   /home/andrew/.cache/bazel/_bazel_andrew/c61c5f84d239689cb19a72cfde16be9f/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/core/ir/tests/ops.mlir.test.runfiles/org_tensorflow/tensorflow/core/ir/tests/tfg-opt-no-passes /home/andrew/.cache/bazel/_bazel_andrew/c61c5f84d239689cb19a72cfde16be9f/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/core/ir/tests/ops.mlir.test.runfiles/org_tensorflow/tensorflow/core/ir/tests/ops.mlir | /home/andrew/.cache/bazel/_bazel_andrew/c61c5f84d239689cb19a72cfde16be9f/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/core/ir/tests/ops.mlir.test.runfiles/org_tensorflow/tensorflow/core/ir/tests/tfg-opt-no-passes | /home/andrew/.cache/bazel/_bazel_andrew/c61c5f84d239689cb19a72cfde16be9f/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/core/ir/tests/ops.mlir.test.runfiles/llvm-project/llvm/FileCheck /home/andrew/.cache/bazel/_bazel_andrew/c61c5f84d239689cb19a72cfde16be9f/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/core/ir/tests/ops.mlir.test.runfiles/org_tensorflow/tensorflow/core/ir/tests/ops.mlir
    --
    Exit Code: 2
    Command Output (stderr):
    --
    :4:22: error: custom op 'tfg.AddV2': attribute 'mlir_device' occurs more than once in the attribute list
    %AddV2, %ctl_0 = AddV2(%Placeholder, %Placeholder_1) device("GPU") assigned_device("TPU") mlir_device("GPU") {some_attribute = "some attr"} : (tensor<*xi32>, tensor<*xi32>) -> (tensor<*xi32>)
    FileCheck error: '<stdin>' is empty.
    --
    ********************
    Failed Tests (1):
      MLIR :: tests/ops.mlir
    Testing Time: 0.11s
      Failed: 1
tensorflow/tensorflow
Update documentation of DataFormatVecPermute operation
Bug
Resolves #53157. cc @bixia1
- Fixes the invalid shape in the example.
- Updates the description of supported inputs.
tensorflow/tensorflow
Outdated documentation for DataFormatVecPermute
Bug
This came up while creating a TF-TRT converter for DataFormatVecPermute in #52942. cc @bixia1

URL(s) with the issue: Python / C++ / Java

Description of the issue (what needs changing):
- The example implies that an input of shape [2, 4] is valid, but that should be [4, 2].
- The format strings can be of size 4 or 5, e.g. "NHWC" or "NDHWC".
- The first dimension of the input shape vector can be src_format.size() or src_format.size() - 2, in which case it is assumed that the non-spatial dimensions are omitted.

As an example, here is valid Python code that is not covered by the documentation:

    import tensorflow as tf
    a = tf.constant([1, 2, 3, 4, 5])
    print(tf.raw_ops.DataFormatVecPermute(x=a, src_format="NDHWC", dst_format="NCDHW"))
    # Output: tf.Tensor([1 5 2 3 4], shape=(5,), dtype=int32)

Submitting a pull request: I'm planning to submit a PR.
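The op's permutation semantics described above can be sketched in plain Python, covering both documented input shapes: a full src_format-length vector (or a list of [begin, end] pairs, the [4, 2] case) and the reduced spatial-only form. This mirrors the behaviour described in the issue; it is not the TensorFlow kernel itself.

```python
def data_format_vec_permute(x, src_format, dst_format):
    """Permute a shape vector (or a list of per-dimension pairs) from
    src_format order to dst_format order, DataFormatVecPermute-style."""
    if len(x) == len(src_format):
        fmt_src, fmt_dst = list(src_format), list(dst_format)
    elif len(x) == len(src_format) - 2:
        # Non-spatial dimensions (N and C) are assumed to be omitted.
        strip = lambda f: [c for c in f if c not in "NC"]
        fmt_src, fmt_dst = strip(src_format), strip(dst_format)
    else:
        raise ValueError("input length must be len(src_format) or len(src_format) - 2")
    return [x[fmt_src.index(c)] for c in fmt_dst]

# Matches the tf.raw_ops example above: NDHWC -> NCDHW moves C after N.
print(data_format_vec_permute([1, 2, 3, 4, 5], "NDHWC", "NCDHW"))  # [1, 5, 2, 3, 4]

# The [4, 2] case: four [begin, end] pairs permuted NHWC -> NCHW.
print(data_format_vec_permute([[1, 2], [3, 4], [5, 6], [7, 8]], "NHWC", "NCHW"))
```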
tensorflow/tensorflow
Undefined reference to tensorflow::str_util::EndsWith
Bug
On CentOS 8: TensorFlow 2.6 installed from source, Python 3.8.2, gcc version 8.4.1 20200928.

First I built libtensorflow using the command:

    bazel build //tensorflow:libtensorflow_cc.so

Second, I built the example tensorflow/tensorflow/examples/label_image/main.cc:

    g++ -g -std=c++14 -DLINUX -fpermissive -fPIC -DHAVE_INTTYPES_H -DHAVE_NETINET_IN_H -I/usr/local/include -I/opensource/tf -I/usr/local/include/eigen3 -I/opensource/tf/third_party -I/opensource/tf/third_party/eigen3 -c src/main.cc -o src/main.o

These are the error messages:

    undefined reference to `tensorflow::str_util::EndsWith(absl::string_view, absl::string_view)'
    undefined reference to `tensorflow::Status::Status(tensorflow::error::Code, absl::string_view, std::vector<...>)'
    undefined reference to `tensorflow::strings::internal::CatPieces[abi:cxx11](std::initializer_list<...>)'

However, when I run `nm -C libtensorflow_cc.so | grep EndsWith` and `nm -C libtensorflow_framework.so`, the functions are in the libs. I have used -std=c++11 and c++17 as well; it doesn't work. Any help? Thanks.
tensorflow/tensorflow
Misleading behavior of matrix-by-vector division, in contradiction with the documentation of nn.softmax
Bug
System information
- Have I written custom code: yes
- OS Platform and Distribution: Linux 5.4.104 x86_64 with Ubuntu 18.04 Bionic
- TensorFlow version (use command below): 2.7.0
- Python version: 3.7.12

Describe the current behavior
I found a misleading behavior when using broadcasting between a matrix and a vector. I will use the example of softmax to demonstrate the issue; if it's not a bug, then I believe the following documentation is wrong.

    # random matrix
    a = tf.random.uniform((4, 4), maxval=10, dtype=tf.float64)
    # softmax by hand, as suggested in the docs
    a_softmax = tf.exp(a) / tf.reduce_sum(tf.exp(a), axis=1)
    tf.reduce_sum(a_softmax, axis=1)
    a_nn_softmax = tf.nn.softmax(a, axis=1)
    tf.reduce_sum(a_nn_softmax, axis=1)

In the above example I use tf.nn.softmax as a baseline, and then use the corresponding computation as described in the documentation, which is also the way I usually implement the softmax operation. However, as you can see, a_softmax (direct computation) and a_nn_softmax (from tf.nn.softmax) give two different outputs, which according to the documentation should not happen: I expected the same output for both matrices. After some time I realised that the problem is related to the column-wise division, which has what I believe to be a misleading behaviour:

    tf.exp(a) / tf.reduce_sum(tf.exp(a), axis=1)

In the above computation I am dividing a (4, 4) matrix by a 4-vector, which I expected to perform a column-wise division (since it is a column vector); however, it performs a row-wise division. I would like to know if this is intended behavior, and if so, I believe the documentation of nn.softmax is wrong and should be:

    tf.exp(logits) / tf.expand_dims(tf.reduce_sum(tf.exp(logits), axis=1), axis=1)

Describe the expected behavior
I expected a_softmax to equal a_nn_softmax. To achieve the expected behavior I need to explicitly add an additional dimension so the denominator vector becomes (4, 1); in code this corresponds to tf.expand_dims(tf.reduce_sum(tf.exp(logits), axis=1), axis=1). Alternatively, I can also use keepdims=True when doing the reduce_sum.

Standalone code to reproduce the issue: see above.
tensorflow/tensorflow
Segmentation fault when passing tf.string tensor arguments in TFLite
Bug
1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04
- TensorFlow installation (pip package or built from source): pip
- TensorFlow library version (if pip package) or github SHA (if built from source): at least TF 2.7 and tf-nightly are affected

2. Code

    import tensorflow as tf

    class TestModel(tf.keras.models.Model):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            self.hash = tf.lookup.StaticHashTable(
                tf.lookup.KeyValueTensorInitializer(
                    tf.constant(["testing", "this", "thing"]),
                    tf.constant([1, 2, 3])),
                default_value=-1)

        @tf.function
        def test(self, words):
            return self.hash.lookup(words)

    test_model = TestModel()
    signature = test_model.test.get_concrete_function(tf.TensorSpec([None], tf.string))
    converter = tf.lite.TFLiteConverter.from_concrete_functions([signature], test_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
    tflite_model = converter.convert()
    interpreter = tf.lite.Interpreter(model_content=tflite_model)

    # Causes a segmentation fault; running test_model.test directly works fine.
    results = interpreter.get_signature_runner()(words=tf.constant(["testing", "that", "thing"]))

3. Failure after conversion
Conversion raises the following warning but completes:

    2021-11-18 22:35:48.110926: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1880] Graph contains the following resource op(s), that use(s) resource type. Currently, the resource type is not natively supported in TFLite. Please consider not using the resource type if there are issues with either TFLite converter or TFLite runtime:
    Resource ops: HashTableV2, LookupTableFindV2, LookupTableImportV2
    Details:
      tf.HashTableV2() -> (tensor<!tf_type.resource>) : {container = "", device = "", key_dtype = !tf_type.string, shared_name = "...13", use_node_name_sharing = false, value_dtype = i32}
      tf.LookupTableFindV2(tensor<!tf_type.resource>, tensor<?x!tf_type.string>, tensor<i32>) -> (tensor<?xi32>) : {device = ""}
      tf.LookupTableImportV2(tensor<!tf_type.resource>, tensor<3x!tf_type.string>, tensor<3xi32>) -> () : {device = ""}

However, trying to run the signature fails with a segmentation fault. I think static hash table lookups have been supported in TFLite for many versions now. Any possible temporary workaround until this is fixed would be very much appreciated.
tensorflow/tensorflow
Details about tf.keras.layers.Conv2DTranspose params padding and output_padding
Bug
Description of issue (what needs changing): The documentation tells developers that output_padding has only one restriction, namely that it should be lower than the stride param, and that may not be all-inclusive.

Clear description: Differently from PyTorch's transposed-convolution implementation, the Keras API uses the padding param as "same" or "valid" instead of a concrete int value. That gives developers the false impression that they can safely delegate the padding calculation to TF. However, the padding param has to work together with the output_padding param, and output_padding cannot be set casually. I checked the source code of TF and found that when padding="same" is used, TF sets the pad to kernel_size // 2 by default. That means the output_padding param should be set to stride - 1, or mistakes may occur in some situations. Yet the documentation tells developers that output_padding has only one restriction (that it should be lower than the stride param), so it may not be all-inclusive.

Suggestion: Actually, supporting only "same" and "valid" for the padding param in this layer may have been an ill-considered decision, but since TF has to implement the Keras API, we may have two solutions. One is to document the relation between output_padding and the padding param, and explain how "same" padding is implemented in this layer. The other is to adapt the actual pad value to the output_padding param as far as possible, warn developers of the risk, and give them suggestions.
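The constraint described above can be checked with the standard transposed-convolution output-size formula, out = (in - 1) * stride - 2 * pad + kernel + output_padding. Plugging in the pad = kernel // 2 that the issue says TF uses for "same" shows that recovering out = in * stride forces output_padding = stride - 1. This is a sketch of the arithmetic (assuming an odd kernel), not TF's actual code:

```python
def deconv_out_size(in_size, kernel, stride, pad, output_padding):
    # Standard transposed-convolution output length.
    return (in_size - 1) * stride - 2 * pad + kernel + output_padding

# padding="same" with kernel 3 -> pad = 3 // 2 = 1 (per the issue's reading
# of the TF source).
in_size, kernel, stride = 4, 3, 2
pad = kernel // 2

# output_padding = stride - 1 recovers the expected "same" size in * stride.
print(deconv_out_size(in_size, kernel, stride, pad, stride - 1))  # 8 == 4 * 2

# Any other output_padding < stride gives a different size.
print(deconv_out_size(in_size, kernel, stride, pad, 0))  # 7
```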
tensorflow/tensorflow
tf.distribute.MirroredStrategy produces wrong output with tf.keras.layers.Dense
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: Linux Ubuntu 18.04
- TensorFlow installed from: binary
- TensorFlow version: 2.6.2
- Python version: 3.8
- CUDA/cuDNN version: CUDA 11.3 / cuDNN 8.2.1
- GPU model and memory: RTX 5000 x2, 16 GB

When I'm using MirroredStrategy, the model produces wrong output. [image] These are the logs, and the code looks like this: [image] It's part of a Music Transformer. The logits change when I use 2 GPUs; I don't know why it produces the wrong output shape. There is no problem with 1 GPU. I don't know why it works like this.
tensorflow/tensorflow
Unit test tensorflow/compiler/xla/service/cpu:vectorized_reduce_with_no_vector_registers_test fails on AArch64
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): git HEAD
- Python version: 3.6.9
- Bazel version (if compiling from source): 3.7.2
- GCC/Compiler version (if compiling from source): 10.3.0
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
The test fails with:

    ==================== Test output for //tensorflow/compiler/xla/service/cpu:vectorized_reduce_with_no_vector_registers_test:
    [==========] Running 1 test from 1 test suite.
    [----------] Global test environment set-up.
    [----------] 1 test from CodegenReduceOnArchWithNoVectorRegisters
    [ RUN      ] CodegenReduceOnArchWithNoVectorRegisters.Test
    tensorflow/compiler/xla/service/cpu/vectorized_reduce_with_no_vector_registers_test.cc:85: Failure
    Value of: status_or_value__7.status().ok()
      Actual: false
    Expected: true
    INTERNAL: TargetRegistry::lookupTarget failed: No available targets are compatible with triple "i686-none-android"
    [  FAILED  ] CodegenReduceOnArchWithNoVectorRegisters.Test (13 ms)
    [----------] 1 test from CodegenReduceOnArchWithNoVectorRegisters (13 ms total)
    [----------] Global test environment tear-down
    [==========] 1 test from 1 test suite ran. (13 ms total)
    [  PASSED  ] 0 tests.
    [  FAILED  ] 1 test, listed below:
    [  FAILED  ] CodegenReduceOnArchWithNoVectorRegisters.Test

     1 FAILED TEST

Describe the expected behavior
The test passes.

Contributing
- Do you want to contribute a PR? (yes/no): yes
- Briefly describe your candidate solution (if contributing): add a tag to allow the test to be excluded on AArch64 builds.

Standalone code to reproduce the issue

    bazel test --test_timeout 300,500,-1,-1 --flaky_test_attempts=3 --test_output=all --cache_test_results=no --remote_http_cache= --remote_cache_proxy= --noremote_accept_cached --config=nonccl --build_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only --test_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only --copt=-ffp-contract=off --verbose_failures //tensorflow/compiler/xla/service/cpu:vectorized_reduce_with_no_vector_registers_test

Other info / logs
The test has a hard-coded link to an x86 platform triple, but more than that, the test simply does not make sense on AArch64: it wants to test the behaviour when no vector registers are available, and that is never going to be the case on AArch64. So rather than attempting to fix the test (which would probably bypass any of its content on AArch64), just add a tag to the definition to allow it to be excluded.
tensorflow/tensorflow
XNNPACK delegate not enabled in TensorFlow Lite Python interpreter
Bug

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: Ubuntu 18.04 x64 and Raspberry Pi OS 64-bit (raspios_arm64-2021-04-09)
- Mobile device: Raspberry Pi 4
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): v2.7.0 and commit 7b290f9fd9fbf2ac4352b3cbe327e1067e5a3574
- Python version: 3.7.3 (Raspberry Pi OS 64-bit)
- Bazel version (if compiling from source): built with CMake
- GCC/compiler version (if compiling from source): 8.3.0 (Raspberry Pi OS 64-bit)
- CUDA/cuDNN version: -
- GPU model and memory: -

**Describe the current behavior**
The XNNPACK delegate is not enabled in the TensorFlow Lite Python interpreter. Building with either CMake or Bazel makes no difference: the following log line is never output:

```
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
```

Delegate lazy initialization was included in commit 3d3c6db1ca2d50f6f07722cd800144f8f736167c. For C++, if `Interpreter::AllocateTensors()` is called, it calls `ApplyLazyDelegateProviders()` to enable the XNNPACK delegate (L176). For Python, however, the XNNPACK delegate is not enabled, because `ApplyLazyDelegateProviders()` is not called in `InterpreterWrapper::AllocateTensors()` (L259).

**Describe the expected behavior**
The XNNPACK delegate is enabled in the TensorFlow Lite Python interpreter.

**Contributing**
- Do you want to contribute a PR? (yes/no): Yes
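For context, a minimal self-contained sketch (model built and converted in memory; names and shapes are illustrative, not from the report) of the Python path in question. Per this report, the Python wrapper's `allocate_tensors()` never reaches `ApplyLazyDelegateProviders()`, so the XNNPACK log line never appears, whereas the C++ equivalent would emit it:

```python
import numpy as np
import tensorflow as tf

# Trivial stand-in model, converted to TFLite entirely in memory.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# In C++, AllocateTensors() runs ApplyLazyDelegateProviders() and logs
# "Created TensorFlow Lite XNNPACK delegate for CPU."; per this report the
# Python wrapper skips that call, so no such log is printed here.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.ones(inp["shape"], dtype=np.float32))
interpreter.invoke()
out = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```

Inference itself still works on the default CPU kernels; only the delegate is missing.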
tensorflow/tensorflow
Wrong TensorFlow Lite link to hexagon_nn_skel.run v1.21: it actually points to v1.20.0.1
Bug

Hi, I want to download hexagon_nn_skel.run v1.21. In the TensorFlow Lite Hexagon guide there are links to many versions; the problem is that the v1.21 link points to v1.20. So the question is: where can I find version 1.21 of hexagon_nn_skel.run?

Also, the guide specifies that we must use the hexagon_nn library with the compatible version of the interface library, as stated in the Bazel config. From the Bazel config I understand that I must use version 20.0.3 of the interface, so where can I find version 20.0.3 of the interface library?
tensorflow/tensorflow
Crash with TensorFlow Lite
Bug

**System information**
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Android
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or GitHub SHA if from source): 2.6.0 / 2.7.0-rc0

I was using this model:

```python
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=inputshape))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(768))  # 1024, 256
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nbclasses))
model.add(Activation('softmax'))
```

Then I converted it like this:

```python
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]
```

And on Android with `tensorflow-lite:2.6.0` I get a crash:

```
java.lang.UnsatisfiedLinkError: Failed to load native TensorFlow Lite methods. Check that the correct native libraries are present, and, if using a custom native library, have been properly loaded via System.loadLibrary():
java.lang.UnsatisfiedLinkError: dlopen failed: cannot locate symbol "wcsnrtombs" referenced by "libtensorflowlite_jni.so"
```

Then I switched to `tensorflow-lite:2.7.0-rc0` and I get this crash:

```
java.lang.UnsatisfiedLinkError: Failed to load native TensorFlow Lite methods. Check that the correct native libraries are present, and, if using a custom native library, have been properly loaded via System.loadLibrary():
java.lang.UnsatisfiedLinkError: dlopen failed: cannot locate symbol "getpagesize" referenced by "libtensorflowlite_jni.so"
```

I'll now try without `tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8`.
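For what it's worth, the int16-activations/int8-weights mode is normally driven with a representative dataset for calibration; a minimal conversion sketch (tiny stand-in model and illustrative names, not the reporter's CNN), assuming TF >= 2.4:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; the reporter's CNN would go here.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(2),
])

def representative_data():
    # A handful of samples is enough for this calibration sketch.
    for _ in range(8):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# 16-bit activations with 8-bit weights, as in the report.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]
tflite_model = converter.convert()
```

The resulting `tflite_model` is a flatbuffer (`bytes`) ready to ship to the Android interpreter; the dlopen crash above is a separate runtime-linking problem, not a conversion one.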
tensorflow/tensorflow
Cannot register 2 metrics with the same name: /tensorflow/api/keras/optimizers
Bug

Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template

**System information**
- OS platform and distribution (e.g., Linux Ubuntu 16.04): -
- Mobile device: -
- TensorFlow installed from (source or binary): Docker image
- TensorFlow version: 2.6.1
- Keras version: gets the above error when imported
- Python version: 3.6
- Installed using virtualenv? pip? conda?: pip (Docker image)
- Bazel version (if compiling from source): -
- GCC/compiler version (if compiling from source): -
- CUDA/cuDNN version: 11.4
- GPU model and memory: NVIDIA A100, 40 GB

**Describe the problem**
Looks like this issue. Is it the right one? Because after applying the fix mentioned in the above link (`pip install tensorflow==2.6.2`) I get the error below:

```
root@6a893d98bbb1:/app/project# python3 download_process.py --help
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/compat/v2_compat.py:101: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
Traceback (most recent call last):
  File "download_process.py", line 13, in <module>
    from utils import <...>
  File "/app/project/utils.py", line 4, in <module>
    from object_detection.inputs import train_input
  File "/usr/local/lib/python3.6/dist-packages/object_detection/inputs.py", line 26, in <module>
    from object_detection.builders import model_builder
  File "/usr/local/lib/python3.6/dist-packages/object_detection/builders/model_builder.py", line 29, in <module>
    from object_detection.builders import matcher_builder
  File "/usr/local/lib/python3.6/dist-packages/object_detection/builders/matcher_builder.py", line 23, in <module>
    from object_detection.matchers import bipartite_matcher  # pylint: disable=g-import-not-at-top
  File "/usr/local/lib/python3.6/dist-packages/object_detection/matchers/bipartite_matcher.py", line 20, in <module>
    from tensorflow.contrib.image.python.ops import image_ops
ModuleNotFoundError: No module named 'tensorflow.contrib'
```

**Provide the exact sequence of commands/steps that you executed before running into the problem**

Build the image by running the below Dockerfile (URLs elided here as in the original report):

```dockerfile
FROM tensorflow/tensorflow:2.3.0-gpu
RUN apt-get update --fix-missing && apt-get install -y ffmpeg git git-core g++ pkg-config python3-pip unzip vim wget zip zlib1g-dev
WORKDIR /app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
RUN pip3 install "git+<...>#subdirectory=PythonAPI"
ENV TF_CPP_MIN_LOG_LEVEL=2
RUN wget <...> && unzip protoc-3.13.0-linux-x86_64.zip -d /app/protobuf
ENV PATH=$PATH:/app/protobuf/bin
RUN git clone <...> && cd /app/models/research && \
    protoc object_detection/protos/*.proto --python_out=. && \
    cp object_detection/packages/tf2/setup.py . && \
    python -m pip install .
```

Spawn a container, and inside it try to run a pre-processing script. Below are the packages used in it:

```python
import argparse
import io
import os
import subprocess

import ray
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import tensorflow as tf
from PIL import Image
from psutil import cpu_count
tf.disable_v2_behavior()

from utils import <...>
from object_detection.utils import dataset_util, label_map_util
```

**Error log / output**

```
2021-11-04 10:25:18.325660: E tensorflow/core/lib/monitoring/collection_registry.cc:77] Cannot register 2 metrics with the same name: /tensorflow/api/keras/optimizers
Traceback (most recent call last):
  File "download_process.py", line 13, in <module>
    from utils import <...>
  File "/app/project/utils.py", line 4, in <module>
    from object_detection.inputs import train_input
  File "/usr/local/lib/python3.6/dist-packages/object_detection/inputs.py", line 26, in <module>
    from object_detection.builders import model_builder
  File "/usr/local/lib/python3.6/dist-packages/object_detection/builders/model_builder.py", line 25, in <module>
    from object_detection.builders import box_predictor_builder
  File "/usr/local/lib/python3.6/dist-packages/object_detection/builders/box_predictor_builder.py", line 20, in <module>
    from object_detection.predictors import convolutional_box_predictor
  File "/usr/local/lib/python3.6/dist-packages/object_detection/predictors/convolutional_box_predictor.py", line 26, in <module>
    from object_detection.core import box_predictor
  File "/usr/local/lib/python3.6/dist-packages/object_detection/core/box_predictor.py", line 137, in <module>
    class KerasBoxPredictor(tf.keras.layers.Layer):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/lazy_loader.py", line 62, in __getattr__
    module = self._load()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/lazy_loader.py", line 45, in _load
    module = importlib.import_module(self.__name__)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/usr/local/lib/python3.6/dist-packages/keras/__init__.py", line 25, in <module>
    from keras import models
  File "/usr/local/lib/python3.6/dist-packages/keras/models.py", line 20, in <module>
    from keras import metrics as metrics_module
  File "/usr/local/lib/python3.6/dist-packages/keras/metrics.py", line 26, in <module>
    from keras import activations
  File "/usr/local/lib/python3.6/dist-packages/keras/activations.py", line 20, in <module>
    from keras.layers import advanced_activations
  File "/usr/local/lib/python3.6/dist-packages/keras/layers/__init__.py", line 23, in <module>
    from keras.engine.input_layer import Input
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/input_layer.py", line 21, in <module>
    from keras.engine import base_layer
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/base_layer.py", line 43, in <module>
    from keras.mixed_precision import loss_scale_optimizer
  File "/usr/local/lib/python3.6/dist-packages/keras/mixed_precision/loss_scale_optimizer.py", line 18, in <module>
    from keras import optimizers
  File "/usr/local/lib/python3.6/dist-packages/keras/optimizers.py", line 26, in <module>
    from keras.optimizer_v2 import adadelta as adadelta_v2
  File "/usr/local/lib/python3.6/dist-packages/keras/optimizer_v2/adadelta.py", line 22, in <module>
    from keras.optimizer_v2 import optimizer_v2
  File "/usr/local/lib/python3.6/dist-packages/keras/optimizer_v2/optimizer_v2.py", line 37, in <module>
    "/tensorflow/api/keras/optimizers", "keras optimizer usage", "method")
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/monitoring.py", line 361, in __init__
    len(labels), name, description, *labels)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/monitoring.py", line 135, in __init__
    self._metric = self._metric_methods[self._label_length].create(*args)
tensorflow.python.framework.errors_impl.AlreadyExistsError: Another metric with the same name already exists.
```

**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
`tf.keras` imports raise an AlreadyExistsError with Keras 2.7
Bug

Hello there :wave: I encountered a CI problem with a build job today that wasn't happening yesterday, so I checked the differences in terms of dependencies, and the only difference was Keras. I inspected the traceback and ended up tracking down the import from Keras that causes the trouble. I already reported this to the Keras team, but I figured it might be of importance to you folks, considering this impacts several imports from TensorFlow itself.

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: Ubuntu 20.04 LTS
- Mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.6.0
- Python version: 3.8.10
- Bazel version (if compiling from source): -
- GCC/compiler version (if compiling from source): -
- CUDA/cuDNN version: CUDA 11.4.100, cuDNN 8.2.2
- GPU model and memory: NVIDIA GeForce RTX 2070 with Max-Q Design

**Describe the current behavior**
Running the standalone code throws an AlreadyExistsError.

**Describe the expected behavior**
It should not raise any error.

**Do you want to contribute a PR? (yes/no)**: Happy to do so, but I'm not sure how to solve this.

**Standalone code to reproduce the issue**

```python
from tensorflow.keras.utils import img_to_array
```

**Other info / logs**

```
AlreadyExistsError                        Traceback (most recent call last)
<ipython-input> in <module>
----> 1 from tensorflow.keras.utils import img_to_array

~/miniconda3/lib/python3.8/site-packages/keras/api/_v2/keras/__init__.py in <module>
      8 import sys as _sys
      9
---> 10 from keras import __version__
     11 from keras.api._v2.keras import __internal__
     12 from keras.api._v2.keras import activations

~/miniconda3/lib/python3.8/site-packages/keras/__init__.py in <module>
     23
     24 # See b/110718070#comment18 for more details about this import.
---> 25 from keras import models
     26
     27 from keras.engine.input_layer import Input

~/miniconda3/lib/python3.8/site-packages/keras/models.py in <module>
     18 import tensorflow.compat.v2 as tf
     19 from keras import backend
---> 20 from keras import metrics as metrics_module
     21 from keras import optimizer_v1
     22 from keras.engine import functional

~/miniconda3/lib/python3.8/site-packages/keras/metrics.py in <module>
     24
     25 import numpy as np
---> 26 from keras import activations
     27 from keras import backend
     28 from keras.engine import base_layer

~/miniconda3/lib/python3.8/site-packages/keras/activations.py in <module>
     18
     19 from keras import backend
---> 20 from keras.layers import advanced_activations
     21 from keras.utils.generic_utils import deserialize_keras_object
     22 from keras.utils.generic_utils import serialize_keras_object

~/miniconda3/lib/python3.8/site-packages/keras/layers/__init__.py in <module>
     21
     22 # Generic layers.
---> 23 from keras.engine.input_layer import Input
     24 from keras.engine.input_layer import InputLayer
     25 from keras.engine.input_spec import InputSpec

~/miniconda3/lib/python3.8/site-packages/keras/engine/input_layer.py in <module>
     19 from keras import backend
     20 from keras.distribute import distributed_training_utils
---> 21 from keras.engine import base_layer
     22 from keras.engine import keras_tensor
     23 from keras.engine import node as node_module

~/miniconda3/lib/python3.8/site-packages/keras/engine/base_layer.py in <module>
     41 from keras.engine import node as node_module
     42 from keras.mixed_precision import autocast_variable
---> 43 from keras.mixed_precision import loss_scale_optimizer
     44 from keras.mixed_precision import policy
     45 from keras.saving.saved_model import layer_serialization

~/miniconda3/lib/python3.8/site-packages/keras/mixed_precision/loss_scale_optimizer.py in <module>
     16
     17 from keras import backend
---> 18 from keras import optimizers
     19 from keras.mixed_precision import loss_scale as keras_loss_scale_module
     20 from keras.optimizer_v2 import optimizer_v2

~/miniconda3/lib/python3.8/site-packages/keras/optimizers.py in <module>
     24 from keras.optimizer_v1 import Optimizer
     25 from keras.optimizer_v1 import TFOptimizer
---> 26 from keras.optimizer_v2 import adadelta as adadelta_v2
     27 from keras.optimizer_v2 import adagrad as adagrad_v2
     28 from keras.optimizer_v2 import adam as adam_v2

~/miniconda3/lib/python3.8/site-packages/keras/optimizer_v2/adadelta.py in <module>
     20 import numpy as np
     21 from keras import backend_config
---> 22 from keras.optimizer_v2 import optimizer_v2
     23 from tensorflow.python.util.tf_export import keras_export
     24

~/miniconda3/lib/python3.8/site-packages/keras/optimizer_v2/optimizer_v2.py in <module>
     34
     35
---> 36 keras_optimizers_gauge = tf.__internal__.monitoring.BoolGauge(
     37     "/tensorflow/api/keras/optimizers", "keras optimizer usage", "method")
     38

~/miniconda3/lib/python3.8/site-packages/tensorflow/python/eager/monitoring.py in __init__(self, name, description, *labels)
    358       labels: The label list of the new metric.
    359     """
--> 360     super(BoolGauge, self).__init__('BoolGauge', _bool_gauge_methods,
    361                                     len(labels), name, description, *labels)
    362

~/miniconda3/lib/python3.8/site-packages/tensorflow/python/eager/monitoring.py in __init__(self, metric_name, metric_methods, label_length, *args)
    133           self._metric_name, len(self._metric_methods)))
    134
--> 135     self._metric = self._metric_methods[self._label_length].create(*args)
    136
    137   def __del__(self):

AlreadyExistsError: Another metric with the same name already exists.
```
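As a quick sanity check (a common first step for this class of failure, not an official fix): the usual culprit is a `keras` wheel whose version does not match the installed `tensorflow`, and the two can be compared directly:

```python
import tensorflow as tf
import keras

# TF 2.6.x expects a keras 2.6.x wheel; a newer keras (e.g. 2.7) sitting
# next to TF 2.6 re-registers the "/tensorflow/api/keras/optimizers" gauge
# and triggers the AlreadyExistsError above.
print("tensorflow:", tf.__version__)
print("keras:     ", keras.__version__)
```

If the minor versions disagree, pinning `keras` to the matching release (e.g. `pip install "keras~=2.6.0"` alongside TF 2.6) is the usual workaround.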
tensorflow/tensorflow
Linker command not proper, as it does not look under /usr/local/lib
Bug

**URL(s) with the issue**: <...>

**Description of issue (what needs changing)**
In the Linker section, the given command is `sudo ldconfig`, but as per its description it only links the trusted directories `/lib` and `/usr/lib`. This needs to be replaced with:

```
sudo ldconfig /usr/local/lib
```

(`-v` is optional, only for verbose output), assuming that the libtensorflow files have been extracted into /usr/local.

**Clear description**
If this method is not used, then we get this error while building the example "Hello World" program:

```
libtensorflow.so.2: cannot open shared object file: No such file or directory
```
tensorflow/tensorflow
Model weights and training loss turn to NaN during and after training
Bug

I'm using Keras layers to define a multi-input, single-output model. I've checked the calculation by passing a dummy datum to the layers using the functional API and `model.predict` before calling `model.fit`, and the model works fine: it computes the output as a float and calculates the loss value as expected. However, whenever I call `model.fit`, the training loss turns to NaN. I tried to inspect the model weights before and after training, and it turns out the weights exist before calling `model.fit` but turn to NaN after fitting the model. I assume this issue is caused by the gradient calculation, but I don't know for sure. Here is a sample of what happens. I'm experiencing this issue on multiple devices: AWS SageMaker CPU and GPU instances, and my macOS local Jupyter notebook.

- TensorFlow version: 2.6.0
- TensorFlow installed from: PyPI
- Python version: 3.7.10
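Not a fix, but two stock debugging aids usually localize this kind of failure (sketched here with a stand-in single-input model; the reporter's multi-input model and data are not reproduced): `tf.debugging.enable_check_numerics()` reports the first op that produces NaN/Inf, and the `TerminateOnNaN` callback stops `fit` before every weight is overwritten:

```python
import numpy as np
import tensorflow as tf

# Report the first op that emits NaN/Inf instead of failing silently.
tf.debugging.enable_check_numerics()

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# TerminateOnNaN aborts fit() as soon as the loss becomes NaN,
# before the weights are fully corrupted.
model.fit(x, y, epochs=2, verbose=0,
          callbacks=[tf.keras.callbacks.TerminateOnNaN()])
```

On well-behaved data like the random inputs above, training simply completes and the weights stay finite; on the reporter's data, check-numerics would instead point at the offending op.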
tensorflow/tensorflow
Inconsistent results on every run with GPU
Bug

**System information**
- OS platform and distribution: Colab
- TensorFlow version: 2.6.0 (GPU)

**Describe the current behavior**
Hello, I'm using TensorFlow 2.4.0 with GPU support on my local machine to train a small CNN model. I can get the same training results when I train it in CPU mode if I fix the random seeds, but I always get inconsistent results in GPU mode. So I tested it on Colab, which has TensorFlow 2.6.0 installed by default, but I still get inconsistent results in GPU mode.

**Describe the expected behavior**
The results should be the same.

**Contributing**
- Do you want to contribute a PR? (yes/no): No

**Standalone code to reproduce the issue**
Here is the test code (Colab): run it many times with GPU enabled and you will get different results on each run.

**Other info / logs**
I found that only when I use relu, leaky relu or `tf.maximum` as the activation function do the results change on every run; if I use sigmoid, tanh or other non-ReLU-like activation functions, the results remain the same.
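For reference, the usual knobs, sketched with a stand-in model (not the reporter's CNN): `tf.keras.utils.set_random_seed` needs TF >= 2.7 and `tf.config.experimental.enable_op_determinism` TF >= 2.8/2.9; on 2.4–2.6 the `TF_DETERMINISTIC_OPS=1` environment variable (set before TF runs any op) is the equivalent switch:

```python
import numpy as np
import tensorflow as tf

# TF >= 2.9: make op kernels (including GPU reductions) deterministic.
# On older TF, export TF_DETERMINISTIC_OPS=1 before importing instead.
tf.config.experimental.enable_op_determinism()


def train_once():
    tf.keras.utils.set_random_seed(1234)  # seeds Python, NumPy and TF at once
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")
    x = np.random.rand(64, 4).astype("float32")
    y = np.random.rand(64, 1).astype("float32")
    model.fit(x, y, epochs=2, verbose=0)
    return model.get_weights()


# With seeds fixed and determinism enabled, two runs match bit-for-bit.
weights_a, weights_b = train_once(), train_once()
```

Seeds alone are enough on CPU; on GPU the determinism switch is also needed, because some CUDA kernels (notably reductions behind ReLU-like paths) use non-deterministic atomics by default.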
tensorflow/tensorflow
Shapes are incompatible when training pose classification
Bug

The error `ValueError: Shapes (16, 1) and (16, 5) are incompatible` occurs when I'm running pose_classification.ipynb on a custom dataset, at this line:

```python
history = model.fit(X_train, y_train,
                    epochs=200,
                    batch_size=16,
                    validation_data=(X_val, y_val),
                    callbacks=[checkpoint, earlystopping])
```

Update: it seems it's not a bug; the problem should be the dataset and its labeling. I'll close it now. Sorry for the inconvenience.
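For anyone hitting the same message: `(16, 1)` vs `(16, 5)` with batch size 16 and 5 classes is the classic symptom of integer class labels being fed to `categorical_crossentropy`. A minimal sketch (illustrative shapes, not the notebook's real data) of the fix, switching to the sparse loss:

```python
import numpy as np
import tensorflow as tf

num_classes = 5
model = tf.keras.Sequential([
    tf.keras.layers.Dense(num_classes, activation="softmax",
                          input_shape=(3,)),
])

x = np.random.rand(16, 3).astype("float32")
y = np.random.randint(0, num_classes, size=(16,))  # integer class ids

# Integer labels need the sparse loss; plain categorical_crossentropy
# expects one-hot labels of shape (batch, num_classes) and raises
# "Shapes (16, 1) and (16, 5) are incompatible" otherwise.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
```

Equivalently, the labels can be one-hot encoded with `tf.keras.utils.to_categorical(y, num_classes)` and `categorical_crossentropy` kept.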
tensorflow/tensorflow
`import tensorflow.experimental.numpy as tnp`
Bug
tensorflow/tensorflow
Undefined behaviour in `range`
Bug

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: all
- Mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): git HEAD
- Python version: 3.6.8
- Bazel version (if compiling from source): 3.7.2
- GCC/compiler version (if compiling from source): 10.3.0
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**
L1484 has undefined behaviour when `size` is greater than `std::numeric_limits<int64_t>::max()`. This leads to the unit test `RangeTest.testLargeStarts` failing on AArch64, where g++ implements different behaviour from x86: on x86 the result of the cast is large and +ve, on AArch64 it is large and -ve. Neither is incorrect, as the behaviour of casting into a type that cannot hold the value is undefined.

**Describe the expected behavior**
The code should be written so as to avoid relying on undefined behaviour of the source.

**Contributing**
- Do you want to contribute a PR? (yes/no): Yes
- Briefly describe your candidate solution (if contributing): test the variable `size` for exceeding the greatest possible value that can be safely cast to `int64_t`, and throw an error if found.

**Standalone code to reproduce the issue**

```
bazel test --flaky_test_attempts=3 --test_output=all --cache_test_results=no \
  --remote_http_cache=<...> --remote_cache_proxy=<...> --noremote_accept_cached \
  --config=nonccl --verbose_failures \
  //tensorflow/python/kernel_tests:init_ops_test
```

**Other info / logs**

```
ERROR: testLargeStarts (__main__.RangeTest)
RangeTest.testLargeStarts
Traceback (most recent call last):
  File ".../tensorflow/python/kernel_tests/init_ops_test.py", line 553, in testLargeStarts
    v = math_ops.range(start=1e+38, limit=1)
  File ".../tensorflow/python/util/traceback_utils.py", line 141, in error_handler
    return fn(*args, **kwargs)
  File ".../tensorflow/python/util/dispatch.py", line 1092, in op_dispatch_handler
    return dispatch_target(*args, **kwargs)
  File ".../tensorflow/python/ops/math_ops.py", line 2113, in range
    return gen_math_ops._range(start, limit, delta, name=name)
  File ".../tensorflow/python/ops/gen_math_ops.py", line 7737, in _range
    _ops.raise_from_not_ok_status(e, name)
  File ".../tensorflow/python/framework/ops.py", line 7131, in raise_from_not_ok_status
    raise core._status_to_exception(e) from None  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[9223372036854775807] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu [Op:Range]
```

(Long Bazel sandbox path prefixes abbreviated to `...` above.)
tensorflow/tensorflow
NumPy 1.21.2/1.21.3 causes feature_column unit test failures
Bug

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: Linux Ubuntu 16.04
- Mobile device: -
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): latest tests were with an Oct 25, 2021 commit on the main branch
- Python version: 3.8
- Bazel version (if compiling from source): we have Bazel 3.7.2 installed
- GCC/compiler version (if compiling from source): gcc 7
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**
This commit bumped up the NumPy version. Many CIs rely on the above file to run their nightlies. The `//tensorflow/python/feature_column:feature_column_test` unit test fails with NumPy 1.21.2 or NumPy 1.21.3 with the following error message (NumPy 1.19.5 is successful):

```
ERROR: test_fills_cols_to_vars (__main__.LinearModelTest)
LinearModelTest.test_fills_cols_to_vars

Traceback (most recent call last):
  File ".../tensorflow/python/feature_column/feature_column_test.py", line 1612, in test_fills_cols_to_vars
    self.assertAllEqual(cols_to_vars['bias'], bias)
  File ".../tensorflow/python/framework/test_util.py", line 1390, in decorated
    return f(*args, **kwds)
  File ".../tensorflow/python/framework/test_util.py", line 3055, in assertAllEqual
    a = self._GetNdArray(a)
  File ".../tensorflow/python/framework/test_util.py", line 2799, in _GetNdArray
    return np.array(a)
  File ".../tensorflow/python/ops/resource_variable_ops.py", line 534, in __array__
    return np.asarray(self.numpy(), dtype=dtype)
  File ".../tensorflow/python/ops/resource_variable_ops.py", line 674, in numpy
    raise NotImplementedError(
NotImplementedError: numpy() is only available when eager execution is enabled.
```

(Long Bazel sandbox path prefixes and CI timestamps abbreviated above.)

**Describe the expected behavior**
The unit test should pass.

**Contributing**
- Do you want to contribute a PR? (yes/no): Yes
- Briefly describe your candidate solution (if contributing): downgrade NumPy to 1.19.2 as a workaround; not sure about the root cause and fix.

**Standalone code to reproduce the issue**
Simply run the `//tensorflow/python/feature_column:feature_column_test` unit test.

**Other info / logs**
An example full failure log can be found on Intel's public CI page (subject to expiration).
tensorflow/tensorflow
Typo in the `model.compile` code example: `tf.keras.optimizers`, not `tf.keras.optimizer`
Bug

Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

**URL(s) with the issue**: the `compile` example

**Description of issue (what needs changing)**
Error in the `model.compile` code example: it should be `tf.keras.optimizers`, not `tf.keras.optimizer`. The code is misspelled; see the link.
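For completeness, the corrected spelling in runnable form (stand-in model; the docs page's actual example is not reproduced here):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(2,))])

# The module is tf.keras.optimizers (plural); "tf.keras.optimizer"
# does not exist and raises AttributeError.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="mse")
```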
tensorflowtensorflow
valueerror convert a list of object of custom class extensiontype to tensor error
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 20.04
- Mobile device: no
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.8.0-dev20211023
- Python version: 3.8.10
- Bazel version (if compiling from source): no
- GCC/compiler version (if compiling from source): no
- CUDA/cuDNN version: nil
- GPU model and memory: nil

I have implemented a simple class in Python extending tf.experimental.ExtensionType which stores three tensors. My goal is to create a list of these objects, which can then be returned from a function after converting this list into a tensor. I have tried creating a tf.Variable object, a tf.TensorArray object, and a tf.constant object; I passed a numpy object of this list into these functions too, but I get an error that it cannot be converted to a tensor, or that the class name is not recognized as a proper tf.DType.

Standalone code to reproduce the issue: (not provided)

Other info / logs:

ValueError: Attempt to convert a value (array of MatchIoU values) with an unsupported type to a Tensor.
tensorflowtensorflow
TypeError: To be compatible with tf.eager.defun, Python functions must return zero or more Tensors
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 20.04
- Mobile device: no
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.6.0
- Python version: 3.8.10
- Bazel version (if compiling from source): no
- GCC/compiler version (if compiling from source): no
- CUDA/cuDNN version: CUDA 11.2
- GPU model and memory: NVIDIA GeForce RTX 2060, 6 GB

I am implementing a function that is called by the tf.keras model.fit function while running the validation dataset after every epoch; hence the function runs in graph mode. The problem is that when I return parameters from this function, I get a TypeError stating that I should return 0 or more Tensors, whereas currently I am returning a list. Since the list contains numpy arrays, and each numpy array is a collection of a custom Python class, I have tried converting these numpy arrays into a Tensor / Variable / TensorArray (the dtype is a problem), to no avail. Hence I am unable to figure out how I should return the box objects in the form of tensors. This function runs perfectly fine in eager mode. I am wondering: is it a strict signature constraint to return Tensors (and not even numpy arrays) from a function decorated with the tf.function decorator?

Standalone code to reproduce the issue: (not provided)

Other info / logs:

TypeError                                 Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in convert(x)
    962     try:
--> 963       x = ops.convert_to_tensor_or_composite(x)
    964     except (ValueError, TypeError):

(20 frames: ops.py convert_to_tensor_or_composite / internal_convert_to_tensor_or_composite, profiler/trace.py wrapped, ops.py convert_to_tensor, tensor_conversion_registry.py _default_conversion_function, constant_op.py constant / _constant_impl, tensor_util.py make_tensor_proto, fast_tensor_util.AppendObjectArrayToTensorProto)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/compat.py in as_bytes(bytes_or_text, encoding)
     86     raise TypeError('Expected binary or unicode string, got %r' % (bytes_or_text,))
     87
TypeError: Expected binary or unicode string, got <__main__.BoundingBox object at 0x7f3f3b798a50>

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
     30 if __name__ == '__main__':
     31     test_postprocess()

<ipython-input> in test_postprocess()
     24     batch_labels = [np.random.uniform(0, 1, size=(1, 13, 13, 3, 7)).astype(np.float32),
                           np.random.uniform(0, 1, size=(1, 26, 26, 3, 7)).astype(np.float32),
                           np.random.uniform(0, 1, size=(1, 52, 52, 3, 7)).astype(np.float32)]
     25     boxes_objects = post_process(batch_labels, anchors)
     27     print(boxes_objects)

(further frames: def_function.py __call__ / _call / _initialize, function.py _get_concrete_function_internal_garbage_collected / _maybe_define_function / _create_graph_function, func_graph.py func_graph_from_py_func, nest.py map_structure / pack_sequence_as)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in convert(x)
    967           'must return zero or more Tensors; in compilation of %s, found '
    968           'return values of type %s, which is not a Tensor.' %
    969           (str(python_func), type(x)))

TypeError: To be compatible with tf.eager.defun, Python functions must return zero or more Tensors; in compilation of <...>, found return value of type <...>, which is not a Tensor.
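The constraint described above is real: tf.function outputs must be tensors or supported composite types, not arbitrary Python objects. A common workaround (a sketch, not taken from the report) is to pack each custom object into a plain numeric row before returning, and rebuild the objects outside the traced function. The names BoundingBox, pack_boxes and unpack_boxes below are hypothetical stand-ins for the reporter's classes.

```python
# Sketch: flatten custom objects into plain numeric rows before returning
# them from a @tf.function, and rebuild the objects outside the traced code.
# BoundingBox / pack_boxes / unpack_boxes are hypothetical names.

class BoundingBox:
    def __init__(self, x1, y1, x2, y2, score):
        self.coords = (x1, y1, x2, y2)
        self.score = score

def pack_boxes(boxes):
    # Inside a tf.function you would return tf.stack(rows) instead of a list;
    # a float matrix is a legal tf.function output, a list of objects is not.
    return [[*b.coords, b.score] for b in boxes]

def unpack_boxes(rows):
    # Runs eagerly, after the tf.function call has returned.
    return [BoundingBox(*row[:4], row[4]) for row in rows]

boxes = [BoundingBox(0.1, 0.2, 0.5, 0.6, 0.9)]
rows = pack_boxes(boxes)
rebuilt = unpack_boxes(rows)
```

The traced function then only ever sees (and returns) numeric data; object construction stays on the eager side.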
tensorflowtensorflow
Missing links to nightly version of BinaryCrossentropy
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: the following links throw 404 errors: L494, L593, L145, L155, L247, L252, L106, L143.

Description of issue (what needs changing): the nightly doc links are throwing 404 errors.
tensorflowtensorflow
MirroredVariable has different values on replicas (only first device is correct)
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes (two lines)
- OS platform and distribution: Ubuntu 18.04
- Mobile device: —
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version: 2.6.0 (but I also tried tf-nightly with the same result)
- Python version: 3.8
- Bazel version (if compiling from source): —
- GCC/compiler version (if compiling from source): —
- CUDA/cuDNN version: 11.2 / 8.1.1 (same behavior also with 11.4 / 8.2.4)
- GPU model and memory: 4x NVIDIA A40, 48 GB

Describe the current behavior. Minimal code example:

import tensorflow as tf
with tf.distribute.MirroredStrategy().scope():
    print(tf.Variable(1.))

The output is:

INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
MirroredVariable {
  0: <tf.Variable ... numpy=1.0>,
  1: <tf.Variable ... numpy=0.0>,
  2: <tf.Variable ... numpy=0.0>,
  3: <tf.Variable ... numpy=0.0>
}

The problem, as seen above, is that the replicas do not contain the correct variable value: all are zero-valued (the numpy=0.0 part) except on the first device. The behavior is the same with 2 or 3 devices as well, not just with all 4. The same code does produce the expected behavior on a different machine with 2x Titan RTX GPUs. This is simply the minimal reproducing example; the real-world consequence when performing multi-GPU training is that the first forward pass succeeds, but after the first SGD update things become NaN.

Describe the expected behavior. The expected output would show all four replicas with the value 1.0 (note the numpy=1.0 part):

INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
MirroredVariable {
  0: <tf.Variable ... numpy=1.0>,
  1: <tf.Variable ... numpy=1.0>,
  2: <tf.Variable ... numpy=1.0>,
  3: <tf.Variable ... numpy=1.0>
}

Contributing — do you want to contribute a PR? (yes/no): no. Briefly describe your candidate solution (if contributing): —

Standalone code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to Colab/Jupyter/any notebook):

import tensorflow as tf
with tf.distribute.MirroredStrategy().scope():
    print(tf.Variable(1.))

Other info / logs: the server in question is a Dell PowerEdge R750xa with 4x NVIDIA A40 GPUs.
tensorflowtensorflow
Failed to invoke the interpreter with error: Provided data count xxx must match the required count 3
Bug
1. System information:
- macOS Big Sur 11.5.2; iOS 14.7.1; CentOS 8.4.2105
- TensorFlow v2.6.0, custom build for the server (CentOS 8.4.2105)
- TensorFlow library: TensorFlowLiteSwift 2.6

The CentOS custom-built TensorFlow was built with the following optimization flags: -march=native -msse4.1 -msse4.2 -mssse3 -mcx16 -mpopcnt

On the CentOS 8 box I successfully trained a custom model using faster_rcnn_resnet152_v1 for its configuration. I am able to run the model on images and .mp4 files successfully. I converted the model for TFLite on the CentOS box, copied the model into the tensorflow/examples (master) lite/examples/object_detection/ios example project, and updated the Podfile to use TFLite 2.6.2.

2. Code:

import tensorflow as tf
saved_model_dir = "export_model/barbell_3/saved_model"

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # enable TensorFlow ops
]
tflite_model = converter.convert()

# Save the model
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

ModelDataHandler.swift, case 1:

// MARK: Model parameters
let batchSize = 1
let inputChannels = 3
let inputWidth = 1024
let inputHeight = 1024

Case 2:

// MARK: Model parameters
let batchSize = 1
let inputChannels = 3
let inputWidth = 1
let inputHeight = 1

3. Failure after conversion:

Case 1: the interpreter can't load the model. "Failed to invoke the interpreter with error: Provided data count 3145728 must match the required count 3." Note: 1024 * 1024 * 3 = 3145728. Note: the Netron graph entry point shows the input should be 1x1x1x3, followed by a while loop whose 3rd element is 1x1024x1024x3; presumably these are the correct parameters. However, see case 2.

Case 2: if I set inputWidth and inputHeight to 1x1, the interpreter loads, the app starts and then crashes with an out-of-memory error in a popup window. However, nothing is logged in the Xcode console, and no error is given for TensorFlow Lite:

2021-10-20 09:28:46.565241-0500 ObjectDetection[723:92493] Initialized TensorFlow Lite runtime.
INFO: Initialized TensorFlow Lite runtime.

Is it the case that the correct parameters are inputHeight = 1 and inputWidth = 1, and that my model is using too much OS memory on the phone? If that's true, are there guidelines on which model training config params are suitable for phones (both iOS and Android), that is, models with well-sized inputs and hidden layer sizes, etc.?

4. (optional) RNN conversion support
5. (optional) Any other info / logs
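The "provided data count X must match the required count Y" error above compares the flattened element count of the buffer fed to the interpreter against the product of the input tensor's shape. With a real model you would read that shape via `tf.lite.Interpreter(model_path=...).get_input_details()[0]["shape"]`; since this sketch cannot load the actual model, the shapes below are hypothetical stand-ins for the two cases in the report, and only the count arithmetic is demonstrated.

```python
# Sanity-check sketch for TFLite input sizes: the error message compares the
# number of elements you provide against the product of the input shape.
# Shapes here are hypothetical; with a real model, read them from
# interpreter.get_input_details().

from functools import reduce
from operator import mul

def element_count(shape):
    # Product of all dimensions = number of scalar values the tensor holds.
    return reduce(mul, shape, 1)

provided = element_count([1, 1024, 1024, 3])  # what the app fed (case 1)
required = element_count([1, 1, 1, 3])        # what the graph entry expects
```

For the shapes in the report this yields 3145728 vs 3, exactly the two numbers in the error message, which suggests the app is feeding a full image to the 1x1x1x3 entry tensor.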
tensorflowtensorflow
Update README.md
Bug
Updates the TF logo as per the brand guidelines. Fixes #52355.
tensorflowtensorflow
Error occurs during quantization using TFLiteConverter
Bug
1. System information: Linux Ubuntu 20.04; pip install tensorflow==2.6.0; pip install tensorflow-model-optimization==0.7.0

2. Code: (not provided)

3. Failure after conversion:

error: 'tfl.max_pool_2d' op quantization parameters violate the same scale constraint: !quant.uniform<...> vs !quant.uniform<...>

The failure is on convert().

5. (optional) Any other info / logs: I built two very similar models, but one is successfully converted to a quantized model while the other fails. The same happens with AvgPool instead of MaxPool. There is a similar issue (#46754), but I don't think concat is the problem, because one of my models that includes concat works. Thank you!
tensorflowtensorflow
meshgrid does not work with tf.function
Bug
The following code fails at runtime:

import tensorflow as tf

def f(x, y):
    return tf.meshgrid(x, y)

@tf.function
def g(x, y):
    return tf.meshgrid(x, y)

def main():
    print(f"tensorflow version {tf.version.VERSION}")
    all_values = tf.range(0., 1., 0.1)
    x = y = tf.expand_dims(all_values, 1)
    print(f(x, y))  # this works
    print(g(x, y))  # this fails

if __name__ == "__main__":
    main()

The output:

2021-10-14 12:51:02.388948: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-10-14 12:51:03.952753: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-10-14 12:51:04.028174: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-10-14 12:51:04.028234: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host; /proc/driver/nvidia/version does not exist
2021-10-14 12:51:04.028962: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
tensorflow version 2.5.1
Traceback (most recent call last):
  File "/mnt/workspace/tmp/pycharm_project_951/python/gamma/meshgrid_bug.py", line 23, in <module>
    main()
  File "/mnt/workspace/tmp/pycharm_project_951/python/gamma/meshgrid_bug.py", line 19, in main
    print(g(x, y))
  (frames in def_function.py __call__ / _call / _initialize, function.py _get_concrete_function_internal_garbage_collected / _maybe_define_function / _create_graph_function, func_graph.py func_graph_from_py_func / wrapper)
NotImplementedError: in user code:

    /mnt/workspace/tmp/pycharm_project_951/python/gamma/meshgrid_bug.py:11 g
        return tf.meshgrid(x, y)
    /usr/local/lib64/python3.7/site-packages/tensorflow/python/ops/array_ops.py:3644 meshgrid
        mult_fact = ones(shapes, output_dtype)
    /usr/local/lib64/python3.7/site-packages/tensorflow/python/ops/array_ops.py:3212 ones
        output = _constant_if_small(one, shape, dtype, name)
    /usr/local/lib64/python3.7/site-packages/tensorflow/python/ops/array_ops.py:2896 _constant_if_small
        if np.prod(shape) < 1000:
    /mnt/workspace/python/numpy/core/fromnumeric.py:3052 prod
        keepdims=keepdims, initial=initial, where=where
    /mnt/workspace/python/numpy/core/fromnumeric.py:86 _wrapreduction
        return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
    /usr/local/lib64/python3.7/site-packages/tensorflow/python/framework/ops.py:870 __array__
        " a NumPy call, which is not supported".format(self.name))

    NotImplementedError: Cannot convert a symbolic Tensor (meshgrid/Size_1:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.

This was run on the AWS DLAMI for TF 2.5.1, using the following AMI: ami-09ddcd88a97c092e5. The EC2 instance type was a t3.small.
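The traceback above shows the failure originating in tf.meshgrid's internal call to np.prod on a symbolic shape. Until that is fixed, one possible workaround (a sketch, not from the report) is to build the grids with broadcasting/tiling ops whose shapes stay symbolic, e.g. `X = tf.tile(tf.reshape(x, [1, -1]), [tf.size(y), 1])` and `Y = tf.tile(tf.reshape(y, [-1, 1]), [1, tf.size(x)])`. The plain-Python reference below pins down what meshgrid (default 'xy' indexing) computes, so any replacement can be checked against it.

```python
# Reference semantics of tf.meshgrid(x, y) with the default "xy" indexing:
#   X[i][j] = x[j]   (x repeated along rows, one row per element of y)
#   Y[i][j] = y[i]   (y repeated along columns)
# A tiling-based tf.function-safe replacement would mirror exactly this.

def meshgrid_xy(x, y):
    X = [[xj for xj in x] for _ in y]
    Y = [[yi for _ in x] for yi in y]
    return X, Y

X, Y = meshgrid_xy([0, 1, 2], [10, 20])
```

Both outputs have shape (len(y), len(x)), matching what tf.meshgrid returns for rank-1 inputs.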
tensorflowtensorflow
tf.Selu op is neither a custom op nor a flex op
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): —
- OS: Windows 10
- TensorFlow installed from (source or binary): binary
- TensorFlow version: v2.6.0-rc2-32-g919f693420e 2.6.0
- Python version: 3.8
- CUDA/cuDNN version: 11.4
- GPU model and memory: GTX 1060

Describe the current behavior: error when quantizing a model with a selu activation.

Describe the expected behavior: quantize the selu activation.

Contributing — do you want to contribute a PR? (yes/no): no, sorry.

Standalone code to reproduce the issue:

import tensorflow as tf
import pathlib

def get_model():
    input_layer = tf.keras.Input(shape=(128,), dtype=tf.float32)
    dense = tf.keras.layers.Dense(128, kernel_initializer="lecun_normal", activation="selu")(input_layer)
    return tf.keras.models.Model(inputs=input_layer, outputs=dense)

def quantize_model(model):
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]
    tflite_model = converter.convert()
    tflite_models_dir = pathlib.Path("tflite_models")
    tflite_models_dir.mkdir(exist_ok=True, parents=True)
    tflite_model_file = tflite_models_dir / "unquantizable.tflite"
    tflite_model_file.write_bytes(tflite_model)

if __name__ == "__main__":
    model = get_model()
    quantize_model(model)

Other info / logs:

error: 'tf.Selu' op is neither a custom op nor a flex op
error: failed while converting: 'main': Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: <...>
TF Select ops: Selu
Details: tf.Selu(tensor<...>) -> (tensor<...>) : {device = ""}
tensorflowtensorflow
Use vector version of the logo in README file
Bug
Please replace the raster logo in the repository README.md file with its vector (SVG) alternative, available in the brand assets. Also, I think the horizontal lockup is marked as the preferred one in the brand guidelines.
tensorflowtensorflow
Documentation problem for gradient clipping parameters
Bug
This issue reflects the documentation of the optimizer arguments. As stated in the documentation (relevant section copied and pasted below), there is no difference between clipnorm and global_clipnorm. However, looking at the code shows that these parameters do two different things. I recommend updating the documentation.

clipnorm: float or None. If set, clips gradients to a maximum norm.
clipvalue: float or None. If set, clips gradients to a maximum value.
global_clipnorm: float or None. If set, clips gradients to a maximum norm.
iterations
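The actual behavioral difference can be stated precisely: clipnorm rescales each gradient tensor independently so its own L2 norm is at most the threshold, while global_clipnorm computes one joint norm over all gradients and rescales them all by the same factor (the rule used by tf.clip_by_global_norm). A plain-Python sketch of the two rules on flat lists, for illustration only:

```python
import math

def l2(v):
    return math.sqrt(sum(x * x for x in v))

def clip_per_tensor(grads, clipnorm):
    # clipnorm: each gradient tensor is rescaled independently.
    out = []
    for g in grads:
        n = l2(g)
        scale = clipnorm / n if n > clipnorm else 1.0
        out.append([x * scale for x in g])
    return out

def clip_global(grads, global_clipnorm):
    # global_clipnorm: one scale factor computed from the joint norm of all
    # gradients, so relative magnitudes between tensors are preserved.
    gn = math.sqrt(sum(l2(g) ** 2 for g in grads))
    scale = global_clipnorm / gn if gn > global_clipnorm else 1.0
    return [[x * scale for x in g] for g in grads]

grads = [[3.0, 4.0], [6.0, 8.0]]  # per-tensor norms 5 and 10
per_tensor = clip_per_tensor(grads, 5.0)
global_clipped = clip_global(grads, 5.0)
```

With threshold 5.0, per-tensor clipping leaves the first gradient untouched and halves the second (both end with norm 5), while global clipping shrinks both by the same factor so the joint norm becomes 5; the results differ, which is exactly why the docs should not describe the two parameters identically.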
tensorflowtensorflow
Migration script inserts loss_reduction argument
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): used code from the OSS repo mentioned below
- OS platform and distribution: Ubuntu 18.04
- Mobile device: n/a
- TensorFlow installed from (source or binary): PyPI binary
- TensorFlow version: 1.15.3
- Python version: 3.7.11
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: 10.0.130 / 7.6.5
- GPU model and memory: Tesla T4, 16 GB

Describe the current behavior: after applying the migration script (tf_upgrade_v2) to the recommenders repo, the tests on this code fail with the error:

E   AttributeError: module 'tensorflow.python.keras.api._v1.keras.losses' has no attribute 'Reduction'

The cause of the error is that the script has inserted an additional argument, loss_reduction=tf.keras.losses.Reduction.SUM, at lines L176, L186 and L200.

Describe the expected behavior: it seems that the migration script should not insert the loss_reduction argument in the code, because after removing this argument the tests pass successfully.

Contributing — do you want to contribute a PR? (yes/no): no. Briefly describe your candidate solution (if contributing): n/a.

Standalone code to reproduce the issue: from a conda environment with python=3.7, cudatoolkit=10.0, cudnn=7 and tensorflow=1.15, do:

git clone <repo>
cd recommenders
tf_upgrade_v2 --intree recommenders --outtree recommenders_v2/recommenders --reportfile recommenders_report.txt
tf_upgrade_v2 --intree tests --outtree recommenders_v2/tests --reportfile tests_report.txt
mv recommenders_v2/recommenders recommenders
mv recommenders_v2/tests tests

Deactivate this env and then do:

conda create -n tf1.15 python=3.7 cudatoolkit=10.0 cudnn=7
conda activate tf1.15
pip install --upgrade pip setuptools
pip install .[gpu,dev]
pytest tests/unit/recommenders/models/test_wide_deep_utils.py::test_wide_model

Other info / logs (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached): logs.txt
tensorflowtensorflow
Unspecified data types in image_dataset_from_directory
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: the image_dataset_from_directory function. There are issues with the following entries:

labels: either "inferred" (labels are generated from the directory structure), None (no labels), or a list/tuple of integer labels of the same size as the number of image files found in the directory. Labels should be sorted according to the alphanumeric order of the image file paths (obtained via os.walk(directory) in Python).
label_mode: 'int' means that the labels are encoded as integers (e.g. for sparse_categorical_crossentropy loss); 'categorical' means that the labels are encoded as a categorical vector (e.g. for categorical_crossentropy loss); 'binary' means that the labels (there can be only 2) are encoded as float32 scalars with values 0 or 1 (e.g. for binary_crossentropy); None (no labels).

Description of issue (what needs changing): the data type for the input is not clearly described, and there is no example of what an acceptable input would be if it is not feasible to supply a data directory organized into files by class label. Currently, when I try this approach, the function does not even recognize that there are files in the directory I give it; but when I move up in the directory tree and supply the folder containing the image files instead, I get a dataset object for training a single class. The errors are not useful and are confusing: it is not clear why the function does not recognize the image files in the folder in one case and not the other. Examples would make implementing this use case much easier.
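The behavior described above matches how inference of labels works: with labels="inferred" (the default), only files that sit inside class subdirectories one level below the directory you pass are counted, and the class names are the sorted subdirectory names; pointing the function at the folder of images itself yields either zero files found or a single class. A sketch of the expected on-disk layout (directory and class names are hypothetical):

```python
# Expected layout for tf.keras.utils.image_dataset_from_directory with
# labels="inferred":
#   dataset/
#     cats/  cat001.jpg  cat002.jpg
#     dogs/  dog001.jpg
# You would then call image_dataset_from_directory("dataset").
# This sketch builds such a tree and derives the class names the same way
# the loader does: sorted names of the class subdirectories.

import pathlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp()) / "dataset"
layout = {"dogs": ["dog001.jpg"], "cats": ["cat001.jpg", "cat002.jpg"]}
for cls, names in layout.items():
    d = root / cls
    d.mkdir(parents=True)
    for n in names:
        (d / n).touch()  # empty placeholder files stand in for real images

class_names = sorted(p.name for p in root.iterdir() if p.is_dir())
file_count = sum(1 for _ in root.rglob("*.jpg"))
```

Passing `root` itself gives two inferred classes with three files; passing `root / "cats"` directly would find no class subdirectories, which is consistent with the single-class/no-files confusion reported.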
tensorflowtensorflow
Tutorial link is missing
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue:

Description of issue (what needs changing):
2021-10-11 16:41:12 — tutorial link is missing
2021-10-11 16:42:33 —
2021-10-11 16:43:19 —

Clear description: for example, why should someone use this method? How is it useful?
Correct links: is the link to the source code correct?
Parameters defined: are all parameters defined and formatted correctly?
Returns defined: are the return values defined?
Raises listed and defined: are the errors defined?
Usage example: is there a usage example? See the API guide on how to write testable usage examples.
Request visuals, if applicable: are there currently visuals? If not, will they clarify the content?
Submit a pull request: are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.
tensorflowtensorflow
RuntimeError: Encountered unresolved custom op: reorderaxe. See instructions. Node number 284 (reorderaxe) failed to prepare.
Bug
System information:
- OS platform and distribution: Colab
- TensorFlow installed from (source or binary): Colab
- TensorFlow version (or GitHub SHA if from source): converted the model in TF 1.x; interpreter in latest tf-nightly

I converted a model to TFLite using the following code:

co = tf.compat.v1.lite.TFLiteConverter.from_saved_model(model_name)
co.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
co.allow_custom_ops = True

There were no errors in converting, but when I try to run the TFLite model through the interpreter, I encounter this error:

    916         self._interpreter.Invoke()
    917
    918     def reset_all_variables(self):

RuntimeError: Encountered unresolved custom op: reorderaxe. See instructions: <...> Node number 284 (reorderaxe) failed to prepare.

Could you please tell me how to solve this?
tensorflowtensorflow
Pruned TFLite model has invalid model identifier after post-training quantization
Bug
1. System information: Linux Ubuntu 20.04; pip tensorflow package version 2.6.0

2. Code: reproducible code in this gist: <...>

3. Description: after pruning a Keras model according to the TF colab, the post-training-quantized TFLite model can't be invoked. The quantization works for the un-pruned model.

interpreter = tf.lite.Interpreter(model_path=model_tf_pruned)

leads to the error:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 interpreter = tf.lite.Interpreter(model_path=model_tf_pruned)

/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/interpreter.py in __init__(self, model_path, model_content, experimental_delegates, num_threads, experimental_op_resolver_type, experimental_preserve_all_tensors)
    365           _interpreter_wrapper.CreateWrapperFromFile(
    366               model_path, op_resolver_id, custom_op_registerers_by_name,
--> 367               custom_op_registerers_by_func, experimental_preserve_all_tensors))
    368       if not self._interpreter:
    369         raise ValueError('Failed to open {}'.format(model_path))

ValueError: Model provided has model identifier <...>, should be "TFL3"
tensorflowtensorflow
Model Maker object detection tutorial bug
Bug
I ran the Model Maker object detection tutorial via Colab. However, a problem occurred in:

model = object_detector.create(train_data, model_spec=spec, batch_size=8, train_whole_model=True, validation_data=validation_data)

Epoch 1/50

UnknownError                              Traceback (most recent call last)
<ipython-input> in <module>
----> 1 model = object_detector.create(train_data, model_spec=spec, batch_size=8, train_whole_model=True, validation_data=validation_data)

(9 frames)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     58     ctx.ensure_initialized()
     59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:
     62     if name is not None:

UnknownError: 2 root error(s) found.
  (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
         [[node keras_layer/StatefulPartitionedCall/StatefulPartitionedCall/StatefulPartitionedCall/efficientnet-lite0/StatefulPartitionedCall/stem/conv2d/Conv2D]]
  (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
         [[node keras_layer/StatefulPartitionedCall/StatefulPartitionedCall/StatefulPartitionedCall/efficientnet-lite0/StatefulPartitionedCall/stem/conv2d/Conv2D]]
         [[func/cond/then/_3378/.../_6828/_56]]
0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_96849]

Function call stack: train_function -> train_function

Please solve this problem.
tensorflowtensorflow
Issue with tf.saved_model.save
Bug
I cannot understand why this serious issue has not been paid any attention for such a long time.
tensorflowtensorflow
Incorrect links under API reference
Bug
URL(s) with the issue (please provide a link to the documentation entry): API reference.

Description of issue (what needs changing): under the description of the API reference, there is a link to a Google doc for "TensorFlow 2 API Docs advice". This doc has some links which are not working and may create confusion for any beginner who wants to start contributing to the docs. Links like:

1. The link to the TensorFlow Docs task tracker: this doc is not currently under consideration, so it, along with its related info, should be removed to avoid any confusion.
2. The docstring link: this link is not working; it needs to be linked correctly.
3. The links given for example guides/tutorials: incorrect links.
4. The link for "Generating Python API documentation": incorrect link.

So, overall, from a beginner's point of view, I can say that this FAQ doc for the API is not very useful or helpful, so I would suggest either updating it or removing it.
tensorflow/tensorflow
Mac M1, TF 2.5.0: no layers for IntegerLookup, Normalization, StringLookup
Bug
System information: OS platform and distribution: macOS Big Sur version 11.3.1 with Mac M1 chip. TensorFlow installed from: -. TensorFlow version: TF 2.5. Python version: 3.8. Debug output: `import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)` prints `unknown 2.5.0`. Describe the current behavior: `from tensorflow.keras.layers import IntegerLookup` (likewise for Normalization and StringLookup) fails with `ImportError: cannot import name 'IntegerLookup' from 'tensorflow.keras.layers' (/Users/username/miniforge3/envs/env/lib/python3.8/site-packages/tensorflow/keras/layers/__init__.py)`. Standalone code to reproduce the issue: follow the linked instructions to install TF 2.5 on a Mac, then run the imports.
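In TF 2.5 these layers were still exported under the experimental preprocessing namespace; they only moved to tf.keras.layers in 2.6. A hedged workaround sketch that imports from whichever location exists:

```python
try:
    # TF 2.6+ location
    from tensorflow.keras.layers import IntegerLookup, Normalization, StringLookup
except ImportError:
    # TF 2.5 fallback: the layers lived under the experimental namespace
    from tensorflow.keras.layers.experimental.preprocessing import (
        IntegerLookup, Normalization, StringLookup)

# Quick smoke test: vocabulary indices start at 1 (index 0 is reserved
# for out-of-vocabulary values by default).
lookup = IntegerLookup(vocabulary=[12, 36, 1138, 42])
print(lookup([12, 1138, 42]).numpy())
```

The try/except keeps the same script working on both the M1 build of 2.5 and later releases.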
tensorflow/tensorflow
Creating a dataset from a large NumPy array via from_tensor_slices crashes without any error message or warning
Bug
I use TF 2.6, and when I try to create a dataset from a large NumPy array (~10 GB) via `from_tensor_slices`, the code breaks when I try to train via `fit`, or even just attempt to iterate over the dataset. The code just breaks and exits; no warning, no error message, nothing. I have not found any similar issue mentioned anywhere else. What is the actual limitation here? I have over 128 GB of RAM, and since the code breaks already at the dataset-iteration step, it is surely unrelated to my GPU and its memory (24 GB). The NumPy array loads into memory without issue, but once the iterator causes the execution of `from_tensor_slices`, the code breaks shortly after. What are workable solutions here: a DataGenerator? Creating TFRecord files? I tried to avoid the TFRecord route because it appears very poorly documented how to create such binary files. As no warning or error message is output, there are no logs to show at all. The problem can be reproduced with a large NumPy array (random data works) and the following code: `print("loading training data"); train_data = np.load(os.path.join(os.getcwd(), "source_dataset", f"train_data_{id}_features.npy")); train_targets = np.load(os.path.join(os.getcwd(), "source_dataset", f"train_data_{id}_targets.npy")); print("training dataset construction"); train_ds = tf.data.Dataset.from_tensor_slices((train_data, train_targets)); for x, y in train_ds: ... # do something`
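One likely cause is that `from_tensor_slices` embeds the whole array as a single constant in the graph, and serialized graphs hit protobuf's 2 GB limit. A workaround sketch using `from_generator`, with small random arrays standing in for the ~10 GB .npy files from the report:

```python
import numpy as np
import tensorflow as tf

# Stand-ins for the large arrays loaded from disk in the report.
train_data = np.random.rand(1000, 8).astype("float32")
train_targets = np.random.rand(1000, 1).astype("float32")

def gen():
    # Yield one example at a time, so TF never has to embed the whole
    # array as one graph constant.
    for x, y in zip(train_data, train_targets):
        yield x, y

train_ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(8,), dtype=tf.float32),
        tf.TensorSpec(shape=(1,), dtype=tf.float32),
    ),
).batch(32)

for x, y in train_ds.take(1):
    print(x.shape, y.shape)  # (32, 8) (32, 1)
```

The generator streams from the in-memory arrays here, but the same shape works for memory-mapped `np.load(..., mmap_mode="r")` arrays, which avoids holding the data in RAM at all.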
tensorflow/tensorflow
Different predictions on GPU between tf.keras.models.load_model and tf.saved_model.load
Bug
System information: OS platform and distribution: Linux Ubuntu 20.04. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.4.1. Python version: 3.8.2. CUDA/cuDNN version: 11.0. GPU model and memory: GeForce GTX 1050, 4 GB. Describe the current behavior: I have a CNN-based regression model. Surprisingly, the model predicts different outputs when loaded with `tf.keras.models.load_model` versus `tf.saved_model.load`. However, this only occurs when I use a GPU, and not always, but in about 5% of inference runs. On CPU they both produce the same output, always. The difference is rather small (appearing after 1e-7) but still big for my use case. Describe the expected behavior: irrespective of the loading method, and whether a GPU is used for inference or not, the predicted values should always be the same. Standalone code to reproduce the issue: the linked Colab uses different TF and Python versions, but the issue still exists. Other info / logs: AssertionError: 0.12652352 and 0.12652355.
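Differences around 1e-7 sit at the edge of float32 precision, and GPU reductions are not guaranteed to be bit-reproducible, so exact-equality assertions on model outputs will fail intermittently. A tolerance-based comparison, using the two values from the report, avoids the spurious AssertionError:

```python
import numpy as np

a = np.float32(0.12652352)  # e.g. output from tf.keras.models.load_model
b = np.float32(0.12652355)  # e.g. output from tf.saved_model.load

# rtol=1e-5 comfortably covers float32 accumulation-order noise while
# still catching genuinely different predictions.
np.testing.assert_allclose(a, b, rtol=1e-5)
print("outputs match within tolerance")
```

This does not make the GPU deterministic; it only makes the comparison reflect what float32 arithmetic can actually promise.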
tensorflow/tensorflow
Error
Bug
System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 16.04 and 18.04. Mobile device: Samsung Galaxy M12. TensorFlow installed from: I used a venv. TensorFlow version: none (read below). Python version: 3.6. GPU model and memory: MediaTek and 6 GB. Version command: `(venv) om@localhost:~/twitchtubemain$ python3 -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`. Describe the current behavior: `(venv) om@localhost:~/twitchtubemain$ pip install --upgrade tensorflow` fails with `Collecting tensorflow / Cache entry deserialization failed, entry ignored / Could not find a version that satisfies the requirement tensorflow (from versions: ) / No matching distribution found for tensorflow`. Describe the expected behavior: TensorFlow gets installed using pip. Contributing: no.
tensorflow/tensorflow
A bug in tf.keras.layers.TextVectorization when built from saved config and weights
Bug
I have tried writing a Python program to save a tf.keras.layers.TextVectorization layer to disk and load it back, following the linked answer. The TextVectorization layer built from the saved config outputs a vector of the wrong length when the arg output_sequence_length is not None and output_mode='int'. For example, if I set output_sequence_length=10 and output_mode='int', calling the layer on a text should output a vector of length 10 (see `vectorizer` and `new_v2` in the code below). However, if the TextVectorization's output_mode is set from the saved config, it doesn't output a vector of length 10; it is actually 9, the real token length of the sentence. It seems output_sequence_length is not set successfully (see `new_v1` below). The interesting thing is that I compared `from_disk['config']['output_mode']` with `'int'` and they are equal. The repro: fit a layer `vectorizer = tf.keras.layers.TextVectorization(max_tokens=None, standardize='lower_and_strip_punctuation', split='whitespace', output_mode='int', output_sequence_length=10)` with `vectorizer.adapt(text_dataset.batch(64))` on `text_dataset = tf.data.Dataset.from_tensor_slices(["i like natural language processing", "you like computer vision", "i like computer games and computer science"])`; inspect `vectorizer.get_vocabulary()`, `get_config()` and `get_weights()`; then `pickle.dump({'config': vectorizer.get_config(), 'weights': vectorizer.get_weights()}, open('model/tv_layer.pkl', 'wb'))`. Later, `from_disk = pickle.load(open('model/tv_layer.pkl', 'rb'))` and create `new_v1 = tf.keras.layers.TextVectorization(max_tokens=None, standardize='lower_and_strip_punctuation', split='whitespace', output_mode=from_disk['config']['output_mode'], output_sequence_length=from_disk['config']['output_sequence_length'])`; you have to call adapt with some dummy data (a bug in Keras): `new_v1.adapt(tf.data.Dataset.from_tensor_slices(["xyz"]))`, then `new_v1.set_weights(from_disk['weights'])`. Create `new_v2` identically except passing the literal `output_mode='int'`. For the test sentence "jack likes computer science, computer games and foreign languages": `vectorizer` outputs `tf.Tensor([ 1  1  3  1  3 11 12  1 10  0], shape=(10,), dtype=int64)`; `new_v1` outputs `tf.Tensor([ 1  1  3  1  3 11 12  1 10], shape=(9,), dtype=int64)`; `new_v2` outputs `tf.Tensor([ 1  1  3  1  3 11 12  1 10  0], shape=(10,), dtype=int64)`; and `print(from_disk['config']['output_mode'] == 'int')` prints True. Does anyone know why? I have also raised the same issue in the Keras repo.
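A workaround that sidesteps hand-picking individual config entries (a sketch, assuming the pickled dict layout from the report) is to rebuild the layer with `TextVectorization.from_config`, which restores output_sequence_length together with everything else:

```python
import pickle
import tensorflow as tf

# Fit and serialize, as in the report.
vectorizer = tf.keras.layers.TextVectorization(
    output_mode="int", output_sequence_length=10)
vectorizer.adapt(["i like natural language processing",
                  "you like computer vision"])
blob = pickle.dumps({"config": vectorizer.get_config(),
                     "weights": vectorizer.get_weights()})

# Rebuild from the full config instead of passing arguments one by one.
from_disk = pickle.loads(blob)
new_v = tf.keras.layers.TextVectorization.from_config(from_disk["config"])
new_v.adapt(["xyz"])  # dummy adapt so the internal tables exist
new_v.set_weights(from_disk["weights"])

out = new_v(["you like computer vision"])
print(out.shape)  # padded to (1, 10)
```

Since `from_config` forwards the complete saved config, the rebuilt layer pads to the saved output_sequence_length instead of the raw token count.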
tensorflow/tensorflow
Bug
Bug
tensorflow/tensorflow
Shared embedding columns
Bug
tensorflow/tensorflow
A few documentation errors and omissions for "Install TensorFlow for C"
Bug
The page "Install TensorFlow for C" has several errors and omissions: 1. the first link, "bindings for other languages", is a dead link (error 404); 2. it says that it is built nightly, but the second link, "libtensorflow nightly GCS bucket", shows that it was last built about a year ago; 3. it looks as though it was actually last built about a month ago, and it would be good to have the date it was last built somewhere on the page; 4. for supported platforms it says that it works for macOS version 10.12.6 (Sierra) or higher, but the download file says that it is for an x86_64 (i.e. Intel) CPU. The latest Macs have used the M1 processor for almost a year now; does it support the M1 Macs?
tensorflow/tensorflow
UNIMPLEMENTED: Deterministic GPU implementation of unsorted segment reduction op not available, with AUC metric and TF_DETERMINISTIC_OPS
Bug
System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: openSUSE Leap 15.2. TensorFlow installed from (source or binary): binary. TensorFlow version: v2.6.0-rc2-32-g919f693420e 2.6.0. Python version: 3.9.6. CUDA/cuDNN version: 11.2 and 8.1.1, I believe. GPU model and memory: Quadro RTX 6000. Reproduced on Colab with GPU. Describe the current behavior: Traceback (most recent call last), starting at File "/home/ber/proj/bug.py", line 12, in model.fit(x=data, y=data), through keras/engine/training.py (fit), tensorflow/python/eager/def_function.py (__call__), tensorflow/python/eager/function.py (_call_flat), and ending at tensorflow/python/eager/execute.py, line 59, in quick_execute: `tensorflow.python.framework.errors_impl.UnimplementedError: 2 root error(s) found. (0) Unimplemented: Deterministic GPU implementation of unsorted segment reduction op not available. [[node UnsortedSegmentSum (defined at /home/ber/proj/bug.py:12) ]] [[assert_less_equal/Assert/AssertGuard/pivot_f/_13/_39]] (1) Unimplemented: Deterministic GPU implementation of unsorted segment reduction op not available. [[node UnsortedSegmentSum (defined at /home/ber/proj/bug.py:12) ]] 0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_513]` Function call stack: train_function -> train_function. Describe the expected behavior: no error; this works in TF 2.5.0. Standalone code to reproduce the issue: `import os; os.environ["TF_DETERMINISTIC_OPS"] = "true"; import tensorflow as tf; data = tf.ones((1, 1)); layer = tf.keras.layers.Input(shape=(1,)); model = tf.keras.Model(inputs=layer, outputs=layer); model.compile(loss="categorical_crossentropy", metrics=["AUC"]); model.fit(x=data, y=data)`
tensorflow/tensorflow
GPU delegate issue
Bug
System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Android OS Oreo (API level 27). Mobile device: surveillance camera with a QCS605 Qualcomm chip. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.5. Python version: 3.8. GPU model and memory: Adreno GPU 615. Exact code to reproduce: `final Interpreter.Options options = new Interpreter.Options(); CompatibilityList compatList = new CompatibilityList(); if (compatList.isDelegateSupportedOnThisDevice()) { // if the device has a supported GPU, add the GPU delegate; GpuDelegate.Options delegateOptions = compatList.getBestOptionsForThisDevice(); GpuDelegate gpuDelegate = new GpuDelegate(delegateOptions); options.addDelegate(gpuDelegate); Log.i(logTag, "run using GPU delegate"); } else { // if the GPU is not supported, run on numThreads threads; options.setNumThreads(numThreads).setAllowFp16PrecisionForFp32(allowFp16PrecisionForFp32).setUseNNAPI(useNNAPI); Log.i(logTag, "run using CPU delegate"); }` Describe the problem: trying to run a MoViNet TFLite model on a vendor camera with an Adreno GPU 615. The network runs on the CPU without any issue, but it can't be run on the GPU. I'm using the Java TFLite runtime, version 2.5. I tried the nightly version, but that didn't work either. Logs: `I/Adreno: Qualcomm build: e3ea17d, I2eff518144; I/Adreno: Build config: S L 4.0.10 AArch64; I/Zygote64: android.hardware.configstore.V1_0.ISurfaceFlingerConfigs hasWideColorDisplay retrieved: 0; E/libEGL: call to OpenGL ES API with no current context (logged once per thread)`
tensorflow/tensorflow
Init node head/predictions/class_string_lookup/table_init/LookupTableImportV2 doesn't exist in graph
Bug
System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Debian 10. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.6.0. Python version: 3.6.9. Describe the current behavior: after the usual dso_loader warnings about libcudart.so.11.0 and libcuda.so.1 (no GPU on this host) and the oneDNN/MLIR startup messages, conversion fails with: `2021-09-11 12:21:11.961714: E tensorflow/core/grappler/grappler_item_builder.cc:669] Init node head/predictions/class_string_lookup/table_init/LookupTableImportV2 doesn't exist in graph`, then `Traceback (most recent call last): File "tflite.py", line 29, in <module>: model = converter.convert()`, through tensorflow/lite/python/lite.py (convert, _convert_and_export_metrics, _freeze_concrete_function), tensorflow/lite/python/convert_phase.py (wrapper), tensorflow/python/framework/convert_to_constants.py (convert_variables_to_constants_v2_as_graph, _run_inline_graph_optimization), and ending at tensorflow/python/grappler/tf_optimizer.py, line 58, in OptimizeGraph: `ValueError: Failed to import metagraph, check error log for more info.` Describe the expected behavior: works the same as `converter = tf.lite.TFLiteConverter.from_saved_model('saved_model', signature_keys=[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY])`. Contributing: no. Standalone code to reproduce the issue: `import tensorflow as tf; import tensorflow_text as text; model = tf.saved_model.load('saved_model'); concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]; concrete_func.inputs[0].set_shape([1]); converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func]); # converter = tf.lite.TFLiteConverter.from_saved_model('saved_model', signature_keys=[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]); converter.optimizations = [tf.lite.Optimize.DEFAULT]; converter.inference_type = tf.float32; converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]; model = converter.convert(); tf.io.write_file('guesslang.tflite', model)` Other info / logs: saved_model.zip attached.
tensorflow/tensorflow
tf.keras.layers.MaxPooling3D crashes
Bug
System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.6.0. Python version: 3.6.8. Bazel version: n/a. GCC version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: tf.keras.layers.MaxPooling3D crashes when pool_size contains 0, and outputs an all-inf tensor when pool_size contains negative values. Describe the expected behavior: expect a ValueError to be thrown if the input pool_size contains zero or negative values. Standalone code to reproduce the issue. If the pool_size has a 0: `import tensorflow as tf; pool_size = (2, 2, 0); layer = tf.keras.layers.MaxPooling3D(strides=1, pool_size=pool_size); input_tensor = tf.random.uniform([3, 4, 10, 11, 12], dtype=tf.float32); res = layer(input_tensor)` crashes with output "Floating point exception (core dumped)". If the pool_size has a negative value: `pool_size = (2, 2, -2)` with the same code, then `print(res)`; the output is a tensor with shape (3, 3, 9, 14, 12) and all -inf values.
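The missing check the report asks for can be sketched without TensorFlow at all; a hypothetical `validated_pool_size` helper like the one below could guard layer construction until Keras validates the argument itself:

```python
def validated_pool_size(pool_size):
    """Return pool_size as a tuple of ints, or raise ValueError on
    zero/negative entries, mirroring the check the report requests."""
    if any(int(p) <= 0 for p in pool_size):
        raise ValueError(
            f"pool_size must contain only positive integers, got {pool_size}")
    return tuple(int(p) for p in pool_size)

print(validated_pool_size((2, 2, 2)))  # (2, 2, 2)
try:
    validated_pool_size((2, 2, 0))
except ValueError as e:
    print("rejected:", e)
```

Calling it before `MaxPooling3D(pool_size=validated_pool_size(ps))` turns the core dump into an ordinary Python exception.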
tensorflow/tensorflow
Bug loading old versions of TF and Keras
Bug
System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Windows 10. Mobile device: no. (Versions mentioned: 4 1 4 2.) I want to run a bioinformatics library, a deep-learning model, which supports particular versions. The previous model I wanted to run showed errors in TensorFlow as well, but this model's version requirements also can't be met. I was using Google Colab. I think the issue is with the integration with Google Colab, or maybe with TensorFlow only. Thanks.
tensorflow/tensorflow
Custom metrics doc doesn't mention the assign method
Bug
URL(s) with the issue: custom metrics. Description of issue (what needs changing): First, a small issue: `reset_states` is deprecated, so it should be `reset_state`, shouldn't it? Furthermore, this method is not required when creating a custom metric. Second issue: it is not necessary to subclass Metric (cf. the linked #issuecomment-505098700); this should be written in the documentation. I am sure many users would be interested in the easy way, i.e. passing a function `def my_metric(y_true, y_pred)` as a metric. Third issue: the doc shows an example with `assign_add` when updating the metric; however, it may not fit users' needs. For example, say I want to compute a peak signal-to-noise ratio metric (a kind of logarithmic MSE). If I follow the doc, I would write something like this: `import tensorflow as tf; from tensorflow import keras; import math; class PSNR(keras.metrics.Metric): """Peak signal-to-noise ratio metric."""; def __init__(self, name="psnr", **kwargs): super().__init__(name=name, **kwargs); self.psnr = self.add_weight(name="psnr", initializer="zeros"); def update_state(self, y_true, y_pred, sample_weight=None): y_pred, y_true = tf.cast(y_pred, tf.float32), tf.cast(y_true, tf.float32); mse = tf.reduce_mean(keras.metrics.mean_squared_error(y_true, y_pred)); psnr = 10.0 * tf.divide(tf.math.log(tf.divide(10000**2, mse)), math.log(10)); self.psnr.assign_add(psnr); def result(self): return self.psnr`. As a high-level API user, I am not familiar with tf.Variable methods; thus I used `assign_add` like in the documentation instead of `assign`, because I had no clue `assign` existed, and it took me some time to figure out why my metric was so high and increasing so fast. Clear description: the documentation should mention the easy way to create a metric, i.e. passing a function `def my_metric(y_true, y_pred)`; it should be much the same as the custom-losses section. I think an example with `assign` would benefit users who need to implement a metric that is not a sum. Correct links: yes. Parameters defined: yes. Returns defined: yes. Raises listed and defined: yes. Usage example: yes. Request visuals, if applicable: no. Submit a pull request: don't know.
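The "easy way" the report asks the docs to mention can be sketched as a plain function passed to `model.compile(metrics=[...])`; Keras wraps such a function in a Mean metric, so there is no add_weight/assign bookkeeping to get wrong. The `max_val` of 10000 mirrors the report's PSNR formula:

```python
import math
import tensorflow as tf

def psnr_metric(y_true, y_pred, max_val=10000.0):
    # PSNR = 10 * log10(max_val^2 / MSE); larger is better.
    y_true = tf.cast(y_true, tf.float32)
    y_pred = tf.cast(y_pred, tf.float32)
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    return 10.0 * tf.math.log(max_val ** 2 / mse) / math.log(10.0)

y_true = tf.constant([[0.0, 100.0]])
y_pred = tf.constant([[0.0, 200.0]])
# mse = 5000, so PSNR = 10 * log10(1e8 / 5000) ~= 43.01
print(float(psnr_metric(y_true, y_pred)))
```

Used as `model.compile(..., metrics=[psnr_metric])`, Keras averages the per-batch values, which is exactly the behavior the subclass in the report tried (and failed) to get with `assign_add`.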
tensorflow/tensorflow
tf.linalg.diag issue
Bug
System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.6.0. Python version: 3.8.11. Describe the current behavior: when providing the tf.linalg.diag function an input of length 32, the function returns this error: `tensorflow.python.framework.errors_impl.InvalidArgumentError: ConcatOp: Dimensions of inputs should match: shape[0] = [32,32] vs. shape[1] = [1,1,1] [Op:ConcatV2] name: concat`. Describe the expected behavior: the function should return a tensor of shape (n, n) no matter how large the tensor is. Do you want to contribute a PR: no. Standalone code to reproduce the issue: `import tensorflow as tf; import numpy as np; input_data = np.ones((33,)); model_input = tf.keras.layers.Input(dtype=tf.float32); out = tf.linalg.diag(model_input); model = tf.keras.Model(inputs=model_input, outputs=out); pred = model.predict(input_data)`
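For comparison, calling tf.linalg.diag directly on a length-n vector does return the expected (n, n) matrix; the failure in the report arises from routing a symbolic Keras Input (where the length becomes the batch dimension) through the op, not from the op itself. A minimal eager check:

```python
import tensorflow as tf

x = tf.ones([33])
d = tf.linalg.diag(x)  # scatter x onto the main diagonal of a 33x33 matrix

print(d.shape)                  # (33, 33)
print(float(tf.reduce_sum(d)))  # 33.0, since only the diagonal is non-zero
```

Inside a functional model, `model.predict(np.ones((33,)))` treats 33 as the batch size, so each example is a scalar and the op no longer sees a length-33 vector, which is what triggers the confusing ConcatOp error.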