Dataset columns:
- repository — string (156 distinct values)
- issue title — string (length 1–1.01k)
- labels — string (8 distinct values)
- body — string (length 1–270k)
tensorflow/tensorflow
Intermittent very long latency in XRT operations
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): master
- Python version: N/A
- Bazel version (if compiling from source): 0.16.1
- GCC/compiler version (if compiling from source): 7.2.0
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A
- Exact command to reproduce: N/A

**Describe the problem**
On occasion I've seen XRT operations take significantly longer than I'd expect, and when they do it is usually into the tens of seconds. Attaching gdb while this is happening reveals that most of the time is spent in TF graph-level optimization passes, particularly EncapsulateXlaComputation, which doesn't really make any sense in conjunction with XRT ops. I don't have an exact reproducer for when this happens, but it appears to be more likely to happen upon the first XRTAllocate after reconnecting after a client crash.

I believe I managed to capture a log at VLOG level 2 while this was happening (see gist at …); hopefully that should aid in figuring out what's going on. At the end of the gist it tries to dump the serialized representation of the ~500 MB model into the log, so I interrupted it there. I can try to reproduce it again and let it finish dumping the serialized model if that would be helpful.

cc @michaelisard
tensorflow/tensorflow
tf.shape output is wrong when the net input shape is changed during import
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 7
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 1.8.0 (still present in 2.6.0; code updated accordingly)
- Python version: 3.6.6
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A
- Bazel version: N/A
- Mobile device: N/A
- Exact command to reproduce: see below

**Describe the problem**
`tf.shape` returns an inconsistent result when a network is imported from file and its input is changed during the import. Say I create a simple net with a batch size of 128 and save it to disk:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

batch_size = 128
x = tf.placeholder(tf.float32, shape=(batch_size, 10), name="x")
b = tf.Variable(tf.zeros(10))
y = tf.add(x, b, name="y")

saver = tf.train.Saver()
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    saver.save(sess, "foo")
```

Later, I reload this model and replace the input placeholder with a more flexible one with an undefined batch size:

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, shape=(None, 10))
restorer = tf.train.import_meta_graph("foo.meta", input_map={"x:0": x})
y = tf.get_default_graph().get_tensor_by_name("y:0")
y_shape = tf.shape(y)

sess = tf.Session()
restorer.restore(sess, "foo")
y_, y_shape_ = sess.run([y, y_shape], {x: np.zeros((1, 10), np.float32)})
assert np.all(y_.shape == y_shape_), "inconsistent size"
```

This results in an `AssertionError: inconsistent size`, because `y_shape` still returns the old batch size of 128, despite the output `y` being computed, as expected, with a batch size of 1.
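For intuition, here is a minimal pure-Python sketch (hypothetical names, not TensorFlow API) of the static-vs-dynamic shape distinction the report hinges on: `tf.shape` should report the runtime shape of the fed tensor, but the behavior above looks as if the shape had been folded to the constant recorded in the saved meta graph.

```python
# Hypothetical sketch: a "shape" op either computed from the tensor actually
# fed (the correct behavior) or constant-folded from the saved static shape
# (the stale behavior observed in the report).
saved_static_shape = (128, 10)   # recorded when the graph was exported
fed_shape = (1, 10)              # shape of the placeholder's actual input

def shape_op(runtime_shape, folded_constant=None):
    # A constant-folded shape op ignores its runtime input entirely.
    return folded_constant if folded_constant is not None else runtime_shape

expected = shape_op(fed_shape)                                      # (1, 10)
observed = shape_op(fed_shape, folded_constant=saved_static_shape)  # (128, 10)
assert expected == (1, 10)
assert observed == (128, 10)   # mirrors the stale 128 reported by tf.shape
```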
tensorflow/tensorflow
set_intersection doesn't work as expected (TensorFlow 1.6.0)
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.6.0
- Python version: 3.6.5 (Anaconda)
- Bazel version (if compiling from source): 0.11
- GCC/compiler version (if compiling from source): gcc 5.4.0
- CUDA/cuDNN version: CUDA 9.1 and cuDNN 7.1.4
- GPU model and memory: GPU model with 16 GB memory
- Exact command to reproduce:

```python
import collections
import tensorflow as tf

a = collections.OrderedDict([((0, 0), 1), ((0, 3), 1), ((1, 1), 1), ((1, 3), 1),
                             ((2, 0), 1), ((2, 1), 1)])
a = tf.SparseTensor(list(a.keys()), list(a.values()), dense_shape=[3, 4])
b = collections.OrderedDict([((0, 0), 1), ((0, 3), 1), ((1, 1), 1), ((1, 2), 1),
                             ((1, 3), 1), ((2, 0), 1), ((2, 1), 1)])
b = tf.SparseTensor(list(b.keys()), list(b.values()), dense_shape=[3, 4])

with tf.Session() as sess:
    print(sess.run(a))
    print(sess.run(b))
    print(sess.run(tf.contrib.metrics.set_intersection(a, b)))
```

**Describe the problem**
For `set_intersection`, to my own understanding, the output should be:

```bash
SparseTensorValue(indices=array([[0, 0], [0, 3], [1, 1], [1, 3], [2, 0], [2, 1]]), values=array([1, 1, 1, 1, 1, 1], dtype=int32), dense_shape=array([3, 4]))
```

However, I get the following result:

```bash
SparseTensorValue(indices=array([[0, 0], [1, 0], [2, 0]]), values=array([1, 1, 1], dtype=int32), dense_shape=array([3, 1]))
```

I don't understand this; please correct me if I'm not fully understanding the point.

Best regards,
Orlando
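For what it's worth, the observed output is consistent with how a `tf.sets`-style intersection treats a rank-2 `SparseTensor`: each row is a set whose elements are the stored *values*, not the column indices. Since every stored value above is 1, each row's set is {1}, and the row-wise intersection has one element per row. A small pure-Python sketch of that reading (my interpretation, not the TF implementation):

```python
# Each row of a [3, 4] SparseTensor is treated as a set of its stored values.
a_rows = [[1, 1], [1, 1], [1, 1]]        # values of a, grouped by row
b_rows = [[1, 1], [1, 1, 1], [1, 1]]     # values of b, grouped by row

# Row-wise set intersection over the last dimension.
intersection = [sorted(set(ra) & set(rb)) for ra, rb in zip(a_rows, b_rows)]
print(intersection)  # [[1], [1], [1]] -> 3 rows of 1 element: dense_shape [3, 1]
```

This would explain the `dense_shape=[3, 1]` result: the intersection of identical one-element sets per row.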
tensorflow/tensorflow
Slot variables used in an optimizer must have the same shape as the variable to be optimized
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu, macOS
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.4
- Python version: 2.7.5
- Bazel version (if compiling from source): 0.9.0
- GCC/compiler version (if compiling from source): 4.8.5
- CUDA/cuDNN version: 8
- GPU model and memory: CPU
- Exact command to reproduce: when an optimizer is created

**Describe the problem**
I wrote a new optimizer to try some strategies for applying gradients. Some slots are used in the implementation, but I found that slots must have the same shape as the variable to be optimized; otherwise an error is thrown with the message "shapes do not match" when I try to save the model. The problem happens in version 1.4. I tried the same code in version 1.2 and it works correctly, so I want to figure out the reason.

**Source code / logs**

```python
 1 def _create_slots(self, var_list):
 2     for v in var_list:
 3         with ops.colocate_with(v):
 4             dtype = v.dtype.base_dtype
 5             v_shape = v.get_shape()
 6             if v_shape.is_fully_defined():
 7                 init = init_ops.constant_initializer(self._initial_accumulator_value, dtype=dtype)
 8             else:
 9                 init_constant = gen_array_ops.fill(array_ops.shape(v), self._initial_accumulator_value)
10                 init = math_ops.cast(init_constant, dtype)
11
12         self._get_or_make_slot_with_initializer(
13             v, init, v_shape, dtype, "accumulator", self._name)
14         self._get_or_make_slot_with_initializer(
15             v, init_ops.zeros_initializer(self._global_step_dtype),
16             v_shape, self._global_step_dtype, "accumulator_decay_power", self._name)
```

In line 16, if I change `v_shape` to another value, an error occurs. For example, `v_shape` is `[512, 256]`, but only `[512]` is needed to create this slot.
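A minimal pure-Python sketch (hypothetical helper name, not the TF saver code) of the save-time constraint described above: the saver pairs each slot with its variable and rejects a slot whose shape does not match.

```python
def check_slot_shape(var_shape, slot_shape):
    # Sketch of the "shapes do not match" check hit when saving the model.
    if var_shape != slot_shape:
        raise ValueError(f"shapes do not match: {var_shape} vs {slot_shape}")

check_slot_shape((512, 256), (512, 256))   # same shape: accepted

error = None
try:
    check_slot_shape((512, 256), (512,))   # the reporter's [512] decay slot
except ValueError as e:
    error = str(e)
print(error)  # shapes do not match: (512, 256) vs (512,)
```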
tensorflow/tensorflow
Provide a list of supported XLA operations, like TensorFlow Lite
Bug
TensorFlow Lite provides a list of currently supported ops (here), and I wonder if XLA could also have such a list. It's rough to develop and train a model with the full TensorFlow Python API, only to get stuck during AOT compilation because of missing op kernels in the tf2xla bridge.
tensorflow/tensorflow
Preprocessor definitions clash with glog
Bug
CHECK macros from `platform/logging.h` leak out into core public headers, which clashes with users of glog. One path is through `core/framework/allocator.h`:

```
In file included from external/org_tensorflow/tensorflow/core/platform/logging.h:25:0,
                 from external/org_tensorflow/tensorflow/core/framework/allocator.h:26,
                 from external/org_tensorflow/tensorflow/core/framework/tensor.h:21,
                 from external/org_tensorflow/tensorflow/core/public/session.h:23,
<snip>
external/org_tensorflow/tensorflow/core/platform/default/logging.h:224:0: note: this is the location of the previous definition
 #define CHECK_OP_LOG(name, op, val1, val2)
```

This one is easy to fix by moving the method implementations to `allocator.cc`. Another path is through `core/lib/core/status.h`:

```
In file included from external/org_tensorflow/tensorflow/core/platform/logging.h:25:0,
                 from external/org_tensorflow/tensorflow/core/lib/core/status.h:24,
                 from external/org_tensorflow/tensorflow/core/lib/core/errors.h:19,
                 from external/org_tensorflow/tensorflow/core/framework/tensor_shape.h:24,
                 from external/org_tensorflow/tensorflow/core/framework/tensor.h:24,
                 from external/org_tensorflow/tensorflow/core/public/session.h:23,
```

This one is more work to fix because `TF_CHECK_OK` is used all over the code, but it does not seem to be necessary for `core/public`.
tensorflow/tensorflow
tensorflow_model_optimization.quantization.keras.quantize_model raises an exception on its documentation example
Bug
- Issue type: Bug
- Have you reproduced the bug with TensorFlow Nightly? Yes
- Source: binary
- TensorFlow version: 2.16.1
- Custom code: Yes
- OS platform and distribution: Linux Ubuntu 22.04
- Mobile device: No response
- Python version: 3.11.5
- Bazel version: No response
- GCC/compiler version: No response
- CUDA/cuDNN version: 12.3
- GPU model and memory: NVIDIA GeForce RTX 4060, 8 GB

**Current behavior**
The example program in the documentation of `tensorflow_model_optimization.quantization.keras.quantize_model` does not run successfully:

```python
import keras
from keras import layers
import tensorflow_model_optimization as tfmot

model = tfmot.quantization.keras.quantize_model(
    keras.Sequential([
        layers.Dense(10, activation='relu', input_shape=(100,)),
        layers.Dense(2, activation='sigmoid')
    ])
)
```

Run under Python 3.11.5 (main, Sep 29 2023, 17:18:47) [GCC 13.2.0] on linux, it produces the log below.

**Standalone code to reproduce the issue**

```python
import keras
from keras import layers
import tensorflow_model_optimization as tfmot

model = tfmot.quantization.keras.quantize_model(
    keras.Sequential([
        layers.Dense(10, activation='relu', input_shape=(100,)),
        layers.Dense(2, activation='sigmoid')
    ])
)
```

**Relevant log output**

```shell
/home/valerio/py3-tf-nightly/lib/python3.11/site-packages/keras/src/layers/core/dense.py:88: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
  super().__init__(activity_regularizer=activity_regularizer, **kwargs)
2024-03-28 20:45:09.603570: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1928] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 6086 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4060, pci bus id: 0000:01:00.0, compute capability: 8.9
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/valerio/py3-tf-nightly/lib/python3.11/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py", line 135, in quantize_model
    raise ValueError(
ValueError: `to_quantize` can only either be a keras Sequential or Functional model.
```
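A plausible mechanism behind this failure (my assumption, not confirmed in the report): in TF 2.16, `import keras` resolves to Keras 3, while `quantize_model` type-checks its argument against the `tf.keras` model classes, so even a genuine Sequential model from the other Keras package fails the check. A pure-Python sketch with stand-in classes:

```python
class TfKerasSequential:       # stands in for tf.keras.Sequential
    pass

class Keras3Sequential:        # stands in for keras.Sequential (Keras 3)
    pass

def quantize_model(to_quantize):
    # Sketch: a type check bound to one Keras implementation rejects
    # structurally identical models from the other implementation.
    if not isinstance(to_quantize, TfKerasSequential):
        raise ValueError("`to_quantize` can only either be a keras Sequential or Functional model.")
    return to_quantize

message = None
try:
    quantize_model(Keras3Sequential())
except ValueError as e:
    message = str(e)
print(message)  # `to_quantize` can only either be a keras Sequential or Functional model.
```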
tensorflow/tensorflow
A model defined by model subclassing cannot be saved
Bug
- Issue type: Bug
- Have you reproduced the bug with TensorFlow Nightly? Yes
- Source: binary
- TensorFlow version: 2.15.1
- Custom code: No
- OS platform and distribution: macOS 14.2.1
- Mobile device: No response
- Python version: 3.10.13
- Bazel version: No response
- GCC/compiler version: No response
- CUDA/cuDNN version: No response
- GPU model and memory: No response

**Current behavior**
Hey all,

Remark: I am using simple CPU computation and installed TF using pip, so I did not build it myself; therefore I do not provide any CUDA version.

The problem I am facing: I am currently trying to save a model created by subclassing, which throws an error (see below). The code below defines the model in two ways: 1) first using the functional API, 2) secondly using model subclassing. The first one runs fine; the second throws an error during the last step, where I try to save the model. As you can see, I call the model on the data before saving it, just as the error message suggests, but that still does not help. It seems to me that this is a bug, but maybe I have also missed something important.

```python
df = pd.DataFrame({"var1": ["a", "b", "c"], "var2": [1, 2, 3]})

inp_var1 = tf.keras.layers.Input(shape=(1,))
inp_var2 = tf.keras.layers.Input(shape=(1,))
output1 = tf.keras.layers.StringLookup(output_mode="int", vocabulary=["a", "b"])(inp_var1)
output2 = tf.keras.layers.Discretization(bin_boundaries=[0, 1, 2, 3, 4], output_mode="int")(inp_var2)
dp = tf.keras.Model([inp_var1, inp_var2], [output1, output2])

converted = dp([df["var1"], df["var2"]])
for row in converted:
    print(row)
dp.save("delteme1")  # this works


class DataPrep2(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.layer1 = tf.keras.layers.StringLookup(output_mode="int", vocabulary=["a", "b"])
        self.layer2 = tf.keras.layers.Discretization(bin_boundaries=[0, 1, 2, 3, 4], output_mode="int")

    def call(self, x):
        return {"conv1": self.layer1(x["var1"]), "conv2": self.layer2(x["var2"])}


dp2 = DataPrep2()
dp2(df)
dp2.save("delteme")  # this does not
# same result with this line, btw:
# dp2.save("delteme", save_format="tf")
```

Thanks for looking into this in advance! Best

**Standalone code to reproduce the issue**
The code is in a notebook here; the code inline:

```python
import tensorflow as tf
import pandas as pd

df = pd.DataFrame({"var1": ["a", "b", "c"], "var2": [1, 2, 3]})

inp_var1 = tf.keras.layers.Input(shape=(1,))
inp_var2 = tf.keras.layers.Input(shape=(1,))
output1 = tf.keras.layers.StringLookup(output_mode="int", vocabulary=["a", "b"])(inp_var1)
output2 = tf.keras.layers.Discretization(bin_boundaries=[0, 1, 2, 3, 4], output_mode="int")(inp_var2)
dp = tf.keras.Model([inp_var1, inp_var2], [output1, output2])

converted = dp([df["var1"], df["var2"]])
for row in converted:
    print(row)
dp.save("delteme1")  # this works


@tf.keras.utils.register_keras_serializable("DataPrep2")
class DataPrep2(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.layer1 = tf.keras.layers.StringLookup(output_mode="int", vocabulary=["a", "b"])
        self.layer2 = tf.keras.layers.Discretization(bin_boundaries=[0, 1, 2, 3, 4], output_mode="int")

    def call(self, x):
        return {"conv1": self.layer1(x["var1"]), "conv2": self.layer2(x["var2"])}


dp2 = DataPrep2()
dp2(df)
dp2.save("delteme")  # this does not
# same result with this line, btw:
# dp2.save("delteme", save_format="tf")
```

**Relevant log output**

```shell
ValueError                                Traceback (most recent call last)
<ipython-input> in <cell line: 32>()
     30 dp2 = DataPrep2()
     31 dp2(df)
---> 32 dp2.save("delteme")  # this does not

1 frames
/usr/local/lib/python3.10/dist-packages/keras/src/saving/legacy/saving_utils.py in raise_model_input_error(model)
     95     # If the model is not a `Sequential`, it is intended to be a subclassed
     96     # model.
---> 97     raise ValueError(
     98         f"Model {model} cannot be saved either because the input shape is not "
     99         "available or because the forward pass of the model is not defined."

ValueError: Model <__main__.DataPrep2 object at 0x7cfd49e6f8b0> cannot be saved either because the input shape is not available or because the forward pass of the model is not defined. To define a forward pass, please override `Model.call()`. To specify an input shape, either call `build(input_shape)` directly, or call the model on actual data using `Model()`, `Model.fit()`, or `Model.predict()`. If you have a custom training step, please make sure to invoke the forward pass in train step through `Model.__call__`, i.e. `model(inputs)`, as opposed to `model.call()`.
```
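A rough pure-Python sketch (hypothetical class, not the Keras saver) of the state the error message describes: saving needs an input shape recorded either via `build()` or via a forward pass on inputs that actually carry a shape, and calling the model on an object without a usable shape leaves nothing recorded.

```python
class SketchModel:
    def __init__(self):
        self.input_shape = None

    def build(self, input_shape):
        self.input_shape = input_shape

    def __call__(self, inputs):
        # Only inputs that expose a shape get recorded as the input spec.
        shape = getattr(inputs, "shape", None)
        if shape is not None:
            self.build(shape)
        return inputs

    def save(self, path):
        if self.input_shape is None:
            raise ValueError("cannot be saved: input shape is not available")

m = SketchModel()
m([1, 2, 3])        # a plain list has no .shape, so nothing is recorded
saved = True
try:
    m.save("delteme")
except ValueError:
    saved = False
print(saved)  # False
```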
tensorflow/tensorflow
Could not interpret optimizer identifier: <keras.src.optimizers.adam.Adam object>
Bug
- Issue type: Documentation Bug
- Have you reproduced the bug with TensorFlow Nightly? Yes
- Source: source
- TensorFlow version: 2.16.1
- Custom code: No
- OS platform and distribution: No response
- Mobile device: No response
- Python version: No response
- Bazel version: No response
- GCC/compiler version: No response
- CUDA/cuDNN version: No response
- GPU model and memory: No response

**Current behavior**
Some days ago I was able to run code on Google Colab from a tutorial on transfer learning with MoViNet without any error. Today I tried to run the code from the same tutorial on Google Colab, and it produces an error when I try to compile the model:

```python
num_epochs = 2

loss_obj = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

model.compile(loss=loss_obj, optimizer=optimizer, metrics=['accuracy'])
```

**Standalone code to reproduce the issue**
The tutorial notebook cell at `#scrollTo=dvqblrn1tbsd`.

**Relevant log output**

```shell
ValueError                                Traceback (most recent call last)
<ipython-input> in <cell line: 7>()
      5 optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
      6 
----> 7 model.compile(loss=loss_obj, optimizer=optimizer, metrics=['accuracy'])

/usr/local/lib/python3.10/dist-packages/tf_keras/src/utils/traceback_utils.py in error_handler(*args, **kwargs)
     68             # To get the full stack trace, call:
     69             # `tf.debugging.disable_traceback_filtering()`
---> 70             raise e.with_traceback(filtered_tb) from None
     71         finally:
     72             del filtered_tb

/usr/local/lib/python3.10/dist-packages/tf_keras/src/optimizers/__init__.py in get(identifier, **kwargs)
    333 
    334     else:
--> 335         raise ValueError(
    336             f"Could not interpret optimizer identifier: {identifier}"
    337         )

ValueError: Could not interpret optimizer identifier: <keras.src.optimizers.adam.Adam object>
```
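For intuition, here is a pure-Python sketch (stand-in classes, not the Keras code) of how an `optimizers.get`-style resolver ends up rejecting an optimizer object built by a different Keras package — note the traceback above goes through `tf_keras`, while the rejected object is a `keras.src` Adam:

```python
class TfKerasAdam:       # stands in for the Adam class the resolver knows about
    pass

class OtherAdam:         # stands in for keras.src.optimizers.adam.Adam
    pass

def get(identifier):
    # Sketch: only strings and this package's own optimizer instances resolve;
    # an instance from another package falls through to the error branch.
    if isinstance(identifier, str):
        return {"adam": TfKerasAdam}[identifier.lower()]()
    if isinstance(identifier, TfKerasAdam):
        return identifier
    raise ValueError(f"Could not interpret optimizer identifier: {identifier}")

resolved = get("adam")           # fine
rejected = None
try:
    get(OtherAdam())             # the cross-package case from the report
except ValueError as e:
    rejected = str(e)
print(rejected)
```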
tensorflow/tensorflow
TensorFlow 2.16 API docs not available online
Bug
- Issue type: Documentation Bug
- TensorFlow version: 2.16

The website displaying the 2.16 API of TensorFlow is not available (see link); only the 2.15 docs appear when clicking through.
tensorflow/tensorflow
TPU connectivity issue on Google Colab
Bug
- Issue type: Bug
- Have you reproduced the bug with TensorFlow Nightly? Yes
- Source: source
- TensorFlow version: 2.15.0
- Custom code: Yes
- OS platform and distribution: No response
- Mobile device: No response
- Python version: 3.10.12
- Bazel version: No response
- GCC/compiler version: No response
- CUDA/cuDNN version: No response
- GPU model and memory: No response

**Current behavior**
The code was designed by someone else; I've just been adapting it to use TPUs and TensorFlow datasets in training. When training in TPU mode, I get the attached error.

**Standalone code to reproduce the issue**
`train.py` has the training loop; `data_generator.py` has the DataGenerator. Training is run by opening `training_colab.ipynb` directly on Colab (from the GitHub option).

**Relevant log output**

```shell
err Traceback (most recent call last):
err   File "/content/SeniorHonoursProject/BaCoN-II/train.py", line 763, in <module>
err     main()
err   File "/content/SeniorHonoursProject/BaCoN-II/train.py", line 733, in main
err     model, history = my_train(model, optimizer, loss)
err   File "/content/SeniorHonoursProject/BaCoN-II/train.py", line 168, in my_train
err     for ids, x_batch_train, y_batch_train in train_generator_dataset:
err   File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/distribute/input_lib.py", line 264, in __next__
err     return self.get_next()
err   File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/distribute/input_lib.py", line 331, in get_next
err     num_replicas_with_values = _calculate_replicas_with_values(
err   File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/distribute/input_lib.py", line 196, in _calculate_replicas_with_values
err     clients_have_values = math_ops.reduce_sum(worker_has_values, keepdims=True)
err   File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/ops/weak_tensor_ops.py", line 88, in wrapper
err     return op(*args, **kwargs)
err   File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
err     raise e.with_traceback(filtered_tb) from None
err   File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/framework/ops.py", line 498, in shape
err     self._tensor_shape = tensor_shape.TensorShape(self._shape_tuple())
err tensorflow.python.framework.errors_impl.InternalError: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:36768: Failed to connect to remote host: Connection refused
err Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
err :UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:36768: Failed to connect to remote host: Connection refused {created_time:"2024-03-15T21:29:31.939700993+00:00", grpc_status:14}
err 	 [[{{node MultiDeviceIteratorGetNextFromShard}}]]
err Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic.
err 	 [[RemoteCall]]
err Exception ignored in atexit callback:
err Traceback (most recent call last):
err   File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/context.py", line 2833, in async_wait
err     context().sync_executors()
err   File "/usr/local/lib/python3.10/dist-packages/tensorflow/python/eager/context.py", line 749, in sync_executors
err     pywrap_tfe.TFE_ContextSyncExecutors(self._context_handle)
err tensorflow.python.framework.errors_impl.InternalError: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:36768: Failed to connect to remote host: Connection refused
err Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
err :UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:36768: Failed to connect to remote host: Connection refused {created_time:"2024-03-15T21:29:31.939700993+00:00", grpc_status:14}
err 	 [[{{node MultiDeviceIteratorGetNextFromShard}}]]
err Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic.
err 	 [[RemoteCall]]
err 2024-03-15 21:29:31.946346: W tensorflow/core/distributed_runtime/eager/remote_tensor_handle_data.cc:77] Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph got an error: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:36768: Failed to connect to remote host: Connection refused
err 2024-03-15 21:29:32.185465: W tensorflow/core/distributed_runtime/eager/destroy_tensor_handle_node.h:59] Ignoring an error encountered when deleting remote tensors handles: INVALID_ARGUMENT: Unable to find the relevant tensor remote_handle: Op ID: 2124, Output num: 0
err Additional GRPC error information from remote target /job:worker/replica:0/task:0 while calling /tensorflow.eager.EagerService/Enqueue:
err :{"created":"@1710538172.185353260","description":"Error received from peer ipv4:10.120.183.98:8470","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Unable to find the relevant tensor remote_handle: Op ID: 2124, Output num: 0","grpc_status":3}
```
tensorflow/tensorflow
Comments in detection_postprocess.cc are incorrect and misleading
Bug
The comment pointed to by the link below is incorrect (L58). The comment should state that BoxCorner represents the upper-left corner (xmin, ymin) and the lower-right corner (xmax, ymax). While this is not a functional bug, it can cause some wasted time, and it is easy to fix. For reference, this link (the output signature in the TensorFlow documentation) has a correct description.
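As a sanity check on the corrected wording, a small sketch (hypothetical helper) of the corner encoding the comment should describe, where (xmin, ymin) is the upper-left corner and (xmax, ymax) the lower-right corner:

```python
def corner_box_is_valid(xmin, ymin, xmax, ymax):
    # The upper-left corner must not lie to the right of, or below,
    # the lower-right corner.
    return xmin <= xmax and ymin <= ymax

assert corner_box_is_valid(0.1, 0.2, 0.8, 0.9)       # well-formed box
assert not corner_box_is_valid(0.8, 0.2, 0.1, 0.9)   # corners swapped in x
```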
tensorflow/tensorflow
TensorFlow 2.15.1 wheels are not available on PyPI
Bug
- Issue type: Bug
- Have you reproduced the bug with TensorFlow Nightly? No
- Source: binary
- TensorFlow version: 2.15.1
- Custom code: No
- OS platform and distribution: any
- Mobile device: No response
- Python version: No response
- Bazel version: No response
- GCC/compiler version: No response
- CUDA/cuDNN version: No response
- GPU model and memory: No response

**Current behavior**
TensorFlow 2.15.1 wheels (also 2.16.1) were not uploaded to PyPI.

**Standalone code to reproduce the issue**

```shell
pip install tensorflow==2.15.1
```

**Relevant log output**

```shell
ERROR: Could not find a version that satisfies the requirement tensorflow==2.15.1 (from versions: 2.16.0rc0)
ERROR: No matching distribution found for tensorflow==2.15.1
```
tensorflow/tensorflow
JIT compilation failed
Bug
- Issue type: Bug
- Have you reproduced the bug with TensorFlow Nightly? No
- Source: source
- TensorFlow version: 2.10.10
- Custom code: Yes
- OS platform and distribution: Windows 11
- Mobile device: No
- Python version: 3.10
- Bazel version: No response
- GCC/compiler version: No response
- CUDA/cuDNN version: 11.8
- GPU model and memory: NVIDIA RTX 3060, 12 GB

**Current behavior**
The code should have loaded the `val` data.

**Standalone code to reproduce the issue**

```python
import os
import cv2
import tensorflow as tf
import numpy as np
from typing import List
import matplotlib.pyplot as plt
import imageio
import gdown

physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)

url = '...'
output = 'data.zip'
gdown.download(url, output, quiet=False)
gdown.extractall('data.zip')

vocab = [x for x in "abcdefghijklmnopqrstuvwxyz123456789 "]
char_to_num = tf.keras.layers.StringLookup(vocabulary=vocab, oov_token="")
num_to_char = tf.keras.layers.StringLookup(vocabulary=char_to_num.get_vocabulary(), oov_token="", invert=True)

def load_video(path: str):
    cap = cv2.VideoCapture(path)
    frames = []
    for _ in range(int(cap.get(cv2.CAP_PROP_FRAME_COUNT))):
        ret, frame = cap.read()
        frame = tf.image.rgb_to_grayscale(frame)
        frames.append(frame[190:236, 80:220, :])
    cap.release()
    mean = tf.math.reduce_mean(frames)
    std = tf.math.reduce_std(tf.cast(frames, tf.float32))
    return tf.cast((frames - mean), tf.float32) / std

def load_alignments(path: str):
    with open(path, 'r') as f:
        lines = f.readlines()
    tokens = []
    for line in lines:
        line = line.split()
        if line[2] != 'sil':
            tokens = [*tokens, ' ', line[2]]
    return char_to_num(tf.reshape(tf.strings.unicode_split(tokens, input_encoding='UTF-8'), (-1)))[1:]

def load_data(path: str):
    path = bytes.decode(path.numpy())
    file_name = path.split('\\')[-1].split('.')[0]
    video_path = os.path.join('data', 's1', f'{file_name}.mpg')
    alignment_path = os.path.join('data', 'alignments', 's1', f'{file_name}.align')
    frames = load_video(video_path)
    alignments = load_alignments(alignment_path)
    return frames, alignments

def mappable_function(path: str) -> List[str]:
    result = tf.py_function(load_data, [path], (tf.float32, tf.int64))
    return result

data = tf.data.Dataset.list_files('./data/s1/*.mpg')
data = data.shuffle(500, reshuffle_each_iteration=False)
data = data.map(mappable_function)
data = data.padded_batch(2, padded_shapes=([75, None, None, None], [40]))
data = data.prefetch(tf.data.AUTOTUNE)
# added for split
train = data.take(450)
test = data.skip(450)
sample = data.as_numpy_iterator()
val = sample.next(); val[0]
```

**Relevant log output**

```shell
UnknownError                              Traceback (most recent call last)
Cell In[16], line 1
----> 1 val = sample.next(); val[0]

File ~\miniconda3\envs\tfgpu\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py:4638, in _NumpyIterator.__next__(self)
   4637 def __next__(self):
-> 4638   return self.next()

File ~\miniconda3\envs\tfgpu\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py:4635, in _NumpyIterator.next(self)
   4632   numpy.setflags(write=False)
   4633   return numpy
-> 4635 return nest.map_structure(to_numpy, next(self._iterator))

File ~\miniconda3\envs\tfgpu\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py:766, in OwnedIterator.__next__(self)
    764 def __next__(self):
    765   try:
--> 766     return self._next_internal()
    767   except errors.OutOfRangeError:
    768     raise StopIteration

File ~\miniconda3\envs\tfgpu\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py:749, in OwnedIterator._next_internal(self)
    746 # TODO(b/77291417): This runs in sync mode as iterators use an error status
    747 # to communicate that there is no more data to iterate over.
    748 with context.execution_mode(context.SYNC):
--> 749   ret = gen_dataset_ops.iterator_get_next(
    750       self._iterator_resource,
    751       output_types=self._flat_output_types,
    752       output_shapes=self._flat_output_shapes)
    754 try:
    755   # Fast path for the case `self._structure` is not a nested structure.
    756   return self._element_spec._from_compatible_tensor_list(ret)  # pylint: disable=protected-access

File ~\miniconda3\envs\tfgpu\lib\site-packages\tensorflow\python\ops\gen_dataset_ops.py:3016, in iterator_get_next(iterator, output_types, output_shapes, name)
   3014   return _result
   3015 except _core._NotOkStatusException as e:
-> 3016   _ops.raise_from_not_ok_status(e, name)
   3017 except _core._FallbackException:
   3018   pass

File ~\miniconda3\envs\tfgpu\lib\site-packages\tensorflow\python\framework\ops.py:7209, in raise_from_not_ok_status(e, name)
   7207 def raise_from_not_ok_status(e, name):
   7208   e.message += (" name: " + name if name is not None else "")
-> 7209   raise core._status_to_exception(e) from None

UnknownError: {{function_node __wrapped__IteratorGetNext_output_types_2_device_/job:localhost/replica:0/task:0/device:CPU:0}} UnknownError: {{function_node __wrapped__Sub_device_/job:localhost/replica:0/task:0/device:GPU:0}} JIT compilation failed. [Op:Sub]
Traceback (most recent call last):

  File "C:\Users\adity\miniconda3\envs\tfgpu\lib\site-packages\tensorflow\python\ops\script_ops.py", line 269, in __call__
    return func(device, token, args)

  File "C:\Users\adity\miniconda3\envs\tfgpu\lib\site-packages\tensorflow\python\ops\script_ops.py", line 147, in __call__
    outputs = self._call(device, args)

  File "C:\Users\adity\miniconda3\envs\tfgpu\lib\site-packages\tensorflow\python\ops\script_ops.py", line 154, in _call
    ret = self._func(*args)

  File "C:\Users\adity\miniconda3\envs\tfgpu\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 642, in wrapper
    return func(*args, **kwargs)

  File "C:\Users\adity\AppData\Local\Temp\ipykernel_9688\3954971381.py", line 6, in load_data
    frames = load_video(video_path)

  File "C:\Users\adity\AppData\Local\Temp\ipykernel_9688\1209296760.py", line 13, in load_video
    return tf.cast((frames - mean), tf.float32) / std

  File "C:\Users\adity\miniconda3\envs\tfgpu\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None

  File "C:\Users\adity\miniconda3\envs\tfgpu\lib\site-packages\tensorflow\python\framework\ops.py", line 7209, in raise_from_not_ok_status
    raise core._status_to_exception(e) from None  # pylint: disable=protected-access

tensorflow.python.framework.errors_impl.UnknownError: {{function_node __wrapped__Sub_device_/job:localhost/replica:0/task:0/device:GPU:0}} JIT compilation failed. [Op:Sub]

	 [[{{node EagerPyFunc}}]] [[IteratorGetNext]]
```
tensorflowtensorflow
issue in tensorflow model training
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: source
TensorFlow version: 2.13
Custom code: No
OS platform and distribution: Linux Ubuntu 18.04
Mobile device: No response
Python version: 3.10
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?
I was trying to run a pretrained TensorFlow Object Detection model with my own dataset in Google Colab, but got this unknown error while running the training. I tried changing the batch size to 2 and the steps to 20000, but still got the same error; it just stops the training with ^C. I noticed my system memory goes to its peak just before stopping — is it something due to this?

Standalone code to reproduce the issue:
```shell
python /content/models/research/object_detection/model_main_tf2.py \
    --pipeline_config_path={pipeline_file} \
    --model_dir={model_dir} \
    --alsologtostderr \
    --num_train_steps={num_steps} \
    --sample_1_of_n_eval_examples=1
```

Relevant log output:
```shell
2024-03-01 11:15:07.851290: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-03-01 11:15:14.012057: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:268] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0',)
I0301 11:15:14.048205 139949003751424 mirrored_strategy.py:419] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: 20000
I0301 11:15:14.210126 139949003751424 config_util.py:552] Maybe overwriting train_steps: 20000
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I0301 11:15:14.210465 139949003751424 config_util.py:552] Maybe overwriting use_bfloat16: False
WARNING:tensorflow:From /content/models/research/object_detection/model_lib_v2.py:563: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
W0301 11:15:14.290481 139949003751424 deprecation.py:364] From /content/models/research/object_detection/model_lib_v2.py:563: StrategyBase.experimental_distribute_datasets_from_function (from tensorflow.python.distribute.distribute_lib) is deprecated and will be removed in a future version.
Instructions for updating:
rename to distribute_datasets_from_function
INFO:tensorflow:Reading unweighted datasets: ['/content/train.tfrecord']
I0301 11:15:14.314484 139949003751424 dataset_builder.py:162] Reading unweighted datasets: ['/content/train.tfrecord']
INFO:tensorflow:Reading record datasets for input file: ['/content/train.tfrecord']
I0301 11:15:14.314929 139949003751424 dataset_builder.py:79] Reading record datasets for input file: ['/content/train.tfrecord']
INFO:tensorflow:Number of filenames to read: 1
I0301 11:15:14.315054 139949003751424 dataset_builder.py:80] Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
W0301 11:15:14.315170 139949003751424 dataset_builder.py:86] num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /content/models/research/object_detection/builders/dataset_builder.py:100: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
W0301 11:15:14.353366 139949003751424 deprecation.py:364] From /content/models/research/object_detection/builders/dataset_builder.py:100: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.deterministic`.
WARNING:tensorflow:From /content/models/research/object_detection/builders/dataset_builder.py:235: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.Dataset.map()
W0301 11:15:14.419352 139949003751424 deprecation.py:364] From /content/models/research/object_detection/builders/dataset_builder.py:235: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.Dataset.map()
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1176: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
W0301 11:15:24.248110 139949003751424 deprecation.py:364] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1176: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1176: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated. Use sample_distorted_bounding_box_v2 instead.
W0301 11:15:28.897560 139949003751424 deprecation.py:364] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1176: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
`seed2` arg is deprecated. Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1176: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
W0301 11:15:32.042234 139949003751424 deprecation.py:364] From /usr/local/lib/python3.10/dist-packages/tensorflow/python/util/dispatch.py:1176: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.cast` instead.
2024-03-01 11:15:35.407217: W tensorflow/core/framework/dataset.cc:956] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations.
^C
```

[Attached screenshot: Screenshot 2024-03-01 132219]
tensorflow/tensorflow
Logic error in experimental Unity plugin HelloTFLite.cs sample code
Bug
Issue type: Documentation Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: binary
TensorFlow version: TF 2.15
Custom code: No
OS platform and distribution: Windows 11
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?
Currently in HelloTFLite.cs in the experimental Unity plugin example project, there is a logic error at line 59. Here is the function I was looking at:

```csharp
void Start()
{
    Debug.LogFormat("TensorFlow Lite version: {0}", Interpreter.GetVersion());

    var options = new InterpreterOptions() { threads = 2 };
    interpreter = new Interpreter(model.bytes, options);

    int inputCount = interpreter.GetInputTensorCount();
    int outputCount = interpreter.GetOutputTensorCount();
    for (int i = 0; i < inputCount; i++)
    {
        Debug.LogFormat("Input {0}: {1}", i, interpreter.GetInputTensorInfo(i));
    }
    for (int i = 0; i < inputCount; i++)
    {
        Debug.LogFormat("Output {0}: {1}", i, interpreter.GetOutputTensorInfo(i));
    }
}
```

In the second for loop, the bound `inputCount` should be `outputCount`, as it is looping through the output tensors.

Standalone code to reproduce the issue:
Load in a model whose input and output tensor lists have different lengths, and you will find that either not all the tensors get logged, or there is an error.

Relevant log output: No response
tensorflow/tensorflow
ValueError: Only instances of `keras.Layer` can be added to a Sequential model
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: source
TensorFlow version: v2.15.0-rc1-8-g6887368d6d4 2.15.0
Custom code: No
OS platform and distribution: Kaggle
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?
When trying to build a Sequential model with a TF Hub model used in Kaggle, the notebook throws an error complaining that the hub model is not a valid class instance. At the same time, the same code works fine locally.

Standalone code to reproduce the issue:
```python
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.layers import Dense, Input, BatchNormalization, Dropout, concatenate
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.callbacks import ModelCheckpoint

module_url = ...  # URL elided in the report
embed = hub.KerasLayer(module_url, trainable=True, name='USE_embedding')

def build_model(embed):
    model = Sequential([
        Input(shape=[], dtype=tf.string),
        embed,
        Dense(1, activation='sigmoid')
    ])
    model.compile(loss='mean_squared_error',
                  optimizer=tf.keras.optimizers.Adam(
                      learning_rate=0.0001, beta_1=0.9, beta_2=0.999,
                      epsilon=1e-07, amsgrad=False, name='Adam'),
                  metrics=['accuracy'])
    return model
```

Relevant log output:
```shell
ValueError                                Traceback (most recent call last)
Cell In[9], line 1
----> 1 model = build_model(embed)
      2 model.fit(descriptions, labels, epochs=4)
      3 model.save('quality_use.keras')

Cell In[8], line 12, in build_model(embed)
     10 def build_model(embed):
---> 12     model = Sequential([
     13         Input(shape=[], dtype=tf.string),
     14         embed,
     15         Dense(1, activation='sigmoid')
     16     ])
     17     model.compile(loss='mean_squared_error',
     18                   optimizer=tf.keras.optimizers.Adam(
     19                       learning_rate=0.0001,
   (...)
     24                       name='Adam'),
     25                   metrics=['accuracy'])
     26     return model

File /opt/conda/lib/python3.10/site-packages/keras/src/models/sequential.py:70, in Sequential.__init__(self, layers, trainable, name)
     68 if layers:
     69     for layer in layers:
---> 70         self.add(layer, rebuild=False)
     71     self._maybe_rebuild()

File /opt/conda/lib/python3.10/site-packages/keras/src/models/sequential.py:92, in Sequential.add(self, layer, rebuild)
     90     layer = origin_layer
     91 if not isinstance(layer, Layer):
---> 92     raise ValueError(
     93         "Only instances of `keras.Layer` can be "
     94         f"added to a Sequential model. Received: {layer} "
     95         f"(of type {type(layer)})"
     96     )
     97 if not self._is_layer_name_unique(layer):
     98     raise ValueError(
     99         "All layers added to a Sequential model "
    100         f"should have unique names. Name '{layer.name}' is already "
    101         "the name of a layer in this model. Update the `name` argument "
    102         "to pass a unique name."
    103     )

ValueError: Only instances of `keras.Layer` can be added to a Sequential model. Received: <...> (of type <...>)
```
tensorflow/tensorflow
Custom Keras RNN with constants changes constants shape when saving
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: source
TensorFlow version: 2.15
Custom code: Yes
OS platform and distribution: Windows WSL
Mobile device: No response
Python version: 3.11
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?
The constants' shape changes from a rank-2 tensor during inference to a rank-3 tensor during model saving.

Standalone code to reproduce the issue:
```python
from tensorflow.python.keras.layers.recurrent import (
    DropoutRNNCellMixin, _config_for_enable_caching_device, _caching_device)

class RNNWithConstants(DropoutRNNCellMixin, tf.keras.__internal__.layers.BaseRandomLayer):
    def __init__(self, units, activation, recurrent_activation,
                 dropout, recurrent_dropout, **kwargs):
        super(RNNWithConstants, self).__init__(**kwargs)
        self.units = units
        self.dropout = dropout
        self.recurrent_dropout = recurrent_dropout
        self.recurrent_activation = recurrent_activation
        self.cell = tf.keras.layers.GRUCell(
            units=units,
            activation=activation,
            recurrent_activation=recurrent_activation,
            recurrent_dropout=recurrent_dropout,
            dropout=dropout,
        )
        self.state_size = units
        self.output_size = units

    @tf.function
    def call(self, inputs, states, constants):
        print(f"inputs: {inputs.shape}")
        print(f"states: {states[0].shape}")
        print(f"constants: {constants[0].shape}")
        inputs = tf.concat([inputs, constants[0]], axis=1)  # error due to shape change
        h, _ = self.cell(inputs, states)
        return h, [h]

class ConstantsModel(tf.keras.models.Model):
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.cell = RNNWithConstants(units, 'sigmoid', 'sigmoid', 0.1, 0.1)
        self.rnn = tf.keras.layers.RNN(self.cell)

    @tf.function
    def call(self, inputs, constants, training=None):
        return self.rnn(inputs, constants=constants)

const = ConstantsModel(10)
print("initialize")
const(tf.random.normal(shape=(100, 50, 10)), tf.random.normal(shape=(100, 10)))
print("\nsave")
const.save("const_model")
```

Relevant log output:
```shell
initialize
inputs: (100, 10)
states: (100, 10)
constants: (100, 10)
inputs: (100, 10)
states: (100, 10)
constants: (100, 10)

save
inputs: (None, 10)
states: (None, 10)
constants: (None, 10)
inputs: (None, 10)
states: (None, 10)
constants: (None, None, 10)
```
tensorflow/tensorflow
Confusing results of tf.argsort/argmax/argmin given a boolean axis
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: 2.15.0
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?
These three APIs (tf.argsort, argmax, argmin) will accept a boolean axis such as True and False, which confuses me. Indeed, the documentation claims that the type of axis for tf.argmax and tf.argmin should be integer. Moreover, tf.argsort can also accept string values. Taking a closer look, I found this code (L179): it seems that argsort will convert any value of axis to an integer using the `int()` call. However, such silent handling might make users confused, because the actual behavior deviates from the documentation.

Standalone code to reproduce the issue:
```python
import tensorflow as tf

values = tf.constant([[3.1, 4.1, 5.9, 2.6], [3.4, 5.1, 2.3, 4.0]])

axis = True
print(tf.argsort(values, axis))
# tf.Tensor([[1 3 0 2] [3 0 1 2]], shape=(2, 4), dtype=int32)
print(tf.argmax(values, axis))
# tf.Tensor([2 2], shape=(2,), dtype=int64)
print(tf.argmin(values, axis))
# tf.Tensor([1 3], shape=(2,), dtype=int64)

axis = 1
print(tf.argsort(values, axis))
# tf.Tensor([[1 3 0 2] [3 0 1 2]], shape=(2, 4), dtype=int32)
```

Relevant log output: No response
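The silent acceptance is at least mechanically explainable: in Python, `bool` is a subclass of `int`, and `int()` also parses numeric strings, so an `int(axis)` coercion like the one noted in the report maps `True` to 1, `False` to 0, and `"1"` to 1. A dependency-free plain-Python sketch (no TensorFlow; this only illustrates the coercion, not what the APIs should do):

```python
# bool is a subclass of int, so a boolean "axis" survives an int() coercion.
print(issubclass(bool, int))    # True
print(int(True), int(False))    # 1 0

# int() also parses numeric strings, which would explain why a string
# axis is accepted as well.
print(int("1"))                 # 1

# So axis=True presumably behaves exactly like axis=1 after the coercion.
print(int(True) == 1)           # True
```

This is consistent with the reporter's observation that `axis=True` and `axis=1` print identical argsort results.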
tensorflow/tensorflow
TensorFlow v2.15.0 build fails
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: source
TensorFlow version: 2.15.0
Custom code: No
OS platform and distribution: macOS 14.0
Mobile device: No response
Python version: Python 3.10
Bazel version: 6.5.0
GCC/compiler version: No response
CUDA/cuDNN version: None
GPU model and memory: None

Current behavior?
Unable to build TF v2.15.0.

Standalone code to reproduce the issue:
Here's the terminal command: (Colab link, fragment `#scrollto=7i3yke297nzv`)

Relevant log output: No response
tensorflow/tensorflow
Fix typo in
Bug
At the web page, we need to fix a typo in "Datasets": in the image there is no `tf.data.Datasets`; it's `tf.data.Dataset`, with no "s" at the end. Thanks.
tensorflow/tensorflow
GitHub link to tf.nest.map_structure is broken in the docs
Bug
Issue type: Documentation Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: source
TensorFlow version: 2.8
Custom code: No
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?
The "View source on GitHub" button takes me to a file that doesn't exist. Also, the docs say "Refer to tf.nest for the definition of a structure", but if I go to that link there is no such definition.

Standalone code to reproduce the issue: N/A
Relevant log output: N/A
tensorflow/tensorflow
remapper tests fail on AArch64 with --config=mkl_aarch64_threadpool
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: git HEAD
Custom code: No
OS platform and distribution: Ubuntu 20.04
Mobile device: N/A
Python version: 3.9.17
Bazel version: 6.5.0
GCC/compiler version: 17.0.0
CUDA/cuDNN version: N/A
GPU model and memory: N/A

Current behavior?
The tests `//tensorflow/core/grappler/optimizers:remapper_test` and `//tensorflow/core/grappler/optimizers:mkl_remapper_test` fail on AArch64 when built with `--config=mkl_aarch64_threadpool` and run with `TF_ENABLE_ONEDNN_OPTS=1`. But also, `remapper_test` fails those tests that are not skipped even with `TF_ENABLE_ONEDNN_OPTS=0` (all tests in `mkl_remapper_test` are skipped in that case).

Standalone code to reproduce the issue:
```shell
bazel test --config=mkl_aarch64_threadpool --test_timeout=500,900,3000,-1 \
    --copt=-flax-vector-conversions --test_env=TF_ENABLE_ONEDNN_OPTS=1 \
    --test_env=TF2_BEHAVIOR=1 --define=tf_api_version=2 \
    --test_size_filters=small,medium --test_lang_filters=py,cc \
    --test_output=errors --verbose_failures=true --test_keep_going \
    --notest_verbose_timeout_warnings \
    --action_env=PYTHON_BIN_PATH=/usr/local/bin/python3 \
    --test_env=PORTSERVER_ADDRESS=@unittest-portserver \
    --build_tag_filters=-no_oss,-oss_excluded,-oss_serial,-v1only,-benchmark-test,-no_aarch64,-gpu,-tpu,-no_oss_py39,-no_oss_py310 \
    --test_tag_filters=-no_oss,-oss_excluded,-oss_serial,-v1only,-benchmark-test,-no_aarch64,-gpu,-tpu,-no_oss_py39,-no_oss_py310 \
    --jobs=75 --build_tests_only --copt=-Og --copt=-ggdb --strip=never \
    --per_file_copt=external/org_brotli/c/dec/decode.c@-O2 \
    //tensorflow/core/grappler/optimizers:remapper_test
```

Relevant log output:
```shell
[ RUN      ] RemapperTest.FuseConv2DWithBatchNorm
tensorflow/core/grappler/optimizers/remapper_test.cc:2141: Failure
Expected equality of these values:
  node.op()
    Which is: "FusedBatchNorm"
  "_FusedConv2D"
tensorflow/core/grappler/optimizers/remapper_test.cc:2142: Failure
Expected: node.input_size() == 6, actual: 5 vs 6
[  FAILED  ] RemapperTest.FuseConv2DWithBatchNorm (3 ms)
[ RUN      ] RemapperTest.FuseConv2DWithBatchNormAndActivation
tensorflow/core/grappler/optimizers/remapper_test.cc:2240: Failure
Expected equality of these values:
  node.op()
    Which is: "Identity"
  "_FusedConv2D"
tensorflow/core/grappler/optimizers/remapper_test.cc:2241: Failure
Expected: node.input_size() == 6, actual: 1 vs 6
[  FAILED  ] RemapperTest.FuseConv2DWithBatchNormAndActivation (3 ms)
[ RUN      ] RemapperTest.FuseConv3DWithBiasAndAddN
tensorflow/core/grappler/optimizers/remapper_test.cc:2321: Failure
Expected equality of these values:
  node.op()
    Which is: "AddN"
  "_FusedConv3D"
tensorflow/core/grappler/optimizers/remapper_test.cc:2322: Failure
Expected: node.input_size() == 3, actual: 2 vs 3
[  FAILED  ] RemapperTest.FuseConv3DWithBiasAndAddN (285 ms)
[ RUN      ] RemapperTest.FuseConv3DWithBiasAndAdd
tensorflow/core/grappler/optimizers/remapper_test.cc:2392: Failure
Expected equality of these values:
  node.op()
    Which is: "Add"
  "_FusedConv3D"
tensorflow/core/grappler/optimizers/remapper_test.cc:2393: Failure
Expected: node.input_size() == 3, actual: 2 vs 3
[  FAILED  ] RemapperTest.FuseConv3DWithBiasAndAdd (272 ms)
```
tensorflow/tensorflow
tf.matMul
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: 3.9.0
Custom code: Yes
OS platform and distribution: Windows 10
Mobile device: No response
Python version: Python 3.12.1
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?
I'm new to TensorFlow.js, so please excuse me if the issue was resolved prior. I have a problem with tf.matMul when trying to do this operation with a 2x2 matrix: the first value is wrong.

Standalone code to reproduce the issue:
```js
const a = tf.tensor2d([[1, 2], [3, 4]]);
const b = tf.tensor2d([[5, 7], [6, 8]]);
a.matMul(b).print();
// expected output: [[19, 23], [39, 53]]
// instead I get:   [[17, 23], [39, 53]]
```

Relevant log output: No response
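For a quick cross-check of the arithmetic, here is a dependency-free plain-Python matrix multiply (the `matmul2` helper is mine, purely illustrative, and not any TensorFlow.js API):

```python
def matmul2(a, b):
    """Naive matrix product: c[i][j] = sum over k of a[i][k] * b[k][j]."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

a = [[1, 2], [3, 4]]
b = [[5, 7], [6, 8]]
print(matmul2(a, b))  # [[17, 23], [39, 53]]
```

By the row-times-column definition, the top-left entry is 1·5 + 2·6 = 17, which agrees with the value the library prints rather than the expected 19.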
tensorflow/tensorflow
ml_dtypes.h:19:10: fatal error: 'ml_dtypes/include/float8.h' file not found
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: r2.15
Custom code: Yes
OS platform and distribution: Linux RedHat 9.3
Mobile device: No response
Python version: 3.11
Bazel version: 6.4.0
GCC/compiler version: 12.1.1
CUDA/cuDNN version: N/A
GPU model and memory: N/A

Current behavior?
Note: this is related to, but different from, a previously reported issue.

Steps to reproduce:
1. Build the TensorFlow `install_headers` Bazel target.
2. Add `bazel-bin/tensorflow/include` (or a copy thereof) to your compiler's header search path: `-I/path/to/tensorflow/include`.
3. Try to compile any file that includes `tsl/platform/ml_dtypes.h`, directly or indirectly.
4. Observe a compilation failure, as there is no `ml_dtypes/include/float8.h`:

```
In file included from mysource.hh:37:
In file included from include/tensorflow/core/public/session.h:26:
In file included from include/tensorflow/core/framework/tensor.h:26:
In file included from include/tensorflow/core/framework/allocator.h:26:
In file included from include/tensorflow/core/framework/numeric_types.h:24:
In file included from include/tsl/framework/numeric_types.h:22:
In file included from include/tsl/platform/types.h:22:
include/tsl/platform/ml_dtypes.h:19:10: fatal error: 'ml_dtypes/include/float8.h' file not found
   19 | #include "ml_dtypes/include/float8.h"  // from @ml_dtypes
      |          ^
1 error generated.
```

Note that there is an `ml_dtypes/include/float8.h` under `_virtual_includes/float8`, so one workaround is to add that directory and the neighboring `int4` directory to the header search path, but this is clumsy and has never been necessary prior to r2.15; the TensorFlow `install_headers` target has always created a usable stand-alone include tree. The Python package's include tree has this fixed up somehow, but the `install_headers` target needs an update.

Standalone code to reproduce the issue:
```cpp
#include "tensorflow/core/public/session.h"
```

Relevant log output: No response
tensorflow/tensorflow
test bug
Bug
Issue type: Documentation Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: TF 2.15
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?
Just a test.

Standalone code to reproduce the issue:
```shell
test
```

Relevant log output: No response
tensorflow/tensorflow
Can't load model with tf-nightly if the model was saved with TF 2.15
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary
TensorFlow version: 2.15
Custom code: No
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?
Hello, I have a model that I saved using the TF 2.15 version. When trying to load it using tf-nightly, I'm getting the following error:

ImportError: cannot import name 'is_tensor_or_tensor_list' from 'keras.src.utils.tf_utils' (/usr/local/lib/python3.10/dist-packages/keras/src/utils/tf_utils.py)

Standalone code to reproduce the issue:

Save the model with TF 2.15:
```python
import tensorflow as tf
import keras

inp = keras.layers.Input(shape=(8, 8, 3))
out = keras.layers.Conv2D(3, 3)(inp)
model = keras.Model(inputs=inp, outputs=out)

modelpath = f"model.keras"
model.save(modelpath)
```

Load the model with tf-nightly:
```python
import keras
import tensorflow as tf

modelpath = f"model.keras"
loaded_model = keras.models.load_model(modelpath)
```

Relevant log output:
```shell
----> 5 loaded_model = keras.models.load_model(modelpath)

/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_api.py in load_model(filepath, custom_objects, compile, safe_mode)
    174
    175     if is_keras_zip:
--> 176         return saving_lib.load_model(
    177             filepath,
    178             custom_objects=custom_objects,

/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py in load_model(filepath, custom_objects, compile, safe_mode)
    151     # Construct the model from the configuration file in the archive.
    152     with ObjectSharingScope():
--> 153         model = deserialize_keras_object(
    154             config_dict, custom_objects, safe_mode=safe_mode
    155         )

/usr/local/lib/python3.10/dist-packages/keras/src/saving/serialization_lib.py in deserialize_keras_object(config, custom_objects, safe_mode, **kwargs)
    681             return obj
    682
--> 683     cls = _retrieve_class_or_fn(
    684         class_name,
    685         registered_name,

/usr/local/lib/python3.10/dist-packages/keras/src/saving/serialization_lib.py in _retrieve_class_or_fn(name, registered_name, module, obj_type, full_config, custom_objects)
    783     # ... and class_name, import the module, find the class.
    784     try:
--> 785         mod = importlib.import_module(module)
    786     except ModuleNotFoundError:
    787         raise TypeError(

/usr/lib/python3.10/importlib/__init__.py in import_module(name, package)
    124                 break
    125             level += 1
--> 126     return _bootstrap._gcd_import(name[level:], package, level)
    127
    128

/usr/lib/python3.10/importlib/_bootstrap.py in _gcd_import(name, package, level)

/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load(name, import_)

/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)

/usr/lib/python3.10/importlib/_bootstrap.py in _load_unlocked(spec)

/usr/lib/python3.10/importlib/_bootstrap_external.py in exec_module(self, module)

/usr/lib/python3.10/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)

/usr/local/lib/python3.10/dist-packages/keras/src/engine/functional.py in <module>
     24
     25 from keras.src import backend
---> 26 from keras.src.dtensor import layout_map as layout_map_lib
     27 from keras.src.engine import base_layer
     28 from keras.src.engine import base_layer_utils

/usr/local/lib/python3.10/dist-packages/keras/src/dtensor/layout_map.py in <module>
     25 from keras.src.dtensor import lazy_variable
     26 from keras.src.dtensor import utils
---> 27 from keras.src.engine import base_layer
     28
     29 # isort: off

/usr/local/lib/python3.10/dist-packages/keras/src/engine/base_layer.py in <module>
     52 # A module that only depends on `keras.layers` imports these from here.
     53 from keras.src.utils.generic_utils import to_snake_case  # noqa: F401
---> 54 from keras.src.utils.tf_utils import is_tensor_or_tensor_list  # noqa: F401
     55
     56 # isort: off

ImportError: cannot import name 'is_tensor_or_tensor_list' from 'keras.src.utils.tf_utils' (/usr/local/lib/python3.10/dist-packages/keras/src/utils/tf_utils.py)
```
tensorflow/tensorflow
tf.data Options not affecting AUTOTUNE parallelism behavior
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: source
TensorFlow version: v2.14.0-rc1-21-g4dacf3f368e 2.14.0
Custom code: No
OS platform and distribution: No response
Mobile device: No response
Python version: 3.10.13
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?
My understanding is that by setting `options.autotune.cpu_budget = 1`, the autotuner won't use more than 1 CPU for the map task when using the autotuned parallelism (`num_parallel_calls=tf.data.experimental.AUTOTUNE`). However, the log shows that the tasks return in 2 batches, one around 8 seconds and one around 13 seconds. This is likely a parallelism of 8 (I have 8 cores on my machine). My questions are:
1. What can I do to ask tf.data to respect the CPU budget?
2. Is there a way to debug the autotuner, e.g. how can I find out what parallelism the autotuner chose?

Standalone code to reproduce the issue:
```python
import time
import os
import tensorflow as tf

tf.get_logger().setLevel('DEBUG')
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "0"
os.environ["TF_CPP_MIN_VLOG_LEVEL"] = "0"

def foo(i):
    for _ in range(20_000_000):
        i += 1
    return i

start = time.perf_counter()

options = tf.data.Options()
options.autotune.cpu_budget = 1
ds = tf.data.Dataset.range(16).with_options(options)
ds = ds.map(
    lambda item: tf.numpy_function(foo, [item], Tout=tf.int64),
    # num_parallel_calls=1,
    num_parallel_calls=tf.data.experimental.AUTOTUNE,
)

ret = 0
for row in ds:
    print(time.perf_counter() - start, row)
    ret += row
```

Relevant log output:
```shell
# With manually specified num_parallel_calls=1:
0.9090110249817371 tf.Tensor(20000000, shape=(), dtype=int64)
1.7222977789351717 tf.Tensor(20000001, shape=(), dtype=int64)
2.547924594953656 tf.Tensor(20000002, shape=(), dtype=int64)
3.3709740849444643 tf.Tensor(20000003, shape=(), dtype=int64)
4.167228076024912 tf.Tensor(20000004, shape=(), dtype=int64)
4.973599593038671 tf.Tensor(20000005, shape=(), dtype=int64)
5.799614907009527 tf.Tensor(20000006, shape=(), dtype=int64)
6.618450765963644 tf.Tensor(20000007, shape=(), dtype=int64)
7.427832546993159 tf.Tensor(20000008, shape=(), dtype=int64)
8.226964170928113 tf.Tensor(20000009, shape=(), dtype=int64)
9.045983245014213 tf.Tensor(20000010, shape=(), dtype=int64)
9.841567247989587 tf.Tensor(20000011, shape=(), dtype=int64)
10.652729655965231 tf.Tensor(20000012, shape=(), dtype=int64)
11.432073520030826 tf.Tensor(20000013, shape=(), dtype=int64)
12.26768407295458 tf.Tensor(20000014, shape=(), dtype=int64)
13.078245428041555 tf.Tensor(20000015, shape=(), dtype=int64)

# With num_parallel_calls=tf.data.experimental.AUTOTUNE:
7.32448233210016 tf.Tensor(20000000, shape=(), dtype=int64)
7.346081388066523 tf.Tensor(20000001, shape=(), dtype=int64)
7.39306910301093 tf.Tensor(20000002, shape=(), dtype=int64)
7.404705744003877 tf.Tensor(20000003, shape=(), dtype=int64)
7.452494919067249 tf.Tensor(20000004, shape=(), dtype=int64)
8.152240323019214 tf.Tensor(20000005, shape=(), dtype=int64)
8.226954131037928 tf.Tensor(20000006, shape=(), dtype=int64)
8.243354345089756 tf.Tensor(20000007, shape=(), dtype=int64)
11.689252487034537 tf.Tensor(20000008, shape=(), dtype=int64)
12.99928606802132 tf.Tensor(20000009, shape=(), dtype=int64)
13.021037437021732 tf.Tensor(20000010, shape=(), dtype=int64)
13.037309881066903 tf.Tensor(20000011, shape=(), dtype=int64)
13.363138618064113 tf.Tensor(20000012, shape=(), dtype=int64)
13.37413718609605 tf.Tensor(20000013, shape=(), dtype=int64)
13.395835493109189 tf.Tensor(20000014, shape=(), dtype=int64)
13.50603799300734 tf.Tensor(20000015, shape=(), dtype=int64)
```
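One low-tech way to read the parallelism out of the AUTOTUNE log is to cluster the completion times into bursts: 16 results arriving in two bursts of 8 is consistent with roughly 8 map calls in flight. A plain-Python sketch (timestamps transcribed and rounded from the log in the report; the 2-second gap threshold and the `batch_sizes` helper are my own arbitrary choices):

```python
# Completion times (seconds) of the 16 rows in the AUTOTUNE run, rounded.
times = [7.32, 7.35, 7.39, 7.40, 7.45, 8.15, 8.23, 8.24,
         11.69, 13.00, 13.02, 13.04, 13.36, 13.37, 13.40, 13.51]

def batch_sizes(times, gap=2.0):
    """Split sorted completion times into bursts separated by > `gap` seconds."""
    batches, current = [], [times[0]]
    for t in times[1:]:
        if t - current[-1] > gap:
            batches.append(current)
            current = []
        current.append(t)
    batches.append(current)
    return [len(b) for b in batches]

print(batch_sizes(times))  # [8, 8] -> consistent with ~8 concurrent map calls
```

This only quantifies the reporter's observation; it says nothing about why the cpu_budget setting is not respected.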
tensorflow/tensorflow
Automated transfer learning / gradual fine-tuning of a TensorFlow model produces "ValueError: Unknown metric function: val_loss" exception during the fit method of the fine-tuning stage
Bug
current behavior I m attempt to automate transfer learn fine tuning use iterative gradual thawing of an underlying pre train and frozen base network a compile model contain the underlie pre train and frozen base network architecture be feed into an x4learner class the class expose two method feature extraction and fine tune the feature extraction method fit the model for a designate number of epoch and store the last epoch as an instance variable the fine tuning method operate on the feature extract model and perform an iterative fine tuning process in which each iteration thaw some number of layer in the underlying base model then recompile it during fine tuning the fit method produce the follow exception valueerror unknown metric function val loss please ensure this object be pass to the custom object argument a reproducible test case can be find here I ve include the code below for convenience issue type bug have you reproduce the bug with tensorflow nightly no tensorflow version v2 8 0 0 g3f878cff5b6 2 8 0 custom code yes os platform and distribution window subsystem for linux ubuntu 22 04 2 lts python version 3 10 12 standalone code to reproduce the issue shell from type import union import os import numpy as np import panda as pd import tensorflow as tf class x4learnerlite perform transfer learning of a tensorflow model contain a pre train base model two method be expose extract feature and fine tune the extract feature method train the model on the give datum use the designate learning rate the fine tune method thaw one or more layer in the model then train it on a decayed learning rate each fine tuning session decay the learning rate by a learning rate decay factor to mitigate catastrophic forgetting args model tf keras model model contain a frozen pre train base model train ds tf datum dataset tensorflow training dataset val ds tf datum dataset tensorflow validation dataset base model layer int index for the base model layer for thaw learning rate float 
```python
        learning_rate (float): The learning rate for feature extraction (default 0.0001).
        metric (str): The metric used to evaluate model fit performance (default "val_loss").
        loss (str): The loss function (default "binary_crossentropy").
        activation (str): Activation function (default "sigmoid").
    """

    def __init__(self, model: tf.keras.Model, base_model_layer: int,
                 train_ds: tf.data.Dataset, val_ds: tf.data.Dataset,
                 learning_rate: float = 0.0001, metric: str = "val_loss",
                 loss: str = "binary_crossentropy",
                 activation: str = "sigmoid") -> None:
        self.model = model
        self.base_model_layer = base_model_layer
        self.train_ds = train_ds
        self.val_ds = val_ds
        self.learning_rate = learning_rate
        self.metric = metric
        self.loss = loss
        self.activation = activation
        # Used during the thaw process to determine the number of layers to
        # thaw as a proportion of the total number of layers in the
        # underlying base model.
        self.n_layers = len(self.model.layers[self.base_model_layer].layers)
        self.initial_epoch = None

    def extract_features(self, epochs: int = 5) -> None:
        """Performs the feature extraction phase of transfer learning.

        Args:
            epochs (int): Number of epochs to execute.
        """
        history = self.model.fit(self.train_ds, epochs=epochs,
                                 validation_data=self.val_ds)
        # Save the last feature extraction epoch for the fine-tuning phase.
        self.initial_epoch = history.epoch[-1]

    def fine_tune(self, epochs: int = 10, sessions: int = 10,
                  learning_rate_decay_factor: float = 0.1,
                  thaw_rate: Union[float, int] = 0.05) -> None:
        """Performs iterative fine-tuning using gradual unfreezing of the base model.

        Args:
            epochs (int): Number of epochs per session (default 10).
            sessions (int): Number of fine-tuning sessions to execute (default 10).
            learning_rate_decay_factor (float): Factor by which the learning rate
                is reduced each session.
            thaw_rate (Union[float, int]): Rate at which layers are thawed. This
                can be a raw integer or a float proportion of base model layers
                (default 0.05).
        """
        session = 0
        learning_rate = self.learning_rate
        initial_epoch = self.initial_epoch
        while session < sessions:
            session += 1
            learning_rate *= learning_rate_decay_factor
            # Thaw the top n layers of the base model according to the following:
            n = max(int(self.n_layers * thaw_rate * session), 1)
            self.model.layers[self.base_model_layer].trainable = True
            for layer in self.model.layers[self.base_model_layer].layers[:-n]:
                layer.trainable = False
            self.model.compile(
                loss=self.loss,
                optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
                metrics=[self.metric])
            total_epochs = epochs + initial_epoch
            history = self.model.fit(self.train_ds, epochs=total_epochs,
                                     validation_data=self.val_ds,
                                     initial_epoch=initial_epoch)
            initial_epoch = history.epoch[-1]
```

Relevant log output:

```
ValueError: in user code:

    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/engine/training.py", line 1021, in train_function
        return step_function(self, iterator)
    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/engine/training.py", line 1010, in step_function
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/engine/training.py", line 1000, in run_step
        outputs = model.train_step(data)
    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/engine/training.py", line 864, in train_step
        return self.compute_metrics(x, y, y_pred, sample_weight)
    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/engine/training.py", line 957, in compute_metrics
        self.compiled_metrics.update_state(y, y_pred, sample_weight)
    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/engine/compile_utils.py", line 438, in update_state
        self.build(y_pred, y_true)
    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/engine/compile_utils.py", line 358, in build
        self._metrics = tf.__internal__.nest.map_structure_up_to(y_pred, self._get_metric_objects,
    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/engine/compile_utils.py", line 484, in _get_metric_objects
        return [self._get_metric_object(m, y_t, y_p) for m in metrics]
    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/engine/compile_utils.py", line 484, in <listcomp>
        return [self._get_metric_object(m, y_t, y_p) for m in metrics]
    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/engine/compile_utils.py", line 503, in _get_metric_object
        metric_obj = metrics_mod.get(metric)
    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/metrics.py", line 4262, in get
        return deserialize(str(identifier))
    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/metrics.py", line 4218, in deserialize
        return deserialize_keras_object(
    File "/home/john/anaconda3/envs/bcd/lib/python3.10/site-packages/keras/utils/generic_utils.py", line 709, in deserialize_keras_object
        raise ValueError(

    ValueError: Unknown metric function: 'val_loss'. Please ensure this object is passed
    to the `custom_objects` argument. See "Registering the custom object" for details.
```
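For context, the failure above is in metric-string resolution, not in the training loop itself: "val_loss" is the name of a quantity that `fit()` reports (and that callbacks can monitor), not a metric function that `compile()` can look up. A minimal toy sketch of that lookup behaviour (hypothetical registry and function names, not the Keras internals):

```python
# Toy illustration: compile-time metric strings are resolved against a
# registry of metric functions; "val_loss" is a *monitoring* name produced
# by fit(), not a metric, so the lookup fails.
METRIC_REGISTRY = {"accuracy": lambda y_true, y_pred: float(y_true == y_pred)}

def get_metric(identifier):
    if identifier not in METRIC_REGISTRY:
        raise ValueError(f"Unknown metric function: '{identifier}'")
    return METRIC_REGISTRY[identifier]

assert get_metric("accuracy")(1, 1) == 1.0      # a real metric resolves fine
try:
    get_metric("val_loss")                      # reproduces the failure mode
except ValueError as e:
    assert "Unknown metric function" in str(e)
```

With Keras itself, passing a real metric such as `metrics=["accuracy"]` to `compile()` and monitoring "val_loss" through a callback (for example `tf.keras.callbacks.EarlyStopping(monitor="val_loss")`) avoids this error.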
tensorflowtensorflow
Incorrect reduction argument in third_party/xla/xla/experiments/triton_autotuning/matmul_lib.py
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: latest
Custom code: No
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: In L301-L309, should the rows size be `int(dims.m)` / `int(dims.n)` instead?

Standalone code to reproduce the issue: No need.

Relevant log output: No response.
tensorflowtensorflow
TensorFlow docs email address is invalid
Bug
Issue type: Documentation Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: source
TensorFlow version: master branch
Custom code: No
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: The email address for the docs team is invalid (L10).

Standalone code to reproduce the issue: The email address for the docs team is invalid (L10).

Relevant log output: No response.
tensorflowtensorflow
[Bug] "View releases" button has a loading issue
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: 2.15.0.post1
Custom code: Yes
OS platform and distribution: macOS
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: There is an issue on the home page when loading the "View releases" button. It seems that the button relies on some state coming from fetched data; that's why it is only shown after some time.

Standalone code to reproduce the issue:
1. Search "tensorflow" in a browser.
2. Navigate to the website.
3. On the very first page you will be able to see the bug.
Note: I have not signed in.

Relevant log output: No response.
tensorflowtensorflow
Error in markdown-based comment
Bug
This line (L1440) in the documentation of tf.data.Dataset.shuffle:

```md
To shuffle an entire dataset, set `buffer_size=dataset.cardinality(). This
```

has a missing backtick. I believe that the line instead should be:

```md
To shuffle an entire dataset, set `buffer_size=dataset.cardinality()`. This
```

This is causing the doc to be shown as: [image]
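For readers wondering why `buffer_size` has to equal the dataset cardinality for a full shuffle, here is a hedged sketch of the buffered-shuffle idea behind `Dataset.shuffle` (a plain-Python illustration of the algorithm, not the tf.data implementation): a buffer of `buffer_size` elements is filled, a random buffer slot is emitted and refilled from the stream, so only a buffer spanning the whole dataset can produce an arbitrary permutation.

```python
import random

def buffered_shuffle(items, buffer_size, rng):
    """Sketch of a buffer-based shuffle: fill a buffer, then repeatedly
    emit a random buffer element and refill it from the input stream."""
    it = iter(items)
    buf = []
    for x in it:
        buf.append(x)
        if len(buf) >= buffer_size:
            break
    for x in it:
        i = rng.randrange(len(buf))
        out = buf[i]
        buf[i] = x          # replace the emitted element with the next input
        yield out
    rng.shuffle(buf)        # drain what remains in the buffer
    yield from buf

rng = random.Random(0)
full = list(buffered_shuffle(range(10), buffer_size=10, rng=rng))
assert sorted(full) == list(range(10))   # full-size buffer: true permutation
tiny = list(buffered_shuffle(range(10), buffer_size=1, rng=rng))
assert tiny == list(range(10))           # buffer of 1 cannot reorder anything
```

A too-small buffer therefore only shuffles locally, which is exactly why the docstring recommends `buffer_size=dataset.cardinality()` for an entire-dataset shuffle.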
tensorflowtensorflow
Error reported when training a model using TF 2.x
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: tf 2.13
Custom code: Yes
OS platform and distribution: Linux
Mobile device: No response
Python version: 3.8
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: An error is reported when training the model using TF 2.x:

```
TypeError: You are passing KerasTensor(type_spec=TensorSpec(shape=(), dtype=tf.float32, name=None), name='Placeholder:0', description="created by layer 'tf.cast_2'"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or `tf.map_fn`. Keras Functional model construction only supports TF API calls that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other APIs cannot be called directly on symbolic Keras inputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer `call` and calling that layer on this symbolic input/output.
```

Standalone code to reproduce the issue: What should I do if I don't want to use `import tensorflow.compat.v1 as tf`?

Relevant log output: No response.
tensorflowtensorflow
Problems in my code due to tf.shape and tensor.shape; tf.shape and tensor.shape are both not working
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: source
TensorFlow version: 2.13.0
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: 3.10.12
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: I've coded the DETR object detection pipeline from scratch in TensorFlow. I've tested all the individual components in the pipeline and they work, but when I start training it on my dataset (in tf.data.Dataset form) I get an error. This is mostly due to the behaviour of tensor.shape and tf.shape: tensor.shape returns None in its shape, and tf.shape returns something like `Tensor("Shape_2:0", shape=(1,), dtype=int32)`, which is not the shape of the tensor. Please help, thank you.

Standalone code to reproduce the issue: Kaggle notebook to reproduce the error; make a copy of the notebook to reproduce this issue.

Relevant log output:

```
ValueError                                Traceback (most recent call last)
Cell In[33], line 3
      1 for epoch in range(1, detr_args.epochs + 1):
      2     print(f"Epoch: {epoch}/{detr_args.epochs}")
----> 3     loss = train_step(train_ds)
      4     print(f"Loss at epoch {epoch}: {loss}\n")
      5     model.save_weights(f"detr_weights_epoch_{epoch}.keras")

File /opt/conda/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
    151 except Exception as e:
    152   filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153   raise e.with_traceback(filtered_tb) from None
    154 finally:
    155   del filtered_tb

File /tmp/__autograph_generated_fileg7gvd3up.py:36, in outer_factory.<locals>.inner_factory.<locals>.tf__train_step(train_ds)
File /tmp/__autograph_generated_fileg7gvd3up.py:22, in outer_factory.<locals>.inner_factory.<locals>.tf__train_step.<locals>.loop_body(itr)
File /tmp/__autograph_generated_fileweg7gf52.py:12, in outer_factory.<locals>.inner_factory.<locals>.tf__call(self, y, y_hat)
File /tmp/__autograph_generated_fileuj8_9ikk.py:11, in outer_factory.<locals>.inner_factory.<locals>.tf__match(class_true, bbox_true, class_prob, bbox_pred)
File /tmp/__autograph_generated_filet5rvjc4z.py:20, in outer_factory.<locals>.inner_factory.<locals>.tf__batch_cost_matrix(class_true, bbox_true, class_prob, bbox_pred)
File /tmp/__autograph_generated_filejl_v0o33.py:11, in outer_factory.<locals>.inner_factory.<locals>.tf__compute_cost_matrix(class_true, class_prob, bbox_true, bbox_pred)

ValueError: in user code:

    File "/tmp/ipykernel_42/4115406382.py", line 7, in train_step
        y_pred = matcher(y_train, y_pred)
    File "/tmp/ipykernel_42/968499204.py", line 64, in call
        class_prob, bbox_pred = matcher.match(class_true, bbox_true, class_prob, bbox_pred)
    File "/tmp/ipykernel_42/968499204.py", line 53, in match
        C = matcher.batch_cost_matrix(class_true, bbox_true, class_prob, bbox_pred)
    File "/tmp/ipykernel_42/968499204.py", line 46, in batch_cost_matrix
        tf.range(tf.shape(class_true)[0]), fn_output_signature=tf.float32)
    File "/tmp/ipykernel_42/968499204.py", line 22, in compute_cost_matrix
        N = tf.shape(class_true)[0]

    ValueError: slice index 0 of dimension 0 out of bounds. for '{{node map/while/strided_slice_4}} = StridedSlice[Index=DT_INT32, T=DT_INT32, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](map/while/Shape, map/while/strided_slice_4/stack, map/while/strided_slice_4/stack_1, map/while/strided_slice_4/stack_2)' with input shapes: [0], [1], [1], [1] and with computed input tensors: input[1] = <0>, input[2] = <1>, input[3] = <1>.
```
tensorflowtensorflow
Update a typo in docs
Bug
Update the line `if (y_pred_rank - y_true_rank != 1) or y_pred_shape[-1] == 1:`, as the current behavior will squeeze the y_pred tensor even when it's the same rank and shape as the y_pred tensor. Fixes #62718. Please have a look at this and do the needful. Thank you.
tensorflowtensorflow
Critical typo: "Squeezes the y_pred tensor even when it's the same rank and shape as the y_pred tensor" should be y_true instead of y_pred
Bug
L191 should be `if (y_pred_rank - y_true_rank != 1) or y_pred_shape[-1] == 1:`. The current behavior will squeeze the y_pred tensor even when it's the same rank and shape as the y_pred tensor.
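The intent of the condition being debated above can be illustrated on plain shape tuples (a hedged sketch; the function name and exact check are assumptions for illustration, not the Keras source): y_pred should be squeezed only when it has exactly one more dimension than y_true and that trailing dimension is 1, never when the two already have the same rank and shape.

```python
def should_squeeze(y_pred_shape, y_true_shape):
    """Squeeze y_pred only when it has exactly one more dimension than
    y_true and that trailing dimension is 1 (e.g. (N, 1) vs (N,))."""
    return (len(y_pred_shape) == len(y_true_shape) + 1
            and y_pred_shape[-1] == 1)

assert should_squeeze((8, 1), (8,)) is True        # (N, 1) vs (N,): squeeze
assert should_squeeze((8, 1), (8, 1)) is False     # same rank and shape: keep
assert should_squeeze((8, 3), (8, 3)) is False
assert should_squeeze((8, 2), (8,)) is False       # trailing dim != 1: keep
```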
tensorflowtensorflow
summarize_graph building fails
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: master
Custom code: Yes
OS platform and distribution: Ubuntu 18.04
Mobile device: No response
Python version: 3.7
Bazel version: 7.0.0
GCC/compiler version: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: I want to figure out the inputs and outputs of a ckpt model. When I build summarize_graph using `bazel build tensorflow/tools/graph_transforms:summarize_graph` in conda, without compiling the whole TF repository, I get the error below. I have changed the conda environment; the same error persists. PTAL.

Relevant log output:

```
/usr/include/c++/7/type_traits:878:48: error: constructor required before non-static data member for 'stream_executor::CommandBuffer::deleter_' has been parsed
Target //tensorflow/tools/graph_transforms:summarize_graph failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 64.002s, Critical Path: 46.63s
INFO: 2034 processes: 66 internal, 1968 local.
FAILED: Build did NOT complete successfully
```
tensorflowtensorflow
Android C/C++ API: "Select TensorFlow op(s), included in the given model, is(are) not supported by this interpreter. Make sure you apply/link the Flex delegate before inference."
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: binary
TensorFlow version: tf 2.12.0
Custom code: Yes
OS platform and distribution: Mac (Apple M1), Android Studio
Mobile device: Huawei Nova 6
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: I run my custom .tflite model on Android Studio using the C++ API .so libraries, and then these errors occur:

```
E Select TensorFlow op(s), included in the given model, is(are) not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. For the Android, it can be resolved by adding the "org.tensorflow:tensorflow-lite-select-tf-ops" dependency. See instructions.
2023-12-19 12:44:34.993  3039-5562  tflite  com.cent.karaoke.tnetinterpretapp  E  Node number 622 (FlexDepthwiseConv2dNative) failed to prepare.
2023-12-19 12:44:34.993  3039-5562  tflite  com.cent.karaoke.tnetinterpretapp  E  Failed to allocate tensors.
```

I think maybe AcquireFlexDelegate has not worked. I found in the source code that AcquireFlexDelegate looks like a weak-attribute symbol. I have loaded libtensorflowlite_gpu_jni.so before starting the interpreter.

Standalone code to reproduce the issue:

```cmake
if(ANDROID)
    add_library(tflite-core SHARED IMPORTED)
    set_target_properties(tflite-core PROPERTIES IMPORTED_LOCATION
        ${CMAKE_CURRENT_SOURCE_DIR}/tflite/libs/android/${ANDROID_ABI}/libtensorflowlite_jni.so)

    add_library(tflite-flex SHARED IMPORTED)
    set_target_properties(tflite-flex PROPERTIES IMPORTED_LOCATION
        ${CMAKE_CURRENT_SOURCE_DIR}/tflite/libs/android/${ANDROID_ABI}/libtensorflowlite_flex_jni.so)

    add_library(tflite-gpu-delegate SHARED IMPORTED)
    set_target_properties(tflite-gpu-delegate PROPERTIES IMPORTED_LOCATION
        ${CMAKE_CURRENT_SOURCE_DIR}/tflite/libs/android/${ANDROID_ABI}/libtensorflowlite_gpu_jni.so)

    target_link_libraries(tnet-source tflite-core tflite-flex tflite-gpu-delegate)
endif()
```

Relevant log output: No response.
tensorflowtensorflow
Documentation aesthetics: TensorFlow > Learn > For Mobile & Edge > Android > "Generate model interfaces using metadata": incomplete rendering of all linked imagery from Android Studio screenshots
Bug
In the previous and latest versions of the TFLite documentation for Android, there exists a section pertaining to the importation of a TensorFlow Lite model in Android Studio ("Import a TensorFlow Lite model in Android Studio"), where the rendering of the procedure's exhibitory screenshots from Android Studio in steps 1, 2 and 4 is incomplete. The respective hyperlinks in each of the intended exhibitory screenshots point to inadmissible .png components (step 1, step 2, step 4). Just for the purpose of perpetuating the already existing and discernibly superior quality of the TFLite documentation, I am of the conviction that this minor hyperlinking glitch should not be ignored by the author(s), for the sake of maximum perfectionism in the TFLite documentation aesthetics. In its present form, the resultant visual stimulus emanating from the incomplete rendering of the exhibitory screenshots from Android Studio is not salacious and alluring enough for the wandering eye. [Screenshot 20231216-102314]
tensorflowtensorflow
Build-from-source cuDNN documented version is not correct for TF 2.15.0
Bug
Issue type: Documentation Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: binary
TensorFlow version: 2.15.0.post1
Custom code: No
OS platform and distribution: Ubuntu 22.04
Mobile device: No response
Python version: 3.10
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: The GPU support table reports that cuDNN version 8.8 is needed. However, after installing that and running TensorFlow, it complains:

```
2023-12-12 16:48:40.306928: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:447] Loaded runtime CuDNN library: 8.8.0 but source was compiled with: 8.9.4. CuDNN library needs to have matching major version and equal or higher minor version. If using a binary install, upgrade your CuDNN library. If building from sources, make sure the library loaded at runtime is compatible with the version specified during compilation.
```

So it claims to use 8.9.4 instead of the documented 8.8.0. Either this is a documentation bug, or a CI bug where this pip package was built with the wrong cuDNN version. I installed with pip:

```
$ pip3 show tensorflow
Name: tensorflow
Version: 2.15.0.post1
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page:
Author: Google Inc.
Author-email:
License: Apache 2.0
Location: /home/martijn/.local/lib/python3.10/site-packages
Requires: absl-py, astunparse, flatbuffers, gast, google-pasta, grpcio, h5py, keras, libclang, ml-dtypes, numpy, opt-einsum, packaging, protobuf, setuptools, six, tensorboard, tensorflow-estimator, tensorflow-io-gcs-filesystem, termcolor, typing-extensions, wrapt
Required-by:
```
tensorflowtensorflow
Errors/warnings in documentation webpage "Define a preprocessing function"
Bug
Warning/error output appears on the documentation page instead of the demonstration output from the cell code. Webpage: "Define a preprocessing function". Example:

```
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
WARNING:tensorflow:You are passing instance dicts and DatasetMetadata to TFT which will not provide optimal performance. Consider following the TFT guide to upgrade to the TFXIO format (Apache Arrow RecordBatch).
WARNING:tensorflow:You are passing instance dicts and DatasetMetadata to TFT which will not provide optimal performance. Consider following the TFT guide to upgrade to the TFXIO format (Apache Arrow RecordBatch).
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_transform/tf_utils.py:324: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.
2023-04-13 09:15:56.867283: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:267] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_transform/tf_utils.py:324: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use ref() instead.
WARNING:tensorflow:You are passing instance dicts and DatasetMetadata to TFT which will not provide optimal performance. Consider following the TFT guide to upgrade to the TFXIO format (Apache Arrow RecordBatch).
WARNING:tensorflow:You are passing instance dicts and DatasetMetadata to TFT which will not provide optimal performance. Consider following the TFT guide to upgrade to the TFXIO format (Apache Arrow RecordBatch).
WARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/ipykernel_launcher.py', '-f', '/tmp/tmpzu0d2pwa.json', '--HistoryManager.hist_file=:memory:']
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpdhm3m_yu/tftransform_tmp/88750e1500194862a87b2f23e04367bc/assets
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpdhm3m_yu/tftransform_tmp/88750e1500194862a87b2f23e04367bc/assets
INFO:tensorflow:struct2tensor is not available.
INFO:tensorflow:struct2tensor is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:tensorflow_text is not available.
INFO:tensorflow:tensorflow_text is not available.
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpdhm3m_yu/tftransform_tmp/8fad0af5a26242cc9733a752a7652277/assets
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpdhm3m_yu/tftransform_tmp/8fad0af5a26242cc9733a752a7652277/assets
INFO:tensorflow:struct2tensor is not available.
INFO:tensorflow:struct2tensor is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:tensorflow_text is not available.
INFO:tensorflow:tensorflow_text is not available.
```
tensorflowtensorflow
Unexpected nan when invoking the API combination of tf.GradientTape.jacobian, tf.multiply and tf.math.reciprocal
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: tf 2.13
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior:

```python
import tensorflow as tf
import numpy as np

with tf.device('/CPU:0'):
    def test(a, b):
        with tf.GradientTape() as tape_1:
            tape_1.watch(a)
            tape_1.watch(b)
            w = a * tf.math.reciprocal(b)
            print(w)
            grads_1 = tape_1.jacobian(w, a)
            return grads_1

    a = tf.constant([3], dtype=tf.float32)
    b = tf.constant([0, 2, 3], dtype=tf.float32)
    cpu_output_1 = test(a, b)

with tf.device('/GPU:0'):
    def test(a, b):
        with tf.GradientTape() as tape_1:
            tape_1.watch(a)
            tape_1.watch(b)
            w = a * tf.math.reciprocal(b)
            grads_1 = tape_1.jacobian(w, a)
            return grads_1

    a = tf.constant([3], dtype=tf.float32)
    b = tf.constant([0, 2, 3], dtype=tf.float32)
    gpu_output_1 = test(a, b)

print(cpu_output_1)
print(gpu_output_1)
```

Standalone code to reproduce the issue: unexpected nan when invoking the API combination of tf.GradientTape.jacobian, tf.multiply and tf.math.reciprocal.

Relevant log output:

```
tf.Tensor([inf 1.5 1. ], shape=(3,), dtype=float32)
tf.Tensor(
[[inf]
 [nan]
 [nan]], shape=(3, 1), dtype=float32)
tf.Tensor(
[[inf]
 [nan]
 [nan]], shape=(3, 1), dtype=float32)
```
tensorflowtensorflow
Unexpected 0 when invoking tf.GradientTape and tf.math.reduce_prod, both on CPU and GPU
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: tf 2.13
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior:

```python
import tensorflow as tf
import numpy as np

with tf.device('/CPU:0'):
    def test(x):
        with tf.GradientTape() as tape_1:
            tape_1.watch(x)
            w = tf.math.reduce_prod(x)
            grads_1 = tape_1.gradient(w, x)
            return grads_1

    x = tf.constant([[0, 0.1, 0.2], [0, 0.1, 0.2]], dtype=tf.float32)
    cpu_output_1 = test(x)

with tf.device('/GPU:0'):
    def test(x):
        with tf.GradientTape() as tape_1:
            tape_1.watch(x)
            w = tf.math.reduce_prod(x)
            grads_1 = tape_1.gradient(w, x)
            return grads_1

    x = tf.constant([[0, 0.1, 0.2], [0, 0.1, 0.2]], dtype=tf.float32)
    gpu_output_1 = test(x)

expected_out_1 = tf.constant([[0., 0.2, 0.4], [0., 2., 4.]], dtype=tf.float32)
print(expected_out_1)
print(cpu_output_1)
print(gpu_output_1)
```

Standalone code to reproduce the issue: unexpected 0 when invoking tf.GradientTape and tf.math.reduce_prod, both on CPU and GPU.

Relevant log output:

```
tf.Tensor(
[[0.  0.2 0.4]
 [0.  2.  4. ]], shape=(2, 3), dtype=float32)
tf.Tensor(
[[0. 0. 0.]
 [0. 0. 0.]], shape=(2, 3), dtype=float32)
tf.Tensor(
[[0. 0. 0.]
 [0. 0. 0.]], shape=(2, 3), dtype=float32)
```
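A quick check of the analytic gradient with plain NumPy (independent of TF, and offered as a hedged observation rather than a triage verdict) suggests the all-zero output may actually be expected for this input: the partial derivative of a product with respect to one element is the product of all the other elements, and with a zero in each row every such complementary product still contains a zero.

```python
import numpy as np

# Analytic gradient of reduce_prod: d(prod)/dx_i = product of the remaining
# elements. With a zero present in each row of x, every complementary product
# still contains a zero, so a gradient of all zeros is mathematically correct.
x = np.array([[0., 0.1, 0.2], [0., 0.1, 0.2]])
flat = x.ravel()
grads = np.array([np.prod(np.delete(flat, i)) for i in range(flat.size)])
assert np.all(grads.reshape(x.shape) == 0.0)
```

With only a single zero in the input, the gradient entry at that zero would instead be the (nonzero) product of the remaining elements, which is where an "unexpected 0" would be a genuine bug.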
tensorflowtensorflow
Found unexpected nan result with the API combination of tf.GradientTape and tf.divide, both on GPU and CPU
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: tf 2.13
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior:

```python
import tensorflow as tf
import numpy as np

with tf.device('/CPU:0'):
    def test(x, y):
        with tf.GradientTape() as tape:
            tape.watch(x)
            tape.watch(y)
            z = tf.divide(x, y)
            jaco = tape.jacobian(z, x)
            return jaco

    x = tf.constant([1], dtype=tf.float32)
    y = tf.constant([0, 1, 2], dtype=tf.float32)
    cpu_output = test(x, y)

with tf.device('/GPU:0'):
    def test(x, y):
        with tf.GradientTape() as tape:
            tape.watch(x)
            tape.watch(y)
            z = tf.divide(x, y)
            jaco = tape.jacobian(z, x)
            return jaco

    x = tf.constant([1], dtype=tf.float32)
    y = tf.constant([0, 1, 2], dtype=tf.float32)
    gpu_output = test(x, y)

expected_output = tf.constant([[np.inf], [1], [1 / 2]])
print(cpu_output)
print(gpu_output)
print(expected_output)
```

Standalone code to reproduce the issue: unexpected nan in the result, both on GPU and CPU.

Relevant log output:

```
tf.Tensor(
[[inf]
 [nan]
 [nan]], shape=(3, 1), dtype=float32)
tf.Tensor(
[[inf]
 [nan]
 [nan]], shape=(3, 1), dtype=float32)
tf.Tensor(
[[inf]
 [1. ]
 [0.5]], shape=(3, 1), dtype=float32)
```
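One plausible mechanism for the nan pattern above (a hedged sketch of reverse-mode autodiff arithmetic, not a statement about TF internals): each jacobian row is a vector-jacobian product of a one-hot upstream vector with dz/dx = 1/y. With y[0] == 0, 1/y contains inf, and in IEEE-754 arithmetic inf * 0 is nan, which then poisons the row sum even for rows whose own derivative (1 and 0.5) is finite.

```python
import numpy as np

# Sketch of one VJP row: upstream one-hot for z[1] times dz/dx = 1/y.
with np.errstate(divide="ignore", invalid="ignore"):
    dzdx = 1.0 / np.array([0., 1., 2.])    # [inf, 1.0, 0.5]
    row1 = np.array([0., 1., 0.]) * dzdx   # one-hot upstream for z[1]

assert np.isinf(dzdx[0])
assert np.isnan(row1[0])       # inf * 0 -> nan at the zero-division slot
assert np.isnan(row1.sum())    # the nan propagates into the whole row
```

This is why only the row aligned with the inf entry keeps its inf, while the remaining rows collapse to nan instead of the analytically expected 1 and 0.5.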
tensorflowtensorflow
Getting nan when invoking tf.experimental.numpy.arcsin and the input is not out of range
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: tf 2.13.0
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: Getting nan when invoking tf.experimental.numpy.arcsin and the input is not out of range.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

with tf.device('cpu'):
    input_data = tf.constant([
        0., 0.99455555, 0.5230071, 0., 0.72101855, 0.5321333, 0.7079337,
        0., 0., 0.02282775, 0., 0.49104163, 0.6195026, 0., 0.8540474,
        0.9993153, 2.0230498, 0., 0., 0.0270395, 0., 0.3878699, 0.48934025,
        0., 0.6746053, 0.3691354, 0.597991, 0., 0., 0.02135829, 0.,
        0.44674954, 0.5636233, 0., 0.7770121, 0.7287762, 0.8405701, 0., 0.,
        0.02460053])

    def np_arcsin(input_data):
        return tf.experimental.numpy.arcsin(input_data)

    output = np_arcsin(input_data)
    print(output)

with tf.device('gpu'):
    input_data = tf.constant([
        0., 0.99455555, 0.5230071, 0., 0.72101855, 0.5321333, 0.7079337,
        0., 0., 0.02282775, 0., 0.49104163, 0.6195026, 0., 0.8540474,
        0.9993153, 2.0230498, 0., 0., 0.0270395, 0., 0.3878699, 0.48934025,
        0., 0.6746053, 0.3691354, 0.597991, 0., 0., 0.02135829, 0.,
        0.44674954, 0.5636233, 0., 0.7770121, 0.7287762, 0.8405701, 0., 0.,
        0.02460053])

    def np_arcsin(input_data):
        return tf.experimental.numpy.arcsin(input_data)

    output = np_arcsin(input_data)
    print(output)
```

Relevant log output: No response.
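One detail worth noting about the reported data (a hedged observation, not an official triage): arcsin is only defined on [-1, 1], and the input above contains the value 2.0230498, which is outside that domain, so a nan at that position is the standard IEEE behaviour rather than anything TF-specific. Plain NumPy shows the same thing:

```python
import numpy as np

# arcsin's domain is [-1, 1]; 2.0230498 from the reported input lies outside
# it, so a nan there is the expected result, not a backend bug.
vals = np.array([0.99455555, 0.5230071, 2.0230498])
with np.errstate(invalid="ignore"):
    out = np.arcsin(vals)

assert not np.isnan(out[:2]).any()   # in-range inputs give finite angles
assert np.isnan(out[2])              # the out-of-range input gives nan
```

If the rest of the output (for the in-range values) also comes back nan, that would point to a genuine bug; checking which positions are nan against which inputs exceed 1 in magnitude is a quick way to distinguish the two cases.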
tensorflowtensorflow
Inconsistent behavior in XLA-compiled model with tf.greater and tf.round followed by multiple extra outputs
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: 2.15.0
Custom code: Yes
OS platform and distribution: Ubuntu 22.04.3 LTS
Mobile device: No response
Python version: 3.10.0
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: CUDA 12.2, cuDNN 8.9.04
GPU model and memory: Tesla V100S-PCIE 32 GB

Current behavior: I've encountered a bug in TensorFlow where the tf.greater operation, when combined with tf.round and followed by the addition of two extra nodes, triggers inconsistent results. This behavior is only seen on GPU. Furthermore, our experiments indicate that: 1. removing one of the additional output nodes resolves the inconsistency; 2. using tf.less in place of tf.greater does not cause the same issue; 3. omitting the tf.round operation from the model prevents the error. We hope that the details of our experiments will assist you in pinpointing the root cause of this erratic behavior.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

class Model1(tf.keras.Model):
    @tf.function(jit_compile=True)
    def call(self, inp):
        trans = tf.transpose(inp, perm=[0, 1, 3, 2])
        rounded = tf.round(trans)
        greater = tf.greater(tf.reverse(rounded, axis=[0, 2]), rounded)
        logical_and = tf.logical_and(greater, greater)
        return greater, logical_and

class Model2(tf.keras.Model):
    @tf.function(jit_compile=True)
    def call(self, inp):
        trans = tf.transpose(inp, perm=[0, 1, 3, 2])
        rounded = tf.round(trans)
        greater = tf.greater(tf.reverse(rounded, axis=[0, 2]), rounded)
        logical_and = tf.logical_and(greater, greater)
        return greater, logical_and

inputs = tf.random.uniform(shape=[15, 1, 50, 35], dtype=tf.float64)
model1 = Model1()
model2 = Model2()
device = 'gpu'
with tf.device(device):
    tf.config.run_functions_eagerly(True)
    out1 = model1(inputs)
    out2 = model2(inputs)
    print(f"eager output, version {tf.__version__}")
    try:
        for i in range(min(len(out1), len(out2))):
            np.testing.assert_allclose(out1[i].numpy(), out2[i].numpy(),
                                       rtol=0.001, atol=0.001,
                                       err_msg=f"at check {i}th")
        print("XLA eager does not trigger assertion")
    except AssertionError as e:
        print("XLA eager triggers assertion")
        print(e)

    tf.config.run_functions_eagerly(False)
    out1 = model1(inputs)
    out2 = model2(inputs)
    print(f"compiled output, version {tf.__version__}")
    try:
        for i in range(min(len(out1), len(out2))):
            np.testing.assert_allclose(out1[i].numpy(), out2[i].numpy(),
                                       rtol=0.001, atol=0.001,
                                       err_msg=f"at check {i}th")
        print("XLA compile does not trigger assertion")
    except AssertionError as e:
        print("XLA compile triggers assertion")
        print(e)
```

Relevant log output:

```
compiled output, version 2.15.0
XLA compile triggers assertion
eager output, version 2.15.0
XLA eager does not trigger assertion

Not equal to tolerance rtol=0.001, atol=0.001
at check 0th
Mismatched elements: 6550 / 26250 (25%)
 x: array([[[[False, False, False, ..., False, False, False], ...
 y: array([[[[False, False, False, ..., True, True, False], ...
```
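A hypothetical mechanism worth checking here (an assumption offered for debugging, not a confirmed diagnosis of the XLA discrepancy): tf.round documents round-half-to-even semantics, so any compiled code path that instead rounds halves away from zero disagrees exactly at .5 inputs, and the following tf.greater comparison then amplifies that one-ulp numeric difference into boolean mismatches like those reported above. The two rounding conventions can be contrasted in plain Python:

```python
import math

# Round-half-to-even (what Python's round and tf.round document)
# versus round-half-away-from-zero, which disagrees exactly at .5 inputs.
vals = [0.5, 1.5, 2.5, -0.5]
half_to_even = [float(round(v)) for v in vals]
half_away = [math.copysign(math.floor(abs(v) + 0.5), v) for v in vals]

assert half_to_even == [0.0, 2.0, 2.0, 0.0]
assert half_away == [1.0, 2.0, 3.0, -1.0]
# A single disagreement at a .5 boundary flips a subsequent comparison,
# turning a tiny rounding difference into a boolean mismatch.
assert (half_to_even[0] > 0) != (half_away[0] > 0)
```

Since tf.random.uniform on float64 rarely produces exact .5 values, an exact-boundary explanation alone may not account for a 25% mismatch rate, so this is only one candidate among fusion or layout-related effects.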
tensorflowtensorflow
GPU installation docs woefully broken
Bug
Issue type: Documentation Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: binary
TensorFlow version: 2.15.0
Custom code: No
OS platform and distribution: Linux, Ubuntu 22.04
Mobile device: No response
Python version: 3.11.6
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: Installing TensorFlow from scratch on Ubuntu 22.04. The install page says: "Current stable release for CPU and GPU: pip install tensorflow". Okay, it installs tensorflow 2.15.0. But what are the GPU requirements in terms of CUDA version etc.? No clue, but there's a link in the left-hand menu called "GPU device plugins". Nope, that page says: "Note: This page is for non-NVIDIA GPU devices. For NVIDIA GPU support, go to the Install TensorFlow with pip guide." Okay, let's go to that guide. The guide says: `python3 -m pip install tensorflow[and-cuda]`. Okay, starting from scratch then: uninstall tensorflow, install this new thing, which downloads a bunch of nvidia CUDA Python modules, installs tensorflow 2.13.1, but does not install any of the nvidia Python modules because of this warning: "tensorflow 2.13.1 does not provide the extra 'and-cuda'". Seriously, folks, this is not acceptable. Fix the documentation, or whatever else needs fixing. Thank you.

Standalone code to reproduce the issue: No code, it's a documentation issue. No idea why it's prompting me to enter code, since I selected the documentation bug category at the top.

Relevant log output: No response.
tensorflowtensorflow
Python code modifies a list while iterating over it
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? The code is present on the master branch, but I found this bug by looking at the code, not from executing TensorFlow.
Source: source
TensorFlow version: master
Custom code: No
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: In tensorflow/python/training/saver.py, in _RecordLastCheckpoint:

```python
# Remove first from list if the same name was used before.
for p in self._last_checkpoints:
    if latest_save_path == self._CheckpointFilename(p):
        self._last_checkpoints.remove(p)
```

This modifies the _last_checkpoints list while iterating over it, causing the loop to skip steps. This seems like a bug from looking at the code; this should probably be something like:

```python
self._last_checkpoints = [
    p for p in self._last_checkpoints
    if latest_save_path != self._CheckpointFilename(p)
]
```

Standalone code to reproduce the issue:

```python
# Example program:
l = [1, 1, 1, 1, 1, 1, 1, 1]
for e in l:
    if e == 1:
        l.remove(e)
print(l)  # output: [1, 1, 1, 1]
```

Relevant log output: No response.
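The skipping behaviour described above and the proposed list-comprehension fix can be demonstrated side by side (the function names here are illustrative, not from the TF source): each `remove` shifts the remaining elements left while the iterator still advances, so every removal skips the next element and half the matches survive.

```python
def buggy_prune(lst, target):
    # Removing while iterating: each remove() shifts the list left,
    # so the loop index skips the element right after every match.
    for e in lst:
        if e == target:
            lst.remove(e)
    return lst

def fixed_prune(lst, target):
    # Build a new list instead of mutating the one being iterated.
    return [e for e in lst if e != target]

assert buggy_prune([1] * 8, 1) == [1, 1, 1, 1]   # half the matches survive
assert fixed_prune([1] * 8, 1) == []              # all matches removed
```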
tensorflowtensorflow
Inconsistent outputs from tf.nn.atrous_conv2d multiplied with tf.cos on an XLA-compiled model
Bug
issue type: bug. have you reproduced the bug with tensorflow nightly: yes. source: source. tensorflow version: 2.15.0. custom code: yes. os platform and distribution: Ubuntu 22.04.3 LTS. mobile device: no response. python version: 3.10. bazel version: no response. gcc compiler version: no response. cuda cudnn version: CUDA 12.2, cuDNN 8.9.04. gpu model and memory: Tesla V100S-PCIE 32 GB. current behavior: I've identified a critical issue in TensorFlow 2.15.0 where the combination of tf.nn.conv2d and tf.cos in an XLA-compiled model produces significantly different outputs compared to eager execution mode. This inconsistency is particularly concerning given that the data type used is float32, which is standard in many applications. The issue only occurs with certain input data on GPU devices. To reproduce, please download the pickle file first and replace "your pickle file path" with your pickle file path.
tensorflowtensorflow
no gradients provided for any variable
Bug
issue type: bug. have you reproduced the bug with tensorflow nightly: yes. source: source. tensorflow version: 2.15.0. custom code: yes. os platform and distribution: no response. mobile device: no response. python version: 3.10.12. bazel version: no response. gcc compiler version: no response. cuda cudnn version: no response. gpu model and memory: no response. current behavior: when calling fit on subclassed layers and models, it throws ValueError: No gradients provided for any variable, with or without a GradientTape context. I have also tried defining the functions with sublayers and constructing the model using the Model API (i.e., not subclassing it), but I get the same error. standalone code to reproduce the issue: I'm putting a link to the question I asked on StackOverflow, as it is quite a bit of code to reproduce the issue. relevant log output:

    File "/local_disk0/ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/keras/src/engine/training.py", line 1401, in train_function
        return step_function(self, iterator)
    File "/local_disk0/ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/keras/src/engine/training.py", line 1384, in step_function
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/local_disk0/ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/keras/src/engine/training.py", line 1373, in run_step
        outputs = model.train_step(data)
    File "/root/ipykernel_2759/command-461111845465809-3511162167", line 27, in train_step
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
    File "/local_disk0/ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/keras/src/optimizers/optimizer.py", line 1222, in apply_gradients
        grads_and_vars = self.aggregate_gradients(grads_and_vars)
    File "/local_disk0/ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/keras/src/optimizers/optimizer.py", line 1184, in aggregate_gradients
        return optimizer_utils.all_reduce_sum_gradients(grads_and_vars)
    File "/local_disk0/ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/keras/src/optimizers/utils.py", line 33, in
all reduce sum gradient filter grad and var filter empty gradient grad and var file local disk0 ephemeral nfs cluster librarie python lib python3 10 site package keras src optimizer util py line 77 in filter empty gradient raise valueerror valueerror no gradient provide for any variable trxster 7 encoder 10 en embed layer embedding 0 trxster 7 encoder 10 encoder sub layer 136 en mha layer 1 query kernel 0 trxster 7 encoder 10 encoder sub layer 136 en mha layer 1 query bias 0 trxster 7 encoder 10 encoder sub layer 136 en mha layer 1 key kernel 0 trxster 7 encoder 10 encoder sub layer 136 en mha layer 1 key bias 0 trxster 7 encoder 10 encoder sub layer 136 en mha layer 1 value kernel 0 trxster 7 encoder 10 encoder sub layer 136 en mha layer 1 value bias 0 trxster 7 encoder 10 encoder sub layer 136 en mha layer 1 attention output kernel 0 trxster 7 encoder 10 encoder sub layer 136 en mha layer 1 attention output bias 0 trxster 7 encoder 10 encoder sub layer 136 dense 283 kernel 0 trxster 7 encoder 10 encoder sub layer 136 dense 283 bias 0 trxster 7 encoder 10 encoder sub layer 136 dense 284 kernel 0 trxster 7 encoder 10 encoder sub layer 136 dense 284 bias 0 trxster 7 encoder 10 encoder sub layer 136 layer normalization 340 gamma 0 trxster 7 encoder 10 encoder sub layer 136 layer normalization 340 beta 0 trxster 7 encoder 10 encoder sub layer 136 layer normalization 341 gamma 0 trxster 7 encoder 10 encoder sub layer 136 layer normalization 341 beta 0 trxster 7 encoder 10 encoder sub layer 137 en mha layer 2 query kernel 0 trxster 7 encoder 10 encoder sub layer 137 en mha layer 2 query bias 0 trxster 7 encoder 10 encoder sub layer 137 en mha layer 2 key kernel 0 trxster 7 encoder 10 encoder sub layer 137 en mha layer 2 key bias 0 trxster 7 encoder 10 encoder sub layer 137 en mha layer 2 value kernel 0 trxster 7 encoder 10 encoder sub layer 137 en mha layer 2 value bias 0 trxster 7 encoder 10 encoder sub layer 137 en mha layer 2 attention output kernel 0 trxster 7 
encoder 10 encoder sub layer 137 en mha layer 2 attention output bias 0 trxster 7 encoder 10 encoder sub layer 137 dense 285 kernel 0 trxster 7 encoder 10 encoder sub layer 137 dense 285 bias 0 trxster 7 encoder 10 encoder sub layer 137 dense 286 kernel 0 trxster 7 encoder 10 encoder sub layer 137 dense 286 bias 0 trxster 7 encoder 10 encoder sub layer 137 layer normalization 342 gamma 0 trxster 7 encoder 10 encoder sub layer 137 layer normalization 342 beta 0 trxster 7 encoder 10 encoder sub layer 137 layer normalization 343 gamma 0 trxster 7 encoder 10 encoder sub layer 137 layer normalization 343 beta 0 trxster 7 encoder 10 encoder sub layer 138 en mha layer 3 query kernel 0 trxster 7 encoder 10 encoder sub layer 138 en mha layer 3 query bias 0 trxster 7 encoder 10 encoder sub layer 138 en mha layer 3 key kernel 0 trxster 7 encoder 10 encoder sub layer 138 en mha layer 3 key bias 0 trxster 7 encoder 10 encoder sub layer 138 en mha layer 3 value kernel 0 trxster 7 encoder 10 encoder sub layer 138 en mha layer 3 value bias 0 trxster 7 encoder 10 encoder sub layer 138 en mha layer 3 attention output kernel 0 trxster 7 encoder 10 encoder sub layer 138 en mha layer 3 attention output bias 0 trxster 7 encoder 10 encoder sub layer 138 dense 287 kernel 0 trxster 7 encoder 10 encoder sub layer 138 dense 287 bias 0 trxster 7 encoder 10 encoder sub layer 138 dense 288 kernel 0 trxster 7 encoder 10 encoder sub layer 138 dense 288 bias 0 trxster 7 encoder 10 encoder sub layer 138 layer normalization 344 gamma 0 trxster 7 encoder 10 encoder sub layer 138 layer normalization 344 beta 0 trxster 7 encoder 10 encoder sub layer 138 layer normalization 345 gamma 0 trxster 7 encoder 10 encoder sub layer 138 layer normalization 345 beta 0 trxster 7 encoder 10 encoder sub layer 139 en mha layer 4 query kernel 0 trxster 7 encoder 10 encoder sub layer 139 en mha layer 4 query bias 0 trxster 7 encoder 10 encoder sub layer 139 en mha layer 4 key kernel 0 trxster 7 encoder 10 encoder sub 
layer 139 en mha layer 4 key bias 0 trxster 7 encoder 10 encoder sub layer 139 en mha layer 4 value kernel 0 trxster 7 encoder 10 encoder sub layer 139 en mha layer 4 value bias 0 trxster 7 encoder 10 encoder sub layer 139 en mha layer 4 attention output kernel 0 trxster 7 encoder 10 encoder sub layer 139 en mha layer 4 attention output bias 0 trxster 7 encoder 10 encoder sub layer 139 dense 289 kernel 0 trxster 7 encoder 10 encoder sub layer 139 dense 289 bias 0 trxster 7 encoder 10 encoder sub layer 139 dense 290 kernel 0 trxster 7 encoder 10 encoder sub layer 139 dense 290 bias 0 trxster 7 encoder 10 encoder sub layer 139 layer normalization 346 gamma 0 trxster 7 encoder 10 encoder sub layer 139 layer normalization 346 beta 0 trxster 7 encoder 10 encoder sub layer 139 layer normalization 347 gamma 0 trxster 7 encoder 10 encoder sub layer 139 layer normalization 347 beta 0 trxster 7 encoder 10 encoder sub layer 140 en mha layer 5 query kernel 0 trxster 7 encoder 10 encoder sub layer 140 en mha layer 5 query bias 0 trxster 7 encoder 10 encoder sub layer 140 en mha layer 5 key kernel 0 trxster 7 encoder 10 encoder sub layer 140 en mha layer 5 key bias 0 trxster 7 encoder 10 encoder sub layer 140 en mha layer 5 value kernel 0 trxster 7 encoder 10 encoder sub layer 140 en mha layer 5 value bias 0 trxster 7 encoder 10 encoder sub layer 140 en mha layer 5 attention output kernel 0 trxster 7 encoder 10 encoder sub layer 140 en mha layer 5 attention output bias 0 trxster 7 encoder 10 encoder sub layer 140 dense 291 kernel 0 trxster 7 encoder 10 encoder sub layer 140 dense 291 bias 0 trxster 7 encoder 10 encoder sub layer 140 dense 292 kernel 0 trxster 7 encoder 10 encoder sub layer 140 dense 292 bias 0 trxster 7 encoder 10 encoder sub layer 140 layer normalization 348 gamma 0 trxster 7 encoder 10 encoder sub layer 140 layer normalization 348 beta 0 trxster 7 encoder 10 encoder sub layer 140 layer normalization 349 gamma 0 trxster 7 encoder 10 encoder sub layer 140 layer 
normalization 349 beta 0 trxster 7 encoder 10 encoder sub layer 141 en mha layer 6 query kernel 0 trxster 7 encoder 10 encoder sub layer 141 en mha layer 6 query bias 0 trxster 7 encoder 10 encoder sub layer 141 en mha layer 6 key kernel 0 trxster 7 encoder 10 encoder sub layer 141 en mha layer 6 key bias 0 trxster 7 encoder 10 encoder sub layer 141 en mha layer 6 value kernel 0 trxster 7 encoder 10 encoder sub layer 141 en mha layer 6 value bias 0 trxster 7 encoder 10 encoder sub layer 141 en mha layer 6 attention output kernel 0 trxster 7 encoder 10 encoder sub layer 141 en mha layer 6 attention output bias 0 trxster 7 encoder 10 encoder sub layer 141 dense 293 kernel 0 trxster 7 encoder 10 encoder sub layer 141 dense 293 bias 0 trxster 7 encoder 10 encoder sub layer 141 dense 294 kernel 0 trxster 7 encoder 10 encoder sub layer 141 dense 294 bias 0 trxster 7 encoder 10 encoder sub layer 141 layer normalization 350 gamma 0 trxster 7 encoder 10 encoder sub layer 141 layer normalization 350 beta 0 trxster 7 encoder 10 encoder sub layer 141 layer normalization 351 gamma 0 trxster 7 encoder 10 encoder sub layer 141 layer normalization 351 beta 0 trxster 7 decoder 11 de embed layer embedding 0 trxster 7 decoder 11 decoder sublayer 69 de mha layer 1 query kernel 0 trxster 7 decoder 11 decoder sublayer 69 de mha layer 1 query bias 0 trxster 7 decoder 11 decoder sublayer 69 de mha layer 1 key kernel 0 trxster 7 decoder 11 decoder sublayer 69 de mha layer 1 key bias 0 trxster 7 decoder 11 decoder sublayer 69 de mha layer 1 value kernel 0 trxster 7 decoder 11 decoder sublayer 69 de mha layer 1 value bias 0 trxster 7 decoder 11 decoder sublayer 69 de mha layer 1 attention output kernel 0 trxster 7 decoder 11 decoder sublayer 69 de mha layer 1 attention output bias 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 de mha layer 1 query kernel 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 de mha layer 1 query bias 0 trxster 7 decoder 11 decoder 
sublayer 69 encoder sub layer 142 de mha layer 1 key kernel 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 de mha layer 1 key bias 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 de mha layer 1 value kernel 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 de mha layer 1 value bias 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 de mha layer 1 attention output kernel 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 de mha layer 1 attention output bias 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 dense 295 kernel 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 dense 295 bias 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 dense 296 kernel 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 dense 296 bias 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 layer normalization 352 gamma 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 layer normalization 352 beta 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 layer normalization 353 gamma 0 trxster 7 decoder 11 decoder sublayer 69 encoder sub layer 142 layer normalization 353 beta 0 trxster 7 decoder 11 decoder sublayer 69 layer normalization 354 gamma 0 trxster 7 decoder 11 decoder sublayer 69 layer normalization 354 beta 0 trxster 7 decoder 11 decoder sublayer 70 de mha layer 2 query kernel 0 trxster 7 decoder 11 decoder sublayer 70 de mha layer 2 query bias 0 trxster 7 decoder 11 decoder sublayer 70 de mha layer 2 key kernel 0 trxster 7 decoder 11 decoder sublayer 70 de mha layer 2 key bias 0 trxster 7 decoder 11 decoder sublayer 70 de mha layer 2 value kernel 0 trxster 7 decoder 11 decoder sublayer 70 de mha layer 2 value bias 0 trxster 7 decoder 11 decoder sublayer 70 de mha layer 2 attention output kernel 0 trxster 7 decoder 11 decoder sublayer 70 de mha layer 2 attention output bias 0 trxster 7 decoder 11 decoder 
sublayer 70 encoder sub layer 143 de mha layer 2 query kernel 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 de mha layer 2 query bias 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 de mha layer 2 key kernel 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 de mha layer 2 key bias 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 de mha layer 2 value kernel 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 de mha layer 2 value bias 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 de mha layer 2 attention output kernel 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 de mha layer 2 attention output bias 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 dense 297 kernel 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 dense 297 bias 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 dense 298 kernel 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 dense 298 bias 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 layer normalization 355 gamma 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 layer normalization 355 beta 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 layer normalization 356 gamma 0 trxster 7 decoder 11 decoder sublayer 70 encoder sub layer 143 layer normalization 356 beta 0 trxster 7 decoder 11 decoder sublayer 70 layer normalization 357 gamma 0 trxster 7 decoder 11 decoder sublayer 70 layer normalization 357 beta 0 trxster 7 decoder 11 decoder sublayer 71 de mha layer 3 query kernel 0 trxster 7 decoder 11 decoder sublayer 71 de mha layer 3 query bias 0 trxster 7 decoder 11 decoder sublayer 71 de mha layer 3 key kernel 0 trxster 7 decoder 11 decoder sublayer 71 de mha layer 3 key bias 0 trxster 7 decoder 11 decoder sublayer 71 de mha layer 3 value kernel 0 trxster 7 decoder 11 decoder sublayer 71 de mha layer 3 value bias 0 trxster 7 
decoder 11 decoder sublayer 71 de mha layer 3 attention output kernel 0 trxster 7 decoder 11 decoder sublayer 71 de mha layer 3 attention output bias 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 de mha layer 3 query kernel 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 de mha layer 3 query bias 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 de mha layer 3 key kernel 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 de mha layer 3 key bias 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 de mha layer 3 value kernel 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 de mha layer 3 value bias 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 de mha layer 3 attention output kernel 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 de mha layer 3 attention output bias 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 dense 299 kernel 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 dense 299 bias 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 dense 300 kernel 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 dense 300 bias 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 layer normalization 358 gamma 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 layer normalization 358 beta 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 layer normalization 359 gamma 0 trxster 7 decoder 11 decoder sublayer 71 encoder sub layer 144 layer normalization 359 beta 0 trxster 7 decoder 11 decoder sublayer 71 layer normalization 360 gamma 0 trxster 7 decoder 11 decoder sublayer 71 layer normalization 360 beta 0 trxster 7 decoder 11 decoder sublayer 72 de mha layer 4 query kernel 0 trxster 7 decoder 11 decoder sublayer 72 de mha layer 4 query bias 0 trxster 7 decoder 11 decoder sublayer 72 de mha layer 4 key kernel 0 trxster 7 decoder 11 decoder sublayer 
72 de mha layer 4 key bias 0 trxster 7 decoder 11 decoder sublayer 72 de mha layer 4 value kernel 0 trxster 7 decoder 11 decoder sublayer 72 de mha layer 4 value bias 0 trxster 7 decoder 11 decoder sublayer 72 de mha layer 4 attention output kernel 0 trxster 7 decoder 11 decoder sublayer 72 de mha layer 4 attention output bias 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 de mha layer 4 query kernel 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 de mha layer 4 query bias 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 de mha layer 4 key kernel 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 de mha layer 4 key bias 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 de mha layer 4 value kernel 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 de mha layer 4 value bias 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 de mha layer 4 attention output kernel 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 de mha layer 4 attention output bias 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 dense 301 kernel 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 dense 301 bias 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 dense 302 kernel 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 dense 302 bias 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 layer normalization 361 gamma 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 layer normalization 361 beta 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 layer normalization 362 gamma 0 trxster 7 decoder 11 decoder sublayer 72 encoder sub layer 145 layer normalization 362 beta 0 trxster 7 decoder 11 decoder sublayer 72 layer normalization 363 gamma 0 trxster 7 decoder 11 decoder sublayer 72 layer normalization 363 beta 0 trxster 7 decoder 11 decoder sublayer 73 de mha layer 5 query 
kernel 0 trxster 7 decoder 11 decoder sublayer 73 de mha layer 5 query bias 0 trxster 7 decoder 11 decoder sublayer 73 de mha layer 5 key kernel 0 trxster 7 decoder 11 decoder sublayer 73 de mha layer 5 key bias 0 trxster 7 decoder 11 decoder sublayer 73 de mha layer 5 value kernel 0 trxster 7 decoder 11 decoder sublayer 73 de mha layer 5 value bias 0 trxster 7 decoder 11 decoder sublayer 73 de mha layer 5 attention output kernel 0 trxster 7 decoder 11 decoder sublayer 73 de mha layer 5 attention output bias 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 de mha layer 5 query kernel 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 de mha layer 5 query bias 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 de mha layer 5 key kernel 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 de mha layer 5 key bias 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 de mha layer 5 value kernel 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 de mha layer 5 value bias 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 de mha layer 5 attention output kernel 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 de mha layer 5 attention output bias 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 dense 303 kernel 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 dense 303 bias 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 dense 304 kernel 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 dense 304 bias 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 layer normalization 364 gamma 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 layer normalization 364 beta 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 layer normalization 365 gamma 0 trxster 7 decoder 11 decoder sublayer 73 encoder sub layer 146 layer normalization 365 beta 0 trxster 7 decoder 11 
decoder sublayer 73 layer normalization 366 gamma 0 trxster 7 decoder 11 decoder sublayer 73 layer normalization 366 beta 0 trxster 7 decoder 11 decoder sublayer 74 de mha layer 6 query kernel 0 trxster 7 decoder 11 decoder sublayer 74 de mha layer 6 query bias 0 trxster 7 decoder 11 decoder sublayer 74 de mha layer 6 key kernel 0 trxster 7 decoder 11 decoder sublayer 74 de mha layer 6 key bias 0 trxster 7 decoder 11 decoder sublayer 74 de mha layer 6 value kernel 0 trxster 7 decoder 11 decoder sublayer 74 de mha layer 6 value bias 0 trxster 7 decoder 11 decoder sublayer 74 de mha layer 6 attention output kernel 0 trxster 7 decoder 11 decoder sublayer 74 de mha layer 6 attention output bias 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 de mha layer 6 query kernel 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 de mha layer 6 query bias 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 de mha layer 6 key kernel 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 de mha layer 6 key bias 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 de mha layer 6 value kernel 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 de mha layer 6 value bias 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 de mha layer 6 attention output kernel 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 de mha layer 6 attention output bias 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 dense 305 kernel 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 dense 305 bias 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 dense 306 kernel 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 dense 306 bias 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 layer normalization 367 gamma 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 layer normalization 367 beta 0 trxster 7 decoder 11 decoder 
sublayer 74 encoder sub layer 147 layer normalization 368 gamma 0 trxster 7 decoder 11 decoder sublayer 74 encoder sub layer 147 layer normalization 368 beta 0 trxster 7 decoder 11 decoder sublayer 74 layer normalization 369 gamma 0 trxster 7 decoder 11 decoder sublayer 74 layer normalization 369 beta 0 trxster 7 decoder 11 output layer kernel 0 trxster 7 decoder 11 output layer bias 0 provide grad and var be none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none none
tensorflowtensorflow
inconsistency in outputs of the tf.nn.conv2d + tf.cos combination in an XLA-compiled model on GPU
Bug
issue type: bug. have you reproduced the bug with tensorflow nightly: yes. source: source. tensorflow version: 2.15.0. custom code: yes. os platform and distribution: Ubuntu 22.04.3 LTS. mobile device: no response. python version: 3.10. bazel version: no response. gcc compiler version: no response. cuda cudnn version: CUDA 12.2, cuDNN 8.9.04. gpu model and memory: Tesla V100S-PCIE 32 GB. current behavior: I've identified a critical issue in TensorFlow 2.15.0 where the combination of tf.nn.conv2d and tf.cos in an XLA-compiled model produces significantly different outputs compared to eager execution mode. This inconsistency is particularly concerning given that the data type used is float32, which is standard in many applications. The issue only occurs with certain input data on GPU devices. To reproduce, please download the pickle file first and replace "your pickle file path" with your pickle file path. standalone code to reproduce the issue:

    import tensorflow as tf
    import pickle
    import os
    import numpy as np

    class Model1(tf.keras.Model):
        def __init__(self):
            super().__init__()

        @tf.function(jit_compile=True)
        def call(self, inp1, inp2):
            conv2 = tf.nn.conv2d(inp1, inp2, strides=1, padding='VALID', dilations=[1, 3])
            cos = tf.cos(conv2)
            return conv2, cos

    model1 = Model1()
    device = 'gpu'
    # path to the pickle file relative to the script directory
    pickle_file_path = 'conv_cos_bug_input.pickle'  # your pickle file path
    if not os.path.exists(pickle_file_path):
        print('pickle file not exist')
    else:
        with open(pickle_file_path, 'rb') as f:
            oracle = pickle.load(f)
        inputs = [tf.convert_to_tensor(arr) for arr in oracle]
        with tf.device(device):
            tf.config.run_functions_eagerly(True)
            out1 = model1(*inputs)
            out2 = model1(*inputs)
            print(f'eager output, version {tf.__version__}')
            try:
                for i in range(min(len(out1), len(out2))):
                    np.testing.assert_allclose(out1[i].numpy(), out2[i].numpy(),
                                               rtol=0.001, atol=0.001,
                                               err_msg=f'at check {i}-th')
                print('XLA_eager does not trigger assertion')
            except AssertionError as e:
                print('XLA_eager triggers assertion')
                print(e)
            tf.config.run_functions_eagerly(False)
            out1 = model1(*inputs)
            out2 = model1(*inputs)
            print(f'compiled output, version {tf.__version__}')
            try:
                for i in range(min(len(out1), len(out2))):
                    np.testing.assert_allclose(out1[i].numpy(), out2[i].numpy(),
                                               rtol=0.001, atol=0.001,
                                               err_msg=f'at check {i}-th')
                print('XLA_compile does not trigger assertion')
            except AssertionError as e:
                print('XLA_compile triggers assertion')
                print(e)

relevant log output:

    eager output, version 2.15.0
    XLA_eager does not trigger assertion
    2023-11-26 16:40:44.610101: I external/local_xla/xla/service/service.cc:168] XLA service 0x559e2c9bb480 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
    2023-11-26 16:40:44.610131: I external/local_xla/xla/service/service.cc:176]   StreamExecutor device (0): Tesla V100S-PCIE-32GB, Compute Capability 7.0
    compiled output, version 2.15.0
    XLA_compile triggers assertion
    Not equal to tolerance rtol=0.001, atol=0.001
    at check 1th
    Mismatched elements: 4113 / 10000 (41.1%)
    Max absolute difference: 0.03115243
    Max relative difference: 129.51477
     x: array([0.87046, 0.997616, 0.527893, 0.624568, 0.543016, 0.689372, 0.174798, 0.671232, 0.72486, 0.273734, 0.056437])
     y: array([0.87142, 0.997339, 0.534512, 0.624568, 0.536439, 0.686538, 0.186324, 0.671232, 0.727545, 0.273734, 0.056437])
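These reports judge mismatches with NumPy's combined tolerance rule, |actual - desired| <= atol + rtol * |desired|. A stdlib-only sketch of that check (within_tolerance is a hypothetical helper name, not a NumPy API), fed with values from the log above, shows why rtol = atol = 1e-3 flags the reported ~0.031 absolute difference:

```python
def within_tolerance(actual, desired, rtol=1e-3, atol=1e-3):
    # NumPy-style combined check: |actual - desired| <= atol + rtol * |desired|
    return abs(actual - desired) <= atol + rtol * abs(desired)

# A pair from the log that passes (difference ~0.00096, bound ~0.00187):
print(within_tolerance(0.87046, 0.87142))  # True

# The reported max absolute difference of ~0.031 on a small value fails
# (bound is only ~0.00106 when the reference magnitude is ~0.056):
print(within_tolerance(0.056437 + 0.03115243, 0.056437))  # False
```

Because the bound scales with the reference magnitude, a fixed absolute error is far more damning on small outputs, which is why 41.1% of elements mismatch even though the max absolute difference looks modest.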
tensorflowtensorflow
ValueError: Name tf.RaggedTensorSpec has already been registered for class tensorflow.python.ops.ragged.ragged_tensor.RaggedTensorSpec
Bug
issue type bug have you reproduce the bug with tensorflow nightly no source source tensorflow version 2 13 0rc0 custom code yes os platform and distribution macos mobile device no response python version python 3 9 12 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behavior this code should import tensor flow standalone code to reproduce the issue shell import tensorflow as tf from object detection util import config util from object detection proto import pipeline pb2 from google protobuf import text format relevant log output shell valueerror traceback most recent call last cell in 24 line 1 1 import tensorflow as tf 2 from object detection util import config util 3 from object detection proto import pipeline pb2 file desktop jupyter tfodcourse tfod lib python3 11 site package tensorflow init py 48 45 from tensorflow python import tf2 as tf2 46 tf2 enable 48 from tensorflow api v2 import internal 49 from tensorflow api v2 import operator 50 from tensorflow api v2 import audio file desktop jupyter tfodcourse tfod lib python3 11 site package tensorflow api v2 internal init py 11 9 from tensorflow api v2 internal import decorator 10 from tensorflow api v2 internal import dispatch 11 from tensorflow api v2 internal import distribute 12 from tensorflow api v2 internal import eager context 13 from tensorflow api v2 internal import feature column file desktop jupyter tfodcourse tfod lib python3 11 site package tensorflow api v2 internal distribute init py 8 3 public api for tf api v2 internal distribute namespace 4 6 import sys as sys 8 from tensorflow api v2 internal distribute import combination 9 from tensorflow api v2 internal distribute import interim 10 from tensorflow api v2 internal distribute import multi process runner file desktop jupyter tfodcourse tfod lib python3 11 site package tensorflow api v2 internal distribute combination init py 8 3 public api for tf api v2 internal 
File "<tensorflow.python.distribute.combinations namespace>", in <module>
    from tensorflow.python.distribute.combinations import env
    from tensorflow.python.distribute.combinations import generate
    from tensorflow.python.distribute.combinations import in_main_process
File "~/Desktop/jupyter/TFODCourse/tfod/lib/python3.11/site-packages/tensorflow/python/distribute/combinations.py", line 33, in <module>
    from tensorflow.python.distribute import collective_all_reduce_strategy
File ".../site-packages/tensorflow/python/distribute/collective_all_reduce_strategy.py", line 25, in <module>
    from tensorflow.python.distribute import cross_device_ops as cross_device_ops_lib
File ".../site-packages/tensorflow/python/distribute/cross_device_ops.py", line 28, in <module>
    from tensorflow.python.distribute import cross_device_utils
File ".../site-packages/tensorflow/python/distribute/cross_device_utils.py", line 22, in <module>
    from tensorflow.python.distribute import values as values_lib
File ".../site-packages/tensorflow/python/distribute/values.py", line 23, in <module>
    from tensorflow.python.distribute import distribute_lib
File ".../site-packages/tensorflow/python/distribute/distribute_lib.py", line 206, in <module>
    from tensorflow.python.data.ops import dataset_ops
File ".../site-packages/tensorflow/python/data/__init__.py", line 21, in <module>
    from tensorflow.python.data import experimental
File ".../site-packages/tensorflow/python/data/experimental/__init__.py", line 98, in <module>
    from tensorflow.python.data.experimental import service
File ".../site-packages/tensorflow/python/data/experimental/service/__init__.py", line 419, in <module>
    from tensorflow.python.data.experimental.ops.data_service_ops import distribute
File ".../site-packages/tensorflow/python/data/experimental/ops/data_service_ops.py", line 22, in <module>
    from tensorflow.python.data.experimental.ops import compression_ops
File ".../site-packages/tensorflow/python/data/experimental/ops/compression_ops.py", line 16, in <module>
    from tensorflow.python.data.util import structure
File ".../site-packages/tensorflow/python/data/util/structure.py", line 32, in <module>
    from tensorflow.python.ops.ragged import ragged_tensor
File ".../site-packages/tensorflow/python/ops/ragged/ragged_tensor.py", line 2320, in <module>
    @type_spec_registry.register("tf.RaggedTensorSpec")
File ".../site-packages/tensorflow/python/framework/type_spec_registry.py", line 59, in register_decorator
    raise ValueError(
ValueError: Name tf.RaggedTensorSpec has already been registered for class tensorflow.python.ops.ragged.ragged_tensor.RaggedTensorSpec.
tensorflowtensorflow
ByteBuffer is not a valid TensorFlow Lite model flatbuffer
Bug
1. System information: I've converted a SavedModel into a TFLite model on Google Colab, then integrated it into an Android application built with Android Studio Bumblebee (2021.1.1 Patch 2). When running the application I get the error below:

W/System.err: java.lang.IllegalArgumentException: ByteBuffer is not a valid TensorFlow Lite model flatbuffer
W/System.err:     at org.tensorflow.lite.NativeInterpreterWrapper.createModelWithBuffer(Native Method)
W/System.err:     at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:72)
W/System.err:     at org.tensorflow.lite.NativeInterpreterWrapperExperimental.<init>(NativeInterpreterWrapperExperimental.java:36)
W/System.err:     at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:232)
W/System.err:     at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:216)
W/System.err:     at jp.kthrlab.jamsketch.JamSketchEngineTF.musicCalculatorForOutline(JamSketchEngineTF.kt:17)
W/System.err:     at jp.kthrlab.jamsketch.JamSketchEngineAbstract.init(JamSketchEngineAbstract.kt:64)
W/System.err:     at jp.kthrlab.jamsketch.MelodyData2.<init>(MelodyData2.kt:34)
W/System.err:     at jp.kthrlab.jamsketch.JamSketch.initData(JamSketch.kt:104)
W/System.err:     at jp.kthrlab.jamsketch.JamSketch.startMusic(JamSketch.kt:189)
W/System.err:     at java.lang.reflect.Method.invoke(Native Method)
W/System.err:     at processing.core.PApplet.method(PApplet.java:2905)
W/System.err:     at jp.kthrlab.jamsketch.Button.mouseEvent(Button.java:43)
W/System.err:     at java.lang.reflect.Method.invoke(Native Method)
W/System.err:     at processing.core.PApplet.registeredMethod.handle(PApplet.java:1010)
W/System.err:     at processing.core.PApplet$3.run(PApplet.java:1217)
W/System.err:     at android.os.Handler.handleCallback(Handler.java:790)
W/System.err:     at android.os.Handler.dispatchMessage(Handler.java:99)
W/System.err:     at android.os.Looper.loop(Looper.java:169)
W/System.err:     at android.app.ActivityThread.main(ActivityThread.java:6521)
W/System.err:     at java.lang.reflect.Method.invoke(Native Method)
W/System.err:     at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:438)
W/System.err:     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:807)
I/Choreographer: Skipped 118 frames! The application may be doing too much work on its main thread.

2. Code: Colab notebook. Relevant Kotlin file 1, loading the TFLite model as a ByteBuffer and assigning it to tf_model:

    val assetManager: AssetManager = JamSketchActivity.myResources.getAssets()
    var tf_model: MappedByteBuffer? = null
    tf_model = loadModelFile(assetManager, config.tfl_model_file)

    @Throws(IOException::class)
    private fun loadModelFile(assets: AssetManager, modelFilename: String): MappedByteBuffer {
        val fileDescriptor = assets.openFd(modelFilename)
        val inputStream = FileInputStream(fileDescriptor.fileDescriptor)
        println("declaredLength " + fileDescriptor.declaredLength)
        println("startOffset " + fileDescriptor.startOffset)
        return inputStream.channel.map(FileChannel.MapMode.READ_ONLY, fileDescriptor.declaredLength, fileDescriptor.startOffset)
    }

Relevant Kotlin file 2, wrapping tf_model with an Interpreter, where the issue occurs:

    override fun musicCalculatorForOutline(): MusicCalculator {
        val noteSeqGenerator = NoteSeqGenerator(melody_layer, chord_layer, config.beats_per_measure, config.ent_bias, model)
        tflite = Interpreter(tf_model)
        printTensors(tflite)
        return noteSeqGenerator
    }

I referenced a similar issue, but I don't think it leads to the solution. I have been stuck on this error for over a week, so if anybody knows tips for getting to a solution, please teach me. Please let me know if you need more information.
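When TFLite rejects a buffer as "not a valid flatbuffer", a quick first check is whether the mapped bytes actually begin with a TFLite flatbuffer: FlatBuffers files carry a 4-byte file identifier at byte offset 4, which for .tflite models is "TFL3". A small Python sketch of that check (the helper name `looks_like_tflite` is mine, not a TFLite API; a compressed asset or a `map()` call with offset and length swapped would both make the check fail even for a valid model on disk):

```python
def looks_like_tflite(buf: bytes) -> bool:
    """Cheap sanity check: a .tflite flatbuffer carries the file
    identifier b"TFL3" at byte offset 4."""
    return len(buf) >= 8 and buf[4:8] == b"TFL3"

# First 8 bytes of a plausible TFLite file: 4-byte root offset, then "TFL3".
fake_header = b"\x1c\x00\x00\x00" + b"TFL3" + b"\x00" * 24
ok = looks_like_tflite(fake_header)          # True
bad = looks_like_tflite(b"\x00" * 32)        # False: identifier missing
print(ok, bad)
```

Reading the first bytes of the ByteBuffer on the device and running this check separates "the file never made it into the APK intact" from "the Interpreter call itself is wrong".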
tensorflowtensorflow
Custom train_step in subclassed model not saved
Bug
Issue type: Bug
Have you reproduced the bug with tensorflow-nightly: No
Source: binary
TensorFlow version: 2.13.1, 2.14.0, 2.15.0
Custom code: Yes
OS platform and distribution: Google Colab
Mobile device: No response
Python version: 3.10
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: Hi. To describe the current behavior: I'm building a custom model via model subclassing, following the steps in the guide. I'm overriding the train_step and test_step methods. I can train the model without any problem, but when I load a saved model, train_step and test_step are not loaded; instead, the methods from the superclass tf.keras.Model are used. I've tried the suggestions from previous issues where others had the same problem, but the suggestions don't work. I've also gone through the documentation about serialization and saving, how SavedModel handles custom objects, and the suggestion from issue comment 790209731, but it doesn't work either.

Describe the expected behavior: I'd expect train_step and test_step to be loaded from the saved model.

Standalone code to reproduce the issue:
Relevant log output: No response
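For context on why the override disappears: loading a SavedModel reconstructs a generic model object from the saved graph and weights, so Python-level method overrides that were never serialized are simply absent. The following is a loose pure-Python analogy of "state is saved, the subclass's code is not"; it is not Keras's actual loader, just an illustration of the mechanism:

```python
import pickle

class Model:                        # stands in for tf.keras.Model
    def train_step(self):
        return "default train_step"

class CustomModel(Model):           # subclass with an overridden train_step
    def __init__(self):
        self.weights = [1, 2, 3]
    def train_step(self):
        return "custom train_step"

saved = pickle.dumps(CustomModel().__dict__)   # only the state is persisted

restored = Model()                  # a loader that only knows the base class
restored.__dict__.update(pickle.loads(saved))

print(restored.weights)             # the weights survive...
print(restored.train_step())        # ...the overridden method does not
```

This is why the usual workaround is to re-instantiate the subclass yourself and call `load_weights`, so the Python class (with its overrides) comes from your code rather than from the saved artifact.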
tensorflowtensorflow
Bug in backward gradients for tf.keras.layers.Conv2DTranspose
Bug
Issue type: Bug
Have you reproduced the bug with tensorflow-nightly: No
Source: source
TensorFlow version: tf 2.14.0
Custom code: Yes
OS platform and distribution: Windows / Colab
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior: The shape of res_backward is (1, 5, 6, 2). The shape of res_forward is (1, 5, 6, 2). The shape of grads_jvp is (1, 5, 6, 2), while the shape of the backward gradients is (1, 5, 6, 2, 1, 5, 6, 2). I think both the value and the shape of grads_backward and grads_jvp should be the same.

Standalone code to reproduce the issue:

```python
filters = 2
kernel_size_0, kernel_size_1 = 3, 3
kernel_size = (kernel_size_0, kernel_size_1)
strides_0, strides_1 = 1, 1
strides = (strides_0, strides_1)
padding = "same"
output_padding = None
data_format = "channels_last"
dilation_rate_0, dilation_rate_1 = 2, 2
dilation_rate = (dilation_rate_0, dilation_rate_1)
activation = "linear"
use_bias = True
kernel_initializer = None
bias_initializer = None
kernel_regularizer = None
bias_regularizer = None
activity_regularizer = None
kernel_constraint = None
bias_constraint = None
inputs_0_tensor = tf.random.uniform([1, 5, 6, 1], minval=-2, maxval=2, dtype=tf.float32)
inputs_0 = tf.identity(inputs_0_tensor)
conv2dtranspose_class = tf.keras.layers.Conv2DTranspose(
    filters, kernel_size, strides=strides, padding=padding,
    output_padding=output_padding, data_format=data_format,
    dilation_rate=dilation_rate, activation=activation, use_bias=use_bias,
    kernel_initializer=kernel_initializer, bias_initializer=bias_initializer,
    kernel_regularizer=kernel_regularizer, bias_regularizer=bias_regularizer,
    activity_regularizer=activity_regularizer,
    kernel_constraint=kernel_constraint, bias_constraint=bias_constraint,
    dtype=tf.float32)
layer = conv2dtranspose_class
inputs = inputs_0
with tf.GradientTape(persistent=True) as g:
    g.watch(inputs)
    res_backward = layer(inputs)
grads_backward = g.jacobian(res_backward, res_backward)
print("res_backward", res_backward)
print("grads_backward", grads_backward)

tangent = tf.constant(1., dtype=tf.float32, shape=[1, 5, 6, 1])
with tf.autodiff.ForwardAccumulator(inputs, tangent) as acc:
    res_forward = layer(inputs)
grads_jvp = acc.jvp(res_forward)
print("res_forward", res_forward)
print("grads_forward", grads_jvp)
```

Relevant log output: No response
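The consistency the report expects can be stated precisely: for any layer, contracting the full Jacobian (obtained in reverse mode) with a tangent must reproduce the JVP (obtained in forward mode), and that contraction has the output's shape, not the Jacobian's (output-shape + input-shape) rank. A NumPy sketch of this identity on a plain linear map, as my own illustrative check rather than the Conv2DTranspose reproduction above:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))        # linear map f(x) = W @ x
x = rng.standard_normal(4)
tangent = np.ones(4)

jacobian = W                           # d f / d x for a linear map is just W
jvp = W @ tangent                      # forward-mode directional derivative

# Jacobian contracted with the tangent equals the JVP...
print(np.allclose(jacobian @ tangent, jvp))
# ...and shares the output's shape (6,), unlike the full Jacobian (6, 4).
print(jvp.shape, jacobian.shape)
```

Comparing `tape.jacobian` output to `acc.jvp` therefore only makes sense after this contraction; comparing their raw shapes will always differ by the input dimensions.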
tensorflowtensorflow
Can't recognize GPU in TensorFlow
Bug
Issue type: Bug
Have you reproduced the bug with tensorflow-nightly: No
Source: binary
TensorFlow version: 2.14
Custom code: Yes
OS platform and distribution: Windows 10
Mobile device: No response
Python version: 3.11.1
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: 11.2 / 8.1
GPU model and memory: NVIDIA GTX 1060, 6 GB

Current behavior: I installed TensorFlow and CUDA/cuDNN but cannot see the GPU in TensorFlow. For

    print("GPUs:", tf.config.list_physical_devices('GPU'))

the output is "GPUs: []". The output of print(tf.config.list_physical_devices()) is [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')].

Standalone code to reproduce the issue:

```python
import tensorflow as tf
print(tf.config.list_physical_devices())
print("GPUs:", tf.config.list_physical_devices('GPU'))
```

Relevant log output: No response
tensorflowtensorflow
Callbacks tf.keras.callbacks.ModelCheckpoint and tf.keras.callbacks.EarlyStopping set_params not working
Bug
Issue type: Bug
Have you reproduced the bug with tensorflow-nightly: No
Source: source
TensorFlow version: v2.14.0-rc1-21-g4dacf3f368e 2.14.0
Custom code: Yes
OS platform and distribution: Google Colab
Python version: Python 3.10.12

tf.keras.callbacks.ModelCheckpoint and tf.keras.callbacks.EarlyStopping set_params do not work as assumed:

```python
import tensorflow as tf  # load tensorflow

ckpt = tf.keras.callbacks.ModelCheckpoint("/content/")  # make check point
ckpt_params = ckpt.__dict__  # get all parameters
print("parameters", ckpt_params)  # see parameters dictionary
ckpt_params["save_best_only"] = True  # update parameter save_best_only to True (default False)
ckpt_params["save_weights_only"] = True  # update parameter save_weights_only to True (default False)
ckpt.set_params(ckpt_params)  # try to make updated parameters
print("is same as updated", ckpt_params == ckpt.__dict__)  # check whether both are the same
# Both look the same, but if you look carefully, ckpt_params and ckpt.__dict__
# will contain a copy of themselves under the new key "params".
print("parameters", ckpt_params)  # see parameters dictionary
```

```shell
parameters {'validation_data': None, 'model': None, '_chief_worker_only': False, '_supports_tf_logs': True, 'monitor': 'val_loss', 'verbose': 0, 'filepath': '/content/', 'save_best_only': False, 'save_weights_only': False, 'save_freq': 'epoch', 'epochs_since_last_save': 0, '_batches_seen_since_last_saving': 0, '_last_batch_seen': 0, 'best': inf, '_options': None, 'load_weights_on_restart': False, 'period': 1, 'monitor_op': <...>}
is same as updated True
parameters {'validation_data': None, 'model': None, '_chief_worker_only': False, '_supports_tf_logs': True, 'monitor': 'val_loss', 'verbose': 0, 'filepath': '/content/', 'save_best_only': True, 'save_weights_only': True, 'save_freq': 'epoch', 'epochs_since_last_save': 0, '_batches_seen_since_last_saving': 0, '_last_batch_seen': 0, 'best': inf, '_options': None, 'load_weights_on_restart': False, 'period': 1, 'monitor_op': <...>, 'params': {...}}
```

This change can be seen by making a copy of ckpt_params:

```python
import tensorflow as tf  # load tensorflow

ckpt = tf.keras.callbacks.ModelCheckpoint("/content/")  # make check point
ckpt_params = ckpt.__dict__.copy()  # get all parameters
print("parameters", ckpt_params)  # see parameters dictionary
ckpt_params["save_best_only"] = True  # update parameter save_best_only to True (default False)
ckpt_params["save_weights_only"] = True  # update parameter save_weights_only to True (default False)
ckpt_params_deep_copy = {key: value for key, value in ckpt_params.items()}  # make deep copy of ckpt_params
ckpt.set_params(ckpt_params)  # try to make updated parameters
print("is same as updated", ckpt_params == ckpt.__dict__)  # check whether both are the same
print("uncommon parameter keys", [key for key in ckpt.__dict__ if key not in ckpt_params])  # see parameter dictionary
print("uncommon parameter", ckpt.__dict__["params"])  # "params", which is a copy of itself only
```

```shell
parameters {'validation_data': None, 'model': None, '_chief_worker_only': False, '_supports_tf_logs': True, 'monitor': 'val_loss', 'verbose': 0, 'filepath': '/content/', 'save_best_only': False, 'save_weights_only': False, 'save_freq': 'epoch', 'epochs_since_last_save': 0, '_batches_seen_since_last_saving': 0, '_last_batch_seen': 0, 'best': inf, '_options': None, 'load_weights_on_restart': False, 'period': 1, 'monitor_op': <...>}
is same as updated False
uncommon parameter keys ['params']
uncommon parameter {'validation_data': None, 'model': None, '_chief_worker_only': False, '_supports_tf_logs': True, 'monitor': 'val_loss', 'verbose': 0, 'filepath': '/content/', 'save_best_only': True, 'save_weights_only': True, 'save_freq': 'epoch', 'epochs_since_last_save': 0, '_batches_seen_since_last_saving': 0, '_last_batch_seen': 0, 'best': inf, '_options': None, 'load_weights_on_restart': False, 'period': 1, 'monitor_op': <...>}
```

Worse is with tf.keras.callbacks.EarlyStopping: it not only makes the copy but also doesn't update the parameters:

```python
import tensorflow as tf  # load tensorflow

lstp = tf.keras.callbacks.EarlyStopping()  # make check point
lstp_params = lstp.__dict__.copy()  # get all parameters
print("parameters", lstp_params)  # see parameters dictionary
lstp_params["patience"] = 10  # update parameter patience to 10 (default 0)
lstp_params["verbose"] = 1  # update parameter verbose to 1 (default 0)
print("updated parameters", lstp_params)  # see updated parameters
lstp.set_params(lstp_params.copy())  # try to make updated parameters
print("is same as updated", lstp_params == lstp.__dict__)  # check whether both are the same
# Even the values are not updated from lstp_params, and lstp.__dict__ will
# contain a copy of itself under the new key "params".
print("object parameters", lstp.__dict__)  # see parameters dictionary
```

```shell
parameters {'validation_data': None, 'model': None, '_chief_worker_only': None, '_supports_tf_logs': False, 'monitor': 'val_loss', 'patience': 0, 'verbose': 0, 'baseline': None, 'min_delta': 0, 'wait': 0, 'stopped_epoch': 0, 'restore_best_weights': False, 'best_weights': None, 'start_from_epoch': 0, 'monitor_op': <...>}
updated parameters {'validation_data': None, 'model': None, '_chief_worker_only': None, '_supports_tf_logs': False, 'monitor': 'val_loss', 'patience': 10, 'verbose': 1, 'baseline': None, 'min_delta': 0, 'wait': 0, 'stopped_epoch': 0, 'restore_best_weights': False, 'best_weights': None, 'start_from_epoch': 0, 'monitor_op': <...>}
is same as updated False
object parameters {'validation_data': None, 'model': None, '_chief_worker_only': None, '_supports_tf_logs': False, 'monitor': 'val_loss', 'patience': 0, 'verbose': 0, 'baseline': None, 'min_delta': 0, 'wait': 0, 'stopped_epoch': 0, 'restore_best_weights': False, 'best_weights': None, 'start_from_epoch': 0, 'monitor_op': <...>, 'params': {'validation_data': None, 'model': None, '_chief_worker_only': None, '_supports_tf_logs': False, 'monitor': 'val_loss', 'patience': 10, 'verbose': 1, 'baseline': None, 'min_delta': 0, 'wait': 0, 'stopped_epoch': 0, 'restore_best_weights': False, 'best_weights': None, 'start_from_epoch': 0, 'monitor_op': <...>}}
```
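The nested "params" key that both snippets observe follows directly from passing the callback's own __dict__ into set_params: the base implementation simply stores the dict it is given on self.params, so handing it a live reference to __dict__ makes the dict a member of itself, and handing it a copy updates only self.params, never the real attributes. A pure-Python sketch of that mechanism, using a stand-in class rather than the real Keras callback (whose set_params is, as far as I can tell, the same one-liner):

```python
class Callback:                     # stand-in for tf.keras.callbacks.Callback
    def __init__(self):
        self.params = None
        self.verbose = 0

    def set_params(self, params):
        # Mirrors the base implementation: just store the given dict.
        self.params = params

cb = Callback()
params = cb.__dict__                # a live reference, not a copy
params["verbose"] = 1               # edits the object's attribute directly
cb.set_params(params)
# set_params inserted the dict into itself under the key "params":
print(cb.__dict__["params"] is cb.__dict__)

cb2 = Callback()
cb2.set_params({"verbose": 5})      # passing a fresh dict instead...
print(cb2.verbose)                  # ...leaves the real attribute untouched
```

So the attribute updates seen in the first ModelCheckpoint snippet came from mutating __dict__ directly, not from set_params; set_params by itself never writes individual attributes back.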
tensorflowtensorflow
tensorflow doesn t detect cuda driver
Bug
Issue type: Bug
Have you reproduced the bug with tensorflow-nightly: No
Source: binary
TensorFlow version: 2.13.1
Custom code: No
OS platform and distribution: Gentoo Linux 6.1.57-gentoo x86_64
Mobile device: No response
Python version: 3.11.5
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: 11.8 / 8.7.0.84
GPU model and memory: NVIDIA GeForce GTX 1650 Mobile / Max-Q

Current behaviour: the list of CUDA-capable devices is empty:

```sh
$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2023-11-06 01:01:39.145881: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-11-06 01:01:39.200934: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-11-06 01:01:39.201518: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-11-06 01:01:40.245608: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-11-06 01:01:41.021740: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:995] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at L344-L355.
2023-11-06 01:01:41.022476: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide for how to download and setup the required libraries for your platform. Skipping registering GPU devices...
```

Expected behaviour: the list of CUDA-capable devices contains one item.

I would be happy to reproduce the bug in tf-nightly, but I can't even install it due to broken dependencies with tensorrt:

```sh
$ python3 -m pip install tf-nightly[and-cuda]
```
```sh
Collecting tf-nightly[and-cuda]
  Using cached tf_nightly-2.16.0.dev20231103-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.5 kB)
[cached wheels are then collected for absl-py 2.0.0, astunparse 1.6.3, flatbuffers 23.5.26, gast 0.5.4, google-pasta 0.2.0, h5py 3.10.0, libclang 16.0.6, ml-dtypes 0.3.1, opt-einsum 3.3.0, packaging 23.2, protobuf 4.25.0, six 1.16.0, termcolor 2.3.0, typing-extensions 4.8.0, wrapt 1.14.1, grpcio 1.59.2, tb-nightly 2.16.0a20231105, tf-estimator-nightly 2.14.0.dev2023080308, keras-nightly 3.0.0.dev2023110403, tensorflow-io-gcs-filesystem 0.34.0, numpy 1.26.1, and the nvidia-*-cu12 CUDA 12.2 packages (cublas 12.2.5.6, cuda-cupti 12.2.142, cuda-nvcc 12.2.140, cuda-nvrtc 12.2.140, cuda-runtime 12.2.140, cudnn 8.9.4.25, cufft 11.0.8.103, curand 10.3.3.141, cusolver 11.5.2.141, cusparse 12.1.2.141, nccl 2.18.3, nvjitlink 12.2.140)]
Requirement already satisfied: setuptools in miniconda3/envs/tf_test/lib/python3.11/site-packages (from tf-nightly[and-cuda]) (68.0.0)
Collecting tensorrt==8.6.1.post1 (from tf-nightly[and-cuda])
  Using cached tensorrt-8.6.1.post1.tar.gz (18 kB)
  Preparing metadata (setup.py) ... done
Collecting tensorrt-bindings==8.6.1 (from tf-nightly[and-cuda])
  Using cached tensorrt_bindings-8.6.1-cp311-none-manylinux_2_17_x86_64.whl (980 kB)
INFO: pip is looking at multiple versions of tf-nightly[and-cuda] to determine which version is compatible with other requirements. This could take a while.
[pip then backtracks through dozens of cached nightly wheels, from tf_nightly-2.16.0.dev20231102 down through the 2.15.0.dev series to tf_nightly-2.15.0.dev20230807, along the way also collecting tb-nightly 2.15.0a/2.14.0a, keras-nightly 2.15.0.dev/2.14.0.dev, ml-dtypes 0.2.0, nvidia-nccl-cu12 2.16.5, and the nvidia-*-cu11 CUDA 11.8 packages (cuda-runtime 11.8.89, cublas 11.11.3.6, cufft 10.9.0.58, cudnn 8.7.0.84, curand 10.3.0.86, cusolver 11.4.1.48, cusparse 11.7.5.86, nccl 2.16.5, cuda-cupti 11.8.87, cuda-nvcc 11.8.89)]
INFO: pip is still looking at multiple versions of tf-nightly[and-cuda] to determine which version is compatible with other requirements. This could take a while.
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl-C.
ERROR: Cannot install tf-nightly[and-cuda] 2.15.0.dev20230807, tf-nightly[and-cuda] 2.15.0.dev20230808, tf-nightly[and-cuda] 2.15.0.dev20230809, [... the error enumerates every backtracked nightly through the 2.15.0.dev series ...], tf-nightly[and-cuda] 2.16.0.dev20231013, tf-nightly[and-cuda] 2.16.0.dev20231020, tf-nightly[and-cuda] 2.16.0
```
dev20231021 tf nightly and cuda 2 16 0 dev20231022 tf nightly and cuda 2 16 0 dev20231024 tf nightly and cuda 2 16 0 dev20231025 tf nightly and cuda 2 16 0 dev20231026 tf nightly and cuda 2 16 0 dev20231031 tf nightly and cuda 2 16 0 dev20231101 tf nightly and cuda 2 16 0 dev20231102 and tf nightly and cuda 2 16 0 dev20231103 because these package version have conflicting dependency the conflict be cause by tf nightly and cuda 2 16 0 dev20231103 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 16 0 dev20231102 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 16 0 dev20231101 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 16 0 dev20231031 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 16 0 dev20231026 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 16 0 dev20231025 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 16 0 dev20231024 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 16 0 dev20231022 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 16 0 dev20231021 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 16 0 dev20231020 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 16 0 dev20231013 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20231012 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20231011 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20231010 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20231009 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20231006 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20231005 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20231004 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20231003 depend on tensorrt lib 8 6 1 extra and cuda tf nightly 
and cuda 2 15 0 dev20231002 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20231001 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230930 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230929 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230928 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230927 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230926 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230925 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230924 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230923 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230922 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230921 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230920 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230919 depend on tensorrt lib 8 6 1 extra and cuda tf nightly and cuda 2 15 0 dev20230918 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230917 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230916 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230915 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230914 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230913 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230911 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230910 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230909 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230908 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230907 
depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230906 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230904 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230903 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230902 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230901 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230831 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230830 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230829 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230828 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230827 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230826 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230825 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230824 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230817 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230816 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230815 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230814 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230813 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230812 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230811 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230810 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230809 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230808 depend on tensorrt 8 5 3 1 extra and cuda tf nightly and cuda 2 15 0 dev20230807 depend on tensorrt 8 5 3 1 extra and cuda to fix this 
you could try to: 1. loosen the range of package versions you've specified 2. remove package versions to allow pip to attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit the pip guide on dealing with dependency conflicts

Standalone code to reproduce the issue (I've just copied the commands from the official website):

```shell
python3 -m pip install tensorflow[and-cuda]
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```

Relevant log output. Driver:

```
$ nvidia-smi
NVIDIA-SMI 535.113.01    Driver Version: 535.113.01    CUDA Version: 12.2
GPU  Name                    Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC
Fan  Temp  Perf  Pwr:Usage/Cap            |  Memory-Usage        | GPU-Util  Compute M. | MIG M.
  0  NVIDIA GeForce GTX 1650  Off | 00000000:01:00.0  On | N/A
     N/A  43C  P8  3W / 50W              | 119MiB / 4096MiB     | 7%  Default | N/A
Processes:
 GPU  GI   CI   PID    Type  Process name                              GPU Memory Usage
   0  N/A  N/A  15023  C+G   ...95206080 4704969098354582108 262144    36MiB
   0  N/A  N/A  26218  G     /usr/bin/X                                81MiB
```

Log of the tensorflow[and-cuda] installation: collect tensorflow and cuda download tensorflow 2 14 0 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl metadata 4 1 kb collect absl py 1 0 0 from tensorflow and cuda use cache absl py 2 0 0 py3 none any whl metadata 2 3 kb collect astunparse 1 6 0 from tensorflow and cuda use cache astunparse 1 6 3 py2 py3 none any whl 12 kb collect flatbuffer 23 5 26 from tensorflow and cuda use cache flatbuffer 23 5 26 py2 py3 none any whl metadata 850 byte collect gast 0 5 0 0 5 1 0 5 2 0 2 1 from tensorflow and cuda use cache gast 0 5 4 py3 none any whl 19 kb collect google pasta 0 1 1 from tensorflow and cuda use cache google pasta 0 2 0 py3 none any whl 57 kb collect h5py 2 9 0 from tensorflow and cuda use cache h5py 3 10 0 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl metadata 2 5 kb collect libclang 13 0 0 from tensorflow and cuda use cache libclang 16 0 6 py2 py3 none manylinux2010 x86 64 whl metadata 5 2 kb collect ml dtype 0 2 0 from tensorflow and cuda use cache ml dtype 0 2 0 cp311 cp311
manylinux 2 17 x86 64 manylinux2014 x86 64 whl metadata 20 kb collect numpy 1 23 5 from tensorflow and cuda use cache numpy 1 26 1 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl metadata 61 kb collect opt einsum 2 3 2 from tensorflow and cuda use cache opt einsum 3 3 0 py3 none any whl 65 kb collect packaging from tensorflow and cuda use cache packaging 23 2 py3 none any whl metadata 3 2 kb collect protobuf 4 21 0 4 21 1 4 21 2 4 21 3 4 21 4 4 21 5 5 0 0dev 3 20 3 from tensorflow and cuda use cache protobuf 4 25 0 cp37 abi3 manylinux2014 x86 64 whl metadata 541 byte requirement already satisfy setuptool in miniconda3 envs tf test lib python3 11 site package from tensorflow and cuda 68 0 0 collect six 1 12 0 from tensorflow and cuda use cache six 1 16 0 py2 py3 none any whl 11 kb collect termcolor 1 1 0 from tensorflow and cuda use cache termcolor 2 3 0 py3 none any whl 6 9 kb collect type extension 3 6 6 from tensorflow and cuda use cache typing extension 4 8 0 py3 none any whl metadata 3 0 kb collect wrapt 1 15 1 11 0 from tensorflow and cuda use cache wrapt 1 14 1 cp311 cp311 manylinux 2 5 x86 64 manylinux1 x86 64 manylinux 2 17 x86 64 manylinux2014 x86 64 whl metadata 6 7 kb collect tensorflow io gcs filesystem 0 23 1 from tensorflow and cuda use cache tensorflow io gcs filesystem 0 34 0 cp311 cp311 manylinux 2 12 x86 64 manylinux2010 x86 64 whl metadata 14 kb collect grpcio 2 0 1 24 3 from tensorflow and cuda use cache grpcio 1 59 2 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl metadata 4 0 kb collect tensorboard 2 15 2 14 from tensorflow and cuda download tensorboard 2 14 1 py3 none any whl metadata 1 7 kb collect tensorflow estimator 2 15 2 14 0 from tensorflow and cuda download tensorflow estimator 2 14 0 py2 py3 none any whl metadata 1 3 kb collect keras 2 15 2 14 0 from tensorflow and cuda download keras 2 14 0 py3 none any whl metadata 2 4 kb collect nvidia cuda runtime cu11 11 8 89 from tensorflow and cuda use cache nvidia cuda 
runtime cu11 11 8 89 py3 none manylinux1 x86 64 whl 875 kb collect nvidia cubla cu11 11 11 3 6 from tensorflow and cuda use cache nvidia cubla cu11 11 11 3 6 py3 none manylinux1 x86 64 whl 417 9 mb collect nvidia cufft cu11 10 9 0 58 from tensorflow and cuda use cache nvidia cufft cu11 10 9 0 58 py3 none manylinux1 x86 64 whl 168 4 mb collect nvidia cudnn cu11 8 7 0 84 from tensorflow and cuda use cache nvidia cudnn cu11 8 7 0 84 py3 none manylinux1 x86 64 whl 728 5 mb collect nvidia curand cu11 10 3 0 86 from tensorflow and cuda use cache nvidia curand cu11 10 3 0 86 py3 none manylinux1 x86 64 whl 58 1 mb collect nvidia cusolver cu11 11 4 1 48 from tensorflow and cuda use cache nvidia cusolver cu11 11 4 1 48 py3 none manylinux1 x86 64 whl 128 2 mb collect nvidia cusparse cu11 11 7 5 86 from tensorflow and cuda use cache nvidia cusparse cu11 11 7 5 86 py3 none manylinux1 x86 64 whl 204 1 mb collect nvidia nccl cu11 2 16 5 from tensorflow and cuda use cache nvidia nccl cu11 2 16 5 py3 none manylinux1 x86 64 whl 210 3 mb collect nvidia cuda cupti cu11 11 8 87 from tensorflow and cuda use cache nvidia cuda cupti cu11 11 8 87 py3 none manylinux1 x86 64 whl 13 1 mb collect nvidia cuda nvcc cu11 11 8 89 from tensorflow and cuda use cache nvidia cuda nvcc cu11 11 8 89 py3 none manylinux1 x86 64 whl 19 5 mb info pip be look at multiple version of tensorflow and cuda to determine which version be compatible with other requirement this could take a while collect tensorflow and cuda download tensorflow 2 13 1 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl metadata 3 4 kb warn tensorflow 2 13 1 do not provide the extra and cuda collect gast 0 4 0 0 2 1 from tensorflow and cuda download gast 0 4 0 py3 none any whl 9 8 kb collect keras 2 14 2 13 1 from tensorflow and cuda download keras 2 13 1 py3 none any whl metadata 2 4 kb collect numpy 1 24 3 1 22 from tensorflow and cuda download numpy 1 24 3 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl 17 3 mb 17 
3 17 3 mb 10 9 mb s eta 0 00 00 collect tensorboard 2 14 2 13 from tensorflow and cuda download tensorboard 2 13 0 py3 none any whl 5 6 mb 5 6 5 6 mb 11 1 mb s eta 0 00 00 collect tensorflow estimator 2 14 2 13 0 from tensorflow and cuda download tensorflow estimator 2 13 0 py2 py3 none any whl metadata 1 3 kb collect type extension 4 6 0 3 6 6 from tensorflow and cuda download type extension 4 5 0 py3 none any whl 27 kb collect wrapt 1 11 0 from tensorflow and cuda download wrapt 1 15 0 cp311 cp311 manylinux 2 5 x86 64 manylinux1 x86 64 manylinux 2 17 x86 64 manylinux2014 x86 64 whl 78 kb 78 9 78 9 kb 7 1 mb s eta 0 00 00 requirement already satisfied wheel 1 0 0 23 0 in miniconda3 envs tf test lib python3 11 site package from astunparse 1 6 0 tensorflow and cuda 0 41 2 collect google auth 3 1 6 3 from tensorboard 2 14 2 13 tensorflow and cuda download google auth 2 23 4 py2 py3 none any whl metadata 4 7 kb collect google auth oauthlib 1 1 0 5 from tensorboard 2 14 2 13 tensorflow and cuda download google auth oauthlib 1 0 0 py2 py3 none any whl 18 kb collect markdown 2 6 8 from tensorboard 2 14 2 13 tensorflow and cuda download markdown 3 5 1 py3 none any whl metadata 7 1 kb collect request 3 2 21 0 from tensorboard 2 14 2 13 tensorflow and cuda download request 2 31 0 py3 none any whl metadata 4 6 kb collect tensorboard datum server 0 8 0 0 7 0 from tensorboard 2 14 2 13 tensorflow and cuda download tensorboard datum server 0 7 2 py3 none manylinux 2 31 x86 64 whl metadata 1 1 kb collect werkzeug 1 0 1 from tensorboard 2 14 2 13 tensorflow and cuda download werkzeug 3 0 1 py3 none any whl metadata 4 1 kb collect cachetool 6 0 2 0 0 from google auth 3 1 6 3 tensorboard 2 14 2 13 tensorflow and cuda download cachetool 5 3 2 py3 none any whl metadata 5 2 kb collect pyasn1 module 0 2 1 from google auth 3 1 6 3 tensorboard 2 14 2 13 tensorflow and cuda download pyasn1 module 0 3 0 py2 py3 none any whl 181 kb 181 3 181 3 kb 6 6 mb s eta 0 00 00 collect rsa 5 3 1 4 
from google auth 3 1 6 3 tensorboard 2 14 2 13 tensorflow and cuda download rsa 4 9 py3 none any whl 34 kb collect request oauthlib 0 7 0 from google auth oauthlib 1 1 0 5 tensorboard 2 14 2 13 tensorflow and cuda download request oauthlib 1 3 1 py2 py3 none any whl 23 kb collect charset normalizer 4 2 from request 3 2 21 0 tensorboard 2 14 2 13 tensorflow and cuda download charset normalizer 3 3 2 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl metadata 33 kb collect idna 4 2 5 from request 3 2 21 0 tensorboard 2 14 2 13 tensorflow and cuda download idna 3 4 py3 none any whl 61 kb 61 5 61 5 kb 6 4 mb s eta 0 00 00 collect urllib3 3 1 21 1 from request 3 2 21 0 tensorboard 2 14 2 13 tensorflow and cuda download urllib3 2 0 7 py3 none any whl metadata 6 6 kb collect certifi 2017 4 17 from request 3 2 21 0 tensorboard 2 14 2 13 tensorflow and cuda download certifi 2023 7 22 py3 none any whl metadata 2 2 kb collect markupsafe 2 1 1 from werkzeug 1 0 1 tensorboard 2 14 2 13 tensorflow and cuda download markupsafe 2 1 3 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl metadata 3 0 kb collect pyasn1 0 6 0 0 4 6 from pyasn1 module 0 2 1 google auth 3 1 6 3 tensorboard 2 14 2 13 tensorflow and cuda download pyasn1 0 5 0 py2 py3 none any whl 83 kb 83 9 83 9 kb 7 6 mb s eta 0 00 00 collect oauthlib 3 0 0 from request oauthlib 0 7 0 google auth oauthlib 1 1 0 5 tensorboard 2 14 2 13 tensorflow and cuda download oauthlib 3 2 2 py3 none any whl 151 kb 151 7 151 7 kb 9 0 mb s eta 0 00 00 download absl py 2 0 0 py3 none any whl 130 kb 130 2 130 2 kb 5 7 mb s eta 0 00 00 download flatbuffer 23 5 26 py2 py3 none any whl 26 kb download grpcio 1 59 2 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl 5 3 mb 5 3 5 3 mb 10 8 mb s eta 0 00 00 download h5py 3 10 0 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl 4 8 mb 4 8 4 8 mb 10 9 mb s eta 0 00 00 download keras 2 13 1 py3 none any whl 1 7 mb 1 7 1 7 mb 10 2 mb s eta 0 00 00 download libclang 16 
0 6 py2 py3 none manylinux2010 x86 64 whl 22 9 mb 22 9 22 9 mb 10 9 mb s eta 0 00 00 download protobuf 4 25 0 cp37 abi3 manylinux2014 x86 64 whl 294 kb 294 4 294 4 kb 10 0 mb s eta 0 00 00 download tensorflow estimator 2 13 0 py2 py3 none any whl 440 kb 440 8 440 8 kb 8 7 mb s eta 0 00 00 download tensorflow io gcs filesystem 0 34 0 cp311 cp311 manylinux 2 12 x86 64 manylinux2010 x86 64 whl 2 4 mb 2 4 2 4 mb 11 2 mb s eta 0 00 00 download package 23 2 py3 none any whl 53 kb 53 0 53 0 kb 6 1 mb s eta 0 00 00 download tensorflow 2 13 1 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl 479 7 mb 479 7 479 7 mb 6 0 mb s eta 0 00 00 download google auth 2 23 4 py2 py3 none any whl 183 kb 183 3 183 3 kb 8 9 mb s eta 0 00 00 download markdown 3 5 1 py3 none any whl 102 kb 102 2 102 2 kb 8 1 mb s eta 0 00 00 download request 2 31 0 py3 none any whl 62 kb 62 6 62 6 kb 6 2 mb s eta 0 00 00 download tensorboard datum server 0 7 2 py3 none manylinux 2 31 x86 64 whl 6 6 mb 6 6 6 6 mb 11 1 mb s eta 0 00 00 download werkzeug 3 0 1 py3 none any whl 226 kb 226 7 226 7 kb 10 6 mb s eta 0 00 00 download cachetool 5 3 2 py3 none any whl 9 3 kb download certifi 2023 7 22 py3 none any whl 158 kb 158 3 158 3 kb 10 5 mb s eta 0 00 00 download charset normalizer 3 3 2 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl 140 kb 140 3 140 3 kb 8 9 mb s eta 0 00 00 download markupsafe 2 1 3 cp311 cp311 manylinux 2 17 x86 64 manylinux2014 x86 64 whl 28 kb download urllib3 2 0 7 py3 none any whl 124 kb 124 2 124 2 kb 9 0 mb s eta 0 00 00 instal collect package libclang flatbuffer wrapt urllib3 type extension termcolor tensorflow io gcs filesystem tensorflow estimator tensorboard datum server six pyasn1 protobuf packaging oauthlib numpy markupsafe markdown keras idna grpcio gast charset normalizer certifi cachetool absl py werkzeug rsa request pyasn1 module opt einsum h5py google pasta astunparse request oauthlib google auth google auth oauthlib tensorboard tensorflow successfully 
instal markupsafe 2 1 3 absl py 2 0 0 astunparse 1 6 3 cachetool 5 3 2 certifi 2023 7 22 charset normalizer 3 3 2 flatbuffer 23 5 26 gast 0 4 0 google auth 2 23 4 google auth oauthlib 1 0 0 google pasta 0 2 0 grpcio 1 59 2 h5py 3 10 0 idna 3 4 kera 2 13 1 libclang 16 0 6 markdown 3 5 1 numpy 1 24 3 oauthlib 3 2 2 opt einsum 3 3 0 packaging 23 2 protobuf 4 25 0 pyasn1 0 5 0 pyasn1 module 0 3 0 request 2 31 0 request oauthlib 1 3 1 rsa 4 9 six 1 16 0 tensorboard 2 13 0 tensorboard datum server 0 7 2 tensorflow 2 13 1 tensorflow estimator 2 13 0 tensorflow io gcs filesystem 0 34 0 termcolor 2 3 0 type extension 4 5 0 urllib3 2 0 7 werkzeug 3 0 1 wrapt 1 15 0
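The resolver's two suggestions can be made concrete. Both commands below are illustrative guesses rather than fixes confirmed in this thread: the first sidesteps the nightly's `tensorrt-libs` pin by installing the stable release (which is what pip's backtracking eventually fell back to, minus the extra), the second points pip at NVIDIA's package index, where the TensorRT wheels are published:

```shell
# Option 1: install the stable release instead of a tf-nightly[and-cuda] build
python3 -m pip install "tensorflow[and-cuda]==2.14.0"

# Option 2 (assumption): add NVIDIA's index so pip can resolve tensorrt-libs
python3 -m pip install --extra-index-url https://pypi.nvidia.com "tensorflow[and-cuda]"
```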
tensorflowtensorflow
incorrect/outdated link in beginner quickstart guide
Bug
it has a link to the MNIST dataset. I assume that used to be the right link, but the page now requires a login to visit. So should the link be removed, or is there a new website? Thanks
tensorflowtensorflow
different behavior of tf.raw_ops.RGBToHSV with jit_compile=True
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: No. Source: source. TensorFlow version: 2.14.0. Custom code: Yes. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: 11.8. GPU model and memory: GPU 0: NVIDIA GeForce RTX 2070, GPU 1: NVIDIA GeForce RTX 2070, GPU 2: NVIDIA GeForce RTX 2070, GPU 3: NVIDIA GeForce RTX 2070.

Current behavior: when the tf.raw_ops.RGBToHSV operation is invoked within a tf.function with JIT compilation enabled (jit_compile=True), it produces different results compared to the same operation called without JIT compilation. This inconsistency is observed when the code is executed on a CPU device.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import traceback

class Network(tf.Module):
    def __init__(self):
        super().__init__()

    @tf.function(jit_compile=True)
    def __call__(self, x):
        x = tf.raw_ops.RGBToHSV(images=x)
        return x

m = Network()
inp = {"x": tf.random.normal([9, 8, 6, 3], dtype=tf.bfloat16)}
with tf.device('/CPU:0'):
    tf.config.run_functions_eagerly(True)
    no_op_res = m(**inp)
tf.config.run_functions_eagerly(False)
with tf.device('/CPU:0'):
    op_res = m(**inp)
tf.debugging.assert_near(tf.cast(no_op_res, tf.float64), tf.cast(op_res, tf.float64),
                         atol=0.001, rtol=0.001)
```

Relevant log output:

```
File "/home/guihuan/llm_result/tf_2/2023_10_22_20_21/test.py", line 26, in <module>
    tf.debugging.assert_near(tf.cast(no_op_res, tf.float64), tf.cast(op_res, tf.float64), atol=0.001, rtol=0.001)
File "/home/guihuan/conda/envs/night/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
File "/home/guihuan/conda/envs/night/lib/python3.9/site-packages/tensorflow/python/ops/control_flow_assert.py", line 102, in _assert
    raise errors.InvalidArgumentError(...)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: b'x and y not equal to tolerance rtol = 0.001, atol = 0.001',
b'x (shape=(9, 8, 6, 3), dtype=float64): 0.6328125, 1.59375, 0.765625, ...',
b'y (shape=(9, 8, 6, 3), dtype=float64): 0.6328125, 1.59375, 0.765625, ...'
```
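When the JIT and non-JIT paths disagree, an independent oracle helps decide which result drifted. Python's stdlib colorsys implements the same RGB-to-HSV formula for components in [0, 1]; note that tf.random.normal also produces negative and out-of-range "pixels", and how the TF kernel treats those is not established by this issue, so this check only applies to in-range inputs:

```python
import colorsys

# Stdlib reference conversion, usable as an oracle for in-range pixels when
# the compiled and uncompiled RGBToHSV kernels disagree.
r, g, b = 0.25, 0.5, 0.75
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)  # hue is in [0, 1), like tf.raw_ops.RGBToHSV
```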
tensorflowtensorflow
different behavior of tf.raw_ops.Cos + tf.raw_ops.Square with jit_compile=True
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: No. Source: source. TensorFlow version: 2.14.0. Custom code: Yes. OS platform and distribution: Ubuntu 22.04.3 LTS x86_64. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: 11.8. GPU model and memory: GPU 0: NVIDIA GeForce RTX 2070, GPU 1: NVIDIA GeForce RTX 2070, GPU 2: NVIDIA GeForce RTX 2070, GPU 3: NVIDIA GeForce RTX 2070.

Current behavior: when the tf.raw_ops.Cos + tf.raw_ops.Square combination is invoked within a tf.function with JIT compilation enabled (jit_compile=True), it produces different results compared to the same operations called without JIT compilation. This inconsistency is observed when the code is executed on a GPU device. The problem occurs when the input tensor passes through raw_ops.Cos and raw_ops.Square; with the individual ops there is no issue.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import traceback

class Network(tf.Module):
    def __init__(self):
        super().__init__()

    @tf.function(jit_compile=True)
    def __call__(self, x):
        x = tf.raw_ops.Cos(x=x)
        x = tf.raw_ops.Square(x=x)
        return x

m = Network()
inp = {"x": tf.random.normal([10, 9], dtype=tf.bfloat16)}
with tf.device('/GPU:0'):
    tf.config.run_functions_eagerly(True)
    no_op_res = m(**inp)
tf.config.run_functions_eagerly(False)
with tf.device('/GPU:0'):
    op_res = m(**inp)
tf.debugging.assert_near(tf.cast(no_op_res, tf.float64), tf.cast(op_res, tf.float64),
                         atol=0.001, rtol=0.001)
```

Relevant log output:

```
File "/home/guihuan/llm_result/tf_2/2023_10_22_20_21/test.py", line 27, in <module>
    tf.debugging.assert_near(tf.cast(no_op_res, tf.float64), tf.cast(op_res, tf.float64), atol=0.001, rtol=0.001)
File "/home/guihuan/conda/envs/night/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
File "/home/guihuan/conda/envs/night/lib/python3.9/site-packages/tensorflow/python/ops/control_flow_assert.py", line 102, in _assert
    raise errors.InvalidArgumentError(...)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: b'x and y not equal to tolerance rtol = 0.001, atol = 0.001',
b'x (shape=(10, 9), dtype=float64): 0.1396484375, 0.66015625, 0.064453125, ...',
b'y (shape=(10, 9), dtype=float64): 0.1396484375, 0.65625, 0.06494140625, ...'
```
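It is worth noting that the mismatched values reported above (0.66015625 vs 0.65625) differ by exactly 1/256, the spacing between adjacent bfloat16 values at that magnitude, i.e. by one unit in the last place. A stdlib-only sketch of bfloat16 rounding (round-to-nearest-even via the usual bias trick; whether the XLA and eager kernels round this way is an assumption, and NaN/overflow handling is omitted):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Round x to bfloat16 precision: keep the top 16 bits of its float32 form."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    bias = 0x7FFF + ((bits >> 16) & 1)  # round-to-nearest-even bias
    bits = (bits + bias) & 0xFFFF0000   # drop the low 16 mantissa bits
    (y,) = struct.unpack("<f", struct.pack("<I", bits))
    return y

# Adjacent representable values near 0.66 sit one 1/256 step apart, matching
# the x/y gap in the log above.
print(to_bfloat16(0.6601))            # 0.66015625
print(to_bfloat16(0.6601) - 0.65625)  # 0.00390625 (= 2**-8, one ulp here)
```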
tensorflowtensorflow
different behavior of tf.raw_ops.SqrtGrad with jit_compile=True
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: No. Source: source. TensorFlow version: 2.14.0. Custom code: Yes. OS platform and distribution: Ubuntu 22.04.3 LTS x86_64. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: 11.8. GPU model and memory: GPU 0: NVIDIA GeForce RTX 2070, GPU 1: NVIDIA GeForce RTX 2070, GPU 2: NVIDIA GeForce RTX 2070, GPU 3: NVIDIA GeForce RTX 2070.

Current behavior: when the tf.raw_ops.SqrtGrad operation is invoked within a tf.function with JIT compilation enabled (jit_compile=True), it produces different results compared to the same operation called without JIT compilation. This inconsistency is observed when the code is executed on a CPU device.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import traceback

class Network(tf.Module):
    def __init__(self):
        super().__init__()

    @tf.function(jit_compile=True)
    def __call__(self, x):
        real_part = tf.random.normal([], dtype=tf.float64)
        imag_part = tf.random.normal([], dtype=tf.float64)
        tensor = tf.complex(real_part, imag_part)
        tensor = tf.cast(tensor, dtype=tf.complex128)
        x = tf.raw_ops.SqrtGrad(dy=x, y=tensor)
        return x

m = Network()
real_part = tf.random.normal([], dtype=tf.float64)
imag_part = tf.random.normal([], dtype=tf.float64)
tensor = tf.complex(real_part, imag_part)
tensor = tf.cast(tensor, dtype=tf.complex128)
inp = {"x": tensor}
with tf.device('/CPU:0'):
    tf.config.run_functions_eagerly(True)
    no_op_res = m(**inp)
tf.config.run_functions_eagerly(False)
with tf.device('/CPU:0'):
    op_res = m(**inp)
tf.debugging.assert_near(no_op_res, op_res, atol=0.001, rtol=0.001)
```

Relevant log output:

```
File "/home/guihuan/llm_result/tf_2/2023_10_22_20_21/test.py", line 33, in <module>
    tf.debugging.assert_near(no_op_res, op_res, atol=0.001, rtol=0.001)
File "/home/guihuan/conda/envs/night/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
File "/home/guihuan/conda/envs/night/lib/python3.9/site-packages/tensorflow/python/ops/control_flow_assert.py", line 102, in _assert
    raise errors.InvalidArgumentError(...)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: b'x and y not equal to tolerance rtol = 0.001, atol = 0.001',
b'x (shape=(), dtype=complex128): 0.20583126869122956 0.1606528338452279j',
b'y (shape=(), dtype=complex128): 0.2721269549260611 0.24474350338228776j'
```
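For context, SqrtGrad is the backward function for Sqrt: given y = sqrt(x) and an incoming gradient dy, the chain rule gives dy / (2·y). The sketch below is that textbook formula only; whether TF additionally conjugates for complex dtypes, per its complex-gradient convention, is not settled by this report:

```python
# Plain chain-rule sketch of the Sqrt backward rule for complex operands.

def sqrt_grad(y: complex, dy: complex) -> complex:
    # d/dx sqrt(x) = 1 / (2 * sqrt(x)) = 1 / (2 * y)
    return dy / (2 * y)

print(sqrt_grad(1 + 0j, 2 + 0j))      # (1+0j)
print(sqrt_grad(0.5 + 0.5j, 1 + 0j))  # (0.5-0.5j)
```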
tensorflowtensorflow
missing tensorflow.python.training.tracking in version 2.14.0; cannot load pickled models
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: Yes. Source: source. TensorFlow version: 2.14.0. Custom code: Yes. OS platform and distribution: Google Colab. Mobile device: no response. Python version: 3.10. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: N/A. GPU model and memory: no response.

Current behavior: we used to load some pickled TF models using

```python
import pickle
pickle.load(open('/content/cosmopower/cosmopower/trained_models/CP_paper/CMB/cmb_TT_NN.pkl', 'rb'))
```

but as of version 2.14.0 we get:

```
ModuleNotFoundError                       Traceback (most recent call last)
----> 1 pickle.load(open('/content/cosmopower/cosmopower/trained_models/CP_paper/CMB/cmb_TT_NN.pkl', 'rb'))
ModuleNotFoundError: No module named 'tensorflow.python.training.tracking'
```

We checked that there is no error with TF 2.13.0, and everything works as expected. The tensorflow.python.training.tracking module seems to have been removed since 2.14.0, but we were unable to find it in the release notes. This error was observed on Colab as well as on other platforms and OSes.

Standalone code to reproduce the issue:

```shell
git clone   # download the folder with the TF model; this is intended for Colab, change the path if necessary
```

```python
import pickle
pickle.load(open('/content/cosmopower/cosmopower/trained_models/CP_paper/CMB/cmb_TT_NN.pkl', 'rb'))
```
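A common workaround pattern for pickles that reference a moved or removed module is to alias the old dotted path in sys.modules before unpickling (in TF 2.14 the tracking code appears to live under tensorflow.python.trackable, but that target name is an assumption here). A stdlib-only sketch of the aliasing pattern, with hypothetical module names old_pkg and new_pkg standing in for the old and new TF paths:

```python
import pickle
import sys
import types

# Stand-in for the module path the pickle was originally written against.
old_mod = types.ModuleType("old_pkg")

class Model:
    def __init__(self, weights):
        self.weights = weights

Model.__module__ = "old_pkg"  # pretend the class was defined in old_pkg
old_mod.Model = Model
sys.modules["old_pkg"] = old_mod
blob = pickle.dumps(Model([1.0, 2.0]))  # pickle records "old_pkg.Model"

# Simulate a release that removed/renamed the module.
del sys.modules["old_pkg"]
new_mod = types.ModuleType("new_pkg")
new_mod.Model = Model
sys.modules["new_pkg"] = new_mod

# The shim: point the old dotted path at the surviving module, then unpickle.
sys.modules["old_pkg"] = sys.modules["new_pkg"]
restored = pickle.loads(blob)
print(restored.weights)  # [1.0, 2.0]
```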
tensorflowtensorflow
tensorflow/python/eager/forwardprop_test fails on aarch64
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: Yes. Source: source. TensorFlow version: git HEAD. Custom code: No. OS platform and distribution: Ubuntu 20.04. Mobile device: N/A. Python version: 3.9.17. Bazel version: 6.1.0. GCC/compiler version: 17.0.0. CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Current behavior: since Eigen was updated, the unit test tensorflow/python/eager/forwardprop_test now fails. It looks like a recent commit in Eigen is the problem. This seems to only affect aarch64 and not x86.

Standalone code to reproduce the issue:

```shell
bazel test --cache_test_results=no --build_tests_only --config=mkl_aarch64_threadpool \
  --test_env=TF_ENABLE_ONEDNN_OPTS=1 --copt=-flax-vector-conversions --test_env=TF2_BEHAVIOR=1 \
  --test_env=PORTSERVER_ADDRESS=unittest_portserver --test_size_filters=small,medium \
  --test_output=errors --verbose_failures=true --test_keep_going --notest_verbose_timeout_warnings \
  --test_tag_filters=-oss_serial,-no_oss,-oss_excluded,-v1only,-benchmark-test,-no_aarch64,-no_oss_py38,-no_oss_py39,-no_oss_py310 \
  --jobs=75 //tensorflow/python/eager:forwardprop_test
```

Relevant log output:

FAIL: testNumericHigherOrder (__main__.ForwardpropTest)
ForwardpropTest.testNumericHigherOrder (warm up, get object counts, run the test, check for new objects)
Traceback (most recent call last):
File home andrew src tf test tensorflow git bazel ci build cache cache bazel bazel andrew
eab0d61a99b6696edb3d2aff87b585e8 execroot org tensorflow bazel out aarch64 opt bin tensorflow python eager forwardprop test cpu runfiles org tensorflow tensorflow python eager forwardprop test py line 215 in test gradient test gradient file home andrew src tf test tensorflow git bazel ci build cache cache bazel bazel andrew eab0d61a99b6696edb3d2aff87b585e8 execroot org tensorflow bazel out aarch64 opt bin tensorflow python eager forwardprop test cpu runfiles org tensorflow tensorflow python eager forwardprop test py line 215 in test gradient test gradient file home andrew src tf test tensorflow git bazel ci build cache cache bazel bazel andrew eab0d61a99b6696edb3d2aff87b585e8 execroot org tensorflow bazel out aarch64 opt bin tensorflow python eager forwardprop test cpu runfiles org tensorflow tensorflow python eager forwardprop test py line 231 in test gradient testcase assertallclose sym jac back sym jac fwd rtol srtol atol satol file home andrew src tf test tensorflow git bazel ci build cache cache bazel bazel andrew eab0d61a99b6696edb3d2aff87b585e8 execroot org tensorflow bazel out aarch64 opt bin tensorflow python eager forwardprop test cpu runfiles org tensorflow tensorflow python framework test util py line 1657 in decorate return f args kwd file home andrew src tf test tensorflow git bazel ci build cache cache bazel bazel andrew eab0d61a99b6696edb3d2aff87b585e8 execroot org tensorflow bazel out aarch64 opt bin tensorflow python eager forwardprop test cpu runfiles org tensorflow tensorflow python framework test util py line 3293 in assertallclose self assertallcloserecursive a b rtol rtol atol atol msg msg file home andrew src tf test tensorflow git bazel ci build cache cache bazel bazel andrew eab0d61a99b6696edb3d2aff87b585e8 execroot org tensorflow bazel out aarch64 opt bin tensorflow python eager forwardprop test cpu runfiles org tensorflow tensorflow python framework test util py line 3229 in assertallcloserecursive self assertarraylikeallclose file home 
andrew src tf test tensorflow git bazel ci build cache cache bazel bazel andrew eab0d61a99b6696edb3d2aff87b585e8 execroot org tensorflow bazel out aarch64 opt bin tensorflow python eager forwardprop test cpu runfiles org tensorflow tensorflow python framework test util py line 3186 in assertarraylikeallclose np testing assert allclose file home andrew src tf test tensorflow git bazel ci build cache cache bazel bazel andrew eab0d61a99b6696edb3d2aff87b585e8 execroot org tensorflow bazel out aarch64 opt bin tensorflow python eager forwardprop test cpu runfile pypi numpy site package numpy testing private util py line 1527 in assert allclose assert array compare compare actual desire err msg str err msg file home andrew src tf test tensorflow git bazel ci build cache cache bazel bazel andrew eab0d61a99b6696edb3d2aff87b585e8 execroot org tensorflow bazel out aarch64 opt bin tensorflow python eager forwardprop test cpu runfile pypi numpy site package numpy testing private util py line 844 in assert array compare raise assertionerror msg assertionerror not equal to tolerance rtol 1e 06 atol 1e 06 mismatch value a be different from b not close where array 0 array 0 array 2 not close lhs 155 46957 not close rhs 155 46982 not close dif 0 00024414 not close tol 0 00015647 dtype float32 shape 1 4 4 mismatch element 1 16 6 25 max absolute difference 0 00024414 max relative difference 1 5703409e 06 x array 4755 0654 139 43396 155 46957 158 52019 139 43398 119 946 26 157635 20 20992 155 4697 26 15764 428 02136 365 0485 y array 4755 0654 139 43398 155 46982 158 52022 139 43399 119 946014 26 157648 20 209923 155 46967 26 157635 428 02112 365 04846 run 13 test in 95 972 fail failure 1
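The mismatch reported by numpy is consistent with its combined tolerance rule, tol = atol + rtol * |desired|. A quick sketch using the values quoted in the log above (not part of the original report):

```python
atol, rtol = 1e-6, 1e-6
lhs, rhs = 155.46957, 155.46982   # the mismatched element pair from the log

tol = atol + rtol * abs(rhs)      # numpy's per-element tolerance rule
diff = abs(lhs - rhs)

# tol works out to ~0.00015647, matching the "tol" value printed by
# assert_allclose, and diff (~0.00025) exceeds it, so the comparison fails.
```

This confirms the failure is a small float32 accuracy regression of roughly 2.4e-4 on values of magnitude ~155, just past the 1e-6 relative tolerance.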
tensorflow/tensorflow
new
Invalid
System information: Android device information (use adb shell getprop ro.build.fingerprint if possible); TensorFlow Lite in Play Services SDK version (found in build.gradle); Google Play Services version (Settings > Apps > Google Play Services > App details). Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to or attach code demonstrating the problem. Any other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
TensorShape is [None, None, None] on images
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly? Yes. Source: source. TensorFlow version: 2.13. Custom code: Yes. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.
Current behavior: An I/O pipeline for images uses a Dataset to read images into tensors, but the shape is (None, None, None), preventing some operations that need the tensor shape further down in preprocessing, for example resizing the image by a factor. In the example below I can get by by recovering the image dimensions with a trick, but it feels very, very hacky. Is there a reason why the shape should be None?
Standalone code to reproduce the issue:

def preprocess(file_name):
    x = tf.io.read_file(file_name)
    x = tf.io.decode_jpeg(x)
    nrows, ncols, _ = x.shape
    x = tf.image.resize(x, (nrows // 2, ncols // 2))
    x = tf.image.random_crop(x, (32, 32, 3))
    return x, tf.zeros_like(x)

def preprocess_workaround(file_name):
    x = tf.io.read_file(file_name)
    x = tf.io.decode_jpeg(x)
    nrows = tf.math.reduce_sum(tf.ones_like(x[:, 0, 0], dtype=tf.int32))
    ncols = tf.math.reduce_sum(tf.ones_like(x[0, :, 0], dtype=tf.int32))
    x = tf.image.resize(x, (nrows // 2, ncols // 2))
    x = tf.image.random_crop(x, (32, 32, 3))
    return x, tf.zeros_like(x)

model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(32, 32, 3)))
model.add(tf.keras.layers.Conv2D(3, (3, 3), padding='same'))
model.summary()

files = glob.glob('/mnt/ng/ncl_acquisition/stitch/20231019/row1/1/crop*.jpg')
print(len(files))
# Crashes with the output below:
ds = tf.data.Dataset.from_tensor_slices(files).map(preprocess).batch(1)
# The following works:
ds = tf.data.Dataset.from_tensor_slices(files).map(preprocess_workaround).batch(1)
model.fit(ds)

Relevant log output: typeerror traceback most recent call last tmp ipykernel 583425 761276435 py in 1 file glob glob mnt ng ncl acquisition stitch 20231019 row1 1 crop jpg 2 print len file 3 ds tf datum dataset from tensor slice file map preprocess batch 1 4 model fit ds anaconda3 lib python3 9 site package tensorflow python data op
dataset op py in map self map func num parallel call deterministic name 2292 warning warn the deterministic argument have no effect unless the 2293 num parallel call argument be specify 2294 return mapdataset self map func preserve cardinality true name name 2295 else 2296 return parallelmapdataset anaconda3 lib python3 9 site package tensorflow python data op dataset op py in init self input dataset map func use inter op parallelism preserve cardinality use legacy function name 5497 self use inter op parallelism use int op parallelism 5498 self preserve cardinality preserve cardinality 5499 self map func structured function structuredfunctionwrapper 5500 map func 5501 self transformation name anaconda3 lib python3 9 site package tensorflow python data op structured function py in init self func transformation name dataset input class input shape input type input structure add to graph use legacy function defun kwargs 261 fn factory trace tf function defun kwargs 262 263 self function fn factory 264 there be no graph to add in eager mode 265 add to graph not context execute eagerly anaconda3 lib python3 9 site package tensorflow python eager polymorphic function trace compiler py in get concrete function self args kwargs 224 tf tensor or tf tensorspec 225 226 concrete function self get concrete function garbage collect 227 args kwargs 228 concrete function garbage collector release pylint disable protect access anaconda3 lib python3 9 site package tensorflow python eager polymorphic function trace compiler py in get concrete function garbage collect self args kwargs 190 191 with self lock 192 concrete function self maybe define concrete function args kwargs 193 see name set 194 capture object identity objectidentityset anaconda3 lib python3 9 site package tensorflow python eager polymorphic function trace compiler py in maybe define concrete function self args kwargs 155 kwargs 156 157 return self maybe define function args kwargs 158 159 def get concrete function 
internal garbage collect self args kwargs anaconda3 lib python3 9 site package tensorflow python eager polymorphic function trace compiler py in maybe define function self args kwargs 358 args kwargs generalize func key placeholder value pylint disable protect access 359 360 concrete function self create concrete function args kwargs 361 362 graph capture container concrete function graph capture func lib pylint disable protect access anaconda3 lib python3 9 site package tensorflow python eager polymorphic function trace compiler py in create concrete function self args kwargs 282 arg name base arg name miss arg name 283 concrete function monomorphic function concretefunction 284 func graph module func graph from py func 285 self name 286 self python function anaconda3 lib python3 9 site package tensorflow python framework func graph py in func graph from py func name python func args kwargs signature func graph autograph autograph option add control dependency arg name op return value collection capture by value acd record initial resource use 1281 original func tf decorator unwrap python func 1282 1283 func output python func func args func kwargs 1284 1285 invariant func output contain only tensor compositetensor anaconda3 lib python3 9 site package tensorflow python data op structured function py in wrap fn args 238 attribute defun kwargs 239 def wrap fn args pylint disable miss docstre 240 ret wrapper helper args 241 ret structure to tensor list self output structure ret 242 return op convert to tensor t for t in ret anaconda3 lib python3 9 site package tensorflow python data op structured function py in wrapper helper args 169 if not should unpack nest args 170 nest args nest args 171 ret autograph tf convert self func ag ctx nest args 172 ret variable util convert variable to tensor ret 173 if should pack ret anaconda3 lib python3 9 site package tensorflow python autograph impl api py in wrapper args kwargs 690 except exception as e pylint disable broad 
except 691 if hasattr e ag error metadata 692 raise e ag error metadata to exception e 693 else 694 raise anaconda3 lib python3 9 site package tensorflow python autograph impl api py in wrapper args kwargs 687 try 688 with conversion ctx 689 return convert call f args kwargs option option 690 except exception as e pylint disable broad except 691 if hasattr e ag error metadata anaconda3 lib python3 9 site package tensorflow python autograph impl api py in convert call f args kwargs caller fn scope option 437 try 438 if kwargs be not none 439 result convert f effective args kwargs 440 else 441 result convert f effective args tmp autograph generate filecu6im70c py in tf preprocess file name 11 x ag convert call ag ld tf io decode jpeg ag ld x none fscope 12 nrow ncol ag ld x shape 13 x ag convert call ag ld tf image resize ag ld x ag ld nrow 2 ag ld ncol 2 none fscope 14 x ag convert call ag ld tf image random crop ag ld x 32 32 none fscope 15 try typeerror in user code file tmp ipykernel 583425 4207213119 py line 7 in preprocess x tf image resize x nrow 2 ncol 2 typeerror unsupported operand type s for nonetype and int
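The usual idiom for this situation (a sketch of common practice, not taken from the issue itself) is to read the dynamic dimensions with tf.shape inside the mapped function: tf.io.decode_jpeg cannot know the image size at graph-construction time, so the static shape is (None, None, None), but tf.shape yields the concrete run-time dimensions:

```python
import tensorflow as tf

def resize_half(x):
    # x may have static shape (None, None, None); tf.shape gives the
    # run-time dimensions as a tensor, so integer arithmetic on them works.
    new_size = tf.shape(x)[:2] // 2
    return tf.image.resize(x, new_size)

img = tf.zeros((64, 48, 3))   # stand-in for a decoded JPEG
out = resize_half(img)        # resized to (32, 24, 3)
```

Unlike x.shape, this works both eagerly and inside Dataset.map, because the division happens on tensors instead of Python None values.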
tensorflow/tensorflow
Invalid classifier used for CUDA 12.2
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly? Yes. Source: source. TensorFlow version: git HEAD. Custom code: No. OS platform and distribution: Ubuntu 20.04. Mobile device: N/A. Python version: 3.9.17. Bazel version: 6.1.0. GCC/compiler version: 16.0.6. CUDA/cuDNN version: N/A. GPU model and memory: N/A.
Current behavior: Upload to PyPI fails with an invalid classifier introduced by a recent change.
Standalone code to reproduce the issue:
python3 -m twine upload --verbose /home/ubuntu/actions-runner/_work/tensorflow/tensorflow/whl/* -u __token__ -p ...
Relevant log output:
INFO Response from the server: 400 Invalid value for classifier. Error: Classifier 'Environment :: GPU :: NVIDIA CUDA :: 12.2' is not a valid classifier.
400 Invalid value for classifier. Error: Classifier 'Environment :: GPU :: NVIDIA CUDA :: 12.2' is not a valid classifier. The server could not comply with the request since it is either malformed or otherwise incorrect.
HTTPError: 400 Bad Request. Invalid value for classifier. Error: Classifier 'Environment :: GPU :: NVIDIA CUDA :: 12.2' is not a valid classifier.
tensorflow/tensorflow
Unable to change validation dataset for the model.fit function after it raises an exception
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly? Yes. Source: source. TensorFlow version: 2.13.0. Custom code: Yes. OS platform and distribution: Google Colab. Mobile device: no response. Python version: 3.10.12. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.
Current behavior: When using model.fit in TensorFlow, the validation data provided as an input parameter is initially used for model evaluation. However, if the initial evaluation fails, subsequent runs of model.fit still use the previously provided validation data, even if new data is provided.
Standalone code to reproduce the issue: The error is quite easy to reproduce; check the compile-and-train section in the Colab provided (#scrollTo=08wjxihenxk7). I modified a TensorFlow tutorial Colab on CNNs to reproduce this error. In that section I first run the model with correct parameters, then with incorrect ones, and then again with correct parameters, but this time it fails.
Relevant log output: no response.
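The reported sequence can be sketched as a minimal hypothetical reproducer (this is not the author's Colab; the model, data, and shapes are invented for illustration, and whether the second fit actually fails depends on the TensorFlow version):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(8, 4).astype("float32")
y = np.random.rand(8, 1).astype("float32")
bad_val = (np.random.rand(8, 3).astype("float32"), y)  # wrong feature width

first_failed = False
try:
    model.fit(x, y, validation_data=bad_val, epochs=1, verbose=0)
except Exception:
    first_failed = True  # evaluation on the bad validation data raises

# Per the report, this second call should use the new validation data, but on
# affected versions it may still pick up the old one; we only record the outcome.
second_ok = True
try:
    model.fit(x, y, validation_data=(x, y), epochs=1, verbose=0)
except Exception:
    second_ok = False
```

On an unaffected version, first_failed is True and second_ok is True; the bug manifests as second_ok being False despite the corrected validation data.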
tensorflow/tensorflow
Internal definition breaks tensorflow_probability
Bug
TensorFlow Probability's tests (issuecomment 1753993699) are breaking, and it appears to be due to a bad definition in TensorFlow (issuecomment 1756359940). This is preventing the TensorFlow Probability team from updating typing_extensions, which in turn breaks upgrading to Python 3.12.
tensorflow/tensorflow
TFRecordWriter sticks or is very slow while serializing data, depending on feature transformation type
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly? No. Source: binary. TensorFlow version: 2.13. Custom code: Yes. OS platform and distribution: Ubuntu 22.04. Mobile device: no response. Python version: 3.8.9. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: 11.8. GPU model and memory: no response.
Current behavior: tf.io.TFRecordWriter freezes when preprocessing features with the scikit-learn/dask-ml QuantileTransformer, whereas it works, writing out a few thousand samples within seconds, when using StandardScaler. How might QuantileTransformer produce output data shaped in a way that breaks TFRecord serialization? Might dtype precision influence serialization performance so critically?
Standalone code to reproduce the issue (reduced pseudo-code):

# Fit scaler. NOTE, option 1: using this scaler breaks the TFRecord writer
feat_standardizer = dask_ml.preprocessing.QuantileTransformer(
    output_distribution=standardizer_distribution,
    n_quantiles=n_feat_standardizer_quantiles,
    subsample=n,
    copy=False)
feat_standardizer.fit(samples)

# NOTE, option 2: using this one works for the TFRecord writer
feat_standardizer = dask_ml.preprocessing.StandardScaler(copy=False)

# From here, proceed the same way until tf.data serialization as TFRecord files
x_train_future = dask_client.scatter(x_train_arr)
feat_standardizer = dask_client.submit(feat_standardizer.fit, x_train_future).result()
x_train_preproc_future = dask_client.submit(feat_standardizer.transform, x_train_future)
x_train_dask_arr = dask_client.gather(x_train_preproc_future)

# Transform training data in chunks of subsets of the total training dataset
x_train_future = dask_client.scatter(x_train_arr_transform_batch)
x_train_future = dask_client.submit(feat_standardizer.transform, x_train_future)
x_train_future = dask_client.submit(feat_normalizer.transform, x_train_future)
x_train_arr_transform_batch = dask_client.gather(x_train_future)

# Reshape features: (samples, time, feats)
x_train = x_train_arr_transform_batch.reshape(n_train_samples, n_steps, n_feats)

# Cast numpy default precision float64 -> tf.float32
x_train = x_train.astype(np.float32)
y_train = y_train.astype(np.int64)

def array_to_tfrecords(x, y):
    feature_dict = {
        'x': tf.train.Feature(float_list=tf.train.FloatList(value=x.flatten())),
        'y': tf.train.Feature(int64_list=tf.train.Int64List(value=y.flatten())),
    }
    example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
    return example.SerializeToString()

writer = tf.io.TFRecordWriter(
    tfrecord_file_path,
    options=tf.io.TFRecordOptions(compression_type='ZLIB', compression_level=7))
for x, y in tqdm(zip(x_train, y_train)):
    # No progress visible here when using QuantileTransformer
    serialized = array_to_tfrecords(x, y)
    writer.write(serialized)

Relevant log output: No output is visible; the script just never continues and freezes while it should be serializing data.
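One way to narrow this down (a debugging sketch, not from the report) is to isolate the serialization step from the transformers and compare dtypes directly. tf.train.FloatList always stores 32-bit floats on the wire, so a float64 input from QuantileTransformer costs an extra cast per element but produces the same bytes as a pre-cast float32 array:

```python
import numpy as np
import tensorflow as tf

def serialize(x):
    # Build and serialize a minimal Example with one float feature.
    feat = tf.train.Feature(float_list=tf.train.FloatList(value=x.flatten()))
    ex = tf.train.Example(features=tf.train.Features(feature={"x": feat}))
    return ex.SerializeToString()

x64 = np.random.rand(10, 8)           # float64, as QuantileTransformer returns
x32 = x64.astype(np.float32)          # float32, cast as in the report

# Same serialized payload size either way: FloatList is float32 on the wire,
# so dtype alone should not change the record size or writer throughput much.
assert len(serialize(x64)) == len(serialize(x32))
```

If serializing synthetic arrays of the same shape and dtype is fast, the freeze more likely sits in the transform pipeline feeding the loop (e.g. lazy dask computation triggered per sample) than in TFRecordWriter itself.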
tensorflow/tensorflow
Bias size verification in the TFL conv_3d_transpose op
Bug
L5543 should be TFL_NumElementsEqualsDim<3, 1, 3> instead of TFL_NumElementsEqualsDim<3, 1, 4>, since the number of output channels in the filter is given in dimension 3.
tensorflow/tensorflow
Calculating gradients for a graph containing tf.image.extract_patches using TF 2.9.0
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly? Yes. Source: source. TensorFlow version: v2.9.0-rc2-42-g8a20d54a3c1 (2.9.0). Custom code: Yes. OS platform and distribution: Windows 10 64-bit. Mobile device: no response. Python version: Python 3.7. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: CUDA 11.2. GPU model and memory: no response.
Current behavior: My code extracts patches from a batch of images as input and produces a batch of image patches as output. The input (batch_size, rows, cols, 1) is split into patches (patch_num * batch_size, patch_rows, patch_cols) before being fed to the next layer of the network. This function is called by a custom layer (keras.layers.Layer) and it works well while extracting the patches, but when calculating gradients for the graph with grads = tape.gradient(loss, model.variables), an error comes:

UnimplementedError: Graph execution error: Detected at node 'gradients/ExtractImagePatches_1_grad/SparseTensorDenseMatMul/SparseTensorDenseMatMul'
[[node gradients/ExtractImagePatches_1_grad/SparseTensorDenseMatMul/SparseTensorDenseMatMul]]
2 root error(s) found.
(0) UNIMPLEMENTED: A deterministic GPU implementation of SparseTensorDenseMatMulOp is not currently available.
[[node gradients/ExtractImagePatches_1_grad/SparseTensorDenseMatMul/SparseTensorDenseMatMul]]
[[gradients/fan_weights_8/StatefulPartitionedCall_grad/PartitionedCall/gradients/ExtractImagePatches_grad/ExtractImagePatches/_63]]
(1) UNIMPLEMENTED: A deterministic GPU implementation of SparseTensorDenseMatMulOp is not currently available.
[[node gradients/ExtractImagePatches_1_grad/SparseTensorDenseMatMul/SparseTensorDenseMatMul]]
0 successful operations. 0 derived errors ignored. [Op:__inference_backward_call_20302_28692]

I found an issue about tf.extract_image_patches opened on Feb 10, 2017 and added tf.cast(input_batch, dtype=tf.float32, name='castData') in the function, but it does not work here. This is my code for the function:

def split_patches(input_batch, block_num_cur, block_shape_cur):
    # input_batch = np.tile(np.random.randint(0, 2, (4, 3, 3, 1)), (2, 2))
    # block_num_cur = 4; block_shape_cur = (6, 6)
    shape = input_batch.shape
    input_batch = tf.reshape(input_batch, (shape[0], shape[1], shape[2], 1))  # batch_size, rows, cols, 1
    input_batch = tf.cast(input_batch, dtype=tf.float32, name='castData')
    blocks_cur = tf.image.extract_patches(
        images=input_batch,
        sizes=[1, block_shape_cur[0], block_shape_cur[1], 1],
        strides=[1, block_shape_cur[0], block_shape_cur[1], 1],
        rates=[1, 1, 1, 1],
        padding='VALID')
    blocks_cur = tf.cast(
        tf.reshape(blocks_cur, (shape[0], block_num_cur, block_shape_cur[0], block_shape_cur[1])),
        tf.float32)  # batch_size, patch_num, patch_rows, patch_cols
    blocks_cur = tf.transpose(blocks_cur, (1, 0, 2, 3))
    return blocks_cur

Standalone code to reproduce the issue:
Relevant log output: traceback most recent call last file c programdata anaconda3 envs tf2 9 0 lib site package spyder kernel py3compat py line 356 in compat exec exec code global local file d code pycoode ocnn2nd fanoutmodel train ocnn py line 112 in grad tape gradient loss model variable file c programdata anaconda3 envs tf2 9 0 lib site package tensorflow python eager backprop py line 1106 in gradient unconnected gradient unconnected gradient file c programdata anaconda3 envs tf2 9 0 lib site package tensorflow python eager imperative grad py line 73 in imperative grad compat as str unconnected gradient value file c programdata anaconda3 envs tf2 9 0 lib site package tensorflow python eager function py line 1206 in backward function wrapper process args remappe capture file c programdata anaconda3 envs tf2 9 0 lib site package tensorflow python eager function py line 1861 in call flat ctx args cancellation manager cancellation manager file c programdata anaconda3 envs tf2 9 0 lib site package tensorflow python eager function py line 502 in call ctx ctx file c programdata anaconda3 envs tf2 9 0 lib site package tensorflow python eager execute py line 55 in quick execute input attrs num output unimplementederror graph execution error detect at node gradient extractimagepatche 1 grad sparsetensordensematmul sparsetensordensematmul define at most recent call last file c
programdata anaconda3 envs tf2 9 0 lib runpy py line 193 in run module as main main mod spec file c programdata anaconda3 envs tf2 9 0 lib runpy py line 85 in run code exec code run global file c programdata anaconda3 envs tf2 9 0 lib site package spyder kernel console main py line 24 in start main file c programdata anaconda3 envs tf2 9 0 lib site package spyder kernel console start py line 340 in main kernel start file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel kernelapp py line 712 in start self io loop start file c programdata anaconda3 envs tf2 9 0 lib site package tornado platform asyncio py line 215 in start self asyncio loop run forever file c programdata anaconda3 envs tf2 9 0 lib asyncio base event py line 541 in run forever self run once file c programdata anaconda3 envs tf2 9 0 lib asyncio base event py line 1786 in run once handle run file c programdata anaconda3 envs tf2 9 0 lib asyncio event py line 88 in run self context run self callback self args file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel kernelbase py line 510 in dispatch queue await self process one file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel kernelbase py line 499 in process one await dispatch args file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel kernelbase py line 406 in dispatch shell await result file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel kernelbase py line 730 in execute request reply content await reply content file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel ipkernel py line 387 in do execute cell i d cell i d file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel zmqshell py line 528 in run cell return super run cell args kwargs file c programdata anaconda3 envs tf2 9 0 lib site package ipython core interactiveshell py line 2975 in run cell raw cell store history silent shell future cell i d file c programdata anaconda3 envs tf2 9 0 lib 
site package ipython core interactiveshell py line 3029 in run cell return runner coro file c programdata anaconda3 envs tf2 9 0 lib site package ipython core async helper py line 78 in pseudo sync runner coro send none file c programdata anaconda3 envs tf2 9 0 lib site package ipython core interactiveshell py line 3257 in run cell async interactivity interactivity compiler compiler result result file c programdata anaconda3 envs tf2 9 0 lib site package ipython core interactiveshell py line 3472 in run ast node if await self run code code result async asy file c programdata anaconda3 envs tf2 9 0 lib site package ipython core interactiveshell py line 3552 in run code exec code obj self user global ns self user n file c user xxx appdata local temp ipykernel 29532 1338903705 py line 1 in runfile d code pycoode ocnn2nd fanoutmodel train ocnn py wdir d code pycoode ocnn2nd file c programdata anaconda3 envs tf2 9 0 lib site package spyder kernel customize spydercustomize py line 526 in runfile post mortem current namespace stack depth 1 file c programdata anaconda3 envs tf2 9 0 lib site package spyder kernel customize spydercustomize py line 613 in exec file capture last expression false file c programdata anaconda3 envs tf2 9 0 lib site package spyder kernel customize spydercustomize py line 469 in exec code exec fun compile ast code filename exec n global ns local file c programdata anaconda3 envs tf2 9 0 lib site package spyder kernel py3compat py line 356 in compat exec exec code global local file d code pycoode ocnn2nd fanoutmodel train ocnn py line 97 in trn pre model x trn input batch size target row target col output batch size 1 file c programdata anaconda3 envs tf2 9 0 lib site package keras util traceback util py line 64 in error handler return fn args kwargs file c programdata anaconda3 envs tf2 9 0 lib site package kera engine training py line 490 in call return super call args kwargs file c programdata anaconda3 envs tf2 9 0 lib site package keras util 
traceback util py line 64 in error handler return fn args kwargs file c programdata anaconda3 envs tf2 9 0 lib site package kera engine base layer py line 1014 in call output call fn input args kwargs file c programdata anaconda3 envs tf2 9 0 lib site package keras util traceback util py line 92 in error handler return fn args kwargs node gradient extractimagepatche 1 grad sparsetensordensematmul sparsetensordensematmul detect at node gradient extractimagepatche 1 grad sparsetensordensematmul sparsetensordensematmul define at most recent call last file c programdata anaconda3 envs tf2 9 0 lib runpy py line 193 in run module as main main mod spec file c programdata anaconda3 envs tf2 9 0 lib runpy py line 85 in run code exec code run global file c programdata anaconda3 envs tf2 9 0 lib site package spyder kernel console main py line 24 in start main file c programdata anaconda3 envs tf2 9 0 lib site package spyder kernel console start py line 340 in main kernel start file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel kernelapp py line 712 in start self io loop start file c programdata anaconda3 envs tf2 9 0 lib site package tornado platform asyncio py line 215 in start self asyncio loop run forever file c programdata anaconda3 envs tf2 9 0 lib asyncio base event py line 541 in run forever self run once file c programdata anaconda3 envs tf2 9 0 lib asyncio base event py line 1786 in run once handle run file c programdata anaconda3 envs tf2 9 0 lib asyncio event py line 88 in run self context run self callback self args file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel kernelbase py line 510 in dispatch queue await self process one file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel kernelbase py line 499 in process one await dispatch args file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel kernelbase py line 406 in dispatch shell await result file c programdata anaconda3 envs tf2 9 0 lib site 
package ipykernel kernelbase py line 730 in execute request reply content await reply content file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel ipkernel py line 387 in do execute cell i d cell i d file c programdata anaconda3 envs tf2 9 0 lib site package ipykernel zmqshell py line 528 in run cell return super run cell args kwargs file c programdata anaconda3 envs tf2 9 0 lib site package ipython core interactiveshell py line 2975 in run cell raw cell store history silent shell future cell i d file c programdata anaconda3 envs tf2 9 0 lib site package ipython core interactiveshell py line 3029 in run cell return runner coro file c programdata anaconda3 envs tf2 9 0 lib site package ipython core async helper py line 78 in pseudo sync runner coro send none file c programdata anaconda3 envs tf2 9 0 lib site package ipython core interactiveshell py line 3257 in run cell async interactivity interactivity compiler compiler result result file c programdata anaconda3 envs tf2 9 0 lib site package ipython core interactiveshell py line 3472 in run ast node if await self run code code result async asy file c programdata anaconda3 envs tf2 9 0 lib site package ipython core interactiveshell py line 3552 in run code exec code obj self user global ns self user n file c user xxx appdata local temp ipykernel 29532 1338903705 py line 1 in runfile d code pycoode ocnn2nd fanoutmodel train ocnn py wdir d code pycoode ocnn2nd file c programdata anaconda3 envs tf2 9 0 lib site package spyder kernel customize spydercustomize py line 526 in runfile post mortem current namespace stack depth 1 file c programdata anaconda3 envs tf2 9 0 lib site package spyder kernel customize spydercustomize py line 613 in exec file capture last expression false file c programdata anaconda3 envs tf2 9 0 lib site package spyder kernel customize spydercustomize py line 469 in exec code exec fun compile ast code filename exec n global ns local file c programdata anaconda3 envs tf2 9 0 lib site 
package spyder kernel py3compat py line 356 in compat exec exec code global local file d code pycoode ocnn2nd fanoutmodel train ocnn py line 97 in trn pre model x trn input batch size target row target col output batch size 1 file c programdata anaconda3 envs tf2 9 0 lib site package keras util traceback util py line 64 in error handler return fn args kwargs file c programdata anaconda3 envs tf2 9 0 lib site package kera engine training py line 490 in call return super call args kwargs file c programdata anaconda3 envs tf2 9 0 lib site package keras util traceback util py line 64 in error handler return fn args kwargs file c programdata anaconda3 envs tf2 9 0 lib site package kera engine base layer py line 1014 in call output call fn input args kwargs file c programdata anaconda3 envs tf2 9 0 lib site package keras util traceback util py line 92 in error handler return fn args kwargs node gradient extractimagepatche 1 grad sparsetensordensematmul sparsetensordensematmul 2 root error s find 0 unimplemente a deterministic gpu implementation of sparsetensordensematmulop be not currently available node gradient extractimagepatche 1 grad sparsetensordensematmul sparsetensordensematmul gradient fan weight 8 statefulpartitionedcall grad partitionedcall gradient extractimagepatche grad extractimagepatche 63 1 unimplemente a deterministic gpu implementation of sparsetensordensematmulop be not currently available node gradient extractimagepatche 1 grad sparsetensordensematmul sparsetensordensematmul 0 successful operation 0 derive error ignore op inference backward call 20302 28692
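The error text suggests TensorFlow's op-determinism mode is active (for example via TF_DETERMINISTIC_OPS=1 or tf.config.experimental.enable_op_determinism()), and the gradient of tf.image.extract_patches routes through SparseTensorDenseMatMul, which has no deterministic GPU kernel. A sketch of the usual workarounds (not from the issue; that determinism was enabled deliberately is an assumption): either don't request determinism, or keep it and pin the offending computation to the CPU:

```python
import os

# Option 1: don't request determinism; the nondeterministic GPU kernel then runs.
os.environ.pop("TF_DETERMINISTIC_OPS", None)

import tensorflow as tf

# Option 2: keep determinism but place the patch extraction (and hence its
# gradient) on the CPU, where a deterministic kernel exists.
with tf.device("/CPU:0"):
    x = tf.random.uniform((2, 6, 6, 1))
    with tf.GradientTape() as tape:
        tape.watch(x)
        patches = tf.image.extract_patches(
            images=x, sizes=[1, 3, 3, 1], strides=[1, 3, 3, 1],
            rates=[1, 1, 1, 1], padding="VALID")
        loss = tf.reduce_sum(patches)
    grad = tape.gradient(loss, x)  # succeeds: gradient computed on CPU
```

The CPU placement trades speed for determinism only on this op; the rest of the model can stay on the GPU.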
tensorflow/tensorflow
TensorFlow 2.14 macOS wheels won't install for Python 3.11
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly? No. Source: binary. TensorFlow version: 2.14. Custom code: No. OS platform and distribution: arm64 macOS. Mobile device: N/A. Python version: 3.11. Bazel version: N/A. GCC/compiler version: N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A.
Current behavior: When installing TensorFlow 2.14 for Python 3.11, I see:
ERROR: Could not find a version that satisfies the requirement wrapt<1.11.0,>=1.15 (from tensorflow) (from versions: ..., 1.15.0rc1, 1.15.0)
Looking at the metadata of the 2.14 whl for py3.11, I can see Requires-Dist: wrapt (>=1.15,<1.11.0), but wrapt has no package for Python 3.11 in that version range. Looking at the metadata for the 2.13.1 whl for py3.11, I can see Requires-Dist: wrapt (>=1.11.0), which can be satisfied with wrapt 1.15.0.
Standalone code to reproduce the issue: N/A. Relevant log output: no response.
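The constraint can be checked directly with the packaging library (a sketch; the specifier strings are the ones quoted from the wheel metadata above, and the range in the 2.14 wheel is in fact unsatisfiable: no version is both >= 1.15 and < 1.11.0):

```python
from packaging.specifiers import SpecifierSet

broken = SpecifierSet(">=1.15,<1.11.0")   # from the 2.14 py3.11 wheel
ok = SpecifierSet(">=1.11.0")             # from the 2.13.1 py3.11 wheel

candidates = ["1.11.0", "1.14.1", "1.15.0"]
assert not any(v in broken for v in candidates)  # nothing can satisfy it
assert "1.15.0" in ok                            # the older constraint resolves
```

This makes it a metadata bug rather than a missing wrapt release: pip correctly reports that no version of wrapt can satisfy the published requirement.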
tensorflow/tensorflow
valueerror in tensorflow probability
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source source tensorflow version tensorflow 2 15 0 custom code yes os platform and distribution window subsystem for linux mobile device no response python version 3 9 bazel version v1 18 0 gcc compiler version 11 3 0 cuda cudnn version no response gpu model and memory no response current behavior I be try to run a program which use tensorflow agent tensorflow probability at the back end when I try to run the train py use yaml input file I be get the follow error standalone code to reproduce the issue shell lib python3 10 site package tensorflow probability python internal prefer static py line 84 in copy docstre raise valueerror valueerror arg spec do not match original fullargspec args input dtype name layout varargs none varkw none default none none none kwonlyarg kwonlydefault none annotation new fullargspec args input dtype name varargs none varkw none default none none kwonlyarg kwonlydefault none annotation fn please help I understand the issue any suggestion to resolve the error be greatly appreciate relevant log output no response
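This error usually indicates a TensorFlow / TensorFlow Probability version mismatch — TFP shadows some TF functions and checks that their signatures match, and newer TF added a `layout` argument that the installed TFP does not declare. Installing the TFP release that corresponds to the installed TF version (per TFP's release notes) is the usual fix. A minimal stdlib re-creation of the check (function and op names here are hypothetical, not the actual TFP internals):

```python
import inspect

def copy_docstring(original, new_fn):
    """Sketch of TFP's prefer_static check: the wrapper must match the
    original function's argspec exactly, otherwise raise ValueError."""
    orig_spec = inspect.getfullargspec(original)
    new_spec = inspect.getfullargspec(new_fn)
    if orig_spec != new_spec:
        raise ValueError(
            f"Arg specs do not match: original={orig_spec}, new={new_spec}")
    new_fn.__doc__ = original.__doc__
    return new_fn

def tf_op(input, dtype=None, name=None, layout=None):  # newer TF adds `layout`
    """TF-side signature."""

def tfp_op(input, dtype=None, name=None):  # TFP still declares three args
    pass

try:
    copy_docstring(tf_op, tfp_op)
    raised = False
except ValueError:
    raised = True
print(raised)  # True -> the same mismatch the reporter sees
```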
tensorflow/tensorflow
TF 2.14 minimum NVIDIA driver version
Bug
Issue type: Documentation bug
Have you reproduced the bug with TensorFlow nightly: no
Source: binary
TensorFlow version: TF 2.14
Custom code: no
OS platform and distribution: TensorFlow Docker image
Python version: 3.11
CUDA/cuDNN version: 450.203.8
Mobile device / Bazel version / GCC version / GPU model and memory: no response

Current behavior:
I have a question about whether the minimum NVIDIA driver version has changed. I believe the current docs state 450.80.02. The script below runs using the 2.13 Docker image. Thank you. When trying to run a test GPU benchmark I get the following error:

    2023-10-02 16:01:32.433368: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:461] Possibly insufficient driver version: 450.203.8

Standalone code to reproduce the issue:

    import tensorflow as tf
    from tensorflow import keras
    import numpy as np
    import matplotlib.pyplot as plt
    import timeit

    # Download data and scale image values between 0-1
    (X_train, y_train), (X_test, y_test) = keras.datasets.cifar10.load_data()
    X_train_scaled = X_train / 255
    X_test_scaled = X_test / 255

    # One-hot encode labels
    y_train_encoded = keras.utils.to_categorical(y_train, num_classes=10, dtype='float32')
    y_test_encoded = keras.utils.to_categorical(y_test, num_classes=10, dtype='float32')

    # Define the model
    def get_model():
        model = keras.Sequential([
            keras.layers.Flatten(input_shape=(32, 32, 3)),
            keras.layers.Dense(3000, activation='relu'),
            keras.layers.Dense(1000, activation='relu'),
            keras.layers.Dense(10, activation='sigmoid')
        ])
        model.compile(optimizer='SGD',
                      loss='categorical_crossentropy',
                      metrics=['accuracy'])
        return model

    # GPU benchmark
    def gpubench_gpu():
        strategy = tf.distribute.MirroredStrategy()
        with strategy.scope():
            with tf.device('/GPU:0'):
                model_gpu = get_model()
                model_gpu.fit(X_train_scaled, y_train_encoded, epochs=10)

    gpubench_gpu()

Relevant log output: no response
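Numerically, the reporter's driver is well above the documented 450.80.02 floor, so the "possibly insufficient" warning is not explained by that floor alone. One plausible (unconfirmed) interpretation is that TF 2.14's move to CUDA 11.8 expects a newer driver branch unless CUDA forward-compatibility packages are present. A quick pure-Python check of the version comparison:

```python
def ver(s):
    """Turn a driver version string like '450.80.02' into a comparable tuple."""
    return tuple(int(p) for p in s.split("."))

documented_min = "450.80.02"
installed = "450.203.8"   # taken from the reporter's log message

# 450.203.8 > 450.80.02, so the documented minimum is satisfied.
print(ver(installed) >= ver(documented_min))  # True
```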
tensorflow/tensorflow
Dead link
Bug
On this page there is a link at the bottom which doesn't work; its label is "Distributed TensorFlow".
tensorflow/tensorflow
TFLite model produces wrong output for constant addition
Bug
1. System information
- OS platform and distribution: Linux Ubuntu 20.04
- TensorFlow installation (pip package or built from source): pip
- TensorFlow library version (or GitHub SHA if built from source): 2.15.0-dev20230926

2. Code
The TensorFlow Lite model in the example below should output x1 + x2 + x2 = 7 + 1 + 1 = 9. However, it produces the wrong output x1 + x1 + x2 = 7 + 7 + 1 = 15. This indicates that it confuses the order of the two inputs.

    import tensorflow as tf
    import numpy as np

    a = tf.constant(7.0, shape=[1])
    b = tf.constant(1.0, shape=[1])
    input_data = [a, b]

    def evaluateTFLiteModel(tflite_model, input_data):
        interpreter = tf.lite.Interpreter(model_content=tflite_model)
        interpreter.allocate_tensors()
        # Get input and output tensors.
        input_details = interpreter.get_input_details()
        output_details = interpreter.get_output_details()
        # Test model on (random) input data.
        for i in range(len(input_details)):
            # input_shape = input_details[i]['shape']
            # input_data[i] = np.array(np.random.random_sample(input_shape), dtype=np.float32)
            interpreter.set_tensor(input_details[i]['index'], input_data[i])
        interpreter.invoke()
        # get_tensor() returns a copy of the tensor data.
        # Use tensor() in order to get a pointer to the tensor.
        output_data = [interpreter.get_tensor(output_details[i]['index'])
                       for i in range(len(output_details))]
        return output_data

    class Model(tf.keras.Model):
        def __init__(self):
            super(Model, self).__init__()

        @tf.function(input_signature=[tf.TensorSpec(shape=x.shape, dtype=x.dtype) for x in input_data])
        def call(self, x1, x2):
            return x1 + x2 + x2

    m = Model()
    converter = tf.lite.TFLiteConverter.from_keras_model(m)
    tflite_model = converter.convert()
    print(evaluateTFLiteModel(tflite_model, input_data))
    # Output: [array([15.], dtype=float32)]

Keras model:

    import tensorflow as tf
    print(tf.__version__)
    import numpy as np

    class Model(tf.keras.Model):
        def __init__(self):
            super(Model, self).__init__()
        def call(self, x1, x2):
            return x1 + x2 + x2

    m = Model()
    a = tf.constant(7.0, shape=[1])
    b = tf.constant(1.0, shape=[1])
    input_data = [a, b]
    print(m(a, b))
    # Output: tf.Tensor([9.], shape=(1,), dtype=float32)

3. Failure after conversion: wrong results.
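A one-line arithmetic check makes the "swapped inputs" hypothesis concrete: the wrong value is exactly what plain Python gives when the two arguments are exchanged. Note also that feeding the interpreter by `get_input_details()` index assumes that order matches the call-signature order, which is not guaranteed; calling through `interpreter.get_signature_runner()` with named inputs sidesteps the ambiguity.

```python
def f(x1, x2):
    # Same formula as the model: x1 + x2 + x2
    return x1 + x2 + x2

print(f(7.0, 1.0))  # 9.0  -> what the Keras model returns
print(f(1.0, 7.0))  # 15.0 -> what the converted TFLite model returns
```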
tensorflow/tensorflow
KeyError: 'min'
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow nightly: yes
Source: binary
TensorFlow version: TF 2.13
Custom code: yes
OS platform and distribution: 5.15.90.1-microsoft-standard-WSL2
CUDA/cuDNN version: 12.2
Mobile device / Python version / Bazel version / GCC version / GPU model and memory: no response

Current behavior:

    Traceback (most recent call last):
      File "o2k.py", line 11, in <module>
        k_model = onnx_to_keras(onnx_model, ['input.1'], name_policy='renumerate', verbose=True)
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/onnx2keras/converter.py", line 175, in onnx_to_keras
        AVAILABLE_CONVERTERS[node_type](...)
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/onnx2keras/operation_layers.py", line 31, in convert_clip
        if params['min'] == 0:
    KeyError: 'min'

Standalone code to reproduce the issue:

    from onnx2keras import onnx_to_keras
    import keras
    import onnx
    import sys
    sys.path.append('/root/MR')

    onnx_model = onnx.load('ssd_bmv1_torch.onnx')
    onnx_inputs = onnx_model.graph.input
    print(onnx_inputs)
    # onnx_model = onnx.load('vgg11.onnx')
    k_model = onnx_to_keras(onnx_model, ['input.1'], name_policy='renumerate', verbose=True)
    keras.models.save_model(k_model, 'ssd_bmv1_torch.h5', overwrite=True, save_format='h5')

The .onnx file can be downloaded at:

Relevant log output: same traceback as above.
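A likely root cause (not confirmed against the onnx2keras source): since ONNX opset 11, Clip carries min/max as optional inputs rather than attributes, so the converter's `params` dict may legitimately have no `'min'` key. A defensive sketch — the function name and dict layout are assumptions, not the real onnx2keras internals:

```python
import math

def convert_clip(params):
    """Build a clip function; missing bounds default to +/- infinity,
    which makes an attribute-less Clip a pass-through."""
    lo = params.get("min", -math.inf)
    hi = params.get("max", math.inf)
    return lambda x: min(max(x, lo), hi)

clip_any = convert_clip({})                          # no attributes at all
clip_relu6 = convert_clip({"min": 0.0, "max": 6.0})  # ReLU6-style clip
print(clip_any(-3.0), clip_relu6(-3.0), clip_relu6(9.0))  # -3.0 0.0 6.0
```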
tensorflow/tensorflow
KeyError: 'ConstantOfShape'
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow nightly: yes
Source: binary
TensorFlow version: TF 2.13
Custom code: yes
OS platform and distribution: 5.15.90.1-microsoft-standard-WSL2
Python version: 3.8.17
CUDA/cuDNN version: 12.2
Mobile device / Bazel version / GCC version / GPU model and memory: no response

Current behavior:

    Traceback (most recent call last):
      File "o2k.py", line 11, in <module>
        k_model = onnx_to_keras(onnx_model, ['input.1'], name_policy='renumerate', verbose=True)
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/onnx2keras/converter.py", line 175, in onnx_to_keras
        AVAILABLE_CONVERTERS[node_type](...)
    KeyError: 'ConstantOfShape'

Standalone code to reproduce the issue:

    from onnx2keras import onnx_to_keras
    import keras
    import onnx
    import sys
    sys.path.append('/root/MR')

    onnx_model = onnx.load('patchcore_torch.onnx')
    onnx_inputs = onnx_model.graph.input
    print(onnx_inputs)
    # onnx_model = onnx.load('vgg11.onnx')
    k_model = onnx_to_keras(onnx_model, ['input.1'], name_policy='renumerate', verbose=True)
    keras.models.save_model(k_model, 'patchcore_torch.h5', overwrite=True, save_format='h5')

The .onnx file can be downloaded at:

Relevant log output: same traceback as above.
tensorflow/tensorflow
AttributeError: Number of inputs is not equal 1 for unsqueeze layer
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow nightly: yes
Source: binary
TensorFlow version: TF 2.13
Custom code: yes
OS platform and distribution: 5.15.90.1-microsoft-standard-WSL2
Python version: 3.8.17
CUDA/cuDNN version: 12.2
Mobile device / Bazel version / GCC version / GPU model and memory: no response

Current behavior:

    Traceback (most recent call last):
      File "o2k.py", line 11, in <module>
        k_model = onnx_to_keras(onnx_model, ['onnx::Unsqueeze_0'], name_policy='renumerate', verbose=True)
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/onnx2keras/converter.py", line 175, in onnx_to_keras
        AVAILABLE_CONVERTERS[node_type](...)
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/onnx2keras/reshape_layers.py", line 210, in convert_unsqueeze
        raise AttributeError('Number of inputs is not equal 1 for unsqueeze layer')
    AttributeError: Number of inputs is not equal 1 for unsqueeze layer

Standalone code to reproduce the issue:

    from onnx2keras import onnx_to_keras
    import keras
    import onnx
    import sys
    sys.path.append('/root/MR')

    onnx_model = onnx.load('textcnn_torch.onnx')
    onnx_inputs = onnx_model.graph.input
    print(onnx_inputs)
    # onnx_model = onnx.load('vgg11.onnx')
    k_model = onnx_to_keras(onnx_model, ['onnx::Unsqueeze_0'], name_policy='renumerate', verbose=True)
    keras.models.save_model(k_model, 'textcnn_torch.h5', overwrite=True, save_format='h5')

The .onnx file can be downloaded at:

Relevant log output: same traceback as above.
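The "number of inputs is not equal 1" assertion is consistent with an opset mismatch: Unsqueeze stored `axes` as an attribute (one input) up to opset 12, and takes it as a second input tensor from opset 13 on, so a converter written for the old form fails on newer exports. Re-exporting at a lower opset (e.g. `torch.onnx.export(..., opset_version=12)`) is a common workaround. A pure-Python sketch of a converter handling both layouts — lists stand in for tensor shapes, and all names here are hypothetical:

```python
def convert_unsqueeze(inputs, attrs):
    """Compute the unsqueezed shape; supports both the attribute form
    (opset < 13) and the two-input form (opset >= 13)."""
    if len(inputs) == 1:                 # opset < 13: axes in attributes
        shape = list(inputs[0])
        axes = attrs["axes"]
    else:                                # opset >= 13: axes is inputs[1]
        shape = list(inputs[0])
        axes = list(inputs[1])
    for ax in sorted(axes):
        shape.insert(ax, 1)              # insert a size-1 dim at each axis
    return shape

print(convert_unsqueeze([[4, 5]], {"axes": [0]}))  # [1, 4, 5]
print(convert_unsqueeze([[4, 5], [0, 2]], {}))     # [1, 4, 1, 5]
```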
tensorflow/tensorflow
AttributeError: Can't gather from tf tensor
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow nightly: yes
Source: binary
TensorFlow version: TF 2.13
Custom code: yes
OS platform and distribution: 5.15.90.1-microsoft-standard-WSL2
Python version: 3.8.17
CUDA/cuDNN version: 12.2
Mobile device / Bazel version / GCC version / GPU model and memory: no response

Current behavior:

    Traceback (most recent call last):
      File "o2k.py", line 11, in <module>
        k_model = onnx_to_keras(onnx_model, ['onnx::Cast_0', 'onnx::Cast_1'], name_policy='renumerate', verbose=True)
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/onnx2keras/converter.py", line 175, in onnx_to_keras
        AVAILABLE_CONVERTERS[node_type](...)
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/onnx2keras/reshape_layers.py", line 87, in convert_gather
        raise AttributeError("Can't gather from tf tensor.")
    AttributeError: Can't gather from tf tensor

Standalone code to reproduce the issue:

    from onnx2keras import onnx_to_keras
    import keras
    import onnx
    import sys
    sys.path.append('/root/MR')

    onnx_model = onnx.load('fasttext_torch.onnx')
    onnx_inputs = onnx_model.graph.input
    print(onnx_inputs)
    # onnx_model = onnx.load('vgg11.onnx')
    k_model = onnx_to_keras(onnx_model, ['onnx::Cast_0', 'onnx::Cast_1'], name_policy='renumerate', verbose=True)
    keras.models.save_model(k_model, 'fasttext_torch.h5', overwrite=True, save_format='h5')

The .onnx file can be downloaded at:

Relevant log output: same traceback as above.
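A plausible reading of this error (not verified against the onnx2keras source): the Gather converter can fold the operation at conversion time only when the data/indices are compile-time constants; when one of them is a symbolic tensor it gives up instead of emitting a runtime gather op (e.g. `tf.gather`). A minimal pure-Python sketch of that dispatch, with hypothetical names:

```python
def convert_gather(data, indices):
    """Fold constant gathers; refuse symbolic ones (mirroring the
    reported error) instead of emitting a runtime op."""
    if isinstance(indices, (list, tuple)):      # constant indices: fold now
        return [data[i] for i in indices]
    raise AttributeError("Can't gather from tf tensor")  # symbolic indices

print(convert_gather(["a", "b", "c"], [2, 0]))  # ['c', 'a']
```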
tensorflow/tensorflow
AttributeError: Not implemented
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow nightly: yes
Source: binary
TensorFlow version: TF 2.13
Custom code: yes
OS platform and distribution: 5.15.90.1-microsoft-standard-WSL2
Python version: 3.8.17
CUDA/cuDNN version: 12.2
Mobile device / Bazel version / GCC version / GPU model and memory: no response

Current behavior:

    Traceback (most recent call last):
      File "o2k.py", line 11, in <module>
        k_model = onnx_to_keras(onnx_model, ['input.1'], name_policy='renumerate', verbose=True)
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/onnx2keras/converter.py", line 175, in onnx_to_keras
        AVAILABLE_CONVERTERS[node_type](...)
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/onnx2keras/reshape_layers.py", line 294, in convert_slice
        raise AttributeError('Not implemented')
    AttributeError: Not implemented

Standalone code to reproduce the issue:

    from onnx2keras import onnx_to_keras
    import keras
    import onnx
    import sys
    sys.path.append('/root/MR')

    onnx_model = onnx.load('deeplabv3_torch.onnx')
    onnx_inputs = onnx_model.graph.input
    print(onnx_inputs)
    # onnx_model = onnx.load('vgg11.onnx')
    k_model = onnx_to_keras(onnx_model, ['input.1'], name_policy='renumerate', verbose=True)
    keras.models.save_model(k_model, 'deeplabv3_torch.h5', overwrite=True, save_format='h5')

The .onnx file can be downloaded at:

Relevant log output: same traceback as above.
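Like Unsqueeze, the ONNX Slice op moved its parameters (starts/ends/axes/steps) from attributes to inputs at opset 10, so a converter written for the attribute form can hit paths it never implemented on newer exports. A 1-D pure-Python sketch of handling both layouts — the function name and argument shapes are hypothetical, not the actual onnx2keras code:

```python
def convert_slice(data, attrs=None, inputs=()):
    """Slice a 1-D list using either the attribute form (opset < 10)
    or the starts/ends-as-inputs form (opset >= 10)."""
    if attrs:                        # opset < 10: starts/ends are attributes
        start, end = attrs["starts"][0], attrs["ends"][0]
    elif len(inputs) >= 2:           # opset >= 10: starts/ends are inputs
        start, end = inputs[0][0], inputs[1][0]
    else:
        raise AttributeError("Not implemented")
    return data[start:end]

print(convert_slice([0, 1, 2, 3], attrs={"starts": [1], "ends": [3]}))  # [1, 2]
print(convert_slice([0, 1, 2, 3], inputs=([1], [3])))                   # [1, 2]
```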
tensorflow/tensorflow
ValueError: Exception encountered when calling layer "LAYER_13" (type Lambda)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow nightly: yes
Source: source
TensorFlow version: TF 2.13
Custom code: yes
OS platform and distribution: 5.15.90.1-microsoft-standard-WSL2
CUDA/cuDNN version: 12.2
Mobile device / Python version / Bazel version / GCC version / GPU model and memory: no response

Current behavior:

    ValueError: Exception encountered when calling layer "LAYER_13" (type Lambda).
    Dimensions must be equal, but are 204 and 206 for node LAYER_13/Add (AddV2, T=DT_FLOAT) with input shapes: [?,64,204,204], [?,64,206,206].
    Call arguments received by layer "LAYER_13" (type Lambda):
      inputs=['tf.Tensor(shape=(None, 64, 204, 204), dtype=float32)', 'tf.Tensor(shape=(None, 64, 206, 206), dtype=float32)']
      mask=None
      training=None

Standalone code to reproduce the issue:

    from onnx2keras import onnx_to_keras
    import keras
    import onnx
    import sys
    sys.path.append('/root/MR')

    onnx_model = onnx.load('yolov3_darknet53.onnx')
    onnx_inputs = onnx_model.graph.input
    print(onnx_inputs)
    # onnx_model = onnx.load('vgg11.onnx')
    k_model = onnx_to_keras(onnx_model, ['x'], name_policy='renumerate', verbose=True)
    keras.models.save_model(k_model, 'ssd_resnet50fpn_torch.h5', overwrite=True, save_format='h5')

The .onnx file can be downloaded at:

Relevant log output:

    Traceback (most recent call last):
      File "o2k.py", line 11, in <module>
        k_model = onnx_to_keras(onnx_model, ['x'], name_policy='renumerate', verbose=True)
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/onnx2keras/converter.py", line 175, in onnx_to_keras
        AVAILABLE_CONVERTERS[node_type](...)
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/onnx2keras/elementwise_layers.py", line 83, in convert_elementwise_add
        layers[node_name] = lambda_layer([input_0, input_1])
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
        raise e.with_traceback(filtered_tb) from None
      File "/root/miniconda3/envs/onnx/lib/python3.8/site-packages/onnx2keras/elementwise_layers.py", line 76, in target_layer
        layer = tf.add(...)
    ValueError: Exception encountered when calling layer "LAYER_12" (type Lambda).
    Dimensions must be equal, but are 204 and 206 for node LAYER_12/Add (AddV2, T=DT_FLOAT) with input shapes: [?,64,204,204], [?,64,206,206].
    Call arguments received by layer "LAYER_12" (type Lambda):
      inputs=['tf.Tensor(shape=(None, 64, 204, 204), dtype=float32)', 'tf.Tensor(shape=(None, 64, 206, 206), dtype=float32)']
      mask=None
      training=None
tensorflow/tensorflow
SpectralNormalization layer is not trainable — OperatorNotAllowedInGraphError: Exception encountered when calling layer "spectral_normalization" (type SpectralNormalization)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow nightly: no
Source: source
TensorFlow version: 2.13.0
Custom code: no
OS platform and distribution: Ubuntu 22.04.3 LTS and Google Colab
Python version: 3.9
CUDA/cuDNN version: cudatoolkit 11.8.0, nvidia-cudnn-cu11 8.6.0.163
Mobile device / Bazel version / GCC version / GPU model and memory: no response

Current behavior:
The SpectralNormalization layer is not trainable. Whenever I try to use model.fit(), TF outputs the error: "Using a symbolic tf.Tensor as a Python bool is not allowed. AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature." Please help. Best, Kav.

Standalone code to reproduce the issue (link to Colab notebook omitted):

    batch = 1; height = 10; width = 10; channels = 1; filters = 4; kernel_size = 3
    x_input = Input(shape=(height, width, channels))
    conv2d = SpectralNormalization(Conv2D(filters, kernel_size))
    x_output = conv2d(x_input)
    model = Model(x_input, x_output)
    model.compile(loss='mse')
    x = np.random.rand(batch, height, width, channels)
    y = np.random.rand(batch, height, width, filters)
    model.fit(x, y)

Relevant log output:

    OperatorNotAllowedInGraphError: in user code:
      File ".../keras_src/engine/training.py", line 1338, in train_function: return step_function(self, iterator)
      File ".../keras_src/engine/training.py", line 1322, in step_function: outputs = model.distribute_strategy.run(run_step, args=(data,))
      File ".../keras_src/engine/training.py", line 1303, in run_step: outputs = model.train_step(data)
      File ".../keras_src/engine/training.py", line 1080, in train_step: y_pred = self(x, training=True)
      File ".../keras_src/utils/traceback_utils.py", line 70, in error_handler: raise e.with_traceback(filtered_tb) from None
    OperatorNotAllowedInGraphError: Exception encountered when calling layer "spectral_normalization" (type SpectralNormalization).
    Using a symbolic tf.Tensor as a Python bool is not allowed. AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
    Call arguments received by layer "spectral_normalization" (type SpectralNormalization): inputs=tf.Tensor(shape=(None, 10, 10, 1), dtype=float32), training=True
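The error message points at a `training`-style flag being used in a Python `if` while graph tracing is active: inside `tf.function`-traced code the flag can be a symbolic tensor, which refuses to be coerced to a Python bool. A pure-Python model of that failure mode (the class below is a fake stand-in, not a TF type; as a diagnostic workaround, compiling with `run_eagerly=True` avoids tracing entirely, at a performance cost):

```python
class FakeSymbolicTensor:
    """Stand-in for a symbolic tf.Tensor: truth-testing is forbidden."""
    def __bool__(self):
        raise TypeError(
            "Using a symbolic tf.Tensor as a Python bool is not allowed")

def layer_call(x, training=None):
    if training:          # fine for True/False/None, fails when symbolic
        return x * 2
    return x

print(layer_call(1.0, training=True))   # 2.0 -> eager path works
try:
    layer_call(1.0, training=FakeSymbolicTensor())
    failed = False
except TypeError:
    failed = True
print(failed)  # True -> same class of error as in the report
```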
tensorflow/tensorflow
Training vanilla Transformer on TPU gives InternalError
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow nightly: no
Source: source
TensorFlow version: 2.13.0
Custom code: yes
OS platform and distribution: Linux Ubuntu 22.04.2 (Google Colab)
Mobile device: Colab
Python version: 3.10.12
Bazel / GCC / CUDA/cuDNN / GPU: Colab

Current behavior:
Training a Transformer model on TPU gives an InternalError. My guess is that there's an incompatible tensor op causing the problem, but I can't pinpoint it. I have already trained a bigger model which used pretrained embeddings, and it went off without a hitch; I tried to replicate the same setup with a different TFDS dataset. If this is already solved, please direct me to the relevant link.

Standalone code to reproduce the issue: this is the notebook (link omitted, #scrollTo=y0hkz9yrc3fu). This notebook works fine. Thank you in advance, I will respond ASAP.

Relevant log output:

    InternalError                             Traceback (most recent call last)
    ----> model.fit(train_ds,
                    validation_data=valid_ds,
                    epochs=epochs,
                    steps_per_epoch=train_steps)
      File ".../keras_src/utils/traceback_utils.py", line 70, in error_handler
        raise e.with_traceback(filtered_tb) from None
      File ".../tensorflow/core/function/capture/capture_container.py", line 122, in capture_by_value
        graph_const = tensor._capture_as_const(name)  # pylint: disable=protected-access
    InternalError: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:35437:
    Failed to connect to remote host: Connection refused.
    Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
    UNKNOWN: failed to connect to all addresses; last error: UNKNOWN: ipv4:127.0.0.1:35437:
    Failed to connect to remote host: Connection refused
    {created_time:"2023-09-19T08:56:09.694479753+00:00", grpc_status:14}
    Executing non-communication op originally returned UnavailableError, and was replaced by
    InternalError to avoid invoking TF network error handling logic.
tensorflow/tensorflow
How to compile tflite-runtime to include the GPU parts?
Bug
System information
- OS platform and distribution: Ubuntu 22.04.2 LTS
- TensorFlow installed from (source or binary): source
- TensorFlow version (or GitHub SHA if from source): 2.13

Provide the text output from tflite_convert:
(The attached output is a verbose `bazel build -s` trace rather than tflite_convert output; abridged here.) It consists of repeated SUBCOMMAND entries, each a full gcc invocation with its include paths and -D defines, covering:
- XNNPACK: src/operators/convolution-nchw.c and src/configs/unpool-config.c (flags such as -DXNN_ENABLE_ASSEMBLY=1, -DXNN_ENABLE_JIT=0, -DXNN_ENABLE_GEMM_M_SPECIALIZATION=1, -DXNN_ENABLE_ARM_FP16_SCALAR=0)
- abseil: absl/strings/internal/charconv_bigint.cc and absl/time/format.cc
- TensorFlow Lite kernels: tensorflow/lite/kernels/add.cc (with -DTFLITE_KERNEL_USE_XNNPACK, -DXNNPACK_DELEGATE_ENABLE_QS8=1, -DXNNPACK_DELEGATE_ENABLE_QU8=1, plus Eigen, ruy, gemmlowp, flatbuffers, and fft2d include paths) and tensorflow/lite/stderr_reporter.cc
All actions run on the local execution platform with LD_LIBRARY_PATH pointing at /usr/local/cuda; common flags include -O3, -march=native, -std=c++17, and -fPIC. The log is truncated mid-command.
system header wno builtin macro redefine d date redact d timestamp redact d time redact c tensorflow lite stderr reporter cc o bazel out k8 opt bin tensorflow lite objs stderr reporter stderr reporter pic o configuration 6bf13bb09c259727d5061837858294f1092bec30d275f05710212182ee5e1ce2 execution platform local execution config platform platform 1 226 1 232 2 action 1 run compile tensorflow lite kernel add cc 0s local prepa compile tensorflow lite stderr reporter cc subcommand com google absl absl strings cord action compile absl string cord cc configuration 6bf13bb09c259727d5061837858294f1092bec30d275f05710212182ee5e1ce2 execution platform local execution config platform platform cd root cache bazel bazel root 44af44c54090ffd8c730879fc5d7b491 execroot org tensorflow exec env ld library path usr local cuda lib64 usr local cuda lib usr local lib x86 64 linux gnu usr local nvidia lib usr local nvidia lib64 usr local nvidia lib usr local nvidia lib64 opt conda lib path opt bin opt conda bin usr local nvidia bin usr local cuda bin usr local sbin usr local bin usr sbin usr bin sbin bin pwd proc self cwd tf2 behavior 1 usr bin gcc u fortify source fstack protector wall wunuse but set parameter wno free nonheap object fno omit frame pointer g0 o2 d fortify source 1 dndebug ffunction section fdata section std c 0x md mf bazel out k8 opt bin external com google absl absl string objs cord cord pic d frandom seed bazel out k8 opt bin external com google absl absl string objs cord cord pic o fpic iquote external com google absl iquote bazel out k8 opt bin external com google absl wno all wno extra wno deprecate wno deprecate declaration wno ignore attribute wno array bound wunuse result werror unused result wswitch werror switch wno error unused but set variable dautoload dynamic kernel o3 march native std c 17 wall wextra wcast qual wconversion null wformat security wmisse declaration woverlength string wpoint arith wundef wunuse local typedef wunuse result wvarargs wvla wwrite 
string dnominmax fno canonical system header wno builtin macro redefine d date redact d timestamp redact d time redact c external com google absl absl strings cord cc o bazel out k8 opt bin external com google absl absl string objs cord cord pic o configuration 6bf13bb09c259727d5061837858294f1092bec30d275f05710212182ee5e1ce2 execution platform local execution config platform platform 1 227 1 232 2 action 1 run compile tensorflow lite kernel add cc 0s local prepa compile absl string cord cc 1 227 1 232 2 action run compile tensorflow lite kernel add cc 0s local compile absl string cord cc 0s local 1 227 1 232 2 action run compile tensorflow lite kernel add cc 1s local compile absl string cord cc 1s local 1 227 1 232 2 action run compile tensorflow lite kernel add cc 2s local compile absl string cord cc 2s local 1 227 1 232 2 action run compile tensorflow lite kernel add cc 3s local compile absl string cord cc 3s local 1 227 1 232 2 action run compile tensorflow lite kernel add cc 4s local compile absl string cord cc 4s local 1 228 1 232 2 action 1 run compile tensorflow lite kernel add cc 5s local scann compile src google protobuf stubs statusor cc subcommand com google protobuf protobuf lite action compile src google protobuf stubs statusor cc configuration 6bf13bb09c259727d5061837858294f1092bec30d275f05710212182ee5e1ce2 execution platform local execution config platform platform cd root cache bazel bazel root 44af44c54090ffd8c730879fc5d7b491 execroot org tensorflow exec env ld library path usr local cuda lib64 usr local cuda lib usr local lib x86 64 linux gnu usr local nvidia lib usr local nvidia lib64 usr local nvidia lib usr local nvidia lib64 opt conda lib path opt bin opt conda bin usr local nvidia bin usr local cuda bin usr local sbin usr local bin usr sbin usr bin sbin bin pwd proc self cwd tf2 behavior 1 usr bin gcc u fortify source fstack protector wall wunuse but set parameter wno free nonheap object fno omit frame pointer g0 o2 d fortify source 1 dndebug 
ffunction section fdata section std c 0x md mf bazel out k8 opt bin external com google protobuf objs protobuf lite statusor pic d frandom seed bazel out k8 opt bin external com google protobuf objs protobuf lite statusor pic o fpic iquote external com google protobuf iquote bazel out k8 opt bin external com google protobuf isystem external com google protobuf src isystem bazel out k8 opt bin external com google protobuf src wno all wno extra wno deprecate wno deprecate declaration wno ignore attribute wno array bound wunuse result werror unused result wswitch werror switch wno error unused but set variable dautoload dynamic kernel o3 march native std c 17 dhave zlib woverloade virtual wno sign compare fno canonical system header wno builtin macro redefine d date redact d timestamp redact d time redact c external com google protobuf src google protobuf stub statusor cc o bazel out k8 opt bin external com google protobuf objs protobuf lite statusor pic o configuration 6bf13bb09c259727d5061837858294f1092bec30d275f05710212182ee5e1ce2 execution platform local execution config platform platform 1 228 1 232 2 action 1 run compile tensorflow lite kernel add cc 5s local prepa compile src google protobuf stubs statusor cc 1 228 1 232 2 action run compile tensorflow lite kernel add cc 6s local compile src google protobuf stubs statusor cc 0s local 1 229 1 232 2 action 1 run compile tensorflow lite kernel add cc 6s local scann compile absl synchronization internal create thread identity cc subcommand com google absl absl synchronization synchronization action compile absl synchronization internal create thread identity cc configuration 6bf13bb09c259727d5061837858294f1092bec30d275f05710212182ee5e1ce2 execution platform local execution config platform platform cd root cache bazel bazel root 44af44c54090ffd8c730879fc5d7b491 execroot org tensorflow exec env ld library path usr local cuda lib64 usr local cuda lib usr local lib x86 64 linux gnu usr local nvidia lib usr local nvidia 
lib64 usr local nvidia lib usr local nvidia lib64 opt conda lib path opt bin opt conda bin usr local nvidia bin usr local cuda bin usr local sbin usr local bin usr sbin usr bin sbin bin pwd proc self cwd tf2 behavior 1 usr bin gcc u fortify source fstack protector wall wunuse but set parameter wno free nonheap object fno omit frame pointer g0 o2 d fortify source 1 dndebug ffunction section fdata section std c 0x md mf bazel out k8 opt bin external com google absl absl synchronization objs synchronization create thread identity pic d frandom seed bazel out k8 opt bin external com google absl absl synchronization objs synchronization create thread identity pic o fpic iquote external com google absl iquote bazel out k8 opt bin external com google absl wno all wno extra wno deprecate wno deprecate declaration wno ignore attribute wno array bound wunuse result werror unused result wswitch werror switch wno error unused but set variable dautoload dynamic kernel o3 march native std c 17 wall wextra wcast qual wconversion null wformat security wmisse declaration woverlength string wpoint arith wundef wunuse local typedef wunuse result wvarargs wvla wwrite string dnominmax fno canonical system header wno builtin macro redefine d date redact d timestamp redact d time redact c external com google absl absl synchronization internal create thread identity cc o bazel out k8 opt bin external com google absl absl synchronization objs synchronization create thread identity pic o configuration 6bf13bb09c259727d5061837858294f1092bec30d275f05710212182ee5e1ce2 execution platform local execution config platform platform 1 229 1 232 2 action 1 run compile tensorflow lite kernel add cc 6s local prepa compile absl synchronization internal create thread identity cc 1 229 1 232 2 action run compile tensorflow lite kernel add cc 6s local compile synchronization internal create thread identity cc 0s local 1 230 1 232 compile tensorflow lite kernel add cc 7 local 1 230 1 232 compile tensorflow 
lite kernel add cc 9s local 1 230 1 232 compile tensorflow lite kernel add cc 10s local 1 230 1 232 compile tensorflow lite kernel add cc 11s local 1 231 1 232 prepa r wrapper pywrap tensorflow interpreter wrapper so subcommand tensorflow lite python interpreter wrapper pywrap tensorflow interpreter wrapper so action link tensorflow lite python interpreter wrapper pywrap tensorflow interpreter wrapper so configuration 6bf13bb09c259727d5061837858294f1092bec30d275f05710212182ee5e1ce2 execution platform local execution config platform platform cd root cache bazel bazel root 44af44c54090ffd8c730879fc5d7b491 execroot org tensorflow exec env ld library path usr local cuda lib64 usr local cuda lib usr local lib x86 64 linux gnu usr local nvidia lib usr local nvidia lib64 usr local nvidia lib usr local nvidia lib64 opt conda lib path opt bin opt conda bin usr local nvidia bin usr local cuda bin usr local sbin usr local bin usr sbin usr bin sbin bin pwd proc self cwd tf2 behavior 1 usr bin gcc bazel out k8 opt bin tensorflow lite python interpreter wrapper pywrap tensorflow interpreter wrapper so 2 param configuration 6bf13bb09c259727d5061837858294f1092bec30d275f05710212182ee5e1ce2 execution platform local execution config platform platform 1 231 1 232 prepa r wrapper pywrap tensorflow interpreter wrapper so 1 231 1 232 wrapper pywrap tensorflow interpreter wrapper so 0s local target tensorflow lite python interpreter wrapper pywrap tensorflow interpreter wrapper up to date nothing to build 1 232 1 232 check cache action info elapse time 1504 620 critical path 75 80 1 232 1 232 check cache action info 1232 process 204 internal 1028 local 1 232 1 232 check cache action info build complete successfully 1232 total action info build complete successfully 1232 total action cp kaggle working tensorflow tensorflow lite tool pip package bazel bin tensorflow lite python interpreter wrapper pywrap tensorflow interpreter wrapper so kaggle working tensorflow tensorflow lite tool pip 
package gen tflite pip python3 tflite runtime chmod u w kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite runtime pywrap tensorflow interpreter wrapper so cd kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 case tensorflow target in n python3 setup py bdist bdist wheel opt conda lib python3 10 site package setuptool distutil cmd py 66 setuptoolsdeprecationwarne setup py install be deprecate please avoid run setup py directly instead use pypa build pypa installer or other standard base tool see for detail self initialize option opt conda lib python3 10 site package setuptool distutil dist py 947 setuptoolsdeprecationwarne setup py install be deprecate please avoid run setup py directly instead use pypa build pypa installer or other standard base tool see for detail command initialize option echo output can be find here output can be find here find kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 manifest in kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 dist kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 dist tflite runtime 2 13 0 linux x86 64 tar gz kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 dist tflite runtime 2 13 0 cp310 cp310 linux x86 64 whl kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 build kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 build bdist linux x86 64 kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 build lib linux x86 64 cpython 310 kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 build lib linux x86 64 cpython 310 tflite runtime kaggle 
working tensorflow tensorflow lite tool pip package gen tflite pip python3 build lib linux x86 64 cpython 310 tflite runtime pywrap tensorflow interpreter wrapper so kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 build lib linux x86 64 cpython 310 tflite runtime metric interface py kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 build lib linux x86 64 cpython 310 tflite runtime init py kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 build lib linux x86 64 cpython 310 tflite runtime interpreter py kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 build lib linux x86 64 cpython 310 tflite runtime metric portable py kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 debian kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 debian changelog kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 debian copyright kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 debian rule kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 debian control kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 debian compat kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 interpreter wrapper kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 interpreter wrapper numpy cc kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 interpreter wrapper interpreter wrapper h kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 interpreter wrapper python util h kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 interpreter wrapper numpy h kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 interpreter wrapper 
interpreter wrapper pybind11 cc kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 interpreter wrapper python error reporter cc kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 interpreter wrapper python error reporter h kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 interpreter wrapper python util cc kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 interpreter wrapper interpreter wrapper cc kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 interpreter wrapper build kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite runtime egg info kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite runtime egg info pkg info kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite runtime egg info dependency link txt kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite runtime egg info require txt kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite runtime egg info top level txt kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite runtime egg info source txt kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite runtime kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite runtime pywrap tensorflow interpreter wrapper so kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite runtime metric interface py kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite runtime init py kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite runtime interpreter py kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 tflite 
runtime metric portable py kaggle working tensorflow tensorflow lite tool pip package gen tflite pip python3 setup py

ls output:
total 7748
-rw-r--r-- 1 root root 3960944 Sep 19 03:23 tflite_runtime-2.13.0.linux-x86_64.tar.gz
-rw-r--r-- 1 root root 3966013 Sep 19 03:23 tflite_runtime-2.13.0-cp310-cp310-linux_x86_64.whl

pip install output:
Processing ./tflite_runtime-2.13.0-cp310-cp310-linux_x86_64.whl
Requirement already satisfied: numpy>=1.21.2 in /opt/conda/lib/python3.10/site-packages (from tflite-runtime==2.13.0) (1.23.5)
Installing collected packages: tflite-runtime
Successfully installed tflite-runtime-2.13.0

ImportError                               Traceback (most recent call last)
Cell In[5], line 17
     15 get_ipython().run_line_magic('cd', '/kaggle/working/tensorflow/tensorflow/lite/tools/pip_package/gen/tflite_pip/python3/dist')
     16 get_ipython().system('pip install tflite_runtime-2.13.0-cp310-cp310-linux_x86_64.whl')
---> 17 import tflite_runtime.interpreter as tflite

File /opt/conda/lib/python3.10/site-packages/tflite_runtime/interpreter.py:33
     30   from tensorflow.python.util.tf_export import tf_export as _tf_export
     31 else:
     32   # This file is part of tflite_runtime package.
---> 33   from tflite_runtime import _pywrap_tensorflow_interpreter_wrapper as _interpreter_wrapper
     34   from tflite_runtime import metrics_portable as metrics
     36 def tf_export(*x, **kwargs):

ImportError: /opt/conda/lib/python3.10/site-packages/tflite_runtime/_pywrap_tensorflow_interpreter_wrapper.so: undefined symbol: _ZN6tflite29FarthestPointSamplingLauncherEiiiPKfPfPi

Standalone code to reproduce the issue: I tried to add a custom op and build tflite_runtime with shims (see issue); the custom op includes some GPU code. The build succeeded and the installation seemed successful as well, but importing tflite_runtime in Python raises the error above about the GPU API FarthestPointSamplingLauncher: the shared library is missing the GPU part of the code. (I changed L159 and L744.)

Any other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow
extra semicolon in tensorflow lite micro micro profiler h 90
Bug
Remove the extra semicolon in tensorflow/lite/micro/micro_profiler.h on line 90.

Before:
  TF_LITE_REMOVE_VIRTUAL_DELETE;

After:
  TF_LITE_REMOVE_VIRTUAL_DELETE
tensorflowtensorflow
tensorflow tf concurrency issue
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow nightly: No
Source: source
TensorFlow version: unknown, 2.11.0 (from nvcr.io/nvidia/tensorflow:23.03-tf2-py3)
Custom code: Yes
OS platform and distribution: Ubuntu 20.04.6 LTS
Mobile device: No response
Python version: 3.8.10
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: CUDA 12.1 (r12.1, compiler 32415258_0)
GPU model and memory: No response

Current behavior:

We found that running the TensorFlow concat operation concurrently has a concurrency issue, but only in the NVIDIA TensorFlow image; we also tried other images, where the issue could not be reproduced. We expected tf.concat to work fine in a multi-threaded environment, but it does not behave as expected. FYI: adding a lock around concat avoids the issue. What is the good practice here?

Our environment: GPU A100; image nvcr.io/nvidia/tensorflow:23.03-tf2-py3; driver (nvidia-smi) 525.125.06, CUDA Version 12.0.

Standalone code to reproduce the issue:

import threading
import tensorflow as tf
from concurrent.futures import ThreadPoolExecutor, as_completed

client_num = 10
repeat = 10000
executor = ThreadPoolExecutor(max_workers=client_num)
futures_2_input_shapes = {}

lock = threading.Lock()

test_cases = [
    ([1, 2] * 31, [2, 3]),
    ([1, 2, 8, 90] * 31, [2, 3, 3, 1]),
    ([[7, 4, 3], [8, 4, 3]], [[2, 10, 3], [15, 11, 3]] * 63),
]

def do_concat(t1, t2):
    # with lock:
    #     return tf.concat([t1, t2], 0)
    return tf.concat([t1, t2], 0)

print("create tasks")
for _ in range(repeat):
    for test_case in test_cases:
        a = tf.constant(test_case[0])
        b = tf.constant(test_case[1])
        future = executor.submit(do_concat, a, b)
        futures_2_input_shapes[future] = (a.shape, b.shape)

print("wait tasks")
count = 0
for future in as_completed(futures_2_input_shapes.keys()):
    print(f"{count}: {futures_2_input_shapes[future]}")
    data = future.result()
    count = count + 1

Relevant log output:

17086: (TensorShape([62]), TensorShape([2]))
17087: (TensorShape([2, 3]), TensorShape([126, 3]))
17088: (TensorShape([124]), TensorShape([4]))
17089: (TensorShape([124]), TensorShape([4]))
17090: (TensorShape([2, 3]), TensorShape([126, 3]))
17091: (TensorShape([124]), TensorShape([4]))
17092: (TensorShape([2, 3]), TensorShape([126, 3]))
Traceback (most recent call last):
  File "x.py", line 37, in <module>
    data = future.result()
  File "/usr/lib/python3.8/concurrent/futures/_base.py", line 437, in result
    return self.__get_result()
  File "/usr/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "x.py", line 22, in do_concat
    return tf.concat([t1, t2], 0)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py", line 7215, in raise_from_not_ok_status
    raise core._status_to_exception(e) from None  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__ConcatV2_N_2_device_/job:localhost/replica:0/task:0/device:GPU:0}} ConcatOp : Ranks of all input tensors should match: shape[0] = [124] vs. shape[1] = [126,3] [Op:ConcatV2] name: concat
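The lock-based workaround mentioned in the report can be sketched without TensorFlow at all; here plain list concatenation stands in for tf.concat, and the names do_concat and futures_to_inputs mirror the repro script rather than any TensorFlow API:

```python
import threading
from concurrent.futures import ThreadPoolExecutor, as_completed

lock = threading.Lock()

def do_concat(t1, t2):
    # Serialize the operation: with the lock held, only one thread
    # executes the concatenation at a time (the reported workaround).
    with lock:
        return t1 + t2  # stand-in for tf.concat([t1, t2], 0)

test_cases = [([1, 2], [3]), ([4, 5, 6], [7, 8])]
futures_to_inputs = {}
with ThreadPoolExecutor(max_workers=4) as executor:
    for _ in range(100):
        for a, b in test_cases:
            future = executor.submit(do_concat, a, b)
            futures_to_inputs[future] = (len(a), len(b))
    for future in as_completed(futures_to_inputs):
        a_len, b_len = futures_to_inputs[future]
        # The invariant the reported race violates: each result's length
        # must equal the sum of the input lengths for that submission.
        assert len(future.result()) == a_len + b_len
```

The dictionary keyed by future is the same bookkeeping pattern as the repro; with the lock in place the invariant always holds, which is what the reporter observed in TensorFlow as well.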
tensorflowtensorflow
fail to build on aarch64
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow nightly: Yes
Source: source
TensorFlow version: git HEAD
Custom code: No
OS platform and distribution: Ubuntu 20.04
Mobile device: N/A
Python version: 3.9.17
Bazel version: 6.1.0
GCC/compiler version: 16.0.6
CUDA/cuDNN version: N/A
GPU model and memory: N/A

Current behavior:

tensorflow/lite/kernels/rng_util.h:26:12: error: use of undeclared identifier 'uint32_t'

Standalone code to reproduce the issue:

bazel test --config=mkl_aarch64_threadpool --copt=-flax-vector-conversions \
  --test_env=TF_ENABLE_ONEDNN_OPTS=1 --test_env=TF2_BEHAVIOR=1 \
  --define=tf_api_version=2 --test_lang_filters=py --flaky_test_attempts=3 \
  --test_size_filters=small,medium --test_output=errors --verbose_failures=true \
  --keep_going --notest_verbose_timeout_warnings \
  --action_env=PYTHON_BIN_PATH=/usr/local/bin/python3 \
  --build_tag_filters=-no_oss,-oss_excluded,-oss_serial,-v1only,-benchmark-test,-no_aarch64,-gpu,-tpu,-no_oss_py39,-no_oss_py310 \
  --test_tag_filters=-no_oss,-oss_excluded,-oss_serial,-v1only,-benchmark-test,-no_aarch64,-gpu,-tpu,-no_oss_py39,-no_oss_py310 \
  --local_test_jobs=64 --build_tests_only \
  -- //tensorflow/... -//tensorflow/compiler/tf2tensorrt/... -//tensorflow/compiler/xrt/... \
  -//tensorflow/core/tpu/... -//tensorflow/go/... -//tensorflow/java/... \
  -//tensorflow/python/integration_testing/... -//tensorflow/tools/toolchains/... \
  -//tensorflow/lite/... \
  //tensorflow/core/kernels/image:resize_bicubic_op_test \
  //tensorflow/core/grappler/optimizers:auto_mixed_precision_test_cpu \
  //tensorflow/core/grappler/optimizers:remapper_test_cpu

Relevant log output:

ERROR: /workspace/tensorflow/lite/kernels/BUILD:497:11: Compiling tensorflow/lite/kernels/rng_util.cc failed: (Exit 1): clang failed: error executing command (from target //tensorflow/lite/kernels:rng_util) cd tmpfs bazel output bazel ubuntu eab0d61a99b6696edb3d2aff87b585e8 execroot org tensorflow exec env cachebuster 20220325 clang compiler path usr lib llvm 16 bin clang ld library path path home ubuntu action runner work tensorflow
tensorflow tensorflow bazel ci build cache bin usr local sbin usr local bin usr sbin usr bin sbin bin snap bin python bin path usr local bin python3 python lib path usr lib python3 dist package tf2 behavior 1 usr lib llvm 16 bin clang md mf bazel out aarch64 opt bin tensorflow lite kernel objs rng util rng util pic d frandom seed bazel out aarch64 opt bin tensorflow lite kernel objs rng util rng util pic o dbazel current repository iquote iquote bazel out aarch64 opt bin fmerge all constant wno builtin macro redefine d date redact d timestamp redact d time redact fpic u fortify source d fortify source 1 fstack protector wall wno invalid partial specialization fno omit frame pointer no canonical prefix dndebug g0 o2 ffunction section fdata section wno all wno extra wno deprecate wno deprecate declaration wno ignore attribute wno array bound wunuse result werror unused result wswitch werror switch wno error unused but set variable dautoload dynamic kernel wno gnu offsetof extensions wno gnu offsetof extension mtune generic march armv8 a o3 flax vector conversion std c 17 dfarmhash no cxx string deigen allow unaligned scalar wno sign compare o3 fno exception sysroot dt10 c tensorflow lite kernel rng util cc o bazel out aarch64 opt bin tensorflow lite kernel objs rng util rng util pic o configuration 91cacbf6409fd17883ece1a0e16168e33815822fe5d36a44e64642fa9b0e32ee execution platform local execution config platform platform in file include from tensorflow lite kernel rng util cc 15 tensorflow lite kernel rng util h 26 12 error use of undeclared identifier uint32 t std array threefry2x32 uint32 t key 0 uint32 t key 1 tensorflow lite kernel rng util h 26 38 error unknown type name uint32 t std array threefry2x32 uint32 t key 0 uint32 t key 1 tensorflow lite kernel rng util h 26 54 error unknown type name uint32 t std array threefry2x32 uint32 t key 0 uint32 t key 1 tensorflow lite kernel rng util h 27 49 error use of undeclared identifier uint32 t std array ctr tensorflow 
lite/kernels/rng_util.h:32:12: error: use of undeclared identifier 'uint32_t'
    std::array<uint32_t, 4> Philox4x32(uint32_t key_0, uint32_t key_1,
tensorflow/lite/kernels/rng_util.h:32:36: error: unknown type name 'uint32_t'
    std::array<uint32_t, 4> Philox4x32(uint32_t key_0, uint32_t key_1,
tensorflow/lite/kernels/rng_util.h:32:52: error: unknown type name 'uint32_t'
    std::array<uint32_t, 4> Philox4x32(uint32_t key_0, uint32_t key_1,
tensorflow/lite/kernels/rng_util.h:33:47: error: use of undeclared identifier 'uint32_t'
    std::array<uint32_t, 4> ctr);
8 errors generated.
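All eight diagnostics point at uint32_t being used in rng_util.h with no declaration in scope; the usual fix on such a toolchain is to include <cstdint> in the header. A minimal sketch of a declaration in the style of the failing ones, with the include added (Threefry2x32Stub is an illustrative stand-in, not the actual TFLite function):

```cpp
#include <array>
#include <cassert>
#include <cstdint>  // declares uint32_t; without this include, clang reports
                    // "use of undeclared identifier 'uint32_t'"

// Illustrative signature in the style of the declarations failing above.
std::array<std::uint32_t, 2> Threefry2x32Stub(std::uint32_t key_0,
                                              std::uint32_t key_1) {
  // Not the real Threefry mixing function; it just echoes the keys
  // to show that the fixed-width types are now declared.
  return {key_0, key_1};
}
```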
tensorflowtensorflow
body function of while loop can not access the external variable after compilation
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow nightly: Yes
Source: source
TensorFlow version: 2.15.0-dev20230914
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior:

The body function of tf.while_loop cannot access the external variable after compilation; it raises the error "UnboundLocalError: local variable 'x' referenced before assignment". However, if I run the model without @tf.function(jit_compile=True), the model executes without any error.

Standalone code to reproduce the issue:

class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()

    @tf.function(jit_compile=True)  # comment this line and it will succeed
    def call(self, x):
        def cond(i, y):
            return i > 10
        def body(i, y):
            y = tf.math.add(y, 2.0)
            x = tf.math.multiply(x, 2.0)
            return tf.math.subtract(i, 1), y * x
        i = tf.constant(10)
        y = tf.constant(1.0)
        final_y = tf.while_loop(cond, body, [i, y], shape_invariants=[i.shape, y.shape])
        return final_y

m = Model()
input_shape = [1, 2]
x = tf.constant(4.5, shape=input_shape)
y = m(x)

Relevant log output:

UnboundLocalError: Exception encountered when calling layer "model_28" (type Model).

in user code:

    File "...", line 16, in body
        x = tf.math.multiply(x, 2.0)

    UnboundLocalError: local variable 'x' referenced before assignment

Call arguments received by layer "model_28" (type Model):
  x=tf.Tensor(shape=(1, 2), dtype=float32)
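The UnboundLocalError here is Python's scoping rule rather than anything XLA-specific: because body assigns to x, Python treats x as local to body, so reading it on the right-hand side fails. A minimal sketch of the same failure and the usual fix, threading the value through as an explicit argument the way tf.while_loop loop variables work (broken and fixed are illustrative names):

```python
def broken(x):
    def body():
        # Assigning to x makes it local to body, so the read on the
        # right-hand side happens before any assignment.
        x = x * 2.0
        return x
    return body()

def fixed(x):
    def body(x):  # pass x in explicitly instead of closing over it
        return x * 2.0
    return body(x)

try:
    broken(4.0)
    raised = False
except UnboundLocalError:
    raised = True

assert raised
assert fixed(4.0) == 8.0
```

In the repro above the same fix would mean making x a loop variable of tf.while_loop instead of a free variable that body both reads and assigns.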
tensorflowtensorflow
could not find match concrete function to call load from the savedmodel
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow nightly: Yes
Source: source
TensorFlow version: TF 2.10
Custom code: Yes
OS platform and distribution: Win10
Mobile device: No response
Python version: 3.9
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior:

How do I match the data to the saved signature?

Standalone code to reproduce the issue:

model.save(filepath=save_model_dir, save_format="tf", signatures=None)
local_model = tf.keras.models.load_model(filepath=save_model_dir)
y_local_pred = local_model.predict(x_test)
y_model_pred = model.predict(x_test)
print(y_local_pred, y_model_pred)
np.allclose(y_local_pred, y_model_pred)

user_inputs = [
    tf.TensorSpec.from_tensor(tf.convert_to_tensor(user_input[0]), name="input_0"),
    tf.TensorSpec.from_tensor(tf.convert_to_tensor(user_input[1]), name="input_1"),
    tf.TensorSpec.from_tensor(tf.convert_to_tensor(user_input[2]), name="input_2"),
]
user_output = local_model.user_fn(user_inputs)

Relevant log output:

Could not find matching concrete function to call loaded from the SavedModel. Got:
  Positional arguments (1 total):
    * ...
  Keyword arguments: {}

Expected these arguments to match one of the following 1 option(s):

Option 1:
  Positional arguments (1 total):
    * [TensorSpec(shape=(None, 5), dtype=tf.float32, name='input_0'),
       TensorSpec(shape=(None, 10), dtype=tf.int32, name='input_1'),
       TensorSpec(shape=(None, 3, 5), dtype=tf.int32, name='input_2')]
  Keyword arguments: {}
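A loaded concrete function only matches calls whose argument structure, shapes, and dtypes agree with the saved TensorSpecs; passing TensorSpec objects (descriptions of tensors) where actual tensors are expected, or a differently nested structure, triggers exactly this error. The matching idea can be sketched in plain Python (Spec and matches are hypothetical illustrations, not TensorFlow internals):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Spec:
    shape: tuple   # a None entry means "any size in this dimension"
    dtype: str

def matches(value_shape, value_dtype, spec):
    # dtypes must agree, ranks must agree, and every fixed dimension must agree.
    if value_dtype != spec.dtype or len(value_shape) != len(spec.shape):
        return False
    return all(s is None or s == d for s, d in zip(spec.shape, value_shape))

# The three specs reported in the error message above.
expected = [
    Spec((None, 5), "float32"),
    Spec((None, 10), "int32"),
    Spec((None, 3, 5), "int32"),
]

# Inputs structured like the saved signature: same count, shapes, dtypes.
inputs = [((8, 5), "float32"), ((8, 10), "int32"), ((8, 3, 5), "int32")]
assert len(inputs) == len(expected)
assert all(matches(sh, dt, sp) for (sh, dt), sp in zip(inputs, expected))

# A wrong dtype (or shape, or nesting) fails the match, so no
# "matching concrete function" would be found.
assert not matches((8, 5), "int32", expected[0])
```

Under this reading, the repro should pass real tensors whose shapes and dtypes fit the three specs, rather than the TensorSpec objects themselves.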
tensorflowtensorflow
ValueError: Unable to create dataset (name already exists)
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: yes. Source: source. TensorFlow version: 2.13.0. Custom code: yes. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.
Current behavior:

Epoch 1/10
1/1 - ETA: 0s - loss: 6.8405 - accuracy: 0.3250
ValueError                                Traceback (most recent call last)
<ipython-input> in <cell line: 1>
----> 1 transformer.fit(train_ds, epochs=epochs, validation_data=val_ds, batch_size=batch_size, callbacks=[early_stop, checkpoint_call, plot_loss])
2 frames
/usr/local/lib/python3.10/dist-packages/h5py/_hl/dataset.py in make_new_dset(parent, shape, dtype, data, name, chunks, compression, shuffle, fletcher32, maxshape, compression_opts, fillvalue, scaleoffset, track_times, external, track_order, dcpl, dapl, efile_prefix, virtual_prefix, allow_unknown_filter, rdcc_nslots, rdcc_nbytes, rdcc_w0)
    161         sid = h5s.create_simple(shape, maxshape)
    162
--> 163     dset_id = h5d.create(parent.id, name, tid, sid, dcpl=dcpl, dapl=dapl)
    164
    165     if data is not None and not isinstance(data, Empty):
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/h5d.pyx in h5py.h5d.create()
ValueError: Unable to create dataset (name already exists)

I want to save each epoch as a checkpoint. Two weeks back this ran without any error and each epoch was saved as a checkpoint, but suddenly I now get this error.
Standalone code to reproduce the issue:

import matplotlib.pyplot as plt
from tensorflow.keras.callbacks import Callback
import os

checkpoint_dir = 'model_checkpoint_m_1'
if not os.path.exists(checkpoint_dir):
    os.makedirs(checkpoint_dir)

from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=3)

# set up the model checkpoint callback
checkpoint_call = ModelCheckpoint(
    filepath=checkpoint_dir + '/checkpoint-{epoch}.hdf5',
    monitor='val_loss',
    save_best_only=True,
    save_weights_only=False,
    mode='min',
    save_freq='epoch')

transformer.fit(train_ds, epochs=epochs, validation_data=val_ds,
                batch_size=batch_size,
                callbacks=[early_stop, checkpoint_call])

Relevant log output: no response
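For reference, ModelCheckpoint formats filepath with the epoch number (and any keys from the logs dict, such as val_loss), so a template containing {epoch} writes each checkpoint to its own .hdf5 file rather than reopening one file. A plain-Python sketch of that expansion; the directory and template names are illustrative:

```python
import os

checkpoint_dir = "model_checkpoint_m_1"
# Keras substitutes {epoch} (and logs keys like {val_loss}) into filepath,
# so every epoch produces a distinct file name
template = os.path.join(checkpoint_dir, "checkpoint-{epoch:02d}.hdf5")

paths = [template.format(epoch=e) for e in (1, 2, 3)]
print(paths)
assert len(set(paths)) == 3  # no two epochs collide on the same file
```

With a fixed filename and save_best_only=False, by contrast, every epoch targets the same path, so it is worth checking which file the failing run is actually writing to.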
tensorflowtensorflow
tf datum dataset list file you must feed a value for placeholder tensor
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: no. Source: binary. TensorFlow version: v2.12.0-rc1-12-g0db597d0d75 2.12.0. Custom code: no. OS platform and distribution: Ubuntu 20.04.5 LTS. Mobile device: no response. Python version: 3.8.10. Bazel version: no response. GCC compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.
Current behavior: reading a dataset obtained with tf.data.Dataset.list_files prints an incomprehensible warning.
Create two files:

touch a.txt
touch b.txt

Run this Python program:

import tensorflow as tf

dataset = tf.data.Dataset.list_files(['a.txt', 'b.txt'])
for f in dataset:
    print(f)

It prints some incomprehensible warnings:

tf.Tensor(b'b.txt', shape=(), dtype=string)
tf.Tensor(b'a.txt', shape=(), dtype=string)
2023-09-07 17:19:04.634978: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype string and shape [2] [[{{node Placeholder/_0}}]]
2023-09-07 17:19:04.635273: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype string and shape [2] [[{{node Placeholder/_0}}]]

This is a duplicate of an earlier issue that was marked as resolved 3 years ago.
Standalone code to reproduce the issue: with TensorFlow 2.12.0 the warning appears; with TensorFlow 2.11.1 there is no warning.
Relevant log output:

2023-09-07 17:19:04.635273: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'Placeholder/_0' with dtype string and shape [2] [[{{node Placeholder/_0}}]]
tensorflowtensorflow
Linux TensorFlow build from source: bash fails because genrule-setup.sh has carriage returns
Bug
Issue type: Documentation Bug. Have you reproduced the bug with TensorFlow nightly: yes. Source: source. TensorFlow version: TF 2.13. Custom code: yes. OS platform and distribution: Linux. Mobile device: no response. Python version: 3.10. Bazel version: 5.3.0. GCC compiler version: LLVM Clang 16. CUDA/cuDNN version: CPU only. GPU model and memory: n/a.
Current behavior: the Bazel build fails with a bash error:

Error executing command /bin/bash ... line 1: $'\r': command not found

bash fails because genrule-setup.sh has carriage returns (Windows line endings). Modifying this .sh file does not work, because Bazel won't even start the build, saying that files were modified and might be corrupt. The docs need to tell us that we should check out the TF repo only after running git config --global core.autocrlf input. Thanks.
Standalone code to reproduce the issue:

# very important, otherwise your Bazel build will fail because an invoked
# shell script will have Windows line endings
git config --global core.autocrlf input
git config --global core.eol lf
git clone
cd tensorflow
git checkout -b r2.13 origin/r2.13
apt install python3.10-venv
python3 -m venv tf-venv
tf-venv/bin/pip install -U pip numpy wheel packaging requests opt_einsum
tf-venv/bin/pip install -U keras_preprocessing --no-deps
wget
mv bazelisk-linux-amd64 bazel
chmod +x bazel
mv -v bazel /usr/local/bin
# install clang-16
cd tensorflow
tf-venv/bin/python3 configure.py
# configure answers: download clang: n, optimization flags: -msse4.1
bazel build --local_ram_resources=2048 --jobs=4 --verbose_failures //tensorflow/tools/pip_package:build_pip_package

Relevant log output: no response
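The failure mode can be reproduced and repaired outside Bazel: a script whose first line ends in CRLF makes bash try to execute the stray carriage return, which is exactly the `$'\r': command not found` message above. A small Python sketch that creates such a file, shows the bash error, then strips the carriage returns (which is what checking out with core.autocrlf=input prevents in the first place); the file name is illustrative and bash is assumed to be on PATH:

```python
import os
import subprocess
import tempfile

# a script with Windows (CRLF) line endings: the first line is a bare "\r"
path = os.path.join(tempfile.mkdtemp(), "crlf.sh")
with open(path, "wb") as f:
    f.write(b"\r\necho ok\r\n")

broken = subprocess.run(["bash", path], capture_output=True, text=True)
print(broken.stderr.strip())   # ...: line 1: $'\r': command not found

# strip the carriage returns -- core.autocrlf=input keeps them out at checkout
with open(path, "rb") as f:
    data = f.read().replace(b"\r", b"")
with open(path, "wb") as f:
    f.write(data)

fixed = subprocess.run(["bash", path], capture_output=True, text=True)
print(fixed.stdout.strip())    # ok
```

Since Bazel refuses to build from a modified tree, the practical fix is the reporter's: set the git options first and re-clone, so the checkout never contains CRLF scripts.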
tensorflowtensorflow
bazel test fails for some test targets
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: yes. Source: source. TensorFlow version: TF 2.12. Custom code: yes. OS platform and distribution: CentOS 7.6. Mobile device: no response. Python version: 3.9. Bazel version: 5.3.0. GCC compiler version: 9.3.1. CUDA/cuDNN version: no response. GPU model and memory: no response.
Current behavior: after compiling TensorFlow from source, I ran bazel test for the unit tests. Some test targets pass, but there are many failures, and the errors look like the output below. I want to know whether I am executing the command incorrectly and how it should be set up.

bazel test -c opt --config=cuda --test_sharding_strategy=disabled //tensorflow/core/kernels/...

Output information:

exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //tensorflow/core/kernels:image_resize_op_test_gpu

Standalone code to reproduce the issue:

bazel test -c opt --config=cuda --test_sharding_strategy=disabled //tensorflow/core/kernels/...

Relevant log output: no response
tensorflowtensorflow
unable to open file truncated file eof 2568306 sblock base addr 0 store eof 9406464
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: no. Source: source. TensorFlow version: v2.13.0-rc2-7-g1cb1a030a62 2.13.0. Custom code: no. OS platform and distribution: macOS 13.5.1. Mobile device: no response. Python version: 3.11.3. Bazel version: no response. GCC compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.
Current behavior:

OSError                                   Traceback (most recent call last)
Cell In[15], line 3
      1 # create the base model from the pre-trained model MobileNetV2
      2 img_shape = img_size + (3,)
----> 3 base_model = tf.keras.applications.mobilenet_v2.MobileNetV2(input_shape=img_shape,
      4     include_top=False,
      5     weights='imagenet')
File ~/.pyenv/versions/3.11.3/lib/python3.11/site-packages/keras/src/applications/mobilenet_v2.py:481, in MobileNetV2(input_shape, alpha, include_top, weights, input_tensor, pooling, classes, classifier_activation, **kwargs)
    477     weight_path = BASE_WEIGHT_PATH + model_name
    478     weights_path = data_utils.get_file(
    479         model_name, weight_path, cache_subdir='models')
    480
--> 481     model.load_weights(weights_path)
    482 elif weights is not None:
    483     model.load_weights(weights)
File ~/.pyenv/versions/3.11.3/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
     67     filtered_tb = _process_traceback_frames(e.__traceback__)
     68     # To get the full stack trace, call:
     69     # `tf.debugging.disable_traceback_filtering()`
---> 70     raise e.with_traceback(filtered_tb) from None
     71 finally:
     72     del filtered_tb
File h5py/_objects.pyx:55, in h5py._objects.with_phil.wrapper()
File h5py/h5f.pyx:106, in h5py.h5f.open()
OSError: Unable to open file (truncated file: eof = 2568306, sblock base_addr = 0, stored_eof = 9406464)

Standalone code to reproduce the issue:

# create the base model from the pre-trained model MobileNetV2
img_shape = img_size + (3,)
base_model = tf.keras.applications.MobileNetV2(input_shape=img_shape,
                                               include_top=False,
                                               weights='imagenet')

Relevant log output: no response
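A "truncated file" error from h5py at this point usually means an earlier weights download was interrupted: get_file caches downloads (by default under ~/.keras with cache_subdir='models', as the traceback shows), and every later run reopens the same partial .h5 file instead of re-downloading it. Deleting the partial file forces a fresh download on the next run; a sketch, where the file-name fragment to match is an assumption about how the cached weights file is named:

```python
import os

# Keras caches downloaded weights under ~/.keras/models; an interrupted
# download leaves a truncated .h5 there that every later load trips over
cache_dir = os.path.expanduser(os.path.join("~", ".keras", "models"))

def purge(cache_dir, name_fragment="mobilenet_v2"):
    """Remove cached weight files whose names contain name_fragment."""
    removed = []
    if os.path.isdir(cache_dir):
        for fname in os.listdir(cache_dir):
            if name_fragment in fname:
                os.remove(os.path.join(cache_dir, fname))
                removed.append(fname)
    return removed

print(purge(cache_dir))
```

After purging, re-running the MobileNetV2 constructor with weights='imagenet' should download the full file again.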
tensorflowtensorflow
XLA-compiled tf.where with known output shape errors in set_shape + tf.concat / tf.stack
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: no. Source: binary. TensorFlow version: 2.13. Custom code: no. OS platform and distribution: Google Colab. Mobile device: no response. Python version: 3.10.12. Bazel version: no response. GCC compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.
Current behavior: the output of tf.where, when used inside a tf.function with jit_compile=True, can sometimes be used correctly (as with a sum) and sometimes raises a shape-mismatch error (as with concatenation). The error is present even if the output shape is set manually with set_shape. The code below runs fine without jit_compile, or with a sum instead of tf.concat, and only fails when concatenating inside a compiled function. Note: autoclustering solves the issue on the toy example but not on the codebase I am working on.
Standalone code to reproduce the issue (Colab):

import tensorflow as tf

def fun(x, y):
    x = tf.where(x == 1)
    print(f"shape before: unknown {x.shape}")
    x.set_shape((y.shape[0], 2))
    print(f"shape after: known {x.shape}")
    return tf.concat([x, y], axis=1)  # concatenation fails; a sum would succeed

x = tf.constant([[0, 0, 1, 1, 0],
                 [0, 1, 0, 1, 0],
                 [1, 0, 0, 0, 1],
                 [1, 0, 1, 0, 0],
                 [0, 1, 1, 0, 0]], dtype=tf.int32)
y = tf.expand_dims(tf.range(x.shape[0] * 2, dtype=tf.int64), axis=1)
fun(x, y)
tf.function(fun)(x, y)
tf.function(fun, jit_compile=True)(x, y)  # fails as described above

Relevant log output:

shape before: unknown (None, 2)
shape after: known (10, 2)
InvalidArgumentError: Cannot concatenate arrays that differ in dimensions other than the one being concatenated; dimension 0 in both shapes must be equal: s64[25,2] vs s64[10,1]
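The s64[25,2] in the error hints at what XLA is doing: compiled programs need static shapes, so a data-dependent tf.where result is materialized at its maximum possible size (every one of the 25 elements matching) regardless of set_shape. A common way to work with such ops under XLA is to embrace the static upper bound and pad to it; a NumPy sketch of the pad-to-bound idea (NumPy stands in for the TF ops here, and the sentinel value -1 is an assumption):

```python
import numpy as np

x = np.array([[0, 1, 1],
              [1, 0, 0]])
idx = np.argwhere(x == 1)   # data-dependent: here 3 matching positions
max_rows = x.size           # static upper bound: every element could match
pad = max_rows - idx.shape[0]
# pad with sentinel rows so the result always has the static shape
idx_static = np.pad(idx, ((0, pad), (0, 0)), constant_values=-1)
print(idx_static.shape)     # (6, 2) regardless of the data in x
```

Downstream ops then carry the static (max_rows, 2) shape and mask out the sentinel rows where needed, which is consistent with the fully-static shapes XLA expects.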
tensorflowtensorflow
Will there be an MLP model in a future version?
Bug
Issue type: Feature Request. Have you reproduced the bug with TensorFlow nightly: no. Source: source. TensorFlow version: 2.13.0. Custom code: no. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.
Current behavior: when building deep-learning models like multi-layer perceptrons (MLPs), code reusability and conciseness are crucial factors. Currently, tf.keras.Sequential allows convenient creation of sequential models; however, manually adding common operations such as batch normalization or dropout to each layer leads to code redundancy and an increased maintenance burden. I therefore propose adding a feature to TensorFlow to directly create MLPs with batch normalization and dropout. Here are several reasons why this would be advantageous for TensorFlow users:
1. Simplified code: users won't need to manually add batch normalization and dropout operations to each layer, resulting in cleaner, more readable, and maintainable code.
2. Reduced error rate: manual copy-pasting of code is error-prone, especially as model complexity increases. Automating the integration of batch normalization and dropout can reduce issues arising from code errors.
3. Increased productivity: developers can build and iterate on models more swiftly, focusing on design and parameter tuning rather than rewriting the same code segments for every new model.
4. Education and learning: for newcomers to TensorFlow, this feature would provide quicker onboarding, lower the learning curve, and let them grasp and apply deep-learning concepts faster.
I also believe that PyTorch has implemented MLP functionality quite effectively; its approach to creating MLPs provides a good reference for how TensorFlow could potentially integrate similar features.
Standalone code to reproduce the issue:

# original
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(32),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.ReLU(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),
])

# with an MLP model
model = tf.keras.MLP(hidden_channels=[128, 64, 32, 10],
                     norm_layer=tf.keras.layers.BatchNormalization,
                     activation_layer=tf.keras.layers.ReLU,
                     dropout=0.2)

Relevant log output: no response
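The proposed tf.keras.MLP is hypothetical, but the expansion it would perform is mechanical and easy to sketch in plain Python: each hidden width becomes a Dense / norm / activation / dropout block, and the last width becomes a bare output Dense. Strings stand in for Keras layer classes below, so the sketch stays framework-free:

```python
# hypothetical helper expanding an MLP spec into an ordered layer plan;
# strings stand in for the Keras layer classes they name
def mlp_plan(hidden_channels, norm_layer="BatchNormalization",
             activation_layer="ReLU", dropout=0.2):
    plan = []
    *hidden, out = hidden_channels
    for width in hidden:
        plan += [("Dense", width), (norm_layer,), (activation_layer,),
                 ("Dropout", dropout)]
    plan.append(("Dense", out))  # no norm/activation/dropout after the head
    return plan

plan = mlp_plan([128, 64, 32, 10])
print(len(plan))  # 13: three hidden blocks of four layers plus the output Dense
```

Feeding such a plan into tf.keras.Sequential (instantiating each entry) would reproduce the hand-written model in the report, which is exactly the redundancy the feature request wants to eliminate.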
tensorflowtensorflow
please help
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow nightly: no. Source: source. TensorFlow version: 1.13.1. Custom code: yes. OS platform and distribution: Linux Ubuntu 20.04. Mobile device: no response. Python version: 3.6. Bazel version: no response. GCC compiler version: 10.5.0. CUDA/cuDNN version: 10. GPU model and memory: RTX 3060, 12 GB.
Current behavior: I am running the octopus repo, which uses tensorflow-gpu 1.13.1. When I run that model with Python I get the errors from TensorFlow shown below. Please help.
Standalone code to reproduce the issue:

import os
import argparse
import tensorflow as tf
import keras.backend as K
from glob import glob
from lib.io import openpose_from_file, read_segmentation, write_mesh
from model.octopus import Octopus

def main(weights, name, segm_dir, pose_dir, out_dir, opt_pose_steps, opt_shape_steps):
    segm_files = sorted(glob(os.path.join(segm_dir, '*.png')))
    pose_files = sorted(glob(os.path.join(pose_dir, '*.json')))
    if len(segm_files) != len(pose_files) or len(segm_files) == 0:
        exit('Inconsistent input.')
    K.set_session(tf.Session(config=tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))))
    model = Octopus(num=len(segm_files))
    model.load(weights)
    segmentations = [read_segmentation(f) for f in segm_files]
    joints_2d, face_2d = [], []
    for f in pose_files:
        j, f = openpose_from_file(f)
        assert len(j) == 25
        assert len(f) == 70
        joints_2d.append(j)
        face_2d.append(f)
    if opt_pose_steps:
        print('Optimizing for pose...')
        model.opt_pose(segmentations, joints_2d, opt_steps=opt_pose_steps)
    if opt_shape_steps:
        print('Optimizing for shape...')
        model.opt_shape(segmentations, joints_2d, face_2d, opt_steps=opt_shape_steps)
    print('Estimating shape...')
    pred = model.predict(segmentations, joints_2d)
    write_mesh('{}/{}.obj'.format(out_dir, name), pred['vertices'][0], pred['faces'])
    print('Done.')

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('name', type=str, help="Sample name")
    parser.add_argument('segm_dir', type=str, help="Segmentation images directory")
    parser.add_argument('pose_dir', type=str, help="2D pose keypoints directory")
    parser.add_argument('--opt_steps_pose', '-p', default=5, type=int, help="Optimization steps pose")
    parser.add_argument('--opt_steps_shape', '-s', default=15, type=int, help="Optimization steps")
    parser.add_argument('--out_dir', '-od', default='out', help="Output directory")
    parser.add_argument('--weights', '-w', default='weights/octopus_weights.hdf5', help="Model weights file (*.hdf5)")
    args = parser.parse_args()
    main(args.weights, args.name, args.segm_dir, args.pose_dir, args.out_dir,
         args.opt_steps_pose, args.opt_steps_shape)

Relevant log output:

Processing sample...
Optimizing for pose...
  0%|          | 0/10 [00:00<?, ?it/s]
2023-08-22 16:24:18.296359: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
2023-08-22 16:25:50.156420: I tensorflow/core/kernels/cuda_solvers.cc:159] Creating CudaSolver handles for stream 0x55fa094fdcf0
2023-08-22 16:26:08.284736: E tensorflow/stream_executor/cuda/cuda_blas.cc:698] failed to run cuBLAS routine cublasGemmBatchedEx: CUBLAS_STATUS_EXECUTION_FAILED
2023-08-22 16:26:08.284773: E tensorflow/stream_executor/cuda/cuda_blas.cc:2620] Internal: failed BLAS call, see log for details
2023-08-22 16:26:08.326578: I tensorflow/stream_executor/stream.cc:5014] stream 0x55fa0950bb90 [impl 0x55fa093dbf20] did not memcpy device-to-host; source: 0x813bc6700
2023-08-22 16:26:08.326623: F tensorflow/core/framework/op_kernel.cc:1408] Check failed: nullptr == ctx->op_kernel().AsAsync() (nullptr vs. 0x55fa38108400) Use OP_REQUIRES_ASYNC in AsyncOpKernel implementations.
Aborted