Dataset columns: repository (string, 156 classes), issue title (string, 1–1.01k chars), labels (string, 8 classes), body (string, 1–270k chars).
tensorflow/tensorflow
Fails to expand RETURN_IF_ROCTRACER_ERROR: it is invalid to use a preprocessor directive as a macro parameter
Bug
Click to expand:
Issue type: Bug
Have you reproduced the bug with tf-nightly? Yes
Source: source
TensorFlow version: tf 2.8
Custom code: yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
A bug happened! Why is this an issue? At Codacy we strive to provide great descriptions for our patterns, with good explanations, so developers can better understand issues and even learn how to fix them. For this tool we are not yet meeting this standard, but you can help us improve the docs. To know more, take a look at our tool documentation guide. You can also visit the tool's website to find useful tips about the patterns.

Standalone code to reproduce the issue:

    RETURN_IF_ROCTRACER_ERROR(static_cast<...>(
    #if TF_ROCM_VERSION >= 50300
        se::wrap::roctracer_next_record(record, &record)
    #else
        ...

Relevant log output: No response
tensorflow/tensorflow
new
Invalid
issue
tensorflow/tensorflow
tr1
Invalid
System information:
Android device information (use `adb shell getprop ro.build.fingerprint` if possible)
TensorFlow Lite in Play Services SDK version (found in build.gradle)
Google Play Services version (Settings > Apps > Google Play Services > App details)

Standalone code to reproduce the issue:
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to or attach code demonstrating the problem.

Any other info / logs:
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
Update label_image.py to be compatible with TF 2.x
Bug
The label_image.py script is outdated. This adds the necessary changes to make it run on TF 2.x versions. Please do the needful. Fixes #59900. Thanks.
tensorflow/tensorflow
S3_USE_HTTPS=0 is being ignored; TF insists on HTTPS
Bug
Click to expand:
Issue type: Bug
Have you reproduced the bug with tf-nightly? No
Source: binary
TensorFlow version: 2.11.0
Custom code: no
OS platform and distribution: COS / containerd
Mobile device: No response
Python version: 3.9.2
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
Running the standalone code below on Kubeflow 1.6, trying to use MinIO storage via the S3 protocol, produces the attached log, with the following excerpt showing that the S3_USE_HTTPS=0 environment variable is being ignored:

    2023-03-14 18:07:48.690914: I tensorflow/c/logging.cc:34 Making request to
    Error message: curlCode: 35, SSL connect error

Note that the following code using boto3 works just fine:

    import boto3
    session = boto3.session.Session()
    s3client = session.client(service_name='s3', aws_access_key_id='minio',
                              aws_secret_access_key='wi939w9...', endpoint_url=...,
                              use_ssl=0, verify=0, region_name='us-east-1')
    {'ResponseMetadata': {'RequestId': '174C5DC82C4E602D', 'HostId': '',
      'HTTPStatusCode': 200, 'HTTPHeaders': {'accept-ranges': 'bytes',
      'content-length': '667', 'content-security-policy': 'block-all-mixed-content',
      'content-type': 'application/xml', 'server': 'envoy', 'vary': 'Origin',
      'x-amz-request-id': '174C5DC82C4E602D', 'x-xss-protection': '1; mode=block',
      'date': 'Tue, 14 Mar 2023 18:51:15 GMT', 'x-envoy-upstream-service-time': '1'},
      'RetryAttempts': 0},
     'IsTruncated': False, 'Marker': '',
     'Contents': [{'Key': 'tfx_template/data/taxi/data.csv',
       'LastModified': datetime.datetime(2023, 2, 27, 23, 47, 55, 766000, tzinfo=tzlocal()),
       'ETag': '43550bbc6946bcabc55328e0adb129b3-1', 'Size': 1922812,
       'StorageClass': 'STANDARD',
       'Owner': {'DisplayName': '',
         'ID': '02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4'}}],
     'Name': 'mlpipeline', 'Prefix': 'tfx_template/data/taxi/data.csv',
     'Delimiter': '', 'MaxKeys': 1000, 'EncodingType': 'url'}

Standalone code to reproduce the issue:

    import tensorflow as tf
    import tensorflow_io as tfio
    import os
    os.environ['AWS_REGION'] = 'us-east-1'
    os.environ['S3_ENDPOINT'] = ...
    os.environ['AWS_ACCESS_KEY_ID'] = 'minio'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'wi939w9...'
    os.environ['S3_USE_HTTPS'] = '0'
    os.environ['S3_VERIFY_SSL'] = '0'
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '0'
    os.environ['TF_CPP_MIN_VLOG_LEVEL'] = '5'
    os.environ['AWS_LOG_LEVEL'] = 'trace'
    tf.io.gfile.glob('s3://mlpipeline/tfx_template/data/taxi/data.csv')
    # .exists produces a similar log:
    tf.io.gfile.exists('s3://mlpipeline/tfx_template/data/taxi/data.csv')

Relevant log output:

    2023-03-14 18:07:48.690794: I tensorflow/c/logging.cc:34 No content body, content-length headers
    2023-03-14 18:07:48.690823: I tensorflow/c/logging.cc:34 Found credentials in environment with access key id minio
    2023-03-14 18:07:48.690830: I tensorflow/c/logging.cc:34 Found secret key
    2023-03-14 18:07:48.690836: I tensorflow/c/logging.cc:34 Note: HTTP payload is not being signed. signPayloads=0, http scheme=https
    2023-03-14 18:07:48.690857: I tensorflow/c/logging.cc:34 Canonical Header String: amz-sdk-invocation-id:26CFFBE4-9F97-48D7-BC0A-FFA9613B9DC0 amz-sdk-request:attempt=1 content-type:application/xml host:10.244.1.171:9000 x-amz-api-version:2006-03-01 x-amz-content-sha256:UNSIGNED-PAYLOAD x-amz-date:20230314T180748Z
    2023-03-14 18:07:48.690861: I tensorflow/c/logging.cc:34 Signed Headers value: amz-sdk-invocation-id;amz-sdk-request;content-type;host;x-amz-api-version;x-amz-content-sha256;x-amz-date
    2023-03-14 18:07:48.690872: I tensorflow/c/logging.cc:34 Canonical Request String: HEAD /mlpipeline/tfx_template/data/taxi/data.csv amz-sdk-invocation-id:26CFFBE4-9F97-48D7-BC0A-FFA9613B9DC0 amz-sdk-request:attempt=1 content-type:application/xml host:10.244.1.171:9000 x-amz-api-version:2006-03-01 x-amz-content-sha256:UNSIGNED-PAYLOAD x-amz-date:20230314T180748Z amz-sdk-invocation-id;amz-sdk-request;content-type;host;x-amz-api-version;x-amz-content-sha256;x-amz-date UNSIGNED-PAYLOAD
    2023-03-14 18:07:48.690886: I tensorflow/c/logging.cc:34 Final String to sign: AWS4-HMAC-SHA256 20230314T180748Z 20230314/us-east-1/s3/aws4_request 04da96fd45298dbfd632d7a49f6b1d7de1b62583c81bb099e9b860b2b86628a3
    2023-03-14 18:07:48.690891: I tensorflow/c/logging.cc:34 Final computed signing hash: 23e330b317746f6b48081889f4b7bb2756aa9eefe49c098b8c82ca81da3a6c39
    2023-03-14 18:07:48.690899: I tensorflow/c/logging.cc:34 Signing request with: AWS4-HMAC-SHA256 Credential=minio/20230314/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;content-type;host;x-amz-api-version;x-amz-content-sha256;x-amz-date, Signature=23e330b317746f6b48081889f4b7bb2756aa9eefe49c098b8c82ca81da3a6c39
    2023-03-14 18:07:48.690905: I tensorflow/c/logging.cc:34 Request Successfully signed
    2023-03-14 18:07:48.690914: I tensorflow/c/logging.cc:34 Making request to
    2023-03-14 18:07:48.690919: I tensorflow/c/logging.cc:34 Including headers:
    2023-03-14 18:07:48.690922: I tensorflow/c/logging.cc:34 amz-sdk-invocation-id: 26CFFBE4-9F97-48D7-BC0A-FFA9613B9DC0
    2023-03-14 18:07:48.690926: I tensorflow/c/logging.cc:34 amz-sdk-request: attempt=1
    2023-03-14 18:07:48.690929: I tensorflow/c/logging.cc:34 authorization: AWS4-HMAC-SHA256 Credential=minio/20230314/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;content-type;host;x-amz-api-version;x-amz-content-sha256;x-amz-date, Signature=23e330b317746f6b48081889f4b7bb2756aa9eefe49c098b8c82ca81da3a6c39
    2023-03-14 18:07:48.690932: I tensorflow/c/logging.cc:34 content-type: application/xml
    2023-03-14 18:07:48.690935: I tensorflow/c/logging.cc:34 host: 10.244.1.171:9000
    2023-03-14 18:07:48.690939: I tensorflow/c/logging.cc:34 user-agent: aws-sdk-cpp/1.8.186 Linux/5.15.0-67-generic x86_64 GCC/7.3.1
    2023-03-14 18:07:48.690945: I tensorflow/c/logging.cc:34 x-amz-api-version: 2006-03-01
    2023-03-14 18:07:48.690949: I tensorflow/c/logging.cc:34 x-amz-content-sha256: UNSIGNED-PAYLOAD
    2023-03-14 18:07:48.690954: I tensorflow/c/logging.cc:34 x-amz-date: 20230314T180748Z
    2023-03-14 18:07:48.690960: I tensorflow/c/logging.cc:34 Attempting to acquire curl connection.
    2023-03-14 18:07:48.690966: I tensorflow/c/logging.cc:34 Connection has been released. Continuing.
    2023-03-14 18:07:48.690971: I tensorflow/c/logging.cc:34 Returning connection handle 0x56396c2af710
    2023-03-14 18:07:48.690976: I tensorflow/c/logging.cc:34 Obtained connection handle 0x56396c2af710
    2023-03-14 18:07:48.695426: E tensorflow/c/logging.cc:40 Curl returned error code 35 - SSL connect error
    2023-03-14 18:07:48.695462: I tensorflow/c/logging.cc:34 Destroying curl handle: 0x56396c2af710
    2023-03-14 18:07:48.695483: I tensorflow/c/logging.cc:34 Created replacement handle and released to pool: 0x56396c2af710
    2023-03-14 18:07:48.695495: I tensorflow/c/logging.cc:34 Request returned error. Attempting to generate appropriate error codes from response
    2023-03-14 18:07:48.695504: E tensorflow/c/logging.cc:40 HTTP response code: -1 Resolved remote host IP address: Request ID: Exception name: Error message: curlCode: 35, SSL connect error 0 response headers:
    2023-03-14 18:07:48.695516: W tensorflow/c/logging.cc:37 If the signature check failed, this could be because of a time skew. Attempting to adjust the signer.
    2023-03-14 18:07:48.695521: I tensorflow/c/logging.cc:34 Date header was not found in the response, can't attempt to detect clock skew
    2023-03-14 18:07:48.695527: W tensorflow/c/logging.cc:37 Request failed, now waiting 0 ms before attempting again.
    2023-03-14 18:07:48.695595: I tensorflow/c/logging.cc:34 No content body, content-length headers
    2023-03-14 18:07:48.695603: I tensorflow/c/logging.cc:34 Found credentials in environment with access key id minio
    2023-03-14 18:07:48.695609: I tensorflow/c/logging.cc:34 Found secret key
    2023-03-14 18:07:48.695617: I tensorflow/c/logging.cc:34 Note: HTTP payload is not being signed. signPayloads=0, http scheme=https
    2023-03-14 18:07:48.695637: I tensorflow/c/logging.cc:34 Canonical Header String: amz-sdk-invocation-id:26CFFBE4-9F97-48D7-BC0A-FFA9613B9DC0
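For reference, the behaviour the reporter expects can be sketched as a tiny parser for these flags. The variable name S3_USE_HTTPS comes from the report; the parsing logic itself is a hypothetical illustration, not the actual TensorFlow S3 filesystem code:

```python
import os

def use_https(env=None):
    """Return True unless S3_USE_HTTPS is explicitly disabled ('0', 'false', 'no')."""
    env = os.environ if env is None else env
    value = env.get("S3_USE_HTTPS", "1").strip().lower()
    return value not in ("0", "false", "no")

def scheme(env=None):
    """Pick the URL scheme a plugin honoring the flag would use."""
    return "https" if use_https(env) else "http"

print(scheme({"S3_USE_HTTPS": "0"}))  # http  -> what the reporter expects
print(scheme({}))                     # https -> the default
```

The report suggests the filesystem plugin never consults the flag at this point and always builds an https endpoint.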
tensorflow/tensorflow
NaN while doing inference in C++ with half precision
Bug
Click to expand:
Issue type: Bug
Have you reproduced the bug with tf-nightly? No
Source: source
TensorFlow version: 2.11
Custom code: yes
OS platform and distribution: Debian GNU/Linux 10
Mobile device: No response
Python version: 3.7
Bazel version: 6.1.0
GCC/compiler version: LLVM 12
CUDA/cuDNN version: 11.6
GPU model and memory: Tesla T4

Current behaviour:
I've trained a model in Python using mixed precision. However, when loading the model in C++ and trying to perform inference, the outputs are all NaN. I've tested loading the model in Python and it works. Some ideas: the toolchains might be different; tensors in C++ might not be zero-initialized.

Standalone code to reproduce the issue:

Python:

    class Model(tf.keras.Model):
        def __init__(self):
            self.conv = tf.keras.layers.Conv2D(128, activation='relu')
        def call(self, x):
            return self.conv(x)

    x = tf.convert_to_tensor(np.random.random((1, 8, 8)))
    model = Model()
    model(x)
    model.save('/tmp/model')

C++:

    using namespace tensorflow;
    const static std::vector<std::string> kInputName = {"serving_default_args_0:0"};
    const static std::vector<std::string> kOutputName = {"StatefulPartitionedCall:0"};
    int main(int argc, char** argv) {
        SavedModelBundleLite model;
        LoadSavedModel(SessionOptions(), RunOptions(), "/tmp/model", {"serve"}, &model);
        Tensor input_feature(DataType::DT_FLOAT, {1, 8, 8});
        std::vector<Tensor> cast_output;
        Status status = session.Run(
            {ops::Cast(scope, input_feature, DataType::DT_HALF),
             ops::Cast(scope, input_state, DataType::DT_HALF)},
            &cast_output);
        std::vector inputs = {std::make_pair(kInputName[0], cast_output[0])};
        std::vector<Tensor> output_buf;  // Tensor(DataType::DT_HALF, {1, 6, 6}); this is NaN
        model.GetSession()->Run(inputs, output_names, {}, &output_buf);
        ...

Relevant log output:

    2023-03-14 05:08:59.641765: I cc/scripts/play_model.cc:94 nan
    2023-03-14 05:08:59.641818: I cc/scripts/play_model.cc:95 nan
    2023-03-14 05:08:59.655568: I cc/scripts/play_model.cc:119 Model stats:
    2023-03-14 05:08:59.655609: I cc/scripts/play_model.cc:120 top move: loc 0 0
    2023-03-14 05:08:59.655621: I cc/scripts/play_model.cc:121 win: nan loss: nan
    2023-03-14 05:08:59.655677: I cc/scripts/play_model.cc:123 board: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 a b c d e f g h i j k l m n o p q r s
tensorflow/tensorflow
TensorFlow Lite for Microcontrollers documentation has broken links to .cc files
Bug
Click to expand:
Issue type: Documentation Bug
Have you reproduced the bug with tf-nightly? No
Source: source
TensorFlow version: 2.11
Custom code: no
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
Documentation links to .cc files are broken in the "Run inference" section: the link to hello_world_test.cc and the other remaining .cc file links cannot be opened.

Standalone code to reproduce the issue: N/A
Relevant log output: No response
tensorflow/tensorflow
fvr
Bug
Click to expand:
Issue type: Bug
Have you reproduced the bug with tf-nightly? Yes
Source: source
TensorFlow version: dewxc
Custom code: yes
OS platform and distribution: sdac
Mobile device: sx
Python version: c
Bazel version: xsxd
GCC/compiler version: scdx
CUDA/cuDNN version: dc
GPU model and memory: wsc
Current behaviour: A bug happened!
Standalone code to reproduce the issue: sdacv
Relevant log output: recfdssdc
tensorflow/tensorflow
Outdated example
Bug
Click to expand:
Issue type: Other
Have you reproduced the bug with tf-nightly? No
Source: source
TensorFlow version: tf 2.6.0
Custom code: no
OS platform and distribution: Windows 10 Build 19045.2673
Mobile device: No response
Python version: 3.11
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: 11.0.2
GPU model and memory: No response

Current behaviour: The label_image example is outdated.
Standalone code to reproduce the issue: Not applicable
Relevant log output: Not applicable
tensorflow/tensorflow
ERROR: tensorflow/compiler/xla/service/gpu/runtime/BUILD:146:11: in deps attribute of cc_library rule //tensorflow/compiler/xla/service/gpu/runtime:gemm: target '//tensorflow/compiler/xla/service/gpu:non_atomically_upgradeable_rw_lock' does not exist
Bug
Click to expand:
Issue type: Bug
Have you reproduced the bug with tf-nightly? No
Source: source
TensorFlow version: 2.12
Custom code: no
OS platform and distribution: Linux Ubuntu 22.04
Mobile device: No response
Python version: 3.10
Bazel version: 5.3.0
GCC/compiler version: 11.3.0
CUDA/cuDNN version: 12.1 / 8.8.0.121
GPU model and memory: NVIDIA GeForce 940MX

Current behaviour:

    bazel build --config=mkl --config=opt //tensorflow/tools/pip_package:build_pip_package
    Extracting Bazel installation...
    Starting local Bazel server and connecting to it...
    INFO: Options provided by the client:
      Inherited 'common' options: --isatty=1 --terminal_columns=211
    INFO: Reading rc options for 'build' from tensorflow/.bazelrc:
      Inherited 'common' options: --experimental_repo_remote_exec
    INFO: Reading rc options for 'build' from tensorflow/.bazelrc:
      'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
    INFO: Reading rc options for 'build' from tensorflow/.tf_configure.bazelrc:
      'build' options: --action_env PYTHON_BIN_PATH=~/Documents/dev/programs/anaconda3/envs/tf/bin/python3 --action_env PYTHON_LIB_PATH=~/Documents/dev/programs/anaconda3/envs/tf/lib/python3.10/site-packages --python_path=~/Documents/dev/programs/anaconda3/envs/tf/bin/python3 --action_env CUDA_TOOLKIT_PATH=/usr/local/cuda-12.1 --action_env TF_CUDA_COMPUTE_CAPABILITIES=5.0 --action_env LD_LIBRARY_PATH=/usr/lib/libreoffice/program:/usr/local/cuda/targets/x86_64-linux/lib:/usr/lib/x86_64-linux-gnu --action_env GCC_HOST_COMPILER_PATH=/usr/bin/x86_64-linux-gnu-gcc-11 --config=cuda
    INFO: Reading rc options for 'build' from ~/Documents/dev/git/tensorflow/.bazelrc:
      'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/ir,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_jitrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
    INFO: Found applicable config definition build:short_logs in file ~/Documents/dev/git/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
    INFO: Found applicable config definition build:v2 in file ~/Documents/dev/git/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
    INFO: Found applicable config definition build:cuda in file ~/Documents/dev/git/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
    INFO: Found applicable config definition build:mkl in file ~/Documents/dev/git/tensorflow/.bazelrc: --define=build_with_mkl=true --define=enable_mkl=true --define=tensorflow_mkldnn_contraction_kernel=0 --define=build_with_openmp=true -c opt
    INFO: Found applicable config definition build:opt in file ~/Documents/dev/git/tensorflow/.tf_configure.bazelrc: --copt=-Wno-sign-compare --host_copt=-Wno-sign-compare
    INFO: Found applicable config definition build:linux in file ~/Documents/dev/git/tensorflow/.bazelrc: --host_copt=-w --copt=-Wno-all --copt=-Wno-extra --copt=-Wno-deprecated --copt=-Wno-deprecated-declarations --copt=-Wno-ignored-attributes --copt=-Wno-array-bounds --copt=-Wunused-result --copt=-Werror=unused-result --copt=-Wswitch --copt=-Werror=switch --copt=-Wno-error=unused-but-set-variable --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --config=dynamic_kernels --distinct_host_configuration=false --experimental_guard_against_concurrent_changes
    INFO: Found applicable config definition build:dynamic_kernels in file ~/Documents/dev/git/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
    WARNING: Download from failed: class java.io.FileNotFoundException GET returned 404 Not Found (repeated 7 times)
    ERROR: ~/Documents/dev/git/tensorflow/tensorflow/compiler/xla/service/gpu/runtime/BUILD:146:11: in deps attribute of cc_library rule //tensorflow/compiler/xla/service/gpu/runtime:gemm: target '//tensorflow/compiler/xla/service/gpu:non_atomically_upgradeable_rw_lock' does not exist
    ERROR: ~/Documents/dev/git/tensorflow/tensorflow/compiler/xla/service/gpu/runtime/BUILD:146:11: Analysis of target '//tensorflow/compiler/xla/service/gpu/runtime:gemm' failed
    INFO: Repository cudnn_frontend_archive instantiated at:
      ~/Documents/dev/git/tensorflow/WORKSPACE:15:14: in <toplevel>
      ~/Documents/dev/git/tensorflow/tensorflow/workspace2.bzl:967:21: in workspace
      ~/Documents/dev/git/tensorflow/tensorflow/workspace2.bzl:171:20: in _tf_repositories
      ~/Documents/dev/git/tensorflow/third_party/repo.bzl:136:21: in tf_http_archive
    Repository rule tf_http_archive defined at:
      ~/Documents/dev/git/tensorflow/third_party/repo.bzl:89:35: in <toplevel>
    ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted
    INFO: Elapsed time: 92.916s
    INFO: 0 processes.
    FAILED: Build did NOT complete successfully (602 packages loaded, 36603 targets configured)
    Fetching ...; 8,793,160b 8s

Standalone code to reproduce the issue:

    ./configure
    bazel build --config=mkl --config=opt //tensorflow/tools/pip_package:build_pip_package

Relevant log output: No response
tensorflow/tensorflow
Layer input_spec must be an instance of InputSpec
Bug
Click to expand:
Issue type: Bug
Have you reproduced the bug with tf-nightly? No
Source: source
TensorFlow version: tf 2.6
Custom code: no
OS platform and distribution: Windows
Mobile device: No response
Python version: 3.9
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:

    from tensorflow.python.keras.engine.input_spec import InputSpec as InputSpec1

    def input_spec(self, value):
        for v in tf.nest.flatten(value):
            if v is not None and not isinstance(v, InputSpec):
                if not isinstance(v, InputSpec1):
                    raise TypeError('Layer input_spec must be an instance of InputSpec. '
                                    'Got: {}'.format(v))
        self._input_spec = value

isinstance(v, InputSpec) returns False even when v is InputSpec(shape=(None, 500, 768), ndim=3). After adding the line "if not isinstance(v, InputSpec1):", the assert is fixed. Anyway, after modifying it, my BERT tuning model works fine on GPU, on TensorFlow 2.6 with Python 3.9. Thanks in advance for a reply with the correct solution.

Standalone code to reproduce the issue:

Below is the breakpoint output right before the assert:

    >>> v
    InputSpec(shape=(None, 500, 768), ndim=3)
    >>> type(v)
    InputSpec
    >>> from tensorflow.python.keras.engine.input_spec import InputSpec as InputSpec1
    >>> isinstance(v, InputSpec1)
    True
    >>> isinstance(v, InputSpec)
    False

Relevant log output: No response
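The underlying pattern here, two classes with the same name living in different modules, so isinstance against one fails for instances of the other, can be reproduced without Keras at all. A minimal sketch with hypothetical stand-in modules (not the actual Keras module layout):

```python
import types

# Build two hypothetical modules that each define their own InputSpec class,
# mimicking the split between keras and tensorflow.python.keras engine modules.
mod_a = types.ModuleType("engine_a")
mod_b = types.ModuleType("engine_b")
exec("class InputSpec:\n    pass", mod_a.__dict__)
exec("class InputSpec:\n    pass", mod_b.__dict__)

v = mod_a.InputSpec()
print(isinstance(v, mod_a.InputSpec))  # True
print(isinstance(v, mod_b.InputSpec))  # False: same name, different class object
```

This is why the reporter's extra `isinstance(v, InputSpec1)` check makes the error go away: the object really is an InputSpec, just not the one the check imports.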
tensorflow/tensorflow
Error when trying to get the value for the BERT model in the "Classify text with BERT" tutorial
Bug
Click to expand:
Issue type: Bug
Have you reproduced the bug with tf-nightly? No
Source: source
TensorFlow version: 2.4.4
Custom code: yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
I am trying to run the "Classify text with BERT" tutorial using a Jupyter notebook on a remote server. When I try to instantiate the BERT model, bert_model = hub.KerasLayer(tfhub_handle_encoder), it never returns a result, and the cell keeps executing with the asterisk next to it. Is this supposed to happen, and is it normal for that line to take a long time to execute?

Standalone code to reproduce the issue:
The code is this line: bert_model = hub.KerasLayer(tfhub_handle_encoder)

Relevant log output: No response
tensorflow/tensorflow
tensorflow on PyPI doesn't support protobuf 3.20.0
Bug
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS platform and distribution (e.g., Linux Ubuntu 16.04): sw_vers → ProductName: macOS, ProductVersion: 12.6.3, BuildVersion: 21G419
Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: MacBook
TensorFlow installed from (source or binary): pip (binary)
TensorFlow version (use command below): the most recent one
Python version: 3.8.15
Bazel version (if compiling from source): N/A
GCC/compiler version (if compiling from source): N/A
CUDA/cuDNN version: N/A
GPU model and memory: N/A
Exact command to reproduce: see the "Describe the problem" section

Describe the problem:
I have a pyproject.toml and I want to upgrade its onnx from 1.12.0 to 1.13.0. When I run poetry lock it tells me:

    Because concrete-ml depends on onnx (1.13.0) which depends on protobuf (>=3.20.2,<4), protobuf is required.
    So, because concrete-ml depends on protobuf (3.19.6), version solving failed.

So I upgrade protobuf and now it tells me:

    Because tensorflow (2.11.0) depends on protobuf (>=3.9.2,<3.20) and concrete-ml depends on protobuf (3.20.0), tensorflow is forbidden.
    So, because concrete-ml depends on tensorflow (2.11.0), version solving failed.

...which I understand as: the most recent version of TF (2.11.0) currently doesn't support a recent version of protobuf. This is a problem for me since, for now, when I run pip-audit it tells me:

    Found 3 known vulnerabilities in 3 packages
    name    version  id                   fix versions
    mpmath  1.2.1    PYSEC-2021-427
    onnx    1.12.0   GHSA-ffxj-547x-5j7c  1.13.0
    py      1.11.0   PYSEC-2022-42969

So I can't upgrade onnx, and in my understanding it is related to TF. Thanks for your hard work guys, what you do for the ML community is amazing!
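The conflict is just interval arithmetic over version specifiers; a minimal stdlib-only sketch, with the two constraint ranges taken from the solver output above (the parsing helper is simplified and ignores pre-release tags):

```python
def parse(version):
    """Parse 'X.Y.Z' into a comparable tuple of ints (pre-release tags ignored)."""
    return tuple(int(part) for part in version.split("."))

def tf_2_11_accepts(protobuf_version):
    # tensorflow 2.11.0 requires protobuf >=3.9.2,<3.20
    return parse("3.9.2") <= parse(protobuf_version) < parse("3.20")

def onnx_1_13_accepts(protobuf_version):
    # onnx 1.13.0 requires protobuf >=3.20.2,<4
    return parse("3.20.2") <= parse(protobuf_version) < parse("4")

# No candidate satisfies both ranges, which is exactly why solving fails:
candidates = ["3.19.6", "3.20.0", "3.20.2", "3.21.1"]
both = [v for v in candidates if tf_2_11_accepts(v) and onnx_1_13_accepts(v)]
print(both)  # []
```

The two intervals [3.9.2, 3.20) and [3.20.2, 4) are disjoint, so no protobuf version can ever satisfy both pins at once; only a new tensorflow release that widens its upper bound resolves this.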
tensorflow/tensorflow
Fix dtype in code comment in constant_test.cc
Bug
The dtype should be changed from f8 to f16 at line number 85, as the function converts f16 to f32. Merging this closes issue #59748.
tensorflow/tensorflow
SimpleRNN doesn't appear to use its recurrent machinery
Bug
Click to expand:
Issue type: Bug
Have you reproduced the bug with tf-nightly? No
Source: binary
TensorFlow version: 2.7.0
Custom code: yes
OS platform and distribution: Ubuntu 20.04
Mobile device: No response
Python version: Python 3.9.15
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
The code generating the issue is reported here. Basically, I expected all the weights in my model to change when changing the number of epochs, while what I observe is that the state weight (in my simple code the state matrix reduces to a single weight) stays stuck at its initialization.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow_addons.layers import ESN
    from tensorflow_addons.rnn import ESNCell
    from tensorflow.keras.layers import RNN
    from tensorflow.keras.layers import SimpleRNN, SimpleRNNCell
    from sklearn.preprocessing import MinMaxScaler
    from tensorflow import random as rnd

    # fix the seed
    rnd.set_seed(0)
    # the data can be downloaded from
    data = np.loadtxt('MackeyGlass_t17.txt')
    # normalize
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled = scaler.fit_transform(data.reshape(-1, 1))
    # split dataset into train and test
    train, test = scaled[0:100], scaled[100:]
    # split into input and output
    train_x, train_y = train[:-1], train[1:]
    test_x, test_y = test[:-1], test[1:]
    # reshape
    train_x = train_x.reshape(train_x.shape[0], 1, train_x.shape[1])
    test_x = test_x.reshape(test_x.shape[0], 1, test_x.shape[1])
    # batch and epochs
    batch_size = 20
    epochs = 3
    # design and run the model
    model = Sequential()
    model.add(RNN(SimpleRNNCell(1)))
    # model.add(ESN(units=12, spectral_radius=spectral_radius, leaky=0.75, connectivity=0.9))
    # (the line above works exactly like the next one)
    model.add(RNN(ESNCell(12, spectral_radius=spectral_radius, leaky=0.75, connectivity=0.9)))
    model.add(Dense(train_y.shape[1]))
    model.compile(loss='huber', optimizer='adam')
    model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size,
              validation_data=(test_x, test_y), verbose=0, shuffle=False)
    # print the weights of the dense layer
    print(model.layers[1].get_weights())
    for layer in model.layers:
        print(layer.get_config(), layer.get_weights())
    for layer in model.layers:
        print(layer.get_weights())

Relevant log output:

If I run this code with 2 epochs I receive the following output:

    array([0.8942287], dtype=float32), array([1.], dtype=float32), array([0.05435111], dtype=float32),
    array([1.272426], dtype=float32), array([0.04711587], dtype=float32)

If I run this code with 3 epochs I receive the following output:

    array([0.89395165], dtype=float32), array([1.], dtype=float32), array([0.06734365], dtype=float32),
    array([1.2927996], dtype=float32), array([0.05247825], dtype=float32)

So the state weight (array([1.])) is the only one not changing.
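A quick way to make this kind of report precise is to diff the weight lists between runs and flag which arrays moved. A stdlib-only sketch; the numbers below are placeholders shaped like the log output above, not real training results, and the helper assumes flat lists of floats rather than NumPy arrays:

```python
def changed_weights(before, after, tol=1e-9):
    """Return one bool per weight array: did any element move by more than tol?"""
    return [any(abs(b - a) > tol for b, a in zip(bw, aw))
            for bw, aw in zip(before, after)]

# Placeholder weights mimicking the report: the middle (state) array is stuck.
run_2_epochs = [[0.8942287], [1.0], [0.05435111]]
run_3_epochs = [[0.89395165], [1.0], [0.06734365]]
print(changed_weights(run_2_epochs, run_3_epochs))  # [True, False, True]
```

Running a check like this over `model.get_weights()` before and after `fit` pins down exactly which variable the optimizer is not touching.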
tensorflow/tensorflow
inconsistent selection of optimizer
Bug
Click to expand:
Issue type: Bug
Have you reproduced the bug with tf-nightly? No
Source: binary
TensorFlow version: 2.11
Custom code: no
OS platform and distribution: Ubuntu
Mobile device: No response
Python version: 3.8
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
I am upgrading to TensorFlow 2.11 from 2.6. It seems Keras now has multiple classes of optimizers; two of them are optimizer_v2 and optimizer_experimental. It is not clear in TF 2.11 which optimizer class is meant to be used as the default. When trying to instantiate an optimizer directly in Python, we get optimizer_experimental; however, when initiating it from a config, it returns optimizer_v2. We are trying to log the current learning rate as training progresses, and the different interfaces of the two optimizer classes cause things to break. We can check the class type when getting the learning rate from the optimizer object as a workaround, but it would be nice to have consistent default behavior.

Standalone code to reproduce the issue:

    import tensorflow.keras.optimizers as optimizers
    type(optimizers.Adam)
    # keras.optimizers.optimizer_experimental.adam.Adam

    cfg = '''
    class_name: Adam
    config:
      beta_1: 0.9
      beta_2: 0.99
      epsilon: 1.0e-05
      learning_rate:
        class_name: ExponentialDecay
        config:
          decay_rate: 1.0
          decay_steps: 10000
          initial_learning_rate: 0.01
    '''
    import yaml
    cfg = yaml.safe_load(cfg)
    opt = optimizers.get(cfg)
    type(opt)
    # keras.optimizers.optimizer_v2.adam.Adam

Relevant log output: No response
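The inconsistency boils down to a lookup table that disagrees with the module attribute. A pure-Python sketch of the mismatch the reporter describes; both classes and the registry are hypothetical stand-ins, not the actual Keras internals:

```python
# Stand-ins for the two Keras optimizer families named in the report.
class AdamV2:            # plays the role of optimizer_v2.adam.Adam
    pass

class AdamExperimental:  # plays the role of optimizer_experimental.adam.Adam
    pass

# Direct attribute access resolves to one class...
Adam = AdamExperimental

# ...while the config-based factory consults a separate registry.
_registry = {"Adam": AdamV2}

def get(config):
    """Hypothetical analogue of optimizers.get(cfg)."""
    return _registry[config["class_name"]]()

opt = get({"class_name": "Adam"})
print(type(opt) is Adam)   # False: the two code paths return different classes
print(type(opt).__name__)  # AdamV2
```

Any code that branches on the optimizer's concrete type (for example, to read the learning rate) then behaves differently depending on which construction path was used, which is exactly the breakage described above.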
tensorflow/tensorflow
Why can my code run on tensorflow-gpu but not on tensorflow-cpu?
Bug
Click to expand:
Issue type: Bug
Have you reproduced the bug with tf-nightly? Yes
Source: source
TensorFlow version: 2.6
Custom code: yes
OS platform and distribution: Windows 10
Mobile device: No response
Python version: 3.8
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
My code can run on tensorflow-gpu 2.6.0 but can't run on tensorflow-cpu 2.6.0. They use the same code and dataset. Why? The error is:

    Traceback (most recent call last):
      File "C:\Users\dh\PycharmProjects\four\model_test.py", line 53, in <module>
        movie_combine_layer_flat_val = movie_layer_model([np.reshape(item.take(0), [1, 1]), title, actor, category])
      File "C:\Users\dh\anaconda3\envs\test\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
        raise e.with_traceback(filtered_tb) from None
      File "C:\Users\dh\AppData\Roaming\Python\Python39\site-packages\tensorflow\python\framework\ops.py", line 7215, in raise_from_not_ok_status
        raise core._status_to_exception(e) from None  # pylint: disable=protected-access
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Exception encountered when calling layer "movie_actor_embed_layer" (type Embedding).
    {{function_node __wrapped__ResourceGather_device_/job:localhost/replica:0/task:0/device:CPU:0}} indices[0,3] = 2010 is not in [0, 2006) [Op:ResourceGather]
    Call arguments received by layer "movie_actor_embed_layer" (type Embedding): input: tf.Tensor(shape=(1, 30), dtype=int32)

It looks like a matrix boundary problem, but it doesn't happen on tensorflow-gpu 2.6.0.

Standalone code to reproduce the issue:

    for item in movies.values:
        title = np.zeros([1, title_count])
        title[0] = item.take(1)
        actor = np.zeros([1, 30])
        actor[0] = item.take(2)
        category = np.zeros([1, 10])
        category[0] = item.take(3)
        movie_combine_layer_flat_val = movie_layer_model([np.reshape(item.take(0), [1, 1]), title, actor, category])
        movie_matrics.append(movie_combine_layer_flat_val)

Relevant log output: (the same traceback as shown above)
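Independently of the CPU/GPU difference, the error itself is a plain out-of-range lookup: the embedding table has 2006 rows but index 2010 is requested (on GPU, out-of-range gather indices are typically not checked, which is why the same code appears to work there). A stdlib-only sketch of the check that fails on CPU; the vocabulary size is taken from the error message, and the helper and sample row are hypothetical:

```python
def out_of_range_indices(indices, vocab_size):
    """Return (position, index) pairs that fall outside [0, vocab_size)."""
    return [(pos, idx) for pos, idx in enumerate(indices)
            if not 0 <= idx < vocab_size]

row = [12, 407, 3, 2010]                 # placeholder actor ids; the last one is bad
print(out_of_range_indices(row, 2006))   # [(3, 2010)]
```

Scanning the input data with a check like this before training usually reveals that the vocabulary size passed to the Embedding layer is smaller than the largest id in the dataset.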
tensorflow/tensorflow
Found a minor error in a comment
Bug
L85: f8 might need to be f16 in this comment.
tensorflow/tensorflow
tensorflow-cpu-aws 2.12.0rc0 only has the Python 3.11 wheel
Bug
Issue type: Bug
Have you reproduced the bug with TF nightly? No
Source: binary
TensorFlow version: 2.12.0rc0
Custom code: No
OS platform and distribution: aarch64
Mobile device: No response
Python version: 3.10
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
See the files: there is only one file.

Standalone code to reproduce the issue:
Just go to the files, or reproduce using pip on aarch64 and Python 3.10:

pip install tensorflow==2.12.0rc0

Relevant log output:
Collecting tensorflow==2.12.0rc0
  Downloading tensorflow-2.12.0rc0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (2.0 kB)
ERROR: Could not find a version that satisfies the requirement tensorflow-cpu-aws==2.12.0-rc0; platform_system == "Linux" and (platform_machine == "arm64" or platform_machine == "aarch64") (from tensorflow) (from versions: 2.9.1, 2.10.0rc0, 2.10.0rc2, 2.10.0rc3, 2.10.0, 2.10.1, 2.11.0rc0, 2.11.0rc1, 2.11.0rc2, 2.11.0)
ERROR: No matching distribution found for tensorflow-cpu-aws==2.12.0-rc0; platform_system == "Linux" and (platform_machine == "arm64" or platform_machine == "aarch64")
tensorflow/tensorflow
tensor.ndim does not work inside the tf.function decorator
Bug
TensorFlow version: v2.11.0-rc2-17-gd5b57ca93e5 2.11.0
Custom code: Yes
OS platform and distribution: Google Colab

Standalone code to reproduce the issue:

import tensorflow as tf

tensor = tf.random.normal((32, 128, 128, 3))  # some images
print("works here:", tensor.ndim)

@tf.function
def some_function(image):
    assert image.ndim == 4  # does not work here
    # ... also some complex code ...
    print("does not work here:", image.ndim)

some_function(tensor)

Relevant log output:
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
     10     print("does not work here:", image.ndim)
     11
---> 12 some_function(tensor)

1 frames
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py in error_handler(*args, **kwargs)
    151     except Exception as e:
    152       filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153       raise e.with_traceback(filtered_tb) from None
    154     finally:
    155       del filtered_tb

/tmp/__autograph_generated_filesflyrml3.py in tf__some_function(image)
      6   def tf__some_function(image):
      7     with ag__.FunctionScope('some_function', 'fscope', ag__.ConversionOptions(recursive=True, user_requested=True, optional_features=(), internal_convert_user_code=True)) as fscope:
----> 8       ag__.ld(print)('does not work here:', ag__.ld(image).ndim)
      9     return tf__some_function
     10   return inner_factory

AttributeError: in user code:

    File "<ipython-input>", line 10, in some_function
        print("does not work here:", image.ndim)

    AttributeError: 'Tensor' object has no attribute 'ndim'
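A minimal sketch of the usual workaround (this is an assumption about intent, not code from the report): inside tf.function the argument is a symbolic Tensor, so the NumPy-style .ndim attribute is unavailable; the static rank is exposed as image.shape.rank, and the dynamic rank via the tf.rank op.

```python
import tensorflow as tf

# shape.rank gives the static rank and works on symbolic tensors during
# tracing, unlike the NumPy-style .ndim attribute from the report.
@tf.function
def some_function(image):
    assert image.shape.rank == 4  # static rank check at trace time
    return tf.reduce_mean(image)

out = some_function(tf.random.normal((32, 128, 128, 3)))
```

If the rank is genuinely unknown at trace time, tf.rank(image) returns it as a runtime tensor instead.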
tensorflow/tensorflow
tf.distribute looks for libcublasLt.so.10 when the installed CUDA version is 11.6
Bug
click to expand issue type bug have you reproduce the bug with tf nightly no source binary tensorflow version 2 8 2 11 custom code yes os platform and distribution rhel 7 9 mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version 11 6 8 4 gpu model and memory no response current behaviour shell when use tf multi worker the chief node look for libcublaslt from cuda 10 2 while the load cuda version be 11 6 the expect behaviour be that tensorflow should be use libcublaslt from the currently available cuda version rather than 10 2 the worker node exit with a communication error connection reset by peer standalone code to reproduce the issue shell train the model relevant log output shell 2023 02 16 09 26 37 626920 I tensorflow core platform cpu feature guard cc 151 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 avx512f fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 02 16 09 26 38 332105 I tensorflow core common runtime gpu gpu device cc 1525 create device job localhost replica 0 task 0 device gpu 0 with 11641 mb memory device 0 name tesla v100 pcie 16 gb pci bus i d 0000 3b 00 0 compute capability 7 0 2023 02 16 09 26 38 334955 I tensorflow core common runtime gpu gpu device cc 1525 create device job worker replica 0 task 0 device gpu 0 with 11641 mb memory device 0 name tesla v100 pcie 16 gb pci bus i d 0000 3b 00 0 compute capability 7 0 2023 02 16 09 26 38 341938 I tensorflow core distribute runtime rpc grpc channel cc 272 initialize grpcchannelcache for job worker 0 10 149 255 254 12345 1 10 149 0 4 23456 2023 02 16 09 26 38 342153 I tensorflow core distribute runtime rpc grpc server lib cc 437 start server with target grpc 10 149 255 254 12345 2023 02 16 09 26 44 265969 w tensorflow core grappler optimizer datum auto shard cc 776 auto sharding 
policy will apply datum sharde policy as it fail to apply file sharde policy because of the follow reason find an unshardable source dataset name tensorslicedataset 2 op tensorslicedataset input placeholder 0 input placeholder 1 attr key toutput type value list type dt float type dt int64 attr key cardinality value I 60000 attr key be file value b false attr key metadata value s n 024tensorslicedataset 0 attr key output shape value list shape dim size 28 dim size 28 shape experimental type type i d tft product args type i d tft dataset args type i d tft product args type i d tft tensor args type i d tft float args type i d tft tensor args type i d tft int64 args type i d tft dataset args type i d tft product args type i d tft tensor args type i d tft float args type i d tft tensor args type i d tft int64 epoch 1 3 could not load library libcublaslt so 10 error libcublaslt so 10 can not open share object file no such file or directory
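A hedged diagnostic sketch (not from the report): tf.sysconfig.get_build_info() reports which CUDA toolkit the installed TensorFlow binary was compiled against; a mismatch with the CUDA version loaded on the machine would explain the binary probing for the older libcublasLt.so.10.

```python
import tensorflow as tf

# Inspect the build metadata of the installed TensorFlow binary; the keys
# shown here are looked up defensively since CPU-only builds may omit them.
info = tf.sysconfig.get_build_info()
print("CUDA build:", info.get("is_cuda_build"))
print("built against CUDA:", info.get("cuda_version"))
```

Comparing the reported CUDA version with the toolkit actually on the worker nodes is a reasonable first step before digging into the gRPC connection reset.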
tensorflow/tensorflow
Video CNN tutorial
Bug
Issue type: Bug
Have you reproduced the bug with TF nightly? Yes
Source: source
TensorFlow version: TF 2.11.0
Custom code: No
OS platform and distribution: No response
Mobile device: No response
Python version: 3.9
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
I do not know if this belongs here. I was trying to work through your "Video classification with a 3D convolutional neural network" tutorial. Going through the "Load video data" portion, I keep coming across an error that I cannot resolve:

  line 102, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Attempt to convert a value (None) with an unsupported type to a Tensor.

This is the code step:

ucf_sample_video = frames_from_video_file(next(subset_paths['train'].glob('*.avi')), 50)

The isolated video_path argument in the tutorial works; it just seems to be this assignment, and I am not sure how to fix it. I have tried to work through it with little success, and end up getting the same error farther down in the tutorial with:

for frames, labels in train_ds.take(10):
    print(labels)

I apologize if this is not where I should be posting questions about the tutorial.

Standalone code to reproduce the issue:
I have copied and pasted everything that you see in your tutorial.

Relevant log output: No response
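One common cause of this symptom is that the glob matches no files, so next() either raises or downstream code ends up passing None where a tensor is expected. A small sketch of a defensive check (the directory name and pattern here are assumptions, not the tutorial's exact paths):

```python
import pathlib

# Check that the glob actually matches video files before calling next()
# on the iterator; an empty match is a likely source of the None value.
subset_path = pathlib.Path("UCF101_subset/train")
videos = sorted(subset_path.glob("*/*.avi"))
if not videos:
    print("no .avi files found under", subset_path)
```

Listing the matches first also makes it easy to confirm the download and extraction steps earlier in the tutorial actually produced files.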
tensorflow/tensorflow
test issue
Bug
click to expand issue type bug have you reproduce the bug with tf nightly yes source source tensorflow version 2 11 custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell a bug happen standalone code to reproduce the issue shell sample issue relevant log output no response
tensorflow/tensorflow
Unit test tensorflow/tsl/framework/convolution:spatial_convolution_test fails to build
Bug
click to expand issue type bug have you reproduce the bug with tf nightly yes source source tensorflow version git head custom code no os platform and distribution rhel 8 7 mobile device n a python version 3 9 13 bazel version 5 3 0 gcc compiler version 10 3 0 cuda cudnn version n a gpu model and memory n a current behaviour shell when build for aarch64 the unit test tensorflow tsl framework convolution spatial convolution test fail to build with tensorflow tsl framework convolution eigen spatial convolution inl h 1490 27 error static assertion fail you make a programming mistake standalone code to reproduce the issue shell bazel test test timeout 300 500 1 1 flaky test attempt 3 test output all cache test result no noremote accept cache config mkl aarch64 threadpool copt mtune generic copt march armv8 a copt o3 build tag filter no oss oss serial gpu tpu benchmark test v1only no aarch64 require gpu test tag filter no oss oss serial gpu tpu benchmark test v1only no aarch64 require gpu verbose failure build test only tensorflow tsl framework convolution spatial convolution test relevant log output shell error home andrew src tensorflow tensorflow tsl framework convolution build 99 12 compile tensorflow tsl framework convolution eigen spatial convolution test cc fail exit 1 gcc fail error execute command cd home andrew cache bazel bazel andrew c61c5f84d239689cb19a72cfde16be9f execroot org tensorflow exec env ld library path opt rh gcc toolset 10 root usr lib64 opt rh gcc toolset 10 root usr lib opt rh gcc toolset 10 root usr lib64 dyninst opt rh gcc toolset 10 root usr lib dyninst opt rh gcc toolset 10 root usr lib64 opt rh gcc toolset 10 root usr lib path home andrew cache bazelisk download bazelbuild bazel 5 3 0 linux arm64 bin home andrew local bin home andrew bin usr share module bin usr local bin usr bin usr local sbin usr sbin pwd proc self cwd python bin path home andrew src venv38 bin python3 python lib path home andrew src venv38 lib python3 8 site package 
tf2 behavior 1 opt rh gcc toolset 10 root usr bin gcc u fortify source fstack protector wall wunuse but set parameter wno free nonheap object fno omit frame pointer g0 o2 d fortify source 1 dndebug ffunction section fdata section std c 0x md mf bazel out aarch64 opt bin tensorflow tsl framework convolution objs spatial convolution test eigen spatial convolution test d frandom seed bazel out aarch64 opt bin tensorflow tsl framework convolution objs spatial convolution test eigen spatial convolution test o deigen mpl2 only deigen max align byte 64 dtensorflow use custom contraction kernel dgemm kernel h deigen altivec use custom pack 0 deigen neon gebp nr 4 iquote iquote bazel out aarch64 opt bin iquote external com google absl iquote bazel out aarch64 opt bin external com google absl iquote external eigen archive iquote bazel out aarch64 opt bin external eigen archive iquote external nsync iquote bazel out aarch64 opt bin external nsync iquote external double conversion iquote bazel out aarch64 opt bin external double conversion iquote external com google googlet iquote bazel out aarch64 opt bin external com google googlet iquote external com google benchmark iquote bazel out aarch64 opt bin external com google benchmark iquote external com google protobuf iquote bazel out aarch64 opt bin external com google protobuf iquote external zlib iquote bazel out aarch64 opt bin external zlib iquote external bazel tool iquote bazel out aarch64 opt bin external bazel tool ibazel out aarch64 opt bin external com google benchmark virtual include benchmark isystem third party eigen3 mkl include isystem bazel out aarch64 opt bin third party eigen3 mkl include isystem external eigen archive isystem bazel out aarch64 opt bin external eigen archive isystem external nsync public isystem bazel out aarch64 opt bin external nsync public isystem external com google googletest googlemock isystem bazel out aarch64 opt bin external com google googletest googlemock isystem external com 
google googlet googlemock include isystem bazel out aarch64 opt bin external com google googlet googlemock include isystem external com google googlet googlet isystem bazel out aarch64 opt bin external com google googlet googlet isystem external com google googlet googlet include isystem bazel out aarch64 opt bin external com google googlet googlet include isystem external com google protobuf src isystem bazel out aarch64 opt bin external com google protobuf src isystem external zlib isystem bazel out aarch64 opt bin external zlib wno all wno extra wno deprecate wno deprecate declaration wno ignore attribute wno array bound wunuse result werror unused result wswitch werror switch wno error unused but set variable dautoload dynamic kernel mtune generic march armv8 a o3 std c 17 fno canonical system header wno builtin macro redefine d date redact d timestamp redact d time redact c tensorflow tsl framework convolution eigen spatial convolution test cc o bazel out aarch64 opt bin tensorflow tsl framework convolution objs spatial convolution test eigen spatial convolution test o configuration 67e3477bbfd3aa6df692c90e4aaaf7a6ee0f55b121a5556fe852592ce2c633e2 execution platform local execution config platform platform in file include from external eigen archive unsupported eigen cxx11 eigen core 162 from external eigen archive unsupported eigen cxx11 tensor 14 from third party eigen3 unsupported eigen cxx11 tensor 1 from tensorflow tsl framework convolution eigen spatial convolution h 19 from tensorflow tsl framework convolution eigen spatial convolution test cc 16 tensorflow tsl framework convolution eigen spatial convolution inl h in instantiation of struct eigen internal gemm pack rhs const eigen tensorimagepatchop 1 1 eigen tensormap 16 eigen makepointer eigen defaultdevice std array std array 1 true false 0 eigen makepointer 1 0 false false tensorflow tsl framework convolution eigen spatial convolution test cc 924 15 require from void eigen packrhshelper benchmark 
state int int int int int int int eigen paddingtype int int int int eigen index eigen index with t eigen qint8 eigen index long int tensorflow tsl framework convolution eigen spatial convolution test cc 1375 1 require from here tensorflow tsl framework convolution eigen spatial convolution inl h 1490 27 error static assertion fail you make a programming mistake 1490 eigen static assert nr 4 you make a programming mistake target tensorflow tsl framework convolution spatial convolution test fail to build info elapse time 9 629s critical path 9 11 info 3 process 2 internal 1 local fail build do not complete successfully tensorflow tsl framework convolution spatial convolution test fail to build fail build do not complete successfully
tensorflow/tensorflow
protobuf 4 causes a segmentation fault on Python 3.8 in unit tests
Bug
click to expand issue type bug have you reproduce the bug with tf nightly yes source source tensorflow version git head custom code no os platform and distribution cento 7 mobile device n a python version 3 8 13 bazel version 5 3 0 gcc compiler version 10 3 0 cuda cudnn version n a gpu model and memory n a current behaviour shell unit test tensorflow dtensor python test spmd test cpu fail when run with python 3 8 and protobuf 4 be instal instal protobuf 3 20 3 resolve the issue standalone code to reproduce the issue shell bazel test config mkl aarch64 threadpool test env tf enable onednn opt 1 cache test result no test timeout 500 900 1 1 copt mtune generic copt march armv8 a copt o3 build tag filter no oss oss serial gpu tpu benchmark test v1only no aarch64 require gpu test tag filter no oss oss serial gpu tpu benchmark test v1only no aarch64 require gpu build test only job 100 tensorflow dtensor python test spmd test cpu relevant log output shell fatal python error segmentation fault current thread 0x0000ffffb7906370 most recent call first file home andrew cache bazel bazel andrew c61c5f84d239689cb19a72cfde16be9f execroot org tensorflow bazel out aarch64 opt bin tensorflow dtensor python test spmd test cpu runfiles org tensorflow tensorflow python eager context py line 1108 in config file home andrew cache bazel bazel andrew c61c5f84d239689cb19a72cfde16be9f execroot org tensorflow bazel out aarch64 opt bin tensorflow dtensor python test spmd test cpu runfiles org tensorflow tensorflow python eager context py line 568 in ensure initialize file home andrew cache bazel bazel andrew c61c5f84d239689cb19a72cfde16be9f execroot org tensorflow bazel out aarch64 opt bin tensorflow dtensor python test spmd test cpu runfiles org tensorflow tensorflow python eager context py line 1401 in remove function file home andrew cache bazel bazel andrew c61c5f84d239689cb19a72cfde16be9f execroot org tensorflow bazel out aarch64 opt bin tensorflow dtensor python test spmd test cpu 
runfiles org tensorflow tensorflow python eager context py line 2739 in remove function file home andrew cache bazel bazel andrew c61c5f84d239689cb19a72cfde16be9f execroot org tensorflow bazel out aarch64 opt bin tensorflow dtensor python test spmd test cpu runfiles org tensorflow tensorflow python eager polymorphic function monomorphic function py line 172 in del receive signal 11 begin mangle stack trace home andrew cache bazel bazel andrew c61c5f84d239689cb19a72cfde16be9f execroot org tensorflow bazel out aarch64 opt bin tensorflow dtensor python test spmd test cpu runfiles org tensorflow tensorflow python platform solib aarch64 u s stensorflow clibtensorflow uframework uimport ulib utensorflow libtensorflow framework so 2 0x15ae14c 0xffff145de14c linux vdso so 1 kernel rt sigreturn 0x0 0xffffb78b07a0 lib64 libpthread so 0 raise 0xac 0xffffb71b2af4 linux vdso so 1 kernel rt sigreturn 0x0 0xffffb78b07a0 lib64 libpython3 8 so 1 0 pymodule getstate 0x4 0xffffb72f9a3c home andrew src venv38 lib64 python3 8 site package google upb message abi3 so 0xa390 0xffff1527a390 home andrew src venv38 lib64 python3 8 site package google upb message abi3 so 0x13c9c 0xffff15283c9c lib64 libpython3 8 so 1 0 pyobject maketpcall 0x1a8 0xffffb72ed9c0 lib64 libpython3 8 so 1 0 pyeval evalframedefault 0x53f4 0xffffb73c1114 lib64 libpython3 8 so 1 0 pyeval evalcodewithname 0xc8c 0xffffb7371fe4 lib64 libpython3 8 so 1 0 pyfunction vectorcall 0x474 0xffffb73734b4 lib64 libpython3 8 so 1 0 0x12662c 0xffffb734662c lib64 libpython3 8 so 1 0 pyobject getattr 0x27c 0xffffb7361e4c lib64 libpython3 8 so 1 0 pyeval evalframedefault 0xa08 0xffffb73bc728 lib64 libpython3 8 so 1 0 pyfunction vectorcall 0x1d0 0xffffb7373210 lib64 libpython3 8 so 1 0 pyeval evalframedefault 0x884 0xffffb73bc5a4 lib64 libpython3 8 so 1 0 pyfunction vectorcall 0x1d0 0xffffb7373210 lib64 libpython3 8 so 1 0 pyeval evalframedefault 0x884 0xffffb73bc5a4 lib64 libpython3 8 so 1 0 pyfunction vectorcall 0x1d0 0xffffb7373210 
lib64 libpython3 8 so 1 0 pyeval evalframedefault 0x4de8 0xffffb73c0b08 lib64 libpython3 8 so 1 0 pyfunction vectorcall 0x1d0 0xffffb7373210 lib64 libpython3 8 so 1 0 0x133640 0xffffb7353640 lib64 libpython3 8 so 1 0 0x1f7248 0xffffb7417248 lib64 libpython3 8 so 1 0 0xcc72c 0xffffb72ec72c lib64 libpython3 8 so 1 0 pygc collectnofail 0x38 0xffffb745f060 lib64 libpython3 8 so 1 0 pyimport cleanup 0x394 0xffffb745f40c lib64 libpython3 8 so 1 0 py finalizeex 0x6c 0xffffb7462c34 lib64 libpython3 8 so 1 0 py exit 0x14 0xffffb72cb01c lib64 libpython3 8 so 1 0 0xab060 0xffffb72cb060 lib64 libpython3 8 so 1 0 0xab0b8 0xffffb72cb0b8 lib64 libpython3 8 so 1 0 pyrun simplefileexflag 0x3c4 0xffffb72cbac0 lib64 libpython3 8 so 1 0 py runmain 0x2b8 0xffffb74645d0 lib64 libpython3 8 so 1 0 py bytesmain 0x3c 0xffffb7464d1c lib64 libc so 6 libc start main 0xdc 0xffffb6f14384 home andrew src venv38 bin python3 0x928 0xaaaab41c0928 end mangle stack trace begin stack trace tsl currentstacktrace abi cxx11 kernel rt sigreturn raise kernel rt sigreturn pymodule getstate pyobject maketpcall pyeval evalframedefault pyeval evalcodewithname pyfunction vectorcall pyobject getattr pyeval evalframedefault pyfunction vectorcall pyeval evalframedefault pyfunction vectorcall pyeval evalframedefault pyfunction vectorcall pyeval evalframedefault pyfunction vectorcall pygc collectnofail pyimport cleanup py finalizeex py exit pyrun simplefileexflag py runmain py bytesmain libc start main end stack trace
tensorflow/tensorflow
tensorflow-datasets 3.1.0: ValueError(_NAME_STR_ERR.format(name_str)) when loading a huggingface dataset
Bug
Issue type: Bug
Have you reproduced the bug with TF nightly? Yes
Source: binary
TensorFlow version: 2.11.0
Custom code: No
OS platform and distribution: macOS
Mobile device: No response
Python version: 3.8, 3.9, 3.10
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
ValueError(_NAME_STR_ERR.format(name_str)) when loading a huggingface dataset via tensorflow_datasets.load.

Standalone code to reproduce the issue:

import tensorflow_datasets
data = tensorflow_datasets.load('huggingface:glue/mrpc')

Relevant log output:
$ pip show tensorflow-datasets
Name: tensorflow-datasets
Version: 3.1.0
Summary: tensorflow/datasets is a library of datasets ready to use with TensorFlow.
Home-page:
Author: Google Inc.
Author-email:
License: Apache 2.0

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/user/local/conda/envs/tf_base/lib/python3.8/site-packages/tensorflow_datasets/core/api_utils.py", line 69, in disallow_positional_args_dec
    return fn(*args, **kwargs)
  File "/user/local/conda/envs/tf_base/lib/python3.8/site-packages/tensorflow_datasets/core/registered.py", line 356, in load
    name, builder_kwargs = _dataset_name_and_kwargs_from_name_str(name)
  File "/user/local/conda/envs/tf_base/lib/python3.8/site-packages/tensorflow_datasets/core/registered.py", line 391, in _dataset_name_and_kwargs_from_name_str
    raise ValueError(_NAME_STR_ERR.format(name_str))
ValueError: Parsing builder name string 'huggingface:glue/mrpc' failed.
The builder name string must be of the following format:
  dataset_name[/config_name][:version][/kwargs]
Where:
  * dataset_name and config_name are strings following python variable naming.
  * version is of the form x.y.z, where {x,y,z} can be any digit or *.
  * kwargs is a comma list separated of arguments and values to pass to builder.
Examples:
  my_dataset
  my_dataset:1.2.*
  my_dataset/config1
  my_dataset/config1:1.*.*
  my_dataset/config1/arg1=val1,arg2=val2
  my_dataset/config1:1.2.3/right=True,foo=bar,rate=1.2
tensorflow/tensorflow
Change in reproducibility behavior of GlorotUniform between TF 2.6 and 2.11
Bug
Issue type: Bug
Have you reproduced the bug with TF nightly? Yes
Source: binary
TensorFlow version: 2.11
Custom code: No
OS platform and distribution: Linux Ubuntu 20.04.5 LTS
Mobile device: No response
Python version: 3.8
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
With TF 2.6 the following code always produces the same result, but it produces varying results in TF 2.11:

import tensorflow as tf

tf.random.set_seed(42)
initializer = tf.keras.initializers.GlorotUniform()
initializer(shape=(2, 2))

Standalone code to reproduce the issue: (as above)
Relevant log output: No response
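A sketch of a version-independent workaround (an assumption about intent, not code from the report): seeding the initializer itself, rather than relying on how the global seed interacts with Keras internals, makes each draw reproducible across runs.

```python
import tensorflow as tf

# Two initializer instances constructed with the same per-instance seed
# should produce the same draw, independent of global-seed behavior changes.
a = tf.keras.initializers.GlorotUniform(seed=42)(shape=(2, 2))
b = tf.keras.initializers.GlorotUniform(seed=42)(shape=(2, 2))
print(bool(tf.reduce_all(a == b)))
```

This sidesteps the question of which TF version's global-seed semantics apply, at the cost of seeding each initializer explicitly.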
tensorflow/tensorflow
Segmentation fault during inference on Google Coral Dev Board Mini
Bug
1. System information: Mendel Linux; pycoral package built in for the Coral Dev Board Mini.
2. Code: model training (link); TFLite conversion (link, scrollto usykpoui0qru).

TFLite conversion code snippet:

# Load the HDF5 model
model = tf.keras.models.load_model('lung_segmentation_80epochs_avg_pool_700kparam_aug.hdf5',
                                   custom_objects={'dice_coef_loss': dice_coef_loss, 'dice_coef': dice_coef})
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quant = converter.convert()
# Save the quantized model to disk
with open('new_model_quant.tflite', 'wb') as f:
    f.write(tflite_model_quant)

3. Failure after conversion: when the model is used for inference on the Google Coral Dev Board Mini, a segmentation fault occurs:

python3 semantic_segmentation.py --model lung_segmentation_quant.tflite --input test.png --keep_aspect_ratio --output /home/segmentation_result.jpg
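The snippet above references a representative_data_gen that is not defined in the report. A hypothetical stand-in is sketched below; the 224x224x3 input shape and sample count are assumptions, and in practice the generator should yield real preprocessed images matching the model's input, since random data gives poor calibration ranges.

```python
import numpy as np

# Hypothetical representative-dataset generator for full-integer calibration:
# it must yield lists of float32 arrays shaped like the model's input.
def representative_data_gen():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

sample = next(representative_data_gen())
```

A generator yielding the wrong shape or dtype is a common cause of converter and runtime failures, so it is worth checking before suspecting the device.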
tensorflow/tensorflow
iOS TFLiteBenchmark compile failure
Bug
Issue type: Bug
Have you reproduced the bug with TF nightly? Yes
Source: source
TensorFlow version: 2.10 or 2.11
Custom code: Yes
OS platform and distribution: iOS
Mobile device: iPhone
Python version: 3.7
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
A bug happened. I used bazel to compile for iOS; the file is tensorflow/tensorflow/lite/tools/benchmark/ios/build_benchmark_framework.sh. Error:

Traceback (most recent call last):
  File "tensorflow/tensorflow/core/platform/default/rules_cc.bzl", line 6, column 28
    cc_shared_library = native.cc_shared_library
Error: no native function or rule 'cc_shared_library'

Before this I ran TensorFlow's ./configure and also selected iOS.

Standalone code to reproduce the issue:
I want to know the reason, thank you.

Relevant log output: No response
tensorflow/tensorflow
tf.keras.callbacks.ModelCheckpoint initial_value_threshold not working; definition contains a mistake
Bug
Issue type: Documentation Bug
Have you reproduced the bug with TF nightly? No
Source: source
TensorFlow version: 2.3.0
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
The manual defines initial_value_threshold as: "Floating point initial 'best' value of the metric to be monitored. Only applies if save_best_value=True. Only overwrites the model weights already saved if the performance of the current model is better than this value." save_best_value is not defined; if save_best_only is meant, then it does not work like the manual says. I have no other idea how I should get initial_value_threshold to work; it still starts with inf.

Standalone code to reproduce the issue:

es = EarlyStopping(monitor='val_loss', verbose=1, patience=20)
tmpmodelfile = 'models/' + model_config['name'] + '/temp1_best_model.h5'
evaluation = model.evaluate(vd[feature_cols], vd[target], batch_size=batch_size, verbose=2, return_dict=True)
# 3231/3231 - 2s - loss: 0.0500 - mean_squared_error: 0.0500
mc = ModelCheckpoint(tmpmodelfile, monitor='val_loss', verbose=1, save_best_only=True,
                     initial_value_threshold=evaluation['loss'], save_best_value=True)
history = model.fit(td[feature_cols], td[target], validation_data=(vd[feature_cols], vd[target]),
                    batch_size=batch_size, epochs=epochs, verbose=2, callbacks=[es, mc])
# Epoch 1/400
# Epoch 00001: val_loss improved from inf to 0.04999, saving model to models

Relevant log output: No response
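For reference, a minimal sketch of the intended behavior on releases where the parameter exists (initial_value_threshold appears to have been added around TF 2.6, so on the reporter's 2.3.0 it would not be available): the threshold seeds the callback's internal best value instead of inf. The filename below is illustrative.

```python
import tensorflow as tf

# When supported, initial_value_threshold sets the callback's starting
# "best" value, so only improvements beyond it trigger a save.
mc = tf.keras.callbacks.ModelCheckpoint(
    "temp1_best_model.h5",
    monitor="val_loss",
    save_best_only=True,
    initial_value_threshold=0.05,
)
print(mc.best)
```

If mc.best still reports inf after construction, the installed version likely predates the parameter.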
tensorflow/tensorflow
tf.distribute.MultiWorkerMirroredStrategy program hangs when connecting with gRPC
Bug
Issue type: Bug
Have you reproduced the bug with TF nightly? No
Source: binary
TensorFlow version: tensorflow-aarch64 2.11.0
Custom code: No
OS platform and distribution: Ubuntu 22.10
Mobile device: No response
Python version: Python 3.10.8
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
The script hangs when defining the strategy, and not just for MultiWorkerMirrored but for any of the strategies from tf.distribute.

Standalone code to reproduce the issue:

def train_dense_model(batch_size):
    # limit imports outside the call to the function in order to launch quickly when using dask
    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import layers
    import json

    # model building
    tf.keras.backend.clear_session()  # for easy reset of notebook state
    slurm_resolver = tf.distribute.cluster_resolver.SlurmClusterResolver(port_base=15000)
    tf_config = json.dumps({'cluster': slurm_resolver.cluster_spec().as_dict()})
    os.environ['TF_CONFIG'] = tf_config
    communication_options = tf.distribute.experimental.CommunicationOptions(
        bytes_per_pack=50 * 1024 * 1024,
        timeout_seconds=120.0,
        implementation=tf.distribute.experimental.CommunicationImplementation.RING)
    mirrored_strategy = tf.distribute.MultiWorkerMirroredStrategy(
        cluster_resolver=slurm_resolver,
        communication_options=communication_options)

Relevant log output:
2023-02-05 19:36:27.162088: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:447] Started server with target: grpc://node1:15000
2023-02-05 19:36:38.280982: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:447] Started server with target: grpc://node2:15000
2023-02-05 19:36:38.691138: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:447] Started server with target: grpc://node4:15000
2023-02-05 19:36:40.504698: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:447] Started server with target: grpc://node3:15000

Accompanying script for sbatch (script.sh):

#!/bin/bash
#SBATCH --job-name=mnist-tf-distributed  # job name
#SBATCH --nodelist=node[01-04]           # number of nodes
#SBATCH --ntasks-per-node=1              # number of MPI tasks per node
#SBATCH --cpus-per-task=4                # since nodes have 4 CPUs
#SBATCH --distribution=block:block       # distribution; might be better to have contiguous blocks
#SBATCH --time=00:10:00                  # job length
#SBATCH --exclusive                      # we reserve the entire node for our job
#SBATCH --output=mnist_tf_distr_log_%j.out  # std out
#SBATCH --error=mnist_tf_distr_log_%j.out   # std err
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
set -x
cd ${SLURM_SUBMIT_DIR}
srun --nodelist=node[01-04] python mnist_example.py
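When debugging a hang like this, it can help to take the Slurm resolver out of the loop and hand-write the TF_CONFIG each worker would see. The sketch below only builds the environment variable (node names and port are assumptions); it deliberately does not construct the strategy, since that is the step that hangs in the report.

```python
import json
import os

# A minimal hand-written TF_CONFIG for two workers; in the report this
# cluster dict comes from SlurmClusterResolver.cluster_spec().as_dict().
tf_config = {
    "cluster": {"worker": ["node1:15000", "node2:15000"]},
    "task": {"type": "worker", "index": 0},
}
os.environ["TF_CONFIG"] = json.dumps(tf_config)
```

Printing the dict on every rank confirms that each task sees the same cluster spec and a distinct task index, which MultiWorkerMirroredStrategy requires before its collective handshake can complete.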
tensorflow/tensorflow
ValueError: Invalid model, at calibration_wrapper.AddIntermediateTensors(model_content)
Bug
1 system information os platform and distribution e g linux ubuntu 16 04 ubuntu 20 04 tensorflow installation pip package or build from source pip install tensorflow library version if pip package or github sha if build from source 2 11 2 description when I try to convert diffusion model into tflite with integer only I get the follow error traceback most recent call last file main py line 37 in main file main py line 29 in main tflite model converter convert file home minkyukim project test venv lib python3 8 site package tensorflow lite python lite py line 933 in wrapper return self convert and export metric convert func args kwargs file home minkyukim project test venv lib python3 8 site package tensorflow lite python lite py line 911 in convert and export metric result convert func self args kwargs file home minkyukim project test venv lib python3 8 site package tensorflow lite python lite py line 1216 in convert return self convert from save model graph def file home minkyukim project test venv lib python3 8 site package tensorflow lite python lite py line 1100 in convert from save model return self optimize tflite model file home minkyukim project test venv lib python3 8 site package tensorflow lite python convert phase py line 215 in wrapper raise error from none re throw the exception file home minkyukim project test venv lib python3 8 site package tensorflow lite python convert phase py line 205 in wrapper return func args kwargs file home minkyukim project test venv lib python3 8 site package tensorflow lite python lite py line 871 in optimize tflite model model self quantize model q in type q out type q activation type file home minkyukim project test venv lib python3 8 site package tensorflow lite python lite py line 608 in quantize result calibrator add intermediate tensor result file home minkyukim project test venv lib python3 8 site package tensorflow lite python optimize calibrator py line 36 in add intermediate tensor return calibration wrapper 
AddIntermediateTensors(model_content)
ValueError: Invalid model

If I modify the 17th line of main.py to

model = DiffusionModelV2Test(img_height=height, img_width=width, max_text_length=77)

it generates the tflite output without error. DiffusionModelV2Test is a model that I modified to make smaller: the original model contains 3 downsampling and 3 upsampling blocks, but DiffusionModelV2Test contains only 1 of each. Also, if I comment out lines 25-28, which makes the conversion a weights-only quantization, it generates the tflite without error as well. So my questions are: why does this error occur, and, regarding the workaround with DiffusionModelV2Test, is there any limitation on the size of the saved model when using calibration? FYI, the model I am testing is the diffusion model originally from keras_cv; to make it convertible into tflite I only modified keras.layers.Input with the batch size specified as an argument.

3. Code: reproducer gist
tensorflowtensorflow
val_loss is very different from training loss when measured on the same data
Bug
Issue type: bug; have you reproduced the bug with tf-nightly: yes; source: source; TensorFlow version: 2.12.0-dev20221213; custom code: yes; OS platform and distribution: Linux Ubuntu 18.04; Python version: 3.7-3.9
Current behaviour: when fine-tuning a keras.applications model on MNIST and using the same data for training and validation (for debugging), the validation loss is much higher than the training loss. P.S. I don't believe overfitting is relevant because the training and validation data are the same. I know certain layers like Dropout and BatchNormalization have different behavior during training and validation, but the difference seems too large for this to account for it.
Standalone code to reproduce the issue:
  import numpy as np
  np.random.seed(1337)  # for reproducibility
  from tensorflow.keras.datasets import mnist
  from tensorflow.keras.models import Sequential
  import tensorflow.keras.layers as layers
  from tensorflow.keras.optimizers import SGD, Adam, RMSprop
  import tensorflow as tf

  batch_size = 32
  nb_classes = 10
  nb_epoch = 10

  # load data
  (X_train, y_train), (X_test, y_test) = mnist.load_data()
  del X_test
  del y_test

  # subsample for memory
  idx = np.random.choice(X_train.shape[0], 1000, replace=0)
  X_train = X_train[idx]
  y_train = y_train[idx]

  # preprocess images
  tmp = []
  for i in range(X_train.shape[0]):
      sample = X_train[i]
      sample = tf.expand_dims(sample, axis=-1)
      sample = tf.image.grayscale_to_rgb(sample)
      sample = tf.image.resize(sample, (224, 224))
      sample = sample / 255
      tmp.append(sample)
  X_train = np.stack(tmp)

  # preprocess labels
  y_train = tf.one_hot(y_train, nb_classes).numpy()

  # define model
  conv = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
  for layer in conv.layers:
      layer.trainable = True
  input = layers.Input((224, 224, 3))
  x = conv(input)
  x = layers.AveragePooling2D((7, 7))(x)
  x = layers.Flatten()(x)
  output = layers.Dense(nb_classes, activation="softmax")(x)
  model = tf.keras.models.Model(inputs=input, outputs=output)

  # compile model
  model.compile(loss="categorical_crossentropy", optimizer="adam")

  # train
  history = model.fit(X_train, y_train, batch_size=batch_size, epochs=nb_epoch, verbose=1, validation_data=(X_train, y_train))
Relevant log output:
  Epoch 1/10:  32/32 - 50s 1s/step    - loss: 0.4807 - val_loss: 4.7196
  Epoch 2/10:  32/32 - 29s 912ms/step - loss: 0.1487 - val_loss: 9.1618
  Epoch 3/10:  32/32 - 29s 915ms/step - loss: 0.1485 - val_loss: 15.0174
  Epoch 4/10:  32/32 - 30s 950ms/step - loss: 0.0986 - val_loss: 8.3269
  Epoch 5/10:  32/32 - 30s 925ms/step - loss: 0.0546 - val_loss: 11.8151
  Epoch 6/10:  32/32 - 30s 926ms/step - loss: 0.0379 - val_loss: 11.5285
  Epoch 7/10:  32/32 - 29s 924ms/step - loss: 0.1218 - val_loss: 9.5500
  Epoch 8/10:  32/32 - 29s 906ms/step - loss: 0.0689 - val_loss: 9.0055
  Epoch 9/10:  32/32 - 29s 908ms/step - loss: 0.0452 - val_loss: 6.9891
  Epoch 10/10: 32/32 - 30s 952ms/step - loss: 0.0319 - val_loss: 10.5948
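One plausible contributor to the gap above: BatchNormalization (used throughout MobileNetV2) normalizes with the current batch's statistics during training but with slowly-updated moving averages during validation, so early in fine-tuning the two paths can produce very different activations, and hence losses, on identical data. A toy plain-Python illustration of that mechanism follows; this is not Keras internals, and the momentum and data values are invented for the example.

```python
# Toy 1-D batch norm: compare the training path and the inference path on the same batch.
def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

batch = [4.0, 5.0, 6.0, 7.0]
momentum = 0.99                      # illustrative, Keras-default-style momentum
moving_mean, moving_var = 0.0, 1.0   # freshly initialized running statistics
eps = 1e-3

# Training path: normalize with the batch's own statistics (output is centered near 0).
m, v = mean(batch), var(batch)
train_out = [(x - m) / (v + eps) ** 0.5 for x in batch]

# The moving averages only inch toward the batch statistics after one step.
moving_mean = momentum * moving_mean + (1 - momentum) * m
moving_var = momentum * moving_var + (1 - momentum) * v

# Validation path: normalize with the (still nearly initial) moving statistics,
# producing very different activations for the exact same inputs.
val_out = [(x - moving_mean) / (moving_var + eps) ** 0.5 for x in batch]
```

On this toy batch the training-path output is centered at zero while the validation-path output is still far from it, which is the same kind of train/val mismatch that shrinks only as the moving averages converge over many steps.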
tensorflowtensorflow
crash when running tf.random.fixed_unigram_candidate_sampler
Bug
(click to expand) Issue type: bug; have you reproduced the bug with tf-nightly: no; source: binary; TensorFlow version: 2.10.0; custom code: yes; OS platform and distribution: Ubuntu 22.04; mobile device: no response; Python version: 3.9; Bazel version: no response; GCC/compiler version: no response; CUDA/cuDNN version: no response; GPU model and memory: no response. Current behaviour: a bug happens. Standalone code to reproduce the issue: (empty). Relevant log output: no response
tensorflowtensorflow
crash when running tf.nn.conv2d_transpose
Bug
(click to expand) Issue type: bug; have you reproduced the bug with tf-nightly: no; source: binary; TensorFlow version: 2.10.0; custom code: yes; OS platform and distribution: Ubuntu 22.04; mobile device: no response; Python version: 3.9; Bazel version: no response; GCC/compiler version: no response; CUDA/cuDNN version: no response; GPU model and memory: no response. Current behaviour: a bug happens. Standalone code to reproduce the issue: (empty). Relevant log output: no response
tensorflowtensorflow
segfault when running tf.math.unsorted_segment_sqrt_n
Bug
(click to expand) Issue type: bug; have you reproduced the bug with tf-nightly: yes; source: binary; TensorFlow version: 2.10.0; custom code: yes; OS platform and distribution: Ubuntu 22.04; mobile device: no response; Python version: 3.9; Bazel version: no response; GCC/compiler version: no response; CUDA/cuDNN version: no response; GPU model and memory: no response. Current behaviour: a bug happens. Standalone code to reproduce the issue: (empty). Relevant log output: no response
tensorflowtensorflow
segfault when running tf.keras.layers.UpSampling2D
Bug
(click to expand) Issue type: bug; have you reproduced the bug with tf-nightly: no; source: binary; TensorFlow version: 2.10.0; custom code: yes; OS platform and distribution: Ubuntu 22.04; mobile device: no response; Python version: 3.9; Bazel version: no response; GCC/compiler version: no response; CUDA/cuDNN version: no response; GPU model and memory: no response. Current behaviour: segfault. Standalone code to reproduce the issue: (empty). Relevant log output: no response
tensorflowtensorflow
crash when running tf.image.combined_non_max_suppression
Bug
(click to expand) Issue type: bug; have you reproduced the bug with tf-nightly: no; source: binary; TensorFlow version: 2.10.0; custom code: yes; OS platform and distribution: Ubuntu 22.04; mobile device: no response; Python version: 3.9; Bazel version: no response; GCC/compiler version: no response; CUDA/cuDNN version: no response; GPU model and memory: no response. Current behaviour: a bug happens. Standalone code to reproduce the issue: (empty). Relevant log output: no response
tensorflowtensorflow
segfault when running tensorflow.python.ops.gen_sparse_ops.sparse_concat
Bug
(click to expand) Issue type: bug; have you reproduced the bug with tf-nightly: no; source: binary; TensorFlow version: 2.10.0; custom code: yes; OS platform and distribution: Ubuntu 22.04; mobile device: no response; Python version: 3.9; Bazel version: no response; GCC/compiler version: no response; CUDA/cuDNN version: no response; GPU model and memory: no response
Current behaviour: a bug happens
Standalone code to reproduce the issue:
  import tensorflow as tf
  import os
  import numpy as np
  from tensorflow.python.ops import gen_sparse_ops
  try:
      arg_0 = []
      arg_1_0_tensor = tf.convert_to_tensor(np.ones([4], dtype=str))
      arg_1_0 = tf.identity(arg_1_0_tensor)
      arg_1_1_tensor = tf.convert_to_tensor(np.ones([6], dtype=str))
      arg_1_1 = tf.identity(arg_1_1_tensor)
      arg_1 = [arg_1_0, arg_1_1]
      arg_2_0_tensor = tf.random.uniform([3], minval=256, maxval=257, dtype=tf.int64)
      arg_2_0 = tf.identity(arg_2_0_tensor)
      arg_2_1_tensor = tf.random.uniform([3], minval=256, maxval=257, dtype=tf.int64)
      arg_2_1 = tf.identity(arg_2_1_tensor)
      arg_2 = [arg_2_0, arg_2_1]
      arg_3 = 1
      out = gen_sparse_ops.sparse_concat(arg_0, arg_1, arg_2, arg_3)
  except Exception as e:
      print("Error: " + str(e))
relevant log output shell 2023 01 22 18 04 13 670223 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 22 18 04 13 766992 e tensorflow stream executor cuda cuda blas cc 2981 unable to register cubla factory attempt to register factory for plugin cubla when one have already be register 2023 01 22 18 04 14 194664 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer so 7 dlerror libnvinfer so 7 can not open share object file no such file or directory ld library path lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs tf 2 10 lib 2023 01 22
18 04 14 194828 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer plugin so 7 dlerror libnvinfer plugin so 7 can not open share object file no such file or directory ld library path lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs tf 2 10 lib 2023 01 22 18 04 14 194835 w tensorflow compiler tf2tensorrt util py util cc 38 tf trt warning can not dlopen some tensorrt librarie if you would like to use nvidia gpu with tensorrt please make sure the miss library mention above be instal properly 2023 01 22 18 04 14 686955 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 18 04 14 692203 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 18 04 14 692312 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 18 04 14 692614 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 22 18 04 14 693090 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 18 04 14 693208 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 
2023 01 22 18 04 14 693333 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 18 04 15 052698 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 18 04 15 052898 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 18 04 15 052994 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 18 04 15 053079 I tensorflow core common runtime gpu gpu device cc 1616 create device job localhost replica 0 task 0 device gpu 0 with 4302 mb memory device 0 name nvidia geforce gtx 1660 ti pci bus i d 0000 01 00 0 compute capability 7 5 segmentation fault
tensorflowtensorflow
unknown crash when running tensorflow.python.ops.gen_nn_ops.max_pool
Bug
(click to expand) Issue type: bug; have you reproduced the bug with tf-nightly: no; source: binary; TensorFlow version: 2.10.0; custom code: yes; OS platform and distribution: Ubuntu 22.04; mobile device: no response; Python version: 3.9; Bazel version: no response; GCC/compiler version: no response; CUDA/cuDNN version: no response; GPU model and memory: no response
Current behaviour: a bug happens
Standalone code to reproduce the issue:
  import tensorflow as tf
  import os
  import numpy as np
  from tensorflow.python.ops import gen_nn_ops
  try:
      arg_0_tensor = tf.random.uniform([1, 6, 8, 1], dtype=tf.float32)
      arg_0 = tf.identity(arg_0_tensor)
      ksize_0 = 1
      ksize_1 = 3
      ksize_2 = 16
      ksize_3 = 1
      ksize = [ksize_0, ksize_1, ksize_2, ksize_3]
      strides_0 = 1
      strides_1 = 1
      strides_2 = 1
      strides_3 = 1
      strides = [strides_0, strides_1, strides_2, strides_3]
      padding = "VALID"
      explicit_paddings = []
      data_format = "NHWC"
      out = gen_nn_ops.max_pool(arg_0, ksize=ksize, strides=strides, padding=padding, explicit_paddings=explicit_paddings, data_format=data_format)
  except Exception as e:
      print("Error: " + str(e))
relevant log output shell 2023 01 22 17 52 34 603243 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 22 17 52 34 697556 e tensorflow stream executor cuda cuda blas cc 2981 unable to register cubla factory attempt to register factory for plugin cubla when one have already be register 2023 01 22 17 52 35 114243 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer so 7 dlerror libnvinfer so 7 can not open share object file no such file or directory ld library path lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs tf 2 10 lib 2023 01 22 17 52 35 114398 w tensorflow stream executor platform default dso loader cc 64
could not load dynamic library libnvinfer plugin so 7 dlerror libnvinfer plugin so 7 can not open share object file no such file or directory ld library path lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs tf 2 10 lib 2023 01 22 17 52 35 114405 w tensorflow compiler tf2tensorrt util py util cc 38 tf trt warning can not dlopen some tensorrt librarie if you would like to use nvidia gpu with tensorrt please make sure the miss library mention above be instal properly 2023 01 22 17 52 35 589151 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 52 35 595319 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 52 35 595425 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 52 35 595718 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 22 17 52 35 596417 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 52 35 596518 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 52 35 596607 I tensorflow stream executor cuda cuda gpu executor 
cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 52 35 944546 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 52 35 944690 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 52 35 944781 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 52 35 944869 I tensorflow core common runtime gpu gpu device cc 1616 create device job localhost replica 0 task 0 device gpu 0 with 4299 mb memory device 0 name nvidia geforce gtx 1660 ti pci bus i d 0000 01 00 0 compute capability 7 5 2023 01 22 17 52 36 501008 I tensorflow stream executor cuda cuda dnn cc 384 load cudnn version 8100 2023 01 22 17 52 36 501367 f tensorflow stream executor cuda cuda dnn cc 886 check fail cudnnsetpoolingnddescriptor handle get pool descriptor mode dnn poolingmode kmaximum cudnn max pooling mode cudnn pool average count exclude padding propagate nan cudnn propagate nan cudnn not propagate nan nd shape datum padding datum stride data cudnn status success 3 vs 0 abort
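The repro above passes a pooling window of (3, 16) over a 6x8 spatial input, so the window is larger than the input along one axis; under VALID padding that leaves no valid output positions, and cuDNN rejects the pooling descriptor with the fatal CHECK rather than a Python-level error. A small plain-Python check of the output-size arithmetic (illustrative only, not TensorFlow's validation code):

```python
def valid_pool_output_size(in_size, ksize, stride):
    # Output length along one axis of a pooling op under VALID padding.
    return (in_size - ksize) // stride + 1

# Input is 6x8 spatially; the window is 3x16 with stride 1 (values from the repro).
h = valid_pool_output_size(6, 3, 1)   # -> 4, a legal axis
w = valid_pool_output_size(8, 16, 1)  # non-positive, so the descriptor is invalid
assert h == 4
assert w <= 0  # this is the shape cuDNN aborts on instead of raising an error
```

A Python-level shape check before the op reaches cuDNN would turn this abort into an ordinary InvalidArgumentError, which is presumably what the report is asking for.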
tensorflowtensorflow
TensorFlow doesn't work with CUDA 12 on WSL2
Bug
(click to expand) Issue type: bug; have you reproduced the bug with tf-nightly: yes; source: binary; TensorFlow version: v2.11.0-rc2-17-gd5b57ca93e5 2.11.0; custom code: no; OS platform and distribution: WSL2 Ubuntu 22.04; mobile device: n/a; Python version: 3.10.6; Bazel version: no response; GCC/compiler version: no response; CUDA/cuDNN version: 12.0; GPU model and memory: no response
Current behaviour: when running the TensorFlow installation verification commands right after pip install, both the regular release and nightly TensorFlow look for CUDA 11 libraries instead of 12.
Steps to reproduce the issue:
1. follow the instructions to enable GPU in WSL2 and test the sample app (fine)
2. install NVIDIA CUDA on Ubuntu
3. run the nvidia-smi command (correct output)
4. pip install tf-nightly or tensorflow (runs fine)
5. run the two commands below; both error with CUDA 12 libraries not found, even though LD_LIBRARY_PATH is set correctly:
   python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
   python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
Relevant log output:
nvidia-smi command output: Sun Jan 22 17:07:57 2023; NVIDIA-SMI 527.92.01; Driver Version: 528.02; CUDA Version: 12.0; GPU 0: NVIDIA GeForce, Bus-Id 00000000:0b:00.0, Disp.A on/off, 34C P5 35W / 450W, 1619MiB / 24564MiB, 4% Default; processes: PID 23, type G, Xwayland
nvcc command output: nvcc: NVIDIA (R) Cuda compiler driver; Copyright (c) 2005-2022 NVIDIA Corporation; Built on Mon Oct 24 19:12:58 PDT 2022; Cuda compilation tools, release 12.0, V12.0.76; Build cuda_12.0.r12.0/compiler.31968024_0
python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))" command output:
2023 01 22 17 41 38 117845 I tensorflow core platform cpu feature
guard cc 182 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 22 17 41 38 193983 w tensorflow tsl platform default dso loader cc 67 could not load dynamic library libcudart so 11 0 dlerror libcudart so 11 0 can not open share object file no such file or directory ld library path usr local cuda lib64 2023 01 22 17 41 38 194028 I tensorflow tsl cuda cudart stub cc 28 ignore above cudart dlerror if you do not have a gpu set up on your machine 2023 01 22 17 41 38 214984 e tensorflow tsl lib monitor collection registry cc 81 can not register 2 metric with the same name tensorflow core bfc allocator delay 2023 01 22 17 41 38 638503 w tensorflow tsl platform default dso loader cc 67 could not load dynamic library libnvinfer so 8 dlerror libnvinfer so 8 can not open share object file no such file or directory ld library path usr local cuda lib64 2023 01 22 17 41 38 638571 w tensorflow tsl platform default dso loader cc 67 could not load dynamic library libnvinfer plugin so 8 dlerror libnvinfer plugin so 8 can not open share object file no such file or directory ld library path usr local cuda lib64 2023 01 22 17 41 38 638592 w tensorflow compiler tf2tensorrt util py util cc 38 tf trt warning can not dlopen some tensorrt librarie if you would like to use nvidia gpu with tensorrt please make sure the miss library mention above be instal properly 2023 01 22 17 41 39 158693 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 984 could not open file to read numa node sys bus pci device 0000 0b 00 0 numa node your kernel may have be build without numa support 2023 01 22 17 41 39 158768 w tensorflow tsl platform default dso loader cc 67 could not load dynamic library libcudart so 11 0 dlerror libcudart so 11 0 can not open share object file no such file or 
directory ld library path usr local cuda lib64 2023 01 22 17 41 39 158808 w tensorflow tsl platform default dso loader cc 67 could not load dynamic library libcubla so 11 dlerror libcubla so 11 can not open share object file no such file or directory ld library path usr local cuda lib64 2023 01 22 17 41 39 158844 w tensorflow tsl platform default dso loader cc 67 could not load dynamic library libcublaslt so 11 dlerror libcublaslt so 11 can not open share object file no such file or directory ld library path usr local cuda lib64 2023 01 22 17 41 39 158877 w tensorflow tsl platform default dso loader cc 67 could not load dynamic library libcufft so 10 dlerror libcufft so 10 can not open share object file no such file or directory ld library path usr local cuda lib64 2023 01 22 17 41 39 167698 w tensorflow tsl platform default dso loader cc 67 could not load dynamic library libcusparse so 11 dlerror libcusparse so 11 can not open share object file no such file or directory ld library path usr local cuda lib64 2023 01 22 17 41 39 167763 w tensorflow tsl platform default dso loader cc 67 could not load dynamic library libcudnn so 8 dlerror libcudnn so 8 can not open share object file no such file or directory ld library path usr local cuda lib64 2023 01 22 17 41 39 167772 w tensorflow core common runtime gpu gpu device cc 1955 can not dlopen some gpu library please make sure the miss library mention above be instal properly if you would like to use gpu follow the guide at for how to download and setup the require library for your platform skip register gpu device 2023 01 22 17 41 39 168021 I tensorflow core platform cpu feature guard cc 182 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag tf tensor 1520 5212 shape dtype float32
tensorflowtensorflow
CUDA launch failure when running tensorflow.python.ops.gen_nn_ops.fused_batch_norm_grad_v3
Bug
(click to expand) Issue type: bug; have you reproduced the bug with tf-nightly: no; source: binary; TensorFlow version: 2.11.0; custom code: yes; OS platform and distribution: Ubuntu 22.04; mobile device: no response; Python version: 3.9; Bazel version: no response; GCC/compiler version: no response; CUDA/cuDNN version: no response; GPU model and memory: no response
Current behaviour: a bug happens
Standalone code to reproduce the issue:
  import tensorflow as tf
  import numpy as np
  from tensorflow.python.ops import gen_nn_ops
  try:
      try:
          with tf.device("/CPU:0"):
              y_backprop_tensor = tf.random.uniform([4, 10, 10, 2], dtype=tf.float32)
              y_backprop = tf.identity(y_backprop_tensor)
              x_tensor = tf.random.uniform([4, 10, 10, 2], dtype=tf.float32)
              x = tf.identity(x_tensor)
              scale_tensor = tf.random.uniform([2], dtype=tf.float32)
              scale = tf.identity(scale_tensor)
              reserve_space_1_tensor = tf.random.uniform([2], dtype=tf.float32)
              reserve_space_1 = tf.identity(reserve_space_1_tensor)
              reserve_space_2_tensor = tf.random.uniform([2], dtype=tf.float32)
              reserve_space_2 = tf.identity(reserve_space_2_tensor)
              epsilon = 21.999
              data_format = "NHWC"
              is_training = True
              reserve_space_3_tensor = tf.random.uniform([], dtype=tf.float32)
              reserve_space_3 = tf.identity(reserve_space_3_tensor)
              out = gen_nn_ops.fused_batch_norm_grad_v3(y_backprop=y_backprop, x=x, scale=scale, reserve_space_1=reserve_space_1, reserve_space_2=reserve_space_2, epsilon=epsilon, data_format=data_format, is_training=is_training, reserve_space_3=reserve_space_3)
      except Exception as e:
          print("Error: " + str(e))
      try:
          with tf.device("/GPU:0"):
              y_backprop = tf.identity(y_backprop_tensor)
              y_backprop = tf.cast(y_backprop, tf.float32)
              x = tf.identity(x_tensor)
              x = tf.cast(x, tf.float32)
              scale = tf.identity(scale_tensor)
              scale = tf.cast(scale, tf.float32)
              reserve_space_1 = tf.identity(reserve_space_1_tensor)
              reserve_space_1 = tf.cast(reserve_space_1, tf.float32)
              reserve_space_2 = tf.identity(reserve_space_2_tensor)
              reserve_space_2 = tf.cast(reserve_space_2, tf.float32)
              reserve_space_3 = tf.identity(reserve_space_3_tensor)
              reserve_space_3 = tf.cast(reserve_space_3, tf.float32)
              gen_nn_ops.fused_batch_norm_grad_v3(y_backprop=y_backprop, x=x, scale=scale, reserve_space_1=reserve_space_1, reserve_space_2=reserve_space_2, epsilon=epsilon, data_format=data_format, is_training=is_training, reserve_space_3=reserve_space_3)
      except Exception as e:
          print("Error: " + str(e))
  except Exception as e:
      print("Error: " + str(e))
relevant log output shell 2023 01 22 17 02 26 555737 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 22 17 02 26 649876 e tensorflow stream executor cuda cuda blas cc 2981 unable to register cubla factory attempt to register factory for plugin cubla when one have already be register 2023 01 22 17 02 27 067009 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer so 7 dlerror libnvinfer so 7 can not open share object file no such file or directory ld library path lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs tf 2 10 lib 2023 01 22 17 02 27 067164 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer plugin so 7 dlerror libnvinfer plugin so 7 can not open share object file no such file or directory ld library path lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs tf 2 10 lib 2023 01 22 17 02 27 067171 w tensorflow compiler tf2tensorrt util py util cc 38 tf trt warning can not dlopen some tensorrt librarie if you would like to use nvidia gpu with tensorrt please make sure the miss library mention above be instal properly 2023 01 22 17 02 27 546127 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at
least one numa node so return numa node zero 2023 01 22 17 02 27 552775 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 02 27 552903 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 02 27 553197 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 22 17 02 27 553646 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 02 27 553745 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 02 27 553833 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 02 27 911000 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 02 27 911133 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 17 02 27 911223 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but 
there must be at least one numa node so return numa node zero 2023 01 22 17 02 27 911307 I tensorflow core common runtime gpu gpu device cc 1616 create device job localhost replica 0 task 0 device gpu 0 with 4200 mb memory device 0 name nvidia geforce gtx 1660 ti pci bus i d 0000 01 00 0 compute capability 7 5 2023 01 22 17 02 28 500075 I tensorflow stream executor cuda cuda dnn cc 384 load cudnn version 8100 2023 01 22 17 02 28 501974 e tensorflow stream executor dnn cc 868 cudnn status bad param in tensorflow stream executor cuda cuda dnn cc 5594 cudnnbatchnormalizationbackward cudnn handle mode one zero one zero x descriptor handle x opaque x descriptor handle y backprop opaque x descriptor handle x backprop opaque scale offset descriptor handle scale opaque scale backprop opaque offset backprop opaque epsilon mean opaque inv var opaque error function node wrap fusedbatchnormgradv3 device job localhost replica 0 task 0 device gpu 0 cudnn launch failure input shape 4 10 10 2 op fusedbatchnormgradv3
tensorflowtensorflow
crash when running
Bug
(click to expand) Issue type: bug; have you reproduced the bug with tf-nightly: no; source: binary; TensorFlow version: 2.11.0; custom code: yes; OS platform and distribution: Ubuntu 22.04; mobile device: no response; Python version: 3.9; Bazel version: no response; GCC/compiler version: no response; CUDA/cuDNN version: no response; GPU model and memory: no response
Current behaviour: a bug happens
Standalone code to reproduce the issue:
  import tensorflow as tf
  import os
  import numpy as np
  from tensorflow.python.ops import gen_nn_ops
  try:
      arg_0_tensor = tf.random.uniform([1, 6, 8, 1], dtype=tf.float32)
      arg_0 = tf.identity(arg_0_tensor)
      ksize_0 = 1
      ksize_1 = 1
      ksize_2 = 0
      ksize_3 = 1
      ksize = [ksize_0, ksize_1, ksize_2, ksize_3]
      strides_0 = 1
      strides_1 = 1
      strides_2 = 1
      strides_3 = 1
      strides = [strides_0, strides_1, strides_2, strides_3]
      padding = "VALID"
      explicit_paddings = []
      data_format = "NHWC"
      out = gen_nn_ops.max_pool(arg_0, ksize=ksize, strides=strides, padding=padding, explicit_paddings=explicit_paddings, data_format=data_format)
  except Exception as e:
      print("Error: " + str(e))
relevant log output shell 2023 01 22 11 22 43 127824 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 22 11 22 43 643039 w tensorflow compiler xla stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer so 7 dlerror libnvinfer so 7 can not open share object file no such file or directory ld library path lib home nimashiri anaconda3 envs cuda11 2 lib home nimashiri anaconda3 envs cuda11 2 lib 2023 01 22 11 22 43 643196 w tensorflow compiler xla stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer plugin so 7 dlerror libnvinfer plugin so 7 can not open share object file no such file or directory ld library path lib home nimashiri anaconda3 envs cuda11 2 lib
home nimashiri anaconda3 envs cuda11 2 lib 2023 01 22 11 22 43 643203 w tensorflow compiler tf2tensorrt util py util cc 38 tf trt warning can not dlopen some tensorrt librarie if you would like to use nvidia gpu with tensorrt please make sure the miss library mention above be instal properly 2023 01 22 11 22 44 118255 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 11 22 44 124694 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 11 22 44 124806 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 11 22 44 125150 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 22 11 22 44 125786 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 11 22 44 125926 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 11 22 44 126017 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 11 22 44 484008 I 
tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 11 22 44 484181 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 11 22 44 484305 I tensorflow compiler xla stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 22 11 22 44 484403 I tensorflow core common runtime gpu gpu device cc 1613 create device job localhost replica 0 task 0 device gpu 0 with 4269 mb memory device 0 name nvidia geforce gtx 1660 ti pci bus i d 0000 01 00 0 compute capability 7 5 2023 01 22 11 22 45 031828 I tensorflow compiler xla stream executor cuda cuda dnn cc 428 load cudnn version 8100 2023 01 22 11 22 45 031891 f tensorflow compiler xla stream executor cuda cuda dnn cc 958 check fail cudnnsetpoolingnddescriptor handle get pool descriptor mode dnn poolingmode kmaximum cudnn max pooling mode cudnn pool average count exclude padding propagate nan cudnn propagate nan cudnn not propagate nan nd shape datum padding datum stride data cudnn status success 3 vs 0 abort
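The repro above passes a pooling window with a zero entry (ksize = [1, 1, 0, 1]). cuDNN rejects window dimensions smaller than 1, and the "(3 vs. 0)" in the CHECK failure appears to correspond to CUDNN_STATUS_BAD_PARAM (status code 3). A minimal front-end check that would reject such a ksize before it ever reaches native code — a hypothetical helper for illustration, not TensorFlow API:

```python
def ksize_is_valid(ksize):
    # A 2-D max pool expects a 4-element window, every dimension >= 1.
    return len(ksize) == 4 and all(k >= 1 for k in ksize)

print(ksize_is_valid([1, 1, 0, 1]))  # False: the zero window dimension is invalid
print(ksize_is_valid([1, 2, 2, 1]))  # True
```

Validating at the Python layer turns a hard process abort into an ordinary catchable exception.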
tensorflow/tensorflow
Segfault when running tensorflow.python.ops.gen_logging_ops print op
Bug
Issue type: Bug
Have you reproduced the bug with tf-nightly? No
Source: binary
TensorFlow version: 2.11.0
Custom code: Yes
OS platform and distribution: Ubuntu 22.04
Mobile device: no response
Python version: 3.9
Bazel version / GCC compiler version / CUDA-cuDNN version / GPU model and memory: no response

Current behaviour: probably due to NaN input.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import os
    import numpy as np
    from tensorflow.python.ops import gen_logging_ops
    try:
        arg_0_tensor = tf.saturate_cast(
            tf.random.uniform([], minval=0, maxval=2, dtype=tf.int64),
            dtype=tf.int8)
        arg_0 = tf.identity(arg_0_tensor)
        arg_1_0_tensor = tf.cast(
            tf.random.uniform([3], minval=0, maxval=2, dtype=tf.int32),
            dtype=tf.bool)
        arg_1_0 = tf.identity(arg_1_0_tensor)
        arg_1 = [arg_1_0]
        arg_2 = float('nan')
        arg_3 = None
        arg_4 = 0.26650310319346027
        arg_5 = float('nan')
        out = gen_logging_ops._print(arg_0, arg_1, arg_2, arg_3, arg_4, arg_5)
    except Exception as e:
        print("Error:" + str(e))

Relevant log output (oneDNN banner, libnvinfer/TensorRT warnings, and repeated NUMA-node messages as in the report above, omitted here):

    2023-01-22 05:45:41 I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4267 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1660 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5
    Segmentation fault
tensorflow/tensorflow
Error when running tensorflow.python.keras.layers.dense_attention lower-triangular mask
Bug
Issue type: Bug
Have you reproduced the bug with tf-nightly? No
Source: binary
TensorFlow version: 2.11.0
Custom code: Yes
OS platform and distribution: Ubuntu 22.04
Mobile device: no response
Python version: 3.9
Bazel version / GCC compiler version / CUDA-cuDNN version / GPU model and memory: no response

Current behaviour: due to a large tensor.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import os
    import numpy as np
    from tensorflow.python.keras.layers import dense_attention
    try:
        arg_0_tensor = tf.constant(78450714517633, shape=[], dtype=tf.int32)
        arg_0 = tf.identity(arg_0_tensor)
        out = dense_attention._lower_triangular_mask(arg_0)
    except Exception as e:
        print("Error:" + str(e))

Relevant log output (startup banner, library-loading warnings, and repeated NUMA-node messages omitted; the per-bin BFC allocator listing is condensed):

    2023-01-22 04:45:49 I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4193 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1660 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5
    2023-01-22 04:45:49 W tensorflow/tsl/framework/cpu_allocator_impl.cc:82] Allocation of 4632444412 exceeds 10% of free system memory.
    2023-01-22 04:46:00 W tensorflow/tsl/framework/bfc_allocator.cc:479] Allocator (GPU_0_bfc) ran out of memory trying to allocate 4.31GiB (rounded to 4632444416) requested by op CumSum. If the cause is memory fragmentation, maybe the environment variable TF_GPU_ALLOCATOR=cuda_malloc_async will improve the situation. Current allocation summary follows.
    2023-01-22 04:46:00 I tensorflow/tsl/framework/bfc_allocator.cc:1034] BFCAllocator dump for GPU_0_bfc
        (per-bin chunk listing omitted: all bins empty except one 1.2KiB chunk in use and one free region of 4.09GiB)
    2023-01-22 04:46:00 I tensorflow/tsl/framework/bfc_allocator.cc:1110] Stats: Limit: 4396810240  InUse: 1280  MaxInUse: 1280  NumAllocs: 1  MaxAllocSize: 1280  Reserved: 0  PeakReserved: 0  LargestFreeBlock: 0
    2023-01-22 04:46:00 W tensorflow/tsl/framework/bfc_allocator.cc:492] Error: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run CumSum: Dst tensor is not initialized. [Op:CumSum]
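One plausible reading of the numbers in the log (an assumption, not confirmed by the report): the requested size 78450714517633 does not fit in the int32 the repro casts it to. Keeping only its low 32 bits and reinterpreting them as a signed value gives -1158111103, and 1158111103 four-byte elements is exactly the 4,632,444,412-byte CumSum allocation the allocator reports:

```python
n = 78_450_714_517_633
low32 = n % 2**32                                 # truncate to 32 bits
signed = low32 - 2**32 if low32 >= 2**31 else low32  # reinterpret as signed int32
print(signed)           # -1158111103
print(abs(signed) * 4)  # 4632444412 bytes, matching the logged allocation size
```

The exact match between the wrapped value and the logged byte count suggests the oversized length silently overflowed before the allocation was attempted.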
tensorflow/tensorflow
cuDNN launch failure when running fused_batch_norm_v3
Bug
Issue type: Bug
Have you reproduced the bug with tf-nightly? Yes
Source: binary
TensorFlow version: 2.11.0
Custom code: Yes
OS platform and distribution: Ubuntu 22.04
Mobile device: no response
Python version: 3.9
Bazel version / GCC compiler version / CUDA-cuDNN version / GPU model and memory: no response

Current behaviour: CUDA launch failure with the following input arguments.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import numpy as np
    from tensorflow.python.ops import gen_nn_ops
    try:
        try:
            with tf.device('/CPU'):
                arg_0_tensor = tf.random.uniform([1, 2, 1, 6], dtype=tf.float16)
                arg_0 = tf.identity(arg_0_tensor)
                arg_1_tensor = tf.random.uniform([6], dtype=tf.float32)
                arg_1 = tf.identity(arg_1_tensor)
                arg_2_tensor = tf.random.uniform([6], dtype=tf.float32)
                arg_2 = tf.identity(arg_2_tensor)
                arg_3_tensor = tf.random.uniform([6], dtype=tf.float32)
                arg_3 = tf.identity(arg_3_tensor)
                arg_4_tensor = tf.random.uniform([6], dtype=tf.float32)
                arg_4 = tf.identity(arg_4_tensor)
                epsilon = 0.4105469566119495
                exponential_avg_factor = 1.0
                data_format = "NHWC"
                is_training = False
                out = gen_nn_ops.fused_batch_norm_v3(
                    arg_0, arg_1, arg_2, arg_3, arg_4,
                    epsilon=epsilon,
                    exponential_avg_factor=exponential_avg_factor,
                    data_format=data_format, is_training=is_training)
        except Exception as e:
            print("Error:" + str(e))
        try:
            with tf.device('/GPU:0'):
                arg_0 = tf.cast(tf.identity(arg_0_tensor), tf.float16)
                arg_1 = tf.cast(tf.identity(arg_1_tensor), tf.float32)
                arg_2 = tf.cast(tf.identity(arg_2_tensor), tf.float32)
                arg_3 = tf.cast(tf.identity(arg_3_tensor), tf.float32)
                arg_4 = tf.cast(tf.identity(arg_4_tensor), tf.float32)
                gen_nn_ops.fused_batch_norm_v3(
                    arg_0, arg_1, arg_2, arg_3, arg_4,
                    epsilon=epsilon,
                    exponential_avg_factor=exponential_avg_factor,
                    data_format=data_format, is_training=is_training)
        except Exception as e:
            print("Error:" + str(e))
    except Exception as e:
        print("Error:" + str(e))

Relevant log output (startup banner, library-loading warnings, and repeated NUMA-node messages omitted):

    2023-01-21 18:55:58 I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4261 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1660 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5
    2023-01-21 18:55:58 I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:428] Loaded cuDNN version 8100
    2023-01-21 18:55:58 E tensorflow/compiler/xla/stream_executor/dnn.cc:887] CUDNN_STATUS_BAD_PARAM in tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc(5790): 'cudnnBatchNormalizationForwardInference(cudnn.handle(), mode, &one, &zero, x_descriptor.handle(), x.opaque(), x_descriptor.handle(), y->opaque(), scale_offset_descriptor.handle(), scale.opaque(), offset.opaque(), estimated_mean.opaque(), maybe_inv_var, epsilon)'
    Error: {{function_node __wrapped__FusedBatchNormV3_device_/job:localhost/replica:0/task:0/device:GPU:0}} cuDNN launch failure : input shape ([1,2,1,6]) [Op:FusedBatchNormV3]
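For reference, with is_training=False FusedBatchNormV3 is an inference-mode batch norm: per channel, y = scale * (x - mean) / sqrt(variance + epsilon) + offset. The crash above happens inside the cuDNN call, not in this math, but a plain-Python sketch of the assumed per-channel formula makes the expected output easy to check by hand:

```python
import math

def bn_inference(x, scale, offset, mean, variance, epsilon):
    # One pixel across channels: y_c = scale_c*(x_c - mean_c)/sqrt(var_c + eps) + offset_c
    return [s * (v - m) / math.sqrt(var + epsilon) + o
            for v, s, o, m, var in zip(x, scale, offset, mean, variance)]

# x == mean, so the normalized value is 0 and only the offset survives:
print(bn_inference([1.0], [2.0], [3.0], [1.0], [4.0], 0.0))  # [3.0]
```

A CPU-side reference like this is a handy oracle when bisecting whether a GPU failure is a data problem or a kernel-launch problem.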
tensorflow/tensorflow
Segfault when running tensorflow.python.ops.nn_ops.fractional_avg_pool_v2
Bug
Issue type: Bug
Have you reproduced the bug with tf-nightly? No
Source: binary
TensorFlow version: 2.10.0
Custom code: Yes
OS platform and distribution: Ubuntu 22.04
Mobile device: no response
Python version: 3.9
Bazel version / GCC compiler version / CUDA-cuDNN version / GPU model and memory: no response

Current behaviour: probably due to negative or large arguments.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import os
    import numpy as np
    from tensorflow.python.ops import nn_ops
    try:
        arg_0_tensor = tf.random.uniform([3, 30, 50, 3], dtype=tf.float64)
        arg_0 = tf.identity(arg_0_tensor)
        arg_1_0 = 32
        arg_1_1 = 34
        arg_1_2 = 46
        arg_1_3 = True
        arg_1 = [arg_1_0, arg_1_1, arg_1_2, arg_1_3]
        arg_2 = True
        arg_3 = False
        seed = 341261001
        out = nn_ops.fractional_avg_pool_v2(arg_0, arg_1, arg_2, arg_3, seed=seed)
    except Exception as e:
        print("Error:" + str(e))

Relevant log output (startup banner and library-loading warnings condensed):

    2023-01-21 14:02:26 E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
    2023-01-21 14:02:27 W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory
    2023-01-21 14:02:27 W tensorflow/core/common_runtime/gpu/gpu_device.cc:1934] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly. Skipping registering GPU devices.
    Segmentation fault
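The repro passes pooling_ratio = [32, 34, 46, True]. Assuming the documented contract of fractional pooling (first and last entries must be 1.0, every entry at least 1.0, pooling only on the row/column dimensions), this value is invalid, and a Python-level check like the hypothetical one below would reject it instead of letting it reach native code:

```python
def pooling_ratio_is_valid(ratio):
    # Assumed constraints: 4 entries; batch and channel ratios fixed at 1.0;
    # every entry >= 1.0 (pooling can only shrink a dimension).
    return (len(ratio) == 4
            and ratio[0] == 1.0 and ratio[3] == 1.0
            and all(float(r) >= 1.0 for r in ratio))

print(pooling_ratio_is_valid([32, 34, 46, True]))      # False: batch ratio is not 1.0
print(pooling_ratio_is_valid([1.0, 1.44, 1.73, 1.0]))  # True
```

(Note that the trailing True slips past a naive equality check, since True == 1.0 in Python; the batch-dimension entry is what fails here.)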
tensorflow/tensorflow
Error when running tensorflow.python.ops.gen_array_ops.pad_v2
Bug
Issue type: Bug
Have you reproduced the bug with tf-nightly? No
Source: binary
TensorFlow version: 2.10.0
Custom code: Yes
OS platform and distribution: Ubuntu 22.04
Mobile device: no response
Python version: 3.9
Bazel version / GCC compiler version / CUDA-cuDNN version / GPU model and memory: no response

Current behaviour: probably due to negative or empty inputs.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import os
    import numpy as np
    from tensorflow.python.ops import gen_array_ops
    try:
        arg_0_tensor = tf.saturate_cast(
            tf.random.uniform([], minval=0, maxval=2, dtype=tf.int64),
            dtype=tf.uint64)
        arg_0 = tf.identity(arg_0_tensor)
        arg_1 = []
        arg_2 = 992
        out = gen_array_ops.pad_v2(arg_0, arg_1, arg_2)
    except Exception as e:
        print("Error:" + str(e))

Relevant log output (startup banner, library-loading warnings, and "skipping registering GPU devices" messages as in the report above, condensed):

    2023-01-21 13:38:39 E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
    Error: returned a result with an error set
tensorflow/tensorflow
Process gets killed: tensorflow.python.ops.signal.window_ops.hann_window
Bug
click to expand issue type bug have you reproduce the bug with tf nightly yes source binary tensorflow version 2 10 0 custom code yes os platform and distribution ubuntu 22 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell very large input argument standalone code to reproduce the issue shell import tensorflow as tf import numpy as np from tensorflow python op signal import window op try try with tf device cpu arg 0 125091515651 periodic false dtype tf float64 out window op hann window arg 0 periodic periodic dtype dtype except exception as e print error str e try with tf device gpu 0 dtype tf float64 window op hann window arg 0 periodic periodic dtype dtype except exception as e print error str e except exception as e print error str e relevant log output shell 2023 01 21 12 56 28 727252 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 21 12 56 28 846144 e tensorflow stream executor cuda cuda blas cc 2981 unable to register cubla factory attempt to register factory for plugin cubla when one have already be register 2023 01 21 12 56 29 407955 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer so 7 dlerror libnvinfer so 7 can not open share object file no such file or directory ld library path lib 2023 01 21 12 56 29 408152 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer plugin so 7 dlerror libnvinfer plugin so 7 can not open share object file no such file or directory ld library path lib 2023 01 21 12 56 29 408161 w tensorflow compiler tf2tensorrt util py util 
cc 38 tf trt warning can not dlopen some tensorrt librarie if you would like to use nvidia gpu with tensorrt please make sure the miss library mention above be instal properly 2023 01 21 12 56 30 003853 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 21 12 56 30 032131 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libcudnn so 8 dlerror libcudnn so 8 can not open share object file no such file or directory ld library path lib 2023 01 21 12 56 30 032148 w tensorflow core common runtime gpu gpu device cc 1934 can not dlopen some gpu library please make sure the miss library mention above be instal properly if you would like to use gpu follow the guide at for how to download and setup the require library for your platform skip register gpu device 2023 01 21 12 56 30 032397 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 21 12 56 30 053099 w tensorflow core framework cpu allocator impl cc 82 allocation of 2149856268 exceed 10 of free system memory 2023 01 21 12 56 30 731465 w tensorflow core framework cpu allocator impl cc 82 allocation of 4299712536 exceed 10 of free system memory 2023 01 21 12 56 31 418647 w tensorflow core framework cpu allocator impl cc 82 allocation of 4299712536 exceed 10 of free system memory 2023 01 21 12 56 32 072936 w tensorflow core framework cpu allocator impl cc 82 allocation of 4299712536 exceed 10 of free system memory 2023 01 21 12 56 33 361915 w tensorflow core framework cpu allocator impl cc 82 allocation of 4299712536 exceed 10 of free system memory kill
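Two things stand out in the numbers above, and both can be checked with plain Python (no TensorFlow required). The requested float64 window is close to a terabyte, so the final "Killed" is the OOM killer. Separately, the logged allocations of 4299712536 and 2149856268 bytes both match the requested length truncated modulo 2**32 — an int32-overflow signature. This is an observation about the logged numbers, not a confirmed diagnosis of the kernel:

```python
# Two observations about the sizes in the log, in plain Python.
window_length = 125091515651            # length passed to hann_window in the repro

# (1) The request itself: a float64 window this long needs ~932 GiB.
requested_bytes = window_length * 8
requested_gib = requested_bytes / 2**30

# (2) The log shows allocations of 4299712536 and 2149856268 bytes.
# Both match the length truncated modulo 2**32, at 8 and 4 bytes per
# element respectively -- consistent with an int32 overflow somewhere:
truncated_length = window_length % 2**32    # = 537464067
f64_alloc = truncated_length * 8            # = 4299712536, as in the log
f32_alloc = truncated_length * 4            # = 2149856268, as in the log
```

Either way, the window length is far beyond what can be allocated, so arguably the op should validate the argument and raise a clear error instead of letting the process be killed.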
tensorflowtensorflow
runtime error in tensorflow.python.ops.stateless_random_ops.stateless_truncated_normal
Bug
click to expand issue type bug have you reproduce the bug with tf nightly no source source tensorflow version 2 10 0 custom code yes os platform and distribution ubuntu 22 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell nan input or empty input standalone code to reproduce the issue shell import tensorflow as tf import os import numpy as np from tensorflow python op import stateless random op try shape mean 404 stddev nan dtype tf uint64 seed out stateless random op stateless truncate normal shape shape mean mean stddev stddev dtype dtype seed seed except exception as e print error str e relevant log output shell 2023 01 21 12 03 48 192910 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 21 12 03 48 497131 e tensorflow stream executor cuda cuda blas cc 2981 unable to register cubla factory attempt to register factory for plugin cubla when one have already be register 2023 01 21 12 03 49 378226 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer so 7 dlerror libnvinfer so 7 can not open share object file no such file or directory ld library path lib 2023 01 21 12 03 49 378430 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer plugin so 7 dlerror libnvinfer plugin so 7 can not open share object file no such file or directory ld library path lib 2023 01 21 12 03 49 378439 w tensorflow compiler tf2tensorrt util py util cc 38 tf trt warning can not dlopen some tensorrt librarie if you would like to use nvidia gpu with tensorrt please make sure the miss library mention 
above be instal properly 2023 01 21 12 03 50 368808 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 21 12 03 50 399759 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libcudnn so 8 dlerror libcudnn so 8 can not open share object file no such file or directory ld library path lib 2023 01 21 12 03 50 399775 w tensorflow core common runtime gpu gpu device cc 1934 can not dlopen some gpu library please make sure the miss library mention above be instal properly if you would like to use gpu follow the guide at for how to download and setup the require library for your platform skip register gpu device 2023 01 21 12 03 50 401226 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag error return a result with an error set
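Independent of the TensorFlow internals, the crashing input could be rejected up front: a NaN stddev (or an empty/invalid shape) is never a meaningful parameter for a truncated normal. A minimal pre-check in plain Python — a hypothetical helper sketching the validation the report is implicitly asking for, not part of the TensorFlow API:

```python
import math

def validate_truncated_normal_args(shape, mean, stddev):
    """Hypothetical sanity check mirroring what the op could verify."""
    if not shape:
        raise ValueError("shape must be non-empty")
    if any(d < 0 for d in shape):
        raise ValueError("shape dimensions must be non-negative")
    if math.isnan(mean) or math.isnan(stddev):
        raise ValueError("mean and stddev must not be NaN")
    if stddev < 0:
        raise ValueError("stddev must be non-negative")
    return True
```

With the repro's arguments (empty shape, stddev = NaN) this raises a descriptive `ValueError`, which would be friendlier than the opaque "error returned a result with an error set".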
tensorflowtensorflow
crash when running tensorflow.python.ops.nn_ops.convolution_internal
Bug
click to expand issue type bug have you reproduce the bug with tf nightly no source source tensorflow version 2 10 0 custom code yes os platform and distribution ubuntu 22 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell very large tensor standalone code to reproduce the issue shell import tensorflow as tf import numpy as np from tensorflow python ops import nn op try try with tf device cpu arg 0 tensor tf random uniform 32 12 12 8 dtype tf float32 arg 0 tf identity arg 0 tensor arg 1 tensor tf random uniform 3 3 8 4 dtype tf float32 arg 1 tf identity arg 1 tensor stride none pad valid datum format none dilation tensor tf constant 102378662306538 shape 1 dtype tf float32 dilation tf identity dilation tensor out nn op convolution internal arg 0 arg 1 stride stride padding padding datum format datum format dilation dilation except exception as e print error str e try with tf device gpu 0 arg 0 tf identity arg 0 tensor arg 0 tf cast arg 0 tf float32 arg 1 tf identity arg 1 tensor arg 1 tf cast arg 1 tf float32 dilation tf identity dilation tensor dilation tf cast dilation tf float32 nn op convolution internal arg 0 arg 1 stride stride padding padding datum format datum format dilation dilation except exception as e print error str e except exception as e print error str e relevant log output shell 2023 01 21 11 02 42 187245 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 21 11 02 42 297559 e tensorflow stream executor cuda cuda blas cc 2981 unable to register cubla factory attempt to register factory for plugin cubla when one have already be register 2023 01 21 11 02 42 
846775 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer so 7 dlerror libnvinfer so 7 can not open share object file no such file or directory ld library path lib 2023 01 21 11 02 42 846966 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer plugin so 7 dlerror libnvinfer plugin so 7 can not open share object file no such file or directory ld library path lib 2023 01 21 11 02 42 846974 w tensorflow compiler tf2tensorrt util py util cc 38 tf trt warning can not dlopen some tensorrt librarie if you would like to use nvidia gpu with tensorrt please make sure the miss library mention above be instal properly 2023 01 21 11 02 43 423840 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 21 11 02 43 454583 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libcudnn so 8 dlerror libcudnn so 8 can not open share object file no such file or directory ld library path lib 2023 01 21 11 02 43 454600 w tensorflow core common runtime gpu gpu device cc 1934 can not dlopen some gpu library please make sure the miss library mention above be instal properly if you would like to use gpu follow the guide at for how to download and setup the require library for your platform skip register gpu device 2023 01 21 11 02 43 454839 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 21 11 02 43 476399 f tensorflow core framework tensor shape cc 404 check fail 0 new num element 0 vs 1 abort
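The CHECK failure ("0 < new_num_elements") is consistent with the spatial output size going non-positive once the dilation is astronomically large. Using the standard VALID-padding formula — effective kernel extent (k-1)*d + 1, output size (n - (k-1)*d - 1)/stride + 1 — the repro's dilation of ~1e14 against a 12-pixel input yields a hugely negative size. A plain-Python sketch of that arithmetic (this is the standard convolution formula, offered as a plausible explanation of the crash, not a trace of the kernel source):

```python
def valid_conv_output_size(input_size, kernel_size, dilation, stride=1):
    """Spatial output size of a VALID convolution (standard formula)."""
    effective_kernel = (kernel_size - 1) * dilation + 1
    return (input_size - effective_kernel) // stride + 1

# Normal case: 12x12 input, 3x3 kernel, dilation 1 -> output size 10.
normal = valid_conv_output_size(12, 3, 1)

# Repro case: dilation ~1e14 makes the output size hugely negative,
# which the kernel should reject with an error instead of CHECK-failing.
broken = valid_conv_output_size(12, 3, 102378662306538)
```

A graceful `InvalidArgumentError` for non-positive computed output sizes would avoid the hard abort.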
tensorflowtensorflow
error when running tensorflow.python.ops.nn_impl.batch_normalization
Bug
click to expand issue type bug have you reproduce the bug with tf nightly no source source tensorflow version 2 10 0 custom code yes os platform and distribution ubuntu 22 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell probably due to empty input tensor or negative argument standalone code to reproduce the issue shell import tensorflow as tf import numpy as np from tensorflow python op import nn impl try try with tf device cpu arg 0 tensor tf random uniform dtype tf float32 arg 0 tf identity arg 0 tensor arg 1 tensor tf random uniform 2 dtype tf float64 arg 1 tf identity arg 1 tensor arg 2 tensor tf saturate cast tf random uniform minval 0 maxval 2 dtype tf int64 dtype tf uint64 arg 2 tf identity arg 2 tensor arg 3 tensor tf random uniform 3 3 3 dtype tf float64 arg 3 tf identity arg 3 tensor arg 4 tensor tf saturate cast tf random uniform 3 3 minval 0 maxval 2 dtype tf int64 dtype tf uint8 arg 4 tf identity arg 4 tensor arg 5 571 out nn impl batch normalization arg 0 arg 1 arg 2 arg 3 arg 4 arg 5 except exception as e print error str e try with tf device gpu 0 arg 0 tf identity arg 0 tensor arg 0 tf cast arg 0 tf float32 arg 1 tf identity arg 1 tensor arg 1 tf cast arg 1 tf float64 arg 2 tf identity arg 2 tensor arg 2 tf cast arg 2 tf uint64 arg 3 tf identity arg 3 tensor arg 3 tf cast arg 3 tf float64 arg 4 tf identity arg 4 tensor arg 4 tf cast arg 4 tf uint8 nn impl batch normalization arg 0 arg 1 arg 2 arg 3 arg 4 arg 5 except exception as e print error str e except exception as e print error str e also on tf 2 11 import tensorflow as tf import numpy as np from tensorflow python op import nn impl try try with tf device cpu arg 0 tensor tf random uniform dtype tf float32 arg 0 tf identity arg 0 tensor arg 1 tensor tf complex tf random uniform dtype tf float64 tf random uniform dtype tf float64 arg 1 tf identity arg 
1 tensor arg 2 tensor tf saturate cast tf random uniform 2 minval 0 maxval 2 dtype tf int64 dtype tf uint64 arg 2 tf identity arg 2 tensor arg 3 tensor tf saturate cast tf random uniform 2 2 minval 0 maxval 2 dtype tf int64 dtype tf uint16 arg 3 tf identity arg 3 tensor arg 4 tensor tf saturate cast tf random uniform minval 0 maxval 2 dtype tf int64 dtype tf uint32 arg 4 tf identity arg 4 tensor arg 5 607 out nn impl batch normalization arg 0 arg 1 arg 2 arg 3 arg 4 arg 5 except exception as e print error str e try with tf device gpu 0 arg 0 tf identity arg 0 tensor arg 0 tf cast arg 0 tf float32 arg 1 tf identity arg 1 tensor arg 1 tf cast arg 1 tf complex128 arg 2 tf identity arg 2 tensor arg 2 tf cast arg 2 tf uint64 arg 3 tf identity arg 3 tensor arg 3 tf cast arg 3 tf uint16 arg 4 tf identity arg 4 tensor arg 4 tf cast arg 4 tf uint32 nn impl batch normalization arg 0 arg 1 arg 2 arg 3 arg 4 arg 5 except exception as e print error str e except exception as e print error str e relevant log output shell 2023 01 21 10 56 09 078632 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2023 01 21 10 56 09 370374 e tensorflow stream executor cuda cuda blas cc 2981 unable to register cubla factory attempt to register factory for plugin cubla when one have already be register 2023 01 21 10 56 10 227902 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer so 7 dlerror libnvinfer so 7 can not open share object file no such file or directory ld library path lib 2023 01 21 10 56 10 228106 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer plugin so 7 dlerror libnvinfer plugin so 7 can not open share object file no such file or 
directory ld library path lib 2023 01 21 10 56 10 228115 w tensorflow compiler tf2tensorrt util py util cc 38 tf trt warning can not dlopen some tensorrt librarie if you would like to use nvidia gpu with tensorrt please make sure the miss library mention above be instal properly 2023 01 21 10 56 11 192165 I tensorflow stream executor cuda cuda gpu executor cc 980 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2023 01 21 10 56 11 221955 w tensorflow stream executor platform default dso loader cc 64 could not load dynamic library libcudnn so 8 dlerror libcudnn so 8 can not open share object file no such file or directory ld library path lib 2023 01 21 10 56 11 221972 w tensorflow core common runtime gpu gpu device cc 1934 can not dlopen some gpu library please make sure the miss library mention above be instal properly if you would like to use gpu follow the guide at for how to download and setup the require library for your platform skip register gpu device 2023 01 21 10 56 11 223466 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag error return a result with an error set error return a result with an error set
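For reference, the computation `batch_normalization` performs per element is y = gamma * (x - mean) / sqrt(variance + epsilon) + beta; the reported failures occur before this math ever runs, while the op rejects the unsigned-integer mean/variance tensors and mismatched shapes. A plain-Python sketch of the formula itself (reference arithmetic only, not the TensorFlow kernel):

```python
import math

def batch_norm(x, mean, variance, beta, gamma, eps):
    """Elementwise batch normalization (reference formula)."""
    return [gamma * (xi - mean) / math.sqrt(variance + eps) + beta
            for xi in x]

out = batch_norm([1.0, 2.0, 3.0], mean=2.0, variance=1.0,
                 beta=0.0, gamma=1.0, eps=0.0)
```

Here `out` is `[-1.0, 0.0, 1.0]`. Nothing in the formula requires float inputs mathematically, which is why the report treats the unhelpful "error returned a result with an error set" message, rather than a clear dtype/shape error, as the bug.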
tensorflowtensorflow
segfault when running tf.image.combined_non_max_suppression
Bug
Issue type: Bug. Have you reproduced the bug with TF nightly? No. Source: binary. TensorFlow version: 2.11.0. Custom code: yes. OS platform and distribution: Ubuntu 22.04. Mobile device: no response. Python version: 3.9. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.

Current behaviour: segfault.

Standalone code to reproduce the issue: please use the files in this link to reproduce the bug.

Relevant log output:

```shell
result = pywrap_tfe.TFE_Py_FastPathExecute(
2023-01-20 07:28:00.340964: W tensorflow/core/kernels/image/non_max_suppression_op.cc:995] Detected a large value for max_total_size. This may cause OOM error. max_total_size: 1051019270
Segmentation fault
```
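The warning in the log already hints at the cause: a `max_total_size` of 1051019270 forces the kernel to allocate output buffers of that many detections. A rough estimate in plain Python, assuming the documented output layout of `combined_non_max_suppression` (boxes of shape `[batch, max_total_size, 4]` plus scores and classes of shape `[batch, max_total_size]`, all float32):

```python
# Rough size of the outputs the op must allocate per batch element.
max_total_size = 1051019270          # value from the warning in the log
floats_per_detection = 4 + 1 + 1     # 4 box coordinates + score + class
bytes_total = max_total_size * floats_per_detection * 4   # float32

gib = bytes_total / 2**30
```

That is roughly 23.5 GiB per batch element, so a segfault-during-allocation is plausible; the report's implicit ask is that the kernel fail with a clean OOM/invalid-argument error rather than crash.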
tensorflowtensorflow
a parameter-check issue in the BatchNorm operator
Bug
Issue type: Bug. Have you reproduced the bug with TF nightly? Yes. Source: source. TensorFlow version: TF 2.3. Custom code: yes. OS platform and distribution: Ubuntu 18.04. Mobile device: no response. Python version: 3.7. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: 10.1 / 7.6.1. GPU model and memory: no response.

Current behaviour:

We find that the implementation of `tf.keras.layers.BatchNormalization` lacks a parameter check. The specific problem comes from its variance parameter. Reproducing the error takes four steps:

1. Initialize a BN operator (the source model), feed it a random input (data), and record the output (source_result).
2. Set a constant delta (a negative value such as -1), subtract delta from the variance of the source model, and add delta to epsilon, yielding a new BN operator (the follow model).
3. Feed data to the follow model and record follow_result.
4. Calculate the distance between source_result and follow_result. Theoretically it should be small or even 0; in practice a very large result is obtained.

We implemented the same method in PyTorch and MindSpore; there is no problem with the outputs of PyTorch and MindSpore, which prompt that the variance of the BN operator should be in the (0, 1) interval.

Standalone code to reproduce the issue (reconstructed from the flattened report; the exact perturbation and sign conventions are restored to match the description above):

```python
from tensorflow.keras.layers import BatchNormalization, Input
from tensorflow.keras.models import Model, clone_model
import os
import re
import numpy as np

def sourcemodel(shape):
    x = Input(shape=shape[1:])
    y = BatchNormalization(axis=-1)(x)
    return Model(x, y)

def followmodel(delta, source_model):
    follow_model = clone_model(source_model)
    weights = source_model.get_weights()
    weights_name = [weight.name for layer in source_model.layers
                    for weight in layer.weights]
    variance_idx = findweightsidx("variance", weights_name)
    delta += np.random.uniform(-1e-3, 1e-3)
    follow_model.layers[-1].epsilon += delta   # add delta to epsilon
    weights[variance_idx] -= delta             # subtract delta from variance
    follow_model.set_weights(weights)
    return follow_model

def findweightsidx(name, weights_name):
    # find the weight index by name
    for idx, n in enumerate(weights_name):
        if re.search(name, n):
            return idx
    return -1

os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

shape = (10, 32, 32, 3)
data = np.random.uniform(-1, 1, shape)
delta = -1
source_model = sourcemodel(shape)
follow_model = followmodel(delta, source_model)
source_result = source_model(data)
follow_result = follow_model(data)
dis = np.sum(np.abs(source_result - follow_result))
print("delta:", delta, "dis:", dis)
```

Relevant log output:

```shell
delta: -1  dis: 4510.4536
```
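The metamorphic relation in steps 2–3 is exact in real arithmetic: subtracting delta from the variance while adding the same delta to epsilon leaves the denominator sqrt(variance + epsilon) unchanged, so the two outputs must agree. A plain-Python check of the relation itself (this verifies the math the report relies on, not the Keras implementation):

```python
import math

def bn(x, mean, var, eps, gamma=1.0, beta=0.0):
    """Reference batch-norm formula for a single element."""
    return gamma * (x - mean) / math.sqrt(var + eps) + beta

x, mean, var, eps, delta = 0.7, 0.1, 0.9, 1e-3, -1.0

source = bn(x, mean, var, eps)
follow = bn(x, mean, var - delta, eps + delta)  # var + eps is unchanged

diff = abs(source - follow)  # ~0 up to floating-point rounding
```

Since the relation holds for any delta that keeps variance + epsilon positive, the large `dis` reported above indicates the framework is doing something other than the documented formula for out-of-range variance values, rather than a flaw in the test construction.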
tensorflowtensorflow
tensorflow khaokho29th
Invalid
system information android device information use adb shell getprop ro build fingerprint if possible tensorflow lite in play service sdk version find in build gradle google play service version setting app google play service app detail standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to or attach code demonstrate the problem any other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach
tensorflowtensorflow
issue
Invalid
tensorflowtensorflow
problem with loading example dataset
Bug
Issue type: Bug. Have you reproduced the bug with TF nightly? No. Source: source. TensorFlow version: TF 2.9.2. Custom code: no. OS platform and distribution: Google Colab. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.

Current behaviour:

Good afternoon. I am learning from the Colab notebook example "Transfer learning for the audio domain with TensorFlow Lite Model Maker". In this example the birds dataset cannot be loaded: the link to the zip is invalid. How can I get this dataset for this example?

Standalone code to reproduce the issue:

```python
# Example text from #scrollTo=upNRfilknSmr
birds_dataset_folder = tf.keras.utils.get_file(
    'birds_dataset.zip',
    ...,  # the zip URL from the notebook (elided in this report); the link is invalid
    cache_dir='./',
    cache_subdir='dataset',
    extract=True)
```

Relevant log output:

```shell
Downloading data from ...
HTTPError                                 Traceback (most recent call last)
/usr/local/lib/python3.8/dist-packages/keras/utils/data_utils.py in get_file(fname, origin, untar, md5_hash, file_hash, cache_subdir, hash_algorithm, extract, archive_format, cache_dir)
    276   try:
    277     urlretrieve(origin, fpath, dl_progress)
    278   except urllib.error.HTTPError as e:
8 frames
HTTPError: HTTP Error 404: Not Found

During handling of the above exception, another exception occurred:

Exception                                 Traceback (most recent call last)
/usr/local/lib/python3.8/dist-packages/keras/utils/data_utils.py in get_file(fname, origin, untar, md5_hash, file_hash, cache_subdir, hash_algorithm, extract, archive_format, cache_dir)
    277     urlretrieve(origin, fpath, dl_progress)
    278   except urllib.error.HTTPError as e:
--> 279     raise Exception(error_msg.format(origin, e.code, e.msg))
    280   except urllib.error.URLError as e:
    281     raise Exception(error_msg.format(origin, e.errno, e.reason))

Exception: URL fetch failure on ...: 404 -- Not Found
```
tensorflowtensorflow
crash when running tensorflow.python.ops.math_ops.sparse_segment_sum
Bug
Issue type: Bug. Have you reproduced the bug with TF nightly? No. Source: binary. TensorFlow version: 2.11.0. Custom code: yes. OS platform and distribution: Ubuntu 22.04. Mobile device: no response. Python version: 3.9. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.

Current behaviour: abort when running python.ops.math_ops.sparse_segment_sum.

Standalone code to reproduce the issue (the indices/segment_ids values were lost in the flattened report; empty lists are the reconstruction consistent with the error below):

```python
import tensorflow as tf
import numpy as np
from tensorflow.python.ops import math_ops

try:
    try:
        with tf.device('CPU'):
            data_tensor = tf.random.uniform([1], dtype=tf.float64)
            data = tf.identity(data_tensor)
            indices = []
            segment_ids = []
            out = math_ops.sparse_segment_sum(data=data, indices=indices,
                                              segment_ids=segment_ids)
    except Exception as e:
        print("Error:", str(e))
    try:
        with tf.device('GPU:0'):
            data = tf.identity(data_tensor)
            data = tf.cast(data, tf.float64)
            indices = []
            segment_ids = []
            math_ops.sparse_segment_sum(data=data, indices=indices,
                                        segment_ids=segment_ids)
    except Exception as e:
        print("Error:", str(e))
except Exception as e:
    print("Error:", str(e))
```

Relevant log output:

```shell
2023-01-09 17:01:07.909340: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:1159] failed to enqueue async memcpy from device to host: CUDA_ERROR_INVALID_VALUE: invalid argument; host dst: 0x7f6e6ec00000; GPU src: 0xfffffffffffffffc; size: 4=0x4
Error: {{function_node __wrapped__SparseSegmentSum_device_/job:localhost/replica:0/task:0/device:GPU:0}} SparseSegmentSum: failed to copy last segment id from device [Op:SparseSegmentSum]
2023-01-09 17:01:07.910082: F tensorflow/core/common_runtime/gpu/gpu_util.cc:386] GPU->CPU Memcpy failed
Aborted
```
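For context, the operation itself is well defined even for empty inputs: output row s is the sum of data[indices[i]] over positions where segment_ids[i] == s, so empty indices/segment_ids should simply yield an empty output instead of a failed device memcpy and an abort. A reference implementation in plain Python (a sketch of the semantics, not the TensorFlow kernel):

```python
def sparse_segment_sum(data, indices, segment_ids):
    """Reference sparse segment sum over a 1-D data list."""
    num_segments = (max(segment_ids) + 1) if segment_ids else 0
    out = [0.0] * num_segments
    for idx, seg in zip(indices, segment_ids):
        out[seg] += data[idx]
    return out

empty = sparse_segment_sum([0.5], [], [])                       # -> []
summed = sparse_segment_sum([1.0, 2.0, 3.0], [0, 2], [0, 0])    # -> [4.0]
```

The CPU path in the repro indeed raises a Python-level error; the bug is that the GPU path dereferences the (nonexistent) last segment id and brings the whole process down.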
tensorflowtensorflow
documentation: tf.raw_ops.RealDiv input tensors cannot be of integer dtype
Bug
Issue type: Documentation Bug. Have you reproduced the bug with TF nightly? Yes. Source: binary. TensorFlow version: TF 2.9.1. Custom code: yes. OS platform and distribution: Linux (WSL2, Ubuntu 20.04 LTS). Mobile device: no response. Python version: 3.8.10. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.

Current behaviour:

The documentation for `tf.raw_ops.RealDiv` states that the input argument x must be a Tensor of type bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, uint32, uint64, int64, complex64, or complex128. Upon usage, however, this op throws an exception when run with an input of any integer dtype; when run on a CPU, the op only works for float and complex dtypes.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

dtype = "int64"
x = np.array([1, 2, 4, 2, 3, 5], dtype=dtype)
y = np.array([1, 2, 4, 2, 3, 5], dtype=dtype)
x = tf.constant(x, dtype=dtype)
y = tf.constant(y, dtype=dtype)
tf.raw_ops.RealDiv(x=x, y=y, name=None)
```

Relevant log output:

```shell
NotFoundError                             Traceback (most recent call last)
<ipython-input> in <module>
      4 x = tf.constant(x, dtype=dtype)
      5 y = tf.constant(y, dtype=dtype)
----> 6 tf.raw_ops.RealDiv(x=x, y=y, name=None)

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
   7162 def raise_from_not_ok_status(e, name):
   7163   e.message += (" name: " + name if name is not None else "")
-> 7164   raise core._status_to_exception(e) from None  # pylint: disable=protected-access

NotFoundError: Could not find device for node: {{node RealDiv}} = RealDiv[T=DT_INT64]
All kernels registered for op RealDiv:
  device='CPU'; T in [DT_COMPLEX128]
  device='CPU'; T in [DT_COMPLEX64]
  device='CPU'; T in [DT_BFLOAT16]
  device='CPU'; T in [DT_DOUBLE]
  device='CPU'; T in [DT_HALF]
  device='CPU'; T in [DT_FLOAT]
  device='GPU'; T in [DT_COMPLEX128]
  device='GPU'; T in [DT_COMPLEX64]
  device='GPU'; T in [DT_DOUBLE]
  device='GPU'; T in [DT_FLOAT]
  device='GPU'; T in [DT_HALF]
 [Op:RealDiv]
```
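Given the kernel list in the error message, the practical workaround for integer inputs is to cast to a floating dtype first; RealDiv then performs ordinary elementwise real (true) division. Its arithmetic in plain Python (a sketch of the semantics, not the TensorFlow kernel):

```python
def real_div(xs, ys):
    """Elementwise real (true) division, as RealDiv performs on float inputs."""
    return [float(x) / float(y) for x, y in zip(xs, ys)]

out = real_div([1, 2, 4], [1, 2, 5])  # integer inputs cast to float first
```

Here `out` is `[1.0, 1.0, 0.8]`. The documentation fix requested is simply to trim the advertised dtype list down to the dtypes for which kernels are actually registered.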
tensorflowtensorflow
segmentation fault when running gen_sparse_ops.sparse_cross
Bug
click to expand issue type bug have you reproduce the bug with tf nightly yes source source tensorflow version 2 10 0 custom code yes os platform and distribution ubuntu 22 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell segfault when run with the follow input argument standalone code to reproduce the issue shell import tensorflow as tf import os import numpy as np from tensorflow python ops import gen sparse op try indice value 0 tensor tf convert to tensor np one 3 dtype str value 0 tf identity value 0 tensor value 1 tensor tf convert to tensor np one 1 dtype str value 1 tf identity value 1 tensor value 2 tensor tf convert to tensor np one 2 dtype str value 2 tf identity value 2 tensor value value 0 value 1 value 2 shape 0 tensor tf random uniform 2 minval 256 maxval 257 dtype tf int64 shape 0 tf identity shape 0 tensor shape 1 tensor tf random uniform 2 minval 256 maxval 257 dtype tf int64 shape 1 tf identity shape 1 tensor shape 2 tensor tf random uniform 2 minval 256 maxval 257 dtype tf int64 shape 2 tf identity shape 2 tensor shape shape 0 shape 1 shape 2 dense input hash output true num bucket 1000 hash key 956888297470 out type tf int64 internal type tf string out gen sparse op sparse cross index indice value value shape shape dense input dense input hash output hash output num bucket num bucket hash key hash key out type out type internal type internal type except exception as e print error str e import tensorflow as tf import os import numpy as np from tensorflow python ops import gen sparse op try indice value 0 tensor tf convert to tensor np one 3 dtype str value 0 tf identity value 0 tensor value 1 tensor tf convert to tensor np one 1 dtype str value 1 tf identity value 1 tensor value 2 tensor tf convert to tensor np one 2 dtype str value 2 tf identity value 2 tensor value value 0 value 1 value 2 shape 0 
tensor tf saturate cast tf constant 108945022180484 shape dtype tf int64 dtype tf uint16 shape 0 tf identity shape 0 tensor shape 1 tensor tf complex tf constant 47816739039262 shape 16 0 dtype tf float64 tf constant 102167786476375 shape 16 0 dtype tf float64 shape 1 tf identity shape 1 tensor shape 2 tensor tf constant 83109722927677 shape dtype tf int32 shape 2 tf identity shape 2 tensor shape shape 0 shape 1 shape 2 dense input hash output true num bucket 1000 hash key 956888297470 out type tf int64 internal type tf string out gen sparse op sparse cross index indice value value shape shape dense input dense input hash output hash output num bucket num bucket hash key hash key out type out type internal type internal type except exception as e print error str e relevant log output shell segmentation fault
tensorflowtensorflow
process killed when running array_ops.concat
Bug
click to expand issue type bug have you reproduce the bug with tf nightly yes source source tensorflow version 2 10 0 custom code yes os platform and distribution ubuntu 22 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell process kill when run the api with tensor with different rank standalone code to reproduce the issue shell import tensorflow as tf import os import numpy as np from tensorflow python op import array op try arg 0 0 tensor tf saturate cast tf random uniform 2147483654 minval 128 maxval 128 dtype tf int64 dtype tf int8 arg 0 0 tf identity arg 0 0 tensor arg 0 1 tensor tf saturate cast tf random uniform 1024 minval 128 maxval 128 dtype tf int64 dtype tf int8 arg 0 1 tf identity arg 0 1 tensor arg 0 arg 0 0 arg 0 1 arg 1 0 out array op concat arg 0 arg 1 except exception as e print error str e import tensorflow as tf import os import numpy as np from tensorflow python op import array op try arg 0 0 tensor tf saturate cast tf random uniform 2147483654 minval 128 maxval 128 dtype tf int64 dtype tf int8 arg 0 0 tf identity arg 0 0 tensor arg 0 1 tensor tf saturate cast tf random uniform 1024 minval 128 maxval 128 dtype tf int64 dtype tf int8 arg 0 1 tf identity arg 0 1 tensor arg 0 arg 0 0 arg 0 1 arg 1 nan out array op concat arg 0 arg 1 except exception as e print error str e also on tf 2 11 import tensorflow as tf import os import numpy as np from tensorflow python op import array op try arg 0 0 tensor tf saturate cast tf random uniform 2147483654 minval 128 maxval 128 dtype tf int64 dtype tf int8 arg 0 0 tf identity arg 0 0 tensor arg 0 1 tensor tf saturate cast tf random uniform 1024 minval 128 maxval 128 dtype tf int64 dtype tf int8 arg 0 1 tf identity arg 0 1 tensor arg 0 arg 0 0 arg 0 1 arg 1 nan out array op concat arg 0 arg 1 except exception as e print error str e relevant log output shell 2023 01 08 
19 27 23 166350 w tensorflow core framework cpu allocator impl cc 82 allocation of 17179869232 exceed 10 of free system memory kill
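The "Killed" outcome is explained by the allocation logged just before it. A plausible reading (an inference from the logged size, not from the kernel source) is that `tf.random.uniform` first draws int64 values before `saturate_cast` narrows them to int8, so the intermediate tensor needs 8 bytes per element — exactly the 17179869232-byte allocation in the warning. It is also worth noting that the requested length sits just past the int32 range. Plain-Python arithmetic:

```python
# Sizes implied by the repro, in plain Python.
num_elements = 2147483654            # length of the first concat input

# Intermediate random int64 tensor before saturate_cast to int8:
int64_bytes = num_elements * 8       # = 17179869232, the allocation in the log
int8_bytes = num_elements            # final int8 tensor, ~2 GiB

overshoot = num_elements - (2**31 - 1)   # how far past int32 range
```

The overshoot of 7 elements past 2**31 - 1 suggests the repro was constructed to probe int32-overflow handling; a kernel-side size check raising a clean error would be preferable to the OOM kill.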
tensorflowtensorflow
documentation: tf.raw_ops.Conv3D strides cannot be a list of ints that has length >= 5
Bug
Issue type: Documentation Bug. Have you reproduced the bug with TF nightly? Yes. Source: binary. TensorFlow version: TF 2.9.1. Custom code: yes. OS platform and distribution: Linux Ubuntu 20.04.5 LTS (WSL2). Mobile device: no response. Python version: 3.8.10. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response.

Current behaviour:

The documentation for `tf.raw_ops.Conv3D` states that strides must be "a list of ints that has length >= 5", which is immediately followed by another statement that strides must be a 1-D tensor of length 5. These statements partly contradict each other, since strides cannot be any list that has length >= 5, but only a list that has length exactly 5. The latter statement is the correct one, while the former should either be removed from the documentation (redundant/false information) or corrected if kept.

Standalone code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

x_in = np.random.randn(1, 3, 3, 3, 1)
kernel_in = np.random.randn(3, 3, 3, 1, 1)
x = tf.constant(x_in, dtype=tf.float32)
kernel = tf.constant(kernel_in, dtype=tf.float32)
tf.raw_ops.Conv3D(input=x, filter=kernel, strides=[1, 1, 1, 1, 1, 1],
                  padding='SAME', data_format='NDHWC',
                  dilations=[1, 1, 1, 1, 1])
```

Relevant log output:

```shell
InvalidArgumentError: {{function_node __wrapped__Conv3D_device_/job:localhost/replica:0/task:0/device:CPU:0}} Sliding window strides field must specify 5 dimensions [Op:Conv3D]
```
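The op's actual requirement can be stated (and checked) precisely: strides must have exactly 5 entries, matching the NDHWC layout [batch, depth, height, width, channels], with the batch and channel strides equal to 1. A hypothetical client-side pre-check in plain Python, sketching the rule the documentation should describe:

```python
def validate_conv3d_strides(strides):
    """Hypothetical client-side check for Conv3D strides (NDHWC layout)."""
    if len(strides) != 5:
        raise ValueError("strides must specify exactly 5 dimensions")
    if strides[0] != 1 or strides[4] != 1:
        raise ValueError("batch and channel strides must be 1")
    return True

ok = validate_conv3d_strides([1, 1, 1, 1, 1])      # valid
# validate_conv3d_strides([1, 1, 1, 1, 1, 1])      # raises: 6 entries, as in the repro
```

The six-element list in the repro above fails this check, matching the runtime error message.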
tensorflowtensorflow
segmentation fault when running nn_ops.fractional_avg_pool_v2
Bug
click to expand issue type bug have you reproduce the bug with tf nightly yes source source tensorflow version 2 11 0 custom code yes os platform and distribution ubuntu 22 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell segmentation fault standalone code to reproduce the issue shell import tensorflow as tf import os import numpy as np from tensorflow python ops import nn op try arg 0 tensor tf random uniform 3 30 50 3 dtype tf float64 arg 0 tf identity arg 0 tensor arg 1 0 2 arg 1 1 3 arg 1 2 2 arg 1 3 1 arg 1 arg 1 0 arg 1 1 arg 1 2 arg 1 3 arg 2 true arg 3 true seed 341261001 out nn op fractional avg pool v2 arg 0 arg 1 arg 2 arg 3 seed seed except exception as e print error str e relevant log output shell error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 0 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 0 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 978 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 0 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 0 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 39 2679482 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 0 op fractionalavgpool error function node wrap 
fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 7 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 1 op fractionalavgpool error value for attr t of bool be not in the list of allow value float double int32 int64 nodedef node fractionalavgpool op output t row pooling sequence int64 col pool sequence int64 attr pool ratio list float min 4 attr pseudo random bool default false attr overlap bool default false attr deterministic bool default false attr seed int default 0 attr seed2 int default 0 attr t type allow dt float dt double dt int32 dt int64 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 1 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 23 2679501 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 1 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 21 5857868 op fractionalavgpool error expect bool for argument overlap not zero error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 0 op fractionalavgpool error value for attr t of complex128 be not in the list of allow value float double int32 int64 nodedef node fractionalavgpool op output t row pooling sequence int64 col pool sequence int64 attr pool ratio list float min 4 attr pseudo random bool default false attr overlap bool default false attr deterministic bool default false attr seed int default 0 attr seed2 int default 0 attr t type allow dt float dt double 
dt int32 dt int64 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 26 5857868 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 58 5857849 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 0 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 63 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 1 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 0 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 0 op fractionalavgpool error function node wrap fractionalavgpool device job localhost replica 0 task 0 device cpu 0 pool ratio can not be small than 1 get 1 op fractionalavgpool error expect bool for argument overlap not error expect float for argument pooling ratio not segmentation fault
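Not part of the report: the log suggests the crash is reached after invalid pooling ratios (booleans, values below 1) slip past validation. As a hedged illustration only, a plain-Python pre-check for the documented "pooling ratio must be >= 1" rule might look like this (`check_pooling_ratio` is a hypothetical helper, not a TensorFlow API):

```python
def check_pooling_ratio(pooling_ratio):
    """Hypothetical pre-check mirroring FractionalAvgPool's documented
    requirement that every pooling ratio be a number >= 1 (and not a bool)."""
    for r in pooling_ratio:
        if isinstance(r, bool) or not isinstance(r, (int, float)) or r < 1:
            raise ValueError(f"pooling ratio must be a number >= 1, got {r!r}")

# the report's valid case passes; a boolean ratio is rejected up front
# instead of reaching the kernel
check_pooling_ratio([2, 3, 2, 1])
try:
    check_pooling_ratio([True, 3, 2, 1])
except ValueError as e:
    caught = str(e)
```

Rejecting the bad ratios at the Python level would turn the segfault into the same kind of `ValueError` the op already raises for other invalid inputs.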
tensorflowtensorflow
CHECK failure when running tensorflow.python.ops.gen_data_flow_ops.record_input
Bug
click to expand issue type bug have you reproduce the bug with tf nightly no source source tensorflow version 2 10 0 custom code yes os platform and distribution ubuntu 22 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell check failure when run with the follow input combination standalone code to reproduce the issue shell import tensorflow as tf import os import numpy as np from tensorflow python ops import gen datum flow op try file pattern tmp record input test3nvh1t09 tmp3gauzk6b basic file buffer size 1 file parallelism 1 file shuffle shift ratio 2 batch size 1 file random seed 2 compression type out gen datum flow op record input file pattern file pattern file buffer size file buffer size file parallelism file parallelism file shuffle shift ratio file shuffle shift ratio batch size batch size file random seed file random seed compression type compression type except exception as e print error str e relevant log output shell 2023 01 05 22 01 15 270432 f tensorflow core platform threadpool cc 99 check fail num thread 1 1 vs 0 abort
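Not part of the report: the log (`Check failed: num_threads >= 1 (-1 vs. 0)` in threadpool.cc) points at a `file_parallelism` value below 1 reaching the ThreadPool constructor, which aborts the whole process. A hedged sketch of the guard that would raise a Python error instead (`validate_file_parallelism` is a hypothetical helper, not TensorFlow API):

```python
def validate_file_parallelism(file_parallelism):
    """Hypothetical guard: RecordInput builds a ThreadPool with
    file_parallelism threads, and ThreadPool CHECK-fails (aborting the
    process) unless num_threads >= 1."""
    if file_parallelism < 1:
        raise ValueError(f"file_parallelism must be >= 1, got {file_parallelism}")

validate_file_parallelism(4)  # ok
try:
    validate_file_parallelism(-1)  # rejected instead of aborting
except ValueError as e:
    msg = str(e)
```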
tensorflowtensorflow
TFDefaultLogSink::Send uses buffered writes, leading to delayed writes to the log file
Bug
click to expand issue type bug have you reproduce the bug with tf nightly yes source binary tensorflow version 2 11 0 custom code no os platform and distribution linux mobile device n a python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell when set the tf cpp vlog filename env var it would be great to have all log available almost immediately when write by the tf c library at the moment the output file on non android platform be open with buffering which delay output l183 l191 I think this work with stderr at the moment as the default behavior of fopen with stderr on most platform be to flush after each line standalone code to reproduce the issue shell here s an example use a name pipe fifo rm test fifo 2 dev null mkfifo test fifo tf cpp vlog filename readlink f test fifo python c import time import tensorflow time sleep 5 date cat test fifo ts rm test fifo relevant log output shell produce 1 do tf cpp vlog filename readlink f test fifo python c import time import tensorflow time sleep 5 1 20384 thu jan 5 14 35 28 pst 2023 jan 05 14 35 36 2023 01 05 14 35 30 119875 w tensorflow compiler xla stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer so 7 dlerror libnvinfer so 7 can not open share object file no such file or directory ld library path usr local nvidia lib64 jan 05 14 35 36 2023 01 05 14 35 30 119923 w tensorflow compiler xla stream executor platform default dso loader cc 64 could not load dynamic library libnvinfer plugin so 7 dlerror libnvinfer plugin so 7 can not open share object file no such file or directory ld library path usr local nvidia lib64 jan 05 14 35 36 2023 01 05 14 35 30 119929 w tensorflow compiler tf2tensorrt util py util cc 38 tf trt warning can not dlopen some tensorrt librarie if you would like to use nvidia gpu with tensorrt please make sure the miss library mention above be instal 
properly jan 05 14 35 36 2023 01 05 14 35 29 256014 I tensorflow core platform cpu feature guard cc 193 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma jan 05 14 35 36 to enable they in other operation rebuild tensorflow with the appropriate compiler flag notice the 5 second gap between the start of the python process and the tf log this be due to the log buffer only be flush on tf process close here l193 l197
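Not part of the report: the behavior the reporter asks for corresponds to line buffering on the sink file. A minimal Python illustration of the difference (the C++ fix in logging.cc would be the analogous `setvbuf`/flush-per-line change; file names here are temporary and hypothetical):

```python
import os
import tempfile

# buffering=1 requests line buffering for a text-mode file: every
# newline-terminated write is flushed to disk immediately, which is the
# behavior the report asks for from the TF_CPP_VLOG_FILENAME sink
path = os.path.join(tempfile.mkdtemp(), "demo.log")
sink = open(path, "w", buffering=1)
sink.write("first line\n")

# a concurrent reader sees the line before the sink is ever closed
with open(path) as reader:
    visible = reader.read()
sink.close()
```

With full buffering (the current behavior on non-Android platforms), `visible` would be empty until the buffer fills or the process exits, which is exactly the 5-second gap shown with the named pipe above.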
tensorflowtensorflow
issue create for rollback of pr 58358 fix unigram assert
Bug
The merged PR #58358 was rolled back in 4981cacfe5da6e43b08f92a08d8d8a9c54b41197. Please follow up with the reviewers and close this issue once it is resolved.
tensorflowtensorflow
TypeError: cannot pickle 'weakref' object
Bug
Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: source. TensorFlow version: tf 2.11.0. Custom code: Yes. OS platform and distribution: Windows 11. Mobile device: No response. Python version: 3.7. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behaviour: TypeError: cannot pickle 'weakref' object.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import pickle

    output_file = open("save.dat", "wb")
    pickle.dump(tf.keras.optimizers.Adam(), output_file)

Relevant log output: No response.
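Not part of the report: a hedged workaround sketch. Live optimizer objects hold weakrefs, but their hyperparameters round-trip through `get_config()`/`from_config()`, which pickle cleanly. The `Adam` class below is a plain-Python stand-in (not the real `tf.keras.optimizers.Adam`), used only so the pattern is self-contained:

```python
import pickle

class Adam:
    """Stand-in for tf.keras.optimizers.Adam (not the real class); real Keras
    optimizers expose the same get_config()/from_config() pair."""
    def __init__(self, learning_rate=0.001, beta_1=0.9):
        self.learning_rate = learning_rate
        self.beta_1 = beta_1
    def get_config(self):
        return {"learning_rate": self.learning_rate, "beta_1": self.beta_1}
    @classmethod
    def from_config(cls, config):
        return cls(**config)

# pickling the plain-dict config avoids the weakrefs held by the live object
blob = pickle.dumps(Adam(learning_rate=0.01).get_config())
restored = Adam.from_config(pickle.loads(blob))
```

Note this persists hyperparameters only, not slot variables (momentum state); for full state, saving the whole model is the supported path.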
tensorflowtensorflow
TypeError when modifying a tf.keras optimizer parameter
Bug
click to expand issue type bug have you reproduce the bug with tf nightly no source source tensorflow version v1 15 0 rc3 22 g590d6eef7e 1 15 0 custom code yes os platform and distribution window 10 19042 2364 mobile device no response python version 3 7 bazel version no response gcc compiler version no response cuda cudnn version cuda 10 1 gpu model and memory no response current behaviour shell I have be try to implement a heavily inspire gradient accumulation wrapper for my adam optimizer below and have be receive a strange typeerror standalone code to reproduce the issue shell class accumoptimizer tf keras optimizers optimizer def init self optimizer step per update 1 kwargs super accumoptimizer self init name accumoptimizer kwargs self optimizer optimizer self step per update step per update self iteration tf variable 0 dtype int64 name iteration self cond tf equal self iteration self step per update 0 self lr self optimizer learning rate self optimizer learning rate tf cond self cond lambda self optimizer learning rate lambda tf constant 0 tf float32 relevant log output shell file q datum common net ai code segmentation siddharth 32 grad accum net py line 455 in optimizer fn optimizer accumoptimizer optimizer adam step per update 8 file q datum common net ai code segmentation siddharth 32 grad accum grad accum py line 27 in init lambda tf constant 0 tf float32 file c dlcodereaderproject dlearner venv lib site package tensorflow core python keras optimizer v2 optimizer v2 py line 557 in setattr self set hyper name value file c dlcodereaderproject dlearner venv lib site package tensorflow core python keras optimizer v2 optimizer v2 py line 521 in set hyper backend set value self hyper name value file c dlcodereaderproject dlearner venv lib site package tensorflow core python keras backend py line 3199 in set value value np asarray value dtype dtype x file c dlcodereaderproject dlearner venv lib site package numpy core asarray py line 83 in asarray return array 
a dtype copy false order order typeerror array take 1 positional argument but 2 be give
tensorflowtensorflow
tf.image.extract_patches fails with XLA: float32 input, but error message about int64 input
Bug
Issue type: Bug. Have you reproduced the bug with tf-nightly: Yes. Source: binary. TensorFlow version: 2.11. Custom code: Yes. OS platform and distribution: Ubuntu 20.04 / macOS 12.6. Mobile device: No response. Python version: 3.8. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behaviour: in some cases (see link below), tf.image.extract_patches with float32 input and XLA enabled says that the forward-pass op requested the int64 dtype, which is not registered, even though the input dtype is float32.

Standalone code to reproduce the issue:

Relevant log output:

    Detected unsupported operations when trying to compile graph __inference_run_step_3512 on XLA_GPU_JIT: ExtractImagePatches.
    No registered 'ExtractImagePatches' OpKernel for XLA_GPU_JIT devices compatible with node {{node gradient_tape/model_2/temp_2/ExtractImagePatches}}
    (OpKernel was found, but attributes didn't match) Requested Attributes:
    T=DT_INT64, ksizes=[1, 16, 16, 1], padding="SAME", rates=[1, 1, 1, 1], strides=[1, 8, 8, 1]
    {{node gradient_tape/model_2/temp_2/ExtractImagePatches}}
tensorflowtensorflow
tf.truncatediv does not support float, but the docs say it does
Bug
Issue type: Bug. Source: source. TensorFlow version: v2.9.1-132-g18960c44ad3 2.9.2. Custom code: Yes. OS platform and distribution: Colab. Mobile device: Colab. Python version: 3.8. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behaviour: the tf.truncatediv documentation shows that float is supported, but the actual kernels only support integer types (see the documentation).

Standalone code to reproduce the issue:

    import tensorflow as tf
    import numpy as np

    a = np.array([1., 2., 3.])
    x = tf.convert_to_tensor(a)
    b = np.array([2.])
    y = tf.convert_to_tensor(b)
    tf.truncatediv(x, y)

Relevant log output:

    NotFoundError: Could not find device for node: {{node TruncateDiv}} = TruncateDiv[T=DT_DOUBLE]
    All kernels registered for op TruncateDiv:
      device='GPU'; T in [DT_UINT64]; device='GPU'; T in [DT_UINT32]; device='GPU'; T in [DT_INT8];
      device='GPU'; T in [DT_INT64]; device='GPU'; T in [DT_INT16]; device='GPU'; T in [DT_UINT16];
      device='GPU'; T in [DT_UINT8]; device='CPU'; T in [DT_INT64]; device='CPU'; T in [DT_INT32];
      device='CPU'; T in [DT_INT16]; device='CPU'; T in [DT_INT8]; device='CPU'; T in [DT_UINT64];
      device='CPU'; T in [DT_UINT32]; device='CPU'; T in [DT_UINT16]; device='CPU'; T in [DT_UINT8]
    [Op:TruncateDiv]
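Not part of the report: until the float kernels match the documentation, the documented semantics (division rounded toward zero) can be reproduced in plain Python as a stand-in:

```python
import math

# truncated division rounds the quotient toward zero; this is the behavior
# tf.truncatediv documents, shown here with math.trunc as a stand-in
a = [7.0, -7.0, 1.0, 3.0]
b = [2.0, 2.0, 2.0, 2.0]
q = [float(math.trunc(x / y)) for x, y in zip(a, b)]
```

Note the -7/2 case: truncation gives -3, whereas floor division would give -4, which is the distinction that makes the missing float kernel matter.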
tensorflowtensorflow
Crash when running tensorflow.python.ops.gen_bitwise_ops.population_count
Bug
click to expand issue type bug source binary tensorflow version v2 4 0 custom code yes os platform and distribution linux ubuntu 20 4 mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version release 11 0 v11 0 194 build cuda 11 0 bu tc445 37 28540450 0 gpu model and memory no response current behaviour shell when tensorflow python op gen bitwise op population count be give empty input parameter tensor it result in crash standalone code to reproduce the issue shell import tensorflow as tf from tensorflow python ops import gen bitwise op try try with tf device cpu arg 0 tensor arg 0 tf identity arg 0 tensor gen bitwise op population count arg 0 except exception as e error str e try with tf device gpu 0 arg 0 tf identity arg 0 tensor arg 0 tf cast arg 0 tf int8 gen bitwise op population count arg 0 except exception as e error str e except exception as e error str e relevant log output shell 2022 12 25 14 40 39 610623 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2022 12 25 14 40 40 542998 I tensorflow compiler jit xla cpu device cc 41 not create xla device tf xla enable xla device not set 2022 12 25 14 40 40 543474 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcuda so 1 2022 12 25 14 40 40 581144 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 14 40 40 581307 I tensorflow core common runtime gpu gpu device cc 1720 find device 0 with property pcibusid 0000 01 00 0 name nvidia geforce gtx 1660 ti computecapability 7 5 coreclock 1 77ghz corecount 24 devicememorysize 5 80gib devicememorybandwidth 268 26gib s 2022 12 25 14 40 40 581330 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 
11 0 2022 12 25 14 40 40 583691 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 11 2022 12 25 14 40 40 583735 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 11 2022 12 25 14 40 40 584684 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcufft so 10 2022 12 25 14 40 40 584890 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcurand so 10 2022 12 25 14 40 40 587243 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusolver so 10 2022 12 25 14 40 40 587742 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusparse so 11 2022 12 25 14 40 40 587842 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudnn so 8 2022 12 25 14 40 40 587915 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 14 40 40 588063 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 14 40 40 588158 I tensorflow core common runtime gpu gpu device cc 1862 add visible gpu device 0 2022 12 25 14 40 40 589082 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2022 12 25 14 40 40 589154 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 14 40 40 589258 I tensorflow core common runtime gpu gpu device cc 1720 find device 0 with property pcibusid 0000 01 00 0 name 
nvidia geforce gtx 1660 ti computecapability 7 5 coreclock 1 77ghz corecount 24 devicememorysize 5 80gib devicememorybandwidth 268 26gib s 2022 12 25 14 40 40 589274 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2022 12 25 14 40 40 589289 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 11 2022 12 25 14 40 40 589302 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 11 2022 12 25 14 40 40 589315 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcufft so 10 2022 12 25 14 40 40 589329 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcurand so 10 2022 12 25 14 40 40 589341 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusolver so 10 2022 12 25 14 40 40 589353 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusparse so 11 2022 12 25 14 40 40 589365 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudnn so 8 2022 12 25 14 40 40 589414 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 14 40 40 589502 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 14 40 40 589566 I tensorflow core common runtime gpu gpu device cc 1862 add visible gpu device 0 2022 12 25 14 40 40 589582 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2022 12 25 14 40 40 895564 I tensorflow core common runtime gpu 
gpu device cc 1261 device interconnect streamexecutor with strength 1 edge matrix 2022 12 25 14 40 40 895590 I tensorflow core common runtime gpu gpu device cc 1267 0 2022 12 25 14 40 40 895595 I tensorflow core common runtime gpu gpu device cc 1280 0 n 2022 12 25 14 40 40 895758 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 14 40 40 895871 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 14 40 40 895958 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 14 40 40 896034 I tensorflow core common runtime gpu gpu device cc 1406 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 4723 mb memory physical gpu device 0 name nvidia geforce gtx 1660 ti pci bus i d 0000 01 00 0 compute capability 7 5 2022 12 25 14 40 40 936250 f tensorflow core util gpu launch config h 129 check fail work element count 0 0 vs 0 abort core dump
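Not part of the report: the log (`Check failed: work_element_count > 0 (0 vs. 0)` in gpu_launch_config.h) points at the GPU kernel launch CHECK-failing on an empty input tensor. A hedged sketch of the short-circuit that would return an empty result instead of aborting (`safe_population_count` is a hypothetical helper operating on int8-like values, not TensorFlow API):

```python
def safe_population_count(values):
    """Hypothetical guard for the empty-input case: the GPU launch-config
    CHECK fails on zero elements, so short-circuit before dispatch.
    Values are treated as int8, matching the report's tf.cast to tf.int8."""
    if len(values) == 0:
        return []
    return [bin(v & 0xFF).count("1") for v in values]  # per-element popcount
```

Called with the report's empty tensor, this yields an empty list rather than a core dump.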
tensorflowtensorflow
Crash when running tensorflow.python.ops.gen_array_ops.pad_v2
Bug
click to expand issue type bug source binary tensorflow version v2 4 0 custom code yes os platform and distribution linux ubuntu 20 4 mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version release 11 0 v11 0 194 build cuda 11 0 bu tc445 37 28540450 0 gpu model and memory no response current behaviour shell if tensorflow python ops gen array op pad v2 be give with input padding with large element the program crash due to increase in ram usage standalone code to reproduce the issue shell import tensorflow as tf import os from tensorflow python ops import gen array op import numpy as np try arg 0 tensor tf convert to tensor np one 2 2 dtype str arg 0 tf identity arg 0 tensor arg 1 0 0 12509 arg 1 0 1 12509 arg 1 0 arg 1 0 0 arg 1 0 1 arg 1 1 0 12509 arg 1 1 1 12509 arg 1 1 arg 1 1 0 arg 1 1 1 arg 1 arg 1 0 arg 1 1 arg 2 pad gen array op pad v2 arg 0 arg 1 arg 2 except exception as e error str e relevant log output shell 2022 12 25 13 42 02 032759 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2022 12 25 13 42 03 012208 I tensorflow compiler jit xla cpu device cc 41 not create xla device tf xla enable xla device not set 2022 12 25 13 42 03 012782 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcuda so 1 2022 12 25 13 42 03 050082 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 13 42 03 050232 I tensorflow core common runtime gpu gpu device cc 1720 find device 0 with property pcibusid 0000 01 00 0 name nvidia geforce gtx 1660 ti computecapability 7 5 coreclock 1 77ghz corecount 24 devicememorysize 5 80gib devicememorybandwidth 268 26gib s 2022 12 25 13 42 03 050254 I tensorflow stream executor platform default dso loader cc 49 successfully 
open dynamic library libcudart so 11 0 2022 12 25 13 42 03 052670 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 11 2022 12 25 13 42 03 052731 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 11 2022 12 25 13 42 03 053883 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcufft so 10 2022 12 25 13 42 03 054132 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcurand so 10 2022 12 25 13 42 03 056360 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusolver so 10 2022 12 25 13 42 03 056874 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusparse so 11 2022 12 25 13 42 03 056981 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudnn so 8 2022 12 25 13 42 03 057134 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 13 42 03 057360 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 13 42 03 057485 I tensorflow core common runtime gpu gpu device cc 1862 add visible gpu device 0 2022 12 25 13 42 03 058622 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2022 12 25 13 42 03 058707 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 13 42 03 058823 I tensorflow core common runtime gpu gpu device cc 1720 find device 0 with 
property pcibusid 0000 01 00 0 name nvidia geforce gtx 1660 ti computecapability 7 5 coreclock 1 77ghz corecount 24 devicememorysize 5 80gib devicememorybandwidth 268 26gib s 2022 12 25 13 42 03 058842 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2022 12 25 13 42 03 058865 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 11 2022 12 25 13 42 03 058878 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 11 2022 12 25 13 42 03 058890 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcufft so 10 2022 12 25 13 42 03 058903 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcurand so 10 2022 12 25 13 42 03 058915 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusolver so 10 2022 12 25 13 42 03 058927 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusparse so 11 2022 12 25 13 42 03 058939 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudnn so 8 2022 12 25 13 42 03 058989 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 13 42 03 059119 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 13 42 03 059212 I tensorflow core common runtime gpu gpu device cc 1862 add visible gpu device 0 2022 12 25 13 42 03 059237 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2022 12 25 13 42 03 385787 I 
tensorflow core common runtime gpu gpu device cc 1261 device interconnect streamexecutor with strength 1 edge matrix 2022 12 25 13 42 03 385821 I tensorflow core common runtime gpu gpu device cc 1267 0 2022 12 25 13 42 03 385826 I tensorflow core common runtime gpu gpu device cc 1280 0 n 2022 12 25 13 42 03 385975 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 13 42 03 386121 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 13 42 03 386214 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2022 12 25 13 42 03 386299 I tensorflow core common runtime gpu gpu device cc 1406 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 4685 mb memory physical gpu device 0 name nvidia geforce gtx 1660 ti pci bus i d 0000 01 00 0 compute capability 7 5 2022 12 25 13 42 03 431772 w tensorflow core framework cpu allocator impl cc 80 allocation of 15024009600 exceed 10 of free system memory kill
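Not part of the report: the allocation size in the warning can be checked by arithmetic. Padding a 2x2 tensor by 12509 on every side gives a 25020x25020 output; at 24 bytes per element (an assumption inferred from the log, consistent with a string/tstring element on this build) that reproduces the 15024009600-byte allocation exactly:

```python
# rough size pre-check for the PadV2 call in the report: a 2x2 tensor padded
# by 12509 on each side of both dimensions becomes 25020 x 25020
rows = 2 + 12509 + 12509
cols = 2 + 12509 + 12509
cells = rows * cols
# 24 bytes per element (assumed element size) matches the allocation
# reported by cpu_allocator_impl.cc in the log
bytes_needed = cells * 24
```

A caller-side estimate like this could reject absurd paddings before the allocator starts exhausting system memory.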
tensorflowtensorflow
XLA Collapse documentation example incorrect
Bug
Issue type: Documentation Bug. Source: source. TensorFlow version: 2.9.1. Custom code: No. OS platform and distribution: No response. Mobile device: No response. Python version: No response. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behaviour: it seems like the XLA documentation for the Collapse operator is wrong. Given a tensor with dimensions [4, 2, 3], collapsing over dimensions {0, 1} should result, as described in the text, in a tensor of dim [8, 3]. In the example, however, you show the result to be a [4, 6] tensor.

Standalone code to reproduce the issue: n/a.

Relevant log output: No response.
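Not part of the report: the shape claim in the prose can be checked directly. Collapsing dims {0, 1} merges the first two dimensions, so a [4, 2, 3] tensor becomes [4*2, 3] = [8, 3] (shown here with plain nested lists so it is self-contained):

```python
# build a [4, 2, 3] tensor of distinct values, then collapse dims {0, 1}
# by merging the first two dimensions — the result is [8, 3], not [4, 6]
t = [[[d0 * 6 + d1 * 3 + d2 for d2 in range(3)]
      for d1 in range(2)]
     for d0 in range(4)]
collapsed = [row for plane in t for row in plane]  # shape [8, 3]
```

This agrees with the documentation's prose and contradicts its [4, 6] example, supporting the report.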
tensorflowtensorflow
NVIDIA XLA: XLA generates element-wise fusion kernels with no data locality for GPU
Bug
Issue type: Bug. Source: source. TensorFlow version: tf r2.11. Custom code: Yes. OS platform and distribution: Ubuntu 16.04. Mobile device: No response. Python version: 3.8. Bazel version: 5.3.0. GCC/compiler version: gcc. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behaviour: I encountered a case where XLA creates a very bad layout for a pointwise fused kernel. The fused kernel uses layout {1,0,2,3}, which leads to almost no data locality across CUDA blocks/threads, and these kernels take a really long time (45% of a training step for the Stable Diffusion UNet). Can you confirm that this is a layout-assignment issue/bug of XLA? Attached are the HLO dumps from before/after optimization; please check fusion fused_computation.1116. The fused graph for fused_computation.1116 is also attached (attached files).

Standalone code to reproduce the issue: in the Google Drive I shared there is a unit test to reproduce the issue. To run it, download the folder "unitt" and from inside the folder run "bash r.sh".

Relevant log output: No response.
tensorflowtensorflow
autocast=False not respected when saving/loading a Keras model
Bug
click to expand issue type bug source source tensorflow version 2 9 3 custom code yes os platform and distribution window 10 mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell in the below code the intention be to create a model that take an input of type float32 when save even if it be train with mixed precision in this example training code be omit and call be just a placeholder for conciseness but it still show the issue the problem come when load the model because it have autocast false I would expect that the model be save with an input type of tf float32 but it seem that the model be save in an invalid state because it load with an error the error be the same even if you call tf keras mixed precision set global policy float32 before load the model standalone code to reproduce the issue shell import keras import tensorflow as tf class mymodel keras model def init self super init autocast false def call self input return 0 def main tf keras mixed precision set global policy mixed float16 model mymodel input tf zero shape 1 1 dtype tf float32 prediction 0 model input model path test model model save model path load model keras model load model model path prediction 1 load model input if name main main relevant log output shell valueerror exception encounter when call layer my model type mymodel could not find match concrete function to call load from the savedmodel get positional argument 1 total keyword argument expect these argument to match one of the follow 2 option s option 1 positional argument 1 total tensorspec shape none 1 dtype tf float32 name input keyword argument option 2 positional argument 1 total tensorspec shape none 1 dtype tf float32 name input 1 keyword argument call argument receive by layer my model type mymodel args tf tensor shape 1 1 dtype float16 kwargs process finish with exit code 1
tensorflowtensorflow
Bad download link for Windows GPU-only binary zip file
Bug
Issue type: Build/Install. Source: source. TensorFlow version: x86_64 2.11.0. Custom code: No. OS platform and distribution: Windows x86, GPU only. Mobile device: No response. Python version: No response. Bazel version: No response. GCC/compiler version: No response. CUDA/cuDNN version: No response. GPU model and memory: No response.

Current behaviour: storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-windows-x86_64-2.11.0.zip — the zip file does not exist ("No such object").

Standalone code to reproduce the issue: "No such object" when clicking on the download link.

Relevant log output: No response.
tensorflowtensorflow
tensorflow not find
Bug
click to expand issue type bug source source tensorflow version ewfewfew custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour shell a bug happen dsfdsf standalone code to reproduce the issue shell sdferfgerg relevant log output no response
tensorflowtensorflow
add autoparallel optimizer in config py
Bug
The AutoParallel optimizer is available as one of the graph optimizers per the documentation ("Available graph optimizers"), but for tf.config.optimizer.set_experimental_options it is not listed in the args options. As per this comment it is already available, just not documented; hence making a PR as a doc bug fix (#49507).
tensorflowtensorflow
C API installation instructions don't work for macOS
Bug
click to expand: issue type: Documentation Bug; source: source; tensorflow version: 2.11.0; custom code: no; os platform and distribution: no response; mobile device: no response; python version: no response; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: no response; gpu model and memory: no response. current behaviour: the C API installation guide (linker section) says: "On Linux/macOS, if you extract the TensorFlow C library to a system directory, such as /usr/local, configure the linker with ldconfig." But this command doesn't seem to exist on Mac. Is there an alternative, or can this step be skipped? standalone code to reproduce the issue: look at docs (linker). relevant log output: no response.
tensorflowtensorflow
Select TensorFlow op(s) FlexTensorArrayV3 not supported by this interpreter
Bug
system information: os platform and distribution: Win10; tensorflow version: tensorflow v1. Provide the text output from tflite_convert: I have already followed the instructions. When I'm trying to convert, I use:

import tensorflow as tf
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="d:/myhome/fish_detection-master/models/research/object_detection/fish_inception_v2_graph2/frozen_inference_graph.pb",  # both .pb and .pbtxt files are accepted
    input_arrays=["image_tensor"],
    input_shapes={"image_tensor": [None, None, None, 3]},
    output_arrays=["SecondStagePostprocessor/Softmax"])
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
# save the model
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

It's all right when converting. Then I add the org.tensorflow:tensorflow-lite-select-tf-ops dependency in Android Studio using the following code:

implementation 'org.tensorflow:tensorflow-lite:2.11.0'
implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:2.11.0'

However, there is something wrong. This is the model that I used; here is my error in Android Studio:

E/tflite: Select TensorFlow op(s), included in the given model, is(are) not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. For the Android, it can be resolved by adding the "org.tensorflow:tensorflow-lite-select-tf-ops" dependency. See instructions.
E/tflite: Node number 3 (FlexTensorArrayV3) failed to prepare.
E/tflite: Select TensorFlow op(s), included in the given model, is(are) not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. For the Android, it can be resolved by adding the "org.tensorflow:tensorflow-lite-select-tf-ops" dependency. See instructions.
E/tflite: Node number 3 (FlexTensorArrayV3) failed to prepare.
E/TaskJniUtils: Error getting native address of native library task_vision_jni. java.lang.IllegalStateException: Error occurred when initializing ObjectDetector: AllocateTensors() failed. at org.tensorflow.lite.task.vision.detector.ObjectDetector.
initjniwithmodelfdandoption native method at org tensorflow lite task vision detector objectdetector access 000 objectdetector java 88 at org tensorflow lite task vision detector objectdetector 1 createhandle objectdetector java 156 at org tensorflow lite task vision detector objectdetector 1 createhandle objectdetector java 149 at org tensorflow lite task core taskjniutil 1 createhandle taskjniutil java 70 at org tensorflow lite task core taskjniutil createhandlefromlibrary taskjniutil java 91 at org tensorflow lite task core taskjniutil createhandlefromfdandoption taskjniutil java 66 at org tensorflow lite task vision detector objectdetector createfromfileandoption objectdetector java 147 at org tensorflow lite example objectdetection objectdetectorhelper setupobjectdetector objectdetectorhelper kt 96 at org tensorflow lite example objectdetection objectdetectorhelper detect objectdetectorhelper kt 107 at org tensorflow lite example objectdetection fragment camerafragment detectobject camerafragment kt 289 at org tensorflow lite example objectdetection fragment camerafragment bindcamerausecase lambda 9 lambda 8 camerafragment kt 264 at org tensorflow lite example objectdetection fragment camerafragment r8 lambda tra1wwym4jg8atygw5f6kpxoow8 unknown source 0 at org tensorflow lite example objectdetection fragment camerafragment externalsyntheticlambda7 analyze unknown source 2 at androidx camera core imageanalysis lambda setanalyzer 2 imageanalysis java 476 at androidx camera core imageanalysis externalsyntheticlambda0 analyze unknown source 2 at androidx camera core imageanalysisabstractanalyzer lambda analyzeimage 0 androidx camera core imageanalysisabstractanalyzer imageanalysisabstractanalyzer java 283 at androidx camera core imageanalysisabstractanalyzer externalsyntheticlambda1 run unknown source 14 at java util concurrent threadpoolexecutor runworker threadpoolexecutor java 1167 at java util concurrent threadpoolexecutor worker run threadpoolexecutor java 
641) at java.lang.Thread.run(Thread.java:919). E/test_tflite: failed to load model with error: Error getting native address of native library task_vision_jni. D/EGL_emulation: eglMakeCurrent: 0xebf41200: ver 2 0 (tinfo 0xe0075a30). Can the op FlexTensorArrayV3 be used for inference on an Android device via the org.tensorflow:tensorflow-lite-select-tf-ops:2.11.0 dependency in an Android project? Or does it mean that FlexTensorArrayV3 is unlikely to be usable for inference on an Android device and I should change my model?
tensorflowtensorflow
can't use the adapt function to normalize the data
Bug
click to expand: issue type: Bug; source: source; tensorflow version: 2.8; custom code: yes; os platform and distribution: Colab; mobile device: no response; python version: no response; bazel version: no response; cuda/cudnn version: 11; gpu model and memory: 3060. current behaviour: a bug happened. standalone code to reproduce the issue (Colab link):

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, utils
import tensorflow_addons as tfa
import matplotlib.pyplot as plt

num_classes = 2
input_shape = (256, 256, 1)
train_data_path = "/content/chest_xray/train"
test_data_path = "/content/chest_xray/test"
val_data_path = "/content/chest_xray/val"

train_dataset = utils.image_dataset_from_directory(train_data_path, batch_size=32, image_size=(256, 256), color_mode="grayscale")
test_dataset = utils.image_dataset_from_directory(test_data_path, batch_size=32, image_size=(256, 256), color_mode="grayscale")
val_dataset = utils.image_dataset_from_directory(val_data_path, batch_size=32, image_size=(256, 256), color_mode="grayscale")

image_size = 256
data_augmentation = keras.Sequential(
    [
        layers.Normalization(),
        layers.Resizing(image_size, image_size),
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(factor=0.02),
        layers.RandomZoom(height_factor=0.2, width_factor=0.2),
    ],
    name="data_augmentation",
)
# compute the mean and the variance of the training data for normalization
data_augmentation.layers[0].adapt(train_dataset)

relevant log output:

ValueError — Traceback (most recent call last)
<ipython-input> in <module>
     13
     14 # compute the mean and the variance of the training data for normalization
---> 15 data_augmentation.layers[0].adapt(train_dataset)

(5 frames)
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/normalization.py in build(self, input_shape)
    135                        "Please convert to a numpy array or tf.Tensor."
    136
--> 137     input_shape = tf.TensorShape(input_shape).as_list()
    138     ndim = len(input_shape)
    139

ValueError: in user code:
    File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_preprocessing_layer.py", line 117, in adapt_step: self._adapt_maybe_build(data)
    File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_preprocessing_layer.py", line 285, in _adapt_maybe_build: self.build(data_shape)
    File "/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/normalization.py", line 137, in build: input_shape = tf.TensorShape(input_shape).as_list()
    ValueError: as_list() is not defined on an unknown TensorShape.
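A likely cause, not confirmed in the report: image_dataset_from_directory yields (image, label) tuples, while Normalization.adapt expects only the feature tensors. A minimal sketch of stripping the labels before adapting, using synthetic data in place of the chest X-ray directories:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for image_dataset_from_directory, which yields
# (image, label) batches; adapt() wants just the images.
images = np.random.rand(8, 4, 4, 1).astype("float32")
labels = np.zeros(8, dtype="int32")
train_dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(4)

norm = tf.keras.layers.Normalization()
norm.adapt(train_dataset.map(lambda x, y: x))  # drop labels before adapt
out = norm(tf.ones((2, 4, 4, 1)))
print(out.shape)  # (2, 4, 4, 1)
```

The same `.map(lambda x, y: x)` pattern would apply to the reporter's train_dataset.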
tensorflowtensorflow
tensorflow lite benchmark tool possible error when specife input layer value file
Bug
click to expand issue type bug source binary tensorflow version 2 9 custom code no os platform and distribution linux mobile device android python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour I would like to use benchmark tool to see the result of inference so there be an option to specify input layer value file and output filepath when there be one input it be easy problem be when there be multiple input and you can not fully control input name e g you need to escape I think I escape name but I m get error wrong input value file item specify serve default input0 0 datum local tmp benchmarkomat tflite inference input serve default input0 0 kyo5lzfg6a bin my command datum local tmp tflite exec tflite benchmark graph datum local tmp tflite model 5izy3tp8v6 model tflite input layer value file serve default input0 0 datum local tmp tflite inference input serve default input0 0 kyo5lzfg6a bin serve default input1 0 datum local tmp tflite inference input serve default input1 0 kyo5lzfg6a bin serve default input2 0 datum local tmp tflite inference input serve default input2 0 kyo5lzfg6a bin output filepath datum local tmp tflite inference output kyo5lzfg6a bin similar problem when I m not use single quote around key and value shell datum local tmp tflite exec tflite benchmark graph datum local tmp tflite model 3lmd92wlny model tflite input layer value file serve default input0 0 datum local tmp tflite inference input serve default input0 0 c3fb3dngk8 bin serve default input1 0 datum local tmp tflite inference input serve default input1 0 c3fb3dngk8 bin serve default input2 0 datum local tmp tflite inference input serve default input2 0 c3fb3dngk8 bin output filepath datum local tmp tflite inference output c3fb3dngk8 bin use xnnpack false use nnapi false use gpu false use hexagon false gpu precision loss allow true standalone code to reproduce the issue shell 
code to create simple model import tensorflow as tf import numpy as np from tensorflow import kera from tensorflow keras import layer print tf version input0 keras input shape 64 name input0 input1 keras input shape 16 name input1 input2 keras input shape 16 name input2 layer0 layer dense 3 activation relu name layer0 input0 layer1 layer dense 5 activation relu name layer1 input1 layer2 layer dense 7 activation relu name layer2 input2 x0 layer concatenate layer0 layer1 layer2 x1 layer concatenate layer0 layer1 layer2 x0 layer dense 3 name output0 x0 x1 layer dense 7 name output1 x1 instantiate an end to end model predict both priority and department model keras model input input0 input1 input2 output x0 x1 model compile optimizer sgd loss mean squared error model summary converter tf lite tfliteconverter from keras model model path to the savedmodel directory converter optimization tf lite optimize default tflite model converter convert with open model tflite wb as f f write tflite model relevant log output no response
tensorflowtensorflow
tf.keras.losses.mean_absolute_error exception information is misleading
Bug
click to expand: issue type: Bug; source: source; tensorflow version: tf 2.4; custom code: yes; os platform and distribution: Windows 11; mobile device: no response; python version: 3.8.15; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: CUDA 11.2, cuDNN 8.1; gpu model and memory: RTX 3060. current behaviour: the exception thrown by tf.keras.losses.mean_absolute_error(y_true, y_pred) differs between the following two test codes. In code 1, I set y_pred to bool, and the exception message tells me that bool is not in the legal types (bfloat16, half, float, double, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32) — remember that this list includes uint16, complex64, complex128, which will matter later. In code 2, I set y_pred to uint16, and the exception tells me that uint16 is not in the legal types (bfloat16, half, float, double, int8, int16, int32, int64). The two exception messages obviously contradict each other. standalone code to reproduce the issue:

# code 1
import tensorflow as tf
y_true = tf.random.uniform([2, 3], minval=-5, maxval=5, dtype=tf.int32)
y_pred = tf.cast(tf.random.uniform([2, 3], minval=-5, maxval=5, dtype=tf.int32), dtype=tf.bool)
out = tf.keras.losses.mean_absolute_error(y_true, y_pred)

# code 2
import tensorflow as tf
y_true = tf.random.uniform([2, 3], minval=-5, maxval=5, dtype=tf.int32)
y_pred = tf.cast(tf.random.uniform([2, 3], minval=-5, maxval=5, dtype=tf.int32), dtype=tf.uint16)
out = tf.keras.losses.mean_absolute_error(y_true, y_pred)

relevant log output:
code 1 result: tensorflow.python.framework.errors_impl.InvalidArgumentError: Value for attr 'T' of bool is not in the list of allowed values: bfloat16, half, float, double, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32; NodeDef: node Sub; Op<name=Sub; signature=(x, y) -> z; attr=T:type, allowed=[DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_UINT8, DT_INT8, DT_UINT16, DT_INT16, DT_INT32, DT_INT64, DT_COMPLEX64, DT_COMPLEX128, DT_UINT32]> [Op:Sub]
code 2 result: tensorflow.python.framework.errors_impl.InvalidArgumentError: Value for attr 'T' of uint16 is not in the list of allowed values: bfloat16, half, float, double, int8, int16, int32, int64; NodeDef: node Abs; Op<name=Abs; signature=(x) -> y; attr=T:type, allowed=[DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_INT8, DT_INT16, DT_INT32, DT_INT64]> [Op:Abs]
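The two contradictory lists come from two different kernels (Sub for code 1, Abs for code 2), each with its own allowed-dtype set. Casting both arguments to a floating type sidesteps the mismatch entirely; the values below are made up for illustration:

```python
import tensorflow as tf

# Cast both y_true and y_pred to float32 before computing the loss.
y_true = tf.constant([[1, 2, 3]], dtype=tf.int32)
y_pred = tf.constant([[True, False, True]])
loss = tf.keras.losses.mean_absolute_error(
    tf.cast(y_true, tf.float32), tf.cast(y_pred, tf.float32))
print(float(loss[0]))  # mean(|[1,2,3] - [1,0,1]|) = 4/3
```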
tensorflowtensorflow
True == 1, False == 0
Bug
click to expand issue type bug source source tensorflow version tf 2 4 tf 2 10 custom code yes os platform and distribution window 11 mobile device no response python version 3 8 15 bazel version no response gcc compiler version no response cuda cudnn version cuda 11 3 cudnn 8 6 0 gpu model and memory rtx3060 current behaviour shell hello in the bottom layer of python bool value exist true and false and the value be 1 and 0 by default so we can equivalently represent true 1 false 0 in the follow test code I try to change trainable true to trainable 1 and then test in tensorflow 2 10 version I will find that an exception be throw so I guess the bottom layer should be a clear distinction between int type and bool type but I test other before when use the operator I find that some operator allow change bool to int value I suspect that in the entire tensorflow library the type conversion between bool and int be not uniformly handle in order to find out if this happen in other version I try to test in tensorflow 2 4 and find that it can be run successfully can you answer the two question above for I standalone code to reproduce the issue shell import tensorflow as tf import os os environ tf xla flag tf xla enable xla devices os environ tf cpp min log level 2 with tf device cpu arg class tf keras layer locallyconnected2d name locally connected2d trainable 1 dtype float32 filter 3 kernel size 3 3 stride 1 1 arg input tf random uniform 2 6 10 4 dtype tf float32 out arg class arg input print out relevant log output shell tensorflow 2 10 reuslt typeerror expect trainable argument to be a boolean but get 1 tensorflow 2 40 reuslt tf tensor 0 14315505 0 06499855 0 1216753 0 12515134 0 00923991 0 00172077 0 41864938 0 02003886 0 03438807 0 02098705 0 0746426 0 2031626 0 0728707 0 03610322 0 14793792 0 01780548 0 17400125 0 14011937 0 06005201 0 10351056 0 3389557 0 15803254 0 17660744 0 00082554 0 01428804 0 22738077 0 02933689 0 02186381 0 11017214 0 05838367 0 05483316 0 
04585953 0 18316194 0 21922597 0 17010817 0 25584626 0 1008286 0 10639497 0 12071656 0 22636111 0 02853697 0 07310107 0 1485898 0 02738841 0 16978729 0 0090516 0 05216654 0 04658506 0 00877664 0 14235732 0 2121313 0 22127396 0 13508058 0 31834558 0 03935264 0 23036832 0 15259065 0 20577691 0 06493172 0 08293995 0 0963655 0 0462731 0 05218513 0 10291788 0 19309677 0 08959021 0 15569639 0 04861056 0 21688032 0 15255114 0 11844756 0 03276695 0 1714233 0 05530884 0 01905462 0 16327316 0 24063838 0 07224267 0 17923783 0 02356955 0 31808606 0 05519737 0 2262343 0 09526171 0 06705513 0 00699769 0 11271586 0 08290772 0 15076353 0 20352224 0 00327513 0 14215843 0 11999539 0 03289764 0 11100782 0 12392078 0 11126763 0 04755261 0 17210905 0 22264665 0 01195438 0 06607565 0 25382042 0 11046582 0 0901512 0 06138152 0 00287328 0 1841901 0 3332898 0 13658786 0 17474082 0 04825393 0 25146523 0 01793581 0 06349282 0 00336953 0 37769794 0 20829338 0 19630632 0 08679034 0 00682403 0 1472453 0 05004839 0 02671506 0 03807459 0 06085263 0 17125078 0 07825888 0 18529892 0 3258647 0 00920958 0 11021222 0 02069337 0 01494252 0 15680203 0 10269973 0 04348599 0 0444208 0 14931358 0 05233786 0 20222609 0 02042234 0 00546753 0 01768769 0 11787581 0 2359893 0 22494979 0 13885225 0 14175156 0 36444566 0 02330246 0 19728938 0 13624257 0 06695578 0 00306335 0 18129912 0 06312215 0 13316229 0 12113601 0 07067803 0 20414466 0 14369792 0 07599919 0 06597348 0 16438872 0 10267115 0 05639628 0 20692885 0 33940262 0 07860201 0 12013634 0 02628985 0 31915188 0 07179877 0 06511983 0 05447894 0 24597749 0 11804631 0 16325285 0 02923613 0 00725327 0 01450248 0 06370709 0 17952165 0 09148498 0 09543476 0 03884454 0 01504131 0 2514327 0 00173181 0 01126088 0 08406446 shape 2 4 8 3 dtype float32
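The asymmetry the report describes can be seen in plain Python: bool is a subclass of int, so True == 1, but an int is not a bool, which is why a strict isinstance(trainable, bool) check (as in newer TF versions) rejects trainable=1 while looser numeric handling accepts it. A stdlib-only sketch; set_trainable below is illustrative, not TensorFlow's code:

```python
# bool is a subclass of int: True behaves as 1 in arithmetic...
assert True == 1 and False == 0
assert True + True == 2
assert isinstance(True, int)

# ...but the converse is false, so a strict bool check rejects 1.
assert not isinstance(1, bool)

def set_trainable(trainable):
    # Mimics the stricter TF 2.10-style validation (illustrative only).
    if not isinstance(trainable, bool):
        raise TypeError(
            f"Expected `trainable` argument to be a boolean, but got: {trainable}")
    return trainable

try:
    set_trainable(1)
except TypeError as e:
    print(e)  # Expected `trainable` argument to be a boolean, but got: 1
```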
tensorflowtensorflow
tfp JointDistributionSequential build, line 236: if not isinstance(model, collections.Sequence) raises AttributeError: module 'collections' has no attribute 'Sequence'
Bug
click to expand: issue type: Bug; source: source; tensorflow version: 2.8; custom code: no; os platform and distribution: macOS 13.0.1 (22A400); mobile device: no response; python version: 3.10; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: no response; gpu model and memory: no response. current behaviour: a bug happened — I expected the code in the tfp guide to run as written. I did import collections.abc as collections. According to the Python docs, isinstance(object, classinfo) returns True if the object argument is an instance of the classinfo argument; however, the classinfo argument in PyCharm is ABCMeta, and the tfp code is: if not isinstance(model, collections.Sequence). My guess is the tfp code should be: if not isinstance(model, collections.abc.Sequence). standalone code to reproduce the issue:

dfhogg = pd.DataFrame(
    np.array([[1, 201, 592, 61, 9, 0.84],
              [2, 244, 401, 25, 4, 0.31],
              [3, 47, 583, 38, 11, 0.64],
              [4, 287, 402, 15, 7, 0.27],
              [5, 203, 495, 21, 5, 0.33],
              [6, 58, 173, 15, 9, 0.67],
              [7, 210, 479, 27, 4, 0.02],
              [8, 202, 504, 14, 4, 0.05],
              [9, 198, 510, 30, 11, 0.84],
              [10, 158, 416, 16, 7, 0.69],
              [11, 165, 393, 14, 5, 0.30],
              [12, 201, 442, 25, 5, 0.46],
              [13, 157, 317, 52, 5, 0.03],
              [14, 131, 311, 16, 6, 0.50],
              [15, 166, 400, 34, 6, 0.73],
              [16, 160, 337, 31, 5, 0.52],
              [17, 186, 423, 42, 9, 0.90],
              [18, 125, 334, 26, 8, 0.40],
              [19, 218, 533, 16, 6, 0.78],
              [20, 146, 344, 22, 5, 0.56]]),
    columns=['id', 'x', 'y', 'sigma_y', 'sigma_x', 'rho_xy'])

# for convenience, zero-base the id and use as index
dfhogg['id'] = dfhogg['id'] - 1
dfhogg.set_index('id', inplace=True)

# standardize (mean center and divide by 1 sd)
dfhoggs = (dfhogg[['x', 'y']] - dfhogg[['x', 'y']].mean(0)) / dfhogg[['x', 'y']].std(0)
dfhoggs['sigma_y'] = dfhogg['sigma_y'] / dfhogg['y'].std(0)
dfhoggs['sigma_x'] = dfhogg['sigma_x'] / dfhogg['x'].std(0)

X_np = dfhoggs['x'].values
sigma_y_np = dfhoggs['sigma_y'].values
Y_np = dfhoggs['y'].values

mdl_ols = tfd.JointDistributionSequential([
    # b0 ~ Normal(0, 1)
    tfd.Normal(loc=tf.cast(0, dtype), scale=1.),
    # b1 ~ Normal(0, 1)
    tfd.Normal(loc=tf.cast(0, dtype), scale=1.),
    # x ~ Normal(b0 + b1*x, 1)  (parameter transformation)
    lambda b1, b0: tfd.Normal(loc=b0 + b1 * X_np, scale=sigma_y_np)
])

relevant log output: line 236, in build: if not isinstance(model, collections.Sequence) — AttributeError: module 'collections' has no attribute 'Sequence'.
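The root cause is a Python version change, not tfp's logic: the abstract base classes moved to collections.abc in Python 3.3, and the deprecated top-level aliases such as collections.Sequence were removed in Python 3.10. A stdlib-only sketch:

```python
import sys
import collections
import collections.abc

# The ABCs live in collections.abc; isinstance checks work there.
assert isinstance([1, 2, 3], collections.abc.Sequence)
assert isinstance("abc", collections.abc.Sequence)
assert not isinstance({"a": 1}, collections.abc.Sequence)

# On Python 3.10+ the deprecated top-level alias is gone, which is
# exactly what the reported AttributeError shows.
if sys.version_info >= (3, 10):
    assert not hasattr(collections, "Sequence")
```

So the reporter's guess is right: the check should read isinstance(model, collections.abc.Sequence).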
tensorflowtensorflow
tf.image.rot90 has inconsistent results in different TensorFlow versions
Bug
click to expand: issue type: Bug; source: source; tensorflow version: tf 2.4 / tf 2.10; custom code: yes; os platform and distribution: Windows 11; mobile device: no response; python version: 3.8.15; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: CUDA 11.3, cuDNN 8.6.0; gpu model and memory: RTX 3060. current behaviour: comparing TensorFlow 2.4 and TensorFlow 2.10, the documentation of tf.image.rot90 is identical, but the results of running it are not: it fails in the older version but succeeds in the newer one. Judging from the exception thrown in the older version, the data type (uint64) is not supported there, while the newer version does support it — but this type restriction is not stated in the official documentation, which causes misunderstanding. standalone code to reproduce the issue:

import tensorflow as tf
with tf.device("cpu"):
    image = tf.saturate_cast(tf.random.uniform([2, 2, 1], minval=0, maxval=257, dtype=tf.int64), dtype=tf.uint64)
    out = tf.image.rot90(image)
    print(out)

relevant log output:
result on TensorFlow 2.4: tensorflow.python.framework.errors_impl.InvalidArgumentError: Value for attr 'T' of uint64 is not in the list of allowed values: uint8, int8, uint16, int16, int32, int64, bool, bfloat16, half, float, double, complex64, complex128, string; NodeDef: node ReverseV2; Op<name=ReverseV2; signature=(tensor, axis) -> output; attr=Tidx:type, default=DT_INT32, allowed=[DT_INT32, DT_INT64]; attr=T:type, allowed=[DT_UINT8, DT_INT8, DT_UINT16, DT_INT16, DT_INT32, DT_INT64, DT_BOOL, DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_STRING]> [Op:ReverseV2]
result on TensorFlow 2.10: tf.Tensor([[[72] [70]] [[35] [12]]], shape=(2, 2, 1), dtype=uint64)
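For reference, tf.image.rot90 rotates counter-clockwise with the same orientation semantics as NumPy's rot90 on the height/width axes, so the expected output can be sanity-checked without TensorFlow. A small NumPy-only sketch:

```python
import numpy as np

# A 2x2 "image"; np.rot90 rotates counter-clockwise by default,
# matching tf.image.rot90's convention.
img = np.array([[1, 2],
                [3, 4]])
print(np.rot90(img).tolist())  # [[2, 4], [1, 3]]
```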
tensorflowtensorflow
question about tf.image.convert_image_dtype parameter types
Bug
click to expand: issue type: Documentation Bug; source: source; tensorflow version: tf 2.10; custom code: yes; os platform and distribution: Windows 11; mobile device: no response; python version: 3.8.15; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: CUDA 11.3, cuDNN 8.6.0; gpu model and memory: RTX 3060. current behaviour: for tf.image.convert_image_dtype(image, dtype, saturate=False, name=None), the documentation says the image and dtype parameters can be uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64, bfloat16. But I set the image to complex64 and found that it works, so I don't know if the documentation is adequate. standalone code to reproduce the issue:

import tensorflow as tf
image = tf.constant([254 + 2j, 83, 72], dtype=tf.complex64)
dtype = tf.float64
out = tf.image.convert_image_dtype(image, dtype)
print(out)

relevant log output: result: tf.Tensor([inf inf inf], shape=(3,), dtype=float64)
tensorflowtensorflow
tf.image.adjust_hue runs incorrectly on TensorFlow 2.4
Bug
click to expand: issue type: Documentation Bug; source: source; tensorflow version: tf 2.4; custom code: yes; os platform and distribution: Windows 11; mobile device: no response; python version: 3.8.15; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: CUDA 11.3, cuDNN 8.6.0; gpu model and memory: no response. current behaviour: when running the tf.image.adjust_hue test code on TensorFlow 2.4, the documentation requires that the parameter delta must be in the interval [-1, 1], but when I set it outside this interval it still runs successfully. standalone code to reproduce the issue:

import tensorflow as tf
image = [[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]], [[13, 14, 15], [16, 17, 18]]]
image = tf.constant(image)
print(tf.image.adjust_hue(image, 12.0))

relevant log output: result: tf.Tensor([[[ 1  2  3] [ 4  5  6]] [[ 7  8  9] [10 11 12]] [[13 14 15] [16 17 18]]], shape=(3, 2, 3), dtype=int32)
tensorflowtensorflow
the information thrown by the tf.clip_by_norm operator in different TensorFlow versions does not match the description in the documentation
Bug
click to expand: issue type: Documentation Bug; source: source; tensorflow version: tf 2.4 / tf 2.10; custom code: yes; os platform and distribution: Windows 11; mobile device: no response; python version: 3.8.15; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: CUDA 11.3, cuDNN 8.6.0; gpu model and memory: RTX 3060. current behaviour: the tf.clip_by_norm operator description in TensorFlow 2.4 is completely consistent with the TensorFlow 2.10 version, and the data type for x ("a Tensor or IndexedSlices; this must be a floating point type") is the same. So I was testing the code with x set to uint32: the information thrown differs between TensorFlow 2.4 and TensorFlow 2.10. In particular, in TensorFlow 2.4 the exception's description of the types x can be (bfloat16, half, float, double, uint8, int8, uint16, int16, int32, int64, complex64, complex128) is inconsistent with the official description of "a floating point type". standalone code to reproduce the issue:

import os
import tensorflow as tf
os.environ["TF_XLA_FLAGS"] = "--tf_xla_enable_xla_devices"
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
with tf.device("cpu"):
    t = tf.saturate_cast(tf.random.uniform([1, 5], minval=0, maxval=257, dtype=tf.int32), dtype=tf.uint32)
    clip_norm = 2.0
    out = tf.clip_by_norm(t, clip_norm)
    print(out)

relevant log output:
result on TensorFlow 2.4: tensorflow.python.framework.errors_impl.InvalidArgumentError: Value for attr 'T' of uint32 is not in the list of allowed values: bfloat16, half, float, double, uint8, int8, uint16, int16, int32, int64, complex64, complex128; NodeDef: node Mul; Op<name=Mul; signature=(x, y) -> z; attr=T:type, allowed=[DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_UINT8, DT_INT8, DT_UINT16, DT_INT16, DT_INT32, DT_INT64, DT_COMPLEX64, DT_COMPLEX128]; is_commutative=true> [Op:Mul]
result on TensorFlow 2.10: tensorflow.python.framework.errors_impl.InvalidArgumentError: Value for attr 'T' of uint32 is not in the list of allowed values: bfloat16, half, float, double, complex64, complex128; NodeDef: node Sqrt; Op<name=Sqrt; signature=(x) -> y; attr=T:type, allowed=[DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128]> [Op:Sqrt]
tensorflowtensorflow
tf.math.add runs successfully with documentation-unsupported type uint16
Bug
click to expand: issue type: Documentation Bug; source: source; tensorflow version: tf 2.10; custom code: yes; os platform and distribution: Windows 11; mobile device: no response; python version: 3.8.15; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: CUDA 11.3, cuDNN 8.6.0; gpu model and memory: RTX 3060. current behaviour: according to the TensorFlow 2.10 documentation, for tf.math.add(x, y, name=None) the type of x and y must be a Tensor of type bfloat16, half, float32, float64, uint8, int8, int16, int32, int64, complex64, complex128, string. But when the type of my x and y is uint16, I found that the program runs successfully, so I suggest that the documentation be modified, or that work be done on adding a type check on the input. To prove that this is not accidental behavior, I also tested tf.math.exp(x), where the type of x needs to be bfloat16, half, float32, float64, complex64, complex128: if I enter int64, an exception is thrown, indicating that the source code does check the parameter type there. standalone code to reproduce the issue:

import tensorflow as tf
with tf.device("cpu"):
    x = tf.saturate_cast(tf.random.uniform([1, 2, 1, 3], minval=0, maxval=257, dtype=tf.int64), dtype=tf.uint16)
    y = tf.saturate_cast(tf.random.uniform([2, 1, 3, 1], minval=0, maxval=257, dtype=tf.int64), dtype=tf.uint16)
    res = tf.math.add(x, y)
    print(res)

relevant log output: test code result: tf.Tensor([[[[350 277 296] [259 186 205] [183 110 129]] [[376 204 403] [285 113 312] [209 37 236]]] [[[189 116 135] [244 171 190] [178 105 124]] [[215 43 242] [270 98 297] [204 32 231]]]], shape=(2, 2, 3, 3), dtype=uint16)
tensorflowtensorflow
[Bug] TypeError: can't multiply sequence by non-int of type 'NoneType'
Bug
click to expand: issue type: Bug; source: source; tensorflow version: v2.4.0-49-g85c8b2a817f 2.4.1; custom code: no; os platform and distribution: Kaggle notebook; mobile device: no; python version: 3.7.10; bazel version: no response; gcc compiler version: no response; cuda/cudnn version: no response; gpu model and memory: TPU v3-8. current behaviour: while I was trying to train my model using TF and a Kaggle notebook with TPU acceleration, I got the error message pasted in the log below. I've read the source code and found this at lines 1605-1609 — if rank is None, a ValueError should be raised:

if rank is None:
    raise ValueError(
        "Input tensor {} to TPUStrategy.run() has unknown rank, "
        "which is not allowed".format(input_tensor))
maximum_shape = tensor_shape.TensorShape([None] * rank)

Instead, I'm getting a TypeError on the last line: TypeError: can't multiply sequence by non-int of type 'NoneType'. Am I doing something wrong? This looks like a bug to me. standalone code to reproduce the issue: (not provided). relevant log output:

TypeError — Traceback (most recent call last)
/tmp/ipykernel_21/2489583751.py in <module>
      1 history = model.fit(train, epochs=EPOCHS, steps_per_epoch=steps_per_epoch, validation_data=val)

/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1098               _r=1):
   1099             callbacks.on_train_batch_begin(step)
-> 1100             tmp_logs = self.train_function(iterator)
   1101             if data_handler.should_sync:
   1102               context.async_wait()

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    826       tracing_count = self.experimental_get_tracing_count()
    827       with trace.Trace(self._name) as tm:
--> 828         result = self._call(*args, **kwds)
    829         compiler = "xla" if self._experimental_compile else "nonXla"
    830         new_tracing_count = self.experimental_get_tracing_count()

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_
function py in call self args kwd 869 this be the first call of call so we have to initialize 870 initializer 871 self initialize args kwd add initializer to initializer 872 finally 873 at this point we know that the initialization be complete or less opt conda lib python3 7 site package tensorflow python eager def function py in initialize self args kwd add initializer to 724 self concrete stateful fn 725 self stateful fn get concrete function internal garbage collect pylint disable protect access 726 args kwd 727 728 def invalid creator scope unused args unused kwd opt conda lib python3 7 site package tensorflow python eager function py in get concrete function internal garbage collect self args kwargs 2967 args kwargs none none 2968 with self lock 2969 graph function self maybe define function args kwargs 2970 return graph function 2971 opt conda lib python3 7 site package tensorflow python eager function py in maybe define function self args kwargs 3359 3360 self function cache miss add call context key 3361 graph function self create graph function args kwargs 3362 self function cache primary cache key graph function 3363 opt conda lib python3 7 site package tensorflow python eager function py in create graph function self args kwargs override flat arg shape 3204 arg name arg name 3205 override flat arg shape override flat arg shape 3206 capture by value self capture by value 3207 self function attribute 3208 function spec self function spec opt conda lib python3 7 site package tensorflow python framework func graph py in func graph from py func name python func args kwargs signature func graph autograph autograph option add control dependency arg name op return value collection capture by value override flat arg shape 988 original func tf decorator unwrap python func 989 990 func output python func func args func kwargs 991 992 invariant func output contain only tensor compositetensor opt conda lib python3 7 site package tensorflow python eager def function 
py in wrap fn args kwd 632 xla context exit 633 else 634 out weak wrap fn wrap args kwd 635 return out 636 opt conda lib python3 7 site package tensorflow python framework func graph py in wrapper args kwargs 975 except exception as e pylint disable broad except 976 if hasattr e ag error metadata 977 raise e ag error metadata to exception e 978 else 979 raise typeerror in user code opt conda lib python3 7 site package tensorflow python keras engine training py 805 train function return step function self iterator opt conda lib python3 7 site package tensorflow python keras engine training py 795 step function output model distribute strategy run run step args datum opt conda lib python3 7 site package tensorflow python distribute tpu strategy py 540 run return self extend tpu run fn args kwargs option opt conda lib python3 7 site package tensorflow python distribute tpu strategy py 1296 tpu run return func args kwargs opt conda lib python3 7 site package tensorflow python distribute tpu strategy py 1345 tpu function maximum shape tensor shape tensorshape none rank typeerror can t multiply sequence by non int of type nonetype
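The TypeError itself is plain Python behaviour: multiplying a sequence by None raises exactly this message, which is what tensor_shape.TensorShape([None] * rank) does when rank is still None, preempting the friendlier ValueError the surrounding code intends to raise. A stdlib-only sketch:

```python
rank = None  # what TPUStrategy ends up with for an unknown-rank input
message = ""
try:
    maximum_shape = [None] * rank  # mirrors TensorShape([None] * rank)
except TypeError as e:
    message = str(e)
print(message)  # can't multiply sequence by non-int of type 'NoneType'
```

So the fix on the user side is to give every input tensor a known rank (e.g. via tf.ensure_shape or fixed dataset shapes) before passing it to TPUStrategy.run.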
tensorflow/tensorflow
spam
Invalid
spam remove
tensorflow/tensorflow
invalid argument in example
Bug
Click to expand

Issue type: Bug
Source: binary
TensorFlow version: 2.11
Custom code: No
OS platform and distribution: Linux Ubuntu 22.04.1
Mobile device: No response
Python version: 3.10
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: 11.2
GPU model and memory: RTX 3090, 24 GB

Current behaviour:
This example returns an "invalid argument" message, but it still runs. I wonder whether that is a warning or an error.

Standalone code to reproduce the issue:

Relevant log output:
layout failed: INVALID_ARGUMENT: Size of values 0 does not match size of permutation 4 @ fanin shape in sequential_2/dropout/dropout/SelectV2_2-TransposeNHWCToNCHW-LayoutOptimizer
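The log line in the report above comes out of Grappler's layout optimizer, which checks that the values being permuted match the length of the NHWC-to-NCHW permutation before rewriting the graph. A rough, hypothetical sketch of that size check (not the actual TensorFlow code) that reproduces the "0 does not match 4" wording:

```python
def apply_permutation(values, perm):
    # When the sizes disagree, the layout optimizer refuses to permute.
    # In TensorFlow this is reported as INVALID_ARGUMENT but treated as a
    # warning: the optimizer skips the rewrite and the model keeps running.
    if len(values) != len(perm):
        raise ValueError(
            f"Size of values {len(values)} does not match "
            f"size of permutation {len(perm)}")
    return [values[i] for i in perm]

nhwc_to_nchw = [0, 3, 1, 2]
try:
    apply_permutation([], nhwc_to_nchw)  # empty values vs. a 4-element permutation
except ValueError as e:
    err = str(e)

print(err)  # Size of values 0 does not match size of permutation 4
```

That the message is logged rather than raised appears consistent with the observation that the example still runs: the graph simply falls back to the unoptimized layout.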
tensorflow/tensorflow
Wrong barplot x labels in
Bug
I think that in the notebook, this line:

plt.bar(commands, tf.nn.softmax(predictions[0]))

should be replaced with:

x_labels = ['down', 'go', 'left', 'no', 'right', 'stop', 'up', 'yes']
plt.bar(x_labels, tf.nn.softmax(predictions[0]))

As a result, the graph will correctly show the probabilities for all classes.
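The point of the proposed fix is that `plt.bar` needs one x label per class, aligned index-for-index with the softmax output. A small TensorFlow-free sketch of that alignment, using a plain-Python softmax (the prediction logits here are made up for illustration):

```python
import math

x_labels = ['down', 'go', 'left', 'no', 'right', 'stop', 'up', 'yes']

def softmax(logits):
    # Plain-Python stand-in for tf.nn.softmax over a single prediction row.
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

prediction = [0.1, 2.0, 0.3, 0.2, 0.1, 0.5, 0.4, 0.2]  # hypothetical logits
probs = softmax(prediction)

# plt.bar(x_labels, probs) only shows the right class names if the two
# sequences have equal length and matching order.
assert len(x_labels) == len(probs)
best = x_labels[probs.index(max(probs))]
print(best)  # 'go': index 1 holds the largest logit
```

If the labels are in a different order than the model's class indices, the bars still render — they are just attached to the wrong class names, which is exactly the bug the report describes.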
tensorflow/tensorflow
Ubuntu 20.04-based TensorFlow images don't include Python 3.9 as indicated in the Docker Hub docs
Bug
I believe there is either a build issue, or the Docker Hub docs need to be clarified. My understanding is that the images for the TensorFlow 2.10.0 release (and other recent releases) are guaranteed to include Python 3.9, based on the following excerpt from the Docker Hub docs: "Images built after Sept 2021 are based on Ubuntu 20.04 / Python 3.9 for Ubuntu 20-based images."

When using the 2.10.0 image, I find that the OS is correctly Ubuntu 20.04, but that the Python version is 3.8. Running the following commands indicates the absence of Python 3.9:

root@4fbf1b14a2dd:/# cat /etc/issue
Ubuntu 20.04.5 LTS \n \l
root@4fbf1b14a2dd:/# python3 -V
Python 3.8.10
root@4fbf1b14a2dd:/# ls /usr/bin/python*
/usr/bin/python /usr/bin/python3 /usr/bin/python3-config /usr/bin/python3.8 /usr/bin/python3.8-config

I looked through the issues/pull requests and through StackOverflow and found no reports of this issue, which I find surprising. That most likely means I am just misinterpreting the guarantee made by the docs, but maybe this is sufficient support to clarify the docs further if that is the case.

Standalone code to reproduce the issue:
docker run --rm -it tensorflow/tensorflow:2.10.0 /bin/bash
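A quick way to check a claim like the one above is to parse the interpreter's version string and compare it against the documented minimum. A small sketch, using the version string from the report (`parse_version` is a hypothetical helper, not part of any TensorFlow tooling):

```python
def parse_version(output):
    # Parse `python3 -V` output such as "Python 3.8.10" into (3, 8, 10).
    return tuple(int(part) for part in output.split()[1].split("."))

reported = parse_version("Python 3.8.10")  # what the 2.10.0 image prints
documented = (3, 9)                        # what the Docker Hub docs promise

# Tuple comparison handles two-digit minors correctly, e.g. (3, 10) > (3, 9).
meets_docs = reported[:2] >= documented
print(reported, meets_docs)  # (3, 8, 10) False
```

Run inside the container, the same comparison would flag the mismatch between the shipped interpreter and the documented Python 3.9 guarantee.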
tensorflow/tensorflow
null
Invalid
Image originally posted by twbent in
tensorflow/tensorflow
tf.io.decode_image fails to load certain PNG files
Bug
Click to expand

Issue type: Bug
Source: binary
TensorFlow version: 2.10.0 (also tested 2.11.0rc2)
Custom code: Yes
OS platform and distribution: Linux Ubuntu 22.04.1
Mobile device: No response
Python version: 3.10
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: 11.2 (also tested 11.6)
GPU model and memory: RTX 3090, 24 GB

Current behaviour:
I have two PNG files, both created by the PIL package. One of them, upon opening with tf.io.decode_image, is set to all zeros (the dimensions are OK). The image can be opened in PIL, just not in TensorFlow; it can also be opened in a viewer or by a simple Python PNG decoder. This happens both in eager mode and in a tf.function. I will upload the images mentioned in the next post.

Standalone code to reproduce the issue:

import tensorflow as tf

p_work = "lppid_011119852.png"
p_not_work = "lppid_023863769.png"

def process_image(path):
    buffer = tf.io.read_file(path)
    image = tf.io.decode_image(buffer)
    tf.print(tf.shape(image))
    tf.print(image)

process_image(p_work)
process_image(p_not_work)

Relevant log output: No response
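The report notes that the troublesome file still opens in "a simple Python PNG decoder". For illustration, here is a minimal parser in that spirit that validates the PNG signature and reads width/height from the IHDR chunk — no TensorFlow or PIL involved, with the byte layout following the PNG specification:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_dimensions(data):
    """Read (width, height) from the IHDR chunk, which the PNG spec places
    immediately after the 8-byte file signature."""
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    # Chunk layout: 4-byte big-endian length, 4-byte type, then chunk data.
    length, chunk_type = struct.unpack(">I4s", data[8:16])
    if chunk_type != b"IHDR" or length < 8:
        raise ValueError("missing IHDR chunk")
    width, height = struct.unpack(">II", data[16:24])
    return width, height

# Hand-built IHDR fragment: 3x2 pixels, 8-bit depth, truecolor (RGB).
ihdr_data = struct.pack(">II5B", 3, 2, 8, 2, 0, 0, 0)
fragment = PNG_SIGNATURE + struct.pack(">I", len(ihdr_data)) + b"IHDR" + ihdr_data
print(png_dimensions(fragment))  # (3, 2)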
tensorflow/tensorflow
tensorflow/python/data/experimental/kernel_tests/service:cross_trainer_cache_test fails on high CPU counts
Bug
Click to expand

Issue type: Bug
Source: source
TensorFlow version: git HEAD
Custom code: No
OS platform and distribution: Ubuntu 20.04
Mobile device: N/A
Python version: 3.8.10
Bazel version: 5.3.0
GCC/compiler version: 10.2.1
CUDA/cuDNN version: N/A
GPU model and memory: N/A

Current behaviour:
On a machine with a high CPU core count, //tensorflow/python/data/experimental/kernel_tests/service:cross_trainer_cache_test will fail due to prefetching more data than the test allows for.

Standalone code to reproduce the issue:

bazel test --test_timeout 500,900,-1,-1 --flaky_test_attempts=1 --test_output=all --cache_test_results=no --config=nonccl --config=mkl_aarch64_threadpool --copt=-mtune=generic --copt=-march=armv8-a --copt=-O3 --test_env=TF_ENABLE_ONEDNN_OPTS=1 --verbose_failures --build_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_aarch64 --test_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_aarch64 --build_tests_only --test_lang_filters=cc,py //tensorflow/python/data/experimental/kernel_tests/service:cross_trainer_cache_test

Relevant log output:

FAIL: testConcurrentReaders_test_mode_graph_tfapiversion_2 (__main__.CrossTrainerCacheTest)
CrossTrainerCacheTest.testConcurrentReaders_test_mode_graph_tfapiversion_2
testConcurrentReaders_test_mode_graph_tfapiversion_2(mode='graph', tf_api_version=2)
Traceback (most recent call last):
  File "/root/.cache/bazel/_bazel_root/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/cross_trainer_cache_test.runfiles/absl_py/absl/testing/parameterized.py", line 314, in bound_param_test
    return test_method(self, *testcase_params)
  File "/root/.cache/bazel/_bazel_root/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/cross_trainer_cache_test.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 360, in decorated
    execute_test_method()
  File "/root/.cache/bazel/_bazel_root/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/cross_trainer_cache_test.runfiles/org_tensorflow/tensorflow/python/framework/test_combinations.py", line 343, in execute_test_method
    test_method(**kwargs_to_pass)
  File "/root/.cache/bazel/_bazel_root/fbac33eb30dbfb6b11b15a7ff5ac830d/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/python/data/experimental/kernel_tests/service/cross_trainer_cache_test.runfiles/org_tensorflow/tensorflow/python/data/experimental/kernel_tests/service/cross_trainer_cache_test.py", line 136, in testConcurrentReaders
    self.assertEqual(self.evaluate(iterators[j]), i)
AssertionError: 26 != 0

Ran 9 tests in 4.935s
FAILED (failures=1)

2022-11-07 18:30:46.995018: I tensorflow/core/data/service/server_lib.cc:91] Shut down WorkerServer server running at port 46333
2022-11-07 18:30:46.995434: I tensorflow/core/data/service/server_lib.cc:91] Shut down DispatchServer server running at port 46249
tensorflow/tensorflow
ValueError: An initial_state was passed that is not compatible with cell.state_size
Bug
Click to expand

Issue type: Bug
Source: source
TensorFlow version: 1.19.2
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behaviour:
A bug happened. Currently I am trying to reimplement a seq2seq model with GRU. When I try to run the code of the decoder, I encounter:

ValueError: An initial_state was passed that is not compatible with cell.state_size. Received state_spec=ListWrapper([InputSpec(shape=(None, 9, 128), ndim=3), InputSpec(shape=(None, 128), ndim=2)]); however cell.state_size is [128]

Self-diagnosis:
[x] I have reviewed the documentation
[x] I have reviewed the wiki
[x] I have searched the issues for an answer to my question
[x] I have searched the web for an answer to my question

My configuration / related code:

def define_model(n_input, n_output, n_units):
    ay_seq_input = Input(shape=(n_input, 2), name='ay_seq_input')
    ay_seq_input_mask = Masking(mask_value=-99)
    ay_seq_embed = ay_seq_input_mask(ay_seq_input)
    company_code_input = Input(shape=(n_output,), name='company_input')
    company_code_embed = Embedding(200, 49, input_length=1, name='company_code_embed')(company_code_input)
    company_code_embed = Flatten()(company_code_embed)
    company_code_embed = RepeatVector(9)(company_code_embed)
    encoder = GRU(128, dropout=0.2, recurrent_dropout=0.2, return_sequences=True, return_state=True)
    _, state_h, state_c = encoder(ay_seq_input)
    encoder_state = [state_h, state_c]
    decoder_input = Input(shape=(None, n_output))
    decoder = GRU(128, dropout=0.2, recurrent_dropout=0.2, return_sequences=True, return_state=True)
    decoder_output = decoder(decoder_input, initial_state=encoder_state)
    lambd = Lambda(lambda x: concatenate(list(x)))
    flatten_output = lambd(decoder_output)
    decoder_dense1 = TimeDistributed(Dense(n_units, activation='relu'))
    dropout = Dropout(0.2)
    decoder_dense2 = TimeDistributed(Dense(n_output, activation='relu'))
    pay_output = decoder_dense2(dropout(decoder_dense1(flatten_output)))
    case_reserve_output = decoder_dense2(dropout(decoder_dense1(flatten_output)))
    model = Model([ay_seq_input, company_code_input, decoder_input], [pay_output, case_reserve_output])

Steps to reproduce the issue:
Line 18: decoder_output = decoder(decoder_input, initial_state=[state_h, state_c])

Current results (include screenshots where appropriate):
ValueError: An initial_state was passed that is not compatible with cell.state_size. Received state_spec=ListWrapper([InputSpec(shape=(None, 9, 128), ndim=3), InputSpec(shape=(None, 128), ndim=2)]); however cell.state_size is [128]

Expected results:
I want to know how to tackle this problem.

Relevant log output: No response
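The error message itself points at the likely cause: a GRU layer keeps a single state vector, so `cell.state_size` is `[128]`, while the code passes two initial states, `[state_h, state_c]` — an LSTM-style pair. A simplified, hypothetical sketch of the compatibility check Keras performs (not the actual Keras source):

```python
def validate_initial_state(cell_state_size, initial_state_shapes):
    # One supplied state tensor per entry in cell.state_size, each with a
    # matching width in its last dimension.
    if len(initial_state_shapes) != len(cell_state_size):
        raise ValueError(
            "An initial_state was passed that is not compatible with "
            f"cell.state_size: got {len(initial_state_shapes)} state tensors; "
            f"however cell.state_size is {cell_state_size}")
    for size, shape in zip(cell_state_size, initial_state_shapes):
        if shape[-1] != size:
            raise ValueError(f"state width {shape[-1]} != {size}")

# A GRU exposes one state of width 128; passing [state_h, state_c] fails.
try:
    validate_initial_state([128], [(None, 9, 128), (None, 128)])
except ValueError as e:
    err = str(e)
print(err)

# Passing a single state of the right width is accepted.
validate_initial_state([128], [(None, 128)])
```

Under that reading, dropping `state_c` and passing `initial_state=[state_h]` (or switching the encoder to an LSTM if both states are genuinely wanted) should resolve the mismatch.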
tensorflow/tensorflow
tf.raw_ops.StatelessMultinomial produces inconsistent results on CPU/GPU
Bug
Click to expand

Issue type: Bug
Source: binary
TensorFlow version: 2.9.2
Custom code: Yes
OS platform and distribution: Colab
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: 11.2.152 / 8.1.1
GPU model and memory: No response

Current behaviour:
Stateless sampling produces different results on CPU/GPU with the same seed. Related: #58445

Standalone code to reproduce the issue:

with tf.device("CPU:0"):
    samples = tf.raw_ops.StatelessMultinomial(
        logits=tf.math.log([[0.5, 0.5]]), num_samples=10, seed=[7, 17])
    print(samples)

with tf.device("GPU:0"):
    samples = tf.raw_ops.StatelessMultinomial(
        logits=tf.math.log([[0.5, 0.5]]), num_samples=10, seed=[7, 17])
    print(samples)

Relevant log output:

tf.Tensor([[1 0 1 0 0 0 1 1 0 0]], shape=(1, 10), dtype=int64)
tf.Tensor([[0 0 0 1 0 1 0 1 1 1]], shape=(1, 10), dtype=int64)
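Each output above is individually reproducible, which suggests two kernels that are each deterministic but consume the stateless random stream differently. A TensorFlow-free analogy using Python's `random` module — the seed mixing here is made up for the sketch, and TensorFlow actually uses a Philox-based counter generator, not this scheme:

```python
import random

def toy_stateless_multinomial(seed, num_samples, order="forward"):
    # Fully determined by the seed pair, like a stateless TF op.
    rng = random.Random(seed[0] * 1_000_003 + seed[1])  # ad-hoc seed mixing
    draws = [rng.random() for _ in range(num_samples)]
    if order == "reversed":
        # A second, equally deterministic way to consume the same stream,
        # standing in for a differently-ordered GPU kernel.
        draws = draws[::-1]
    return [1 if d < 0.5 else 0 for d in draws]

a = toy_stateless_multinomial((7, 17), 10)
b = toy_stateless_multinomial((7, 17), 10)
c = toy_stateless_multinomial((7, 17), 10, order="reversed")

print(a == b)  # True: same seed, same consumption order -> identical samples
print(a == c)  # may be False: same seed, different consumption order
```

This is why "stateless" guarantees determinism per device but not, by itself, bit-identical samples across device kernels — which is the behaviour the report observes.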