| repository | issue title | labels | body |
|---|---|---|---|
tensorflow/tensorflow | disable_resource_variables is deprecated | Bug | Click to expand. Issue type: Bug. Source: source. TensorFlow version: tf 2.6.0. Custom code: yes. OS platform and distribution: Linux Ubuntu 16.04. Mobile device: no response. Python version: 3.8.10. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: A bug happened. I was running my code normally, but all of a sudden I received a warning that keeps my code from compiling, and my program stops completely. My source code is written in TF v1 and I've been running it on TF v2, so apparently I need tf.disable_resource_variables for my program to work. Standalone code to reproduce the issue: I receive this warning: "WARNING:tensorflow: From /home/mdee/.local/lib/python3.8/site-packages/tensorflow/python/compat/v2_compat.py:101: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version. Instructions for updating: non-resource variables are not supported in the long term." This is exactly what I receive, and I read on the tensorflow.org website that if my code needs tf.disable_resource_variables to be called in order to work properly, I should file a bug. Please let me know how I can run my code smoothly again. Relevant log output: no response. |
tensorflow/tensorflow | The height and width must be changed in the TensorFlow documentation on the website | Bug | Hi, in the tf documentation there is a mistake: the order of height and width is wrong. It says [batch, width, height, features], but it must be [batch, height, width, features] to comply with the channels-last (NHWC) notation. |
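The NHWC point in the row above can be made concrete with a small numpy sketch (hypothetical shapes, not taken from the report):

```python
import numpy as np

# Channels-last (NHWC) layout: [batch, height, width, channels].
batch, height, width, channels = 2, 4, 6, 3
images = np.zeros((batch, height, width, channels), dtype=np.float32)

# Axis 1 indexes rows (height), axis 2 indexes columns (width),
# which is why [batch, width, height, features] in the docs is wrong.
assert images.shape == (2, 4, 6, 3)
row = images[0, 0]  # one row of pixels: (width, channels)
assert row.shape == (6, 3)
```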
tensorflow/tensorflow | Fix a typo | Bug | Fix a small typo in graph_properties.cc. Fixes issue #56618. |
tensorflow/tensorflow | A small typo in graph_properties.cc | Bug | Click to expand. Issue type: Other. Source: source. TensorFlow version: tf 2.10.0. Custom code: yes. OS platform and distribution: Linux Ubuntu 20.04. Mobile device: no response. Python version: 3.8. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: l2145: "conflic" should be "conflict". Standalone code to reproduce the issue: I think any simple code will output this typo. Relevant log output: no response. |
tensorflow/tensorflow | CSVs generated by the code in the "Human Pose Classification with MoveNet and TensorFlow Lite" example are inconsistent | Bug | Click to expand. Issue type: Documentation Bug. Source: source. TensorFlow version: 2.9.1. Custom code: no. OS platform and distribution: Windows. Mobile device: no response. Python version: 3.8.0. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: Running "Part 1: Preprocess the input images" after running the preparation code generates CSVs different from the CSVs in the section "(Optional) Download the preprocessed dataset if you do not run Part 1". The CSVs should be the same, but the CSVs generated by running the code have discrete keypoint coordinates, while the CSVs linked in the above section have decimal keypoint coordinates. I believe that the code in the example is missing some scaling code in the def process(self, per_pose_class_limit=None, detection_threshold=0.1) function, under the "get landmarks and scale it to the same size as the input image" comment. I don't think the landmarks are actually scaled: pose_landmarks = np.array([[keypoint.coordinate.x, keypoint.coordinate.y, keypoint.score] for keypoint in person.keypoints], dtype=np.float32). Yoga CSVs generated by running the code: yoga_train_data.csv, yoga_test_data.csv. Yoga CSVs linked in the code for skipping Part 1, which claim to be identical: yoga_train_data.csv, yoga_test_data.csv. Standalone code to reproduce the issue: (none). Relevant log output: no response. |
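The missing scaling the reporter describes can be sketched in numpy. Everything here is hypothetical (the keypoint values, the image size, and the (x, y) ordering are assumptions, not the tutorial's actual code); the point is only that normalized coordinates must be multiplied by the image dimensions to get pixel coordinates:

```python
import numpy as np

# Hypothetical keypoints normalized to [0, 1], stored as (x, y) pairs.
norm_keypoints = np.array([[0.5, 0.25], [0.125, 0.75]], dtype=np.float32)

# Scale to pixel coordinates for a (height, width) input image.
height, width = 256, 192
scaled = norm_keypoints * np.array([width, height], dtype=np.float32)

assert scaled[0].tolist() == [96.0, 64.0]  # 0.5 * 192, 0.25 * 256
```

Without this multiplication the stored coordinates stay in [0, 1], which would explain the decimal-vs-pixel discrepancy between the two sets of CSVs.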
tensorflow/tensorflow | OverflowError: int too large to convert to float | Bug | Click to expand. Issue type: Bug. Source: binary. TensorFlow version: 2.9.1. Custom code: yes. OS platform and distribution: Windows 1. Mobile device: no response. Python version: 3.9.5. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: I want to train my machine learning model in this environment; unfortunately, this bug appears. I don't know if this is my issue or a bug in TensorFlow's code. Standalone code to reproduce the issue: from collections import Counter; from keras import layers; import numpy as np; import tensorflow as tf; from rl.agents.dqn import DQNAgent; from rl.policy import LinearAnnealedPolicy, EpsGreedyQPolicy; from rl.memory import SequentialMemory; import keras; import random; import gym. class Env(gym.Env): def __init__(self): self.observation_space = gym.spaces.MultiDiscrete([50 for _ in range(9)] + [60000, 60000, 60000]); self.action_space = gym.spaces.Discrete(12); self.stepsLeft = 20000; self.state = self.reset(). # stepMapping is a variable showing what each action does; this prevents a wall of 12 if statements. The first 3 numbers are the row you take, the second 3 are the one you perform the operation on, and the 7th number means either add or subtract: self.stepMapping = {0: (0, 1, 2, 3, 4, 5, 1), 1: (0, 1, 2, 6, 7, 8, 1), 2: (3, 4, 5, 0, 1, 2, 1), 3: (3, 4, 5, 6, 7, 8, 1), 4: (6, 7, 8, 0, 1, 2, 1), 5: (6, 7, 8, 3, 4, 5, 1), 6: (0, 1, 2, 3, 4, 5, -1), 7: (0, 1, 2, 6, 7, 8, -1), 8: (3, 4, 5, 0, 1, 2, -1), 9: (3, 4, 5, 6, 7, 8, -1), 10: (6, 7, 8, 0, 1, 2, -1), 11: (6, 7, 8, 3, 4, 5, -1)}. def step(self, action): reward = -1; done = False; self.stepsLeft -= 1; self.state[self.stepMapping[action][3]] += self.state[self.stepMapping[action][0]] * self.stepMapping[action][6]; self.state[self.stepMapping[action][4]] += self.state[self.stepMapping[action][1]] * self.stepMapping[action][6]; self.state[self.stepMapping[action][5]] += self.state[self.stepMapping[action][2]] * self.stepMapping[action][6]. # This weird reward system should make the AI lean towards the solution and not astronomically big numbers: wholeSystem = abs(self.state[0]) + abs(self.state[1]) + abs(self.state[2]) + abs(self.state[3]) + abs(self.state[4]) + abs(self.state[5]) + abs(self.state[6]) + abs(self.state[7]) + abs(self.state[8]); if wholeSystem > 10000: reward = -10; if wholeSystem < 400: reward = 1; if wholeSystem < 200: reward = 5; if wholeSystem < 100: reward = 10; if wholeSystem < 50: reward = 20; if self.stepsLeft <= 0: done = True; if self.do(): done = True; reward = 100000; print("holy shit"); return self.state, reward, done. def reset(self): # In case you don't understand my genius way of calling variables: lsq = left side of equation, rsq = right side of equation. x, y, z = random.randint(0, 200), random.randint(0, 200), random.randint(0, 200); lsq = [random.randint(0, 49) for _ in range(9)]; rsq = [lsq[0] * x + lsq[1] * y + lsq[2] * z, lsq[3] * x + lsq[4] * y + lsq[5] * z, lsq[6] * x + lsq[7] * y + lsq[8] * z]; return lsq + rsq. def do(self): state = [self.state[0:3], self.state[3:6], self.state[6:9]]; find = [0, 1, 2]; savedIndexes = []; try: for i in range(3): if Counter(state[i])[0] == 2: find.pop(find.index(2)); savedIndexes = np.where(np.array(state[i]) == 0)[0].tolist(); for i in range(3): if 0 not in state[i]: find.pop(find.index(0)); if Counter(state[i])[0] == 1: if state[i].index(0) in savedIndexes: find.pop(find.index(1)); else: return False; except ValueError: return False; if not find: return True; else: return False. env = Env(); model = keras.Sequential(); model.add(layers.Flatten(input_shape=(1, 12))); model.add(layers.Dense(256, activation="relu", batch_input_shape=(12,))); model.add(layers.Dense(128, activation="relu")); model.add(layers.Dense(12)); model.compile(loss=keras.losses.SparseCategoricalCrossentropy(from_logits=False), optimizer=keras.optimizers.Adam(lr=0.001), metrics=["accuracy"]); policy = LinearAnnealedPolicy(EpsGreedyQPolicy(), attr="eps", value_max=1.0, value_min=0.1, value_test=0.2, nb_steps=10000); memory = SequentialMemory(limit=50000, window_length=1); dqn = DQNAgent(model=model, memory=memory, policy=policy, nb_actions=12, nb_steps_warmup=10, target_model_update=1e-2); dqn.compile(tf.keras.optimizers.Adam(learning_rate=1e-4), metrics=["mae"]); dqn.fit(env, nb_steps=2000000, visualize=False, verbose=1); model.save("workingmodel.h5"); scores = dqn.test(env, nb_episodes=1000000, visualize=True); print(np.mean(scores.history["episode_reward"])). Relevant log output: D:\PycharmProjects\AI\venv\lib\site-packages\keras\optimizers\optimizer_v2\adam.py:110: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs). Training for 2000000 steps. Interval 1 (0 steps performed). D:\PycharmProjects\AI\venv\lib\site-packages\keras\engine\training_v1.py:2067: UserWarning: `Model.state_updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as updates are applied automatically. updates = self.state_updates. 1/10000, ETA: 14:50, reward: -1.0000. D:\PycharmProjects\AI\venv\lib\site-packages\rl\memory.py:37: UserWarning: Not enough entries to sample without replacement. Consider increasing your warm-up phase to avoid oversampling. rl\memory.py:38: DeprecationWarning: This function is deprecated. Please call randint(1, 10 + 1) instead. batch_idxs = np.random.random_integers(low, high - 1, size=size). (The same DeprecationWarning repeats with the upper bound increasing from 10 to 31 as the memory fills.) 12/10000, ETA: 6:11, reward: -0.6667. 29/10000, ETA: 3:25, reward: -0.8621. 6720/10000, ETA: 25s, reward: -9.9231. Traceback (most recent call last): File "D:\PycharmProjects\AI\venv\lib\site-packages\IPython\core\interactiveshell.py", line 3398, in run_code: exec(code_obj, self.user_global_ns, self.user_ns); File "<ipython-input>", line 1, in <module>: runfile("D:\PycharmProjects\AI\linear algebra ai.py", wdir="D:\PycharmProjects\AI"); File "D:\JetBrains\PyCharm Community Edition 2021.2\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile: pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script; File "...\_pydev_imps\_pydev_execfile.py", line 18, in execfile: exec(compile(contents + "\n", file, "exec"), glob, loc); File "D:\PycharmProjects\AI\linear algebra ai.py", line 95, in <module>: dqn.fit(env, nb_steps=2000000, visualize=False, verbose=1); File "...\rl\core.py", line 168, in fit: action = self.forward(observation); File "...\rl\agents\dqn.py", line 224, in forward: q_values = self.compute_q_values(state); File "...\rl\agents\dqn.py", line 68, in compute_q_values: q_values = self.compute_batch_q_values([state]).flatten(); File "...\rl\agents\dqn.py", line 63, in compute_batch_q_values: q_values = self.model.predict_on_batch(batch); File "...\keras\engine\training_v1.py", line 1200, in predict_on_batch: outputs = self.predict_function(inputs); File "...\keras\backend.py", line 4269, in __call__: array_vals.append(np.asarray(value)); OverflowError: int too large to convert to float. |
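The OverflowError in the row above is reproducible without TensorFlow: Python integers are unbounded, so an environment that lets its state grow without limit can hand Keras an int that no IEEE-754 double can hold. A minimal sketch (the clamp bound is an illustrative choice, not from the report):

```python
# Python ints are unbounded, but doubles max out near 1.8e308,
# so converting a sufficiently large int raises OverflowError.
big = 10 ** 400
try:
    float(big)
    raised = False
except OverflowError:
    raised = True
assert raised

# Clamping observations before any float conversion avoids the crash.
safe = float(min(big, 10 ** 300))
assert safe == 1e300
```

Clamping (or ending the episode when the state explodes) is the usual fix on the environment side; the `np.asarray(value)` call in keras/backend.py is just where the oversized int finally meets a float dtype.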
tensorflow/tensorflow | tensorflow-macos 2.9.2 missing ops file | Bug | Click to expand. Issue type: Bug. Source: binary. TensorFlow version: 2.9.2. Custom code: no. OS platform and distribution: macOS 13. Mobile device: no response. Python version: 3.9. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: On trying to import Horovod with TensorFlow, the following bug shows up, which states that TensorFlow 2.9.2 was missing some ops when it was built. Standalone code to reproduce the issue: import horovod.tensorflow. Traceback (most recent call last): File "<stdin>", line 1, in <module>; File "/Users/bradbury_11/miniforge3/lib/python3.9/site-packages/horovod/tensorflow/__init__.py", line 26, in <module>: from horovod.tensorflow import elastic; File ".../horovod/tensorflow/elastic.py", line 24, in <module>: from horovod.tensorflow.functions import broadcast_object, broadcast_object_fn, broadcast_variables; File ".../horovod/tensorflow/functions.py", line 24, in <module>: from horovod.tensorflow.mpi_ops import allgather, broadcast, broadcast_; File ".../horovod/tensorflow/mpi_ops.py", line 53, in <module>: raise e; File ".../horovod/tensorflow/mpi_ops.py", line 50, in <module>: MPI_LIB = _load_library('mpi_lib' + get_ext_suffix()); File ".../horovod/tensorflow/mpi_ops.py", line 45, in _load_library: library = load_library.load_op_library(filename); File ".../tensorflow/python/framework/load_library.py", line 54, in load_op_library: lib_handle = py_tf.TF_LoadLibrary(library_filename); tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/bradbury_11/miniforge3/lib/python3.9/site-packages/horovod/tensorflow/mpi_lib.cpython-39-darwin.so, 0x0006): weak-def symbol not found '_ZN3xla14HloInstruction5VisitIPKS0_EEN10tensorflow6StatusEPNS_17DfsHloVisitorBaseIT_EE'. Relevant log output: no response. |
tensorflow/tensorflow | Different results on 2.9.1 and 2.8.0 | Bug | Click to expand. Issue type: Bug. Source: binary. TensorFlow version: tf 2.8.0, tf 2.9.1. Custom code: yes. OS platform and distribution: Windows 11 x86. Mobile device: no response. Python version: 3.8.10. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: CUDA 11.3, cuDNN 8.2.1. GPU model and memory: NVIDIA GeForce RTX 3060 Laptop GPU, 6 GB. Current behaviour: The model gives different results on versions 2.8.0 and 2.9.1. model = keras.applications.EfficientNetB0(classes=1000). On 2.8.0, print(model.predict(np.zeros((1, 224, 224, 3)))[0][0]) gives 0.0010812443; on 2.9.1, the same call gives 0.00066214177. The same happens with other models. Standalone code to reproduce the issue: import numpy as np; from tensorflow import keras; model = keras.applications.EfficientNetB0(classes=1000); print(model.predict(np.zeros((1, 224, 224, 3)))[0][0]). Relevant log output: no response. |
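When comparing outputs across library versions, exact equality is too strict for float32, so a tolerance check is the right framing. A small sketch using the two values quoted in the row above (the 0.3 threshold is an illustrative cutoff, not a TensorFlow guarantee) shows the discrepancy is far beyond rounding noise:

```python
import numpy as np

pred_tf_2_8_0 = 0.0010812443
pred_tf_2_9_1 = 0.00066214177

# Relative difference is ~39%: orders of magnitude above float32
# rounding noise, so this is a genuine behavioural change.
rel_diff = abs(pred_tf_2_8_0 - pred_tf_2_9_1) / abs(pred_tf_2_8_0)
assert rel_diff > 0.3
assert not np.isclose(pred_tf_2_8_0, pred_tf_2_9_1, rtol=1e-3)
```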
tensorflow/tensorflow | ktonthat: tech writer testing this template | Bug | Click to expand. Issue type: Documentation Bug. Source: source. TensorFlow version: 2.8. Custom code: yes. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: Ignore; testing the template for documentation. Standalone code to reproduce the issue: Ignore; testing the template for documentation. Relevant log output: no response. |
tensorflow/tensorflow | TensorFlow load_weights() function works in a Jupyter notebook but not in a script | Bug | Click to expand. Issue type: Bug. Source: source. TensorFlow version: 2.9.1. OS platform and distribution: Windows. Python version: 3.8.10. GCC/compiler version: no response. CUDA/cuDNN version: 11.2. GPU model and memory: no response. Current behaviour: The code below, when run in a Jupyter notebook inside VSCode, runs as intended, but fails when run inside VSCode as a .py file. I am using a virtual environment and only installed tensorflow, transformers and ipykernel (for use as a Jupyter kernel). I get the following error when running it as a .py file. Standalone code to reproduce the issue: import tensorflow as tf; from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, DataCollatorWithPadding; model_ckpt = "distilbert-base-uncased"; num_labels = 6; batch_size = 16; tokenizer = AutoTokenizer.from_pretrained(model_ckpt); def predict(text: str) -> int: tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_ckpt, num_labels=num_labels); tf_model.load_weights("checkpoints/my_checkpoint"); input_text_tokenized = tokenizer.encode(text, truncation=True, padding=True, return_tensors="tf"); prediction = tf_model(input_text_tokenized); prediction_logits = prediction[0]; emotion = np.argmax(prediction_logits[0]); return prediction; x = predict("I am in despair"). Relevant log output: raise errors_impl.NotFoundError(None, None, error_message); tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for checkpoints/my_checkpoint. SOLVED: TensorFlow does not work well with relative paths; the solution was to provide an absolute path to the checkpoint when running it in the terminal. |
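The notebook-vs-script difference in the row above is almost always a working-directory difference, and the reporter's own fix (use an absolute path) can be sketched with the standard library alone. The `checkpoints/my_checkpoint` path is the one from the snippet; resolving against the script's own location is an assumption about where the checkpoint lives:

```python
import os

# Hypothetical relative checkpoint path, as in the snippet above.
rel_ckpt = os.path.join("checkpoints", "my_checkpoint")

# Resolving against the script's own directory (not the current working
# directory) makes the path stable no matter where Python was launched.
base_dir = (os.path.dirname(os.path.abspath(__file__))
            if "__file__" in globals() else os.getcwd())
abs_ckpt = os.path.join(base_dir, rel_ckpt)

assert os.path.isabs(abs_ckpt)
```

Passing `abs_ckpt` to `load_weights` removes the dependence on where the interpreter happened to start.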
tensorflow/tensorflow | tf.linalg.qr does not support the half datatype in the latest version | Bug | Click to expand. Issue type: Documentation Bug. Source: binary. TensorFlow version: tf 2.9.1. Custom code: yes. OS platform and distribution: Linux Ubuntu 16.04. Mobile device: no response. Python version: 3.7.13. Bazel version: N/A. GCC/compiler version: N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A. Current behaviour: tf.linalg.qr raises an exception when the input is of the half data type, as follows: NotFoundError: Could not find device for node: {{node Qr}} = Qr[T=DT_HALF, full_matrices=false]. All kernels registered for op Qr: device='CPU'; T in [DT_FLOAT]; device='CPU'; T in [DT_DOUBLE]; device='CPU'; T in [DT_COMPLEX64]; device='CPU'; T in [DT_COMPLEX128]; device='GPU'; T in [DT_FLOAT]; device='GPU'; T in [DT_DOUBLE]; device='GPU'; T in [DT_COMPLEX128] [Op:Qr]. But the documentation describes this API as supporting the half datatype. Standalone code to reproduce the issue: import tensorflow as tf; a = tf.random.uniform((10, 10), dtype="half"); tf.linalg.qr(a). Relevant log output: NotFoundError traceback through /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py, in raise_from_not_ok_status (lines 7162-7164): e.message += (" name: " + name if name is not None else ""); raise core._status_to_exception(e) from None  # pylint: disable=protected-access; ending with the same "Could not find device for node Qr" message quoted above. |
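Given the kernel list in the row above (float32/float64/complex only), a common workaround until a half kernel exists is to upcast, factorize, and downcast. A numpy sketch of the same idea (numpy standing in for the TF call; the tolerance is an assumption based on float16 resolution):

```python
import numpy as np

a_half = np.random.rand(10, 10).astype(np.float16)

# No half-precision QR kernel: upcast to float32, factorize, downcast.
q, r = np.linalg.qr(a_half.astype(np.float32))
reconstructed = (q @ r).astype(np.float16)

# Q @ R reproduces the input to within float16 resolution (~1e-3).
assert np.allclose(reconstructed, a_half, atol=1e-2)
```

The equivalent in TensorFlow would be `tf.linalg.qr(tf.cast(a, tf.float32))` followed by casting the factors back, at the cost of the extra memory for the float32 copy.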
tensorflow/tensorflow | experimental rostam | Invalid | The cosine embedding loss in torch is given by: loss(x, y) = 1 - cos(x1, x2) if y = 1, and max(0, cos(x1, x2) - margin) if y = -1. The idea is to give it a pair of vectors and a response value (1 or -1) depending on whether they belong to the same group or not. First, let's generate a dataset of 20 observations, each one a length-100 vector. We also sample, for each observation, whether it belongs to group "a" or group "b": library(torch); library(ggplot2)  # going to be used for plotting; x <- torch_randn(20, 100); y <- sample(c("a", "b"), replace = TRUE, size = x$shape[1]). Next we create a torch dataset that does the following: every time we ask for a new item i, take the observation corresponding to item i in the dataset we created previously; with probability 0.5, select an observation from the same group to be its pair, otherwise select an observation from a different group; return both selected observations and the objective value: -1 if the observations are not from the same group and 1 if they are. data <- dataset(initialize = function(x, y, rate = 0.5) { self$x <- x; self$y <- y; self$rate <- rate }, .getitem = function(i) { lab <- self$y[i]; if (self$rate > runif(1)) { j <- sample(which(self$y == lab), 1); obj <- 1 } else { j <- sample(which(self$y != lab), 1); obj <- -1 }; list(x1 = self$x[i, ], x2 = self$x[j, ], obj = obj) }, .length = function() { x$shape[1] }). Initialize the dataset and dataloader: d <- data(x, y); dl <- dataloader(d, batch_size = 5). The model we are going to define is a dimensionality-reduction model: it will take the observation space from 100 dimensions to only 2 (R^100 -> R^2). It does that via a linear model: model <- nn_linear(100, 2). We create a plot utility to plot observations in the model space: make_plot <- function(model, x, y) { with_no_grad({ as <- as.data.frame(as.matrix(model(x[y == "a", ]))); as$class <- "a"; bs <- as.data.frame(as.matrix(model(x[y == "b", ]))); bs$class <- "b"; ggplot(rbind(as, bs), aes(x = V1, y = V2, color = class)) + geom_point() + xlim(-2, 2) + ylim(-2, 2) }) }. We now fit this model for 100 epochs, saving intermediary plots: opt <- optim_adam(model$parameters); criterion <- nn_cosine_embedding_loss(); plots <- list(); for (epoch in 1:100) { coro::loop(for (b in dl) { r1 <- model(b$x1); r2 <- model(b$x2); opt$zero_grad(); loss <- criterion(r1, r2, b$obj); loss$backward(); opt$step() }); plots[[length(plots) + 1]] <- make_plot(model, x, y) }. And we can finally observe how the observations evolve in the model space during training: gifski::save_gif(delay = 0.1, lapply(seq_along(plots), function(x) { p <- plots[[x]] + ggtitle(label = sprintf("epoch: %d", x)); print(p) })). (animation gif) /Users/dfalbel/Documents/dfalbel.github.io/post/2021-04-15-cosine-embedding-loss-in-torch |
tensorflow/tensorflow | nightly | Invalid | |
tensorflow/tensorflow | tanh gives out-of-range output | Bug | Click to expand. Issue type: Bug. Source: source. TensorFlow version: 2.8.2. Custom code: no. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: The tanh activation function is supposed to give an output in the range [-1, 1], but in this instance the output would sometimes have values in the range [-9.999, 9.9999]. Below I have attached a link to a repo where you can see the model architecture, the input, and the output of the same. The model is also saved and uploaded in the same repo. The notebook was run in Colab and in a local environment, where I faced similar results. Standalone code to reproduce the issue: (none). Relevant log output: no response. |
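The mathematical claim in the row above is easy to verify in isolation: tanh maps every real input into [-1, 1], so values like 9.999 cannot come from the activation itself (they would point at a different layer, a fused op, or a mislabelled output). A numpy sketch:

```python
import numpy as np

# tanh maps every real input into the closed interval [-1, 1].
x = np.linspace(-100.0, 100.0, 10001).astype(np.float32)
y = np.tanh(x)

assert y.min() >= -1.0 and y.max() <= 1.0
# No input, however extreme, produces |tanh(x)| > 1.
assert not np.any(np.abs(y) > 1.0)
```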
tensorflow/tensorflow | Update the expected behavior in the tf docs | Bug | In tf.image.resize_with_pad, if the value given for target_width and target_height is negative or zero, it raises ValueError. It raises InvalidArgumentError when a certain positive value is given for target_width (100). Fixes #56333. |
tensorflow/tensorflow | Putting two models in another model, the two models' variable names aren't under the outer model's name scope | Bug | Click to expand. Issue type: Bug. Source: source. TensorFlow version: tf 2.5.0. Custom code: yes. OS platform and distribution: macOS, Linux. Mobile device: no response. Python version: 3.8.5. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: Putting two models in another model, the two models' variable names aren't under the outer model's name scope. 1. class CLIP inherits from keras.Model; 2. class CLIPTextTransformer inherits from keras.Model; 3. class CLIPVisionTrransformer inherits from keras.Model. Standalone code to reproduce the issue: The source code is as follows: class CLIP(PretrainModel): config_cls = CLIPConfig; def __init__(self, name: str = "clip123", **kwargs): super(CLIP, self).__init__(name=name, **kwargs); self.text_transformer = CLIPTextTransformer(config=self.config.text_transformer, name="text_transformer"); self.vision_transformer = CLIPVisionTrransformer(config=self.config.vision_transformer, name="vision_transformer"); self.visual_projection = tf.keras.layers.Dense(units=self.config.projection_dim, kernel_initializer=get_initializer(self.vision_transformer.config.hidden_size ** -0.5), use_bias=False, name="visual_projection"); self.text_projection = tf.keras.layers.Dense(units=self.config.projection_dim, kernel_initializer=get_initializer(self.text_transformer.config.hidden_size ** -0.5), use_bias=False, name="text_projection"). Relevant log output: no response. |
tensorflow/tensorflow | Change the documentation for TensorArray | Bug | Fixes #56272. I'm not totally sure about the wording here, so feel free to suggest changes. |
tensorflow/tensorflow | TensorFlow Lite label_image abort occurs with the XNNPACK delegate option | Bug | Click to expand. Issue type: Bug. Source: source. TensorFlow version: v2.9.1 or master. Custom code: no. OS platform and distribution: Raspberry Pi OS (Bullseye). Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: gcc version 10.2.1 20210110 (Debian 10.2.1-6). CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: Build TensorFlow Lite and label_image from source with CMake, then execute with the XNNPACK delegate option: ./label_image --tflite_model /tmp/mobilenet_v1_1.0_224.tflite --labels /tmp/labels.txt --image tensorflow/lite/examples/label_image/testdata/grace_hopper.bmp --xnnpack_delegate 1. Result: "Type mismatch while accessing parameter. Aborted". Standalone code to reproduce the issue: Clone the repository: git clone -b v2.9.1. Build TensorFlow Lite and label_image: mkdir build; cd build; cmake ../tensorflow/tensorflow/lite; cmake --build . -j$(nproc); cmake --build . -j$(nproc) -t label_image. Download the tflite model: wget; tar xf mobilenet_v1_1.0_224.tgz; wget; tar xf mobilenet_v1_1.0_224_frozen.tgz; cp mobilenet_v1_1.0_224/labels.txt . Execute label_image with the use-XNNPACK option: ./examples/label_image/label_image --tflite_model ./mobilenet_v1_1.0_224.tflite --labels ./labels.txt --image ../tensorflow/tensorflow/lite/examples/label_image/testdata/grace_hopper.bmp. Relevant log output: no response. |
tensorflow/tensorflow | There is a problem quantizing a model with two inputs which have different dimensions | Bug | Click to expand. Issue type: Bug. Source: binary. TensorFlow version: tf 2.9. Custom code: yes. OS platform and distribution: Windows 10. Mobile device: no response. Python version: 3.9. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: A bug happened. My model has two inputs: the first input has dimensions (1, 1, 257), the second input has dimensions (1, 2, 128, 2). I want to quantize my model with an int16 input/output type and an int8 weight type. When I use my code to quantize my model, an error occurs. Standalone code to reproduce the issue: def representative_dataset(): for _ in range(100): data1 = np.random.rand(1, 2, 128, 2); data2 = np.random.rand(1, 1, 257); data3 = {"input_2": data2.astype(np.float32), "input_3": data1.astype(np.float32)}; data3 = [data2.astype(np.float32), data1.astype(np.float32)]; yield data3. if use_dynamic_range_quant: converter.optimizations = [tf.lite.Optimize.DEFAULT]; converter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]; converter.inference_input_type = tf.int16; converter.inference_output_type = tf.int16; converter.representative_dataset = representative_dataset; tflite_model = converter.convert(). Relevant log output: The error is like this: File "D:\ProgramFiles\Anaconda3\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py", line 129: self._calibrator.Prepare([list(s.shape) for s in sample]); AttributeError: 'list' object has no attribute 'shape'. |
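The AttributeError in the row above ('list' object has no attribute 'shape') is what the calibrator raises when a yielded sample is not a flat list of arrays, one per model input. A minimal sketch of a generator shape that the calibrator can iterate (the input order shown here is an assumption; it must match the model's actual input order):

```python
import numpy as np

def representative_dataset():
    # Each yield is ONE calibration sample: a flat list of float32 arrays,
    # one array per model input, in the model's input order.
    # Shapes are taken from the report above: (1, 1, 257) and (1, 2, 128, 2).
    for _ in range(100):
        yield [
            np.random.rand(1, 1, 257).astype(np.float32),
            np.random.rand(1, 2, 128, 2).astype(np.float32),
        ]

sample = next(representative_dataset())
assert [a.shape for a in sample] == [(1, 1, 257), (1, 2, 128, 2)]
assert all(a.dtype == np.float32 for a in sample)
```

Yielding a nested list (or a dict where a flat list is expected) makes `s.shape` fail inside the calibrator, because `s` is then itself a list rather than an ndarray.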
tensorflowtensorflow | Simple audio recognition (recognizing keywords): "Image must have either 3 or 4 dimensions" | Bug | Issue type: bug. Source: binary. TensorFlow version: tf 2.7.0. Custom code: no. OS platform and distribution: macOS Monterey 12.4. Mobile device: no response. Python version: Python 3.10.4. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: I'm following the TensorFlow tutorial on simple audio recognition (recognizing keywords), and when training the model I get the error below after the first epoch. Standalone code to reproduce the issue: just follow the code in the tutorial. Relevant log output:

```
Epoch 1/10
100/100 - ETA: 0s - loss: 1.7273 - accuracy: 0.3780
WARNING:tensorflow:Model was constructed with shape (None, 124, 129, 1) for input KerasTensor(type_spec=TensorSpec(shape=(None, 124, 129, 1), dtype=tf.float32, name='input_1'), name='input_1', description="created by layer 'input_1'"), but it was called on an input with incompatible shape (None, None).
Traceback (most recent call last):
  File "/Users/feiticeir0/scripts/audio_recognition/audio_classification.py", line 283, in <module>
    history = model.fit(
  File "/opt/miniconda3/envs/audio_recognition/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/opt/miniconda3/envs/audio_recognition/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1129, in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

    File "/opt/miniconda3/envs/audio_recognition/lib/python3.10/site-packages/keras/engine/training.py", line 1366, in test_function
        return step_function(self, iterator)
    File "/opt/miniconda3/envs/audio_recognition/lib/python3.10/site-packages/keras/engine/training.py", line 1356, in step_function
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/opt/miniconda3/envs/audio_recognition/lib/python3.10/site-packages/keras/engine/training.py", line 1349, in run_step
        outputs = model.test_step(data)
    File "/opt/miniconda3/envs/audio_recognition/lib/python3.10/site-packages/keras/engine/training.py", line 1303, in test_step
        y_pred = self(x, training=False)
    File "/opt/miniconda3/envs/audio_recognition/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None

    ValueError: Exception encountered when calling layer "resizing" (type Resizing).

    Image must have either 3 or 4 dimensions.

    Call arguments received:
      • inputs=tf.Tensor(shape=(None, None), dtype=float32)
```
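The warning in the log shows the model receiving shape (None, None) where it expects (None, 124, 129, 1), i.e. variable-length waveforms reached the network. The tutorial avoids this by zero-padding each clip to a fixed length before computing spectrograms; here is a NumPy-only sketch of that step (the 16000-sample length matches the tutorial's one-second clips; the helper name is ours):

```python
import numpy as np

def pad_or_trim(waveform, target_len=16000):
    """Zero-pad (or truncate) a 1-D waveform to a fixed length so every
    spectrogram computed from it has the same static shape."""
    waveform = np.asarray(waveform, dtype=np.float32)[:target_len]
    return np.pad(waveform, (0, target_len - waveform.shape[0]))

print(pad_or_trim(np.ones(13000)).shape)  # → (16000,)
print(pad_or_trim([1.0, 2.0], 4))         # → [1. 2. 0. 0.]
```

With every waveform normalized to the same length, the spectrogram batch carries a fully static shape and the `Resizing` layer sees a proper 4-D input.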
tensorflowtensorflow | tf.image.resize_with_pad raises InvalidArgumentError but the behavior is not documented | Bug | Issue type: documentation bug. Source: binary. TensorFlow version: v2.9.0-18-gd8ce9f9c301 (2.9.1). Custom code: no. OS platform and distribution: macOS 12.4. Mobile device: no response. Python version: 3.9. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: the documentation only states that tf.image.resize_with_pad raises ValueError. The function actually raises InvalidArgumentError if the resized image becomes too small (zero width or zero height). The function should raise a ValueError corresponding to this situation, or at least the documentation should state that the function raises InvalidArgumentError. The same applies to tf.image.resize with preserve_aspect_ratio=True. Standalone code to reproduce the issue:

```python
import tensorflow as tf
tf.image.resize_with_pad(tf.ones((1, 100, 1)), target_height=10, target_width=10)
```

Relevant log output:

```
InvalidArgumentError                      Traceback (most recent call last)
/var/folders/xx/.../ipykernel_41817/1900687440.py in <module>
----> 1 tf.image.resize_with_pad(tf.ones((1, 100, 1)), target_height=10, target_width=10)

.../venv/lib/python3.7/site-packages/tensorflow/python/util/traceback_utils.py in error_handler(*args, **kwargs)
    151       except Exception as e:
    152         filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153         raise e.with_traceback(filtered_tb) from None
    154       finally:
    155         del filtered_tb

.../venv/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
   7162 def raise_from_not_ok_status(e, name):
   7163   e.message += (" name: " + name if name is not None else "")
-> 7164   raise core._status_to_exception(e) from None  # pylint: disable=protected-access
   7165
   7166

InvalidArgumentError: output dimensions must be positive [Op:ResizeBilinear]
```
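A back-of-the-envelope sketch of why this particular input fails. It mirrors the documented aspect-preserving behaviour (scale by the smaller ratio, then pad); TF's exact rounding may differ, but the zero-height outcome is the same:

```python
def resized_dims(height, width, target_height, target_width):
    # resize_with_pad first scales by the smaller of the two ratios to
    # preserve aspect ratio, then pads the result up to the target size.
    scale = min(target_height / height, target_width / width)
    return int(height * scale), int(width * scale)

# A 1x100 image squeezed into 10x10 scales by 10/100 = 0.1, which
# truncates the height to zero, hence "output dimensions must be
# positive" from the underlying resize op.
print(resized_dims(1, 100, 10, 10))  # → (0, 10)
```

Any input whose extreme aspect ratio drives one scaled dimension below 1 pixel hits this path, which is the case the report asks the documentation to cover.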
tensorflowtensorflow | Unix wheel URLs are incorrect in TF 2.9 | Bug | Issue type: documentation bug. Source: binary. TensorFlow version: 2.9. Custom code: no. OS platform and distribution: Ubuntu. Mobile device: n/a. Python version: 3.7, 3.8, 3.9, 3.10. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Current behaviour: the Unix wheel URLs are incorrect in TF 2.9 at the package-location page. Standalone code to reproduce the issue: the Linux wheel links for Python 3.7 (GPU support), Python 3.7 (CPU-only), Python 3.8 (GPU support), Python 3.8 (CPU-only), Python 3.9 (GPU support), Python 3.9 (CPU-only), Python 3.10 (GPU support), and Python 3.10 (CPU-only) all fail. Relevant log output:

```
NoSuchKey: The specified key does not exist.
No such object: tensorflow/linux/gpu/tensorflow_gpu-2.9.0-cp37-cp37m-manylinux2014.whl
```
tensorflowtensorflow | Unit test //tensorflow/compiler/mlir/quantization/tensorflow/python:concurrency_test fails on Python 3.7/3.8 | Bug | Issue type: bug. Source: source. TensorFlow version: git HEAD. Custom code: no. OS platform and distribution: Ubuntu 20.04. Mobile device: n/a. Python version: 3.8.13. Bazel version: 5.1.1. GCC/compiler version: 10.3.0. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Current behaviour: the test fails due to Python code that is not supported on 3.7 or 3.8. Standalone code to reproduce the issue:

```shell
bazel test --test_timeout=300,500,-1,-1 --flaky_test_attempts=3 --test_output=all \
  --cache_test_results=no --noremote_accept_cached --config=nonccl \
  --copt=-mtune=generic --copt=-march=armv8-a --copt=-O3 \
  --build_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_aarch64,-requires-gpu \
  --test_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_aarch64,-requires-gpu \
  --verbose_failures --build_tests_only \
  //tensorflow/compiler/mlir/quantization/tensorflow/python:concurrency_test
```

Relevant log output: Starting local Bazel server and connecting to it. INFO: Options provided by the client: inherited 'common' options: --isatty=1 --terminal_columns=143. INFO: Reading rc options for 'test' from /home/builder/1/tensorflow/build/tensorflow-git/.bazelrc: inherited 'common' options: --experimental_repo_remote_exec. INFO: Reading rc options for 'test' from /home/builder/1/tensorflow/build/tensorflow-git/.bazelrc: inherited 'build' options: --define framework_shared_object=true --define use_fast_cpp_protos=true --define allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce
rc define grpc no are true noincompatible remove legacy whole archive enable platform specific config define with xla support true config short log config v2 define no aw support true define no hdfs support true experimental cc shared library experimental link static library once true info read rc option for test from home builder 1 tensorflow build tensorflow git tf configure bazelrc inherit build option action env python bin path home builder 1 tensorflow build venv py38 bin python3 action env python lib path home builder 1 tensorflow build venv py38 lib python3 8 site package python path home builder 1 tensorflow build venv py38 bin python3 info read rc option for test from home builder 1 tensorflow build tensorflow git bazelrc inherit build option delete package tensorflow compiler mlir tfrt tensorflow compiler mlir tfrt benchmark tensorflow compiler mlir tfrt jit python bind tensorflow compiler mlir tfrt jit transform tensorflow compiler mlir tfrt python test tensorflow compiler mlir tfrt test tensorflow compiler mlir tfrt test ir tensorflow compiler mlir tfrt test analysis tensorflow compiler mlir tfrt test jit tensorflow compiler mlir tfrt test lhlo to tfrt tensorflow compiler mlir tfrt test lhlo to jitrt tensorflow compiler mlir tfrt test tf to corert tensorflow compiler mlir tfrt test tf to tfrt data tensorflow compiler mlir tfrt test save model tensorflow compiler mlir tfrt transform lhlo gpu to tfrt gpu tensorflow core runtime fallback tensorflow core runtime fallback conversion tensorflow core runtime fallback kernel tensorflow core runtime fallback opdef tensorflow core runtime fallback runtime tensorflow core runtime fallback util tensorflow core tfrt common tensorflow core tfrt eager tensorflow core tfrt eager backends cpu tensorflow core tfrt eager backend gpu tensorflow core tfrt eager core runtime tensorflow core tfrt eager cpp test core runtime tensorflow core tfrt gpu tensorflow core tfrt run handler thread pool tensorflow core tfrt runtime 
tensorflow core tfrt save model tensorflow core tfrt graph executor tensorflow core tfrt save model test tensorflow core tfrt tpu tensorflow core tfrt util info read rc option for test from home builder 1 tensorflow build tensorflow git tf configure bazelrc test option flaky test attempt 3 test size filter small medium info find applicable config definition build short log in file home builder 1 tensorflow build tensorflow git bazelrc output filter do not match anything info find applicable config definition build v2 in file home builder 1 tensorflow build tensorflow git bazelrc define tf api version 2 action env tf2 behavior 1 info find applicable config definition test v2 in file home builder 1 tensorflow build tensorflow git tf configure bazelrc test tag filter benchmark test no oss gpu oss serial v1only build tag filter benchmark test no oss gpu v1only info find applicable config definition build nonccl in file home builder 1 tensorflow build tensorflow git bazelrc define no nccl support true info find applicable config definition build linux in file home builder 1 tensorflow build tensorflow git bazelrc copt w host copt w define prefix usr define libdir prefix lib define includedir prefix include define protobuf include path prefix include cxxopt std c 17 host cxxopt std c 17 config dynamic kernel distinct host configuration false experimental guard against concurrent change info find applicable config definition build dynamic kernel in file home builder 1 tensorflow build tensorflow git bazelrc define dynamic load kernel true copt dautoload dynamic kernel info analyze target tensorflow compiler mlir quantization tensorflow python concurrency test 474 package load 28581 target configure info find 1 test target fail tensorflow compiler mlir quantization tensorflow python concurrency test see home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt testlog tensorflow compiler mlir quantization 
tensorflow python concurrency test test attempt attempt 1 log fail tensorflow compiler mlir quantization tensorflow python concurrency test see home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt testlog tensorflow compiler mlir quantization tensorflow python concurrency test test attempt attempt 2 log fail tensorflow compiler mlir quantization tensorflow python concurrency test see home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt testlog tensorflow compiler mlir quantization tensorflow python concurrency test test log fail tensorflow compiler mlir quantization tensorflow python concurrency test summary home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt testlog tensorflow compiler mlir quantization tensorflow python concurrency test test log home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt testlog tensorflow compiler mlir quantization tensorflow python concurrency test test attempt attempt 1 log home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt testlog tensorflow compiler mlir quantization tensorflow python concurrency test test attempt attempt 2 log info from test tensorflow compiler mlir quantization tensorflow python concurrency test test output for tensorflow compiler mlir quantization tensorflow python concurrency test warn tensorflow please fix your import module tensorflow python training tracking base have be move to tensorflow python trackable base the old module will be delete in version 2 11 warning tensorflow please fix your import module tensorflow python training checkpoint management have be move to tensorflow python checkpoint checkpoint management the old module will be delete in version 2 9 warning tensorflow 
please fix your import module tensorflow python training tracking resource have be move to tensorflow python trackable resource the old module will be delete in version 2 11 warning tensorflow please fix your import module tensorflow python training tracking util have be move to tensorflow python checkpoint checkpoint the old module will be delete in version 2 11 warning tensorflow please fix your import module tensorflow python training tracking base delegate have be move to tensorflow python trackable base delegate the old module will be delete in version 2 11 warning tensorflow please fix your import module tensorflow python training track graph view have be move to tensorflow python checkpoint graph view the old module will be delete in version 2 11 warning tensorflow please fix your import module tensorflow python training track python state have be move to tensorflow python trackable python state the old module will be delete in version 2 11 warning tensorflow please fix your import module tensorflow python training save functional saver have be move to tensorflow python checkpoint functional saver the old module will be delete in version 2 11 warning tensorflow please fix your import module tensorflow python training save checkpoint option have be move to tensorflow python checkpoint checkpoint option the old module will be delete in version 2 11 traceback most recent call last file home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt bin tensorflow compiler mlir quantization tensorflow python concurrency test runfiles org tensorflow tensorflow compiler mlir quantization tensorflow python integration test concurrency test py line 22 in from tensorflow compiler mlir quantization tensorflow python import quantize model file home builder cache bazel bazel builder 9dc2dbd69dc3512cedb530e1521082e7 execroot org tensorflow bazel out aarch64 opt bin tensorflow compiler mlir quantization tensorflow 
python/concurrency_test.runfiles/org_tensorflow/tensorflow/compiler/mlir/quantization/tensorflow/python/quantize_model.py", line 312, in <module>
    signature_keys: list[str]
TypeError: 'type' object is not subscriptable
The same test output is printed for each of the three flaky-test attempts, ending in the same TypeError each time.
Target //tensorflow/compiler/mlir/quantization/tensorflow/python:concurrency_test up-to-date:
  bazel-bin/tensorflow/compiler/mlir/quantization/tensorflow/python/concurrency_test
INFO: Elapsed time: 112.798s, Critical Path: 85.50s
INFO: 274 processes: 19 internal, 255 local.
INFO: Build completed, 1 test FAILED, 274 total actions
//tensorflow/compiler/mlir/quantization/tensorflow/python:concurrency_test FAILED in 3 out of 3 in 8.4s
  Stats over 3 runs: max = 8.4s, min = 3.8s, avg = 5.3s, dev = 2.2s
  /home/builder/.cache/bazel/_bazel_builder/9dc2dbd69dc3512cedb530e1521082e7/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/compiler/mlir/quantization/tensorflow/python/concurrency_test/test.log
  /home/builder/.cache/bazel/_bazel_builder/9dc2dbd69dc3512cedb530e1521082e7/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/compiler/mlir/quantization/tensorflow/python/concurrency_test/test_attempts/attempt_1.log
  /home/builder/.cache/bazel/_bazel_builder/9dc2dbd69dc3512cedb530e1521082e7/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/compiler/mlir/quantization/tensorflow/python/concurrency_test/test_attempts/attempt_2.log
INFO: Build completed, 1 test FAILED, 274 total actions |
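The failing line in quantize_model.py is an annotation using a subscripted built-in (`signature_keys: list[str]`), which only works at runtime on Python 3.9+ (PEP 585). A sketch of the two portable alternatives (the function name and body here are ours, for illustration only):

```python
from __future__ import annotations  # option 1: annotations become lazy strings (3.7+)
from typing import List             # option 2: typing generics work on every 3.x

def pick_signature_keys(keys: List[str]) -> List[str]:
    # On 3.7/3.8, an eagerly evaluated `list[str]` annotation raises
    # "TypeError: 'type' object is not subscriptable"; typing.List does not.
    return sorted(set(keys))

print(pick_signature_keys(["serving_default", "train", "serving_default"]))
# → ['serving_default', 'train']
```

Either adding the `__future__` import to quantize_model.py or switching its annotations to `typing.List` would let the test run on the 3.7/3.8 interpreters the report names.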
tensorflowtensorflow | Error in ragged tensor documentation | Bug | Issue type: documentation bug. Source: source. TensorFlow version: 2.9. Custom code: no. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: when I run the example in the docs to create a RaggedTensor t4 with shape [3, None, 4, 8, None, 2], I get an error. Standalone code to reproduce the issue:

```python
t0 = tf.zeros([1000, 2])                          # Shape:         [1000, 2]
t1 = RaggedTensor.from_row_lengths(t0, [...])     #          [160, None, 2]
t2 = RaggedTensor.from_uniform_row_length(t1, 8)  #        [20, 8, None, 2]
t3 = RaggedTensor.from_uniform_row_length(t2, 4)  #      [5, 4, 8, None, 2]
t4 = RaggedTensor.from_row_lengths(t3, [...])     # [3, None, 4, 8, None, 2]
```

Relevant log output: ValueError: Attempt to convert a value (Ellipsis) with an unsupported type to a Tensor.
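The ValueError points at the `[...]` placeholders in the doc example: run verbatim, they pass a literal Ellipsis where `from_row_lengths` expects integer row lengths that partition the values exactly. A TF-free sketch of both the placeholder problem and the invariant (the helper is ours, a stand-in for the real validation):

```python
# The doc's placeholder is a one-element list containing Ellipsis, which is
# not valid row lengths, hence "Attempt to convert a value (Ellipsis) ...".
placeholder = [...]
print(placeholder[0] is Ellipsis)  # → True

def check_row_lengths(num_rows, row_lengths):
    """Invariant behind from_row_lengths: the lengths must be non-negative
    integers that sum to the outermost dimension of the values."""
    if not all(isinstance(n, int) and n >= 0 for n in row_lengths):
        raise ValueError(f"row_lengths must be non-negative ints, got {row_lengths!r}")
    if sum(row_lengths) != num_rows:
        raise ValueError(f"row_lengths must sum to {num_rows}")
    return row_lengths

check_row_lengths(1000, [600, 400])  # one valid (2-row) partition of t0's 1000 rows
```

To actually build the doc's t1 with 160 ragged rows, the `[...]` would have to be replaced by 160 concrete lengths summing to 1000, which is what the documentation fails to say.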
tensorflowtensorflow | Potential race condition in nsync library that causes SIGSEGV or deadlock | Bug | Issue type: bug. Source: source. TensorFlow version: tf 2.4. Custom code: no. OS platform and distribution: Linux (Red Hat 7.6). Mobile device: no response. Python version: 3.7. Bazel version: 3.1.0. GCC/compiler version: 7. CUDA/cuDNN version: 11.0 / 8.0.2. GPU model and memory: no response. Current behaviour: our TensorFlow application, which uses nsync for its mutex/condition-variable implementation, fails with SIGSEGV. The issue can be reproduced during thread pool destruction with the thread-local TraceMeRecorder (`ThreadLocalRecorderWrapper`, L282, used in each thread). After adding logging to the nsync library we were able to identify the root cause.

Root cause: the destruction order of thread-local variables is not deterministic. In POSIX, a mutex or condition variable in TensorFlow is backed by a thread-local object; if another thread-local object (e.g. the per-thread TraceMe recorder) uses a lock in its destructor, the lock can already have been destructed, and the behavior is undefined.

Details: for example, the following log shows two threads, 0x7f4118748700 and 0x7f4100718700, sharing the same nsync waiter object 0x7f465c2c2290 and causing SIGSEGV. An nsync waiter object is a thread-local object used by nsync to implement a lock. First, the waiter is destroyed (i.e. the lock destructor is called) on thread 0x7f4118748700, and the waiter 0x7f465c2c2290 is recycled into the free-waiter list. At this moment the other thread, 0x7f4100718700, enters and takes the waiter from the free list. Then the old thread, 0x7f4118748700, tries to take a lock again (as the other thread-local object uses a lock in its destructor). Because the thread-local state was not cleaned up in the old thread, it assumes the waiter object is still reserved by itself. This ends up with two threads holding the same waiter object; the object is freed twice and fails the assertion in nsync_waiter_free_.

```
nsync_waiter_destroy (internal/common.c:150) tid=0x7f4118748700 w=0x7f465c2c2290
nsync_waiter_new     (internal/common.c:182) tid=0x7f4100718700 w=0x7f465c2c2290
nsync_waiter_new     (internal/common.c:171) tid=0x7f4118748700 tw=0x7f465c2c2290
nsync_waiter_new     (internal/common.c:206) tid=0x7f4100718700 w=0x7f465c2c2290
nsync_mu_lock_slow_  (internal/mu.c:102) tid=0x7f4118748700 mu=0x12482b40     waiter=0x7f465c2c2290
nsync_mu_lock_slow_  (internal/mu.c:71)  tid=0x7f4100718700 mu=0x7f4ae4593f30 waiter=0x7f465c2c2290
nsync_mu_lock_slow_  (internal/mu.c:71)  tid=0x7f4118748700 mu=0x12482b40     waiter=0x7f465c2c2290

# A fatal error has been detected by the Java Runtime Environment:
#  SIGSEGV (0xb) at pc=0x00007f4af0e74eda, pid=6, tid=0x00007f4118748700
# JRE version: Java(TM) SE Runtime Environment (8.0_172-b11) (build 1.8.0_172-b11)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.172-b11 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C  [_pywrap_tensorflow_internal.so+0xc8cbeda]  nsync::nsync_waiter_free_(nsync_waiter*)+0xa
# Core dump written. Default location: /opt/code-fetcher-system/src/core or core.6
# An error report file with more information is saved as /opt/code-fetcher-system/src/hs_err_pid6.log
```

Deadlock can also happen when the waiter is moved to another mutex before it gets notified; for example, the two threads above are locking two different mutexes, 0x12482b40 and 0x7f4ae4593f30. In this case the waiter semaphore is decremented twice but only incremented once.

P.S. The thread-local variable that accesses a lock in its destructor in TensorFlow is the per-thread TraceMeRecorder (`ThreadLocalRecorderWrapper`), used in the TensorFlow profiler.

Standalone code to reproduce the issue: n/a. The issue is fixed in nsync upstream; see the PR for more details. We need help from Google to release a new nsync version and upgrade the nsync version in TensorFlow. Thank you so much. Relevant log output: no response.
tensorflowtensorflow | Bad file descriptor error during cleanup of MirroredStrategy model | Bug | Issue type: bug. Source: binary. TensorFlow version: 2.8.0. Custom code: no. OS platform and distribution: Linux (Red Hat). Mobile device: no response. Python version: 3.9.12. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: CUDA 11.1, cuDNN 8.2.1.32. GPU model and memory: no response. Current behaviour: creating a model within the scope of a MirroredStrategy results in an OSError during memory cleanup at the end of the program; I would expect it to run without an error. Others seem to have encountered a similar issue. I hit this error while following the example at "Use tf.distribute.Strategy with Keras Model.fit"; after getting the error I trimmed it down to the minimal example included here. Standalone code to reproduce the issue:

```shell
# environment setup
conda create -n test_tf -c conda-forge python=3.9.12 tensorflow=2.8.0 cudnn=8.2.1.32
```

```python
# export CUDA_VISIBLE_DEVICES
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf
from tensorflow import keras

def main():
    print("GPUs:", tf.config.list_physical_devices("GPU"))
    with tf.distribute.MirroredStrategy().scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    print("Output shape:", model.output_shape)

if __name__ == "__main__":
    main()
```

Relevant log output:

```
GPUs: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
2022-05-21 14:16:41.133359: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-05-21 14:16:41.633403: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0
Output shape: (None, 1)
Exception ignored in:
Traceback (most recent call last):
  File "conda/envs/test_tf/lib/python3.9/multiprocessing/pool.py", line 268, in __del__
    self._change_notifier.put(None)
  File "conda/envs/test_tf/lib/python3.9/multiprocessing/queues.py", line 378, in put
    self._writer.send_bytes(obj)
  File "conda/envs/test_tf/lib/python3.9/multiprocessing/connection.py", line 205, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "conda/envs/test_tf/lib/python3.9/multiprocessing/connection.py", line 416, in _send_bytes
    self._send(header + buf)
  File "conda/envs/test_tf/lib/python3.9/multiprocessing/connection.py", line 373, in _send
    n = write(self._handle, buf)
OSError: [Errno 9] Bad file descriptor
```
tensorflowtensorflow | TensorArray documentation is misleading | Bug | Issue type: documentation bug. Source: binary. TensorFlow version: tf 2.8. Custom code: no. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: the documentation for TensorArray describes it as "write once", but this is incorrect. For example, the code below executes fine. The documentation should be clear about whether TensorArray is write-once at all, or if this constraint only applies in certain contexts, such as when the user wishes to backprop through it. Standalone code to reproduce the issue:

```python
array = tf.TensorArray(tf.float32, dynamic_size=False, size=16)
for i in range(16):
    array = array.write(i, 5)
array = array.write(15, 6)  # write to an already-written position
print(array.concat())       # overwrites the last position; completes successfully
```

Relevant log output: no response.
tensorflowtensorflow | New protobuf release (v3.20) causes an error when importing | Bug | Issue type: bug. Source: binary. TensorFlow version: 2.7.1. Custom code: no. OS platform and distribution: Linux (Ubuntu 20.04). Mobile device: no response. Python version: 3.8.12. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: after the new protobuf release, installing TensorFlow and then importing it raises an error. Standalone code to reproduce the issue: after installing tensorflow 2.7.1, run a simple `import tensorflow`. Relevant log output:

```
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
```
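The version boundary in this report (TF 2.7.1 importing cleanly only with protobuf below 3.20) can be captured in a small guard. This is a hypothetical helper, not part of TensorFlow or protobuf; it only illustrates the boundary the issue title names:

```python
def protobuf_compatible(version: str) -> bool:
    # Per this report, pre-2.9 TensorFlow wheels break when protobuf 3.20+
    # removes the generated-code API their _pb2 modules rely on.
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) < (3, 20)

print(protobuf_compatible("3.19.4"))  # → True
print(protobuf_compatible("3.20.1"))  # → False
print(protobuf_compatible("4.21.0"))  # → False
```

In practice the first workaround from the error message amounts to pinning the package, e.g. `pip install "protobuf<3.20"`.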
tensorflowtensorflow | Convert to TF Lite: ValueError: Cannot iterate over a shape with unknown rank | Bug | Issue type: bug. Source: binary. TensorFlow version: tf 2.3.1. Custom code: no. OS platform and distribution: Linux (Ubuntu 18.04). Mobile device: no response. Python version: 3.6. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: when I convert the SavedModel to a TFLite model, an error happens: ValueError: Cannot iterate over a shape with unknown rank. Standalone code to reproduce the issue:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
with open("rvm.tflite", "wb") as f:
    f.write(tflite_model)
```

Relevant log output: 2022-05-25 12:59:38.001545: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library libcudart.so.10.1; dlerror: libcudart.so.10.1: cannot open shared object file: no such file or directory. 2022-05-25 12:59:38.001571: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 2022-05-25 12:59:39.165540: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library libcuda.so.1; dlerror: libcuda.so.1: cannot open shared object file: no such file or directory. 2022-05-25 12:59:39.165560: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] Failed call to cuInit: UNKNOWN ERROR (303). 2022-05-25 12:59:39.165578: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (wudi-QiTianM620-N000): /proc/driver/nvidia/version does not exist. 2022-05-25 12:59:39.165701: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical
operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2022 05 25 12 59 39 190199 I tensorflow core platform profile util cpu util cc 104 cpu frequency 2899885000 hz 2022 05 25 12 59 39 190662 I tensorflow compiler xla service service cc 168 xla service 0x55dc4ea19680 initialize for platform host this do not guarantee that xla will be use device 2022 05 25 12 59 39 190693 I tensorflow compiler xla service service cc 176 streamexecutor device 0 host default version 2022 05 25 12 59 46 493553 I tensorflow core grappler device cc 69 number of eligible gpu core count 8 compute capability 0 0 0 2022 05 25 12 59 46 493624 I tensorflow core grappler cluster single machine cc 356 start new session 2022 05 25 12 59 46 608109 I tensorflow core grappler optimizer meta optimizer cc 816 optimization result for grappler item graph to optimize 2022 05 25 12 59 46 608136 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer graph size after 2419 node 2082 6014 edge 5672 time 50 604ms 2022 05 25 12 59 46 608141 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer graph size after 2419 node 0 6014 edge 0 time 26 182ms 2022 05 25 12 59 46 608144 I tensorflow core grappler optimizer meta optimizer cc 816 optimization result for grappler item inference cond 1 true 10408 9212 2022 05 25 12 59 46 608147 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer function optimizer do nothing time 0 001ms 2022 05 25 12 59 46 608150 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer function optimizer do nothing time 0ms 2022 05 25 12 59 46 608153 I tensorflow core grappler optimizer meta optimizer cc 816 optimization result for grappler item inference cond true 8440 26709 2022 05 25 12 59 46 608156 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer function optimizer do nothing time 0ms 2022 05 25 12 59 46 608158 I 
tensorflow core grappler optimizer meta optimizer cc 818 function optimizer function optimizer do nothing time 0ms 2022 05 25 12 59 46 608161 I tensorflow core grappler optimizer meta optimizer cc 816 optimization result for grappler item inference cond false 8441 36559 2022 05 25 12 59 46 608164 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer function optimizer do nothing time 0 001ms 2022 05 25 12 59 46 608167 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer function optimizer do nothing time 0ms 2022 05 25 12 59 46 608170 I tensorflow core grappler optimizer meta optimizer cc 816 optimization result for grappler item inference cond 1 false 10409 21697 2022 05 25 12 59 46 608173 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer function optimizer do nothing time 0 001ms 2022 05 25 12 59 46 608176 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer function optimizer do nothing time 0ms traceback most recent call last file home wudi project project rvm mat tensorflow convert1 py line 11 in tflite model converter convert file home wudi software yes envs onnx2tf lib python3 6 site package tensorflow lite python lite py line 1076 in convert return super tfliteconverterv2 self convert file home wudi software yes envs onnx2tf lib python3 6 site package tensorflow lite python lite py line 900 in convert self convert graph def input tensor output tensor file home wudi software yes envs onnx2tf lib python3 6 site package tensorflow lite python lite py line 633 in convert converter kwargs file home wudi software yes envs onnx2tf lib python3 6 site package tensorflow lite python convert py line 567 in toco convert impl input tensor output tensor args kwargs file home wudi software yes envs onnx2tf lib python3 6 site package tensorflow lite python convert py line 458 in build toco convert protos for dim in shape file home wudi software yes envs onnx2tf lib python3 
6/site-packages/tensorflow/python/framework/tensor_shape.py", line 859, in __iter__: raise ValueError("Cannot iterate over a shape with unknown rank."). ValueError: Cannot iterate over a shape with unknown rank. |
tensorflow/tensorflow | Dataset has no attribute 'load' | Bug | Click to expand. Issue type: build/install. Source: source. TensorFlow version: 2.9.1. Custom code: no. OS platform and distribution: Mac. Mobile device: no response. Python version: 3.9. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: hello, I just upgraded to TensorFlow v2.9.1 expecting to be able to use the new Dataset methods `load` and `save`, but they are not available. Standalone code to reproduce the issue: load doesn't work: tf.data.Dataset.load(path) raises AttributeError: type object 'DatasetV2' has no attribute 'load'. save doesn't work: dataset = tf.data.Dataset.range(2); tf.data.Dataset.save(dataset, path) raises AttributeError: type object 'DatasetV2' has no attribute 'save'. Relevant log output: no response.
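For context, releases before these methods existed exposed saving and loading under the experimental namespace (`tf.data.experimental.save` / `tf.data.experimental.load`). A dependency-free sketch of a feature-detection shim — `FakeDataset` and the fallback callable are stand-ins for illustration, not TensorFlow APIs:

```python
def load_dataset(dataset_cls, experimental_load, path):
    """Use Dataset.load when present; otherwise fall back to the
    experimental loader (e.g. tf.data.experimental.load on TF 2.9)."""
    if hasattr(dataset_cls, "load"):
        return dataset_cls.load(path)
    return experimental_load(path)

class FakeDataset:  # stand-in with no `load`, mimicking DatasetV2 on 2.9
    pass

print(load_dataset(FakeDataset, lambda p: f"experimental:{p}", "/tmp/ds"))
# experimental:/tmp/ds
```

The same `hasattr` probe works for `save` versus `tf.data.experimental.save`.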
tensorflow/tensorflow | Cholesky decomposition: half precision either does not work or the documentation is wrong | Bug | Click to expand. Issue type: bug. Source: source. TensorFlow version: 2.8. Custom code: yes. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: Tesla T4. Current behaviour: Cholesky crashes for half precision, while the documentation says otherwise. Quote from the documentation: "input: A Tensor. Must be one of the following types: float64, float32, half, complex64, complex128. Shape is [..., M, M]." Standalone code to reproduce the issue: import tensorflow as tf; a = tf.random.normal((32, 32), dtype=tf.dtypes.float16) (same for tf.dtypes.half); u = a @ tf.transpose(a); tf.linalg.cholesky(u). Relevant log output: Could not find device for node: {{node Cholesky}} = Cholesky[T=DT_HALF]. All kernels registered for op Cholesky: device='CPU'; T in [DT_COMPLEX128]; device='CPU'; T in [DT_COMPLEX64]; device='CPU'; T in [DT_DOUBLE]; device='CPU'; T in [DT_FLOAT]; device='GPU'; T in [DT_COMPLEX128]; device='GPU'; T in [DT_COMPLEX64]; device='GPU'; T in [DT_DOUBLE]; device='GPU'; T in [DT_FLOAT] [Op:Cholesky]
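Since the kernel list in the log only registers float32/float64/complex, a workaround worth trying (an assumption on my part, not an official fix) is to cast the half-precision matrix to float32, decompose, and cast back. The decomposition itself just finds a lower-triangular L with A = L·Lᵀ; a tiny pure-Python 2×2 version for reference:

```python
import math

def cholesky_2x2(a11, a21, a22):
    """Cholesky factor L of the symmetric positive-definite matrix
    [[a11, a21], [a21, a22]]: returns (l11, l21, l22) with A = L @ L.T."""
    l11 = math.sqrt(a11)
    l21 = a21 / l11
    l22 = math.sqrt(a22 - l21 * l21)
    return l11, l21, l22

print(cholesky_2x2(4.0, 2.0, 5.0))  # (2.0, 1.0, 2.0): [[2,0],[1,2]] times its transpose is [[4,2],[2,5]]
```

The report's `u = a @ tf.transpose(a)` construction guarantees the symmetric positive-(semi)definite input this factorization requires.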
tensorflow/tensorflow | Document Dataset.repeat(0) | Bug | Click to expand. Issue type: documentation bug. Source: binary. TensorFlow version: 2.4. Custom code: no. OS platform and distribution: n/a. Mobile device: no response. Python version: 3.8. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Current behaviour: when using Keras model.fit(validation_data=val_ds), where val_ds is a dataset created via val_ds.repeat(0), the below exception is raised: 96/96 - ETA: 0s - loss: 2.8148 - accuracy: 0.2789. Traceback (most recent call last): File "/home/ubuntu/.pyenv/versions/3.8.13/lib/python3.8/runpy.py", line 194, in _run_module_as_main: return _run_code(code, main_globals, None,. File "/home/ubuntu/.pyenv/versions/3.8.13/lib/python3.8/runpy.py", line 87, in _run_code: exec(code, run_globals). File "/home/ubuntu/path/to/training/vgg16/train.py", line 68: history: tf.keras.callbacks.History = model.fit(. File "/home/ubuntu/path/to/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1113, in fit: self._eval_data_handler = data_adapter.DataHandler(. File "/home/ubuntu/path/to/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1100, in __init__: self._adapter = adapter_cls(. File "/home/ubuntu/path/to/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 779, in __init__: peek, x = self._peek_and_restore(x). File "/home/ubuntu/path/to/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 836, in _peek_and_restore: peek = next(x). StopIteration. I wanted to basically go through the list once, so I thought I should set count=0. In other words, ds.repeat() will not work when count=0, but does work when count=1. The documentation currently documents two special cases: "The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely." Can we add information to the docs for when count=0? It's unclear what happens in this case. Standalone code to reproduce the issue: n/a. Relevant log output: n/a.
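The behaviour the report asks to have documented is observable directly: count=0 produces an empty dataset, so the first next() on its iterator raises StopIteration, which is exactly what Keras hits when it peeks at validation_data. A pure-Python analogue of the three cases, assuming the repeat semantics match tf.data:

```python
import itertools

def repeat(data, count=None):
    """Analogue of Dataset.repeat: None (or -1) repeats forever,
    0 yields nothing, 1 yields the data exactly once."""
    if count is None or count == -1:
        return itertools.cycle(data)
    return itertools.chain.from_iterable(itertools.repeat(list(data), count))

print(list(repeat([1, 2], 0)))  # []  -> next() raises StopIteration immediately
print(list(repeat([1, 2], 1)))  # [1, 2]  -> "go through the list once"
```

So for the reporter's goal of a single pass, count=1 (or simply not calling repeat) is the right choice.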
tensorflow/tensorflow | tf.image.resize: different results when inside a tf.function | Bug | Click to expand. Issue type: bug. Source: binary. TensorFlow version: v2.9.0-rc2-42-g8a20d54a3c1 (2.9.0). Custom code: no. OS platform and distribution: Linux (Ubuntu 18.04). Mobile device: no response. Python version: 3.8. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: when you put a tf.image.resize op inside a tf.function whilst using tf.RaggedTensors, the results change. Standalone code to reproduce the issue: import numpy as np; import tensorflow as tf; np.random.seed(0); batch1 = tf.cast(tf.ragged.constant([255 * np.random.uniform(size=(2000, 2000))]), tf.uint8); batch1 = tf.expand_dims(batch1, axis=-1); batch1 = tf.concat([batch1, batch1, batch1], axis=-1); signature = [tf.RaggedTensorSpec([1, None, None, 3], tf.uint8, 2, tf.int64)]; @tf.function(input_signature=signature) def resize_tf(image): return tf.image.resize(image, (50, 50)) / 255; def resize_non_tf(image): return tf.image.resize(image, (50, 50)) / 255; print(tf.reduce_mean(resize_tf(batch1))); print(tf.reduce_mean(resize_non_tf(batch1))). And then run python3 test.py. Relevant log output: tf.Tensor(0.49723607, shape=(), dtype=float32); tf.Tensor(0.497236, shape=(), dtype=float32).
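The two means differ only in the last digits (0.49723607 vs 0.497236), which is the signature of a float32 reduction evaluated in a different order rather than a wrong resize; tracing through tf.function can legally reorder the arithmetic. Floating-point addition is not associative, as this stdlib-only snippet shows:

```python
# Float addition is order-sensitive: the same three numbers summed with a
# different grouping give two slightly different results.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
print(left == right)  # False
```

The same effect, spread over a 2000 x 2000 x 3 reduce_mean, plausibly accounts for a discrepancy in the seventh decimal place.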
tensorflow/tensorflow | Unexpected output when Lambda layers are used in a for loop | Bug | Click to expand. Issue type: bug. Source: binary. TensorFlow version: 2.9. Custom code: yes. OS platform and distribution: Windows 10 x64. Mobile device: no response. Python version: 3.8. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: the attached code should produce an output that is equal to the input; all it does is take the input vector, split it, and reconcatenate it. Standalone code to reproduce the issue: import tensorflow as tf; from tensorflow.keras import layers, Model; test_input_data = tf.expand_dims(tf.linspace(0.0, 1.0, 6), axis=0); input_layer = layers.Input(batch_shape=(1, 6)); outputs = []; for i_channel in range(6): print(f"initialising layer that should extract element {i_channel}, expected {test_input_data[0, i_channel]}"); outputs.append(layers.Lambda(lambda c: c[:, i_channel])(input_layer)); cat_output = layers.concatenate(outputs); model = Model(inputs=input_layer, outputs=cat_output, name="repro_model"); y = model(test_input_data); print(y). Relevant log output: initialising layer that should extract element 0, expected 0.0; initialising layer that should extract element 1, expected 0.20000000298023224; initialising layer that should extract element 2, expected 0.4000000059604645; initialising layer that should extract element 3, expected 0.6000000238418579; initialising layer that should extract element 4, expected 0.800000011920929; initialising layer that should extract element 5, expected 1.0; tf.Tensor([1. 1. 1. 1. 1. 1.], shape=(6,), dtype=float32).
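The all-ones output is the last element broadcast six times, which points at Python's late-binding closures rather than at Keras: every `lambda c: c[:, i_channel]` created in the loop looks up `i_channel` when the model runs, by which time the loop has finished and it is 5. A stdlib-only demonstration, with the usual default-argument fix:

```python
# Late binding: all closures share the loop variable and see its final value.
late = [lambda: i for i in range(3)]
print([f() for f in late])   # [2, 2, 2]

# Fix: freeze the current value as a default argument at definition time,
# e.g. layers.Lambda(lambda c, i=i_channel: c[:, i]) in the report's code.
bound = [lambda i=i: i for i in range(3)]
print([f() for f in bound])  # [0, 1, 2]
```

With the default-argument form, each Lambda layer would extract the element it was created for.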
tensorflow/tensorflow | tflite_convert --saved_model_dir command throws error: SavedModel file does not exist at {saved_model.pbtxt / saved_model.pb} | Bug | Click to expand. Issue type: bug. Source: binary. TensorFlow version: v2.9.0. Custom code: no. OS platform and distribution: Linux (Ubuntu 18.04). Mobile device: no response. Python version: 3.7.13. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: I am trying to convert a TF model to a TFLite model. I downloaded the TF model spaghettinet_edgetpu_s from GitHub and extracted it; the extracted folder contains all the required files (.pb, .pbtxt and others). I ran the tflite_convert --saved_model_dir --output_file command expecting it to generate the TFLite model file. Standalone steps to reproduce the issue: 1. Download the model spaghettinet_edgetpu_s (Pixel 6 Edge TPU model). 2. Extract it. 3. The folder contains these files: checkpoint, graph.pbtxt, model.ckpt-400000.index, model.tflite, frozen_inference_graph.pb, model.ckpt-400000.data-00000-of-00001, model.ckpt-400000.meta, pipeline.config. 4. Run: tflite_convert --saved_model_dir=/home/gaurav/data/sample_model/spaghettinet_edgetpu_m --output_file=/home/gaurav/data/sample_model/model_m.tflite. Relevant log output: 2022-05-18 14:04:23.552150: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0. Traceback (most recent call last): File "/home/gaurav/.local/bin/tflite_convert", line 8: sys.exit(main()). File "/home/gaurav/.local/lib/python3.7/site-packages/tensorflow/lite/python/tflite_convert.py", line 692, in main: app.run(main=run_main, argv=sys.argv[:1]). File "/home/
gaurav local lib python3 7 site package absl app py line 312 in run run main main args file home gaurav local lib python3 7 site package absl app py line 258 in run main sys exit main argv file home gaurav local lib python3 7 site package tensorflow lite python tflite convert py line 675 in run main convert tf2 model tflite flag file home gaurav local lib python3 7 site package tensorflow lite python tflite convert py line 279 in convert tf2 model tag parse set flag save model tag set file home gaurav local lib python3 7 site package tensorflow lite python lite py line 1786 in from save model save model load save model dir tag file home gaurav local lib python3 7 site package tensorflow python save model load py line 782 in load result load partial export dir none tag option root file home gaurav local lib python3 7 site package tensorflow python save model load py line 887 in load partial loader impl parse save model with debug info export dir file home gaurav local lib python3 7 site package tensorflow python save model loader impl py line 57 in parse save model with debug info save model parse save model export dir file home gaurav local lib python3 7 site package tensorflow python save model loader impl py line 116 in parse save model f savedmodel file do not exist at export dir os path sep oserror savedmodel file do not exist at home gaurav datum sample model spaghettinet edgetpu m save model pbtxt save model pb |
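The directory listed in step 3 holds a frozen graph (frozen_inference_graph.pb) plus checkpoints, not a SavedModel, and the loader behind tflite_convert --saved_model_dir only accepts a directory containing saved_model.pb or saved_model.pbtxt. A pre-flight check (a helper of my own, not part of the TensorFlow CLI) that mirrors the loader's rule:

```python
import os

def looks_like_saved_model(export_dir: str) -> bool:
    """True only if the directory has the saved_model.pb(txt) file that
    the SavedModel loader requires; a frozen graph does not count."""
    return any(
        os.path.isfile(os.path.join(export_dir, name))
        for name in ("saved_model.pb", "saved_model.pbtxt")
    )
```

Running this on the extracted spaghettinet folder would return False, matching the OSError in the log; a frozen graph needs a different conversion path than `--saved_model_dir`.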
tensorflow/tensorflow | GpuCudaMallocAsyncAllocator fails if the same GPU device is initialised multiple times | Bug | Click to expand. Issue type: bug. Source: source. TensorFlow version: 2.7.1. Custom code: no. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: CUDA 11.2, cuDNN 8.1.1. GPU model and memory: NVIDIA GTX 1080. Current behaviour: when GpuCudaMallocAsyncAllocator is used instead of the default one (either via environment variable or config settings) and the same GPU device is initialised more than once (e.g. if there are multiple TensorFlow sessions within a single process), it fails with the "Trying to set the stream twice. This isn't supported." error (gpu_cudamallocasync_allocator.cc#L391). Apparently this happens because the GpuCudaMallocAsyncAllocator is global and its CUDA stream field is filled on the first SetStreamAndPreallocateMemory. I was able to fix this by making some changes in the code and I am going to submit a PR containing those. Standalone code to reproduce the issue (C++): #include "tensorflow/core/public/session.h"; auto options = tensorflow::SessionOptions(); options.config.mutable_gpu_options()->mutable_experimental()->set_use_cuda_malloc_async(true); tensorflow::GraphDef graph_def1; // load graph_def1; tensorflow::Session* session1 = nullptr; tensorflow::NewSession(options, &session1); session1->Create(graph_def1); tensorflow::GraphDef graph_def2; // load graph_def2; tensorflow::Session* session2 = nullptr; tensorflow::NewSession(options, &session2); session2->Create(graph_def2). Relevant log output: 2022-05-17 14:17:05.253418: I tensorflow/core/common_runtime/gpu/gpu_process_state.cc:214] Using CUDA malloc async allocator for GPU: 0. 2022-05-17 14:17:05.253515: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7234 MB memory: device: 0, name: NVIDIA GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1. 2022-05-17 14:17:05.422828: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7234 MB memory: device: 0, name: NVIDIA GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1. 2022-05-17 14:17:05.423011: F tensorflow/core/common_runtime/gpu/gpu_cudamallocasync_allocator.cc:390] Trying to set the stream twice. This isn't supported. |
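The failure mode described above — a process-global allocator whose stream may only be set once, tripped when a second session initialises the same GPU — can be modeled in a few lines. This is a pure-Python analogue of the reported logic, not the actual C++ allocator:

```python
class AsyncAllocator:
    """Toy model of a global allocator: the stream is fixed on first use."""
    def __init__(self):
        self.stream = None

    def set_stream(self, stream):
        if self.stream is not None and self.stream != stream:
            raise RuntimeError("Trying to set the stream twice. This isn't supported.")
        self.stream = stream

alloc = AsyncAllocator()        # global, shared by every "session"
alloc.set_stream("stream-1")    # first session: ok
try:
    alloc.set_stream("stream-2")  # second session: fails, as in the log
except RuntimeError as err:
    print(err)
```

Making the guard tolerate a matching stream (or scoping the allocator per device initialisation, as the announced PR presumably does) removes the crash.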
tensorflow/tensorflow | tf.random.uniform always picks the same value when the Metal plugin is installed | Bug | System information: OS: macOS Monterey 12.4. TensorFlow installed from: pip, inside a conda environment. TensorFlow version: tensorflow-macos 2.8.0. Metal plugin version: tensorflow-metal 0.4.0. Python version: 3.9.12. GPU model and memory: Apple M1 Pro with 16 GB memory. Exact command to reproduce: see below. Describe the problem: when the tensorflow-metal plugin is installed, the tf.random.uniform function always picks the same value on every call. Other random functions like tf.random.normal do not have this problem, nor does the problem occur when the Metal plugin is not installed. Source code / logs: yunhao@yunhaos-mbp $ conda create --name tf-rand-test python=3.9 six=1.15. Collecting package metadata (current_repodata.json): done. Solving environment: done. Package plan: environment location: /Users/yunhao/miniforge3/envs/tf-rand-test. Added/updated specs: python=3.9, six=1.15. The following new packages will be installed: bzip2 conda-forge/osx-arm64::bzip2-1.0.8-h3422bc3_4, ca-certificates conda-forge/osx-arm64::ca-certificates-2021.10.8-h4653dfc_0, libffi conda-forge/osx-arm64::libffi-3.4.2-h3422bc3_5, libzlib conda-forge/osx-arm64::libzlib-1.2.11-h90dfc92_1014, ncurses conda-forge/osx-arm64::ncurses-6.3-h07bb92c_1, openssl conda-forge/osx-arm64::openssl-3.0.3-ha287fd2_0, pip conda-forge/noarch::pip-22.1-pyhd8ed1ab_0, python conda-forge/osx-arm64::python-3.9.12-h14b404e_1_cpython, python_abi conda-forge/osx-arm64::python_abi-3.9-2_cp39, readline conda-forge/osx-arm64::readline-8.1-hedafd6a_0, setuptools conda-forge/osx-arm64::setuptools-62.2.0-py39h2804cbe_0, six conda-forge/noarch::six-1.15.0-pyh9f0ad1d_0, sqlite conda-forge/osx-arm64::sqlite-3.38.5-h40dfcc0_0, tk conda-forge/osx-arm64::tk-8.6.12-he1e0b03_0, tzdata conda-forge/noarch::tzdata-2022a-h191b570_0, wheel conda-forge/noarch::wheel-0.37.1-pyhd8ed1ab_0, xz conda-forge/osx-arm64::xz-5.2.5-h642e427_1, zlib conda-forge/osx-arm64::zlib-1.2.11-h90dfc92_1014. Proceed ([y]/n)? y. Preparing transaction: done
verifying transaction do execute transaction do to activate this environment use conda activate tf rand test to deactivate an active environment use conda deactivate yunhao yunhaos mbp conda activate tf rand test tf rand test yunhao yunhaos mbp conda install c apple tensorflow dep collect package metadata current repodata json do solve environment do package plan environment location user yunhao miniforge3 envs tf rand test add update spec tensorflow dep the follow new package will be instal c are conda forge osx arm64 c are 1 18 1 h3422bc3 0 cache property conda forge noarch cache property 1 5 2 hd8ed1ab 1 cache property conda forge noarch cache property 1 5 2 pyha770c72 1 grpcio conda forge osx arm64 grpcio 1 46 1 py39h365d37b 0 h5py conda forge osx arm64 h5py 3 6 0 nompi py39hd982b79 100 hdf5 conda forge osx arm64 hdf5 1 12 1 nompi hd9dbc9e 104 krb5 conda forge osx arm64 krb5 1 19 3 he492e65 0 libblas conda forge osx arm64 libblas 3 9 0 14 osxarm64 openblas libcblas conda forge osx arm64 libcbla 3 9 0 14 osxarm64 openblas libcurl conda forge osx arm64 libcurl 7 83 1 h7965298 0 libcxx conda forge osx arm64 libcxx 14 0 3 h6a5c8ee 0 libedit conda forge osx arm64 libedit 3 1 20191231 hc8eb9b7 2 libev conda forge osx arm64 libev 4 33 h642e427 1 libgfortran conda forge osx arm64 libgfortran 5 0 0 dev0 11 0 1 hf114ba7 23 libgfortran5 conda forge osx arm64 libgfortran5 11 0 1 dev0 hf114ba7 23 liblapack conda forge osx arm64 liblapack 3 9 0 14 osxarm64 openblas libnghttp2 conda forge osx arm64 libnghttp2 1 47 0 hf30690b 0 libopenblas conda forge osx arm64 libopenbla 0 3 20 openmp h2209c59 0 libssh2 conda forge osx arm64 libssh2 1 10 0 h7a5bd25 2 llvm openmp conda forge osx arm64 llvm openmp 14 0 3 hd125106 0 numpy conda forge osx arm64 numpy 1 21 6 py39h690d673 0 tensorflow dep apple osx arm64 tensorflow dep 2 8 0 0 proceed y n prepare transaction do verifying transaction do execute transaction do tf rand test yunhao yunhaos mbp python m pip install tensorflow macos 
collect tensorflow macos use cache tensorflow macos 2 8 0 cp39 cp39 macosx 11 0 arm64 whl 190 1 mb collect tensorboard 2 9 2 8 use cache tensorboard 2 8 0 py3 none any whl 5 8 mb collect absl py 0 4 0 use cache absl py 1 0 0 py3 none any whl 126 kb collect libclang 9 0 1 use cache libclang 14 0 1 py2 py3 none macosx 11 0 arm64 whl 11 8 mb collect termcolor 1 1 0 use cache termcolor 1 1 0 py3 none any whl collect protobuf 3 9 2 use cache protobuf 3 20 1 py2 py3 none any whl 162 kb collect opt einsum 2 3 2 use cache opt einsum 3 3 0 py3 none any whl 65 kb collect kera preprocesse 1 1 1 use cache kera preprocesse 1 1 2 py2 py3 none any whl 42 kb collect flatbuffer 1 12 use cache flatbuffer 2 0 py2 py3 none any whl 26 kb requirement already satisfy six 1 12 0 in miniforge3 envs tf rand test lib python3 9 site package from tensorflow macos 1 15 0 collect keras 2 9 2 8 0rc0 use cache keras 2 8 0 py2 py3 none any whl 1 4 mb requirement already satisfy setuptool in miniforge3 envs tf rand test lib python3 9 site package from tensorflow macos 62 2 0 collect astunparse 1 6 0 use cache astunparse 1 6 3 py2 py3 none any whl 12 kb collect wrapt 1 11 0 use cache wrapt 1 14 1 cp39 cp39 macosx 11 0 arm64 whl 35 kb collect gast 0 2 1 use cache gast 0 5 3 py3 none any whl 19 kb collect google pasta 0 1 1 use cache google pasta 0 2 0 py3 none any whl 57 kb collect type extension 3 6 6 use cache typing extension 4 2 0 py3 none any whl 24 kb requirement already satisfied numpy 1 20 in miniforge3 envs tf rand test lib python3 9 site package from tensorflow macos 1 21 6 requirement already satisfy grpcio 2 0 1 24 3 in miniforge3 envs tf rand test lib python3 9 site package from tensorflow macos 1 46 1 collect tf estimator nightly 2 8 0 dev2021122109 use cache tf estimator nightly 2 8 0 dev2021122109 py2 py3 none any whl 462 kb requirement already satisfy h5py 2 9 0 in miniforge3 envs tf rand test lib python3 9 site package from tensorflow macos 3 6 0 requirement already satisfied wheel 1 
0 0 23 0 in miniforge3 envs tf rand test lib python3 9 site package from astunparse 1 6 0 tensorflow macos 0 37 1 collect google auth 3 1 6 3 use cache google auth 2 6 6 py2 py3 none any whl 156 kb collect request 3 2 21 0 use cache request 2 27 1 py2 py3 none any whl 63 kb collect tensorboard plugin wit 1 6 0 use cache tensorboard plugin wit 1 8 1 py3 none any whl 781 kb collect werkzeug 0 11 15 use cache werkzeug 2 1 2 py3 none any whl 224 kb collect markdown 2 6 8 use cache markdown 3 3 7 py3 none any whl 97 kb collect tensorboard datum server 0 7 0 0 6 0 use cache tensorboard datum server 0 6 1 py3 none any whl 2 4 kb collect google auth oauthlib 0 5 0 4 1 use cache google auth oauthlib 0 4 6 py2 py3 none any whl 18 kb collect pyasn1 module 0 2 1 use cache pyasn1 module 0 2 8 py2 py3 none any whl 155 kb collect cachetool 6 0 2 0 0 use cache cachetool 5 1 0 py3 none any whl 9 2 kb collect rsa 5 3 1 4 use cache rsa 4 8 py3 none any whl 39 kb collect request oauthlib 0 7 0 use cache request oauthlib 1 3 1 py2 py3 none any whl 23 kb collect importlib metadata 4 4 use cache importlib metadata 4 11 3 py3 none any whl 18 kb collect idna 4 2 5 use cache idna 3 3 py3 none any whl 61 kb collect certifi 2017 4 17 use cache certifi 2021 10 8 py2 py3 none any whl 149 kb collect urllib3 1 27 1 21 1 use cache urllib3 1 26 9 py2 py3 none any whl 138 kb collect charset normalizer 2 0 0 use cache charset normalizer 2 0 12 py3 none any whl 39 kb collect zipp 0 5 use cache zipp 3 8 0 py3 none any whl 5 4 kb collect pyasn1 0 5 0 0 4 6 use cache pyasn1 0 4 8 py2 py3 none any whl 77 kb collect oauthlib 3 0 0 use cache oauthlib 3 2 0 py3 none any whl 151 kb instal collect package tf estimator nightly termcolor tensorboard plugin wit pyasn1 libclang keras flatbuffer certifi zipp wrapt werkzeug urllib3 type extension tensorboard datum server rsa pyasn1 module protobuf opt einsum oauthlib keras preprocesse idna google pasta gast charset normalizer cachetool astunparse absl py request 
importlib metadata google auth request oauthlib markdown google auth oauthlib tensorboard tensorflow macos successfully instal absl py 1 0 0 astunparse 1 6 3 cachetool 5 1 0 certifi 2021 10 8 charset normalizer 2 0 12 flatbuffer 2 0 gast 0 5 3 google auth 2 6 6 google auth oauthlib 0 4 6 google pasta 0 2 0 idna 3 3 importlib metadata 4 11 3 kera 2 8 0 kera preprocesse 1 1 2 libclang 14 0 1 markdown 3 3 7 oauthlib 3 2 0 opt einsum 3 3 0 protobuf 3 20 1 pyasn1 0 4 8 pyasn1 module 0 2 8 request 2 27 1 request oauthlib 1 3 1 rsa 4 8 tensorboard 2 8 0 tensorboard datum server 0 6 1 tensorboard plugin wit 1 8 1 tensorflow macos 2 8 0 termcolor 1 1 0 tf estimator nightly 2 8 0 dev2021122109 typing extension 4 2 0 urllib3 1 26 9 werkzeug 2 1 2 wrapt 1 14 1 zipp 3 8 0 tf rand test yunhao yunhaos mbp python python 3 9 12 package by conda forge main mar 24 2022 23 25 14 clang 12 0 1 on darwin type help copyright credit or license for more information import tensorflow as tf for in range 10 print tf random uniform tf tensor 0 43182516 shape dtype float32 tf tensor 0 73127687 shape dtype float32 tf tensor 0 8234819 shape dtype float32 tf tensor 0 7472857 shape dtype float32 tf tensor 0 06519365 shape dtype float32 tf tensor 0 46567404 shape dtype float32 tf tensor 0 081846 shape dtype float32 tf tensor 0 26130438 shape dtype float32 tf tensor 0 53803635 shape dtype float32 tf tensor 0 98602235 shape dtype float32 exit tf rand test yunhao yunhaos mbp python m pip install tensorflow metal collect tensorflow metal use cache tensorflow metal 0 4 0 cp39 cp39 macosx 11 0 arm64 whl 1 2 mb requirement already satisfied wheel 0 35 in miniforge3 envs tf rand test lib python3 9 site package from tensorflow metal 0 37 1 requirement already satisfy six 1 15 0 in miniforge3 envs tf rand test lib python3 9 site package from tensorflow metal 1 15 0 instal collect package tensorflow metal successfully instal tensorflow metal 0 4 0 tf rand test yunhao yunhaos mbp python python 3 9 12 package by 
conda forge main mar 24 2022 23 25 14 clang 12 0 1 on darwin type help copyright credit or license for more information import tensorflow as tf for in range 10 print tf random uniform metal device set to apple m1 pro systemmemory 16 00 gb maxcachesize 5 33 gb 2022 05 16 15 21 55 560387 I tensorflow core common runtime pluggable device pluggable device factory cc 305 could not identify numa node of platform gpu i d 0 default to 0 your kernel may not have be build with numa support 2022 05 16 15 21 55 560711 I tensorflow core common runtime pluggable device pluggable device factory cc 271 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 0 mb memory physical pluggabledevice device 0 name metal pci bus i d tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 for in range 10 print tf random uniform tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 tf tensor 0 9149647 shape dtype float32 exit tf rand test yunhao yunhaos mbp |
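The repeated 0.9149647 in the transcript looks like RNG state that is never advanced between kernel launches. A workaround worth trying (an assumption, not a confirmed Metal fix) is to hold an explicitly seeded stateful generator, e.g. tf.random.Generator.from_seed(0), instead of the global tf.random.uniform op. The difference between re-initialising state per call and advancing one generator, in stdlib Python:

```python
import random

# Bug analogue: state re-initialised on every call -> identical "random" draws.
def frozen_uniform(seed=0):
    return random.Random(seed).random()

draws = {frozen_uniform() for _ in range(10)}
print(len(draws))  # 1 distinct value, like the Metal output above

# Workaround pattern: one generator whose state advances across calls.
gen = random.Random(0)
print(len({gen.random() for _ in range(10)}))  # 10 distinct values
```

The report's observation that tf.random.normal is unaffected suggests only the uniform kernel's state handling is broken in the plugin.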
tensorflow/tensorflow | No module named model.recommendation_model_launcher_keras | Bug | I followed all the previous steps in ondevice_recommendation.ipynb. I cannot find model.recommendation_model_launcher_keras in the examples repo either, but the error still occurs when I execute this block: python -m model.recommendation_model_launcher_keras --run_mode "train_and_eval" --encoder_type "cnn" --training_data_filepattern "data/examples/train_movielens_1m.tfrecord" --testing_data_filepattern "data/examples/test_movielens_1m.tfrecord" --model_dir "model/model_dir" --params_path "model/sample_config.json" --batch_size 64 --learning_rate 0.1 --steps_per_epoch 1000 --num_epochs 10 --num_eval_steps 1000 --gradient_clip_norm 1.0 --max_history_length 10
tensorflow/tensorflow | Fonts do not load properly under Firefox | Bug | Click to expand. Issue type: documentation bug. Source: source. TensorFlow version: 2.8. Custom code: no. OS platform and distribution: Firefox on Windows 11. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: doc pages cannot load Google Fonts (503 Service Unavailable) under Firefox, even with all extensions disabled. They only load properly in an incognito window. Standalone steps to reproduce the issue: open any page on tensorflow.org under Firefox 100 (the problem was there before FF 100, but I didn't take note of when it started); the problem will appear. Clearing cookies or using incognito solves the issue, so far. Note: the issue can be solved by blocking cookies for tensorflow.org, but a server-side solution would help a lot of people. Relevant log output: no response.
tensorflow/tensorflow | Loss does not decrease, in Google Colab and on a local computer (TF 1.13 vs. TF 1.14 through TF 2.6) | Bug | Click to expand. Issue type: performance. Source: source. TensorFlow version: 1.13 vs. 2.6. Custom code: yes. OS platform and distribution: Windows 10. Mobile device: no response. Python version: 3.7.4. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: 11.2.0 / 8.1.0. GPU model and memory: no response. Current behaviour: I trained on my data with tf-gpu 1.6 and Keras in 2019. At that time, training went normally. Although it is the same code as in the past, training loss and validation loss decrease when training with TF 1.13 (works on TF 1.6 and TF 1.13). However, from TF 1.14 to TF 2.6, both training loss and validation loss increase. When the model weights made with TF 1.6 are loaded from TF 2.6 (I tried TF 1.14, TF 1.15, TF 2.0 and TF 2.6), they load without any problem and the results are the same as those from the past, so I can't understand why this happens. I wonder what part of TF 1.13 versus TF 1.14 and above is different and affects training. Also, I tried updating the source code through tf_upgrade_v2, but that did not solve it at all: tf_upgrade_v2 --infile my_code.py --outfile my_code_upgraded.py. Standalone code to reproduce the issue: def define_model(time_step, input_row, input_dim, input_ch, output_dim, droprate, learning_rate): input_data = tf.keras.layers.Input(shape=(time_step, input_row, input_dim, input_ch), name='my_data'); x = tf.keras.layers.TimeDistributed(tf.keras.layers.Conv2D(filters=16, kernel_size=(2, 2), strides=4, padding='same'))(input_data); x = tf.keras.layers.LeakyReLU()(x) (default alpha=0.3); x = tf.keras.layers.TimeDistributed(tf.keras.layers.MaxPooling2D((2, 2), strides=2))(x); x = tf.keras.layers.Dropout(rate=droprate)(x); x = tf.keras.layers.TimeDistributed(tf.keras.layers.Flatten())(x); x = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(128))(x); x = tf.keras.layers.LeakyReLU()(x) (default alpha=0.3); (model 2: LSTM) x = tf.keras.layers.LSTM(128, dropout=0.2, recurrent_dropout=0.2, return_sequences=True)(x); x = tf.keras.layers.LSTM(10, dropout=0.2, recurrent_dropout=0.2, return_sequences=False)(x); x_out = tf.keras.layers.Dense(10, 'softmax')(x); model = Model(inputs=input_data, outputs=x_out); model.compile(optimizer=optimizers.Adam(learning_rate=0.001), loss='categorical_crossentropy'); return model. Relevant log output: the loss decreases when using TF 1.13 [image]; the loss increases when using TF 1.14 and TF 2.6 [image]. No response. |
tensorflowtensorflow | Unit test tensorflow/tools/docs:tf_doctest is broken by a protobuf exception | Bug | click to expand. issue type: bug. source: source. tensorflow version: git HEAD. custom code: no. os platform and distribution: CentOS 7. mobile device: n/a. python version: 3.8.13. bazel version: 5.1.1. gcc compiler version: 10.2.1. cuda/cudnn version: n/a. gpu model and memory: n/a. current behaviour: //tensorflow/tools/docs:tf_doctest FAILED in 12 out of 12 in 10.8s. standalone code to reproduce the issue:

bazel test --test_timeout 300,500,-1,-1 --flaky_test_attempts=3 --test_output=all --cache_test_results=no --noremote_accept_cached --config=nonccl --copt=-mtune=generic --copt=-march=armv8-a --copt=-O3 --build_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_aarch64,-requires-gpu --test_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_aarch64,-requires-gpu --verbose_failures --build_tests_only --jobs=75 //tensorflow/tools/docs:tf_doctest

relevant log output:

INFO: Options provided by the client: Inherited 'common' options: --isatty=1 --terminal_columns=132
INFO: Reading rc options for 'test' from /tmp/workspace/tensorflow-git/.bazelrc: Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'test' from /tmp/workspace/tensorflow-git/.bazelrc: Inherited 'build' options: --define framework_shared_object=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library --experimental_link_static_libraries_once=true
INFO: Reading rc options for 'test' from /tmp/workspace/tensorflow-git/.tf_configure.bazelrc: Inherited 'build' options: --action_env PYTHON_BIN_PATH=/tmp/workspace/venv-cp38-cp38/bin/python3 --action_env PYTHON_LIB_PATH=/tmp/workspace/venv-cp38-cp38/lib/python3.8/site-packages --python_path=/tmp/workspace/venv-cp38-cp38/bin/python3
INFO: Reading rc options for 'test' from /tmp/workspace/tensorflow-git/.bazelrc: Inherited 'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/ir,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
INFO: Reading rc options for 'test' from /tmp/workspace/tensorflow-git/.tf_configure.bazelrc: 'test' options: --flaky_test_attempts=3 --test_size_filters=small,medium
INFO: Found applicable config definition build:short_logs in file /tmp/workspace/tensorflow-git/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /tmp/workspace/tensorflow-git/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition test:v2 in file /tmp/workspace/tensorflow-git/.tf_configure.bazelrc: --test_tag_filters=-benchmark-test,-no_oss,-gpu,-oss_serial,-v1only --build_tag_filters=-benchmark-test,-no_oss,-gpu,-v1only
INFO: Found applicable config definition build:nonccl in file /tmp/workspace/tensorflow-git/.bazelrc: --define=no_nccl_support=true
INFO: Found applicable config definition build:linux in file /tmp/workspace/tensorflow-git/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels --distinct_host_configuration=false --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /tmp/workspace/tensorflow-git/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
INFO: Analyzed target //tensorflow/tools/docs:tf_doctest (279 packages loaded, 21308 targets configured).
INFO: Found 1 test target...
FAIL: //tensorflow/tools/docs:tf_doctest (shard 3 of 4) (see /root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/tools/docs/tf_doctest/shard_3_of_4/test_attempts/attempt_1.log)
FAIL: //tensorflow/tools/docs:tf_doctest (shard 1 of 4) (see /root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/tools/docs/tf_doctest/shard_1_of_4/test_attempts/attempt_1.log)
FAIL: //tensorflow/tools/docs:tf_doctest (shard 4 of 4) (see /root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/tools/docs/tf_doctest/shard_4_of_4/test_attempts/attempt_1.log)
FAIL: //tensorflow/tools/docs:tf_doctest (shard 2 of 4) (see /root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/tools/docs/tf_doctest/shard_2_of_4/test_attempts/attempt_1.log)
FAIL: //tensorflow/tools/docs:tf_doctest (shard 4 of 4) (see /root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/tools/docs/tf_doctest/shard_4_of_4/test_attempts/attempt_2.log)
FAIL: //tensorflow/tools/docs:tf_doctest (shard 1 of 4) (see /root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/tools/docs/tf_doctest/shard_1_of_4/test_attempts/attempt_2.log)
FAIL: //tensorflow/tools/docs:tf_doctest (shard 3 of 4) (see /root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/tools/docs/tf_doctest/shard_3_of_4/test_attempts/attempt_2.log)
FAIL: //tensorflow/tools/docs:tf_doctest (shard 2 of 4) (see /root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/tools/docs/tf_doctest/shard_2_of_4/test_attempts/attempt_2.log)
FAIL: //tensorflow/tools/docs:tf_doctest (shard 4 of 4) (see /root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/tools/docs/tf_doctest/shard_4_of_4/test.log)
INFO: From Testing //tensorflow/tools/docs:tf_doctest (shard 4 of 4): all three attempts fail with the same error, e.g.
2022-05-12 10:42:30.717782: E tensorflow/core/common_runtime/session_factory.cc:48] Two session factories are being registered under grpc_session
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/descriptor_database.cc:118] File already exists in database: tensorflow/python/framework/cpp_shape_inference.proto
[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/descriptor.cc:1379] CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what():  CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
FAIL: //tensorflow/tools/docs:tf_doctest (shard 3 of 4) (see /root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/tools/docs/tf_doctest/shard_3_of_4/test.log)
INFO: From Testing //tensorflow/tools/docs:tf_doctest (shard 3 of 4): same error on all three attempts (10:42:30.717687, 10:42:35.668010, 10:42:40.629291)
FAIL: //tensorflow/tools/docs:tf_doctest (shard 1 of 4) (see /root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/tools/docs/tf_doctest/shard_1_of_4/test.log)
INFO: From Testing //tensorflow/tools/docs:tf_doctest (shard 1 of 4): same error on all three attempts (10:42:30.717782, 10:42:35.654227, 10:42:40.629291)
FAIL: //tensorflow/tools/docs:tf_doctest (shard 2 of 4) (see /root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/testlogs/tensorflow/tools/docs/tf_doctest/shard_2_of_4/test.log)
INFO: From Testing //tensorflow/tools/docs:tf_doctest (shard 2 of 4): same error on all three attempts (10:42:30.717687, 10:42:35.770968, 10:42:40.991862)
Target //tensorflow/tools/docs:tf_doctest up-to-date: bazel-bin/tensorflow/tools/docs/tf_doctest
INFO: Elapsed time: 681.341s, Critical Path: 425.75s
INFO: 3147 processes: 456 internal, 2691 local.
INFO: Build completed, 1 test FAILED, 3147 total actions
//tensorflow/tools/docs:tf_doctest FAILED in 12 out of 12 in 10.8s
  Stats over 12 runs: max = 10.8s, min = 4.9s, avg = 6.9s, dev = 2.7s
INFO: Build completed, 1 test FAILED, 3147 total actions |
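The root cause visible in the log above is protobuf's generated-descriptor database rejecting a second registration of tensorflow/python/framework/cpp_shape_inference.proto. The class below is a hypothetical pure-Python stand-in for that duplicate-file check (the real check lives in protobuf's C++ descriptor database, not in this code), sketched only to illustrate why registering the same generated proto twice is fatal:

```python
class ToyDescriptorDatabase:
    """Hypothetical stand-in for protobuf's descriptor database.

    It mimics the behavior behind the 'File already exists in database'
    error: each .proto file name may be registered exactly once.
    """

    def __init__(self):
        self._files = {}

    def add(self, file_name, serialized_descriptor):
        if file_name in self._files:
            # This is the condition the CHECK in descriptor.cc trips on when
            # two copies of the generated code register the same file.
            raise ValueError("File already exists in database: " + file_name)
        self._files[file_name] = serialized_descriptor


db = ToyDescriptorDatabase()
db.add("tensorflow/python/framework/cpp_shape_inference.proto", b"descriptor-bytes")
try:
    db.add("tensorflow/python/framework/cpp_shape_inference.proto", b"descriptor-bytes")
except ValueError as e:
    print(e)  # File already exists in database: tensorflow/python/framework/cpp_shape_inference.proto
```

In the real failure, two copies of the generated protobuf code are loaded into one process, so the second registration hits this check and aborts.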
tensorflowtensorflow | A problem in the example for pix2pix | Bug | click to expand. issue type: other. current behavior: on this site, the output of the generator is tanh, but the images it is compared to are in [0, 1]. This means half the dynamic range of the activation is wasted, which is going to slow down learning. Where it says:

last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4, strides=2, padding='same', kernel_initializer=initializer, activation='tanh')

it might be better to say:

last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4, strides=2, padding='same', kernel_initializer=initializer, activation='sigmoid')

standalone code to reproduce the issue:

# test the discriminator
img_in = inp / 255
print(np.max(img_in))
print(np.min(img_in))
plt.imshow(img_in)
plt.colorbar()
plt.title('input image (left)')
plt.show()

gen_out = tf.keras.backend.squeeze(gen_output, 0)
print(np.max(gen_out))
print(np.min(gen_out))
plt.imshow(gen_out)
plt.colorbar()
plt.title('input image (right)')
plt.show()

# put the image pair through the discriminator
disc_out = discriminator([inp[tf.newaxis, ...], gen_output], training=False)

# show the discriminator output; note the size is much smaller than the input image
plt.imshow(disc_out[0, ..., -1], vmin=-20, vmax=20, cmap='RdBu_r')
plt.title('discriminator output; note the axis scale')
plt.colorbar()

relevant log output: this is the actual input image range: 1.0 to 0.0. this is the warning it gives about the need to clip: "Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers)". this is the actual range of the generator output: 1.0 to -1.0 |
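For context on why the range mismatch matters: tanh maps activations into (-1, 1) while sigmoid maps into (0, 1), so comparing a tanh output against targets stored in [0, 1] leaves the negative half of the activation unused. A small sketch (pure Python, no Keras; the helper names are mine, not from the tutorial):

```python
import math


def sigmoid(x):
    # logistic function: maps any real x into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))


xs = [-3.0, -1.0, 0.0, 1.0, 3.0]
tanh_out = [math.tanh(x) for x in xs]
sig_out = [sigmoid(x) for x in xs]

# tanh spans (-1, 1): roughly half its range is negative and can never
# match a target image stored in [0, 1].
assert min(tanh_out) < 0.0 < max(tanh_out)
# sigmoid spans (0, 1), matching the [0, 1] target range directly.
assert all(0.0 < y < 1.0 for y in sig_out)

# The other common fix is to keep tanh and rescale targets to [-1, 1]:
img01 = 0.25                         # a pixel already normalized to [0, 1]
img_tanh_range = img01 * 2.0 - 1.0   # now in [-1, 1]
assert img_tanh_range == -0.5
```

Either change (sigmoid output, or targets rescaled to [-1, 1]) removes the mismatch; the official pix2pix paper setup rescales the images rather than swapping the activation.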
tensorflowtensorflow | tfd.Normal.prob outputs a probability greater than 1 | Bug | click to expand. issue type: bug. source: source. tensorflow version: tf 2.8. custom code: yes. os platform and distribution: no response. mobile device: no response. python version: no response. bazel version: no response. gcc compiler version: no response. cuda/cudnn version: no response. gpu model and memory: no response. current behaviour: a bug happened: tfd.Normal.prob outputs a probability greater than 1. standalone code to reproduce the issue:

import tensorflow_probability as tfp
tfd = tfp.distributions
dist = tfd.Normal(loc=0.53605634, scale=0.3021255)
dist.prob(0.41487068)  # output is greater than 1

relevant log output: no response |
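Worth noting: for a continuous distribution, prob returns a probability density, not a probability, and a density can legitimately exceed 1 when the scale is small. A quick check with the normal pdf formula and the exact numbers from the report (plain math module, no TFP needed):

```python
import math


def normal_pdf(x, loc, scale):
    # N(x; loc, scale) = exp(-(x - loc)^2 / (2 scale^2)) / (scale * sqrt(2 pi))
    z = (x - loc) / scale
    return math.exp(-0.5 * z * z) / (scale * math.sqrt(2.0 * math.pi))


density = normal_pdf(0.41487068, loc=0.53605634, scale=0.3021255)
print(density)  # about 1.22: a density above 1 is fine; only the integral must be <= 1
assert density > 1.0
```

Intuitively, with scale around 0.3 the whole unit of probability mass is squeezed into a narrow interval, so the density near the mean must rise above 1 for the total area to stay 1.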
tensorflowtensorflow | AutoGraph crashes when using pythonw.exe on Windows | Bug | click to expand. issue type: bug. source: binary. tensorflow version: 2.8.0. custom code: no. os platform and distribution: Windows. mobile device: no response. python version: 3.8. bazel version: no response. gcc compiler version: no response. cuda/cudnn version: no response. gpu model and memory: no response. current behaviour: when using pythonw.exe, if local sources are not available (e.g. only .pyc files are available), AutoGraph causes a crash by trying to flush sys.stdout (L100-L105). If the source code for AutoGraph itself isn't available, it emits a warning using ag_logging.warn (L141-L145). For some reason warn is different from all the other emitters and flushes the stream. But when run under pythonw.exe, sys.stdout is None. The code checks echo_log_to_stdout to ensure that it doesn't write to stdout, but then immediately tries to flush it anyway. This can be remediated by indenting the call to sys.stdout.flush() so it sits inside the if echo_log_to_stdout check. standalone code to reproduce the issue: 1. create a virtual environment and install tensorflow; 2. compile all of the .py files in site-packages using the compileall module; 3. delete the .py files from site-packages; 4. create an example tk-based script, or something else that shows a GUI; 5. import tensorflow in your script and execute it using pythonw.exe, and observe the error, which does not occur when run with python.exe. relevant log output: no response |
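The remediation described above can be sketched as follows. This is my own illustrative helper, not the actual AutoGraph logging code; it only assumes, as the report states, an echo_log_to_stdout flag and a sys.stdout that may be None under pythonw.exe:

```python
import sys


def log_and_maybe_flush(message, echo_log_to_stdout):
    """Emit a warning, flushing stdout only when it exists.

    Under pythonw.exe sys.stdout is None, so the flush must live inside
    the same guard that decides whether to echo to stdout at all.
    """
    sys.stderr.write(message + "\n")
    if echo_log_to_stdout and sys.stdout is not None:
        sys.stdout.write(message + "\n")
        sys.stdout.flush()  # indented under the guard: never reached when stdout is None


# Simulate the pythonw.exe environment: no stdout stream at all.
saved = sys.stdout
sys.stdout = None
try:
    log_and_maybe_flush("AutoGraph sources unavailable", echo_log_to_stdout=True)
finally:
    sys.stdout = saved
```

An unconditional sys.stdout.flush() after the guard would raise AttributeError on None, which is exactly the crash the report describes.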
tensorflowtensorflow | Missing f prefix on f-strings | Bug | some strings look like they're meant to be f-strings but are missing the f prefix, meaning variable interpolation won't happen: L168, L86, L1762, L197, L716, L1345. I found this issue automatically. I'm a bot, beep boop. See other issues I found in your repo here. |
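For readers unfamiliar with this bug class: without the f prefix the braces are kept literally and no interpolation happens. A minimal illustration (a generic example of my own, not one of the actual TensorFlow strings):

```python
name = "conv2d"

plain = "unsupported layer: {name}"   # missing f prefix: braces stay literal
fixed = f"unsupported layer: {name}"  # f-string: {name} is substituted

print(plain)  # unsupported layer: {name}
print(fixed)  # unsupported layer: conv2d

assert plain == "unsupported layer: {name}"
assert fixed == "unsupported layer: conv2d"
```

The bug is easy to miss because the unprefixed string is still valid Python; it just logs the placeholder text instead of the value.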
tensorflowtensorflow | Error cascade on model.save, version 2.9.0-rc0 | Bug | click to expand. issue type: bug. source: source. tensorflow version: 2.9.0-rc0. custom code: no. os platform and distribution: Linux Ubuntu 21.09. mobile device: no response. python version: 3.9.7. bazel version: 5.1.1. gcc compiler version: 11.2.0. cuda/cudnn version: no response. gpu model and memory: Intel Core i7-980. current behaviour: error cascade on saving the model after training; it saves fine before training. many thanks, Brendan. standalone code to reproduce the issue:

import tensorflow as tf
from tensorflow import keras
import pandas as pd

print(tf.version.GIT_VERSION, tf.version.VERSION)

def baseline_model():
    b_model = keras.Sequential()
    b_model.add(keras.layers.Flatten(input_shape=(5,)))
    b_model.add(keras.layers.Dense(units=512, activation='relu', name='dense_1'))
    b_model.add(keras.layers.Dropout(0.2))
    b_model.add(keras.layers.Dense(units=32, activation='relu', name='dense_2'))
    b_model.add(keras.layers.Dense(3, activation='softmax'))
    b_model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0001),
                    loss=keras.losses.CategoricalCrossentropy(),
                    metrics=['accuracy'])
    return b_model

f = {'feature1': [1, 1], 'feature2': [2, 2], 'feature3': [3, 3], 'feature4': [4, 4], 'feature5': [5, 5]}
c = {'cat1': [0, 1], 'cat2': [1, 0], 'cat3': [0, 0]}
X_train = pd.DataFrame(f)
Y_train = pd.DataFrame(c)

model = baseline_model()
model.save('grrrrr_untrained')
model.fit(X_train, Y_train, epochs=2, verbose=1)
model.save('grrrrr_trained')

relevant log output:

v1.12.1-73396-g3903c35b3bf 2.9.0-rc0
2022-04-23 10:22:59.360155: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Epoch 1/2
1/1 - 1s 612ms/step - loss: 0.9775 - accuracy: 0.5000
Epoch 2/2
1/1 - 0s 3ms/step - loss: 0.8341 - accuracy: 0.5000
Traceback (most recent call last):
RuntimeError: in user code:
RuntimeError: Mismatched ReplicaContext.
ValueError: Error when tracing gradients for SavedModel. Check the error log to see the error that was raised when converting a gradient function to a concrete function. You may need to update the custom gradient, or disable saving gradients with the option tf.saved_model.SaveOptions(custom_gradients=False).
Problematic op name: Adam/IdentityN gradient inputs: |
tensorflowtensorflow | tensorflow/tools/docs:tf_doctest fails on machines without GPUs | Bug | click to expand. issue type: bug. source: source. tensorflow version: git HEAD. custom code: no. os platform and distribution: CentOS 7. mobile device: n/a. python version: 3.7.13. bazel version: 5.1.1. gcc compiler version: 10.2.1. cuda/cudnn version: n/a. gpu model and memory: n/a. current behaviour: the unit test tensorflow/tools/docs:tf_doctest fails when run on machines without GPUs installed. standalone code to reproduce the issue:

bazel test --test_timeout 300,500,-1,-1 --flaky_test_attempts=3 --test_output=all --cache_test_results=no --noremote_accept_cached --config=nonccl --build_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_aarch64,-requires-gpu --test_tag_filters=-no_oss,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_aarch64,-requires-gpu --verbose_failures --build_tests_only //tensorflow/tools/docs:tf_doctest

relevant log output: multiple instances similar to:

File "/root/.cache/bazel/_bazel_root/db210f68f81d95ddcca9ae96b16ed72c/execroot/org_tensorflow/bazel-out/aarch64-opt/bin/tensorflow/tools/docs/tf_doctest.runfiles/org_tensorflow/tensorflow/python/types/distribute.py", line 156, in tensorflow.python.types.distribute.DistributedValues
Failed example:
    strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
Exception raised:
    Traceback (most recent call last):
      File "/opt/python/cp37-cp37m/lib/python3.7/doctest.py", line 1337, in __run
        compileflags, 1), test.globs)
      File "<doctest>", line 1, in <module>
        strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/distribute/mirrored_strategy.py", line 287, in __init__
        self, devices=devices, cross_device_ops=cross_device_ops)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/distribute/mirrored_strategy.py", line 342, in __init__
        self._initialize_strategy(devices)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/distribute/mirrored_strategy.py", line 367, in _initialize_strategy
        self._collective_ops = self._make_collective_ops(devices)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/distribute/mirrored_strategy.py", line 385, in _make_collective_ops
        collective_keys=self._collective_keys)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_ops.py", line 1104, in __init__
        group_key, group_size, self._collective_keys, devices, options)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/distribute/cross_device_utils.py", line 271, in __init__
        self._ordering_token = resource_variable_ops.ResourceVariable(0.)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/util/traceback_utils.py", line 141, in error_handler
        return fn(*args, **kwargs)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/ops/variables.py", line 268, in __call__
        return super(VariableMetaclass, cls).__call__(*args, **kwargs)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/ops/resource_variable_ops.py", line 1670, in __init__
        validate_shape=validate_shape)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/ops/resource_variable_ops.py", line 1817, in _init_from_args
        initial_value, name="initial_value", dtype=dtype)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/profiler/trace.py", line 183, in wrapped
        return func(*args, **kwargs)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/framework/ops.py", line 1640, in convert_to_tensor
        ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/framework/tensor_conversion_registry.py", line 48, in _default_conversion_function
        return constant_op.constant(value, dtype, name=name)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/framework/constant_op.py", line 268, in constant
        allow_broadcast=True)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/framework/constant_op.py", line 279, in _constant_impl
        return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/framework/constant_op.py", line 304, in _constant_eager_impl
        t = convert_to_eager_tensor(value, ctx, dtype)
      File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/framework/constant_op.py", line 102, in convert_to_eager_tensor
        return ops.EagerTensor(value, ctx.device_name, dtype)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Could not satisfy device specification '/job:localhost/replica:0/task:0/device:GPU:0'. enable_soft_placement=0. Supported device types [CPU]. All available devices [/job:localhost/replica:0/task:0/device:CPU:0].

File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/types/distribute.py", line 158, in tensorflow.python.types.distribute.DistributedValues
Failed example:
    dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
Exception raised:
    Traceback (most recent call last):
      File "/opt/python/cp37-cp37m/lib/python3.7/doctest.py", line 1337, in __run
        compileflags, 1), test.globs)
      File "<doctest>", line 1, in <module>
        dataset_iterator = iter(strategy.experimental_distribute_dataset(dataset))
    NameError: name 'strategy' is not defined

File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/types/distribute.py", line 159, in tensorflow.python.types.distribute.DistributedValues
Failed example:
    per_replica_values = strategy.experimental_local_results(distributed_values)
Exception raised:
    Traceback (most recent call last):
      File "/opt/python/cp37-cp37m/lib/python3.7/doctest.py", line 1337, in __run
        compileflags, 1), test.globs)
      File "<doctest>", line 1, in <module>
        per_replica_values = strategy.experimental_local_results(
    NameError: name 'strategy' is not defined

File ".../tf_doctest.runfiles/org_tensorflow/tensorflow/python/types/distribute.py", line 161, in tensorflow.python.types.distribute
DistributedValues failing example: per_replica_values. Exception raised: Traceback (most recent call last): File "/opt/python/cp37-cp37m/lib/python3.7/doctest.py", line 1337, in __run: compileflags, 1), test.globs; File "<doctest>", line 1, in <module>: per_replica_values. NameError: name 'per_replica_values' is not defined |
tensorflowtensorflow | Inconsistent documentation regarding supported input data types for ParameterServerStrategy | Bug | Issue type: documentation bug. Current behaviour: there is a direct conflict between the ParameterServerStrategy tutorial [1], which states "Keras Model.fit with tf.distribute.ParameterServerStrategy can take input data in the form of a tf.data.Dataset, tf.distribute.DistributedDataset, or a tf.keras.utils.experimental.DatasetCreator, with Dataset being the recommended option for ease of use", and these pages: ParameterServerStrategy [2]: "when used with Model.fit, tf.distribute.experimental.ParameterServerStrategy must be used with a tf.keras.utils.experimental.DatasetCreator, and steps_per_epoch must be specified"; distributed input [3]: "users of tf.distribute.experimental.ParameterServerStrategy with the Model.fit API need to use a tf.keras.utils.experimental.DatasetCreator as the input"; Model.fit [4]: "if using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator types are supported for x". The former recommends using Dataset, while the latter pages explicitly state that only DatasetCreator is supported. ([1] "input data", [2], [3], [4] "fit") |
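To make the conflict concrete, here is a minimal sketch (my own, not from the issue) of the kind of `dataset_fn` that both options wrap — a plain `tf.data.Dataset` that `Model.fit` could take directly per the tutorial, or that a `DatasetCreator` would be built from per the API pages. The function name, shapes, and values are illustrative only, and no real parameter-server cluster is set up here.

```python
import tensorflow as tf

# Hypothetical dataset_fn: the same function could back a Dataset passed to
# Model.fit directly (tutorial's recommendation) or be wrapped in a
# tf.keras.utils.experimental.DatasetCreator (API pages' requirement).
def dataset_fn(input_context=None):
    # A tiny in-memory dataset; shapes and values are illustrative only.
    x = tf.random.uniform([8, 4])
    y = tf.random.uniform([8, 1])
    return tf.data.Dataset.from_tensor_slices((x, y)).batch(2)

ds = dataset_fn()
batch_x, batch_y = next(iter(ds))
print(batch_x.shape)  # (2, 4)
```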
tensorflowtensorflow | test | Bug | Click to expand: Issue type: Bug. Source: source. TensorFlow version: 2.6. Custom code: yes. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behaviour: "A bug happened!". Standalone code to reproduce the issue: "test". Relevant log output: no response. |
tensorflowtensorflow | Gradients not computed upon tf.concat | Bug | Hi, I am trying to rewrite MAVNet in genuine TensorFlow instead of TFLearn and train it. Note that MAVNet consists of numerous `tf.keras.layers.concatenate` layers, which involve `tf.concat`. For some reason the gradient flow seems to break, and I just found out that the flow breaks whenever it passes through `tf.concat`. Please pardon me if this issue is a duplicate of #37726, but I feel that the previous issue was closed too early without being thoroughly investigated and resolved. The details of this issue are as follows.

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: n/a. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.3.0 to 2.8.0 (the issue reproduces on all versions in this range). Python version: 3.8.3. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: CUDA 11.4 / cuDNN 8.1.1. GPU model and memory: NVIDIA GeForce GTX TITAN X.

Describe the current behavior: TensorFlow is not able to compute gradients after concatenating multiple feature maps with `tf.concat`. Describe the expected behavior: TensorFlow should be able to compute gradients after concatenating multiple feature maps with `tf.concat`.

Standalone code to reproduce the issue (here we assume all necessary libraries, including TensorFlow, are imported):

```python
class LocalResponseNormalization(tf.keras.layers.Layer):
    def __init__(self, depth_radius=5, bias=1, alpha=1, beta=0.5):
        super(LocalResponseNormalization, self).__init__()
        self.depth_radius = depth_radius
        self.bias = bias
        self.alpha = alpha
        self.beta = beta

    def build(self, inputs):
        pass

    def call(self, inputs):
        return tf.nn.local_response_normalization(inputs, alpha=self.alpha, beta=self.beta)

    def get_config(self):
        config = super().get_config().copy()
        return config


def mavnet(input_shape, num_classes=2):
    img_input = tf.keras.layers.Input(shape=input_shape)
    conv1_7_7 = tf.keras.layers.Conv2D(64, 7, strides=2, activation='relu', padding='same', name='conv1_7_7_s2')(img_input)
    pool1_3_3 = tf.keras.layers.MaxPooling2D(pool_size=3, strides=2)(conv1_7_7)
    pool1_3_3 = LocalResponseNormalization(alpha=0.0001, beta=0.75)(pool1_3_3)
    conv2_3_3_reduce = tf.keras.layers.Conv2D(64, 1, activation='relu', padding='same', name='conv2_3_3_reduce')(pool1_3_3)
    conv2_3_3 = tf.keras.layers.Conv2D(192, 3, activation='relu', padding='same', name='conv2_3_3')(conv2_3_3_reduce)
    conv2_3_3 = LocalResponseNormalization(alpha=0.0001, beta=0.75)(conv2_3_3)
    pool2_3_3 = tf.keras.layers.MaxPooling2D(pool_size=3, strides=2, name='pool2_3_3_s2')(conv2_3_3)

    mavnet_3a_1_1 = tf.keras.layers.Conv2D(64, 1, activation='relu', padding='same', name='mavnet_3a_1_1')(pool2_3_3)
    mavnet_3a_3_3_reduce = tf.keras.layers.Conv2D(96, 1, activation='relu', padding='same', name='mavnet_3a_3_3_reduce')(pool2_3_3)
    mavnet_3a_3_3 = tf.keras.layers.Conv2D(128, kernel_size=3, activation='relu', padding='same', name='mavnet_3a_3_3')(mavnet_3a_3_3_reduce)
    mavnet_3a_pool = tf.keras.layers.MaxPooling2D(pool_size=3, strides=1, padding='same')(pool2_3_3)
    mavnet_3a_pool_1_1 = tf.keras.layers.Conv2D(32, kernel_size=1, activation='relu', padding='same', name='mavnet_3a_pool_1_1')(mavnet_3a_pool)
    mavnet_3a_output = tf.keras.layers.concatenate(axis=3, inputs=[mavnet_3a_1_1, mavnet_3a_3_3, mavnet_3a_pool_1_1])

    mavnet_3b_1_1 = tf.keras.layers.Conv2D(128, kernel_size=1, activation='relu', padding='same', name='mavnet_3b_1_1')(mavnet_3a_output)
    mavnet_3b_3_3_reduce = tf.keras.layers.Conv2D(128, kernel_size=1, activation='relu', padding='same', name='mavnet_3b_3_3_reduce')(mavnet_3a_output)
    mavnet_3b_3_3 = tf.keras.layers.Conv2D(192, kernel_size=3, activation='relu', padding='same', name='mavnet_3b_3_3')(mavnet_3b_3_3_reduce)
    mavnet_3b_pool = tf.keras.layers.MaxPooling2D(pool_size=3, strides=1, padding='same', name='mavnet_3b_pool')(mavnet_3a_output)
    mavnet_3b_pool_1_1 = tf.keras.layers.Conv2D(64, kernel_size=1, activation='relu', padding='same', name='mavnet_3b_pool_1_1')(mavnet_3b_pool)
    mavnet_3b_output = tf.keras.layers.concatenate(axis=3, name='mavnet_3b_output', inputs=[mavnet_3b_1_1, mavnet_3b_3_3, mavnet_3b_pool_1_1])
    pool3_3_3 = tf.keras.layers.MaxPooling2D(pool_size=3, strides=2, padding='same', name='pool3_3_3')(mavnet_3b_output)

    mavnet_4a_1_1 = tf.keras.layers.Conv2D(192, kernel_size=1, activation='relu', padding='same', name='mavnet_4a_1_1')(pool3_3_3)
    mavnet_4a_3_3_reduce = tf.keras.layers.Conv2D(96, kernel_size=1, activation='relu', padding='same', name='mavnet_4a_3_3_reduce')(pool3_3_3)
    mavnet_4a_3_3 = tf.keras.layers.Conv2D(208, kernel_size=3, activation='relu', padding='same', name='mavnet_4a_3_3')(mavnet_4a_3_3_reduce)
    mavnet_4a_pool = tf.keras.layers.MaxPooling2D(pool_size=3, strides=1, padding='same', name='mavnet_4a_pool')(pool3_3_3)
    mavnet_4a_pool_1_1 = tf.keras.layers.Conv2D(64, kernel_size=1, activation='relu', padding='same', name='mavnet_4a_pool_1_1')(mavnet_4a_pool)
    mavnet_4a_output = tf.keras.layers.concatenate(axis=3, name='mavnet_4a_output', inputs=[mavnet_4a_1_1, mavnet_4a_3_3, mavnet_4a_pool_1_1])

    mavnet_4b_1_1 = tf.keras.layers.Conv2D(160, kernel_size=1, activation='relu', padding='same', name='mavnet_4b_1_1')(mavnet_4a_output)
    mavnet_4b_3_3_reduce = tf.keras.layers.Conv2D(112, kernel_size=1, activation='relu', padding='same', name='mavnet_4b_3_3_reduce')(mavnet_4a_output)
    mavnet_4b_3_3 = tf.keras.layers.Conv2D(224, kernel_size=3, activation='relu', padding='same', name='mavnet_4b_3_3')(mavnet_4b_3_3_reduce)
    mavnet_4b_pool = tf.keras.layers.MaxPooling2D(pool_size=3, strides=1, padding='same', name='mavnet_4b_pool')(mavnet_4a_output)
    mavnet_4b_pool_1_1 = tf.keras.layers.Conv2D(64, kernel_size=1, activation='relu', padding='same', name='mavnet_4b_pool_1_1')(mavnet_4b_pool)
    mavnet_4b_output = tf.keras.layers.concatenate(axis=3, name='mavnet_4b_output', inputs=[mavnet_4b_1_1, mavnet_4b_3_3, mavnet_4b_pool_1_1])

    mavnet_4c_1_1 = tf.keras.layers.Conv2D(128, kernel_size=1, activation='relu', padding='same', name='mavnet_4c_1_1')(mavnet_4b_output)
    mavnet_4c_3_3_reduce = tf.keras.layers.Conv2D(128, kernel_size=1, activation='relu', padding='same', name='mavnet_4c_3_3_reduce')(mavnet_4b_output)
    mavnet_4c_3_3 = tf.keras.layers.Conv2D(256, kernel_size=3, activation='relu', padding='same', name='mavnet_4c_3_3')(mavnet_4c_3_3_reduce)
    mavnet_4c_pool = tf.keras.layers.MaxPooling2D(pool_size=3, strides=1, padding='same')(mavnet_4b_output)
    mavnet_4c_pool_1_1 = tf.keras.layers.Conv2D(64, kernel_size=1, activation='relu', padding='same', name='mavnet_4c_pool_1_1')(mavnet_4c_pool)
    mavnet_4c_output = tf.keras.layers.concatenate(axis=3, name='mavnet_4c_output', inputs=[mavnet_4c_1_1, mavnet_4c_3_3, mavnet_4c_pool_1_1])

    mavnet_4d_1_1 = tf.keras.layers.Conv2D(112, kernel_size=1, activation='relu', padding='same', name='mavnet_4d_1_1')(mavnet_4c_output)
    mavnet_4d_3_3_reduce = tf.keras.layers.Conv2D(144, kernel_size=1, activation='relu', padding='same', name='mavnet_4d_3_3_reduce')(mavnet_4c_output)
    mavnet_4d_3_3 = tf.keras.layers.Conv2D(288, kernel_size=3, activation='relu', padding='same', name='mavnet_4d_3_3')(mavnet_4d_3_3_reduce)
    mavnet_4d_pool = tf.keras.layers.MaxPooling2D(pool_size=3, strides=1, padding='same', name='mavnet_4d_pool')(mavnet_4c_output)
    mavnet_4d_pool_1_1 = tf.keras.layers.Conv2D(64, kernel_size=1, activation='relu', padding='same', name='mavnet_4d_pool_1_1')(mavnet_4d_pool)
    mavnet_4d_output = tf.keras.layers.concatenate(axis=3, name='mavnet_4d_output', inputs=[mavnet_4d_1_1, mavnet_4d_3_3, mavnet_4d_pool_1_1])

    mavnet_4e_1_1 = tf.keras.layers.Conv2D(256, kernel_size=1, activation='relu', padding='same', name='mavnet_4e_1_1')(mavnet_4d_output)
    mavnet_4e_3_3_reduce = tf.keras.layers.Conv2D(160, kernel_size=1, activation='relu', padding='same', name='mavnet_4e_3_3_reduce')(mavnet_4d_output)
    mavnet_4e_3_3 = tf.keras.layers.Conv2D(320, kernel_size=3, activation='relu', padding='same', name='mavnet_4e_3_3')(mavnet_4e_3_3_reduce)
    mavnet_4e_pool = tf.keras.layers.MaxPooling2D(pool_size=3, strides=1, padding='same', name='mavnet_4e_pool')(mavnet_4d_output)
    mavnet_4e_pool_1_1 = tf.keras.layers.Conv2D(128, kernel_size=1, activation='relu', padding='same', name='mavnet_4e_pool_1_1')(mavnet_4e_pool)
    mavnet_4e_output = tf.keras.layers.concatenate(axis=3, inputs=[mavnet_4e_1_1, mavnet_4e_3_3, mavnet_4e_pool_1_1])
    pool4_3_3 = tf.keras.layers.MaxPooling2D(pool_size=3, strides=2, padding='same', name='pool_3_3')(mavnet_4e_output)

    mavnet_5a_1_1 = tf.keras.layers.Conv2D(256, kernel_size=1, activation='relu', padding='same', name='mavnet_5a_1_1')(pool4_3_3)
    mavnet_5a_3_3_reduce = tf.keras.layers.Conv2D(160, kernel_size=1, activation='relu', padding='same', name='mavnet_5a_3_3_reduce')(pool4_3_3)
    mavnet_5a_3_3 = tf.keras.layers.Conv2D(320, kernel_size=3, activation='relu', padding='same', name='mavnet_5a_3_3')(mavnet_5a_3_3_reduce)
    mavnet_5a_pool = tf.keras.layers.MaxPooling2D(pool_size=3, strides=1, padding='same', name='mavnet_5a_pool')(pool4_3_3)
    mavnet_5a_pool_1_1 = tf.keras.layers.Conv2D(128, kernel_size=1, activation='relu', padding='same', name='mavnet_5a_pool_1_1')(mavnet_5a_pool)
    mavnet_5a_output = tf.keras.layers.concatenate(axis=3, inputs=[mavnet_5a_1_1, mavnet_5a_3_3, mavnet_5a_pool_1_1])

    mavnet_5b_1_1 = tf.keras.layers.Conv2D(384, kernel_size=1, activation='relu', padding='same', name='mavnet_5b_1_1')(mavnet_5a_output)
    mavnet_5b_3_3_reduce = tf.keras.layers.Conv2D(192, kernel_size=1, activation='relu', padding='same', name='mavnet_5b_3_3_reduce')(mavnet_5a_output)
    mavnet_5b_3_3 = tf.keras.layers.Conv2D(384, kernel_size=3, activation='relu', padding='same', name='mavnet_5b_3_3')(mavnet_5b_3_3_reduce)
    mavnet_5b_pool = tf.keras.layers.MaxPooling2D(pool_size=3, strides=1, padding='same', name='mavnet_5b_pool')(mavnet_5a_output)
    mavnet_5b_pool_1_1 = tf.keras.layers.Conv2D(128, kernel_size=1, activation='relu', padding='same', name='mavnet_5b_pool_1_1')(mavnet_5b_pool)
    mavnet_5b_output = tf.keras.layers.concatenate(axis=3, inputs=[mavnet_5b_1_1, mavnet_5b_3_3, mavnet_5b_pool_1_1])

    pool5_7_7 = tf.keras.layers.AveragePooling2D(pool_size=7, strides=1, padding='same')(mavnet_5b_output)
    pool5_7_7 = tf.keras.layers.Dropout(0.4)(pool5_7_7)
    pool5_7_7_flattened = tf.keras.layers.Flatten()(pool5_7_7)
    output = tf.keras.layers.Dense(num_classes, activation='softmax')(pool5_7_7_flattened)
    model = tf.keras.Model(img_input, output)
    model.compile(optimizer=tf.keras.optimizers.SGD(0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
    return model


model = mavnet((100, 100, 1), num_classes)
# Here you can put input data of your own
model.fit(data)
```

In addition, according to #37726, you can refer to a simple example in which gradient computation after `tf.concat` does not work. |
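For reference, gradients do propagate through `tf.concat` in a minimal setting; the following check (my own sketch, not the reporter's model) confirms this with `tf.GradientTape`, which suggests the breakage in the report lies elsewhere in the larger graph:

```python
import tensorflow as tf

# Minimal sketch (not the reporter's model): verify that gradients propagate
# through tf.concat for two small variables.
x = tf.Variable([[1.0, 2.0]])
y = tf.Variable([[3.0, 4.0]])
with tf.GradientTape() as tape:
    z = tf.concat([x, y], axis=1)     # shape (1, 4)
    loss = tf.reduce_sum(z * z)       # sum of squares
gx, gy = tape.gradient(loss, [x, y])  # d(loss)/dx = 2x, d(loss)/dy = 2y
print(gx.numpy(), gy.numpy())
```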
tensorflowtensorflow | XLA: different jit_compile behavior from TF 2.7 | Bug | For the customized code below, I see the following error at runtime when XLA is turned on; this did not appear in TF 2.7:

```
2022-04-13 19:49:36.873241: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at xla_ops.cc:436 : INVALID_ARGUMENT: Failed to prove the equality of two dimensions at compile time: multiply.144 = s32[] multiply(s32[] constant.142, s32[] add.1), metadata={op_type="Reshape" op_name="Reshape_3"} vs add = s32[] add(s32[] reduce.109, s32[] constant.17)
```

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 20.04.4 LTS. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.8.0. Python version: 3.8. GCC/compiler version: gcc 10. CUDA/cuDNN version: 11.6. GPU model and memory: V100, 32 GB.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

display_id_counter = tf.Variable(0, trainable=False, dtype=tf.float64)

@tf.function
def evaluation_step(x, y, prediction):
    dummy_loss = 0.9
    prediction = tf.reshape(prediction, [-1])
    prediction = tf.cast(prediction, tf.float64)
    display_ids = x
    display_ids = tf.reshape(display_ids, [-1])
    labels = tf.reshape(y, [-1])
    sorted_ids = tf.argsort(display_ids)
    display_ids = tf.gather(display_ids, indices=sorted_ids)
    prediction = tf.gather(prediction, indices=sorted_ids)
    labels = tf.gather(labels, indices=sorted_ids)
    _, display_ids_idx, display_ids_ads_count = tf.unique_with_counts(display_ids, out_idx=tf.int64)
    pad_length = 30 - tf.reduce_max(display_ids_ads_count)
    preds = tf.RaggedTensor.from_value_rowids(prediction, display_ids_idx).to_tensor()
    labels = tf.RaggedTensor.from_value_rowids(labels, display_ids_idx).to_tensor()
    labels_mask = tf.math.reduce_max(labels, 1)
    preds_masked = tf.boolean_mask(preds, labels_mask)
    labels_masked = tf.boolean_mask(labels, labels_mask)
    labels_masked = tf.argmax(labels_masked, axis=1, output_type=tf.int32)
    labels_masked = tf.reshape(labels_masked, [-1, 1])
    preds_masked = tf.pad(preds_masked, [[0, 0], [0, pad_length]])
    _, predictions_idx = tf.math.top_k(preds_masked, 12)
    indices = tf.math.equal(predictions_idx, labels_masked)
    shape = tf.cast(tf.shape(indices)[0], tf.float64)
    display_id_counter.assign_add(shape)

dim = 102400
tf.config.optimizer.set_jit(True)
for step in range(200):
    preds = np.random.random((dim, 1))
    y_tmp = np.zeros((dim, 1), dtype=float)
    num_ones = np.random.randint(1, dim - 1, 1)
    ids_one = np.random.randint(0, dim, num_ones)
    for i in ids_one:
        y_tmp[i][0] = 1
    x_tmp = np.random.randint(0, dim, (dim, 1), dtype=np.int64)
    evaluation_step(x_tmp, y_tmp, preds)
```

Tracked down to commit ac4575. |
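For context, the error above is about XLA needing to prove shape relationships at compile time. A minimal illustration (my own, not the reporter's code) of per-function XLA compilation with a reshape whose shapes are statically provable, which compiles cleanly:

```python
import tensorflow as tf

# Sketch: jit_compile=True asks XLA to compile this function; XLA must be able
# to prove shape relationships (e.g. for reshape) at compile time. Here the
# input size (6) is statically divisible by 2, so compilation succeeds.
@tf.function(jit_compile=True)
def reshape_pairs(x):
    return tf.reshape(x, [-1, 2])

out = reshape_pairs(tf.range(6))
print(out.shape)  # (3, 2)
```

Data-dependent sizes (such as the `tf.pad` with a tensor-valued `pad_length` in the repro) are exactly what XLA cannot always prove equal at compile time.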
tensorflowtensorflow | macOS wheel URLs are incorrect | Bug | URL(s) with the issue: (see description). Description of issue (what needs changing): the Mac wheels that the docs page purports to exist do not in fact exist: `NoSuchKey: The specified key does not exist. No such object: tensorflow/mac/cpu/tensorflow-2.8.0-cp39-cp39-macosx_10_11_x86_64.whl`. Where are these wheels actually located? Submit a pull request?: I would submit a PR, but I am not aware of what the correct URLs are; the storage bucket does not have any index as far as I could tell. |
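For context, the filename in the dead link follows the PEP 427 wheel naming convention. A small helper (hypothetical, not part of TensorFlow or its docs) shows how such a name is assembled from its tags:

```python
# Hypothetical helper: assemble a PEP 427 wheel filename of the form
# {distribution}-{version}-{python tag}-{abi tag}-{platform tag}.whl.
# The tag values below mirror the dead link from the report.
def wheel_filename(dist, version, python_tag, abi_tag, platform_tag):
    return f"{dist}-{version}-{python_tag}-{abi_tag}-{platform_tag}.whl"

name = wheel_filename("tensorflow", "2.8.0", "cp39", "cp39", "macosx_10_11_x86_64")
print(name)  # tensorflow-2.8.0-cp39-cp39-macosx_10_11_x86_64.whl
```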
tensorflowtensorflow | Test fails on r2.8 (core): //tensorflow/core:lib_math_math_util_test | Bug | Edit: a PR fixing this issue has been opened. Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub (tag: bug_template). System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Linux Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from (source or binary): source. TensorFlow version: v2.8.0-2-ge994fb9c3ad 2.8.0. Python version: 3.8.3. Bazel version (if compiling from source): 0.25.2. GCC/compiler version (if compiling from source): gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: test fails. Describe the expected behavior: test passes. Contributing (do you want to contribute a PR? yes/no): yes. Briefly describe your candidate solution if contributing: (edit: removed incorrect hypothesis). Standalone code to reproduce the issue:

```shell
git checkout r2.8
bazel --host_jvm_args=-Xmx32g test --jobs=12 --config=dbg --verbose_failures -k //tensorflow/core:lib_math_math_util_test
```

Other info/logs: test log attached. |
tensorflowtensorflow | TypeError: __call__() missing 1 required positional argument: 'step' | Bug | There is a bug in TensorFlow 2.6: LearningRateSchedule is not correctly recognized by isinstance (tensorflow/python/keras/optimizer_v2/optimizer_v2.py, lines 1014 and 807).

```python
value = tf.keras.optimizers.schedules.PiecewiseConstantDecay(...)
print(isinstance(value, learning_rate_schedule.LearningRateSchedule))  # returns False -- not desired
import tensorflow
print(isinstance(value, tensorflow.keras.optimizers.schedules.LearningRateSchedule))  # returns True -- can solve the bug
```

Can anyone create a pull request?

```
  File "/home/fujiaqe/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training_generator_v1.py", line 574, in fit
    return fit_generator(...)
  File "/home/fujiaqe/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training_generator_v1.py", line 256, in model_iteration
    batch_outs = batch_function(*batch_data)
  File "/home/fujiaqe/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training_v1.py", line 1093, in train_on_batch
    self._make_train_function()
  File "/home/fujiaqe/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training_v1.py", line 2028, in _make_train_function
    updates = self.optimizer.get_updates(...)
  File "/home/fujiaqe/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 784, in get_updates
    return self.apply_gradients(grads_and_vars)
  File "/home/fujiaqe/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 671, in apply_gradients
    apply_state = self._prepare(var_list)
  File "/home/fujiaqe/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 957, in _prepare
    self._prepare_local(var_device, var_dtype, apply_state)
  File "/home/fujiaqe/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/gradient_descent.py", line 125, in _prepare_local
    super(SGD, self)._prepare_local(var_device, var_dtype, apply_state)
  File "/home/fujiaqe/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 963, in _prepare_local
    lr_t = array_ops.identity(self._decayed_lr(var_dtype))
  File "/home/fujiaqe/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 1017, in _decayed_lr
    lr_t = self._get_hyper("learning_rate", var_dtype)
  File "/home/fujiaqe/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 814, in _get_hyper
    value = value()
TypeError: __call__() missing 1 required positional argument: 'step'
```

Test code:

```python
boundaries = [len(x_train) // batch_size * 70, len(x_train) // batch_size * 150]
values = [0.1, 0.01, 0.001]
learning_rate_fn = tf.keras.optimizers.schedules.PiecewiseConstantDecay(boundaries, values)
optimizer1 = gradient_descent_v2.SGD(learning_rate=learning_rate_fn, momentum=0.9)
model.compile(loss='categorical_crossentropy', optimizer=optimizer1, metrics=['accuracy'])
model.fit(...)
``` |
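The TypeError above comes from invoking the schedule with no arguments: a LearningRateSchedule is a callable that takes the current step. A small sketch of correct direct use (my own, not from the report; boundary values are illustrative):

```python
import tensorflow as tf

# Sketch: PiecewiseConstantDecay must be called with a step argument; the
# reported TypeError arises when the optimizer calls it with no arguments.
lr_fn = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[70, 150], values=[0.1, 0.01, 0.001])
print(float(lr_fn(0)), float(lr_fn(100)), float(lr_fn(200)))
```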
tensorflowtensorflow | (null) | Bug | In "Define a convolutional autoencoder": `encoded_imgs = autoencoder.encoder(x_test).numpy()` — here the input data is supposed to be `x_test_noisy`. |
tensorflowtensorflow | Memory leak in example/tutorial documentation | Bug | URL(s) with the issue: "Logging arbitrary image data". Description of issue (what needs changing): the listed example needs to be corrected so as to avoid a memory leak. Clear description: it appears that the example creates `tf.image.decode_png` and `tf.expand_dims` layers within `plot_to_image`, which is being repeatedly called by a `tf.keras.callbacks.LambdaCallback`. These layers fill VRAM or system RAM, depending on whether or not the example is run on a GPU. |
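To see the step in question in isolation, here is the decode-plus-batch-dimension conversion that `plot_to_image` performs on every callback invocation, sketched with dummy data (my own example, not the tutorial's figure code):

```python
import tensorflow as tf

# Sketch: the decode_png + expand_dims step that plot_to_image performs per
# callback invocation, here applied to a dummy 2x2 RGB image.
png_bytes = tf.io.encode_png(tf.zeros([2, 2, 3], dtype=tf.uint8))
image = tf.io.decode_png(png_bytes, channels=3)
image = tf.expand_dims(image, 0)  # add a batch dimension
print(image.shape)  # (1, 2, 2, 3)
```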
tensorflowtensorflow | Are the instructions for building the TFLite example outdated? | Bug | I was trying to build an image classifier for Android, and I was following the instructions provided here. Following the instructions to run the given example, I get this error: `Querying the mapped value of map(java.io.File, property(org.gradle.api.file.Directory, fixed(class org.gradle.api.internal.file.DefaultFilePropertyFactory$FixedDirectory, /Users/user1/sample/android/example/models/build/generated/ap_generated_sources/debug/out)), org.gradle.api.internal.file.DefaultFilePropertyFactory$ToFileTransformer@2325887e) before task ':models:compileDebugJavaWithJavac' has completed is not supported`. System info: macOS; Android Studio version 3.2; Android SDK version 23. |
tensorflowtensorflow | tf.map_fn on RaggedTensor crashes during gradient computation on a GPU | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Colab. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.8. Python version: 3.7. Describe the current behavior: when one of the losses `tf.losses.SparseCategoricalCrossentropy`, `tf.losses.CategoricalCrossentropy`, `tf.losses.BinaryCrossentropy`, or `tf.losses.MeanSquaredError` is used on ragged tensors, which are computed via a `tf.map_fn` on a RaggedTensor, the gradient computation on a GPU crashes with:

```
Node: 'Adam/gradients/zeros_like_2'
2 root error(s) found.
(0) INTERNAL: No unary variant unary_op function found for op ZEROS_LIKE Variant type_name: RaggedTensorVariant for device type: GPU [[{{node Adam/gradients/zeros_like_2}}]] [[binary_crossentropy/map/while/loop_body_control/_124/_67]]
(1) INTERNAL: No unary variant unary_op function found for op ZEROS_LIKE Variant type_name: RaggedTensorVariant for device type: GPU [[{{node Adam/gradients/zeros_like_2}}]]
0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_16690]
```

The computation does not crash on a CPU, and it does not crash when tf.functions are executed eagerly. Also, if the `tf.map_fn` is circumvented by using the following argument to compile: `loss=lambda yt, yp: tf.losses.binary_crossentropy(yt.values, yp.values)`, it works on GPU without a crash. Describe the expected behavior: the code does not crash on a GPU. Do you want to contribute a PR? (yes/no): no. Standalone code to reproduce the issue: a simple Colab reproducing the error is here. Other info/logs: the map_fn used is here (L1408). |
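The workaround mentioned above operates on the flat `.values` of the ragged tensors instead of mapping over rows. A small sketch of that pattern with my own data (not the reporter's model):

```python
import tensorflow as tf

# Sketch: apply binary cross-entropy to the flat values of ragged tensors,
# as in the reported workaround loss=lambda yt, yp: ...(yt.values, yp.values).
yt = tf.ragged.constant([[0.0, 1.0], [1.0]])
yp = tf.ragged.constant([[0.1, 0.9], [0.8]])
loss = tf.keras.losses.binary_crossentropy(yt.values, yp.values)
print(loss)  # scalar: mean cross-entropy over the flattened values
```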
tensorflowtensorflow | RUNPATH not including tensorflow/python for the dtensor device .so | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub (tag: bug_template). System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: manylinux2014. Mobile device: n/a. TensorFlow installed from (source or binary): source. TensorFlow version: git HEAD. Python version: 3.7.x. Bazel version (if compiling from source): 5.0.0. GCC/compiler version (if compiling from source): 10.2.1. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: auditwheel fails with a message about being unable to find `_pywrap_tensorflow_internal.so`. Describe the expected behavior: auditwheel passes. Contributing (do you want to contribute a PR? yes/no): no. Standalone code to reproduce the issue: `auditwheel repair -w wheel/ tensorflow_pkg/tensorflow_aarch64-2.9.0-cp38-cp38-linux_aarch64.whl`. Other info/logs: issue introduced with (commit). |
tensorflowtensorflow | Factor limits of tf.image.adjust_contrast and tf.image.adjust_saturation | Bug | `tf.image.adjust_saturation(image, saturation_factor, name=None)`, `tf.image.adjust_contrast(images, contrast_factor)`. Could you please help me understand what the expected ranges or limits of the `saturation_factor` and `contrast_factor` in the above two functions are? In the documentation this is not stated clearly. |
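For what it's worth, `adjust_contrast` rescales pixels around the per-channel mean, so the factor's effect can be checked numerically; a quick deterministic example (my own, not from the issue):

```python
import tensorflow as tf

# Sketch: adjust_contrast computes (x - mean) * contrast_factor + mean per
# channel. With pixels [1, 3] (mean 2) and factor 2.0, we expect [0, 4].
img = tf.constant([[[1.0], [3.0]]])   # shape (1, 2, 1): height, width, channels
out = tf.image.adjust_contrast(img, 2.0)
print(out.numpy().ravel())  # [0. 4.]
```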
tensorflowtensorflow | Wrong definition of arg fixed_length in api_docs/python/tf/raw_ops/DecodePaddedRaw | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source; to get involved, read the documentation contributor guide. URL(s) with the issue: (see title). Description of issue (what needs changing): wrong type description of `fixed_length`. Clear description: `fixed_length` should be a positive int, but the doc has the following description: "A Tensor of type int32. Length in bytes for each element of the decoded output. Must be a multiple of the size of the output type." The parameter definition should say that `fixed_length` is a positive int. Submit a pull request?: no. |
tensorflowtensorflow | tfds.load (MovieLens): failed to establish a new connection | Bug | I was executing the code on Colab using the CPU configuration. I executed this sequence of installs: `pip install tfds-nightly`, `pip install -q tensorflow-recommenders`, `pip install -q --upgrade tensorflow-datasets==4.3`, `pip install -q scann`. The code that causes the error:

```python
tfds.load('movielens/100k-ratings', split='train', shuffle_files=False)
```

This is the error message:

```
OSError                                   Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/urllib3/connection.py in _new_conn(self)
    158             conn = connection.create_connection(
    159                 (self._dns_host, self.port), self.timeout, **extra_kw)
    160
(38 frames)
OSError: [Errno 113] No route to host

During handling of the above exception, another exception occurred:
NewConnectionError: Failed to establish a new connection: [Errno 113] No route to host

During handling of the above exception, another exception occurred:
MaxRetryError: HTTPConnectionPool(host='files.grouplens.org', port=80): Max retries exceeded with url: /datasets/movielens/ml-100k.zip (Caused by NewConnectionError: Failed to establish a new connection: [Errno 113] No route to host)

During handling of the above exception, another exception occurred:
ConnectionError                           Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
    514                 raise SSLError(e, request=request)
    515
--> 516             raise ConnectionError(e, request=request)
    517
    518         except ClosedPoolError as e:
ConnectionError: HTTPConnectionPool(host='files.grouplens.org', port=80): Max retries exceeded with url: /datasets/movielens/ml-100k.zip (Caused by NewConnectionError: Failed to establish a new connection: [Errno 113] No route to host)
``` |
tensorflowtensorflow | tf.strings.unsorted_segment_join crashes unexpectedly when num_segments is negative | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 20.04. Mobile device: no. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.8.0. Python version: 3.7.12. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: given the following code snippet:

```python
import tensorflow as tf
try:
  tf.strings.unsorted_segment_join(inputs=['123'], segment_ids=[0], num_segments=-1)
except Exception:
  print('An exception should be thrown, but unsorted_segment_join crashes')
print('Not reached')
```

the call to `tf.strings.unsorted_segment_join` causes a crash. Describe the expected behavior: since `num_segments` is negative, an exception should be thrown — perhaps an InvalidArgumentError or ValueError; the code should not crash. Standalone code to reproduce the issue: the code snippet above should reproduce the issue. The following Colab notebook (running the notebook should crash the session) demonstrates the issue. |
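For contrast, a valid call with a non-negative `num_segments` (my own example, not from the issue) joins strings per segment id as documented:

```python
import tensorflow as tf

# Sketch: with a valid num_segments, strings are joined per segment id,
# preserving input order within each segment.
out = tf.strings.unsorted_segment_join(
    inputs=["a", "b", "c"], segment_ids=[1, 0, 1], num_segments=2)
print([s.decode() for s in out.numpy()])  # ['b', 'ac']
```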
tensorflowtensorflow | tf.experimental.numpy.array should have the same behavior as numpy.array, but currently crashes | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: n/a. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.8.0. Python version: 3.7.12. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: currently the following code snippet leads to a crash due to the incorrect value of `ndmin`:

```python
a = tf.constant(value=1)
b = tf.constant(value=1000)
tf.experimental.numpy.array(val=a, ndmin=b)
```

Describe the expected behavior: it should not crash; it should have the same behavior as `numpy.array` given the same input, in which `ndmin` is validated: `ValueError: ndmin bigger than allowable number of dimensions NPY_MAXDIMS (=32)`. Standalone code to reproduce the issue: the following Colab demonstrates the issue. |
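NumPy's own behavior, for comparison (plain NumPy, my own example): `ndmin` pads the shape with leading 1s, and an out-of-range `ndmin` is rejected with a ValueError rather than a crash:

```python
import numpy as np

# Sketch: numpy.array pads the shape with leading 1s up to ndmin, and rejects
# ndmin values beyond its dimension limit instead of crashing.
a = np.array(1, ndmin=3)
print(a.shape)  # (1, 1, 1)
try:
    np.array(1, ndmin=1000)
except ValueError as e:
    print("rejected:", e)
```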
tensorflowtensorflow | tf.nn.ctc_loss documentation misleading about GPU support, leading to 50x performance difference | Bug | URL(s) with the issue: (L790). Description of issue (what needs changing): the documentation for `tf.nn.ctc_loss` (L790) states: "On TPU and GPU: only dense padded labels are supported. On CPU: caller may use SparseTensor or dense padded labels but calling with a SparseTensor will be significantly faster." However, reading through the code of the ctc_loss implementation (L921-L957) shows that SparseTensor is supported on GPU, and in practice the SparseTensor code path uses cuDNN and is about 50x faster than the non-sparse code path. I have tested this on GPU but not on TPU, where I expect the documentation may be accurate, as the cuDNN code path doesn't apply to TPU. Submit a pull request?: would be happy to, after confirmation that this is indeed an issue. cc @kaixih @f90 |
tensorflow/tensorflow | TFX starter tutorial typo | Bug | Hello, there is a typo in the TFX starter tutorial ("Run the pipeline" section): "LocalDagRunner provides fast iteration for developemnt and debugging. TFX also supports other orchestrators including Kubeflow Pipelines and Apache Airflow which are suitable for production use cases." ("developemnt" should be "development".)
tensorflow/tensorflow | Event file summary metric type change breaks the example in summary_iterator.py | Bug | URL(s) with the issue: please provide a link to the documentation entry (for example, L45). Description of the issue (what needs changing):

for e in tf.compat.v1.train.summary_iterator(path_to_events_file):
    for v in e.summary.value:
        if v.tag == 'loss':
            print(v.simple_value)

Some version between TF 2.3 and TF 2.8 changed the behavior of logging metrics in TensorBoard event files: before, values were logged as simple_value in the proto; now they are logged as a tensor that requires additional parsing with tf.make_ndarray. As a result, this example in the documentation no longer works. I'm not sure whether the change to metric writing was intentional (I wasn't able to find release notes on it), but we should update the documentation here to a working example. TF 2.3 (gist): v.simple_value. TF 2.8 (gist): tf.make_ndarray(v.tensor).
tensorflow/tensorflow | tf.compat.v1.signal.rfft2d and rfft3d lack input validation, leading to crashes | Bug | System information: custom code: yes; TensorFlow installed from binary, version 2.8.0; Python 3.7.12; CUDA/cuDNN 11.2 (based on a Colab notebook); GPU model and memory: Tesla T4, 15109MiB (based on a Colab notebook). Describe the current behavior: the following code snippets lead to crashes when executed:

import numpy as np
import tensorflow as tf
a = np.empty([6, 0])
b = np.array([1, 1])
try:
    # On a different machine: Check failed: size >= 0 (-9223372036854775808 vs. 0)
    # Aborted (core dumped)
    tf.compat.v1.signal.rfft2d(input_tensor=a, fft_length=b)
except Exception:
    pass
print("Execution does not reach this line")

and

import numpy as np
import tensorflow as tf
a = np.empty([6, 1, 1])
b = np.array([1, 2, 0])
try:
    # On a different machine: Failed to initialize batched cufft plan with customized allocator:
    # Failed to make cuFFT batched plan. Aborted (core dumped)
    tf.compat.v1.signal.irfft3d(input_tensor=a, fft_length=b)
except Exception:
    pass
print("Execution does not reach this line")

In either case the inputs do not quite make sense and TensorFlow should throw. Describe the expected behavior: TensorFlow should throw an exception instead of crashing. Contributing a PR: no. Standalone code to reproduce the issue: here is a Colab notebook (it has to be run with GPU); the code snippets above should also reproduce the issue.
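NumPy's FFT routines show the kind of up-front validation being requested here: an invalid transform length is rejected with a ValueError before any native code runs (the exact wording of NumPy's message may vary by version):

```python
import numpy as np

# np.fft.rfft refuses a non-positive number of FFT points with a
# ValueError instead of crashing the process.
try:
    np.fft.rfft(np.ones(4), n=0)
    raised = False
except ValueError as err:
    raised = True
    print(err)
```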
tensorflow/tensorflow | Fix invalid example for tf.RaggedTensor abs | Bug | This PR tries to address the issue raised in 53828, where a tf.Tensor example was added to tf.RaggedTensor abs. It replaces the tensor example with a tf.RaggedTensor example. This PR fixes 53828.
tensorflow/tensorflow | Crash if using negative num_streams in QuantileOpsTest | Bug | System information: OS platform and distribution: Ubuntu 20.04; TensorFlow installed from binary, version 2.5.0; Python 3.7.11; CUDA/cuDNN: none. Describe the current behavior: QuantileOpsTest crashes if a negative parameter is used in create_quantile_stream_resource in TF 2.5.0. When running the same test case in TF 2.6.0, it does not crash and prints error info. Describe the expected behavior: the test case should fail and give a relevant error message. Contributing a PR: no. Standalone code to reproduce the issue: I run the following code (named bug3.py):

from tensorflow.python.framework import test_util
from tensorflow.python.ops import boosted_trees_ops
from tensorflow.python.ops import resources
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_quantile_stream_resource_handle_op as resource_handle_op
from tensorflow.python.ops.gen_boosted_trees_ops import is_boosted_trees_quantile_stream_resource_initialized as resource_initialized
from tensorflow.python.platform import googletest

@test_util.run_deprecated_v1
class QuantileOpsTest(test_util.TensorFlowTestCase):

  def setUp(self):
    self.eps = 0.01
    self.max_elements = 1 << 16

  def testBasicQuantileBucketsSingleResourcesAddFlushed(self):
    with self.cached_session():
      quantile_accumulator_handle = resource_handle_op(
          container="", shared_name="float_0", name="float_0")
      create_op = boosted_trees_ops.create_quantile_stream_resource(
          quantile_accumulator_handle,
          epsilon=self.eps,
          max_elements=self.max_elements,
          num_streams=-2)
      is_initialized_op = resource_initialized(quantile_accumulator_handle)
      resources.register_resource(quantile_accumulator_handle, create_op,
                                  is_initialized_op)
      resources.initialize_resources(resources.shared_resources()).run()

if __name__ == "__main__":
  googletest.main()

Other info / logs: here is the log in TF 2.5.0:

[ RUN ] QuantileOpsTest.testBasicQuantileBucketsSingleResourcesAddFlushed
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-03-14 05:21:01.193346: I tensorflow/core/platform/profile_utils/cpu_utils.cc:114] CPU Frequency: 2097635000 Hz
terminate called after throwing an instance of 'std::length_error'
  what():  vector::reserve
Fatal Python error: Aborted

Thread 0x00007efcd9c1b180 (most recent call first):
  File "/root/anaconda3/envs/tf2.5.0/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1453 in _call_tf_sessionrun
  File ".../tensorflow/python/client/session.py", line 1360 in _run_fn
  File ".../tensorflow/python/client/session.py", line 1375 in _do_call
  File ".../tensorflow/python/client/session.py", line 1369 in _do_run
  File ".../tensorflow/python/client/session.py", line 1191 in _run
  File ".../tensorflow/python/client/session.py", line 968 in run
  File ".../tensorflow/python/framework/test_util.py", line 1729 in run
  File ".../tensorflow/python/framework/ops.py", line 5578 in run_using_default_session
  File ".../tensorflow/python/framework/ops.py", line 2625 in run
  File "test_bug/bug3.py", line 24 in testBasicQuantileBucketsSingleResourcesAddFlushed
  File ".../tensorflow/python/framework/test_util.py", line 1345 in decorated
  [... unittest, absl and googletest runner frames omitted ...]
  File "test_bug/bug3.py", line 28 in <module>
Aborted (core dumped)
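A minimal sketch of the guard the issue asks for, with a hypothetical function name and message rather than the actual kernel code: reject a negative num_streams before it can be cast to a huge unsigned size and blow up inside vector::reserve.

```python
def validate_quantile_stream_args(epsilon, max_elements, num_streams):
    # Hypothetical up-front validation: fail fast with a clear error
    # instead of letting a negative count reach native code.
    if num_streams < 0:
        raise ValueError(f"num_streams must be non-negative, got {num_streams}")
    if epsilon <= 0:
        raise ValueError(f"epsilon must be positive, got {epsilon}")
    return (epsilon, max_elements, num_streams)

# The reproducer's arguments (epsilon=0.01, max_elements=1<<16,
# num_streams=-2) would then fail with a readable ValueError.
try:
    validate_quantile_stream_args(0.01, 1 << 16, -2)
    rejected = False
except ValueError:
    rejected = True
```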
tensorflow/tensorflow | Missing input validation on tf.ragged.constant | Bug | System information: custom code: yes; TensorFlow installed from binary, version 2.8.0; Python 3.7.12; CUDA/cuDNN and GPU: using a Colab notebook. Describe the current behavior: if I pass an empty list with a large ragged_rank to tf.ragged.constant, all RAM is consumed, causing the notebook to crash. The docs indicate that ragged_rank should be between 0 and the rank of pylist, so the large value of ragged_rank should be rejected. Describe the expected behavior: some input validation should be done and an exception thrown. Contributing a PR: no. Standalone code to reproduce the issue (the Colab notebook):

import tensorflow as tf
tf.ragged.constant(pylist=[], ragged_rank=8968073515812833920)
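A sketch of the kind of validation the docs imply; the helper names here are hypothetical, not TensorFlow's implementation. It checks ragged_rank against the nesting depth of pylist before anything is allocated:

```python
def nesting_depth(obj):
    # Number of list levels in a nested Python list; scalars contribute 0.
    if not isinstance(obj, list):
        return 0
    if not obj:
        return 1
    return 1 + max(nesting_depth(x) for x in obj)

def validate_ragged_rank(pylist, ragged_rank):
    depth = nesting_depth(pylist)
    if not 0 <= ragged_rank < max(depth, 1):
        raise ValueError(
            f"ragged_rank must be in [0, {depth}), got {ragged_rank}")
    return ragged_rank

# The reproducer's arguments would be rejected up front.
try:
    validate_ragged_rank([], 8968073515812833920)
    rejected = False
except ValueError:
    rejected = True
```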
tensorflow/tensorflow | Add executable code for tf.sparse.softmax | Bug | Changes the documentation of tf.sparse.softmax to use executable example code. The values passed to tf.sparse.SparseTensor must be a rank-1 tensor instead of a rank-3 tensor. Fixes 55035.
tensorflow/tensorflow | The documentation of tf.sparse.softmax has inexecutable example code | Bug | URL(s) with the issue / description of the issue (what needs changing): the documentation of tf.sparse.softmax has inexecutable example code:

import numpy as np
import tensorflow as tf

# First batch: [[?, e], [1, ?]]; second batch: [[e, ?], [e, e]]
shape = [2, 2, 2]  # 3-D SparseTensor
values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]])
indices = np.vstack(np.where(values)).astype(np.int64).T
result = tf.sparse.softmax(tf.sparse.SparseTensor(indices, values, shape))
# Intended to return a 3-D SparseTensor equivalent to
# [[[?, 1.], [1., ?]], [[1., ?], [.5, .5]]], where "?" means implicitly zero.

Output: ValueError: Shape (2, 2, 2) must have rank 1. Reason: the values passed to tf.sparse.SparseTensor must be a rank-1 tensor instead of a rank-3 tensor. Fix: the above code should be changed to:

import numpy as np
import tensorflow as tf

shape = [2, 2, 2]  # 3-D SparseTensor
values = np.asarray([[[0., np.e], [1., 0.]], [[np.e, 0.], [np.e, np.e]]])
indices = np.vstack(np.where(values)).astype(np.int64).T
values = values[np.where(values)]  # flatten to the rank-1 nonzero values
result = tf.sparse.softmax(tf.sparse.SparseTensor(indices, values, shape))
print(tf.sparse.to_dense(result))

Output (as expected):
tf.Tensor(
[[[0.  1. ]
  [1.  0. ]]
 [[1.  0. ]
  [0.5 0.5]]], shape=(2, 2, 2), dtype=float64)
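The expected output can be cross-checked without TensorFlow. The sketch below is plain NumPy, not TensorFlow code: it applies softmax over the last axis while treating zero entries as implicitly missing, reproducing the dense result in the docs example.

```python
import numpy as np

values = np.asarray([[[0., np.e], [1., 0.]],
                     [[np.e, 0.], [np.e, np.e]]])

def sparse_softmax_dense(v):
    # Softmax over the last axis, restricted to the nonzero entries;
    # implicit zeros take no part in the normalization and stay zero.
    mask = v != 0
    exp = np.where(mask, np.exp(v), 0.0)
    denom = exp.sum(axis=-1, keepdims=True)
    out = np.zeros_like(v)
    np.divide(exp, denom, out=out, where=denom != 0)
    return out

print(sparse_softmax_dense(values))
```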
tensorflow/tensorflow | TensorFlow API version hyperlink for 2.7 points to 2.8 | Bug | URL(s) with the issue / description of the issue (what needs changing): the link to version r2.7 points to the 2.8 API docs but should point to the 2.7 ones.
tensorflow/tensorflow | TF_CPP_MIN_VLOG_LEVEL and TF_CPP_MIN_LOG_LEVEL do not output log info in TF 2.8.0 / 2.9 | Bug | System information: custom code: yes; OS platform and distribution: Ubuntu 20; TensorFlow installed from source, versions v2.8.0-2-ge994fb9c3ad and 2.9 (master branch); Python 3.8.10; Bazel 4.2.1; GCC 9.3.0; CUDA/cuDNN: n/a; GPU: n/a. Describe the current behavior: setting TF_CPP_MIN_LOG_LEVEL and TF_CPP_MIN_VLOG_LEVEL does not work with the latest releases 2.8.0 and 2.9 (master branch), both built from source, neither by exporting an env var before launching the TF script nor by using the os module in the code (tried setting it before and after the tensorflow module import). For the source builds I followed the steps mentioned in the official guide, using the build command bazel build //tensorflow/tools/pip_package:build_pip_package; both builds were CPU-only, without CUDA or ROCm support. Describe the expected behavior: as discussed in 31870, setting TF_CPP_MIN_LOG_LEVEL=0 and/or TF_CPP_MIN_VLOG_LEVEL=2 should show log/debug information regarding the C++ implementation (inner operations, memory allocation, etc.). Standalone code to reproduce the issue:

import os
os.environ["TF_CPP_MIN_VLOG_LEVEL"] = "2"
import tensorflow as tf
os.environ["TF_CPP_MIN_VLOG_LEVEL"] = "2"
a = tf.Variable(tf.zeros(shape=[2]), name="a")
print(a)
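One pitfall worth ruling out (independent of the bug reported here) is that these variables are read by the native runtime, so they must already be in the environment of the process that loads TensorFlow. Below is a TF-free sketch of passing them to a fresh child process; the child's `import tensorflow` is replaced by a plain environment read, since this snippet only demonstrates the propagation:

```python
import os
import subprocess
import sys

# Build the child's environment with the logging variables already set.
env = dict(os.environ, TF_CPP_MIN_LOG_LEVEL="0", TF_CPP_MIN_VLOG_LEVEL="2")

# Stand-in for launching the TF script: the child reports what it sees.
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['TF_CPP_MIN_VLOG_LEVEL'])"],
    env=env, capture_output=True, text=True, check=True)
level = child.stdout.strip()
print(level)
```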
tensorflow/tensorflow | tf.raw_ops.RGBToHSV lacks support for bfloat16 | Bug | System information: custom code: yes; OS platform and distribution: Ubuntu 18.04; TensorFlow installed from binary, version 2.7.0; Python 3.8; Bazel, GCC, CUDA/cuDNN, GPU: n/a. Standalone code to reproduce the issue:

import tensorflow as tf
image = tf.random.uniform([1, 1, 3], dtype=tf.bfloat16)
tf.raw_ops.RGBToHSV(images=image)

throws the error: NotFoundError: Could not find device for node: {{node RGBToHSV}} = RGBToHSV[T=DT_BFLOAT16]. All kernels registered for op RGBToHSV: device='CPU'; T in [DT_DOUBLE]; device='CPU'; T in [DT_FLOAT]; device='GPU'; T in [DT_DOUBLE]; device='GPU'; T in [DT_FLOAT]; [Op:RGBToHSV]. Describe the current behavior: according to the documentation, tf.raw_ops.RGBToHSV should support half, bfloat16, float32 and float64.
tensorflow/tensorflow | tf.rank returns incorrect value in traced/compiled conditional when a SparseTensor is involved | Bug | System information: OS platform and distribution: Linux CentOS 7.9; TensorFlow installed from binary, version v2.7.0-rc1-69-gc256c071bb2 (2.7.0); Python 3.7.7; CUDA 11.2.152, cuDNN 8.1.1; GPU: GeForce RTX 2080 Ti, 11 GBytes. Describe the current behavior: tf.rank returns the wrong value at runtime when used for conditional branching in traced/compiled code and when one of the branches involves a SparseTensor. Specifically, tf.rank(v) returns 2 for a tensor v of shape (2, 10, 3) when called as part of a conditional test where one of the execution branches runs tf.sparse.sparse_dense_matmul(l, v), where l is a sparse matrix and v is a dense tensor. Describe the expected behavior: in this scenario I would expect tf.rank(v) to return 3 for a tensor v of shape (2, 10, 3). Standalone code to reproduce the issue:

import tensorflow as tf

@tf.function(input_signature=[
    tf.SparseTensorSpec(shape=[None, None], dtype=tf.float32),
    tf.TensorSpec(shape=None, dtype=tf.float32)])
def f(l, v):
  tf.print("runtime tf.rank(v):", tf.rank(v))
  tf.print("runtime tf.shape(v):", tf.shape(v))
  if tf.rank(v) == 2:
    return tf.sparse.sparse_dense_matmul(l, v)
  else:
    return v[0, 0, 0] * v

l = tf.sparse.from_dense(tf.eye(10, dtype=tf.float32))
f(l, tf.ones((2, 10, 3), dtype=tf.float32))

The code above prints:

runtime tf.rank(v): 2
runtime tf.shape(v): [2 10 3]

Because tf.rank returns the wrong value, the wrong branch is executed, resulting in the traceback below. Note that if l is a dense matrix, then tf.rank returns the correct result and the correct branch is executed. Here is a version of the code above with this modification:

import tensorflow as tf

@tf.function(input_signature=[
    tf.TensorSpec(shape=[None, None], dtype=tf.float32),
    tf.TensorSpec(shape=None, dtype=tf.float32)])
def f(l, v):
  tf.print("runtime tf.rank(v):", tf.rank(v))
  tf.print("runtime tf.shape(v):", tf.shape(v))
  if tf.rank(v) == 2:
    return tf.matmul(l, v)
  else:
    return v[0, 0, 0] * v

l = tf.eye(10, dtype=tf.float32)
f(l, tf.ones((2, 10, 3), dtype=tf.float32))

Other info / logs: traceback obtained when running the first code example above:

runtime tf.rank(v): 2
runtime tf.shape(v): [2 10 3]
Traceback (most recent call last):
  File "/tmp/tf_cond_v3.py", line 16, in <module>
    f(l, tf.ones((2, 10, 3), dtype=tf.float32))
  File ".../site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File ".../site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Tensor 'b' must be a matrix
  [[{{node cond/SparseTensorDenseMatMul/SparseTensorDenseMatMul}}]] [Op:__inference_f_61]
Errors may have originated from an input operation; the operation is defined at /tmp/tf_cond_v3.py, line 11, in f: return tf.sparse.sparse_dense_matmul(l, v). Function call stack: f -> cond_true_30
tensorflow/tensorflow | Square root symbol (sqrt) not displayed correctly | Bug | URL(s) with the issue / description of the issue (what needs changing): the square root symbol is not displayed in the rendered HTML documentation, although the source code includes the appropriate markup (\sqrt{x}); one would expect a square-root symbol around the x in the image (L5). The missing square-root symbol is confusing for someone reading the documentation. Fix: one could change every occurrence of the \sqrt symbol to x^(1/2); however, this is not a good solution and we should rather fix the display issue itself.
tensorflow/tensorflow | drive | Bug | (empty issue: body is the unfilled bug-report template)
tensorflow/tensorflow | Please remove invalid hyperlink | Bug | URL(s) with the issue / description of the issue (what needs changing): there is a link, but the domain is for sale. At the very end: "Check out tf2up.ml for a convenient tool to upgrade Jupyter notebooks and Python files in a GitHub repository." This link is not working anymore.
tensorflow/tensorflow | Links to some of the guides are broken | Bug | Links to the following guides are broken: Basics, Autodiff, Basic training loops, Keras Functional, Keras train and evaluate, Keras RNN, Keras transfer learning, Advanced autodiff, TF NumPy, Data, Data performance, and SavedModel. They respond with .ipynb files. Please look into it.
tensorflow/tensorflow | SavedModel guide shows raw Jupyter code instead of HTML | Bug | URL(s) with the issue / description of the issue (what needs changing): the above URL shows raw Jupyter notebook code instead of HTML, unlike other pages in the documentation.
tensorflow/tensorflow | libtensorflowlite.so build fails for aarch64 with flex delegate included in dependencies | Bug | System information: custom code: no; OS platform and distribution: Linux 0be7898f8b6f 5.4.144 (from Google Colab); mobile device: Raspberry Pi (aarch64 target architecture); TensorFlow installed from source, version 2.7.0; Python 3.7.12; Bazel 3.7.2. Describe the current behavior: I'm trying to build a shared library for TensorFlow Lite including the flex delegate by modifying the BUILD to add the following dependency: //tensorflow/lite/delegates/flex:delegate. Since I'm targeting a Linux system running on my aarch64 board, the command I run is:

bazel build --config=monolithic --config=elinux_aarch64 --define=with_select_tf_ops=true -c opt //tensorflow/lite:libtensorflowlite.so

Describe the expected behavior: a build that completes successfully. Standalone code to reproduce the issue: I provide a gist from Google Colab to make you able to reproduce the issue. Other info / logs:

ERROR: /root/.cache/bazel/_bazel_root/889612a75a81b3d8b4ed860522ba4e34/external/com_github_grpc_grpc/BUILD:1897:16: C++ compilation of rule '@com_github_grpc_grpc//:grpc_transport_chttp2_server_secure' failed (Exit 1): aarch64-linux-gnu-gcc failed: error executing command /root/.cache/bazel/_bazel_root/889612a75a81b3d8b4ed860522ba4e34/external/aarch64_linux_toolchain/bin/aarch64-linux-gnu-gcc -fstack-protector -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections ... (remaining 67 arguments skipped)
In file included from /usr/include/openssl/evp.h:16, from /usr/include/openssl/x509.h:18, from external/com_github_grpc_grpc/src/core/tsi/ssl_transport_security.h:28, ..., from external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/secure/server_secure_chttp2.cc:35:
/usr/include/openssl/bio.h:690:1: error: expected constructor, destructor, or type conversion before 'DEPRECATEDIN_1_1_0'
 DEPRECATEDIN_1_1_0(int BIO_get_port(const char *str, unsigned short *port_ptr))
/usr/include/openssl/bn.h:183:43: error: 'BN_ULONG' does not name a type; did you mean 'SHA_LONG'?
 int BN_abs_is_word(const BIGNUM *a, const BN_ULONG w);
[... many similar errors follow in the system OpenSSL headers, for BN_ULONG, BN_consttime_swap, ASN1_STRING_get0_data, EC_GROUP_get_curve_GFp, EC_POINT_get_affine_coordinates_GFp, EC_POINT_point2oct, RSA_generate_key_ex, DH_generate_parameters_ex, DSA_sign, DSA_generate_parameters_ex and X509_CRL_get_nextUpdate ...]
Target //tensorflow/lite:libtensorflowlite.so failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 916.628s, Critical Path: 16.49s
INFO: 2033 processes: 1075 internal, 958 local.
FAILED: Build did NOT complete successfully

Thank you in advance for your support.
tensorflowtensorflow | cannot open pages | Bug | The website's download pages do not open.
tensorflowtensorflow | error message of tf.nn.gelu with uint16 input is misleading | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Ubuntu 18.04; mobile device: no; TensorFlow installed from (source or binary): binary; TensorFlow version: 2.7.0; Python version: 3.8; Bazel version: n/a; GCC/compiler version: n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a. Standalone code to reproduce the issue: import tensorflow as tf; features = tf.zeros([3, 4], dtype=tf.uint16); tf.nn.gelu(features) throws a TypeError: "Cannot convert 0.5 to EagerTensor of dtype uint16". Describe the current behavior: the current message is misleading, as it looks like some internal computation error. If tf.nn.gelu does not accept uint16 input, the message should be a standard one like "Value for attr 'T' of uint16 is not in the list of allowed values", similar to tf.nn.crelu: import tensorflow as tf; features = tf.zeros([3, 4], dtype=tf.uint16); tf.nn.crelu(features) raises InvalidArgumentError: "Value for attr 'T' of uint16 is not in the list of allowed values: bfloat16, half, float, double, int8, int16, int32, int64, complex64, complex128; NodeDef: {{node Neg}}; Op<name=Neg; signature=x:T -> y:T; attr=T:type, allowed=[DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_INT8, DT_INT16, DT_INT32, DT_INT64, DT_COMPLEX64, DT_COMPLEX128]> [Op:Neg]". Describe the expected behavior: tf.nn.gelu should produce a better error message in this case.
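The dtype complaint above comes directly from GELU's definition: the formula multiplies by the constant 0.5 and evaluates erf, neither of which makes sense on an unsigned-integer tensor. A minimal pure-Python sketch of the math (not the TensorFlow kernel):

```python
import math

def gelu(x: float) -> float:
    # GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
    # The 0.5 constant here is exactly what triggers
    # "Cannot convert 0.5 to EagerTensor of dtype uint16".
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

print(gelu(0.0))  # 0.0
```

For large positive inputs GELU approaches the identity, and for large negative inputs it approaches zero, which only makes sense with floating-point arithmetic.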
tensorflowtensorflow | InternalError: Blas xGEMV launch failed on TensorFlow v2.8.0 for the same block of code that runs perfectly well on TensorFlow v2.4.1 | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Ubuntu 20.04; mobile device: unknown; TensorFlow installed from (source or binary): binary; TensorFlow version: v2.8.0-rc1-32-g3f878cff5b6 (2.8.0); Python version: 3.8; Bazel version: n/a; GCC/compiler version: n/a; CUDA/cuDNN version: CUDA 11.2.1; GPU model and memory: Tesla T4, 16 GB. Describe the current behavior: running a block of code with TensorFlow v2.8.0 (CUDA 11.2, cuDNN 8.1) returns "InternalError: Blas xGEMV launch failed", while it runs perfectly well with TensorFlow v2.4.1 (CUDA 11.0, cuDNN 8.0). Describe the expected behavior: return the same output as TensorFlow v2.4.1 (CUDA 11.0, cuDNN 8.0). Contributing: do you want to contribute a PR? No. Standalone code to reproduce the issue (works perfectly well with TensorFlow v2.4.1, CUDA 11.0, cuDNN 8.0, but not with TensorFlow v2.8.0, CUDA 11.2, cuDNN 8.1): import tensorflow as tf; empty_image = tf.zeros(shape=(1280, 1280, 3), dtype=tf.float32); gray_image = tf.image.rgb_to_grayscale(empty_image). An important point to note is that when I reduce the shape of empty_image to (512, 512, 3), there is no issue; however, I believe this is not a device memory issue, as I can reproduce it with a GeForce RTX 2080 Ti (11 GB) as well as a Tesla T4 (16 GB). Other info / logs: Traceback (most recent call last): File "", line 1, in <module>; File /home/ubuntu/miniconda3/envs/docrec/lib/python3.8/site-packages/tensorflow/python/util/traceback_utils.py, line 153, in error_handler: raise e.with_traceback(filtered_tb) from None; File /home/ubuntu/miniconda3/envs/docrec/lib/python3.8/site-packages/tensorflow/python/framework/ops.py, line 7186, in raise_from_not_ok_status: raise core._status_to_exception(e) from None # pylint: disable=protected-access; tensorflow.python.framework.errors_impl.InternalError: Blas xGEMV launch failed: a.shape=[1,1638400,3], b.shape=[1,3,1], m=1638400, n=1, k=3 [Op:MatMul]
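The math behind the failing op is simple: RGB-to-grayscale is a per-pixel weighted sum of the three channels (TensorFlow documents the ITU-R BT.601 weights 0.2989/0.5870/0.1140), and the error's shapes (a.shape=[1,1638400,3], b.shape=[1,3,1]) show the 1280*1280 = 1,638,400 pixels being multiplied by a 3x1 weight vector, which is the matvec that fails to launch. A pure-Python sketch of the conversion itself (not the TF kernel):

```python
# BT.601 luma weights used for RGB -> grayscale conversion.
WEIGHTS = (0.2989, 0.5870, 0.1140)

def rgb_to_gray(pixel):
    """Convert one (r, g, b) pixel to a single luma value."""
    r, g, b = pixel
    return WEIGHTS[0] * r + WEIGHTS[1] * g + WEIGHTS[2] * b

image = [[(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]]  # a 1x2 RGB image
gray = [[rgb_to_gray(p) for p in row] for row in image]
```

Since the math is trivial, the regression is in the BLAS call dispatch, not the conversion itself.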
tensorflowtensorflow | StringLookup fails as first layer in a Sequential model in TF 2.8.0 | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Google Colab; mobile device: n/a; TensorFlow installed from (source or binary): binary (preinstalled on Google Colab); TensorFlow version: v2.8.0-0-g3f878cff5b6 (2.8.0); Python version: 3.7.12; Bazel version: n/a; GCC/compiler version: n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a. Describe the current behavior: using a tf.keras.layers.StringLookup layer as the first layer in a Sequential model raises an exception when calling the model: UnimplementedError: Exception encountered when calling layer "sequential" (type Sequential): Cast string to int64 is not supported [Op:Cast] (see full stack trace below). Standalone code to reproduce the issue: import tensorflow as tf; cats = ["paris", "singapore", "auckland"]; str_lookup_layer = tf.keras.layers.StringLookup(); str_lookup_layer.adapt(cats); lookup_and_embed = tf.keras.Sequential([str_lookup_layer, tf.keras.layers.Embedding(input_dim=str_lookup_layer.vocabulary_size(), output_dim=2)]); lookup_and_embed(tf.constant([["paris"], ["singapore"], ["auckland"]])) raises the error. This code is available in this gist. Describe the expected behavior: this should work like it does in TensorFlow 2.7.1; I believe it's a regression. It should output the embedded vectors. Other info / logs: full stack trace: UnimplementedError Traceback (most recent call last); in <module>: 7 output_dim=2); 8; 9 lookup_and_embed(tf.constant([["paris"], ["singapore"], ["auckland"]])); 1 frames; /usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs): 65 except Exception as e: # pylint: disable=broad-except; 66 filtered_tb = _process_traceback_frames(e.__traceback__); 67 raise e.with_traceback(filtered_tb) from None; 68 finally: 69 del filtered_tb; /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name): 7184 def raise_from_not_ok_status(e, name): 7185 e.message += (" name: " + name if name is not None else ""); 7186 raise core._status_to_exception(e) from None # pylint: disable=protected-access; 7187; 7188; UnimplementedError: Exception encountered when calling layer "sequential" (type Sequential): Cast string to int64 is not supported [Op:Cast]. Call arguments received: inputs=tf.Tensor(shape=(3, 1), dtype=string); training=None; mask=None
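What the two layers compute together is easy to state: map each string to a vocabulary index (with index 0 reserved for out-of-vocabulary tokens, mirroring StringLookup's default of one OOV bucket), then index into an embedding table. A pure-Python sketch of that pipeline, with illustrative vocabulary and embedding values (not the Keras implementation):

```python
# Vocabulary with index 0 reserved for out-of-vocabulary (OOV) tokens.
cats = ["paris", "singapore", "auckland"]
vocab = {word: i + 1 for i, word in enumerate(cats)}

def lookup(word):
    return vocab.get(word, 0)  # unknown strings map to the OOV index

# A toy "embedding table": one 2-d vector per vocabulary index.
embeddings = [[0.0, 0.0], [0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]

def lookup_and_embed(word):
    return embeddings[lookup(word)]
```

The regression reported above is that the Sequential wrapper tries to cast the string input to int64 before the lookup layer ever sees it, rather than letting StringLookup do the string-to-index step first.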
tensorflowtensorflow | module structure not recognized by Visual Studio Code linter | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: Windows 11, 10.0.22000; TensorFlow installed from: pip; TensorFlow version: 2.8.0; Python version: 3.10.2. I am very perplexed by TensorFlow's module structure. I recently started using TensorFlow in Visual Studio Code and immediately ran into a problem where imports from tensorflow.keras cannot be resolved by Pylance. For example, the layers module is not recognized in the line "from tensorflow.keras import layers". Interestingly enough, the code runs fine despite this error, but the lack of support from the linter makes writing code very difficult. For reference, I was using the Probabilistic Bayesian Neural Network example script. My attempts to fix this issue led to a number of other discoveries which still have me confused: using "from tensorflow import keras" is recognized by the linter, but the linter still can't offer any helpful predictions off of the keras module in this instance; replacing references to layers with keras.layers still works, despite no indication from the linter that it should. Importing keras directly (2.8.0) instead of through tensorflow is recognized, and keras.layers is still valid, but hints from the linter are still not present, and other parts of the code break unexpectedly; for example, a call to keras.optimizers.RMSprop is invalid even though keras.optimizers is recognized (and both are recognized if keras is imported through "from tensorflow import keras"). Importing keras through tensorflow.python behaves similarly to directly importing keras, even though keras.layers is not recognized by the linter. After importing just keras, the linter does recognize "from keras import layers" and offers completions for layers after the fact. "from tensorflow import keras" was recognized in TensorFlow 2.7.0; this issue only started when I updated. Admittedly, my understanding of packaging and linters in Python is somewhat limited, but I've never had issues like this with any other package I've worked with. Am I missing something obvious here? Is this a known issue? If this kind of structure is intentional, what is the rationale behind it? Are there known workarounds, or resources that I could use to better understand this issue? Thanks in advance for the help.
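Part of why static analyzers struggle here is that TensorFlow assembles much of its public namespace dynamically at import time via lazy loading, so attributes like tensorflow.keras are not plain attributes a source-only analyzer can see. A minimal sketch of the general lazy-import pattern (illustrative only, not TensorFlow's actual loader; demonstrated with the stdlib json module):

```python
import importlib

class LazyModule:
    """Defer importing a module until an attribute is first accessed.

    A static analyzer reading only the source cannot tell what
    attributes this object will expose at runtime, which is one way
    linter completions break even though the code runs fine.
    """
    def __init__(self, name):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

json = LazyModule("json")          # nothing imported yet
data = json.loads('{"a": 1}')      # the real import happens here
```

The trade-off is faster startup and optional dependencies at the cost of static analyzability, which is consistent with the symptoms described above.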
tensorflowtensorflow | tf.histogram_fixed_width_bins lacks a check for nbins | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Ubuntu 18.04; mobile device: no; TensorFlow installed from (source or binary): binary; TensorFlow version: 2.7.0; Python version: 3.8; Bazel version: n/a; GCC/compiler version: n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a. Standalone code to reproduce the issue: import tensorflow as tf; nbins = -16; value_range = [0.0, 5.0]; new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15.0]; indices = tf.histogram_fixed_width_bins(new_values, value_range, nbins=nbins); indices.numpy() outputs array([0, 0, 0, 0, 0, 0], dtype=int32). Describe the current behavior: tf.histogram_fixed_width_bins has an argument nbins which should be a positive integer; however, it does not perform any validity check and accepts a negative value like -16. tf.histogram_fixed_width, another API with similar functionality, detects this error and raises an InvalidArgumentError: import tensorflow as tf; nbins = -16; value_range = [0.0, 5.0]; new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15.0]; indices = tf.histogram_fixed_width(new_values, value_range, nbins=nbins); indices.numpy() raises InvalidArgumentError: nbins should be a positive number, but got -16 [Op:HistogramFixedWidth]. Describe the expected behavior: tf.histogram_fixed_width_bins should have better input checking.
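The requested validation amounts to one check before the bin indices are computed. A pure-Python sketch of fixed-width binning with that check (histogram_bins here is a hypothetical helper, not the TF kernel):

```python
def histogram_bins(values, value_range, nbins):
    """Map each value to a fixed-width bin index in [0, nbins)."""
    if nbins <= 0:
        raise ValueError(f"nbins should be a positive number, but got {nbins}")
    lo, hi = value_range
    width = (hi - lo) / nbins
    out = []
    for v in values:
        idx = int((v - lo) / width)
        out.append(min(max(idx, 0), nbins - 1))  # clamp out-of-range values
    return out
```

With nbins=5 and range [0.0, 5.0], the sample values above land in bins [0, 0, 1, 2, 4, 4], matching the documented behavior of the positive-nbins case.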
tensorflowtensorflow | tf.compat.as_text does not check the encoding string | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Ubuntu 18.04; mobile device: no; TensorFlow installed from (source or binary): binary; TensorFlow version: 2.7.0; Python version: 3.8; Bazel version: n/a; GCC/compiler version: n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a. Standalone code to reproduce the issue: import tensorflow as tf; bytes_or_text = "hello"; encoding = "valid"; t1 = tf.compat.as_text(bytes_or_text, encoding); print(t1) outputs "hello"; t2 = tf.compat.as_bytes(bytes_or_text, encoding) raises LookupError: unknown encoding: valid. Describe the current behavior: "valid" is not a valid value for encoding, as we can see from the fact that tf.compat.as_bytes throws a LookupError; however, tf.compat.as_text does not perform any validity check: it accepts it and even gives an output. Describe the expected behavior: tf.compat.as_text should check the validity of encoding.
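The standard library's codecs registry can validate an encoding name up front, even on code paths that never actually decode. A sketch of the requested behavior (as_text and as_bytes here are hypothetical stand-ins, not the tf.compat implementations):

```python
import codecs

def as_text(bytes_or_text, encoding="utf-8"):
    """Convert bytes or str to str, validating the encoding name first."""
    codecs.lookup(encoding)  # raises LookupError for names like "valid"
    if isinstance(bytes_or_text, bytes):
        return bytes_or_text.decode(encoding)
    return bytes_or_text  # already text; without the check, bad names slip by

def as_bytes(bytes_or_text, encoding="utf-8"):
    """Convert bytes or str to bytes."""
    if isinstance(bytes_or_text, str):
        return bytes_or_text.encode(encoding)  # encode() checks the name itself
    return bytes_or_text
```

The asymmetry in the report falls out naturally: encoding a str forces a codec lookup, while returning an already-text input untouched skips it unless the function checks explicitly.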
tensorflowtensorflow | tf.boolean_mask lacks a check for the bool argument | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Ubuntu 18.04; mobile device: no; TensorFlow installed from (source or binary): binary; TensorFlow version: 2.7.0; Python version: 3.8; Bazel version: n/a; GCC/compiler version: n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a. Standalone code to reproduce the issue: import tensorflow as tf; tensor = [0, 1, 2, 3]; mask = tf.random.uniform([4], dtype=tf.float64); tf.boolean_mask(tensor, mask) produces an output. Describe the current behavior: tf.boolean_mask has an argument mask which should be a bool tensor; however, it does not perform any validity check and accepts a float64 value. Describe the expected behavior: tf.boolean_mask should check the dtype of the input tensor mask. For example, tf.math.reduce_any checks its first argument and throws an InvalidArgumentError for non-boolean input: import tensorflow as tf; input_tensor = tf.random.uniform([4], dtype=tf.float64); tf.math.reduce_any(input_tensor) raises InvalidArgumentError: cannot compute Any as input #0 (zero-based) was expected to be a bool tensor but is a double tensor [Op:Any].
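The requested check is a simple dtype test before the mask is applied. A pure-Python sketch of boolean masking with that validation (boolean_mask here is a hypothetical helper, not the TF op):

```python
def boolean_mask(tensor, mask):
    """Keep tensor[i] wherever mask[i] is True, rejecting non-bool masks."""
    if any(not isinstance(m, bool) for m in mask):
        raise TypeError("mask must be a bool tensor")
    if len(tensor) != len(mask):
        raise ValueError("tensor and mask must have the same length")
    return [t for t, m in zip(tensor, mask) if m]
```

Without the check, float mask entries would be silently interpreted by truthiness, which is exactly the kind of ambiguity the report complains about.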
tensorflowtensorflow | tf.math.atan lacks support for complex64 | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Ubuntu 18.04; mobile device: no; TensorFlow installed from (source or binary): binary; TensorFlow version: 2.7.0; Python version: 3.8; Bazel version: n/a; GCC/compiler version: n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a. Standalone code to reproduce the issue: import tensorflow as tf; x = tf.complex(tf.random.uniform([8, 8], dtype=tf.float32), tf.random.uniform([8, 8], dtype=tf.float32)); print(x.dtype); tf.math.atan(x). Describe the current behavior: tf.math.atan cannot accept a tensor of type complex64; however, according to the documentation, it should support complex64 and complex128. For the above code snippet, the error message is: NotFoundError: Could not find device for node: {{node Atan}} = Atan[T=DT_COMPLEX64]. All kernels registered for op Atan: device='GPU', T in [DT_DOUBLE]; device='GPU', T in [DT_FLOAT]; device='GPU', T in [DT_HALF]; device='GPU', T in [DT_BFLOAT16]; device='CPU', T in [DT_DOUBLE]; device='CPU', T in [DT_FLOAT]; device='CPU', T in [DT_BFLOAT16]; device='CPU', T in [DT_HALF] [Op:Atan].
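The complex arctangent is perfectly well defined, so the gap above is a missing kernel registration rather than a mathematical limitation. The standard library's cmath computes it directly, and two quick sanity checks confirm the expected behavior (tan inverts atan, and real-axis inputs reduce to the real arctangent):

```python
import cmath
import math

# Complex arctangent via the standard library.
z = 0.3 + 0.4j
w = cmath.atan(z)

# On the real axis, the complex atan must agree with math.atan.
r = cmath.atan(1.0 + 0.0j)
```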
tensorflowtensorflow | typo in | Bug | layer with a on top |
tensorflowtensorflow | some tests misuse assertTrue for comparisons | Bug | assertTrue is not for comparing arguments; assertEqual should be used for that. The developers' intent in these tests is to compare argument 1 with argument 2, which is not what happens: really, the tests pass because the first argument is truthy. The correct method to use is assertEqual. More details: L85, L383, L1182, L1007, L2549. I found this issue automatically; see other issues here.
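The pitfall is that assertTrue's second positional argument is the failure message, not a comparison operand, so assertTrue(a, b) passes whenever a is truthy regardless of b. A minimal self-contained demonstration:

```python
import unittest

class Demo(unittest.TestCase):
    def test_misuse(self):
        # Intended as a comparison, but 2 is only used as the failure
        # message: this passes even though 1 != 2.
        self.assertTrue(1, 2)

    def test_correct(self):
        # assertEqual is the right method for comparisons.
        self.assertEqual(1, 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Demo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Both tests pass, including the misused one, which is precisely why this class of bug slips through CI unnoticed.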
tensorflowtensorflow | ValueError: No gradients provided for any variable | Bug | I am using TensorFlow 2.6.0 and Python 3.9. I am attempting to implement a variational autoencoder toy example using the MNIST dataset, with convolutional neural networks as encoder and decoder. You can refer to the complete Jupyter notebook here. For some reason, on using GradientTape for training (in this case, computing the gradients with respect to the trainable parameters of the defined model), it keeps giving the "ValueError: No gradients provided for any variable" error message. The exact lines of code are: with tf.GradientTape() as tape: total_loss = compute_total_loss(data=x, reconstruction=x_recon, mu=mu, log_var=log_var, alpha=1); grads = tape.gradient(total_loss, model.trainable_weights); type(grads) is list, len(grads) is 16. No gradients are computed: for x in grads: print(x) prints None sixteen times. Then optimizer.apply_gradients(zip(grads, model.trainable_weights)) raises: ValueError Traceback (most recent call last); AppData/Local/Temp/ipykernel_232/111942921.py in <module>: 1 optimizer.apply_gradients(zip(grads, model.trainable_weights)); anaconda3/envs/tf_cpu/lib/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py in apply_gradients(self, grads_and_vars, name, experimental_aggregate_gradients): 639 RuntimeError: if called in a cross-replica context; 640; 641 grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars); 642 var_list = [v for (_, v) in grads_and_vars]; 643; anaconda3/envs/tf_cpu/lib/site-packages/tensorflow/python/keras/optimizer_v2/utils.py in filter_empty_gradients(grads_and_vars): 73; 74 if not filtered: 75 raise ValueError("No gradients provided for any variable: %s" % 76 ([v.name for _, v in grads_and_vars],)); 77 if vars_with_empty_grads; ValueError: No gradients provided for any variable: ['vae_1/encoder_4/conv2d_8/kernel:0', 'vae_1/encoder_4/conv2d_8/bias:0', 'vae_1/encoder_4/conv2d_9/kernel:0', 'vae_1/encoder_4/conv2d_9/bias:0', 'vae_1/decoder_3/dense_9/kernel:0', 'vae_1/decoder_3/dense_9/bias:0', 'vae_1/decoder_3/conv2d_transpose_9/kernel:0', 'vae_1/decoder_3/conv2d_transpose_9/bias:0', 'vae_1/decoder_3/conv2d_transpose_10/kernel:0', 'vae_1/decoder_3/conv2d_transpose_10/bias:0', 'vae_1/decoder_3/conv2d_transpose_11/kernel:0', 'vae_1/decoder_3/conv2d_transpose_11/bias:0', 'vae_1/dense_10/kernel:0', 'vae_1/dense_10/bias:0', 'vae_1/dense_11/kernel:0', 'vae_1/dense_11/bias:0']. Is this a bug? Is it the case that the tf.GradientTape API is somehow not computing the gradients?
tensorflowtensorflow | tf.math.asin lacks support for complex | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Ubuntu 18.04; mobile device: no; TensorFlow installed from (source or binary): binary; TensorFlow version: 2.7.0; Python version: 3.8; Bazel version: n/a; GCC/compiler version: n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a. Standalone code to reproduce the issue: import tensorflow as tf; x = tf.complex(tf.random.uniform([4], dtype=tf.float64), tf.random.uniform([4], dtype=tf.float64)); print(tf.math.asin(x)) raises: Could not find device for node: {{node Asin}} = Asin[T=DT_COMPLEX128]. Expected output: according to the documentation of tf.math.asin, it should be able to accept a complex input.
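As with atan above, the complex arcsine itself is well defined; the standard library provides a reference point for what a complex Asin kernel would compute, and sin inverts asin on the principal branch:

```python
import cmath
import math

# Complex arcsine via the standard library.
z = 0.3 + 0.4j
w = cmath.asin(z)

# A real-axis input should reduce to the real arcsine.
r = cmath.asin(0.5 + 0.0j)
```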
tensorflowtensorflow | train_step method of custom model is not called in graph execution, i.e. non-eager mode | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, but I modified only a minor portion of the stock example; OS platform and distribution: Arch Linux; mobile device: n/a; TensorFlow installed from: via pip install tensorflow (tensorflow-gpu, keras), probably from binary; TensorFlow version: v2.8.0-rc1-32-g3f878cff5b6 (2.8.0); Python version: 3.9.7; Bazel version: n/a; GCC/compiler version: n/a; CUDA/cuDNN version: CUDA 11.5; GPU model and memory: NVIDIA GeForce GTX 1060, 6 GB. Describe the current behavior: a custom model's train_step is not used in non-eager execution mode. Describe the expected behavior: a custom model's train_step is used regardless of whether eager execution is enabled or not. Contributing: do you want to contribute a PR? No. Standalone code to reproduce the issue: import keras; import numpy as np; import tensorflow as tf; class CustomModel(keras.Model): def train_step(self, data): return {m.name: m.result() for m in self.metrics}; if __name__ == "__main__": # config: tf.compat.v1.enable_eager_execution() / tf.compat.v1.disable_eager_execution(); print("TensorFlow version: {}".format(tf.__version__)); print("Eager execution: {}".format(tf.executing_eagerly())); # construct and compile an instance of CustomModel: inputs = keras.Input(shape=(32,)); outputs = keras.layers.Dense(1)(inputs); model = CustomModel(inputs, outputs); model.compile(optimizer="adam", loss="mse", metrics=["mae"]); # just use fit as usual: x = np.random.random((1000, 32)); y = np.random.random((1000, 1)); print(model.evaluate(x, y)); model.fit(x, y, epochs=3); print(model.evaluate(x, y)). Other info / logs: when eager execution is enabled, train_step gets called, which means the model isn't trained, as expected (the override does no training): 32/32 - 0s 1ms/step - loss: 0.3040 - mae: 0.4428; [0.3039644658565521, 0.442813515663147]; Epoch 1/3: 32/32 - 0s 874us/step - loss: 0.0000e+00 - mae: 0.0000e+00; Epoch 2/3: 32/32 - 0s 810us/step - loss: 0.0000e+00 - mae: 0.0000e+00; Epoch 3/3: 32/32 - 0s 762us/step - loss: 0.0000e+00 - mae: 0.0000e+00; 32/32 - 0s 1ms/step - loss: 0.3040 - mae: 0.4428; [0.3039644658565521, 0.442813515663147]. When eager execution is disabled, train_step is ignored and the model is trained normally; this is not expected: [0.26861959040164946, 0.41127136]; Train on 1000 samples; Epoch 1/3: 1000/1000 - 0s 71us/sample - loss: 0.2598 - mae: 0.4037; Epoch 2/3: 1000/1000 - 0s 40us/sample - loss: 0.2432 - mae: 0.3912; Epoch 3/3: 1000/1000 - 0s 37us/sample - loss: 0.2296 - mae: 0.3805; [0.2220638926625252, 0.3743403]. Related issues: #45922, #40880. The snippet is modified from the stock example here, "A first simple example".
tensorflowtensorflow | autograph could not transform | Bug | info tensorflow train at 0x2976dd700 be not cache for subkey conversionoption info tensorflow source code of train at 0x2976dd700 tf function def train batch s batch start s end s a batch start a end a s batch start s end s r batch start r end r do batch start do end do noise tf random normal args batch size env action dim a log actor s noise y r args gamma 1 do tf minimum critic 1 target s a critic 2 target s a args alpha log with tf gradienttape as tape msbe 1 1 args batch size tf reduce sum critic 1 s a y 2 msbe 1 grad tape gradient msbe 1 critic 1 trainable weight critic 1 optimizer apply gradient zip msbe 1 grad critic 1 trainable weight with tf gradienttape as tape msbe 2 1 args batch size tf reduce sum critic 2 s a y 2 msbe 2 grad tape gradient msbe 2 critic 2 trainable weight critic 2 optimizer apply gradient zip msbe 2 grad critic 2 trainable weight noise tf random normal args batch size env action dim with tf gradienttape as tape log tf tensor a log actor s noise expect reward 1 args batch size tf reduce sum critic 1 s a args alpha log neg expect reward expect reward expect reward grad tape gradient neg expect reward actor trainable weight actor optimizer apply gradient zip expect reward grad actor trainable weight polyak average critic 1 target variable critic 1 variable polyak average critic 2 target variable critic 2 variable if args model name msbe 1 log msbe 1 msbe 2 log msbe 2 expect reward log expect reward info tensorflow error transforming entity train at 0x2976dd700 traceback most recent call last file opt homebrew lib python3 9 site package tensorflow python autograph impl api py line 433 in convert call convert f convert actual target entity program ctx file opt homebrew lib python3 9 site package tensorflow python autograph impl api py line 275 in convert actual transform module source map transpiler transform entity program ctx file opt homebrew lib python3 9 site package 
tensorflow python autograph pyct transpiler py line 286 in transform return self transform function obj user context file opt homebrew lib python3 9 site package tensorflow python autograph pyct transpiler py line 470 in transform function nodes ctx super pytopy self transform function fn user context file opt homebrew lib python3 9 site package tensorflow python autograph pyct transpiler py line 363 in transform function result self transform ast node context file opt homebrew lib python3 9 site package tensorflow python autograph impl api py line 243 in transform ast node self initial analysis node ctx file opt homebrew lib python3 9 site package tensorflow python autograph impl api py line 231 in initial analysis node activity resolve node ctx none file opt homebrew lib python3 9 site package tensorflow python autograph pyct static analysis activity py line 709 in resolve return activityanalyzer context parent scope visit node file opt homebrew lib python3 9 site package tensorflow python autograph pyct transformer py line 445 in visit result super base self visit node file opt homebrew cellar 3 9 9 framework python framework version 3 9 lib python3 9 ast py line 407 in visit return visitor node file opt homebrew lib python3 9 site package tensorflow python autograph pyct static analysis activity py line 601 in visit functiondef node body self visit block node body file opt homebrew lib python3 9 site package tensorflow python autograph pyct transformer py line 340 in visit block replacement self visit node file opt homebrew lib python3 9 site package tensorflow python autograph pyct transformer py line 445 in visit result super base self visit node file opt homebrew cellar 3 9 9 framework python framework version 3 9 lib python3 9 ast py line 407 in visit return visitor node file opt homebrew lib python3 9 site package tensorflow python autograph pyct static analysis activity py line 651 in visit with node self generic visit node file opt homebrew cellar 3 9 9 
framework python framework version 3 9 lib python3 9 ast py line 483 in generic visit value self visit value file opt homebrew lib python3 9 site package tensorflow python autograph pyct transformer py line 445 in visit result super base self visit node file opt homebrew cellar 3 9 9 framework python framework version 3 9 lib python3 9 ast py line 407 in visit return visitor node file opt homebrew lib python3 9 site package tensorflow python autograph pyct static analysis activity py line 394 in visit annassign node value self visit node value file opt homebrew lib python3 9 site package tensorflow python autograph pyct transformer py line 431 in visit raise valueerror msg valueerror invalid value for node expect ast ast get to visit list of node use visit block instead warn tensorflow autograph could not transform train at 0x2976dd700 and will run it as be please report this to the tensorflow team when file the bug set the verbosity to 10 on linux export autograph verbosity 10 and attach the full output cause invalid value for node expect ast ast get to visit list of node use visit block instead to silence this warning decorate the function with tf autograph experimental do not convert system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 macos 12 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary pip3 install tensorflow macos tensorflow version use command below unknown 2 7 0 python version 3 9 9 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git 
version, tf.version.VERSION). Describe the current behavior. Describe the expected behavior. Contributing: do you want to contribute a PR? (yes/no); briefly describe your candidate solution if contributing. Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to a Colab/Jupyter/any notebook. Sorry, but I can't provide the full source code. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached.
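The traceback above bottoms out in visit_AnnAssign with "Invalid value for node.value: expected ast.AST, got list", i.e. AutoGraph crashes while visiting an annotated assignment (the flattened source suggests a line like "logp: tf.Tensor = ...", though the exact name is not recoverable here). A minimal stdlib illustration of why annotated assignments are a distinct AST node that needs its own handler:

```python
import ast

plain = ast.parse("x = f(1)").body[0]
annotated = ast.parse("x: SomeType = f(1)").body[0]

# Annotated assignments parse to ast.AnnAssign, not ast.Assign,
# so a code transformer needs a separate visit_AnnAssign handler --
# exactly the code path named in the traceback above.
```

Removing the annotation (or decorating the function with tf.autograph.experimental.do_not_convert, as the warning suggests) sidesteps that code path.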
tensorflowtensorflow | model.fit bug when using a zipped dataset as input for a multiple-input model | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Ubuntu 18.04; TensorFlow installed from: pip; TensorFlow version: 2.9.0-dev20220202; Python version: 3.10.2. Describe the current behavior: I have a custom model which takes 3 images as input. I have 3 separate (currently unbatched, while I debug this error) datasets; classes are encoded as categorical, meaning each input tensor has shape (x, y, z, c). Trying to input the 3 datasets separately fails, either by inputting them as a dict mapping each dataset to a named input ({"input1": ds1, "input2": ds2, "input3": ds3}) or using a list ([ds1, ds2, ds3]). I zipped the three datasets and tested the resulting dataset, using the docs on zip as guidance: for elements in zipped_ds.as_numpy_iterator(): print(elements) outputs elements of shapes (x1, y1, z1, c1), (x2, y2, z2, c2), (x3, y3, z3, c3) on each call, which seems right: every call to the iterator returns 3 elements. Well, when I use the zipped dataset as the input of model.fit, the first element in the tuple returned by the dataset object is treated as the input for the whole model, meaning that instead of using all of (x1, y1, z1, c1), (x2, y2, z2, c2), (x3, y3, z3, c3) as the input to the model, it uses only (x1, y1, z1, c1), and the training fails. I've tried many approaches, like using zip(ds.as_numpy_iterator()) or "for idx, (ds1, ds2, ds3) in enumerate(zip(ds1, ds2, ds3))", but both fail as the returned items are empty. Standalone code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem): import os import tensorflow as tf tensorflow nightly version 2 5 from tensorflow import kera from tensorflow image import crop to bound box as tfimgcrop from tensorflow kera preprocesse import image dataset from directory batch size 32 adjust img size 224 224 img shape img
size 3 url path to zip tf keras util get file cat and dog zip origin url extract true path os path join os path dirname path to zip cat and dog filter train dir os path join path train validation dir os path join path validation train dataset tf keras preprocesse image dataset from directory train dir shuffle false label mode categorical batch size 32 image size img size validation dataset tf keras preprocesse image dataset from directory validation dir shuffle false label mode categorical batch size 32 image size img size base model1 tf keras application mobilenetv3large input shape img shape include top false weight imagenet minimalistic false pool max dropout rate 0 2 base model2 tf keras application mobilenetv3large input shape img shape include top false weight imagenet minimalistic false pool max dropout rate 0 2 base model3 tf keras application mobilenetv3large input shape img shape include top false weight imagenet minimalistic false pool max dropout rate 0 2 pre concat layer1 tf keras layer dense 64 activation relu kernel initializer random uniform bias initializer zero pre concat layer2 tf keras layer dense 64 activation relu kernel initializer random uniform bias initializer zero pre concat layer3 tf keras layer dense 64 activation relu kernel initializer random uniform bias initializer zero post concat layer tf keras layer dense 128 activation relu kernel initializer random uniform bias initializer zero prediction layer tf keras layer dense 2 activation softmax kernel initializer random uniform bias initializer zero input1 tf keras input shape 64 64 3 name first input2 tf keras input shape 64 64 3 name second input3 tf keras input shape 64 64 3 name third x base model1 input1 training false x tf keras layer globalaveragepooling2d x x tf keras layers dropout 0 2 x x tf keras layer batchnormalization x x pre concat layer1 x x tf keras layers dropout 0 2 x output tf keras layer batchnormalization x body1 tf keras model input1 output x base model2 input2 
training false x tf keras layer globalaveragepooling2d x x tf keras layers dropout 0 2 x x tf keras layer batchnormalization x x pre concat layer2 x x tf keras layers dropout 0 2 x output tf keras layer batchnormalization x body2 tf keras model input2 output x base model3 input3 training false x tf keras layer globalaveragepooling2d x x tf keras layers dropout 0 2 x x tf keras layer batchnormalization x x pre concat layer3 x x tf keras layers dropout 0 2 x output tf keras layer batchnormalization x body3 tf keras model input3 output body1 get layer mobilenetv3large name mobilenetv3large1 body2 get layer mobilenetv3large name mobilenetv3large2 body3 get layer mobilenetv3large name mobilenetv3large3 combinedinput tf keras layers concatenate body1 output body2 output body3 output x post concat layer combinedinput x tf keras layers dropout 0 2 x x tf keras layer batchnormalization x foutput prediction layer x final model tf keras model input body1 input body2 input body3 input output foutput def resize data1 image class return tfimgcrop image offset height 0 offset width 0 target height 64 target width 64 class def resize data2 image class return tfimgcrop image offset height 0 offset width 64 target height 64 target width 64 class def resize data3 image class return tfimgcrop image offset height 0 offset width 128 target height 64 target width 64 class train dataset unb train dataset unbatch train dataset1 train dataset unb map resize data1 train dataset2 train dataset unb map resize data2 train dataset3 train dataset unb map resize data3 train dataset zip tf datum dataset zip train dataset1 train dataset2 train dataset3 validation dataset unb validation dataset unbatch validation dataset1 validation dataset unb map resize data1 validation dataset2 validation dataset unb map resize data2 validation dataset3 validation dataset unb map resize data3 validation dataset zip tf datum dataset zip validation dataset1 validation dataset2 validation dataset3 final model compile 
history = final_model.fit(train_dataset_zip, epochs=999, validation_data=validation_dataset_zip, validation_steps=32)
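The mismatch described above is structural: zipping three (image, label) datasets yields elements of the form ((x1, y1), (x2, y2), (x3, y3)), while a three-input model expects ((x1, x2, x3), y). A pure-Python sketch of the restructuring map (in tf.data the equivalent fix would be a map over the zipped dataset; the function below is illustrative):

```python
def restructure(element):
    """((x1, y1), (x2, y2), (x3, y3)) -> ((x1, x2, x3), y1)."""
    (x1, y1), (x2, y2), (x3, y3) = element
    # The three streams come from the same source, so the labels match;
    # keep one copy.
    return (x1, x2, x3), y1

ds1 = [("img1a", "cat"), ("img1b", "dog")]
ds2 = [("img2a", "cat"), ("img2b", "dog")]
ds3 = [("img3a", "cat"), ("img3b", "dog")]

fixed = [restructure(e) for e in zip(ds1, ds2, ds3)]
```

After this reshaping, the first element of each tuple really is the full model input, which is what model.fit assumes.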
tensorflowtensorflow | asd | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow); OS platform and distribution (e.g. Linux Ubuntu 16.04); mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device; TensorFlow installed from (source or binary); TensorFlow version (use command below); Python version; Bazel version (if compiling from source); GCC/compiler version (if compiling from source); CUDA/cuDNN version; GPU model and memory. You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"; 2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)". Describe the current behavior. Describe the expected behavior. Contributing: do you want to contribute a PR? (yes/no); briefly describe your candidate solution if contributing. Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to a Colab/Jupyter/any notebook. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached.