| repository | issue title | labels | body |
|---|---|---|---|
tensorflowtensorflow | typeerror apply gradient get an unexpected keyword argument global step in tf agent agent reinforceagent | Bug | first of all I try everything in my code and it didn t change the outcome I do not know where be this global step come from not from my code then I belive the problem may be with tensor if its not I be sorry for post as bug issue but before cancel it could you at least tell I how can I train my model edit other agent work with the same layout ex dqnagent work well system information have I write custom code as oppose to use a stock example script provide in tensorflow stock and non stock os platform and distribution e g linux ubuntu 16 04 window 10 ubuntu mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device non tensorflow instal from source or binary source tensorflow version use command below 2 4 python version 3 8 bazel version if compile from source non gcc compiler version if compile from source non cuda cudnn version non gpu also doesn t work gpu model and memory gtx1070 6 gb describe the current behavior appdata local program python python38 lib site package tf agent agent reinforce reinforce agent py in train self experience weight 286 self train step counter 287 288 self optimizer apply gradient 289 grad and var global step 0 290 typeerror apply gradient get an unexpected keyword argument global step describe the expect behavior train loss tf agent train experience global step tf variable 1 name global step print train loss train los standalone code to reproduce the issue py import pyxinput import time import cv2 from pil import imagegrab import numpy as np import keyboard import tensorflow import tf agent import tensorflow as tf from tensorflow import kera from tensorflow keras import layer import torch from tf agent network import actor distribution networ from tf agent policy import random py policy tensod spec tf agent spec boundedarrayspec 15 dtype np float32 name ximputspecs minimum 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 maximum 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 tensod spec2 tf agent spec tensorspec 440 600 1 dtype tf int32 name screenspec tensor reward spe tf agent spec tensorspec 1 1 dtype tf int32 name reward fromenv tf agent specs boundedtensorspec shape 440 600 1 dtype uint8 name observation minimum 0 maximum 255 fromenv2 tf agent spec boundedtensorspec shape 1 440 600 1 dtype tf int32 name observation minimum 0 maximum 255 fullscreen 110 130 710 570 screenpil imagegrab grab bbox fullscreen showprint np array screenpil grayscreen cv2 cvtcolor showprint cv2 color bgr2gray screenrect cv2 cvtcolor grayscreen cv2 color gray2bgr grayscreen grayscreen reshape 440 600 1 time step spec2 tf agent trajectory time step time step spec observation spec fromenv reward spec tensor reward spec time step spec tf agent trajectory time step time step spec observation spec fromenv reward spec tensor reward spec actor net tf agent network actor distribution network actordistributionnetwork input tensor spec fromenv output tensor spec tf agent spec tensor spec from spec tensod spec activation fn relu conv layer param 25 40 2 fc layer param 50 25 15 dtype int32 print actor net train step counter tf dtype cast 1 tf int32 optimizer tf keras optimizer adam learning rate 0 003 tf agent tf agent agent reinforceagent time step spec time step spec action spec tf agent spec tensor spec from spec tensod spec actor network actor net optimizer optimizer normalize return true train step counter tf variable 1 name global step tf agent initialize 
grayscreen2 grayscreen grayscreen2 grayscreen2 reshape 1 440 600 1 time step2 tf agent trajectory time step timestep step type tf agent trajectory time step steptype first reward tf dtype cast 1 tf float32 discount tf dtype cast 1 tf float32 observation grayscreen2 policy state tf agent policy get initial state batch size 1 policy step tf agent policy action time step2 policy state print policy step observe time step2 observation print observe dtype observe observe astype int print observe shape experience tf agent trajectory trajectory trajectory action tf compat v2 variable tf compat v2 variable policy step action tf compat v2 variable policy step action tf compat v2 variable policy step action reward tf compat v2 variable tf compat v2 variable time step2 reward tf compat v2 variable time step2 reward tf compat v2 variable time step2 reward step type tf compat v2 variable tf compat v2 variable tf agent trajectory time step steptype first tf compat v2 variable tf agent trajectory time step steptype mid tf compat v2 variable tf agent trajectory time step steptype last observation tf compat v2 variable tf compat v2 variable observe tf compat v2 variable observe tf compat v2 variable observe policy info tf agent policy info spec next step type tf compat v2 variable tf compat v2 variable tf agent trajectory time step steptype mid tf compat v2 variable tf agent trajectory time step steptype last tf compat v2 variable tf agent trajectory time step steptype last discount tf compat v2 variable tf dtype cast 1 tf float32 tf dtype cast 1 tf float32 tf dtype cast 1 tf float32 train loss tf agent train experience print train loss other info log include any log or source code that would be helpful to full erro py typeerror traceback most recent call last in 1 2 train loss tf agent train experience global step tf variable 1 name global step 3 print train loss appdata local program python python38 lib site package tf agent agent tf agent py in train self experience weight kwargs 516 517 if self enable function 518 loss info self train fn 519 experience experience weight weight kwargs 520 else appdata local program python python38 lib site package tf agent util common py in with check resource var fn args fn kwargs 183 we re either in eager mode or in tf function mode no in between so 184 autodep like behavior be already expect of fn 185 return fn fn args fn kwargs 186 if not resource variable enable 187 raise runtimeerror miss resource variable error appdata local program python python38 lib site package tf agent agent reinforce reinforce agent py in train self experience weight 286 self train step counter 287 288 self optimizer apply gradient 289 grad and var global step 0 290 typeerror apply gradient get an unexpected keyword argument global step thank you for your time |
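
A minimal sketch (not part of the original report) of the workaround usually suggested for this error, assuming the mismatch is between a Keras optimizer and a tf-agents release whose `ReinforceAgent` still passes `global_step` to `apply_gradients()`; upgrading tf-agents is the other usual option. Only the learning rate is taken from the report, everything else is illustrative.

```python
import tensorflow as tf

# Assumption: the installed tf_agents ReinforceAgent calls
#   optimizer.apply_gradients(grads_and_vars, global_step=...)
# which only TF1-style optimizers accept, so a compat.v1 optimizer avoids the TypeError.
optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=0.003)  # instead of tf.keras.optimizers.Adam
train_step_counter = tf.Variable(0, dtype=tf.int64, name="global_step")

# tf_agent = reinforce_agent.ReinforceAgent(
#     time_step_spec, action_spec,
#     actor_network=actor_net,
#     optimizer=optimizer,
#     normalize_returns=True,
#     train_step_counter=train_step_counter)
```
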
tensorflow/tensorflow | micro_speech recognize_commands_test.cc bug | Bug | Hi, I tried to follow the book TinyML (chapter 07) and ran recognize_commands_test, and got the error shown in the image below. I think that in recognize_commands_test.cc, line 158, `const int bad_dims` `{2, 1, 3}` has to be changed to `{2, 1, 4}`. (Screenshot from 2021-04-08 16:43:54.) |
tensorflow/tensorflow | Add links and more info in tf-examples README | Bug | This template is for miscellaneous issues not covered by the other issue categories. For questions on how to work with TensorFlow, or support for problems that are not verified bugs in TensorFlow, please go to StackOverflow. If you are reporting a vulnerability, please use the dedicated reporting process. For high-level discussions about TensorFlow, please post to [the discussion list]; for questions about the development or internal workings of TensorFlow, or if you would like to know how to contribute to TensorFlow, please post to [the developers list]. |
tensorflowtensorflow | tensorboard can t support read datum from kerberos cluster hdfs | Bug | os platform and distribution e g linux ubuntu 16 04 cento 7 8 tensorflow instal from source or binary binary tensorflow version use command below tensorboard 2 4 1 tensorboard plugin wit 1 8 0 tensorflow 2 3 1 tensorflow estimator 2 3 0 python version 3 6 8 describe the current behavior sudo su hdfs export kerb ticket cache path tmp krb5cc 996 echo hdfs kinit hdfs test kdc bash 4 2 hdfs dfs ls tmp tensorflow mnist find 4 item drwxr xr x hdfs supergroup 0 2021 03 31 18 03 tmp tensorflow mnist input data drwxr xr x hdfs supergroup 0 2021 04 07 19 49 tmp tensorflow mnist work dir drwxr xr x hdfs supergroup 0 2021 03 31 19 25 tmp tensorflow mnist work dir 1131 drwxr xr x hdfs supergroup 0 2021 03 31 19 39 tmp tensorflow mnist work dir 231 bash 4 2 hdfs dfs ls tmp tensorflow mnist work dir find 17 item rw r r 1 hdfs supergroup 222 2021 04 07 19 49 tmp tensorflow mnist work dir checkpoint rw r r 1 hdfs supergroup 656044 2021 03 31 20 09 tmp tensorflow mnist work dir event out tfevent 1617217664 cdhhakerberos cdh core kudu 0 novalocal rw r r 1 hdfs supergroup 657680 2021 04 01 02 10 tmp tensorflow mnist work dir event out tfevent 1617239301 cdhhakerberos cdh core kudu 0 novalocal rw r r 1 hdfs supergroup 657796 2021 04 07 20 49 tmp tensorflow mnist work dir event out tfevent 1617824847 cdhhakerberos cdh core kudu 0 novalocal rw r r 1 hdfs supergroup 328014 2021 04 07 19 47 tmp tensorflow mnist work dir graph pbtxt rw r r 1 hdfs supergroup 39295624 2021 03 31 19 07 tmp tensorflow mnist work dir model ckpt 0 datum 00000 of 00001 rw r r 1 hdfs supergroup 994 2021 03 31 19 07 tmp tensorflow mnist work dir model ckpt 0 index rw r r 1 hdfs supergroup 138970 2021 03 31 19 07 tmp tensorflow mnist work dir model ckpt 0 meta classpath hadoop classpath glob tensorboard logdir hdfs default tmp tensorflow mnist host hostname verbosity 0 logg level other logg debug stderrthreshold debug describe the expect behavior I expect show datum in tensorboard but tensorboard show no dashboard be active for the current datum set other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach 2021 04 07 22 22 10 690895 I tensorflow stream executor cuda cudart stub cc 29 ignore above cudart dlerror if you do not have a gpu set up on your machine i0407 22 22 13 750683 139694758954816 plugin event multiplexer py 106 event multiplexer initialize i0407 22 22 13 750985 139694758954816 plugin event multiplexer py 125 event multiplexer do initialize i0407 22 22 13 751172 139694758954816 datum ingester py 124 launch reload in a daemon thread i0407 22 22 13 752043 139693371959040 datum ingester py 98 tensorboard reload process begin i0407 22 22 13 752586 139693371959040 plugin event multiplexer py 201 start addrunsfromdirectory hdfs default tmp tensorflow mnist i0407 22 22 13 756613 139693371959040 plugin event multiplexer py 207 do with addrunsfromdirectory hdfs default tmp tensorflow mnist i0407 22 22 13 756979 139693371959040 datum ingester py 102 tensorboard reload process reload the whole multiplexer i0407 22 22 13 757233 139693371959040 plugin event multiplexer py 212 begin eventmultiplexer reload i0407 22 22 13 757594 139693371959040 plugin event multiplexer py 256 reloading run serially one after another on the main thread i0407 22 22 13 757895 139693371959040 plugin event multiplexer py 265 finish with 
eventmultiplexer reload i0407 22 22 13 758112 139693371959040 datum ingester py 107 tensorboard do reloading load take 0 006 sec tensorboard 2 4 1 at press ctrl c to quit i0407 22 22 14 644470 139693380351744 internal py 113 192 168 140 253 07 apr 2021 22 22 14 get http 1 1 200 i0407 22 22 14 728922 139693380351744 internal py 113 192 168 140 253 07 apr 2021 22 22 14 get index js http 1 1 200 i0407 22 22 14 752892 139693388744448 internal py 113 192 168 140 253 07 apr 2021 22 22 14 get font roboto ommgfzmqthoryqo9n22dcuvvdin1pk8aktelpez5c0a woff2 http 1 1 200 i0407 22 22 15 515960 139693380351744 application py 439 plugin listing be active for scalar take 0 000 second i0407 22 22 15 516464 139693380351744 application py 439 plugin listing be active for custom scalar take 0 000 second i0407 22 22 15 516749 139693380351744 application py 439 plugin listing be active for image take 0 000 second i0407 22 22 15 516995 139693380351744 application py 439 plugin listing be active for audio take 0 000 second i0407 22 22 15 517701 139693380351744 application py 439 plugin listing be active for debugg v2 take 0 000 second i0407 22 22 15 518152 139693380351744 application py 439 plugin listing be active for graph take 0 000 second i0407 22 22 15 518453 139693380351744 application py 439 plugin listing be active for distribution take 0 000 second i0407 22 22 15 518700 139693380351744 application py 439 plugin listing be active for histogram take 0 000 second i0407 22 22 15 518924 139693380351744 application py 439 plugin listing be active for text take 0 000 second i0407 22 22 15 519140 139693380351744 application py 439 plugin listing be active for pr curve take 0 000 second i0407 22 22 15 519388 139693380351744 application py 439 plugin listing be active for profile redirect take 0 000 second i0407 22 22 15 519608 139693380351744 application py 439 plugin listing be active for hparam take 0 000 second i0407 22 22 15 519902 139693380351744 application py 439 plugin listing be active for mesh take 0 000 second i0407 22 22 15 520128 139693380351744 application py 439 plugin listing be active for timeserie take 0 000 second i0407 22 22 15 525379 139693397137152 internal py 113 192 168 140 253 07 apr 2021 22 22 15 get icon bundle svg http 1 1 200 i0407 22 22 15 526228 139693380351744 application py 439 plugin listing be active for projector take 0 006 second i0407 22 22 15 526491 139693380351744 application py 439 plugin listing be active for whatif take 0 000 second i0407 22 22 15 529370 139693388744448 internal py 113 192 168 140 253 07 apr 2021 22 22 15 get datum environment http 1 1 200 i0407 22 22 15 530524 139693380351744 internal py 113 192 168 140 253 07 apr 2021 22 22 15 get datum plugin list http 1 1 200 i0407 22 22 15 541091 139693380351744 internal py 113 192 168 140 253 07 apr 2021 22 22 15 get datum run http 1 1 200 i0407 22 22 15 542141 139693397137152 internal py 113 192 168 140 253 07 apr 2021 22 22 15 get datum environment http 1 1 200 i0407 22 22 15 544588 139693388744448 internal py 113 192 168 140 253 07 apr 2021 22 22 15 get font roboto rxzjdnzeo3r5zsexge8uuzbw1xu1rkptjj 0jans920 woff2 http 1 1 200 i0407 22 22 15 587946 139693388744448 internal py 113 192 168 140 253 07 apr 2021 22 22 15 get datum run http 1 1 200 i0407 22 22 15 699429 139693388744448 internal py 113 192 168 140 253 07 apr 2021 22 22 15 get font roboto d 6iyplofoccackzxwxsojbw1xu1rkptjj 0jans920 woff2 http 1 1 200 i0407 22 22 15 701413 139693397137152 internal py 113 192 168 140 253 07 apr 2021 22 22 15 get font 
roboto vpcynsl0qhq 6dx7lkvbyxyhjbspvc47ee6xr 80hnw woff2 http 1 1 200 i0407 22 22 18 765110 139693371959040 datum ingester py 98 tensorboard reload process begin i0407 22 22 18 765607 139693371959040 plugin event multiplexer py 201 start addrunsfromdirectory hdfs default tmp tensorflow mnist i0407 22 22 18 765993 139693371959040 plugin event multiplexer py 207 do with addrunsfromdirectory hdfs default tmp tensorflow mnist i0407 22 22 18 766208 139693371959040 datum ingester py 102 tensorboard reload process reload the whole multiplexer i0407 22 22 18 766429 139693371959040 plugin event multiplexer py 212 begin eventmultiplexer reload i0407 22 22 18 766694 139693371959040 plugin event multiplexer py 256 reloading run serially one after another on the main thread i0407 22 22 18 766852 139693371959040 plugin event multiplexer py 265 finish with eventmultiplexer reload i0407 22 22 18 766979 139693371959040 datum ingester py 107 tensorboard do reloading load take 0 002 sec |
tensorflow/tensorflow | Broken/outdated links in docs | Bug | While going through the documentation I encountered a few broken/outdated links, and I've found the correct links for them. Kindly assign this issue to me so that I can make the required changes. |
tensorflow/tensorflow | Outdated or broken links in TensorFlow examples documentation | Bug | While going through tensorflow/examples CONTRIBUTING.md I found some broken or outdated links in the docs. I would like to fix them with the correct ones. Please assign the issue to me. |
tensorflow/tensorflow | Change "samples" to "examples" in TextVectorization | Bug | I made an issue about this, and MarkDaoust said "example" would be a slightly better fit; therefore I changed all "sample" to "example" in the docstring of TextVectorization. Fixes #48298. |
tensorflow/tensorflow | Error validating data cardinality when fitting the model | Bug | System information: I am writing code to create a model using TensorFlow. OS Platform and Distribution: Linux Ubuntu 18.04. TensorFlow installed from: binary. TensorFlow version: 2.4.1. Python version: 3.7.4. CUDA/cuDNN version: not installed. GPU model and memory: GeForce 940MX, 256 MB. System information extracted from tf_env_collect.sh: tf_env.txt. Current behavior: when fitting the model, inputs and outputs are joined in a single tuple before calling the check_data_cardinality function in tensorflow/python/keras/engine/data_adapter.py. Inside check_data_cardinality, a flatten method is then called on the joined x and y; since inputs and outputs are of different dimensions, this results in an error, still inside check_data_cardinality. I have tried input and output manipulations to prevent the error, since this could be caused by an invalid input and/or output format, but had no success. Expected behavior: do not throw an error; do not mix dimension validation of inputs and outputs together. Standalone code to reproduce the issue: [...]. Other info / logs: ValueError traceback (most recent call last), in test_concatenate_neural_network, 4 frames, .../tensorflow/python/keras/engine/data_adapter.py in _check_data_cardinality(data): 1527 label, ", ".join(str(i.shape[0]) for i in nest.flatten(single_data)); 1528 msg += "Make sure all arrays contain the same number of samples."; 1529 raise ValueError(msg). ValueError: Data cardinality is ambiguous: x sizes: 3, 3, 3, 3, 3, 3, 3, 3; y sizes: 4. Make sure all arrays contain the same number of samples. |
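
A small illustration (toy shapes, not the reporter's model) of what the cardinality check compares: every array passed to `fit()` — each input and each target — must share the same first (sample) dimension.

```python
import numpy as np

# Toy shapes only: three inputs with 3 samples each, but 4 targets.
x = [np.zeros((3, 8)), np.zeros((3, 8)), np.zeros((3, 8))]
y_bad = np.zeros((4, 1))   # -> "Data cardinality is ambiguous: x sizes: 3, 3, 3  y sizes: 4"
y_ok = np.zeros((3, 1))    # matching sample count -> passes the check

# model.fit(x, y_ok, ...)  # hypothetical model; only the leading dimensions matter here
```
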
tensorflowtensorflow | keras loss be stick at the very same value during training after the first epoch | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes see below os platform and distribution e g linux ubuntu 16 04 ubuntu 20 04 tensorflow instal from source or binary binary via python pip tensorflow version use command below v2 4 0 49 g85c8b2a817f 2 4 1 python version python 3 8 5 cuda cudnn version 11 2 gpu model and memory tesla p100 pcie with 16 gb memory describe the current behavior I be write a kera base autoencoder use my own datum that dataset include about 20k training and about 4k validation image all of they be very similar all show the very same object my model look like this model autoencoder layer type output shape param input 1 inputlayer none 300 300 1 0 encoder functional none 16 5779216 decoder functional none 300 300 1 6176065 total param 11 955 281 trainable param 11 954 897 non trainable param 384 model encoder layer type output shape param input 1 inputlayer none 300 300 1 0 conv2d conv2d none 150 150 32 320 leaky re lu leakyrelu none 150 150 32 0 batch normalization batchno none 150 150 32 128 conv2d 1 conv2d none 75 75 64 18496 leaky re lu 1 leakyrelu none 75 75 64 0 batch normalization 1 batch none 75 75 64 256 flatten flatten none 360000 0 dense dense none 16 5760016 total param 5 779 216 trainable param 5 779 024 non trainable param 192 model decoder layer type output shape param input 2 inputlayer none 16 0 dense 1 dense none 360000 6120000 reshape reshape none 75 75 64 0 conv2d transpose conv2dtran none 150 150 64 36928 leaky re lu 2 leakyrelu none 150 150 64 0 batch normalization 2 batch none 150 150 64 256 conv2d transpose 1 conv2dtr none 300 300 32 18464 leaky re lu 3 leakyrelu none 300 300 32 0 batch normalization 3 batch none 300 300 32 128 conv2d transpose 2 conv2dtr none 300 300 1 289 activation activation none 300 300 1 0 total param 6 176 065 trainable param 6 175 873 non trainable param 192 then I initialize my model like this imgsize 300 epoch 20 lr 0 0001 encoder decoder autoencoder convautoencoder build imgsize imgsize 1 sche exponentialdecay initial learning rate lr decay step epoch decay rate lr epoch autoencod compile loss mean squared error optimizer adam learning rate sche then I train my model like this image generator imagedatagenerator rescale 1 0 255 train gen image generator flow from directory os path join args image training class mode input color mode grayscale target size imgsize imgsize batch size bs val gen image generator flow from directory os path join args image validation class mode input color mode grayscale target size imgsize imgsize batch size bs hist autoencoder fit train gen validation data val gen epochs epoch batch size bs my batch size bs be 32 and I start with an initial adam learning rate of 0 001 but I also try value like 0 1 down to 0 0001 I also try to increase the latent dimensionality to something like 1024 but that doesn t solve my issue either describe the expect behavior during train the loss go down in the first epoch from about 0 5 to about 0 2 and then begin from the second epoch that loss stick at the very same value e g 0 1989 and then it stay there forever regardless of how many epoch I train and or the initial learning rate I use I would expect that the loss go down a bit far or would change with different learning rate but as say that didn t help even another model layout do not solve my issue the training loss be stick to a value 
and it stay at this value forever |
tensorflow/tensorflow | TFLite interpreter fails to execute quantized model, succeeds on non-quantized | Bug | System information: Have I written custom code: yes. OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): nightly 2.6.0-dev20210402, installed using pip install tf-nightly. Python version: 3.8.6. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Given a TensorFlow model, I convert it to TFLite in three ways: 1. plain conversion without post-training quantization; 2. post-training integer quantization where only the weights are quantized; 3. full integer quantization. When running inference using those three models: 1. the non-quantized version works as expected in 21 seconds; 2. the version where just the weights are quantized does not terminate even after 10 minutes; 3. when using the model with full integer quantization, the interpreter crashes with the following error message: external/ruy/ruy/apply_multiplier.cc:52: RUY_CHECK_LE condition not satisfied: shift <= 7, with values 108 and 7. Describe the expected behavior: the interpreter running the model with full integer quantization should terminate with a result similar to the non-quantized version. Standalone code to reproduce the issue: see linked zip file — quantize.py is the code used to convert and quantize the TensorFlow model to TFLite, tfliteinf.py is the code used to run inference, poolnet_small1_tf is the original TensorFlow SavedModel, trevi_small1.bmp is the input image, expected.png is the expected output image, and norm_images is a directory containing the representative dataset. |
tensorflow/tensorflow | "sample" or "example"? | Bug | URL(s) with the issue: [...]. Description of issue (what needs changing): clear description: a lot of the description uses the word "sample"; however, the Google machine learning glossary does not have an entry for "sample". It does have "example": "example: one row of a dataset. An example contains one or more features and possibly a label. See also labeled example and unlabeled example." Since this class works on datasets, I think the description is talking about examples. Before we have samples, there must be a process of sampling; I don't think TextVectorization is doing sampling, therefore it's improper to call examples "samples". Should "sample" be replaced by "example"? |
tensorflow/tensorflow | Update/refactor community pages | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: [...]. Description of issue (what needs changing): there is some outdated and missing info on this page; see the thread at [issuecomment-812568979]. cc @theadactyl |
tensorflowtensorflow | rnn input type output type uint8 float32 node number 18 tanh fail to prepare | Bug | 1 system information window 10 pip install tf nightly 2 6 0 dev20210330 2 code the follow code convert and quantize a stateless lstm layer with additional input and output to handle the state by the user program this be suggest at it be still possible to model a stateful keras lstm layer use the underlying stateless keras lstm layer and manage the state explicitly in the user program such a tensorflow program can still be convert to tensorflow lite use the feature be describe here from tensorflow import kera from tensorflow keras import layer import tensorflow as tf import numpy as np rng np random default rng def representative dataset yield rng random 1 1024 dtype np float32 rng random 1 1 1024 dtype np float32 rng random 1 1024 dtype np float32 def create keras model keras file name stateful true input keras input shape 1 1024 name input hide state keras input shape 1024 name lstm hidden state cell state keras input shape 1024 name lstm cell state output new hidden state new cell state layer lstm unit 1024 return sequence true unroll true return state true name lstm input initial state hide state cell state model keras model input hide state cell state output new hidden state new cell state model summary model compile model save keras file name def convert to tflite keras file name tflite filename converter tf lite tfliteconverter from save model keras file name converter experimental new converter true converter optimization tf lite optimize default converter representative dataset representative dataset converter target spec support op tf lite opsset tflite builtins int8 converter target spec support type tf int8 converter inference input type tf uint8 converter inference output type tf uint8 tflite model converter convert with open tflite filename wb as f f write tflite model def main keras file name example model tflite filename example model tflite create keras model keras file name convert to tflite keras file name tflite filename run model with random input interpreter tf lite interpreter tflite filename interpreter allocate tensor for input detail in interpreter get input detail scale zero point input detail quantization input tensor rng random input detail shape dtype np float32 input tensor input detail dtype input tensor scale zero point interpreter set tensor input detail index input tensor interpreter invoke if name main main grafik 3 failure after conversion run the model after conversion fail with traceback most recent call last file c user user pycharmproject project lstm tflite playground py line 62 in main file c user user pycharmproject project lstm tflite playground py line 51 in main interpreter allocate tensor file c user user pycharmproject project venv lib site package tensorflow lite python interpreter py line 408 in allocate tensor return self interpreter allocatetensor runtimeerror tensorflow lite kernel activation cc 393 input type output type uint8 float32 node number 18 tanh fail to prepare a normal stateless model without the additional in and output run fine be there any workaround |
tensorflow/tensorflow | Issue: tf.function not working when dealing with tf.stack | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10. TensorFlow version: 2.3.0. Describe the current behavior: tf.function is not working when converting functions containing tf.stack. Describe the expected behavior: I would expect tf.function to actually manage to convert the function. Standalone code to reproduce the issue: `import tensorflow as tf; c = tf.Variable([1.5, 2.4]); @tf.function def toy_fct(x): y = tf.stack([x[0], x[1]], axis=0); return y; toy_fct(c)`. Other info / logs: the warning given is: WARNING:tensorflow: AutoGraph could not transform <function> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Index'. To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert. (The same AutoGraph warning is printed a second time.) |
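
A hedged sketch (not from the report) of the usual fix: the warning points at `gast` rather than at `tf.stack` itself. TF 2.3 pins `gast==0.3.3`, and "module 'gast' has no attribute 'Index'" typically appears when a newer gast has been pulled in by another package — the version pin below is an assumption about this particular environment.

```python
# pip install gast==0.3.3   # version TF 2.3 was released against (assumption: a newer gast is installed)
import tensorflow as tf

c = tf.Variable([1.5, 2.4])

@tf.function
def toy_fct(x):
    # stack two scalar slices back into a rank-1 tensor, as in the report
    return tf.stack([x[0], x[1]], axis=0)

print(toy_fct(c))  # traces without the AutoGraph warning once gast matches the TF version
```
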
tensorflow/tensorflow | Broken syntax highlighting on docs that render Python notebooks | Bug | TensorFlow docs related. URL(s) with the issue: code example for MultiHeadAttention at [multi-head attention] and code example at [Create a tf.data.Dataset for training]. Description of issue (what needs changing): clear description: I actually wasted some amount of time through my own stupidity, but there seems to be a glitch in the syntax highlighting system for notebooks rendered on the docs. It seems that the syntax highlighter parses `//` as a comment in Python code blocks, not as the floordiv operator (`__floordiv__`), which is what it actually means. This may cause some amount of confusion, especially for people coming from the C world like myself, who tend to eye-parse `//...` away as a comment — and the broken syntax highlighter encourages this; it looks so natural. Requested visuals, if applicable: [image] [image]; also in dark mode: [image]. |
tensorflow/tensorflow | Update index.yaml | Bug | Updates the broken link to the MLIR repo. Fixes GitHub issue #48189. |
tensorflow/tensorflow | Documentation for TimeDistributed layer | Bug | The documentation for the TimeDistributed layer in Keras mentions that any layer applied with it will be applied to every temporal slice. My assumption is that, for example, considering a many-to-many sequence model trained for NER or language modelling with the following code, if return_sequences=True in the previous BiLSTM layer, then Dense(n) and TimeDistributed(Dense(n)) are exactly the same and either of them can be used. Is my assumption correct? `model = Sequential(); model.add(Embedding(input_dim=voc_size, output_dim=embed_dim, input_length=50)); model.add(Bidirectional(LSTM(units=lstm_units, return_sequences=True), merge_mode='ave')); model.add(TimeDistributed(Dense(n)))` |
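
A quick check (not from the issue) supporting the assumption above: with a 3-D `(batch, timesteps, features)` input, `Dense(n)` is applied independently along the last axis, so wrapping the very same layer in `TimeDistributed` produces identical outputs.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

x = np.random.rand(2, 50, 8).astype("float32")        # (batch, timesteps, features)

dense = layers.Dense(4)
out_dense = dense(x)                                   # Dense maps the last axis at every timestep
out_td = layers.TimeDistributed(dense)(x)              # same layer/weights, applied slice by slice

print(np.allclose(out_dense.numpy(), out_td.numpy()))  # True
```
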
tensorflow/tensorflow | Broken link in MLIR docs | Bug | URL(s) with the issue: [...]. Description of issue (what needs changing): the "MLIR on GitHub" box link is broken — the link points to [...], which returns a 404. I think [...] is the valid link. |
tensorflow/tensorflow | Minor updates to documentation | Bug | TensorFlow Micro. System information: Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04. TensorFlow installed from (source or binary): n/a. TensorFlow version (commit SHA if source): 899fdb415dc970d5bca7d98a7dddf95968ef07c2. Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): n/a. Describe the problem: some documentation is outdated and needs an update; the memory management markdown needs an update regarding offline-planned tensor allocations. Please provide the exact sequence of commands/steps when you ran into the problem: n/a. |
tensorflow/tensorflow | Contains broken link | Bug | URL(s) with the issue: [...]. Description of issue (what needs changing): the doc for the embeddings_metadata argument contains a broken link. It says "see the details about metadata files format", and the link redirects to a "metadata (optional)" anchor which is broken. |
tensorflowtensorflow | miss add function for builtin operator fill | Bug | tensorflow micro system information host os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 5 lts tensorflow instal from source or binary source build locally tensorflow version commit sha if source 771c870a81c1025c4886a4fb60ca33971e98c577 target platform e g arm mbe os arduino nano 33 etc nrf52840 dk describe the problem I successfully convert my model to tf lite for microcontroller and I be now try to consume it from an nrf52840 dk target but I get a failure when allocate the tensor err model debug log cc 12 didn t find op for builtin opcode fill version 1 an old version of this builtin might be support be you use an old tflite binary with a new model err model debug log cc 12 err model debug log cc 12 fail to get registration from op code fill err model debug log cc 12 err model debug log cc 12 fail start model allocation err model debug log cc 12 err model debug log cc 12 allocatetensor fail be proprietary software unfortunately I can not share more detail about the model use I already have a propose resolution of the issue and I m open this issue ticket in order to open a pr please provide the exact sequence of command step when you run into the problem the first step be build tflm locally cd tensorflow make f tensorflow lite micro tool make makefile target cortex m generic target arch cortex m4 fp target toolchain root opt gcc arm none eabi 9 2020 q2 update bin optimize kernel dir cmsis nn microlite once libtensorflow microlite a be build I can include it as a library in the makefile of my project the second step be to consume my tflm model from th target nrf52840 the code fail within this function call void model init set up log static tflite microerrorreporter micro error reporter error reporter micro error reporter map the model into a usable data structure model tflite getmodel g model if model version tflite schema version tf lite report error error reporter model provide be schema version d not equal to support version d n model version tflite schema version this pull in all the operation implementation we need static tflite allopsresolver resolver resolver addexpanddim build an interpreter to run the model with static tflite microinterpreter static interpreter model resolver tensor arena ktensorarenasize error reporter interpreter static interpreter allocate memory from the tensor arena for the model s tensor tflitestatus allocate status interpreter allocatetensor if allocate status ktfliteok tf lite report error error reporter allocatetensor fail return obtain pointer to the model s input and output tensor input interpreter input 0 output interpreter output 0 |
tensorflow/tensorflow | Output size same as input size only if strides = 1 | Bug | URL(s) with the issue: [...]. Description of issue (what needs changing): in the description of the input argument padding it says: one of "valid" or "same" (case-insensitive); "valid" means no padding; "same" results in padding evenly to the left/right or up/down of the input such that the output has the same height/width dimension as the input. However, "same" would make the output have the same dimensions as the input only if strides is set to 1. I was initially very confused by this typo. |
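
For reference (not from the report): with `padding="same"`, TensorFlow computes the spatial output size as `ceil(input_size / stride)`, so the output only matches the input when `strides=1`. A tiny check:

```python
import math

for stride in (1, 2, 3):
    # output height/width for padding="same" on a 224-wide input
    print(stride, math.ceil(224 / stride))   # 1 -> 224, 2 -> 112, 3 -> 75
```
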
tensorflow/tensorflow | Equations are terribly formatted | Bug | Hello, if I change the language (e.g. to German) in the documentation, the equations are terribly formatted. This problem appears e.g. here: [...]. In the English version everything looks fine; nevertheless, as soon as I change the language there seems to be a formatting issue. Best regards |
tensorflow/tensorflow | Tutorial redirect for time series forecasting gives 404 | Bug | Describe the problem: in the machine learning tutorials, the time series example hyperlink redirects to a 404 page instead of the proper tutorial. This needs to be updated, but I'm uncomfortable making the change and then creating a pull request. Please nuke this comment when you can; I don't know another way to point this issue out, so I did it this way. |
tensorflow/tensorflow | TransformedDistribution documentation is not readable | Bug | Describe the problem: the documentation of tfp.TransformedDistribution is not readable in between; see the attachment. Source code / logs: please see the attachment (capture). |
tensorflow/tensorflow | TF 2.4 still depends on grpcio v1.32.0 | Bug | grpcio was meant to be updated in TF 2.4, but TF 2.4.0 and 2.4.1 still depend on it: [setup.py#L121], [issuecomment-806233485]. |
tensorflow/tensorflow | Incorrect ZeroPadding before MaxPool2D in Keras ResNet | Bug | Hi, I've noticed that the implementation of the ResNet networks in Keras introduces a ZeroPadding layer before the initial MaxPool2D (3x3, stride 2): [resnet.py#L161]. I don't think this is correct: zero is not a neutral element for a MaxPool2D operation, and the input values at the edges could be negative. I believe the intention of that zero-padding layer is what would be correctly represented as "same" padding on the MaxPool2D directly. Note that the padding layer is also adding padded elements to the right and bottom edges of the input that the 3x3, stride-2 MaxPool2D operation won't ever use if the input is even-sized, as I believe it commonly is. |
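
A small experiment (illustrative only, not the ResNet code itself) showing the effect described above: with all-negative inputs, zero-padding before a `valid` max-pool lets the injected zeros win at the borders, whereas a `same` max-pool never lets its implicit padding participate in the max.

```python
import tensorflow as tf
from tensorflow.keras import layers

x = -tf.ones((1, 4, 4, 1))   # every input value is -1

padded = layers.ZeroPadding2D(1)(x)
pool_after_pad = layers.MaxPooling2D(3, strides=2, padding="valid")(padded)
pool_same = layers.MaxPooling2D(3, strides=2, padding="same")(x)

print(pool_after_pad.numpy().squeeze())  # zeros appear where the padded border won the max
print(pool_same.numpy().squeeze())       # all -1; the padding never participates
```
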
tensorflowtensorflow | tensorflow probability tensorflow 2 4 and tf distribute mirroredstrategy error not json serializable mirroredvariable | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 ubuntu 18 04 tensorflow instal from source or binary binary tensorflow version use command below v2 4 0 49 g85c8b2a817f 2 4 1 python version 3 6 9 cuda cudnn version 11 0 cuda compilation tool release 11 2 v11 2 152 build cuda 11 2 r11 2 compiler 29618528 0 gpu model and memory 12 gb describe the current behavior point 1 warn tensorflow model fail to serialize as json ignore not json serializable mirroredvariable 0 point 2 error detail share below describe the expect behavior tensorflow probability tensorflow 2 4 and tf distribute mirroredstrategy should work without any problem standalone code to reproduce the issue please confirm if the combination of tensorflow probability tensorflow 2 4 and tf distribute mirroredstrategy and share the key point to be take care of for this case since the code be big if I share the code the question will be to share a small code to reproduce other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach file vevn lib python3 6 site package tensorflow python keras engine training py line 1145 in fit callback on epoch end epoch epoch log file vevn lib python3 6 site package tensorflow python keras callbacks py line 432 in on epoch end callback on epoch end epoch numpy log file home rr sensor fusion ws rr sf net tf2 ws good crl vgg 21mar2021 re trial rrsfnet rrsfnet callback common py line 31 in on epoch end self callback on epoch end epoch log log file vevn lib python3 6 site package tensorflow python keras callbacks py line 1344 in on epoch end self save model epoch epoch log log file vevn lib python3 6 site package tensorflow python keras callbacks py line 1396 in save model self model save filepath overwrite true option self option file vevn lib python3 6 site package tensorflow python keras engine training py line 2002 in save signature option save trace file vevn lib python3 6 site package tensorflow python keras save save py line 154 in save model model filepath overwrite include optimizer file vevn lib python3 6 site package tensorflow python keras save hdf5 format py line 119 in save model to hdf5 v default json util get json type encode utf8 file usr lib python3 6 json init py line 238 in dump kw encode obj file usr lib python3 6 json encoder py line 199 in encode chunk self iterencode o one shoot true file usr lib python3 6 json encoder py line 257 in iterencode return iterencode o 0 file vevn lib python3 6 site package tensorflow python keras save save model json util py line 134 in get json type raise typeerror not json serializable obj typeerror not json serializable mirroredvariable 0 |
tensorflow/tensorflow | Expression issue in BatchNormalization layer | Bug | An issue about the expression in the BatchNormalization layer. URL(s) with the issue: [...]. Description of issue (what needs changing): in the provided URL, it says that during inference this layer returns `(batch - self.moving_mean) / (self.moving_var + epsilon) * gamma + beta`, but I think a square-root operation is missing; it should return `(batch - self.moving_mean) / sqrt(self.moving_var + epsilon) * gamma + beta`. Similarly, the square-root operation should be added in the equation used during training, i.e. `(batch - mean(batch)) / sqrt(var(batch) + epsilon) * gamma + beta`. Please refer to Algorithm 1 in the paper "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift". |
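
For reference (not part of the original report), the inference-time transform from the batch-normalization paper — which is what the reporter is asking the docstring to show — reads

$$ y = \gamma \cdot \frac{x - \mathrm{moving\_mean}}{\sqrt{\mathrm{moving\_var} + \epsilon}} + \beta $$

while during training the batch statistics are used instead:

$$ y = \gamma \cdot \frac{x - \mathrm{mean}(\mathrm{batch})}{\sqrt{\mathrm{var}(\mathrm{batch}) + \epsilon}} + \beta $$
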
tensorflow/tensorflow | steps_per_epoch documentation not in accordance with the tutorial on the TF website | Bug | The documentation for the steps_per_epoch argument of the tf.keras.Model.fit function, located here [training.py#L942], specifies that if x is a tf.data dataset and steps_per_epoch is None, the epoch will run until the input dataset is exhausted; also, the default value of steps_per_epoch is None. In the "Load images" tutorial ("Train a model" section), the train_ds variable is passed to model.fit for training and is a tf.data dataset on which several map()s have been applied before. [image] Following the documentation, how can the steps (underlined in red in the picture) be displayed? Please feel free to ask if you need any more details. |
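
A short sketch (toy dataset, not the tutorial's) of where the step count shown per epoch comes from when `steps_per_epoch` is left at `None`: Keras simply iterates the finite `tf.data` dataset to exhaustion, i.e. its cardinality in batches.

```python
import tensorflow as tf

ds = tf.data.Dataset.range(100).map(lambda i: (tf.zeros([4]), tf.zeros([1]))).batch(32)

# With steps_per_epoch=None, fit() would run ceil(100 / 32) = 4 steps per epoch:
print(tf.data.experimental.cardinality(ds).numpy())  # 4
```
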
tensorflow/tensorflow | Python 3.6.12-debug is not supported but documentation says it is | Bug | This works: `pyenv shell 3.6.12; pip install --upgrade pip; pip install tensorflow`. This also works: `pyenv shell 3.8.5-debug; pip install --upgrade pip; pip install tensorflow`. But this: `pyenv shell 3.6.12-debug; pip install --upgrade pip; pip install tensorflow` says: ERROR: Could not find a version that satisfies the requirement tensorflow. ERROR: No matching distribution found for tensorflow. And AFAIK this is documented nowhere. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Arch Linux. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: -. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): all versions. Python version: 3.6.12-debug. CUDA/cuDNN version: different versions. GPU model and memory: different versions. |
tensorflow/tensorflow | Invalid format in tf.math.log doc page | Bug | URL(s) with the issue: [...]. Description of issue (what needs changing): clear description: the example section is broken ([L13-L22]). Maybe the above line is invalid and the markdown-style codeblock lines should be removed, as in tensorflow/core/api_def/python_api/api_def_Log1p.pbtxt. |
tensorflow/tensorflow | "No visible @interface for 'TFLInterpreter' declares the selector 'copyData:toInputTensorAtIndex:error:'" | Bug | Hello, I have a little problem with the TensorFlow documentation. I would like to use TensorFlow Lite with Objective-C by following this documentation: [...]. URL(s) with the issue / description of issue (what needs changing): apparently this function is not declared: `[interpreter copyData:inputData toInputTensorAtIndex:0 error:&error]`. I get the following error: "No visible @interface for 'TFLInterpreter' declares the selector 'copyData:toInputTensorAtIndex:error:'". Am I missing something? How can I fill a tensor with Objective-C? |
tensorflow/tensorflow | Bug issue test | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): -. OS Platform and Distribution (e.g., Linux Ubuntu 16.04): -. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: -. TensorFlow installed from (source or binary): -. TensorFlow version (use command below): -. Python version: -. Bazel version (if compiling from source): -. GCC/compiler version (if compiling from source): -. CUDA/cuDNN version: -. GPU model and memory: -. You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`. Describe the current behavior. Describe the expected behavior. Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook. Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
tensorflowtensorflow | tensorflow rocm get wrong version | Bug | didn t find this error on error list system information os platform and distribution e g linux ubuntu 16 04 linux ubuntu 20 10 with kernel 5 10 tensorflow instal from source or binary binary tensorflow version use command below 2 3 2 with tensorflow rocm2 3 4 python version 3 8 6 gpu model and memory amd radeon rx580 you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version traceback most recent call last file home static pyenv version 3 8 6 lib python3 8 site package tensorflow python pywrap tensorflow py line 64 in from tensorflow python pywrap tensorflow internal import importerror libamdhip64 so 4 can not open share object file no such file or directory during handling of the above exception another exception occur traceback most recent call last file line 1 in file home static pyenv version 3 8 6 lib python3 8 site package tensorflow init py line 41 in from tensorflow python tool import module util as module util file home static pyenv version 3 8 6 lib python3 8 site package tensorflow python init py line 40 in from tensorflow python eager import context file home static pyenv version 3 8 6 lib python3 8 site package tensorflow python eager context py line 35 in from tensorflow python import pywrap tfe file home static pyenv version 3 8 6 lib python3 8 site package tensorflow python pywrap tfe py line 28 in from tensorflow python import pywrap tensorflow file home static pyenv version 3 8 6 lib python3 8 site package tensorflow python pywrap tensorflow py line 83 in raise importerror msg importerror traceback most recent call last file home static pyenv version 3 8 6 lib python3 8 site package tensorflow python pywrap tensorflow py line 64 in from tensorflow python pywrap tensorflow internal import importerror libamdhip64 so 4 can not open share object file no such file or directory fail to load the native tensorflow runtime see for some common reason and solution include the entire stack trace above this error message when ask for help describe the current behavior I have rocm 3 9 1 I downgrade from 4 but it be still try to get libamdhip64 so 4 instead of libamdhip64 so 3 describe the expect behavior look for correct version standalone code to reproduce the issue python c import tensorflow as tf print tf version git version tf version version |
tensorflow/tensorflow | Added documentation for numeric_column API | Bug | Added documentation for the numeric_column API that will work self-sufficiently. Pull request for the work item: [...]. |
tensorflowtensorflow | segmentation fault when benchmarke tflite efficientnetb0 | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 ubuntu 20 02 android 10 mobile device xiaomi redmi note 7 huawei p30 tensorflow instal from source or binary pip tensorflow version use command below 2 4 1 python version 3 8 describe the current behavior when run the tflite benchmark of a converted efficientnetb0 model on a mobile phone I get a segmentation fault error I would be interested in any idea of what could be cause this if it be come from the tflite file or when run the benchmark I be quite lost and have no idea where to look thank in advance describe the expect behavior get the profiling of the model standalone code to reproduce the issue I be generate the tflite file with the follow script import tensorflow as tf import tensorflow dataset as tfds create model model tf keras application efficientnetb0 include top false create mnist dataset ds tfds load name coco 2017 split train datum dir datum dataset tensorflow dataset ds ds map lambda obj tf image resize obj image 224 224 transform the dataset into a representative dataset as in the tf guide def representative datum gen for input value in ds batch 1 take 100 model have only one input so each datum point have one element yield input value converter converter tf lite tfliteconverter from keras model model converter optimization tf lite optimize default set the representative dataset in order to quantize the activation converter representative dataset representative datum gen ensure that if any op can t be quantize the converter throw an error converter target spec support op tf lite opsset tflite builtins int8 converter target spec support type tf int8 set the input and output tensor to uint8 apis add in r2 3 converter inference input type tf uint8 converter inference output type tf uint8 additional trick converter experimental new converter true converter experimental new quantizer true converter target spec support op tf lite opsset tflite builtin enable tensorflow lite op tf lite quant model converter convert save convert model in tflite file with open efficientnetb0 tflite wb as tf file tf file write tf lite quant model print convert I use the tflite benchmarking tool that can be download here I be run the benchmark with the successive command adb push android arm benchmark model datum local tmp benchmark adb shell chmod x datum local tmp benchmark adb push efficientnetb0 tflite datum local tmp model tflite adb shell datum local tmp benchmark graph datum local tmp model tflite use gpu false adb shell rm f datum local tmp benchmark adb shell rm f datum local tmp model tflite |
tensorflow/tensorflow | module 'tensorflow.keras.layers' has no attribute 'MulitiHeadAttention' | Bug | When I use the new version (2.4.1) and write `head_attention1 = tf.keras.layers.MulitiHeadAttention(num_heads=1, key_dim=1)(conv1)`, it raises an AttributeError: module 'tensorflow.keras.layers' has no attribute 'MulitiHeadAttention'. |
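
Note (not from the report): the layer does exist in TF 2.4.1, but under the spelling `MultiHeadAttention`; the snippet above has an extra "i". A minimal check:

```python
import tensorflow as tf

mha = tf.keras.layers.MultiHeadAttention(num_heads=1, key_dim=1)
x = tf.random.normal((2, 8, 16))
print(mha(x, x).shape)  # (2, 8, 16) -- self-attention over the sequence axis
```
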
tensorflowtensorflow | could not load dynamic library libcusolver so 10 dlerror libcusolver so 10 can not open share object file no such file or directory ld library path usr local cuda 11 2 lib64 usr local cuda extras cupti lib64 usr local cuda lib64 | Bug | I m have issue with lot of stuff and I ve a limited amount of knowledge in nvidia driver so I want to use my gpu for training in kera for that I have to make tensorflow use my gpu correct I if I m wrong I have a support gpu I follow this 1 tutorial I have a geforce gtx 1080 ti I instal nvidia driver 455 successfully I ve instal nvidia cuda toolkit 11 2 1 and have try all three method of instal I think cupti come with nvidia cuda toolkit so I do nothing for that and instal cudnn v8 1 1 33 when I import tensorflow and list device I get this tf config list physical device 2021 03 18 10 56 30 410381 I tensorflow compiler jit xla cpu device cc 41 not create xla device tf xla enable xla device not set 2021 03 18 10 56 30 411387 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcuda so 1 2021 03 18 10 56 30 451723 I tensorflow core common runtime gpu gpu device cc 1720 find device 0 with property pcibusid 0000 0a 00 0 name geforce gtx 1080 ti computecapability 6 1 coreclock 1 582ghz corecount 28 devicememorysize 10 92gib devicememorybandwidth 451 17gib s 2021 03 18 10 56 30 451771 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2021 03 18 10 56 30 454059 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 11 2021 03 18 10 56 30 454124 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 11 2021 03 18 10 56 30 454855 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcufft so 10 2021 03 18 10 56 30 455023 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcurand so 10 2021 03 18 10 56 30 455150 w tensorflow stream executor platform default dso loader cc 60 could not load dynamic library libcusolver so 10 dlerror libcusolver so 10 can not open share object file no such file or directory ld library path usr local cuda 11 2 lib64 usr local cuda extras cupti lib64 usr local cuda lib64 2021 03 18 10 56 30 455675 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusparse so 11 2021 03 18 10 56 30 455764 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudnn so 8 2021 03 18 10 56 30 455776 w tensorflow core common runtime gpu gpu device cc 1757 can not dlopen some gpu library please make sure the miss library mention above be instal properly if you would like to use gpu follow the guide at for how to download and setup the require library for your platform skip register gpu device physicaldevice name physical device cpu 0 device type cpu this be my ld library path all these directory exist export ld library path usr local cuda 11 2 lib64 export ld library path ld library path usr local cuda extras cupti lib64 export ld library path ld library path usr local cuda lib64 I know this issue have be report before but the solution there seem to be reinstall and I ve remove and instal all of the require software multiple time I m at this for 4 day now also I instal an outdated nvidia cuda toolkit v10 from apt once and later remove it 
please help thank you 1 software requirement |
tensorflow/tensorflow | Update CMSIS kernels | Bug | TensorFlow Micro: the CMSIS kernels in lite/micro/kernels/cmsis-nn need to be updated to match the latest changes in the corresponding micro kernels. |
tensorflow/tensorflow | train_a_model.md problems | Bug | TensorFlow Micro. System information: Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Cloud Platform. TensorFlow installed from (source or binary): -. TensorFlow version (commit SHA if source): TensorFlow 1.15. Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): ESP32-CAM. Describe the problem / please provide the exact sequence of commands/steps when you ran into the problem: I am working on custom object detection for use on an Arduino board (ESP32-CAM, working standalone, no-WiFi mode). I am practising with the GitHub page. 1. PYTHONPATH problem: `echo "export PYTHONPATH=$PYTHONPATH:models/research/slim" >> .bashrc; source .bashrc` results in: File, line 1: export PYTHONPATH=$PYTHONPATH:models/research/slim >> .bashrc — SyntaxError: invalid syntax. 2. python: can't open file 'models/research/slim/datasets/build_visualwakewords_data.py': [Errno 2] No such file or directory — build_visualwakewords_data.py does not exist in the datasets folder: `python models/research/slim/datasets/build_visualwakewords_data.py --logtostderr --train_image_dir=coco/raw-data/train2014 --val_image_dir=coco/raw-data/val2014 --train_annotations_file=coco/raw-data/annotations/instances_train2014.json --val_annotations_file=coco/raw-data/annotations/instances_val2014.json --output_dir=coco/processed --small_object_area_threshold=0.005 --foreground_class_of_interest='person'`. 3. The build_visualwakewords_data.py file does not exist in the datasets folder: `python models/research/slim/datasets/build_visualwakewords_data.py` |
tensorflow/tensorflow | Keras tuner missing validation split for model training | Bug | URL with the issue: "Instantiate the tuner and perform hypertuning". Description of issue (what needs changing): in the second-to-last code cell, where the hypermodel is re-instantiated and trained with the optimal number of epochs, a validation_split parameter on the training set is not defined, as per good practice when fitting a model. Is the link to the source code correct? Yes. Are all parameters defined and formatted correctly? Yes. Are return values defined? Yes. Are you planning to also submit a pull request to fix the issue? Yes, here: [...]. |
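
A minimal, self-contained illustration of the missing argument (toy model and data, not the tutorial's hypermodel): passing `validation_split` to `fit()` holds out the stated fraction of the training set for validation.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

x, y = np.random.rand(100, 4), np.random.rand(100, 1)
model.fit(x, y, epochs=2, validation_split=0.2)  # last 20% of x/y is held out for validation
```
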
tensorflowtensorflow | micro port op l2 pool 2d from lite | Bug | tensorflow micro this issue track my work porting operator l2 pool 2d from lite to micro the port will be submit in a number of prs here s a rough flight plan per advaitjain and petewarden pr 1 step 1 extract the code for parse the op from a flatbuffer out of parseopdatatflite in tensorflow lite core api flatbuffer conversion cc into a standalone function that can be call from micro s op resolver pr 2 step 2 extract the reference implementation out of tensorflow lite kernels internal reference reference op h into its own header which can be include without drag in reference op h s dependence the next 3 step be combine into a single pr3 with separate commit step 3 copy operator from lite to micro make minimal change and not include in the build step 4 delete extra code from the micro copy of the operator step 5 port micro copy of operator as necessary and add a corresponding test |
tensorflowtensorflow | ImageResizerState's public function contains a redundant argument | Bug | ImageResizerState::ValidateAndCreateOutput needs a Tensor as its second argument. Actually it is unnecessary, because the input can be retrieved through context->input(0). ImageResizerGradientState has a similar issue. |
tensorflowtensorflow | Inconsistency in tf.keras.Model.fit docs | Bug | URL(s) with the issue: fit. Description of the issue / what needs to change: regarding the validation_data parameter, the docs clearly state the following: "Note that validation_data does not support all the data types that are supported in x, eg, dict, generator or keras.utils.Sequence." I don't know whether this is just an oversight or intended, since the method works fine even when validation_data is a generator. |
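A minimal check of the reported behaviour (the model, shapes and data below are illustrative; per the report this runs even though the doc note says generators are unsupported for validation_data):

```python
import numpy as np
import tensorflow as tf

def val_gen():
    # Endless generator yielding (inputs, targets) batches.
    while True:
        yield np.zeros((8, 4), np.float32), np.zeros((8, 1), np.float32)

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

x = np.zeros((32, 4), np.float32)
y = np.zeros((32, 1), np.float32)
model.fit(x, y, epochs=1, validation_data=val_gen(), validation_steps=2)
```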
tensorflowtensorflow | CUDA version should be 11 instead of 10 in "Install CUDA with apt" docs | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: Ubuntu 16.04; the remaining template fields are not applicable. Describe the problem: just a minor change in documentation. In "Install CUDA with apt", CUDA 10.0 needs to be updated to CUDA 11.0. |
tensorflowtensorflow | Estimator guide page doesn't work | Bug | (screenshot attached) |
tensorflowtensorflow | Cached augmentation in segmentation tutorial: this does not increase dataset size | Bug | URL(s) with the issue. Description of issue / what needs to change: augmentation that increases the size of the dataset has to convert one source datapoint into many augmented datapoints, but in this tutorial augmentation is applied once to each datapoint, effectively keeping the dataset size the same. The root cause is that augmentation is applied before a dataset cache. These are the relevant lines of the tutorial:

```python
@tf.function
def load_image_train(datapoint):
  input_image = tf.image.resize(datapoint['image'], (128, 128))
  input_mask = tf.image.resize(datapoint['segmentation_mask'], (128, 128))

  # This bit is random augmentation mixed into the load function:
  if tf.random.uniform(()) > 0.5:
    input_image = tf.image.flip_left_right(input_image)
    input_mask = tf.image.flip_left_right(input_mask)

  input_image, input_mask = normalize(input_image, input_mask)
  return input_image, input_mask

# Here we load the dataset, including augmentation:
train = dataset['train'].map(load_image_train, num_parallel_calls=tf.data.AUTOTUNE)
# Then we cache that single round of augmentation and repeat that single round forever:
train_dataset = train.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
```

I believe this should look more like (untested):

```python
@tf.function
def load_image_train(datapoint):
  input_image = tf.image.resize(datapoint['image'], (128, 128))
  input_mask = tf.image.resize(datapoint['segmentation_mask'], (128, 128))
  input_image, input_mask = normalize(input_image, input_mask)
  return input_image, input_mask

# We can cache here because the cached dataset is deterministic:
train = dataset['train'].map(load_image_train, num_parallel_calls=tf.data.AUTOTUNE).cache()

@tf.function
def random_augment(input_image, input_mask):
  if tf.random.uniform(()) > 0.5:
    input_image = tf.image.flip_left_right(input_image)
    input_mask = tf.image.flip_left_right(input_mask)
  return input_image, input_mask

# Now apply augmentation after the cache, getting different results each time:
train_dataset = train.map(random_augment).shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
```
 |
tensorflowtensorflow | Update core.py | Bug | Mirrors cl/357056439. Fixes GitHub #45637. |
tensorflowtensorflow | Change a1825c95 breaks tflite-runtime for Raspberry Pi | Bug | When compiling the TensorFlow Lite Python wheel for Raspberry Pi as described on [the build instructions page], the result throws an exception when I try to use it:

```
Traceback (most recent call last):
  File "venv/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 45, in <module>
    from tensorflow.lite.python import metrics_portable as metrics
ModuleNotFoundError: No module named 'tensorflow'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
    import tflite_runtime.interpreter as tflite
  File "venv/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 47, in <module>
    from tensorflow.lite.python import metrics_nonportable as metrics
ModuleNotFoundError: No module named 'tensorflow'
```

The offending lines were added in change a1825c95 in the file tensorflow/lite/python/interpreter.py:

```diff
diff --git a/tensorflow/lite/python/interpreter.py b/tensorflow/lite/python/interpreter.py
index f7ef3b34ba6..5c5898b6d4d 100644
--- a/tensorflow/lite/python/interpreter.py
+++ b/tensorflow/lite/python/interpreter.py
@@ -40,6 +40,13 @@ else:
   return lambda x: x

+try:
+  from tensorflow.lite.python import metrics_portable as metrics
+except ImportError:
+  from tensorflow.lite.python import metrics_nonportable as metrics
 # pylint: enable=g-import-not-at-top

 class Delegate(object):
   """Python wrapper class to manage TfLiteDelegate objects."""
@@ -321,6 +328,9 @@ class Interpreter(object):
       delegate._get_native_delegate_pointer())  # pylint: disable=protected-access
+    self._signature_defs = self.get_signature_list()
+    self._metrics = metrics.TFLiteMetrics()
+    self._metrics.increase_counter_interpreter_creation()

   def __del__(self):
     # Must make sure the interpreter is destroyed before things that are used
     # by it, like the delegates. NOTE this only works on CPython.
```

I removed these lines from the copy of interpreter.py after installing the wheel and the rest of the code works fine. It appears that these metrics are needed for unit testing, but something needs to be changed so they are not used when the tflite package is run on a system where TensorFlow itself is not present, e.g. a small platform like the Raspberry Pi. |
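One possible mitigation, sketched here as an assumption on my part (not the fix that was eventually adopted): fall back to a no-op metrics object when neither tensorflow module can be imported, so tflite_runtime keeps working on platforms without the full tensorflow package.

```python
try:
  from tensorflow.lite.python import metrics_portable as metrics
except ImportError:
  try:
    from tensorflow.lite.python import metrics_nonportable as metrics
  except ImportError:
    class _NoopMetrics:
      """Minimal stand-in that silently ignores metric reporting."""
      def increase_counter_interpreter_creation(self):
        pass

    class metrics:  # module-like shim exposing the same attribute name
      TFLiteMetrics = _NoopMetrics
```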
tensorflowtensorflow | Update normalization_v2.py | Bug | Mirrors cl/362162412. Fixes GitHub #47608. |
tensorflowtensorflow | keras.io: list of topics on the right cannot be navigated | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub; the TensorFlow docs are open source — to get involved, read the documentation contributor guide. URL(s) with the issue: this is one example; the issue applies to the whole site. Description of issue / what needs to change: the list of topics on the right cannot be navigated at keras.io. Clear description: when navigating the docs at keras.io, if the list of topics on the right is longer than the browser height, you cannot reach the topics at the bottom; there is no scroll bar on that panel, so those topics are unreachable. Suggested solution: the list is currently fixed in position; it should have its own scroll bar or move along with the rest of the page. Many thanks. |
tensorflowtensorflow | Keras Tuner: incorrect parameter for model training | Bug | URL with the issue: "Instantiate the tuner and perform hypertuning". Description of issue / what needs to change: in the second-to-last code cell, where the hypermodel is re-instantiated and trained with the optimal number of epochs, the test set is incorrectly used for the fit process. The train set should be used for this step, since we then proceed to evaluate the model on the test set in the final code block. Is the link to the source code correct? Yes. Are all parameters defined and formatted correctly? Yes. Are return values defined? Yes. Are you planning to also submit a pull request to fix the issue? Yes, here. |
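A sketch of the suggested correction; the variable names (`tuner`, `best_hps`, `img_train`, `label_train`, `img_test`, `label_test`, `best_epoch`) follow the tutorial's conventions and are assumptions here:

```python
# Retrain the hypermodel on the training split only ...
hypermodel = tuner.hypermodel.build(best_hps)
hypermodel.fit(img_train, label_train, epochs=best_epoch, validation_split=0.2)

# ... and keep the test split for the final evaluation.
eval_result = hypermodel.evaluate(img_test, label_test)
```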
tensorflowtensorflow | gpu and cpu gradient diverge in tf 2 4 for approximate gelu activation | Bug | hi upon upgrade from tf 2 2 to tf 2 4 my team notice a problem with an approximate version of gelu show below as gelu approximate which exist also in tf v2 4 as tf nn gelu approximate true the gradient for gelu approximate calculate on the gpu diverge from the gradient on the cpu as the input get far from 0 since the implementation of gelu approximate consist of primitive tanh pow etc we should make sure something more fundamental isn t break in tf please see the follow reproduce code which print gpu and cpu gradient be not close for v2 4 but not for v2 2 python gelu problem py import numpy as np import panda as pd import tensorflow as tf import matplotlib pyplot as plt def gelu approximate x copy and inline from tf nn gelu approximate true which exist in tf v2 4 but not tf v2 2 return 0 5 x 1 0 tf tanh 0 7978845608028654 x 0 044715 tf pow x 3 def gelu gradient x device with tf gradienttape as tape tape watch x with tf device device y gelu approximate x return tape gradient y x def main print f tf version be tf version x tf linspace 500 0 500 0 500 cpu gelu gradient x cpu 0 gpu gelu gradient x gpu 0 abs error np abs cpu gpu df pd dataframe dict cpu cpu gpu gpu abs error ab error index x try np testing assert allclose cpu gpu atol 1e 3 except assertionerror as e print gpu and cpu gradient be not close print e df plot title cpu vs gpu gradient for tf nn gelu approximate true plt show if name main main output on our system nvidia smi we d mar 10 15 49 02 2021 nvidia smi 450 80 02 driver version 450 80 02 cuda version 11 0 gpu name persistence m bus i d disp a volatile uncorr ecc fan temp perf pwr usage cap memory usage gpu util compute m mig m 0 tesla v100 pcie off 00000000 04 00 0 off 0 n a 39c p0 42w 250w 29877mib 32510mib 0 default n a 1 tesla v100 pcie off 00000000 06 00 0 off 0 n a 40c p0 38w 250w 31045mib 32510mib 0 default n a 2 tesla v100 pcie off 00000000 07 00 0 off 0 n a 40c p0 37w 250w 31045mib 32510mib 0 default n a 3 tesla v100 pcie off 00000000 08 00 0 off 0 n a 41c p0 38w 250w 31045mib 32510mib 0 default n a process gpu gi ci pid type process name gpu memory i d i d usage 0 n a n a 4126251 c 3 7 x dist bin python3 7 29871mib 1 n a n a 4126251 c 3 7 x dist bin python3 7 31039mib 2 n a n a 4126251 c 3 7 x dist bin python3 7 31039mib 3 n a n a 4126251 c 3 7 x dist bin python3 7 31039mib python um gelu problem 2021 03 10 15 48 44 908579 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 10 1 tf version be 2 4 0 2021 03 10 15 48 50 149409 I tensorflow compiler jit xla cpu device cc 41 not create xla device tf xla enable xla device not set 2021 03 10 15 48 50 150776 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcuda so 1 2021 03 10 15 48 50 220817 I tensorflow core common runtime gpu gpu device cc 1720 find device 0 with property pcibusid 0000 04 00 0 name tesla v100 pcie 32 gb computecapability 7 0 coreclock 1 38ghz corecount 80 devicememorysize 31 75gib devicememorybandwidth 836 37gib s 2021 03 10 15 48 50 220842 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 10 1 2021 03 10 15 48 50 224004 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 10 2021 03 10 15 48 50 224038 I tensorflow stream executor platform default dso loader cc 49 successfully open 
dynamic library libcublaslt so 10 2021 03 10 15 48 50 226493 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcufft so 10 2021 03 10 15 48 50 227454 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcurand so 10 2021 03 10 15 48 50 229775 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusolver so 10 2021 03 10 15 48 50 231379 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusparse so 10 2021 03 10 15 48 50 235484 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudnn so 7 2021 03 10 15 48 50 237107 I tensorflow core common runtime gpu gpu device cc 1862 add visible gpu device 0 2021 03 10 15 48 50 237501 I tensorflow core platform cpu feature guard cc 142 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation sse3 sse4 1 sse4 2 avx avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2021 03 10 15 48 50 239017 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2021 03 10 15 48 50 239769 I tensorflow core common runtime gpu gpu device cc 1720 find device 0 with property pcibusid 0000 04 00 0 name tesla v100 pcie 32 gb computecapability 7 0 coreclock 1 38ghz corecount 80 devicememorysize 31 75gib devicememorybandwidth 836 37gib s 2021 03 10 15 48 50 239789 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 10 1 2021 03 10 15 48 50 239805 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 10 2021 03 10 15 48 50 239819 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 10 2021 03 10 15 48 50 239832 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcufft so 10 2021 03 10 15 48 50 239845 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcurand so 10 2021 03 10 15 48 50 239858 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusolver so 10 2021 03 10 15 48 50 239872 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusparse so 10 2021 03 10 15 48 50 239885 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudnn so 7 2021 03 10 15 48 50 241280 I tensorflow core common runtime gpu gpu device cc 1862 add visible gpu device 0 2021 03 10 15 48 50 241307 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 10 1 2021 03 10 15 48 51 004798 I tensorflow core common runtime gpu gpu device cc 1261 device interconnect streamexecutor with strength 1 edge matrix 2021 03 10 15 48 51 004834 I tensorflow core common runtime gpu gpu device cc 1267 0 2021 03 10 15 48 51 004839 I tensorflow core common runtime gpu gpu device cc 1280 0 n 2021 03 10 15 48 51 010315 I tensorflow core common runtime gpu gpu device cc 1406 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 1908 mb memory physical gpu device 0 name tesla v100 pcie 32 gb pci bus i d 0000 04 00 0 compute capability 7 0 
gpu and cpu gradient be not close not equal to tolerance rtol 1e 07 atol 0 001 mismatch element 466 500 93 2 max absolute difference 3 1899037 max relative difference 1 x array 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 y array 3 189903e 00 3 151703e 00 3 113807e 00 3 076217e 00 3 038931e 00 3 001947e 00 2 965265e 00 2 928882e 00 2 892799e 00 2 857012e 00 2 821522e 00 2 786328e 00 gelu problem this particular system be run debian linux version 9 13 python be version 3 7 9 tf be version 2 4 0 cudnn be 7 6 5 and cuda be 10 1 thank you ben and team |
tensorflowtensorflow | runtime error of execute hlo when add spmd pass | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 linux n130 024 068 4 14 81 bm 26 amd64 1 smp debian 4 14 81 bm 26 mon sep 14 09 46 45 utc 2020 x86 64 gnu linux mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary source tensorflow version use command below tf master branch with commit i d a4ac4894d536bde05dafe74a3a3fec3aa0cb93b8 python version 3 8 5 bazel version if compile from source 3 7 2 gcc compiler version if compile from source gcc debian 8 3 0 6 8 3 0 cuda cudnn version v11 0 221 gpu model and memory v100 32 gb you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior I want to test spmd pass the input be a single device hlo ir with spmd sharde the program firstly would modify the single device hlo ir to 4 device hlo ir then it would compile the modify hlo ir to and executable after that the pjrt client would run the executable on 4 gpu and return the result currently I use the late tf code tf master branch with commit i d a4ac4894d536bde05dafe74a3a3fec3aa0cb93b8 it can successfully compile the code however it fail to execute the code and return the expect result describe the expect behavior it should compile the hlo execute the executable and return the result successfully I e 3 3 7 7 standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook there be four step to reproduce the problem step1 tensorflow compiler xla tool build bazel new add tf cc binary to build the pjrt demo tf cc binary name pjrt demo testonly true srcs pjrt client main cc dep com google absl absl string tensorflow compiler xla pjrt gpu device tensorflow compiler xla pjrt pjrt client tensorflow compiler xla pjrt cpu device tensorflow compiler xla tool hlo module loader spmd pass tensorflow compiler xla service spmd spmd partitioner tensorflow compiler xla service hlo pass pipeline tensorflow compiler xla service computation layout tensorflow compiler xla service layout assignment if cuda or rocm tensorflow compiler xla service gpu plugin step2 tensorflow compiler xla tool pjrt client main cc new add file to present the pjrt demo in cpp cpp include include include include tensorflow compiler xla literal h include tensorflow compiler xla literal util h include tensorflow compiler xla pjrt cpu device h include tensorflow compiler xla pjrt gpu device h include tensorflow compiler xla pjrt pjrt client h include tensorflow compiler xla status h include tensorflow compiler xla statusor h include tensorflow compiler xla tool hlo module loader h include tensorflow core platform init main h include tensorflow core platform log h include tensorflow compiler xla service spmd spmd partitioner h include tensorflow compiler xla service hlo pass pipeline h include tensorflow compiler xla service hlo verifier 
h include tensorflow compiler xla service computation layout h include tensorflow compiler xla service layout assignment h use namespace xla use namespace xla spmd namespace void assignlayout hlomodule m computationlayout entry computation layout channellayoutconstraint channel constraint nullptr layoutassignment layout assignment entry computation layout layoutassignment instructioncanchangelayout channel constraint channel constraint if layout assignment run m status ok log fatal layout assign fail struct pjrtdemoargs pjrtdemoargs platform gpu reference platform default print literal false run test hlo pass true run reference hlo pass true use large float range false todo b 68721786 these tolerance be set to match the value in the isolation test the goal be to lower these to 0 001 abs error bind 0 1 rel error bind 0 1 input format hlo input text iteration 1 print hlo ir false num replicas 1 dump to file false speficy layout false execute share false enable spmd pass false std string platform std string reference platform bool print literal bool run test hlo pass bool run reference hlo pass bool use large float range float ab error bind float rel error bind std string input format std string input text int iteration bool print hlo ir int num replicas bool dump to file bool speficy layout bool execute share bool enable spmd pass pjrtdemoarg get args int argc char argv pjrtdemoargs opt std vector flag list tensorflow flag platform opt platform the test platform that the hlo module will be execute on gpu cpu etc tensorflow flag reference platform opt reference platform the reference platform that hlo module will be execute on the result produce on the reference platform will be compare against the result produce on the test platform a value of default will use the tpu interpreter as a reference if the test platform be a tpu and interpreter otherwise if the flag value be the empty string then the module will not be run on a reference platform at all tensorflow flag print literal opt print literal print the input and result literal to stdout tensorflow flag run test hlo pass opt run test hlo pass run hlo pass pipeline for the test platform on the hlo module before run the module on the test platform this should be set to true if the hlo module be unoptimized and set to false if the hlo module already have be optimize tensorflow flag run reference hlo pass opt run reference hlo pass run hlo pass pipeline for the reference platform on the hlo module before run the module on the reference platform in general if the give hlo module be optimize for a platform other than the reference this be necessary because some hlo pass be legalization pass which must be run prior to code generation tensorflow flag use large float range opt use large float range generate float point value use a large uniform log distribution as oppose to a small uniform distribution tensorflow flag abs error bind opt abs error bind the absolute error bind use when compare the test and reference result tensorflow flag rel error bind opt rel error bind the relative error bind use when compare the test and reference result tensorflow flag input format opt input format the format of the input file valid value n hlo hlo textual format n pb xla hloproto in binary proto format n pbtxt xla hloproto in text proto format tensorflow flag input text opt input text a path to a file contain the hlo module can also pass a this as argv 1 but this flag be more explicit tensorflow flag iteration opt iteration the number of time to run the module 
each iteration will be run with different input datum tensorflow flag print hlo ir opt print hlo ir decide whether print the hlo ir or not tensorflow flag num replicas opt num replicas decide how many replica to use in data paralleism tensorflow flag dump to file opt dump to file dump to stdio default be true tensorflow flag speficy layout opt speficy layout decide whether speficy layout tensorflow flag execute share opt execute share decide whether use executable share tensorflow flag enable spmd pass opt enable spmd pass decide whether enable spmd pass xla appenddebugoptionsflag flag list the usage string include the message at the top of the file the debugoption flag and the flag define above bool parse ok tensorflow flag parse argc argv flag list tensorflow port initmain argc argv if parse ok log qfatal can not parse cmd datum return opt int main int argc char argv tensorflow port initmain argc argv pjrtdemoargs opt get args argc argv vlog 0 use input datum opt input text vlog 0 use replicas opt num replica if opt input text length 0 log fatal please speficy the input data hlo text load hlomodule from file std string hlo filename opt input text std function config modifier hook xla hlomoduleconfig config config set seed 42 std unique ptr test module loadmodulefromfile hlo filename xla hlo module loader detail config txt config modifier hook valueordie int num device opt num replica if opt enable spmd pass vlog 0 enable spmd pass some test backpropfilter convs set this flag false to test two different path of the implementation spmdpartitioneroption option option conv halo exchange always on lhs true option allow module signature change true option choose fast windowed einsum over mem false auto collective op creator getdefaultcollectiveopscreator num device num replicas opt num replica do not use all gather for pattern match purpose as the partitioner might create reshape transpose around it collective op creator create cross partition all gather nullptr note run spmd partition if necessary hlopasspipeline pass spmd partitioning pass addpass layout sensitive false allow mixed precision true pass addpass num device num replicas opt num replicas option collective op creator pass addpass layout sensitive false allow mixed precision true if opt speficy layout vlog 0 speficy layout layout assignment test cc 888 layoutassignment computationlayout computation layout test module entry computation computeprogramshape false vlog 0 before computation layout n computation layout tostre computation layout settodefaultlayoutifempty vlog 0 after computation layout n computation layout tostre shape param shape shapeutil makeshape f32 2 2 if computation layout mutable parameter layout 0 copylayoutfromshape param shape status ok log fatal error of assign computation layout for param 0 if computation layout mutable parameter layout 1 copylayoutfromshape param shape status ok log fatal error of assign computation layout for param 1 computation layout mutable result layout resetlayout layoututil makelayout 1 0 channellayoutconstraint channel constraint assignlayout test module get computation layout channel constraint for int index 1 index 3 index vlog 0 channel tostre if opt enable spmd pass if pass run test module get status ok log fatal error of run hlo pass vlog 0 after spmd pass n test module tostre const xla hlomoduleproto test module proto test module toproto run it use jax c runtime pjrt get a gpu client std unique ptr client xla getgpuclient asynchronous true xla gpuallocatorconfig distribute 
client nullptr node i d 0 valueordie log info compile the code compile xlacomputation to pjrtexecutable xla xlacomputation xla computation test module proto if opt speficy layout computationlayout computation layout test module entry computation computeprogramshape false vlog 0 before computation layout n computation layout tostre computation layout settodefaultlayoutifempty vlog 0 after computation layout n computation layout tostre shape param shape shapeutil makeshape f32 2 2 if computation layout mutable parameter layout 0 copylayoutfromshape param shape status ok log fatal error of assign computation layout for param 0 if computation layout mutable parameter layout 1 copylayoutfromshape param shape status ok log fatal error of assign computation layout for param 1 computation layout mutable result layout resetlayout layoututil makelayout 1 0 channellayoutconstraint channel constraint assignlayout test module get computation layout channel constraint for int index 1 index 3 index vlog 0 channel set xla gpu disable multi stream false compile option executable build option mutable debug option set xla gpu use random stream true compile option executable build option set num replicas opt num replicas cy compile option executable build option set run i d runid deviceassignment device assignment opt num replicas 1 for int index 0 indexaddressable device size index pjrtdevice device client addressable device at index device assignment index 0 device i d vlog 0 device i d device i d compile option executable build option set device assignment device assignment std unique ptr executable client compile xla computation compile option valueordie prepare input xla literal literal x xla literalutil creater2 1 0f 2 0f 3 0f 4 0f xla literal literal y xla literalutil creater2 1 0f 1 0f 1 0f 1 0f auto device list client addressable device if opt execute share vlog 0 run iteration opt iteration for int iter 0 iter opt iteration iter vlog 0 enable execute share std vector result vlog 0 device list device list size tensorflow thread threadpool execution pool nullptr tensorflow thread threadpool pool tensorflow env default replicas opt num replicas auto a i d runid for int64 index 0 index opt num replicas index pool schedule index vlog 0 submit task on device index std unique ptr param x client bufferfromhostliteral literal x device list index valueordie std unique ptr param y client bufferfromhostliteral literal y device list index valueordie execute on gpu xla executeoption execute option execute option context const executecontext a i d std vector result executable executesharde param x get param y get device list index execute option valueordie vlog 0 do task execute on device index get result std share ptr result literal result 0 toliteral valueordie log info result result literal vlog 0 task be execute do from main thread return 0 std unique ptr param x client bufferfromhostliteral literal x client addressable device 0 valueordie std unique ptr param y client bufferfromhostliteral literal y client addressable device 0 valueordie std unique ptr param x1 client bufferfromhostliteral literal x client addressable device 1 valueordie std unique ptr param y1 client bufferfromhostliteral literal y client addressable device 1 valueordie std unique ptr param x2 client bufferfromhostliteral literal x client addressable device 2 valueordie std unique ptr param y2 client bufferfromhostliteral literal y client addressable device 2 valueordie std unique ptr param x3 client bufferfromhostliteral literal x client 
addressable device 3 valueordie std unique ptr param y3 client bufferfromhostliteral literal y client addressable device 3 valueordie std vector tmp buffer for int index 0 index param x client bufferfromhostliteral literal x client addressable device index valueordie std unique ptr param y client bufferfromhostliteral literal y client addressable device index valueordie const std vector buf param x get param y get tmp buffer push back std move buf absl span input buffer absl makespan tmp buffer execute on gpu xla executeoption execute option one vector for each device std vector result executable execute param x get param y get param x1 get param y1 get param x2 get param y2 get param x3 get param y3 get execute option valueordie executable executesharde param x get param y get nullptr execute option valueordie get result std share ptr result literal result 0 0 toliteral valueordie log info result result literal return 0 step3 fn hlo txt a text that indicate a computation a 2x2 dot b 2x2 with spmd sharding shell hlomodule module entry entry p0 f32 2 2 p1 f32 2 2 f32 2 2 p0 f32 2 2 1 0 parameter 0 parameter replication false sharde maximal device 0 p1 f32 2 2 1 0 parameter 1 parameter replication false sharde maximal device 0 root my add f32 2 2 1 0 dot f32 2 2 1 0 p0 f32 2 2 1 0 p1 lhs contracting dim 1 rh contracting dim 0 sharding replicate step4 compile and run the code bash bazel bin tensorflow compiler xla tool pjrt demo input text path to your fn hlo txt num replicas 4 execute share enable spmd pass other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach the full log be as follow bash 2021 03 09 11 11 38 345132 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2021 03 09 11 11 38 349232 I tensorflow compiler xla tool pjrt client main cc 184 use input datum tensorflow compiler xla tool hlo file fn hlo txt 2021 03 09 11 11 38 349258 I tensorflow compiler xla tool pjrt client main cc 185 use replicas 4 2021 03 09 11 11 38 351253 I tensorflow compiler xla tool pjrt client main cc 270 before spmd pass hlomodule module entry entry p0 f32 2 2 p1 f32 2 2 f32 2 2 p0 f32 2 2 1 0 parameter 0 parameter replication false sharde maximal device 0 p1 f32 2 2 1 0 parameter 1 parameter replication false sharde maximal device 0 root my add f32 2 2 1 0 dot f32 2 2 1 0 p0 f32 2 2 1 0 p1 lhs contracting dim 1 rh contracting dim 0 sharding replicate 2021 03 09 11 11 38 354686 I tensorflow compiler xla tool pjrt client main cc 279 after spmd pass hlomodule module add x f32 y f32 f32 x f32 parameter 0 y f32 parameter 1 root add f32 add f32 x f32 y add 1 x 1 f32 y 1 f32 f32 x 1 f32 parameter 0 y 1 f32 parameter 1 root add 1 f32 add f32 x 1 f32 y 1 entry entry spmd param f32 2 2 param 1 f32 2 2 f32 2 2 partition i d u32 partition i d constant u32 constant 0 compare 1 pre compare u32 partition i d u32 constant direction eq broadcast 2 pre 2 2 1 0 broadcast pre compare 1 dimension param f32 2 2 1 0 parameter 0 parameter replication false sharde maximal device 0 constant 1 f32 constant 0 broadcast 3 f32 2 2 1 0 broadcast f32 constant 1 dimension select 1 f32 2 2 1 0 select pre 2 2 1 0 broadcast 2 f32 2 2 1 0 param f32 2 2 1 0 broadcast 3 all reduce 1 f32 2 2 1 0 all reduce f32 2 2 1 0 select 1 channel i d 2 replica group 0 1 2 3 to apply add 1 param 1 f32 2 2 1 0 parameter 1 parameter replication false sharde maximal device 
0 select f32 2 2 1 0 select pre 2 2 1 0 broadcast 2 f32 2 2 1 0 param 1 f32 2 2 1 0 broadcast 3 all reduce f32 2 2 1 0 all reduce f32 2 2 1 0 select channel i d 1 replica group 0 1 2 3 to apply add root dot f32 2 2 1 0 dot f32 2 2 1 0 all reduce 1 f32 2 2 1 0 all reduce lhs contracting dim 1 rh contracting dim 0 2021 03 09 11 11 38 355791 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcuda so 1 2021 03 09 11 11 39 274557 I tensorflow compiler xla service service cc 169 xla service 0x55b7a4c08a00 initialize for platform cuda this do not guarantee that xla will be use device 2021 03 09 11 11 39 274596 I tensorflow compiler xla service service cc 177 streamexecutor device 0 tesla v100 sxm2 32 gb compute capability 7 0 2021 03 09 11 11 39 274607 I tensorflow compiler xla service service cc 177 streamexecutor device 1 tesla v100 sxm2 32 gb compute capability 7 0 2021 03 09 11 11 39 274615 I tensorflow compiler xla service service cc 177 streamexecutor device 2 tesla v100 sxm2 32 gb compute capability 7 0 2021 03 09 11 11 39 274622 I tensorflow compiler xla service service cc 177 streamexecutor device 3 tesla v100 sxm2 32 gb compute capability 7 0 2021 03 09 11 11 39 276501 I tensorflow compiler xla pjrt gpu device cc 125 xla backend allocate 30063919104 byte on device 0 for bfcallocator 2021 03 09 11 11 39 276640 I tensorflow compiler xla pjrt gpu device cc 125 xla backend allocate 30063919104 byte on device 1 for bfcallocator 2021 03 09 11 11 39 276756 I tensorflow compiler xla pjrt gpu device cc 125 xla backend allocate 30063919104 byte on device 2 for bfcallocator 2021 03 09 11 11 39 276865 I tensorflow compiler xla pjrt gpu device cc 125 xla backend allocate 29774197555 byte on device 3 for bfcallocator 2021 03 09 11 11 39 283129 I tensorflow compiler xla tool pjrt client main cc 292 compile the code 2021 03 09 11 11 39 283303 I tensorflow compiler xla tool pjrt client main cc 352 device i d 0 2021 03 09 11 11 39 283317 I tensorflow compiler xla tool pjrt client main cc 352 device i d 1 2021 03 09 11 11 39 283325 I tensorflow compiler xla tool pjrt client main cc 352 device i d 2 2021 03 09 11 11 39 283332 I tensorflow compiler xla tool pjrt client main cc 352 device i d 3 2021 03 09 11 11 39 747483 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 11 2021 03 09 11 11 40 299515 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 11 2021 03 09 11 11 40 485692 f tensorflow stream executor lib statusor cc 34 attempt to fetch value instead of handle error unimplemente request allreduce not implement on gpu replica count 4 partition count 1 group mode kcrossreplicaandpartition operand count 2 nccl support 1 first operand array element type f32 fish bazel bin tensorflow compiler terminate by signal sigabrt abort |
tensorflowtensorflow | Update aliasing in tf.linalg.band_part docs | Bug | URL(s) with the issue. Description of issue / what needs to change (clear description): L50–L66 — `tf.matrix_band_part` in the above lines should be `tf.linalg.band_part`; `tf.matrix_band_part` was moved to `tf.compat.v1.matrix_band_part`. The link below should also be fixed. |
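For reference, a tiny usage example of the current name (the values are illustrative):

```python
import tensorflow as tf

x = tf.reshape(tf.range(9, dtype=tf.float32), (3, 3))
# Keep one sub-diagonal and no super-diagonals; in TF2 this lives under
# tf.linalg.band_part (tf.matrix_band_part is now tf.compat.v1.matrix_band_part).
y = tf.linalg.band_part(x, 1, 0)
print(y.numpy())
```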
tensorflowtensorflow | Typo issue in tf.keras.losses docs | Bug | URL(s) with the issue. Description of issue / what needs to change (clear description): L587 (BinaryCrossentropy) and L748 (SparseCategoricalCrossentropy) — the above lines should read "Note: Using from_logits=True may be more numerically stable", like L667 (CategoricalCrossentropy). |
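For context, a short example of the from_logits=True usage the note refers to (the data values are illustrative):

```python
import tensorflow as tf

# Pass raw logits to the loss rather than sigmoid probabilities.
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)
labels = tf.constant([[1.0], [0.0]])
logits = tf.constant([[2.0], [-1.0]])
print(loss_fn(labels, logits).numpy())
```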
tensorflowtensorflow | MobileNet V3 preprocess_input does nothing | Bug | URL(s) with the issue: the documentation, and the source (L556–L558). Description of issue / what needs to change: this function does nothing — it just returns its first argument — but the docs say it does multiple things: "The preprocessed data are written over the input data ... The input pixel values are scaled between -1 and 1." The source seems correct, in the sense that MobileNetV3 appears to work correctly with inputs that have 3 colour channels and values in the range [0, 255]. I'm guessing the docs here have just been copy-pasted from the MobileNetV2 docs. |
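A quick check of the reported behaviour (the image contents are arbitrary; the printed `True` is what the report implies, since the V3 model rescales internally):

```python
import numpy as np
from tensorflow.keras.applications import mobilenet_v3

img = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype(np.float32)
out = mobilenet_v3.preprocess_input(img)
print(np.array_equal(img, out))  # True per the report: the input is returned as-is
```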
tensorflowtensorflow | Can we add a check in model.fit on dataset element_spec? | Bug | System information: TensorFlow version you are using: 2.4.1; are you willing to contribute it (yes/no): yes. Describe the feature and the current behavior/state: I have this code:

```python
import tensorflow as tf

length = 500
features = list(range(length))
labels = tf.random.uniform([length], minval=0, maxval=2, dtype=tf.int32)
data = tf.transpose([range(length), tf.random.uniform([length], minval=0, maxval=2, dtype=tf.int32)])
dataset = tf.data.Dataset.from_tensor_slices(data)
dataset.shuffle(length)
train_length = int(length / 5 * 4)
train_data = dataset.take(train_length)
test_data = dataset.skip(train_length)
assert isinstance(train_data.element_spec, tuple) and len(train_data.element_spec) > 0, \
    ("When x is a dataset, its members must be a tuple of either (inputs, targets) or "
     "(inputs, targets, sample_weights). Currently your tuple size is 0.")
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['binary_accuracy'], run_eagerly=True)
model.fit(train_data.batch(10), validation_data=test_data.batch(10), epochs=10)
```

If we ignore the assert, running the code throws the error: "ValueError: No gradients provided for any variable: ['dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0']". The reason for the error may not be obvious: train_data doesn't return the required examples, i.e. either (inputs, targets) or (inputs, targets, sample_weights). I hope the error message can be improved. Would it be OK if we detected len(train_data.element_spec) in a place like tensorflow/python/keras/engine/data_adapter.py (DatasetAdapter) and validated the args there? Will this change the current API, and how? No. Who will benefit from this feature? People writing buggy code, or learning that their dataset is not giving the correct shapes. |
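For contrast, a minimal standalone sketch of the element structure fit() expects here (the data and model below are illustrative, not from the report): the dataset should yield (inputs, targets) pairs.

```python
import tensorflow as tf

length = 500
features = tf.reshape(tf.range(length, dtype=tf.float32), (-1, 1))
labels = tf.random.uniform([length], minval=0, maxval=2, dtype=tf.int32)

# from_tensor_slices on a (features, labels) tuple yields (inputs, targets) pairs,
# which is the structure Model.fit expects from a dataset.
train_data = tf.data.Dataset.from_tensor_slices((features, labels)).batch(10)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(train_data, epochs=1)
```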
tensorflowtensorflow | Invalid | please go to stack overflow for help and support if you open a github issue here be our policy 1 it must be a bug a feature request or a significant problem with the documentation for small doc fix please send a pr instead 2 the form below must be fill out 3 it shouldn t be a tensorboard issue those go here here s why we have that policy tensorflow developer respond to issue we want to focus on work that benefit the whole community e g fix bug and add feature support only help individual github also notify thousand of people when issue be file we want they to see you communicate an interesting problem rather than be redirect to stack overflow system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on a mobile device tensorflow instal from source or binary tensorflow version use command below python version bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory exact command to reproduce you can collect some of this information use our environment capture script you can obtain the tensorflow version with bash python c import tensorflow as tf print tf version git version tf version version describe the problem describe the problem clearly here be sure to convey here why it s a bug in tensorflow or a feature request source code log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach try to provide a reproducible test case that be the bare minimum necessary to generate the problem | |
tensorflowtensorflow | BatchNorm documentation problem in inference equation (and also training) | Bug | In the following link, sqrt is missing/misused in the notebook. (1) During inference, instead of `(batch - self.moving_mean) / sqrt(self.moving_var + epsilon) * gamma + beta` it is written `(batch - self.moving_mean) / (self.moving_var + epsilon) * gamma + beta`. During training, instead of `(batch - mean(batch)) / sqrt(var(batch) + epsilon) * gamma + beta` it is written `(batch - mean(batch)) / (var(batch) + epsilon) * gamma + beta`. |
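Spelled out as runnable code for clarity (a sketch; epsilon and the attribute names mirror the layer's usual parameters, and numpy is used only for sqrt):

```python
import numpy as np

def batchnorm_inference(batch, moving_mean, moving_var, gamma, beta, epsilon=1e-3):
    # Normalise with the running statistics, then scale and shift.
    return gamma * (batch - moving_mean) / np.sqrt(moving_var + epsilon) + beta

def batchnorm_training(batch, gamma, beta, epsilon=1e-3):
    # Normalise with the statistics of the current batch.
    mean, var = batch.mean(axis=0), batch.var(axis=0)
    return gamma * (batch - mean) / np.sqrt(var + epsilon) + beta
```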
tensorflowtensorflow | Hexagon 685/690: wrong SoC examples for QCS610/QCS410 in the TensorFlow Lite Hexagon delegate docs | Bug | On this page it states that QCS610/QCS410 are SoC examples for Hexagon 690. However, according to Qualcomm's product info, both SoCs (QCS610 and QCS410) contain a Hexagon 685 rather than a Hexagon 690. |
tensorflowtensorflow | xla experimental compile true fail inside tf datum dataset | Bug | the experimental compile true option for tf function work outside of a tf datum dataset but fail once I try to map it over a dataset python import tensorflow as tf def log10 x numerator tf math log x denominator tf math log tf constant 10 dtype numerator dtype return numerator denominator tf function experimental compile true def drc input threshold 36 0 release time 80 0 ratio 8 0 makeup gain 0 0 sample rate 44100 downmix buffer tf reduce mean input axis 1 keepdim true alpha release tf exp 1 0 001 sample rate release time y prev tf constant 0 0 dtype tf float32 length tf shape input 0 c tf tensorarray dtype tf float32 size length element shape 1 for I in tf range length if tf abs buffer I 0 000001 x g tf constant 120 0 dtype tf float32 else x g tf multiply 20 0 log10 tf abs buffer I if x g threshold y g tf add threshold tf divide tf subtract buffer I threshold ratio else y g x g x l tf subtract x g y g y l tf add tf multiply alpha release y prev tf multiply tf subtract 1 0 alpha release x l y l tf math pow 10 0 makeup gain y l 20 0 c c write I y l y prev y l out c stack return tf stack signal 0 tf squeeze out signal 1 tf squeeze out axis 1 signal tf random normal 100000 2 dtype tf float32 out tf drc signal work ds tf datum dataset from tensor slice signal ds ds map drc batch 5 for b in ds print b fail I get the follow error invalidargumenterror function invoke by the follow node be not compilable node partitionedcall partitionedcall tin dt float dt float tout dt float xlahasreferencevar false xlamustcompile true collective manager ids input hostmem read only resource input config config proto n 007 n 003cpu 020 001 n 007 n 003gpu 020 0002 002j 0008 001 202 001 000 executor type f inference drc 250598 device job localhost replica 0 task 0 device cpu 0 args 0 unknown uncompilable node partitionedcall could not instantiate call inference drc 250598 stacktrace node partitionedcall function partitionedcall op makeiterator it work as expect if I set experimental compile to false I m use tensorflow 2 4 0 |
tensorflowtensorflow | pass constant to costume rnn cell not work | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 manjaro 20 2 1 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary binary tensorflow version use command below 2 4 1 python version 3 9 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version 11 2 gpu model and memory geforce gtx 1080 6858 mb memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior when pass the constant argument to a tf keras layer rnn model instance with a costume rnn cell the python script crash describe the expect behavior the constant argument be pass to the costume rnn cell call function and no error occur standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook import tensorflow as tf from tensorflow import kera from tensorflow keras import layer class dummy rnn cell tf keras model def init self unit super dummy rnn cell self init self gru tf keras layer grucell unit recurrent initializer glorot uniform self unit self gru unit self state size self gru state size self output size self gru output size def call self input at t state at t constant output state self gru input at t state at t return output state def get initial state input none batch size none dtype none return self gru get initial state input batch size dtype class dummy model tf keras model def init self unit super dummy model self init self rnn tf keras layer rnn dummy rnn cell unit true def call self inp seq inp 0 const inp 1 out self rnn seq constant const return out model dummy model 1 seq 1 2 1 2 1 2 1 2 const 3 3 3 3 dataset tf datum dataset from tensor slice seq const for inp in dataset print model inp other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach 2021 03 04 13 59 52 420241 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2021 03 04 13 59 53 466141 I tensorflow compiler jit xla cpu device cc 41 not create xla device tf xla enable xla device not set 2021 03 04 13 59 53 466826 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcuda so 1 2021 03 04 13 59 53 492760 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 03 04 13 59 53 493343 I tensorflow core common runtime gpu gpu device cc 1720 find device 0 with property pcibusid 0000 0d 00 0 name geforce gtx 1080 computecapability 6 1 coreclock 1 797ghz corecount 20 devicememorysize 7 91gib devicememorybandwidth 298 32gib s 2021 03 04 13 59 53 
493366 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2021 03 04 13 59 53 495658 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 11 2021 03 04 13 59 53 495704 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 11 2021 03 04 13 59 53 496468 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcufft so 10 2021 03 04 13 59 53 496635 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcurand so 10 2021 03 04 13 59 53 497347 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusolver so 11 2021 03 04 13 59 53 497917 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusparse so 11 2021 03 04 13 59 53 498048 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudnn so 8 2021 03 04 13 59 53 498137 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 03 04 13 59 53 498663 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 03 04 13 59 53 499094 I tensorflow core common runtime gpu gpu device cc 1862 add visible gpu device 0 2021 03 04 13 59 53 499318 I tensorflow core platform cpu feature guard cc 142 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2021 03 04 13 59 53 500255 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2021 03 04 13 59 53 500322 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 03 04 13 59 53 500769 I tensorflow core common runtime gpu gpu device cc 1720 find device 0 with property pcibusid 0000 0d 00 0 name geforce gtx 1080 computecapability 6 1 coreclock 1 797ghz corecount 20 devicememorysize 7 91gib devicememorybandwidth 298 32gib s 2021 03 04 13 59 53 500786 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2021 03 04 13 59 53 500798 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcubla so 11 2021 03 04 13 59 53 500809 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcublaslt so 11 2021 03 04 13 59 53 500820 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcufft so 10 2021 03 04 13 59 53 500830 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcurand so 10 2021 03 04 13 59 53 500840 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusolver so 11 2021 03 04 13 59 53 500850 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcusparse so 11 2021 
03 04 13 59 53 500859 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudnn so 8 2021 03 04 13 59 53 500902 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 03 04 13 59 53 501396 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 03 04 13 59 53 501948 I tensorflow core common runtime gpu gpu device cc 1862 add visible gpu device 0 2021 03 04 13 59 53 501973 I tensorflow stream executor platform default dso loader cc 49 successfully open dynamic library libcudart so 11 0 2021 03 04 13 59 53 884072 I tensorflow core common runtime gpu gpu device cc 1261 device interconnect streamexecutor with strength 1 edge matrix 2021 03 04 13 59 53 884104 I tensorflow core common runtime gpu gpu device cc 1267 0 2021 03 04 13 59 53 884110 I tensorflow core common runtime gpu gpu device cc 1280 0 n 2021 03 04 13 59 53 884278 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 03 04 13 59 53 884769 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 03 04 13 59 53 885213 I tensorflow stream executor cuda cuda gpu executor cc 941 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2021 03 04 13 59 53 885636 I tensorflow core common runtime gpu gpu device cc 1406 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 6858 mb memory physical gpu device 0 name geforce gtx 1080 pci bus i d 0000 0d 00 0 compute capability 6 1 2021 03 04 13 59 53 885875 I tensorflow core common runtime process util cc 146 create new thread pool with default inter op set 2 tune use inter op parallelism thread for good performance traceback most recent call last file home moritz project bitblade tensorflow dummy py line 39 in print model inp file usr lib python3 9 site package tensorflow python keras engine base layer py line 1012 in call output call fn input args kwargs file home moritz project bitblade tensorflow dummy py line 31 in call out self rnn seq constant const file usr lib python3 9 site package tensorflow python keras layers recurrent py line 717 in call return super rnn self call input kwargs file usr lib python3 9 site package tensorflow python keras engine base layer py line 1008 in call self maybe build input file usr lib python3 9 site package tensorflow python keras engine base layer py line 2710 in maybe build self build input shape pylint disable not callable file usr lib python3 9 site package tensorflow python keras layers recurrent py line 578 in build self cell build step input shape file usr lib python3 9 site package tensorflow python keras engine training py line 407 in build raise valueerror valueerror currently you can not build your model if it have positional or keyword argument that be not input to the model but be require for its call method instead in order to instantiate and build your model call your model on real tensor datum with all expect call argument |
tensorflowtensorflow | Wrong output values in TFLite_Detection_PostProcess operation when max_classes_per_detection > 1 | Bug | System information: OS platform and distribution: Ubuntu 18.04; TensorFlow installed from: PyPI; TensorFlow version: 1.15.2; Python version: 3.6.9. Describe the current behavior: when you add the NMS operation while exporting the TFLite graph with the option max_classes_per_detection > 1, you get more than one class per detection as expected, but the class ids and their corresponding scores are not consecutive in the output array. Instead, the output array has positions without values that only contain garbage memory. For example, with max_classes_per_detection=2 you get: outputs[0] = detection1 top-1 class id, outputs[1] = garbage, outputs[2] = garbage, outputs[3] = detection1 top-2 class id, outputs[4] = garbage, outputs[5] = garbage. Describe the expected behavior: outputs[0] = detection1 top-1 class id, outputs[1] = detection1 top-2 class id, outputs[2] = detection2 top-1 class id, and so on. Standalone code to reproduce the issue — export the model with:

```bash
python3 export_tflite_ssd_graph.py \
  --pipeline_config_path=configs/ssdlite_mobiledet_cpu_320x320_coco_sync_4x4.config \
  --trained_checkpoint_prefix=ssdlite_model/ckpt-100000 \
  --output_directory=ssdlite_tflite \
  --max_detections=10 \
  --max_classes_per_detection=2 \
  --add_postprocessing_op=true
```

```python
# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="ssdlite_tflite/model.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test the model.
input_shape = input_details[0]['shape']
input_data = load_image_into_numpy_array(image_paths[0])
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# get_tensor() returns a copy of the tensor data.
count = int(interpreter.get_tensor(output_details[3]['index'])[0])
boxes = interpreter.get_tensor(output_details[0]['index'])[0][:count]
classes = interpreter.get_tensor(output_details[1]['index'])[0][:count]
scores = interpreter.get_tensor(output_details[2]['index'])[0][:count]
# classes[1] and scores[1] will contain random numbers from unassigned memory slots.
```

I think that the error stems from this line of code (L710): box_offset should have the value of output_box_index, which is already incremented in the inner loop where the top classes of the detection are assigned — or move the output_box_index increment out of the inner loop to really make it account for the number of detections. |
tensorflowtensorflow | tf datum zip and tf datum cache throw cachefile alreadyexistserror and or delete valid cachefile | Bug | system information run on ubuntu 20 04 use the docker container 2 4 1 gpu jupyter also occur on google colab environment run tf 2 4 1 describe the current behavior tf datum dataset cache and tf datum dataset zip do not operate well together in certain situation where there be a hierarchy between dataset but nevertheless they be both desire and cache imagine the scenario of an audio datum pipeline where client extract spectrogram and use those spectrogram to extract mfccs furthermore the user want to retain the spectrogram dataset at the end of the pipeline and include in the final zip operation now if the spectrogram dataset be cache to disk to allow for fast feature extraction later say with different hyperparameter user will be face the exception alreadyexistserror there appear to be a concurrent cache iterator run cache lockfile already exist currently the user can sort of get behind this behavior by simply move the cache operation on the intermediary dataset to after the map operation that create the conflicting dataset this will introduce a performance hit however unless all the dataset upstream of the intermediary dataset be cache which bring we to the second issue even if the upstream dataset be cache there be a bug behavior that be cause the cache file to be delete after an iteration over the entire zipped dataset be complete this second bug essentially render the workaround useless when the user want to cache both the downstream and upstream dataset describe the expect behavior the expect or ideal behavior would be the ability of cache retain intermediary dataset without a problem this would make intuitive sense for user build multi step pipeline with the tf datum api standalone code to reproduce the issue the notebook below contain the problematic case as well as the workaround describe and its demise link to google colab notebook other info log alreadyexistserror traceback most recent call last usr local lib python3 7 dist package tensorflow python eager context py in execution mode mode 2112 ctx executor executor new 2113 yield 2114 finally 9 frame alreadyexistserror there appear to be a concurrent cache iterator run cache lockfile already exist spectrogram 0 lockfile if you be sure no other run tf computation be use this cache prefix delete the lockfile and re initialize the iterator lockfile content create at 1614817563 op iteratorgetnext during handling of the above exception another exception occur alreadyexistserror traceback most recent call last usr local lib python3 7 dist package tensorflow python eager executor py in wait self 67 def wait self 68 wait for op dispatch in this executor to finish 69 pywrap tfe tfe executorwaitforallpendingnode self handle 70 71 def clear error self |
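A minimal sketch of the pipeline shape described above (feature functions, sizes and the cache path are placeholders, not taken from the report); zipping a file-cached dataset with a dataset derived from it is the pattern that reportedly triggers the AlreadyExistsError on iteration.
import tensorflow as tf
audio = tf.data.Dataset.from_tensor_slices(tf.random.normal([8, 16000]))
spectrograms = audio.map(lambda x: tf.abs(tf.signal.stft(x, 255, 128))).cache("/tmp/spec_cache")
mfccs = spectrograms.map(lambda s: tf.signal.mfccs_from_log_mel_spectrograms(tf.math.log(s + 1e-6)))
combined = tf.data.Dataset.zip((spectrograms, mfccs))
for _ in combined:   # iterating the zipped dataset is where the lockfile error reportedly appears
    pass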
tensorflowtensorflow | tf.keras.backend.in_train_phase / tf.keras.backend.in_test_phase not documented | Bug | TensorFlow 2.4.1: tf.keras.backend.in_train_phase and tf.keras.backend.in_test_phase are not documented. Reference: the TF 2.4.1 docs (here) for the tf.keras.backend library; neither function appears there. |
tensorflowtensorflow | unable to access resource use tf datum experimental service | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 debian 4 19 160 2 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary tensorflow version use command below v2 4 0 49 g85c8b2a817f 2 4 1 python version 3 7 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior tf resource use in map function be not available to worker when use tf datum experimental service distribute to process a dataset when the dataset be iterate over locally this do not produce an error describe the expect behavior the expect behavior would be that locally run call to dataset map and one which use tf datum experimental service distribute work equally standalone code to reproduce the issue import tensorflow as tf key abcdefghijklmnopqrstuvwxyz table tf lookup statichashtable tf lookup keyvaluetensorinitializer list key range len key 1 ds tf datum dataset from tensor slice a b c d e f ds ds map lambda x table lookup x when true this produce an error but work when set to false use data service true if use data service set up datum service dispatcher tf datum experimental service dispatchserver dispatch address dispatch target split 1 worker tf datum experimental service workerserver tf datum experimental service workerconfig dispatcher address dispatch address for in range 2 processing mode parallel epoch ds d apply tf datum experimental service distribute processing mode processing mode service dispatch target ds iter get next other info log include any log or source code that would be helpful to notfounderror traceback most recent call last opt conda lib python3 7 site package tensorflow python eager context py in execution mode mode 2112 ctx executor executor new 2113 yield 2114 finally opt conda lib python3 7 site package tensorflow python data op iterator op py in next internal self 732 output type self flat output type 733 output shape self flat output shape 734 opt conda lib python3 7 site package tensorflow python ops gen dataset op py in iterator get next iterator output type output shape name 2578 except core notokstatusexception as e 2579 op raise from not ok status e name 2580 except core fallbackexception opt conda lib python3 7 site package tensorflow python framework op py in raise from not ok status e name 6861 pylint disable protect access 6862 six raise from core status to exception e code message none 6863 pylint enable protect access opt conda lib python3 7 site package six py in raise from value from value notfounderror fail to get element container localhost do not exist could not find resource localhost 28 node none lookup lookuptablefindv2 op iteratorgetnext during handling of the above exception another exception occur notfounderror traceback most recent call last in 22 
processing mode processing mode service dispatch target 23 24 ds iter get next opt conda lib python3 7 site package tensorflow python data op iterator op py in get next self 798 799 def get next self 800 return self next internal 801 802 def get next as optional self opt conda lib python3 7 site package tensorflow python data op iterator op py in next internal self 737 return self element spec from compatible tensor list ret pylint disable protect access 738 except attributeerror 739 return structure from compatible tensor list self element spec ret 740 741 property opt conda lib python3 7 contextlib py in exit self type value traceback 128 value type 129 try 130 self gen throw type value traceback 131 except stopiteration as exc 132 suppress stopiteration unless it s the same exception that opt conda lib python3 7 site package tensorflow python eager context py in execution mode mode 2114 finally 2115 ctx executor executor old 2116 executor new wait 2117 2118 opt conda lib python3 7 site package tensorflow python eager executor py in wait self 67 def wait self 68 wait for op dispatch in this executor to finish 69 pywrap tfe tfe executorwaitforallpendingnode self handle 70 71 def clear error self notfounderror fail to get element container localhost do not exist could not find resource localhost 28 node none lookup lookuptablefindv2 |
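A hedged workaround sketch, not a confirmed fix: apply the resource-using map after distribute, so the table lookup runs on the local client where the StaticHashTable lives and only the plain dataset goes through the data service workers (at the cost of not parallelizing that map). The names table and dispatcher come from the reporter's snippet above.
ds = tf.data.Dataset.from_tensor_slices(['a', 'b', 'c', 'd', 'e', 'f'])
ds = ds.apply(tf.data.experimental.service.distribute(
    processing_mode='parallel_epochs', service=dispatcher.target))
ds = ds.map(lambda x: table.lookup(x))
print(next(iter(ds)))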
tensorflowtensorflow | Estimator export_saved_model section links to irrelevant guide, or guide is missing the relevant section | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: export_saved_model (SavedModels from Estimators). Description of issue (what needs changing): the API docs link to the guide's saved model section, but the referenced section does not exist in the guide, nor does any mention of the Estimator method export_saved_model. |
tensorflowtensorflow | tflite segmentation fault on inference due to int overflow in im2col | Bug | hello the convolution kernel use im2col to optimize some convolution and use l453 an intermediate buffer of size batch output height output width input depth filter height filter width for that if the size of this buffer overflow an int there be a risk of integer overflow in the im2col method l247 which use int as type for indexing such large buffer can quickly happen in super resolution model with large output resolution the script bellow be base on the how to generate super resolution image use tensorflow lite on android tutorial and illustrate the problem it use a large 700x700 input image to generate a 2800x2800 output image the last convolution need an im2col buffer of size 1 2800 2800 32 3 3 2 257 920 000 which be large than the 32 bit int use for indexing in im2col and result in a segmentation fault with the following stack trace due to some of the index go negative 0 memmove avx unaligned erm at sysdep x86 64 multiarch memmove vec unaligned erm s 366 1 0x00007fffae09dc86 in void tflite optimize op im2col tflite convparam const int int int const int tflite runtimeshape const sign char const tflite runtimeshape const sign char from lib python3 6 site package tensorflow lite python interpreter wrapper pywrap tensorflow interpreter wrapper so 2 0x00007fffae0c7cdd in tflite optimize op hybridconvperchannel tflite convparam const float tflite runtimeshape const sign char const tflite runtimeshape const sign char const tflite runtimeshape const float const tflite runtimeshape const float tflite runtimeshape const sign char float const int tflite runtimeshape const int int bool tflite cpubackendcontext from lib python3 6 site package tensorflow lite python interpreter wrapper pywrap tensorflow interpreter wrapper so 3 0x00007fffae0c861f in tflitestatus tflite op builtin conv evalhybridperchannel tflite op builtin conv kerneltype 2 tflitecontext tflitenode tfliteconvparam tflite op builtin conv opdata tflitetensor const tflitetensor const tflitetensor const tflitetensor tflitetensor the test be do use the late tf nightly 2 5 0 dev20210303 python import os import tensorflow as tf import tensorflow hub as hub input shape 1 700 700 3 model hub load concrete func model signature tf save model default serve signature def key concrete func input 0 set shape input shape converter tf lite tfliteconverter from concrete function concrete func converter optimization tf lite optimize default tflite model converter convert interpreter tf lite interpreter model content tflite model num thread os cpu count interpreter allocate tensor interpreter invoke force the usage of the reference kernel which don t use the im2col optimization circumvent the problem one way to solve it would be to use large integer for indexing or disable the im2col optimization when the intermediate buffer be too large thibaut |
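The buffer-size arithmetic from the report, checked against the 32-bit signed limit:
batch, out_h, out_w, in_depth, k_h, k_w = 1, 2800, 2800, 32, 3, 3
im2col_elems = batch * out_h * out_w * in_depth * k_h * k_w
print(im2col_elems)              # 2257920000
print(im2col_elems > 2**31 - 1)  # True: int indexing of the im2col buffer overflows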
tensorflowtensorflow | Zero gradient for higher-order derivative when using tf.function and tf.scan | Bug | System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Google Colab. TensorFlow version (use command below): v2.4.1-0-g85c8b2a817f. Python version: 3.0. Describe the current behavior: when using the tf.function decorator on a function involving tf.scan, the second-order derivative goes to zero. Describe the expected behavior: I would expect the gradient not to be zero when tf.function is used. Standalone code to reproduce the issue: |
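The report does not include code, so the following is only a guess at a minimal reproduction of the pattern it describes (a tf.scan inside a tf.function, differentiated twice with nested GradientTapes):
import tensorflow as tf

@tf.function
def f(x):
    return tf.reduce_sum(tf.scan(lambda a, b: a + b * b, x))

x = tf.constant([1.0, 2.0, 3.0])
with tf.GradientTape() as outer:
    outer.watch(x)
    with tf.GradientTape() as inner:
        inner.watch(x)
        y = f(x)
    g = inner.gradient(y, x)
grad2 = outer.gradient(g, x)
print(grad2)   # reportedly all zeros when f is wrapped in tf.function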
tensorflowtensorflow | HTTP Error 404: page not found | Bug | Looks like this shows a 404 to me at the time of writing. |
tensorflowtensorflow | Correct range of input values for MobileNet | Bug | Hello, I want to use the implementation of MobileNetV3 (either the MobileNetLarge or MobileNetSmall version) in my project. I'm using TensorFlow v2.4.1. I got confused when reading the documentation about the range of input values for MobileNet-based models. According to this link, the expected range of values for the model is [0, 1]; however, according to this other link, the correct range of values is [-1, 1]. A similar issue was already reported at this link. On the other hand, to my surprise, the preprocess_input function of the mobilenet_v3 module (L556-L558) does not apply any change to the input, i.e. the function returns the input directly: @keras_export('keras.applications.mobilenet_v3.preprocess_input') def preprocess_input(x, data_format=None): # pylint: disable=unused-argument return x. So what should be the correct range of input values for MobileNetV3? |
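A quick check that preprocess_input is indeed a pass-through, as the quoted source suggests. If so, the model presumably expects raw pixel values and handles any rescaling internally, but that is only my reading of the code, not an authoritative answer to the question above.
import numpy as np
from tensorflow.keras.applications import mobilenet_v3

x = np.array([[0.0, 127.5, 255.0]], dtype=np.float32)
print(np.array_equal(mobilenet_v3.preprocess_input(x), x))   # True: input returned unchanged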
tensorflowtensorflow | Problem with TensorFlow model example | Bug | Hi, I would like to practice with the TensorFlow model example "Create a TensorFlow model". In particular, I'm interested in reproducing the error: "Error: Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. Here is a list of operators for which you will need custom implementations: sin". How can I do this, using for example tf.lite.TFLiteConverter.from_concrete_functions? Thanks in advance. |
tensorflowtensorflow | tensorflow addon connect component not work properly | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 window 10 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary binary tensorflow version use command below 2 4 0 python version 3 8 5 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version cuda compilation tool release 10 1 v10 1 243 gpu model and memory geforce gtx 1080 ti 11264 mb dedicate video memory I m use cpu you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior tfa connect component should produce 2 component with the attach script it be not describe the expect behavior tfa result should show same number of component as scipy ndimage measurement label as claim in the documentation standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook tfa addon connected component check use tensorflow 2 4 0 python 3 8 5 tfa 0 12 1 scipy 1 6 1 import tensorflow import numpy as np import tensorflow addon as tfa import matplotlib pyplot as plt import scipy ndimage measurement as mea with tensorflow device cpu 0 m 500 img np zero m m one square top 50 bottom 270 leave 300 right 450 img np ix np arange top bottom 1 np arange leave right 1 1 second one top 100 bottom 150 leave 200 right 250 img np ix np arange top bottom 1 np arange leave right 1 1 convert image to tensor img tensorflow convert to tensor img imgtf tensorflow expand dim img 1 connect component with tfa d img tfa tfa image connect component imgtf numpy plot it plt figure plt title tfa connect component image plt imshow d img tfa plt show block false connect component with scipy d img scipy meas label imgtf count number of non background component subtract 1 for background num comp tfa np unique d img tfa flatten size 1 num comp scipy np unique d img scipy flatten size 1 these image should be the same print ntfa connect component have d component num comp tfa print scipy connect component have d component n num comp scipy plot plt figure plt title scipy connect component image plt imshow d img scipy plt show block true other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach |
tensorflowtensorflow | login loop cause by unmet package in gpu ubuntu16 04 installation guide | Bug | system information os platform and distribution linux ubuntu 16 04 geforce gtx titan 2070 other inapplicable cause I follow a guide ubuntu 1604 cuda 110 describe the problem 23th line sudo apt get install no install recommend cuda 11 0 libcudnn8 8 0 4 30 1 cuda11 0 libcudnn8 dev 8 0 4 30 1 cuda11 0 lead to read package list do building dependency tree read state information do libcudnn8 dev be already the new version 8 0 4 30 1 cuda11 0 libcudnn8 be already the new version 8 0 4 30 1 cuda11 0 the follow additional package will be instal cuda demo suite 11 0 cuda driver cuda driver 460 cuda runtime 11 0 nvidia 460 nvidia 460 dev recommend package nvidia prime bumblebee the follow package will be remove nvidia 450 the follow new package will be instal cuda 11 0 cuda demo suite 11 0 cuda driver cuda driver 460 cuda runtime 11 0 nvidia 460 nvidia 460 dev 0 upgrade 7 newly instal 1 to remove and 66 not upgrade need to get 0 b 168 mb of archive after this operation 93 2 mb of additional disk space will be use do you want to continue y n if yes be choosen next reboot lead to login loop because of nvidia nvml driver library version mismatch nvidia 450 be instal at 19th line and nvidia 460 after 23th line delete nvidia 460 allow to login normally with nvidia smi give 450 version but from tensorflow python client import device lib device lib list local device still don t show accessible gpu provide the exact sequence of command step that you execute before run into the problem ubuntu 1604 cuda 110 except 19th line sudo apt get install no install recommend nvidia driver 450 due to instead of it sudo apt get install no install recommend nvidia 450 and sudo apt get update allow unauthenticated before 2nd line due to w fail to fetch could not resolve host developer download nvidia com |
tensorflowtensorflow | Install GPU Ubuntu 16.04 guide misses a crucial step: "Unable to locate package nvidia-driver-450" | Bug | URL(s) with the issue: ubuntu_1604, CUDA 11.0. Description of issue (what needs changing): the 19th line, sudo apt-get install --no-install-recommends nvidia-driver-450, leads to the error "E: Unable to locate package nvidia-driver-450". |
tensorflowtensorflow | unable to migrate tf1 code to tf2 tape gradient return none | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code yes os platform and distribution e g linux ubuntu 16 04 macos big sur tensorflow instal from source or binary binary pip tensorflow version use command below v2 4 0 49 g85c8b2a817f 2 4 1 python version 3 8 7 describe the current behavior I m migrate code from tf1 that use tf gradient for do a custom gradient calculation I m try to get the same result I get with tf1 in tf2 use tf gradienttape however no matter what I do include I try to use tape watch and the issue persist I try manually create a tf variable with trainable true watch the variable and the issue persist I try use tf gradient within a tf function and tf compat v1 gradient if there be any difference at all and the issue persist here s a jupyter notebook with the full code to be able to reproduce the issue here s the code I m migrate check line 156 176 below be the part of interest g tf gradient loss f loss be a float and f be a m n tensor k f pol f ep f pol another m n tensor and ep a float k dot g tf reduce sum k g axis 1 adj tf maximum 0 0 tf reduce sum k g axis 1 delta tf reduce sum tf square k axis 1 ep g g tf reshape adj nenvs nstep 1 k grad f g nenvs nstep grad policy tf gradient f param grad f param be the model parameter and here s a simplified version of what I m try to do with tf gradienttape as tape f calculate f f pol calculate f pol other do further calculation loss calculate loss g tape gradient loss f print g result in none describe the expect behavior as far as I understand tf gradienttape be the tf2 alternative to tf gradient I m try to replicate the exact same result in tf2 and it doesn t work so this imply either there be something wrong with my code or it be a bug this be not the first time it happen with someone I find numerous other complain include closed issue and neither include a solution I have not try standalone code to reproduce the issue the problem if possible please share a link to colab jupyter any notebook jupyter notebook other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach |
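A minimal working sketch of taking a gradient with respect to an intermediate tensor with GradientTape (the names here are illustrative, not the reporter's model). The key point, which may or may not be the cause of the None above, is that the intermediate tensor must be produced inside the tape from watched sources; computing it outside the tape, or converting it to numpy and back, breaks the connection.
import tensorflow as tf

params = tf.Variable(tf.random.normal([4, 3]))
x = tf.random.normal([2, 4])
with tf.GradientTape(persistent=True) as tape:
    f = tf.matmul(x, params)                 # intermediate tensor, computed inside the tape
    loss = tf.reduce_sum(tf.square(f))
g = tape.gradient(loss, f)                   # not None, because f was produced inside the tape
grads = tape.gradient(f, params, output_gradients=g)   # chain through f, analogous to the TF1 pattern
print(g.shape, grads.shape)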
tensorflowtensorflow | xla gpu build failure bazel cache end up be corrupt | Bug | this commit commit 9f8d5d0ef4e990ab2398d573a14aaa8ed02a10e2 author rahul joshi date mon feb 22 08 41 59 2021 0800 mlir lhlo add optional call target arg mapping to lmhlo customcall operation xla hlo lmhlo conversion drop all token argument and return value however custom call that user write still expect to get buffer pointer for these token type to be able to support this add an optional call target argument mapping attribute to lmhlo custom call when this attribute be present it indicate the number of argument and return that the custom call expect and also indicate which lmhlo arg or output map to which arg or result number of the custom call piperorigin revid 358826664 change i d i36e839e9ff5b73890715b71717a4c13631955fba introduce a compilation regression before that commit this command line work bazel test c opt config cuda tensorflow compiler xla test conv depthwise test gpu tensorflow compiler xla test group convolution test gpu start at this commit it give this error error home fbastien github tensorflow tf2 upstream2 tensorflow compiler mlir hlo build 485 11 undeclared inclusion s in rule tensorflow compiler mlir hlo lhlo this rule be miss dependency declaration for the follow file include by tensorflow compiler mlir hlo lib dialect mhlo ir lhlo op struct cc bazel out k8 opt bin tensorflow compiler mlir hlo virtual include lhlo ops inc gen mlir hlo dialect mhlo ir lhlo op struct h inc bazel out k8 opt bin tensorflow compiler mlir hlo virtual include lhlo ops inc gen mlir hlo dialect mhlo ir lhlo op struct cc inc cc1plus warn command line option wno pointer sign be valid for c objc but not for c jurahul that author it |
tensorflowtensorflow | modelcheckpoint callback create png file instead of h5 file | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary tensorflow version use command below python version bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memgory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior if we already have a save model say model name h5 in model directory and then we use modelcheckpoint callback and provide the same name model name h5 to save the model after the training be finish we get model name png file instead of model name h5 file describe the expect behavior the user should get model name h5 file instead of model name png after train standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook scrollto 0qqwjf v1lvw note run the training cell twice first time to generate model name h5 file then when you run the training cell second time we can observe model name png be generate other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach |
tensorflowtensorflow | SparseTensorDenseMatMul adjoint_a takes transpose, not adjoint | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS 10.15.5. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v2.4.0-49-g85c8b2a817f 2.4.1. Python version: 3.7.6. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: on CPU (haven't checked GPU), the adjoint_a argument to the SparseTensorDenseMatMul op (accessed via tf.sparse.sparse_dense_matmul) takes the transpose of the argument, but not the conjugate. Looks like the issue is here (L55); shouldn't that be taking a conjugate? Describe the expected behavior: adjoint_a takes the adjoint of a. Standalone code to reproduce the issue: import tensorflow as tf; x = tf.sparse.from_dense([[1j]]); y = tf.constant([[1]], dtype=tf.complex128); print(tf.sparse.sparse_dense_matmul(x, y, adjoint_a=True).numpy()) gives 1j. See gist here. |
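A hedged workaround sketch given the transpose-only behaviour described above: conjugate the sparse values manually and keep adjoint_a=True, which then only needs to transpose. Note that if the op is later fixed to conjugate, this would double-conjugate. x and y are the tensors from the reporter's snippet.
x_conj = tf.sparse.SparseTensor(x.indices, tf.math.conj(x.values), x.dense_shape)
print(tf.sparse.sparse_dense_matmul(x_conj, y, adjoint_a=True).numpy())   # [[-1j]], the true adjoint result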
tensorflowtensorflow | Bug: gfile.walk returns full path instead of filename for ram:// filesystem | Bug | Tested via tf-nightly on 2020-02-22 (yyyy-mm-dd). Code to reproduce (python3): from tensorflow.io import gfile; gfile.mkdir('ram://testdir'); with gfile.GFile('ram://testdir/file.txt', 'w') as f: f.write('test'); print(list(gfile.walk('ram://testdir'))). The third element is 'ram://testdir/file.txt', but it should be just 'file.txt'. Colab notebook. |
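A hedged workaround sketch until the behaviour is fixed: strip the directory prefix from the returned file entries so they match the os.walk-style basenames the reporter expects.
import os
from tensorflow.io import gfile

for dirname, subdirs, files in gfile.walk('ram://testdir'):
    files = [os.path.basename(f) for f in files]   # 'ram://testdir/file.txt' -> 'file.txt'
    print(dirname, subdirs, files)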
tensorflowtensorflow | Update audio recognition tutorial link | Bug | Update the audio recognition tutorial link. Fixes issue 47305. |
tensorflowtensorflow | Tutorial URL outdated; should probably be www.tensorflow.org/tutorials/audio/simple_audio | Bug | The URL in line 21 of tensorflow/tensorflow/examples/speech_commands/train.py is outdated and produces a 404 error. It should probably be: |
tensorflowtensorflow | micro port op log softmax from lite | Bug | tensorflow micro this issue track my work porting operator log softmax from lite to micro the port will be submit in a number of prs here s a rough flight plan per advaitjain and petewarden pr 1 extract the code for parse the op from a flatbuffer out of parseopdatatflite in tensorflow lite core api flatbuffer conversion cc into a standalone function that can be call from micro s op resolver pr 2 extract the reference implementation out of tensorflow lite kernels internal reference reference op h into its own header which can be include without drag in reference op h s dependence pr 3 copy operator from lite to micro make minimal change and not include in the build pr 4 delete extra code from the micro copy of the operator pr 5 port micro copy of operator as necessary and add a corresponding test |
tensorflowtensorflow | micro port op cumsum from lite | Bug | tensorflow micro this issue track my work porting operator cumsum from lite to micro the port will be submit in a number of prs here s a rough flight plan per advaitjain and petewarden pr 1 step 1 extract the code for parse the op from a flatbuffer out of parseopdatatflite in tensorflow lite core api flatbuffer conversion cc into a standalone function that can be call from micro s op resolver pr 2 step 2 extract the reference implementation out of tensorflow lite kernels internal reference reference op h into its own header which can be include without drag in reference op h s dependence the next 3 step be combine into a single pr3 with separate commit step 3 copy operator from lite to micro make minimal change and not include in the build step 4 delete extra code from the micro copy of the operator step 5 port micro copy of operator as necessary and add a corresponding test |
tensorflowtensorflow | The example code in tf.feature_column.numeric_column is not executable / complete / self-sufficient | Bug | Please provide a link to the documentation entry, for example: (example). Description of issue (what needs changing): the example code cannot be executed as it is; we have to add the namespace by searching on the tensorflow.org site. Complete, self-sufficient, stand-alone example code would be very helpful, especially for new developers. |
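A self-contained example along the lines the report asks for; this is my own sketch, not the official docs snippet.
import tensorflow as tf

price = tf.feature_column.numeric_column('price')
feature_layer = tf.keras.layers.DenseFeatures([price])
print(feature_layer({'price': tf.constant([[1.0], [2.5]])}))   # tensor [[1.0], [2.5]]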
tensorflowtensorflow | can not convert a symbolic tensor gru stride slice 0 to a numpy array | Bug | system information os platform and distribution arch linux tensorflow instal from package python tensorflow tensorflow version unknown 2 4 1 python version 3 9 1 cuda cudnn version no gpu standalone code to reproduce the issue python from tensorflow keras layer import embed input gru x input shape none x embed input dim 50 output dim 16 mask zero true x x gru unit 256 x other info log traceback most recent call last file home jnphilipp nextcloud code jnphilipp deep learn test3 py line 5 in x gru unit 256 x file usr lib python3 9 site package tensorflow python keras layers recurrent py line 660 in call return super rnn self call input kwargs file usr lib python3 9 site package tensorflow python keras engine base layer py line 951 in call return self functional construction call input args kwargs file usr lib python3 9 site package tensorflow python keras engine base layer py line 1090 in functional construction call output self keras tensor symbolic call file usr lib python3 9 site package tensorflow python keras engine base layer py line 822 in keras tensor symbolic call return self infer output signature input args kwargs input mask file usr lib python3 9 site package tensorflow python keras engine base layer py line 863 in infer output signature output call fn input args kwargs file usr lib python3 9 site package tensorflow python keras layers recurrent v2 py line 439 in call input initial state self process input input initial state none file usr lib python3 9 site package tensorflow python keras layers recurrent py line 859 in process input initial state self get initial state input file usr lib python3 9 site package tensorflow python keras layers recurrent py line 642 in get initial state init state get initial state fn file usr lib python3 9 site package tensorflow python keras layers recurrent py line 1948 in get initial state return generate zero fill state for cell self input batch size dtype file usr lib python3 9 site package tensorflow python keras layers recurrent py line 2987 in generate zero fill state for cell return generate zero fill state batch size cell state size dtype file usr lib python3 9 site package tensorflow python keras layers recurrent py line 3005 in generate zero fill state return create zero state size file usr lib python3 9 site package tensorflow python keras layers recurrent py line 3000 in create zero return array op zeros init state size dtype dtype file usr lib python3 9 site package tensorflow python util dispatch py line 201 in wrapper return target args kwargs file usr lib python3 9 site package tensorflow python op array op py line 2819 in wrap tensor fun args kwargs file usr lib python3 9 site package tensorflow python op array op py line 2868 in zero output constant if small zero shape dtype name file usr lib python3 9 site package tensorflow python op array op py line 2804 in constant if small if np prod shape 1000 file array function internal line 5 in prod file usr lib python3 9 site package numpy core fromnumeric py line 3030 in prod return wrapreduction a np multiply prod axis dtype out file usr lib python3 9 site package numpy core fromnumeric py line 87 in wrapreduction return ufunc reduce obj axis dtype out passkwargs file usr lib python3 9 site package tensorflow python framework op py line 852 in array raise notimplementederror notimplementederror can not convert a symbolic tensor gru stride slice 0 to a numpy array this error may 
indicate that you re try to pass a tensor to a numpy call which be not support |
tensorflowtensorflow | tflite converter produce wrong output shape signature if rnn lstm output layer in model become all unknown dimension | Bug | 1 system information os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 tensorflow installation pip package or build from source pip tensorflow library version if pip package or github sha if build from source tf nightly 2 5 0 dev20210216 and tensorflow 2 4 0 2 code collab notebook code py import tensorflow as tf from pprint import pprint def convertmodeltotflite path outpath print f info use tensorflow v tf version converter tf lite tfliteconverter from save model path converter experimental new converter true tflitemodel converter convert save the model with open outpath wb as f f write tflitemodel unit 256 savepath my model create model I tf keras input shape 1 521 name input x tf keras layer dense unit I x tf keras layers lstm unit return sequence true x x tf keras layer dense unit x have a dense output layer give correct output signature save as savedmodel model tf keras model model input I output x model save savepath save format tf convertmodeltotflite savepath f savepath tflite args optimize ip tf lite interpreter f savepath tflite print model summary pprint ip get output detail 3 failure after conversion conversion be successful and model work if I resize the output tensor to expect dimension and reallocate tensor but I expect the shape signature of the output to be correct and have the last dimension feature dimension be know and equal to the number of unit cell in the lstm with an lstm or rnn layer as the output layer to a keras model the tflite model have the unexpected output shape signature shape signature array 1 1 1 dtype int32 all 1s however with a dense layer the output have the expect shape signature shape signature array 1 1 256 dtype int32 5 optional any other info log output detail and model summary with a rnn lstm output layer dtype index 42 name statefulpartitionedcall 0 quantization 0 0 0 quantization parameter quantize dimension 0 scale array dtype float32 zero point array dtype int32 shape array 1 1 1 dtype int32 shape signature array 1 1 1 dtype int32 all 1s sparsity parameter layer type output shape param input inputlayer none 1 521 0 dense 3 dense none 1 256 133632 lstm 3 lstm none 1 256 525312 output detail and model summary with dense output layer dtype index 51 name statefulpartitionedcall 0 quantization 0 0 0 quantization parameter quantize dimension 0 scale array dtype float32 zero point array dtype int32 shape array 1 1 256 dtype int32 shape signature array 1 1 256 dtype int32 sparsity parameter layer type output shape param input inputlayer none 1 521 0 dense dense none 1 256 133632 lstm lstm none 1 256 525312 dense 1 dense none 1 256 65792 |
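A hedged sketch of the workaround the report mentions (resize and reallocate), using the shapes from the example model and the ip interpreter from the reporter's snippet. This should make the runtime output shape concrete for inference, but it does not change the exported shape_signature itself.
ip.resize_tensor_input(ip.get_input_details()[0]['index'], [1, 1, 521])
ip.allocate_tensors()
pprint(ip.get_output_details())   # 'shape' should now show a concrete [1, 1, 256]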
tensorflowtensorflow | can not import tensorflow lite model with an lstm layer to android studio | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 linux ubuntu 16 04 google colaboratory tensorflow instal from source or binary bundle with google colaboratory tensorflow version use command below 2 5 0 late instal from nightly but also occur on the late stable version 2 4 python version 3 6 bundle with google colaboratory describe the current behavior I have create a tensorflow lite model use the official guide convert a keras model and then try to import it to android studio use another official guide use android studio ml model bind however android studio will not make the next button available while display an error say this be not a valid tensorflow lite model file describe the expect behavior the model should have be import as it follow your own official guide standalone code to reproduce the issue 1 create the minimal model I have be use the follow google colaboratory notebook 2 download the result file to your local computer 3 in any android studio project right click on any module and select new other tensorflow lite model 4 choose the file you download in step 2 5 the error message will be display this can be see in the attach image other info log I open the same bug with android studio in the follow link you can see the android studio detail and screenshot of the bug in the link also note that remove the lstm layer cause android studio to recognize the model as valid it s the inclusion of the lstm layer that cause the problem |
tensorflowtensorflow | model benchmark binary throw unconsumed cmdline flag graph error | Bug | system information window 10 adb binary download from here describe the current behavior adb shell input and output cph1920 datum local tmp android aarch64 benchmark model performance option graph test start unconsumed cmdline flag graph test warn unrecognized commandline flag graph warning unrecognized commandline flag test the list of tflite runtime option to be benchmarke all please specify the name of your tf lite input file with graph please specify the name of your tf lite input file with graph please specify the name of your tf lite input file with graph please specify the name of your tf lite input file with graph please specify the name of your tf lite input file with graph please specify the name of your tf lite input file with graph please specify the name of your tf lite input file with graph please specify the name of your tf lite input file with graph please specify the name of your tf lite input file with graph please specify the name of your tf lite input file with graph summary of all run w different performance option gpu fp16 fail cpu w 1 thread fail cpu w 2 thread xnnpack fail cpu w 4 thread xnnpack fail gpu default fail nnapi w o accel name fail cpu w 4 thread fail cpu w 1 thread xnnpack fail cpu w 2 thread fail dsp w hexagon fail describe the expect behavior should not be ask for graph flag if it be already provide |
tensorflowtensorflow | efficientnet model from tensorflow kera not be reproducible on gpu | Bug | after download an efficientnet model from tensorflow keras application efficientnet 1 and retrain it on our own datum I ve notice that the result be not reproducible the result be reproducible for other architecture like vgg16 resnet101 inceptionv3 and inceptionresnetv2 but not for any of the efficientnetbx model please note that I ve set all the follow seed and even have tensorflow determinism random seed 1 np random seed 1 tf random set seed 1 os environ tf cudnn deterministic true os environ tf deterministic op true tensorflow version tensorflow gpu 2 3 1 |
tensorflowtensorflow | runtimeerror can not use a constraint function on a sparse variable in google colab | Bug | I be try to train my model use kera and tensorflow 2 x while use the model fit method I run into this error error epoch 1 10 runtimeerror traceback most recent call last in 2 3 for I in range n epoch 4 model fit x train x y train y batch size 32 epoch 10 verbose 1 validation datum val x val y 5 output model predict proba val x batch size 10 verbose 1 6 find validation accuracy use the good threshold value t 9 frame usr local lib python3 6 dist package tensorflow python framework func graph py in wrapper args kwargs 975 except exception as e pylint disable broad except 976 if hasattr e ag error metadata 977 raise e ag error metadata to exception e 978 else 979 raise runtimeerror in user code usr local lib python3 6 dist package tensorflow python keras engine training py 805 train function return step function self iterator usr local lib python3 6 dist package tensorflow python keras engine training py 795 step function output model distribute strategy run run step args datum usr local lib python3 6 dist package tensorflow python distribute distribute lib py 1259 run return self extend call for each replica fn args args kwargs kwargs usr local lib python3 6 dist package tensorflow python distribute distribute lib py 2730 call for each replica return self call for each replica fn args kwargs usr local lib python3 6 dist package tensorflow python distribute distribute lib py 3417 call for each replica return fn args kwargs usr local lib python3 6 dist package tensorflow python keras engine training py 788 run step output model train step datum usr local lib python3 6 dist package tensorflow python keras engine training py 757 train step self optimizer minimize loss self trainable variable tape tape usr local lib python3 6 dist package tensorflow python keras optimizer v2 optimizer v2 py 498 minimize return self apply gradient grad and var name name usr local lib python3 6 dist package tensorflow python keras optimizer v2 optimizer v2 py 635 apply gradient name name usr local lib python3 6 dist package tensorflow python distribute distribute lib py 2941 merge call return self merge call merge fn args kwargs usr local lib python3 6 dist package tensorflow python distribute distribute lib py 2948 merge call return merge fn self strategy args kwargs usr local lib python3 6 dist package tensorflow python keras optimizer v2 optimizer v2 py 683 distribute apply var apply grad to update var args grad group false usr local lib python3 6 dist package tensorflow python distribute distribute lib py 2494 update return self update var fn args kwargs group usr local lib python3 6 dist package tensorflow python distribute distribute lib py 3431 update return self update non slot var fn var tuple args kwargs group usr local lib python3 6 dist package tensorflow python distribute distribute lib py 3437 update non slot result fn args kwargs usr local lib python3 6 dist package tensorflow python keras optimizer v2 optimizer v2 py 650 apply grad to update var can not use a constraint function on a sparse variable runtimeerror can not use a constraint function on a sparse variable system information have I write custom code please find the code below os platform and distribution mac osx big sur tensorflow instal from source or binary binary tensorflow version use command below tensorflow 2 2 python version python 3 9 1 code be give below link to the colab for full code train datum preparation n dataset 
0 shape 0 conv input width w shape 1 conv input height int dataset 0 shape 1 1 for each word write a word index not vector to x tensor train x np zero n conv input height dtype np int train y np zero n 2 dtype np int for I in range n for j in range conv input height train x I j dataset 0 I j print train x shape format train x shape print train y shape format train y shape validation datum preparation nv dataset 1 shape 0 for each word write a word index not vector to x tensor val x np zero nv conv input height dtype np int val y np zero nv 2 dtype np int for I in range nv for j in range conv input height val x I j dataset 1 I j print val x shape format val x shape print val y shape format val y shape for I in range nv val y I data train iloc i 3 1 from keras optimizers import rmsprop from keras import backend backend set image datum format channel first import kera number of feature map output of convolutional layer n fm 200 kernel size of convolutional layer kernel size 5 model sequential embed layer lookup table of trainable word vector model add embed input dim w shape 0 output dim w shape 1 input length conv input height weight w embedding constraint unitnorm name e l reshape word vector from embed to tensor format suitable for convolutional layer model add reshape 1 conv input height conv input width first convolutional layer model add convolution2d n fm kernel size conv input width kernel initializer random uniform pad valid kernel regularizer l2 0 001 relu activation model add activation relu aggregate datum in every feature map to scalar use max operation model add maxpooling2d pool size conv input height kernel size 1 1 padding same model add flatten model add dropout 0 4 model add dense 128 kernel initializer random uniform model add activation relu model add dropout 0 4 inner product layer as in regular neural network but without non linear activation function model add dense 2 softmax activation actually dense softmax work as multinomial logistic regression model add activation softmax custom optimizer could be use though right now standard adadelta be employ opt rmsprop lr 0 001 rho 0 9 epsilon none model compile loss mean squared error optimizer opt metric accuracy the line that throw the error n epoch 3 for I in range n epoch model fit x train x y train y batch size 32 epoch 10 verbose 1 validation datum val x val y output model predict proba val x batch size 10 verbose 1 find validation accuracy use the good threshold value t vacc np max np sum output 1 t val y 1 0 5 1 0 len output for t in np arange 0 0 1 0 0 01 find validation auc vauc roc auc score val y output val acc append vacc val auc append vauc print epoch validation accuracy 3 validation auc 3 format epoch vacc vauc epoch 1 print epoch pass format epoch print accuracy on validation dataset print val acc print auc on validation dataset print val auc tweak I try 1 I have try change the unitnorm in the embed layer 2 verify the embed layer doesn t use sparse datum instead it use a dense matrix to store that datum 3 refer this link but couldn t solve my error please can anyone suggest a solution thank |
tensorflowtensorflow | ValueError: Cannot add function (inference dataset map 12) because a different function with the same name already exists | Bug | Hi, I am using Ubuntu 16.04 and tensorflow-gpu 1.14.0. I obtained this error: "ValueError: Cannot add function (inference dataset map 12) because a different function with the same name already exists". Please let me know if you have any idea how I can resolve it. Thank you in advance. |
tensorflowtensorflow | tensorflow | Invalid | please go to stack overflow for help and support if you open a github issue here be our policy 1 it must be a bug a feature request or a significant problem with the documentation for small doc fix please send a pr instead 2 the form below must be fill out 3 it shouldn t be a tensorboard issue those go here here s why we have that policy tensorflow developer respond to issue we want to focus on work that benefit the whole community e g fix bug and add feature support only help individual github also notify thousand of people when issue be file we want they to see you communicate an interesting problem rather than be redirect to stack overflow system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on a mobile device tensorflow instal from source or binary tensorflow version use command below python version bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory exact command to reproduce you can collect some of this information use our environment capture script you can obtain the tensorflow version with bash python c import tensorflow as tf print tf version git version tf version version describe the problem describe the problem clearly here be sure to convey here why it s a bug in tensorflow or a feature request source code log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach try to provide a reproducible test case that be the bare minimum necessary to generate the problem |
tensorflowtensorflow | Micro benchmarks are excluded for cortex_m_corstone_300_makefile.inc | Bug | TensorFlow Micro. System information: Host OS platform and distribution (e.g., Linux Ubuntu 16.04): ; TensorFlow installed from (source or binary): ; TensorFlow version (commit SHA if source): ; Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): . Describe the problem: I hit this issue when working with the micro benchmarks, which have to be excluded. Please provide the exact sequence of commands/steps when you ran into the problem: try to run the micro benchmarks for cortex_m_corstone_300_makefile.inc. |
tensorflowtensorflow | tensorflow/cc/saved_model/README.md is broken | Bug | URL(s) with the issue: . Description of issue (what needs changing): I would expect to see some documentation regarding the package; instead it says: |
tensorflowtensorflow | conv2d output shape become fully dynamic when only input batch size be dynamic in tf nightly | Bug | 1 system information os platform and distribution e g linux ubuntu 16 04 colab tensorflow installation pip package or build from source pip tensorflow library version if pip package or github sha if build from source 2 4 1 v s nightly 2 code nightly 2 4 1 3 failure after conversion model be correct but nightly have fully dynamic output shape and 2 4 1 have dynamic batch dim only nightly image 2 4 1 image 1 here mean dynamic dim netron can not show it precisely it do not really matter in my use case though just a regression issue |
tensorflowtensorflow | Why is the snappy compression output buffer size so small? | Bug | Hi frankchn, the snappy compression buffer size is hard-coded to 262144 bytes (L30). This is a quite small number, leading to a high chance of hitting the error (L113). I wonder why it is set so small. Furthermore, output_buffer is a fixed-sized array (L73); why can't it be a vector, so that we don't need the output buffer size at all? I'm happy to submit a PR to make the corresponding changes if needed. Thanks. |
tensorflowtensorflow | valueerror can not iterate over a shape with unknown rank | Bug | 1 system information os platform and distribution ubuntu 20 04 tensorflow installation pip package or build from source pip python 3 7 tensorflow library version if pip package or github sha if build from source tensorflow 2 4 1 2 code provide code to help we reproduce your issue use one of the follow option python import sys from pathlib import path import tensorflow as tf specify the model save model dir path training model admin test2 1 export model 1 if save model dir exist print f converting model str save model dir else print f could not find model str save model dir sys exit 1 convert the model converter tf compat v1 lite tfliteconverter from save model str save model dir tflite model converter convert save the model with open model tflite wb as f f write tflite model print ready 3 failure after conversion error message traceback most recent call last file convert to tflite py line 17 in tflite model converter convert file home thijs virtualenvs tflite lib python3 7 site package tensorflow lite python lite py line 1947 in convert return super tfliteconverter self convert file home thijs virtualenvs tflite lib python3 7 site package tensorflow lite python lite py line 1304 in convert converter kwargs file home thijs virtualenvs tflite lib python3 7 site package tensorflow lite python convert py line 606 in toco convert impl input tensor output tensor args kwargs file home thijs virtualenvs tflite lib python3 7 site package tensorflow lite python convert py line 497 in build toco convert protos for dim in shape file home thijs virtualenvs tflite lib python3 7 site package tensorflow python framework tensor shape py line 861 in iter raise valueerror can not iterate over a shape with unknown rank valueerror can not iterate over a shape with unknown rank 4 any other info log startup log 2021 02 09 20 09 36 646605 w tensorflow stream executor platform default dso loader cc 60 could not load dynamic library libcudart so 11 0 dlerror libcudart so 11 0 can not open share object file no such file or directory ld library path usr local qt 5 14 1 lib 2021 02 09 20 09 36 646636 I tensorflow stream executor cuda cudart stub cc 29 ignore above cudart dlerror if you do not have a gpu set up on your machine converting model training model admin test2 1 export model 1 2021 02 09 20 09 38 125156 I tensorflow compiler jit xla cpu device cc 41 not create xla device tf xla enable xla device not set 2021 02 09 20 09 38 125298 w tensorflow stream executor platform default dso loader cc 60 could not load dynamic library libcuda so 1 dlerror libcuda so 1 can not open share object file no such file or directory ld library path usr local qt 5 14 1 lib 2021 02 09 20 09 38 125310 w tensorflow stream executor cuda cuda driver cc 326 fail call to cuinit unknown error 303 2021 02 09 20 09 38 125331 I tensorflow stream executor cuda cuda diagnostic cc 156 kernel driver do not appear to be run on this host snowblower proc driver nvidia version do not exist 2021 02 09 20 09 38 125557 I tensorflow core platform cpu feature guard cc 142 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2021 02 09 20 09 38 125891 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set warn tensorflow from home 
thijs virtualenvs tflite lib python3 7 site package tensorflow lite python convert save model py 60 load from tensorflow python save model loader impl be deprecate and will be remove in a future version instruction for update this function will only be available through the v1 compatibility library as tf compat v1 save model loader load or tf compat v1 save model load there will be a new function for import savedmodel in tensorflow 2 0 2021 02 09 20 09 38 792020 I tensorflow compiler mlir mlir graph optimization pass cc 196 none of the mlir optimization pass be enable register 0 pass 2021 02 09 20 09 38 858416 I tensorflow core platform profile util cpu util cc 112 cpu frequency 3593310000 hz 2021 02 09 20 09 38 954802 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2021 02 09 20 09 39 889489 I tensorflow core grappler device cc 69 number of eligible gpu core count 8 compute capability 0 0 0 2021 02 09 20 09 39 889750 I tensorflow core grappler cluster single machine cc 356 start new session 2021 02 09 20 09 39 890025 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2021 02 09 20 09 39 953928 I tensorflow core grappler optimizer meta optimizer cc 928 optimization result for grappler item graph to optimize function optimizer function optimizer do nothing time 0 004ms function optimizer function optimizer do nothing time 0ms warn tensorflow from home thijs virtualenvs tflite lib python3 7 site package tensorflow lite python util py 327 convert variable to constant from tensorflow python framework graph util impl be deprecate and will be remove in a future version instruction for update use tf compat v1 graph util convert variable to constant warn tensorflow from home thijs virtualenvs tflite lib python3 7 site package tensorflow python framework convert to constant py 856 extract sub graph from tensorflow python framework graph util impl be deprecate and will be remove in a future version instruction for update use tf compat v1 graph util extract sub graph 2021 02 09 20 09 40 561332 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2021 02 09 20 09 41 338828 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2021 02 09 20 09 42 358175 I tensorflow core grappler device cc 69 number of eligible gpu core count 8 compute capability 0 0 0 2021 02 09 20 09 42 358313 I tensorflow core grappler cluster single machine cc 356 start new session 2021 02 09 20 09 42 358532 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2021 02 09 20 09 42 426773 I tensorflow core grappler optimizer meta optimizer cc 928 optimization result for grappler item graph to optimize function optimizer function optimizer do nothing time 0 004ms function optimizer function optimizer do nothing time 0ms |
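Two hedged things to try against the unknown-rank error above; these are sketches, not a confirmed fix. The saved_model_dir name is taken from the reporter's script, and the input tensor name and shape in the second variant are placeholders that would need to come from the SavedModel's signature.
# 1) use the TF2 converter instead of the compat.v1 one
converter = tf.lite.TFLiteConverter.from_saved_model(str(saved_model_dir))
tflite_model = converter.convert()

# 2) or give the v1 converter concrete input shapes so it never sees an unknown rank
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(
    str(saved_model_dir), input_shapes={'input_tensor': [1, 320, 320, 3]})
tflite_model = converter.convert()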