repository: stringclasses, 156 values
issue_title: stringlengths, 1 to 1.01k
labels: stringclasses, 8 values
body: stringlengths, 1 to 270k
tensorflow/tensorflow
TF 2.0 API Docs: tf.histogram_fixed_width_bins
Bug
URL(s) with the issue: Description of issue (what needs changing): Raises listed and defined: raises are not defined and listed. Submit a pull request? No.
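The missing "Raises"/usage material could be illustrated with a sketch like the following. This is a NumPy stand-in for the documented fixed-width binning behavior, not TensorFlow's implementation; the helper name and the clipping of out-of-range values into the edge bins are assumptions based on the API docs.

```python
import numpy as np

def histogram_fixed_width(values, value_range, nbins=100):
    # Fixed-width binning sketch: values below value_range[0] land in the
    # first bin, values at or above value_range[1] in the last bin.
    lo, hi = value_range
    values = np.asarray(values, dtype=np.float64)
    idx = np.floor((values - lo) / (hi - lo) * nbins).astype(np.int64)
    idx = np.clip(idx, 0, nbins - 1)
    return np.bincount(idx, minlength=nbins)

counts = histogram_fixed_width([-1.0, 0.0, 1.5, 2.0, 5.0, 15.0],
                               (0.0, 5.0), nbins=5)
# counts -> [2, 1, 1, 0, 2]
```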
tensorflow/tensorflow
TF 2.0 API Docs: tf.nn.dropout
Bug
URL(s) with the issue: Description of issue (what needs changing): Usage example: usage is linked, but none is detailed inline on the page.
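An inline example of the documented semantics could look like this NumPy sketch of inverted dropout (a stand-in, not the tf.nn.dropout implementation): each element is zeroed with probability `rate`, and survivors are scaled by `1/(1-rate)` so the expected sum is unchanged.

```python
import numpy as np

def dropout(x, rate, rng):
    # Zero each element with probability `rate`; scale survivors so the
    # expected value of the output matches the input.
    keep = rng.random(x.shape) >= rate
    return np.where(keep, x / (1.0 - rate), 0.0)

rng = np.random.default_rng(0)
x = np.ones((1000,))
y = dropout(x, rate=0.2, rng=rng)  # entries are 0.0 or 1.25
```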
tensorflow/tensorflow
TF 2.0 API Docs: tf.histogram_fixed_width
Bug
URL(s) with the issue: Description of issue (what needs changing): Raises listed and defined: raises are not defined and listed. Submit a pull request? No.
tensorflow/tensorflow
TF 2.0 API Docs: tf.custom_gradient
Bug
URL(s) with the issue: Description of issue (what needs changing): Raises listed and defined: errors are not defined.
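The idea behind tf.custom_gradient is attaching a hand-written gradient to a forward computation. A pure-NumPy sketch of the classic docs example (a numerically stable softplus gradient) shows the shape of such an example; the function names are illustrative, not the library's code, and the gradient is checked against a finite difference.

```python
import numpy as np

def log1pexp(x):
    # Stable softplus: log(1 + e^x), computed via logaddexp.
    return np.logaddexp(0.0, x)

def log1pexp_grad(x):
    # Hand-written gradient of the kind tf.custom_gradient lets you attach:
    # d/dx log(1 + e^x) = sigmoid(x), written in an overflow-safe form.
    return np.where(x >= 0,
                    1.0 / (1.0 + np.exp(-x)),
                    np.exp(x) / (1.0 + np.exp(x)))

# Sanity check against a central finite difference.
x0, eps = 1.3, 1e-6
numeric = (log1pexp(x0 + eps) - log1pexp(x0 - eps)) / (2 * eps)
```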
tensorflow/tensorflow
TF 2.0 API Docs: tf.VariableSynchronization
Bug
URL(s) with the issue: Usage example: no usage example is provided.
tensorflow/tensorflow
TF 2.0 API Docs: tf.keras.metrics
Bug
URL(s) with the issue: Description of issue (what needs changing): the link to the source code ("Defined in python/keras/api/_v2/keras/metrics/__init__.py") leads to a 404 page. Correct links: Submit a pull request? No.
tensorflow/tensorflow
TF 2.0 API Docs: tf.hessians
Bug
URL(s) with the issue: Description of issue (what needs changing): no usage example defined. Usage example: a usage example is not provided, although the method does not work with eager execution enabled and throws this error: "RuntimeError: tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead." Submit a pull request? No.
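What a Hessian example would need to demonstrate can be sketched without TensorFlow via a finite-difference Hessian in NumPy. This is a numerical stand-in for what tf.hessians computes symbolically (or what nested tf.GradientTape computes in eager mode, per the error message above); the helper is illustrative, not library code.

```python
import numpy as np

def numeric_hessian(f, x, eps=1e-5):
    # Central second differences: H[i, j] ~= d^2 f / dx_i dx_j at x.
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps ** 2)
    return H

# f(v) = v0^2 + 3*v0*v1 has constant Hessian [[2, 3], [3, 0]].
f = lambda v: v[0] ** 2 + 3.0 * v[0] * v[1]
H = numeric_hessian(f, np.array([0.5, -1.0]))
```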
tensorflow/tensorflow
tf.autograph experimental features
Bug
URL(s) with the issue: Description of issue (what needs changing): Clear description: there is no description provided. Parameters defined: parameters are undefined. Returns defined: return values are not defined. Raises listed and defined: errors are not defined. Usage example: there is no usage example.
tensorflow/tensorflow
TF 2.0 API Docs: tf.nn.l2_loss
Bug
URL(s) with the issue: Description of issue (what needs changing): Clear description: the current description could be improved. Correct links: it refers to a generated Python file that we cannot access. Raises listed and defined: raises are not defined. Usage example: no usage example. Request visuals, if applicable: no visuals.
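The documented formula for this op is simple enough that a one-line NumPy sketch makes a clear usage example (a stand-in for the TF op, matching its documented definition sum(t ** 2) / 2, without the sqrt):

```python
import numpy as np

def l2_loss(t):
    # tf.nn.l2_loss is documented as sum(t ** 2) / 2 (no square root).
    return np.sum(np.square(t)) / 2.0

loss = l2_loss(np.array([3.0, 4.0]))  # (9 + 16) / 2 = 12.5
```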
tensorflow/tensorflow
TF 2.0 API Docs: tf.guarantee_const
Bug
URL(s) with the issue: Description of issue (what needs changing): no usage example is provided, and the link does not exist; the raises are also not defined. Correct links: the link does not exist and is also plain text. Raises listed and defined: raises are not listed. Usage example: no usage example provided. Submit a pull request? No.
tensorflow/tensorflow
TF 2.0 API Docs: tf.clip_by_norm
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: Description of issue (what needs changing): Raises listed and defined: errors are not defined. Request visuals, if applicable: no visuals are included.
tensorflow/tensorflow
TF 2.0 API Docs: tf.group
Bug
URL(s) with the issue: Description of issue (what needs changing): a clear description should be added, and a proper usage example is to be added. Clear description: for example, "a group operation can run multiple operations at the same time; the operations are not sequential, and they will be executed when tf.group is called." Usage example: there is no usage example provided, except for a link to where it is used. Submit a pull request? Yes.
tensorflow/tensorflow
TF 2.0 API Docs: tf.VariableAggregation
Bug
URL(s) with the issue: Description of issue (what needs changing): Clear description: the description is not opinionated about when to use this symbol, and it is unclear what the aggregation methods for combining gradients would be useful for. Parameters defined: parameters are poorly defined and not formatted appropriately. Returns defined: returns are not defined. Raises listed and defined: errors are not defined. Usage example: no usage example is provided. Request visuals, if applicable: no visuals are included.
tensorflow/tensorflow
TF 2.0 API Docs: tf.greater_equal
Bug
URL(s) with the issue: Description of issue (what needs changing): correct links are not provided, in the sense that there is only text and not an actual link to the file; no usage example is given in the documentation; raises are also not listed. Correct links: correct links are not provided, in the sense that there is only text and not an actual link to the file. Raises listed and defined: raises are also not listed. Usage example: no usage example is given in the documentation.
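The missing usage example could be as small as the following. np.greater_equal has the same elementwise, broadcasting contract the TF op documents, so a NumPy sketch conveys the behavior (this is a stand-in, not the TF implementation):

```python
import numpy as np

# Elementwise >= comparison returning booleans, with broadcasting.
x = np.array([5, 4, 6, 7])
y = np.array([5, 2, 5, 10])
result = np.greater_equal(x, y)  # [True, True, True, False]
```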
tensorflow/tensorflow
TF 2.0 API Docs: tf.greater
Bug
URL(s) with the issue: Description of issue (what needs changing): correct links are not provided, in the sense that there is only text and not an actual link to the file; no usage example is given in the documentation; raises are also not listed. Correct links: correct links are not provided, in the sense that there is only text and not an actual link to the file. Raises listed and defined: raises are not listed in the documentation. Usage example: there is no usage example provided.
tensorflow/tensorflow
Change py_func to py_function in "Applying arbitrary Python logic"
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: "Applying arbitrary Python logic". Description of issue (what needs changing): minor typo; just need to update py_func in the example so it is py_function. Clear description: in the last 4 lines of the code block underneath the "Applying arbitrary Python logic" section:

    dataset = dataset.map(
        lambda filename, label: tuple(tf.py_func(
            _read_py_function, [filename, label], [tf.uint8, label.dtype])))
    dataset = dataset.map(_resize_function)

Correct links: yes. Parameters defined: yes. Returns defined: yes. Raises listed and defined: yes. Usage example: yes. Request visuals, if applicable: no, not needed. Submit a pull request? No.
tensorflow/tensorflow
TF 2.0 API Docs: tf.autograph.to_code
Bug
URL(s) with the issue: Description of issue (what needs changing): Clear description: the initial description could be clearer; instead of referring to another function as "similar to to_graph", restate the primary use case, from "Similar to to_graph, but returns Python source code as a string" to "to_code is a low-level API that returns the AutoGraph-generated Python source code as a string. This is similar to to_graph, which returns the TensorFlow graph instead of Python." Usage example: no usage example in the docs, only references to guides; would suggest uplifting an example from a guide to the docs for completeness. Ref: Submit a pull request? Yes.
tensorflow/tensorflow
TF 2.0 API Docs: tf.autograph.set_verbosity
Bug
URL(s) with the issue: Description of issue (what needs changing): Clear description: the description could be clearer; abseil's log format could be referenced, rather than only abseil itself, as users would otherwise have to hunt through docs to see the log output format reference. There is also a slight misspelling in the Args entry for alsologtostdout ("it is recommended to set this value to a large number, like 10"). Submit a pull request? Yes.
tensorflow/tensorflow
TF 2.0 API Docs: docstring for tf.train.experimental.enable_mixed_precision_graph_rewrite
Bug
In response to #29241, this improves the docstring for tf.train.experimental.enable_mixed_precision_graph_rewrite: adds examples for using the function; adds a Colab notebook to demonstrate the speed-up without a performance penalty; adds the original graphic for loss scaling (source: ); adds more information about the graph rewrite operation; adds a performance guide; adds exception information; adds more clarification on the loss_scale argument. A gist with the rendered docstring is here for ease of review. Thank you! Any feedback or criticism is welcome.
tensorflow/tensorflow
TF 2.0 API Docs: tf.keras.layers.LSTM
Bug
URL(s) with the issue: Suggestion: where applicable, the documentation for this should be consistent with the base class tf.keras.layers.RNN and the other derived classes. Description of issue (what needs changing): Clear description: 1. The use of backticks can be made more consistent and in line with the documentation style guide ("Writing about code"); for example, the values True and False in the descriptions are not surrounded by backticks as recommended by the documentation style guide. Correct links: 1. The link to the source code at python/keras/layers/recurrent_v2.py is incorrect; it points to , which is a 404 page. The correct link for master should be . Parameters defined: 1. As with the "Clear description" section above, the use of backticks can be made more consistent and in line with the documentation style guide ("Writing about code"). 2. The __init__ parameter time_major is not defined. 3. The default value is specified within the text of some of the parameter definitions, but not all; for example, the definition for unroll is "Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.", whereas the definition for return_state is "Boolean. Whether to return the last state in addition to the output." 4. get_dropout_mask_for_cell: the first word of the parameter definition should be capitalized. 5. get_initial_state: parameter inputs is not defined. 6. reset_states: parameter states is not defined. Returns defined: return values are not defined for the following: get_initial_state, reset_dropout_mask, reset_recurrent_dropout_mask, reset_states. For the last three items, perhaps it is sufficiently clear that nothing will be returned. Raises listed and defined: no errors are defined. Usage example: no usage example is provided; however, the description does have links to relevant guides and tutorials, as follows:
used in the guide "The Keras functional API in TensorFlow"; used in the tutorials "Load text with tf.data", "Text classification with an RNN", and "Text generation with an RNN". Request visuals, if applicable: there are currently no visuals; LSTM itself might be too broad a topic to be dealt with comprehensively using visuals on this documentation page. Submit a pull request? (Are you planning to also submit a pull request to fix the issue?) No; I can fix the formatting and syntax issues, but populating the missing parameter definitions is currently beyond my level. Related issue: #26197.
tensorflow/tensorflow
TF 2.0 API Docs: tf.image.transpose
Bug
Doc link: Description of issue (what needs changing): Usage example: no usage example is provided.
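The missing example could be sketched in NumPy: for a single HWC image, the op swaps the height and width axes while leaving the channel axis untouched (a stand-in for the TF op, based on its documented behavior; the helper name is illustrative):

```python
import numpy as np

def image_transpose(image):
    # Swap the H and W axes of an HWC image; channels stay in place.
    return np.transpose(image, (1, 0, 2))

img = np.arange(6).reshape(2, 3, 1)  # a 2x3 image with 1 channel
out = image_transpose(img)           # shape becomes (3, 2, 1)
```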
tensorflow/tensorflow
TF 2.0 API Docs: tf.io.extract_jpeg_shape
Bug
Doc link: Description of issue (what needs changing): Correct links: the link is not provided; the path is written, but the href is not provided. Raises listed and defined: errors are not defined. Usage example: no usage example is provided.
tensorflow/tensorflow
TF 2.0 API Docs: tf.image.rot90
Bug
Doc link: Description of issue (what needs changing): Usage example: no usage example is provided. Request visuals, if applicable: no visuals are included; a visual example of the rotation could be added, although it is not necessary.
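A minimal example of the documented semantics (rotate an HWC image counter-clockwise by 90 degrees, k times) can be sketched with np.rot90; this is a NumPy stand-in, not the TF implementation:

```python
import numpy as np

def rot90(image, k=1):
    # Rotate counter-clockwise in the height/width plane, k times.
    return np.rot90(image, k=k, axes=(0, 1))

img = np.array([[[1], [2]],
                [[3], [4]]])  # 2x2 image, 1 channel
out = rot90(img)              # [[2, 4], [1, 3]] in the HW plane
```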
tensorflow/tensorflow
TF 2.0 API Docs: tf.math.maximum / tf.maximum
Bug
System information: TensorFlow version: 2.0. Doc link: Describe the documentation issue: Link: python/ops/gen_math_ops.py. The implementation of the code is in C++; however, the documentation references a generated Python file which we can't open or view. A representative implementation: perhaps we can add a representative implementation for such functions. Usage example: no usage example is provided.
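The "representative implementation" the record asks for could look like np.maximum, which shares the documented contract (elementwise maximum with broadcasting); this sketch is a stand-in, not the generated TF op:

```python
import numpy as np

# Elementwise maximum of x and y, with NumPy-style broadcasting.
x = np.array([0.0, -2.5, 3.0])
y = np.array([1.0, -3.0, 2.0])
result = np.maximum(x, y)  # [1.0, -2.5, 3.0]
```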
tensorflow/tensorflow
TF 1.14 API Docs: tf.train.experimental.enable_mixed_precision_graph_rewrite
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: Description of issue (what needs changing): Clear description: should explain the graph rewrite algorithm well (the black/white/grey lists of ops, and where to see the lists of ops); overall, the language can be clearer and more precise; minor issues with the formatting. Correct links: the links present are all correct. Parameters defined: briefly explain what the value "dynamic" (the default value for loss_scale) does, and link to the symbol for that. Returns defined: return values are defined properly. Raises listed and defined: exceptions are neither listed nor explained. Usage example: no usage example; I can provide a simple code snippet. Additionally, I want to provide a Colab notebook to demonstrate the increase in speed without a negative impact on accuracy (on CIFAR-10, for example). Request visuals, if applicable: an overview of the mixed-precision process flow. Submit a pull request? Yes, I have submitted a PR; the PR is here: #29249.
tensorflow/tensorflow
TF 2.0 API Docs: tf.complex
Bug
URL(s) with the issue: Description of issue (what needs changing): Raises listed and defined: no. Submit a pull request? #29237 — I will.
tensorflow/tensorflow
Wrong color documented in explanation (TensorFlow Lite Android)
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example: Description of issue (what needs changing): Clear description: for example, why should someone use this method? How is it useful? Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined? For example: Usage example: is there a usage example? Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflow/tensorflow
Fix Code of Conduct link in covenant badge
Bug
cc @wicke @bhack
tensorflow/tensorflow
GPU OOM error when using Keras and a distribution strategy
Bug
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): tf-nightly-gpu 1.14.1.dev20190524. Python version: 3.7.3. CUDA/cuDNN version: CUDA 10, cuDNN 7.4.2.24. GPU model and memory: 4 x NVIDIA V100. Describe the current behavior: using tf.distribute.MirroredStrategy together with Keras to train a model as described in results in a GPU out-of-memory error appearing after several epochs of training. We excluded our custom-written code as the source of the memory leak and made sure that the model actually fits into memory with enough headroom. It seems that either tf.data or tf.keras.metrics has a memory leak that starts showing up after several epochs of training and evaluation. Describe the expected behavior: TensorFlow doesn't throw an OOM error. Code to reproduce the issue: unfortunately, I cannot give a concrete code example to reproduce this issue, since the memory leak appears anytime between 10 min and 12 h of training, though I am happy to provide more information and would be eager to get suggestions on how to properly debug this problem. Other info / logs: Python traceback:

    File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py", line 644, in fit
      use_multiprocessing=use_multiprocessing
    File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_distributed.py", line 899, in fit
      validation_freq=validation_freq
    File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_distributed.py", line 149, in fit_distributed
      steps_name='steps_per_epoch'
    File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_arrays.py", line 409, in model_iteration
      steps_name='validation_steps'
    File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_arrays.py", line 274, in model_iteration
      batch_outs = f(actual_inputs)
    File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/backend.py", line 3351, in __call__
      run_metadata=self.run_metadata
    File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py", line 1458, in __call__
      run_metadata_ptr
    tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
      (0) Resource exhausted: Failed to allocate memory for the batch of component 0
          [[{{node MultiDeviceIteratorGetNextFromShard}}]]
          Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
          [[RemoteCall]]
          [[IteratorGetNext_7]]
      (1) Resource exhausted: Failed to allocate memory for the batch of component 0
          [[{{node MultiDeviceIteratorGetNextFromShard}}]]
          [[RemoteCall]]
          [[IteratorGetNext_7]]
          [[metrics_4/categorical_accuracy/Identity_2/_3447]]
    0 successful operations. 3 derived errors ignored.
tensorflow/tensorflow
TF 2.0 API Docs: tf.queue.FIFOQueue
Bug
URL(s) with the issue: Description of issue (what needs changing): some methods do not provide a list of raised errors. Raises listed and defined: dequeue, dequeue_many, dequeue_up_to, enqueue, enqueue_many. Submit a pull request? I will. (Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.)
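The error cases the record wants documented can be illustrated with a plain-Python FIFO sketch. This is not tf.queue.FIFOQueue (which blocks and raises TF errors such as tf.errors.OutOfRangeError on a closed, empty queue); the class and exception types here are stand-ins chosen to show the kinds of failure modes a "Raises" section would list.

```python
from collections import deque

class FifoQueueSketch:
    # First-in-first-out queue with explicit error cases.
    def __init__(self):
        self._items = deque()
        self._closed = False

    def enqueue(self, item):
        if self._closed:
            raise RuntimeError("enqueue on a closed queue")
        self._items.append(item)

    def dequeue(self):
        if not self._items:
            raise IndexError("dequeue on an empty queue")
        return self._items.popleft()

    def close(self):
        self._closed = True

q = FifoQueueSketch()
q.enqueue("a")
q.enqueue("b")
first = q.dequeue()  # "a" comes out before "b"
```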
tensorflow/tensorflow
TF 2.0 API Docs: tf.dtypes.cast
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: Description of issue (what needs changing): Usage example (is there a usage example?): the example

    x = tf.constant([1.8, 2.2], dtype=tf.float32)
    tf.cast(x, tf.int32)  # [1, 2], dtype=tf.int32

is not correct; it needs to change tf.cast to tf.dtypes.cast. This is the correct example:

    x = tf.constant([1.8, 2.2], dtype=tf.float32)
    tf.dtypes.cast(x, tf.int32)  # [1, 2], dtype=tf.int32

Submit a pull request? (Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.)
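The numeric behavior shown in the example above (float-to-int casts truncate toward zero) can be checked without TensorFlow, since np.ndarray.astype behaves the same way for these values; this is a NumPy stand-in, not the TF op:

```python
import numpy as np

# Float-to-int casting truncates toward zero: 1.8 -> 1, 2.2 -> 2.
x = np.array([1.8, 2.2], dtype=np.float32)
casted = x.astype(np.int32)  # [1, 2]
```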
tensorflow/tensorflow
TF 2.0 API Docs: tf.keras.layers.Conv2D
Bug
URL(s) with the issue: Description of issue (what needs changing): Correct links: is incorrect; it gives a 404 error. Parameters defined: **kwargs is not defined. Raises listed and defined: errors are not defined. Usage example: no usage example is provided. Request visuals, if applicable: no visuals are included.
tensorflow/tensorflow
How to use interactive_graphviz for XLA
Bug
In the XLA docs tutorial, it is suggested that "/tmp/foo will contain the HLO before and after optimizations for each HLO module that's run. You can read this as-is, or you can visualize it using tensorflow/compiler/xla/tools:interactive_graphviz", but I cannot locate this binary.
tensorflow/tensorflow
tf.function spuriously fails for branched super() call
Bug
tf.function fails spuriously under Python 3.7.3 for the following example:

    import tensorflow as tf

    class Base(tf.Module):
        def __call__(self, x):
            return x + 1

    class Sub(Base):
        def __call__(self, x):
            return super().__call__(x) if True else 1

    @tf.function
    def test():
        return Sub()(tf.constant(42))

    print(test())

(Colab: ) This produces the following error:

    RuntimeError: in converted code:

        bug.py:16 test
            return Sub()(tf.constant(42))
        bug.py:12 __call__
            return super().__call__(x) if True else 1

        RuntimeError: super(): no arguments

Observations: everything works correctly without the tf.function decoration; the branch in __call__ seems necessary to trigger the bug (skipping the branch doesn't trigger it, and replacing True with False doesn't trigger it); the bug can be triggered even if the condition evaluates to False, for example by replacing True with a condition on x that evaluates to False for x = 42; replacing tf.constant(42) with 42 doesn't trigger the bug. Tested on the TensorFlow 2.0 nightly (2.0.0.dev20190529) on Ubuntu 16.04 with Python 3.7.3.
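A plain-Python sketch of the usual workaround for "RuntimeError: super(): no arguments": pass the class and instance to super() explicitly, so no implicit __class__ closure cell is needed. (That the cell is lost when AutoGraph recompiles the method is an assumption about the root cause here, not something the report confirms; the classes below mirror the repro without TensorFlow.)

```python
class Base:
    def __call__(self, x):
        return x + 1

class Sub(Base):
    def __call__(self, x):
        # Explicit two-argument form instead of bare super(): this does not
        # rely on the compiler-provided __class__ cell.
        return super(Sub, self).__call__(x) if True else 1

result = Sub()(41)  # 42
```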
tensorflow/tensorflow
Keras LSTM does not work with tf.distribute (2.0)
Bug
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): minor tweaks to tutorial code. OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04. TensorFlow version (use command below): tf-nightly-gpu-2.0-preview. Python version: 3.6. CUDA/cuDNN version: 10.0 / 7.4. I copy-pasted this tutorial code ("MNIST distributed training with TF 2.0"), but used tf.distribute.MirroredStrategy instead of MultiWorker. It worked. Then I changed the model architecture to a simple Embedding + LSTM + Dense architecture, and it broke with the following error:

    Cannot place the graph because a reference or resource edge connects colocation groups with
    incompatible assigned devices: /job:worker/replica:0/task:0/device:GPU:0 vs
    /job:worker/replica:0/task:0/device:CPU:0. The edge src node is while_22/Exit_100, and the
    dst node is while_0_RetVal
      [[node sequential/lstm/StatefulPartitionedCall (defined at tf2_multiworker_tutorial_main.py:109)]]

This is executed on a remote cluster (single machine with 2 GPUs). Note that I've been seeing this error ever since the initial 2.0 alpha release. The code is as follows:

    import tensorflow as tf

    BUFFER_SIZE = 10000
    BATCH_SIZE = 64
    LEARNING_RATE = 1e-4

    def input_fn(mode, input_context=None):
        max_seq_len = 3
        rnn_dataset = tf.data.Dataset.range(10).repeat(10).shuffle(BUFFER_SIZE).map(
            lambda x: (tf.ones(shape=[max_seq_len], dtype=tf.int64),
                       tf.ones(shape=[max_seq_len], dtype=tf.int64)))
        if input_context:
            rnn_dataset = rnn_dataset.shard(input_context.num_input_pipelines,
                                            input_context.input_pipeline_id)
        return rnn_dataset.batch(BATCH_SIZE)

    def model_fn(features, labels, mode):
        vocab_size = 100
        embed_size = 16
        state_size = 7
        model = tf.keras.Sequential([
            tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embed_size),
            tf.keras.layers.LSTM(units=state_size, return_sequences=True),
            tf.keras.layers.Dense(10, activation='softmax')])
        logits = model(features, training=False)
        if mode == tf.estimator.ModeKeys.PREDICT:
            return tf.estimator.EstimatorSpec(tf.estimator.ModeKeys.PREDICT,
                                              predictions={'logits': logits})
        optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=LEARNING_RATE)
        loss = tf.keras.losses.SparseCategoricalCrossentropy(
            from_logits=True, reduction=tf.keras.losses.Reduction.NONE)(labels, logits)
        loss = tf.reduce_sum(loss) * (1. / BATCH_SIZE)
        if mode == tf.estimator.ModeKeys.EVAL:
            return tf.estimator.EstimatorSpec(mode, loss=loss)
        return tf.estimator.EstimatorSpec(
            mode=mode, loss=loss,
            train_op=optimizer.minimize(loss, tf.compat.v1.train.get_or_create_global_step()))

    def main():
        strategy = tf.distribute.MirroredStrategy()
        config = tf.estimator.RunConfig(train_distribute=strategy, log_step_count_steps=1)
        classifier = tf.estimator.Estimator(model_fn=model_fn, model_dir='/tmp/multiworker',
                                            config=config)
        tf.estimator.train_and_evaluate(
            classifier,
            train_spec=tf.estimator.TrainSpec(input_fn=input_fn, max_steps=10),
            eval_spec=tf.estimator.EvalSpec(input_fn=input_fn))

    if __name__ == '__main__':
        main()

Again, the common theme I've observed is that if tf.keras.layers.LSTM is part of my model and I'm using tf.distribute, it breaks with this error; otherwise, it works just fine.
tensorflow/tensorflow
TF 2.0: cannot use recurrent_dropout with LSTMs/GRUs
Bug
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no, a one-line modification to a stock example. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 14.04. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): tensorflow-gpu 2.0.0-alpha0 (also fails with every other TF 2.0 build I have explored). Python version: 3.6. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: tried multiple. GPU model and memory: tried multiple. Describe the current behavior: the program crashes with a TypeError, as below:

    TypeError: An op outside of the function building code is being passed
    a "Graph" tensor. It is possible to have Graph tensors
    leak out of the function building context by including a
    tf.init_scope in your function building code.
    For example, the following function will fail:
      @tf.function
      def has_init_scope():
        my_constant = tf.constant(1.)
        with tf.init_scope():
          added = my_constant * 2
    The graph tensor has name: encoder/unified_gru/ones_like:0

This occurs when trying to backprop the gradients through the LSTM/GRU with recurrent dropout enabled. Describe the expected behavior: no error. Code to reproduce the issue: since this problem shows up at training time, one needs to have the entire training pipeline (dataset, model, etc.) set up to demonstrate this bug. As a result, I used the neural machine translation tutorial from TensorFlow and modified their model to include recurrent dropout. The entire code can be found in this Colab notebook; run the code blocks all the way till the block where we're training the model to see the bug. Other info / logs:

    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>()
          8
          9 for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
    ---> 10     batch_loss = train_step(inp, targ, enc_hidden)
         11     total_loss += batch_loss
         12

    6 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
        436       # Lifting succeeded, so variables are initialized and we can run the
        437       # stateless function.
    --> 438       return self._stateless_fn(*args, **kwds)
        439     else:
        440       canon_args, canon_kwds = self._canonicalize_function_inputs(args, kwds)

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
       1286     """Calls a graph function specialized to the inputs."""
       1287     graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
    -> 1288     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _filtered_call(self, args, kwargs)
        572     """
    --> 573     return self._call_flat(
        574         [t for t in nest.flatten((args, kwargs))
        575          if isinstance(t, (ops.Tensor, resource_variable_ops.ResourceVariable))])

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _call_flat(self, args)
        625     # Only need to override the gradient in graph mode and when we have outputs.
        626     if context.executing_eagerly() or not self.outputs:
    --> 627       outputs = self._inference_function.call(ctx, args)
        628     else:
        629       self._register_gradient()

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
         68     if any(ops._is_keras_symbolic_tensor(x) for x in inputs):
    ---> 69       raise core._SymbolicException(...)
         70     raise e
         71   # pylint: enable=protected-access

    TypeError: An op outside of the function building code is being passed a "Graph" tensor.
    ... The graph tensor has name: encoder/unified_gru/ones_like:0
tensorflow/tensorflow
tf.keras predict gets stuck with a Sequence when using multiprocessing
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.2. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 1.13.1. Python version: 3.6.8. Bazel version (if compiling from source): GCC/compiler version (if compiling from source): CUDA/cuDNN version: 10.0. GPU model and memory: Titan. You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with: 1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)" 2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)". Describe the current behavior: hi, when using tf.keras with a custom Sequence, the program hangs during predict with multiprocessing. I was able to reproduce the issue with a simple NN that contains a single Dense layer. This happens after setting the weights of the layer and running predict with multiprocessing; when commenting out the set_weights line, or running with multithreading, the program does not hang. The issue also exists in 1.14.0-rc0; the same code works OK with TensorFlow 1.12.0 and 2.0.0a0. Code to reproduce the issue:

    import numpy as np
    from tensorflow import keras

    input_size = 3
    dense_output = 2
    num_of_samples = 1000
    batch_size = 2
    num_of_batches = 5

    class DummySequence(keras.utils.Sequence):
        def __len__(self):
            return num_of_samples // batch_size

        def __getitem__(self, index):
            data = [np.full(shape=input_size, fill_value=index * batch_size + i)
                    for i in range(batch_size)]
            labels = [np.full(shape=dense_output, fill_value=(index * batch_size + i) * input_size)
                      for i in range(batch_size)]
            return np.stack(data), np.stack(labels)

    x = keras.layers.Input(shape=(input_size,))
    dense_layer = keras.layers.Dense(dense_output)
    y = dense_layer(x)
    model = keras.Model(x, y)
    # remove comment in tf 1.12:
    # model.compile(optimizer='sgd', loss=keras.losses.mean_squared_error)
    shapes = [v.shape for v in dense_layer.weights]
    dense_layer.set_weights([np.full(shape=shapes[0], fill_value=1.0),
                             np.full(shape=shapes[1], fill_value=0.0)])
    seq = DummySequence()
    workers = 5
    multiprocessing = True  # works with multithreading: multiprocessing=False
    print('running predict with multiprocessing={}'.format(multiprocessing))
    res = model.predict(seq, workers=workers,
                        use_multiprocessing=multiprocessing, steps=num_of_batches)
    print('predict results: {}\nresult:\n{}'.format(len(res), res))

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback; large logs and files should be attached.
tensorflow/tensorflow
TF 2.0 API Docs: some "Defined in" links are broken
Bug
URL(s) with the issue: (also includes, but probably not limited to, other recurrent Keras layers). Description of issue (what needs changing): it seems that the API docs are generated using a tf-nightly build (master branch); however, the links that define the source code of the API endpoints lead to the tensorflow-2.0.0-alpha0 build (r2.0 branch) while using the file structure of the master branch. This causes some 404 errors; see the example below. Clear description: this problem affects at least the documentation of all the recurrent tf.keras layers. For example, on the tf.keras.GRU page, the "Defined in" section leads to — this file does not exist in r2.0, but it exists in master. Correct links (is the link to the source code correct?): no; see the section above for details. Submit a pull request? I'm pretty sure that the doc-generation script is OK, but there's some kind of misconfiguration problem.
tensorflow/tensorflow
TFLite GPU-supported ops not working
Bug
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: OnePlus 3, Poco F1. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 1.13. Python version: 3.6.8. Bazel version (if compiling from source): nil. GCC/compiler version (if compiling from source): nil. CUDA/cuDNN version: nil. GPU model and memory: nil. Describe the current behavior: some of the GPU-supported TFLite ops do not run on the GPU properly (on the CPU they behave properly), and each time with a different combination; even with supported ops, the behavior of the model changes, and we are not even able to benchmark it to find the issue. Moreover, the fallback mechanism is also not followed if an op cannot be run on the GPU. Describe the expected behavior: TFLite GPU-supported ops must run on the GPU; if there are unsupported ops in the graph for which execution cannot be done, execution must fall back to the CPU. Code to reproduce the issue: we have tried appending some nodes on top of the DeepLab GPU-converted model; all appended nodes are still ops supported by the GPU delegate. We have attached with this issue the graph and the error log from trying to benchmark the TFLite model. Attachment: error log. Command:

    adb shell taskset f0 /data/local/tmp/benchmark_model \
      --graph=/data/local/tmp/ret_9_27.tflite --enable_op_profiling=true --use_gpu=true

Output:

    Min num runs: [50]
    Min runs duration (seconds): [1]
    Inter-run delay (seconds): [-1]
    Num threads: [1]
    Benchmark name: []
    Output prefix: []
    Min warmup runs: [1]
    Min warmup runs duration (seconds): [0.5]
    Graph: [/data/local/tmp/ret_9_27.tflite]
    Input layers: []
    Input shapes: []
    Use nnapi: [0]
    Use legacy nnapi: [0]
    Use gpu: [1]
    Allow fp16: [0]
    Enable op profiling: [1]
    Loaded model /data/local/tmp/ret_9_27.tflite
    resolved reporter
    INFO: Initialized TensorFlow Lite runtime.
    INFO: Created TensorFlow Lite delegate for GPU.
    ERROR: TfLiteGpuDelegate Prepare: dimensions cannot be reduced to linear
    ERROR: Node number 93 (TfLiteGpuDelegate) failed to prepare.
    Failed to apply GPU delegate.
    Aborted

File to reproduce the issue: retest_9_27.tflite.zip
tensorflow/tensorflow
Failed to get convolution algorithm when I use conv
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Debian
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13.1, 1.14.0, 2.0.0a0, 1.9.0
- Python version: 3.7, 3.6
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA 10 with cuDNN 7.5.0 / 7.4.2 / 7.4.1 / 7.4.0 (CUDA 9 can't support the 2060)
- GPU model and memory: RTX 2060, 6 GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior:

Describe the expected behavior:

Code to reproduce the issue (conda activate base, 2.0.0-alpha0):

    2019-05-29 19:58:31.543654: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2019-05-29 19:58:31.557759: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
    2019-05-29 19:58:31.670769: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1009] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-05-29 19:58:31.672348: I tensorflow/compiler/xla/service/service.cc:162] XLA service 0x55c9b57a0a60 executing computations on platform CUDA. Devices:
    2019-05-29 19:58:31.672364: I tensorflow/compiler/xla/service/service.cc:169]   StreamExecutor device (0): Graphics Device, Compute Capability 7.5
    2019-05-29 19:58:31.693201: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2904000000 Hz
    2019-05-29 19:58:31.693619: I tensorflow/compiler/xla/service/service.cc:162] XLA service 0x55c9b580db30 executing computations on platform Host. Devices:
    2019-05-29 19:58:31.693637: I tensorflow/compiler/xla/service/service.cc:169]   StreamExecutor device (0)
    2019-05-29 19:58:31.693791: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1467] Found device 0 with properties: name: Graphics Device major: 7 minor: 5 memoryClockRate(GHz): 1.71 pciBusID: 0000:01:00.0 totalMemory: 5.76GiB freeMemory: 5.17GiB
    2019-05-29 19:58:31.693804: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1546] Adding visible gpu devices: 0
    2019-05-29 19:58:31.693844: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
    2019-05-29 19:58:31.694402: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1015] Device interconnect StreamExecutor with strength 1 edge matrix:
    2019-05-29 19:58:31.694413: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1021]      0
    2019-05-29 19:58:31.694419: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1034] 0:   N
    2019-05-29 19:58:31.694516: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1149] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4990 MB memory) -> physical GPU (device: 0, name: Graphics Device, pci bus id: 0000:01:00.0, compute capability: 7.5)
    Number of training examples: 60000
    Number of test examples: 10000
    Epoch 1/5
    2019-05-29 19:58:32.786674: W tensorflow/core/framework/model.h:202] Encountered a stop event that was not preceded by a start event.
    2019-05-29 19:58:36.114172: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
    2019-05-29 19:58:36.288348: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
    2019-05-29 19:58:36.934479: E tensorflow/stream_executor/cuda/cuda_dnn.cc:338] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
    2019-05-29 19:58:36.945020: E tensorflow/stream_executor/cuda/cuda_dnn.cc:338] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
    2019-05-29 19:58:36.945127: W tensorflow/core/common_runtime/base_collective_executor.cc:214] BaseCollectiveExecutor::StartAbort Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node conv2d/Conv2D]]

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
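A frequently reported mitigation for "Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR" on RTX-class cards is to stop TensorFlow from pre-allocating the whole GPU at startup. The configuration sketch below is an assumption about this particular report, not a confirmed fix for it; note also that the exact module path of the TF 2.x switch moved around during the 2.0 alpha/beta cycle.

```python
import tensorflow as tf

# TF 1.x: let the session grow GPU memory on demand instead of grabbing it all.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)

# TF 2.x releases expose the equivalent switch like this (commented out here,
# since the two APIs do not coexist in one installation):
# for gpu in tf.config.experimental.list_physical_devices('GPU'):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

With memory growth enabled, cuDNN initialization no longer competes with TensorFlow's allocator for the remaining device memory, which is the usual explanation offered for this failure mode on 6 GB cards.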
tensorflow/tensorflow
Mixed precision: LossScaleOptimizer doesn't work with Keras
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- TensorFlow installed from (source or binary): PyPI
- TensorFlow version (use command below): 1.13.1
- Python version: 3.5.2

Describe the current behavior:
tf.contrib.mixed_precision.loss_scale_optimizer.LossScaleOptimizer (l.102) doesn't implement the variables() method, because it doesn't call the base class __init__ nor override the method, so it fails when used within Keras model training, with the following stack trace on a model.fit_generator call:

    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/training.py", line 1426, in fit_generator
      initial_epoch=initial_epoch)
    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/training_generator.py", line 164, in model_iteration
      callbacks._call_begin_hook(mode)
    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/callbacks.py", line 212, in _call_begin_hook
      self.on_train_begin()
    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/callbacks.py", line 279, in on_train_begin
      callback.on_train_begin(logs)
    File "/usr/local/lib/python3.5/dist-packages/horovod-0.16.0-py3.5-linux-x86_64.egg/horovod/keras/callbacks.py", line 30, in on_train_begin
      self.backend.get_session().run(bcast_op)
    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/backend.py", line 482, in get_session
      _initialize_variables(session)
    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/backend.py", line 749, in _initialize_variables
      variables = _get_variables(get_graph())
    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/backend.py", line 743, in _get_variables
      variables.update(opt.optimizer.variables())
    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/optimizer.py", line 946, in variables
      optimizer_variables = [v for v in self._non_slot_variables()]
    File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/optimizer.py", line 1025, in _non_slot_variables
      return self._non_slot_dict.values()
    AttributeError: 'LossScaleOptimizer' object has no attribute '_non_slot_dict'

Describe the expected behavior:
Wrapping the optimizer in a LossScaleOptimizer should work transparently with Keras models, as it implements the tf.train.Optimizer base class. To solve this problem, there's a need to either pass through the underlying optimizer's variables() method (together with passing the LossScaleManager's variables too) or fill the slot and non-slot variables some other way.

Workaround example:

    class MyExponentialUpdateLossScaleManager(ExponentialUpdateLossScaleManager):
        def variables(self):
            return [self._loss_scale, self._num_good_steps, self._num_bad_steps]

    class MyLossScaleOptimizer(LossScaleOptimizer):
        def variables(self):
            return self._opt.variables() + self._loss_scale_manager.variables()

Code to reproduce the issue:

    from tensorflow.train import AdamOptimizer
    from tensorflow.contrib.mixed_precision import LossScaleOptimizer, ExponentialUpdateLossScaleManager

    optimizer = AdamOptimizer()
    loss_scale_manager = ExponentialUpdateLossScaleManager(init_loss_scale=2 ** 32, incr_every_n_steps=1000)
    optimizer = LossScaleOptimizer(optimizer, loss_scale_manager)
    model.compile(optimizer=optimizer, ...)

Full example:

    import numpy as np
    import tensorflow as tf
    from tensorflow.train import AdamOptimizer
    from tensorflow.contrib.mixed_precision import LossScaleOptimizer, ExponentialUpdateLossScaleManager

    inputs = tf.keras.layers.Input(shape=(16,))
    outputs = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.models.Model(inputs, outputs)
    optimizer = AdamOptimizer()
    # works without these two lines below
    loss_scale_manager = ExponentialUpdateLossScaleManager(init_loss_scale=2 ** 32, incr_every_n_steps=1000)
    optimizer = LossScaleOptimizer(optimizer, loss_scale_manager)
    model.compile(optimizer=optimizer, loss='binary_crossentropy')
    model.fit(np.zeros((16, 16)), np.zeros((16,)))
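The state the missing variables() method should expose is just the loss scale plus the good/bad step counters that the exponential-update manager maintains. The following is an illustrative pure-Python sketch of that update rule (class and parameter names are hypothetical, not the tf.contrib API): the scale is multiplied after N consecutive finite-gradient steps and divided whenever an overflow is seen, and the step is skipped on overflow.

```python
class ExponentialLossScale:
    """Sketch of exponential loss-scale bookkeeping (hypothetical names)."""

    def __init__(self, init_loss_scale=2.0 ** 15, incr_every_n_steps=1000, factor=2.0):
        self.loss_scale = init_loss_scale
        self.incr_every_n_steps = incr_every_n_steps
        self.factor = factor
        self.num_good_steps = 0

    def update(self, grads_finite):
        """Returns True if this step's gradients should be applied."""
        if grads_finite:
            self.num_good_steps += 1
            if self.num_good_steps >= self.incr_every_n_steps:
                # Long run of healthy steps: try a larger scale.
                self.loss_scale *= self.factor
                self.num_good_steps = 0
            return True
        # Overflow: back off the scale and skip the step.
        self.loss_scale = max(1.0, self.loss_scale / self.factor)
        self.num_good_steps = 0
        return False
```

In the real tf.contrib classes these counters live in TF variables (the `_loss_scale`, `_num_good_steps`, `_num_bad_steps` attributes used by the workaround above), which is exactly the state a proper variables() implementation would return.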
tensorflow/tensorflow
Entity could not be transformed and will be staged without change
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 2.0 alpha
- Python version: 3.6
- CUDA/cuDNN version: 10
- GPU model and memory: GTX 2080 Ti

Hi there, could you please have a look at the issue? A customized network module in Keras cannot work as far as gradient backpropagation is concerned.

Logs:

    W0529 19:29:46.658907 140071302989632 tf_logging.py:161] Entity <...> could not be transformed and will be staged without change. Error details can be found in the logs when running with the env variable AUTOGRAPH_VERBOSITY >= 1. Please report this to the AutoGraph team. Cause: Unexpected error transforming <...>. If you believe this is due to a bug, please set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output when filing the bug report. Caused by: Node has ctx unset.
    W0529 19:29:52.344057 140071302989632 tf_logging.py:161] Entity <...> could not be transformed and will be staged without change. Error details can be found in the logs when running with the env variable AUTOGRAPH_VERBOSITY >= 1. Please report this to the AutoGraph team. Cause: Unexpected error transforming <...>. If you believe this is due to a bug, please set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output when filing the bug report. Caused by: Node has ctx unset.
    W0529 19:29:52.960978 140071302989632 optimizer_v2.py:928] Gradients do not exist for variables ['conv2d_24/kernel:0', 'conv2d_24/bias:0', 'conv2d_25/kernel:0', 'conv2d_25/bias:0', 'conv2d_26/kernel:0', 'conv2d_26/bias:0', 'conv2d_27/kernel:0', 'conv2d_27/bias:0', 'conv2d_28/kernel:0', 'conv2d_28/bias:0', 'conv2d_29/kernel:0', 'conv2d_29/bias:0', 'conv2d_30/kernel:0', 'conv2d_30/bias:0', 'conv2d_31/kernel:0', 'conv2d_31/bias:0', 'conv2d_32/kernel:0', 'conv2d_32/bias:0', 'conv2d_33/kernel:0', 'conv2d_33/bias:0', 'conv2d_34/kernel:0', 'conv2d_34/bias:0', 'conv2d_35/kernel:0', 'conv2d_35/bias:0', 'conv2d_36/kernel:0', 'conv2d_36/bias:0', 'conv2d_37/kernel:0', 'conv2d_37/bias:0', 'conv2d_38/kernel:0', 'conv2d_38/bias:0', 'conv2d_39/kernel:0', 'conv2d_39/bias:0'] when minimizing the loss.

x
tensorflow/tensorflow
All implicitly derived inputs to subclassed Models must be tf.Tensors; found SparseTensor
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Mac OS X 10.14
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: No
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v1.12.1-2821-gc5b8e15064 2.0.0-dev20190527
- Python version: 2.7
- Bazel version (if compiling from source): No
- GCC/compiler version (if compiling from source): No
- CUDA/cuDNN version: No
- GPU model and memory: No

Describe the current behavior:
An exception is raised when trying to feed sparse/ragged tensors to Keras model.fit, but that is an ordinary pipeline for Estimators. Besides, the method mentioned in the error (self._add_inputs) does not exist in the TF source code.

Describe the expected behavior:
Sparse and ragged tensors should be acceptable inputs for Keras models.

Code to reproduce the issue:

    import numpy as np
    import tensorflow as tf

    feature_dim = 5
    random = np.random.random((10, 32, feature_dim))

    # This is my wish: ragged input to a sequence feature column
    features = tf.RaggedTensor.from_tensor(random, ragged_rank=1)
    labels = tf.reduce_sum(tf.expand_dims(features, axis=2), axis=1)

    # This is what should work right now
    indices = tf.where(tf.not_equal(random, 0.0))
    values = tf.gather_nd(random, indices)
    features = tf.SparseTensor(indices, values, random.shape)
    labels = tf.sparse.reduce_sum(features, axis=1, keepdims=True)

    dataset = tf.data.Dataset.from_tensor_slices(({'feature': features}, labels)).batch(32)

    class MyModel(tf.keras.Model):

        def __init__(self):
            super(MyModel, self).__init__()
            self.features = tf.keras.experimental.SequenceFeatures(
                [tf.feature_column.sequence_numeric_column('feature', shape=(feature_dim,))])
            self.dense_1 = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(32, activation='relu'))
            self.dense_2 = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1))

        def call(self, inputs):
            outputs = self.features(inputs)
            outputs = self.dense_1(outputs)
            outputs = self.dense_2(outputs)
            return outputs

        def compute_output_shape(self, input_shape):
            shape = tf.TensorShape(input_shape).as_list()
            shape[-1] = 1
            return tf.TensorShape(shape)

    model = MyModel()
    model.compile(optimizer='adam', loss='mse')
    model.fit_generator(dataset, epochs=5)

Other info / logs:

    Epoch 1/5
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input> in <module>()
         40 model = MyModel()
         41 model.compile(optimizer='adam', loss='mse')
    ---> 42 model.fit_generator(dataset, epochs=5)

    /usr/local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training.pyc in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
       1175         shuffle=shuffle,
       1176         initial_epoch=initial_epoch,
    -> 1177         steps_name='steps_per_epoch')
       1178
       1179   def evaluate_generator(self, ...)

    /usr/local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training_generator.pyc in model_iteration(model, data, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch, mode, batch_size, steps_name, **kwargs)
        262
        263       # is deferred (not model-compiled)
    --> 264       batch_outs = batch_function(*batch_data)
        265       if not isinstance(batch_outs, list):
        266         batch_outs = [batch_outs]

    /usr/local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training.pyc in train_on_batch(self, x, y, sample_weight, class_weight, reset_metrics)
        895     x, y, sample_weights = self._standardize_user_data(
        896         x, y, sample_weight=sample_weight, class_weight=class_weight,
    --> 897         extract_tensors_from_dataset=True)
        898
        899     # If `self._distribution_strategy` is True, then we are in a replica context

    /usr/local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training.pyc in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
       2341           tf.Tensors found: {}. To add non-tf.Tensor inputs, please call
       2342           self._add_inputs([tf.keras.Input/SparseInput/RaggedInput, ...])
    -> 2343           in your subclassed Model object'.format(input_tensors))
       2344
       2345     # Build the model using the retrieved inputs (value or symbolic).

    ValueError: All implicitly derived inputs to subclassed Models must be tf.Tensors. Found: SparseTensor(indices=tf.Tensor(
    [[ 0  0  0]
     [ 0  0  1]
     [ 0  0  2]
     ...
     [ 9 31  2]
     [ 9 31  3]
     [ 9 31  4]], shape=(1600, 3), dtype=int64), values=tf.Tensor([0.76254113 0.44757111 0.9459519 ... 0.64881651 0.2802026 0.79660244], shape=(1600,), dtype=float64), dense_shape=tf.Tensor([32 32  5], shape=(3,), dtype=int64)). To add non-tf.Tensor inputs, please call self._add_inputs([tf.keras.Input/SparseInput/RaggedInput, ...]) in your subclassed Model object.
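The tf.where + tf.gather_nd workaround in the reproduction builds exactly the (indices, values, dense_shape) triplet that tf.SparseTensor stores. As a plain-Python illustration of that COO conversion (function name is mine, not a TensorFlow API):

```python
def dense_to_sparse(dense):
    """Convert a 2-D nested list into COO form: (indices, values, dense_shape).

    Mirrors what tf.where(tf.not_equal(x, 0.0)) + tf.gather_nd(x, indices)
    produce before they are handed to tf.SparseTensor.
    """
    indices, values = [], []
    for i, row in enumerate(dense):
        for j, v in enumerate(row):
            if v != 0.0:            # only non-zero entries are materialized
                indices.append([i, j])
                values.append(v)
    dense_shape = [len(dense), len(dense[0]) if dense else 0]
    return indices, values, dense_shape
```

For example, `dense_to_sparse([[0.0, 1.5], [2.0, 0.0]])` yields indices `[[0, 1], [1, 0]]`, values `[1.5, 2.0]` and shape `[2, 2]` — the same three tensors that appear in the ValueError's SparseTensor repr above.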
tensorflow/tensorflow
There is an incorrect link in CONTRIBUTING.md
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue:

Description of issue (what needs changing):

Correct links. Is the link to the source code correct? No. In:

    1. Use tools and libraries installed directly on your system. Refer to the CPU-only developer Dockerfile and GPU developer Dockerfile for the required packages. Alternatively, use the ...

the links for "CPU-only developer Dockerfile" and "GPU developer Dockerfile" are 404.

Submit a pull request? I'd like to fix it.
tensorflow/tensorflow
Random seed not set in graph context of Dataset.map
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Jupyter notebook
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): stock
- TensorFlow version (use command below): b'v1.13.1-2-g09e3b09e69' 1.13.1
- Python version: 2 / 3
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior:
The random seed set via tf.set_random_seed(seed) is not set in the context in which the function passed to tf.data.Dataset.map is invoked, even in the single-threaded case.

Describe the expected behavior:
The random seed set via tf.set_random_seed(seed) should be set in the context in which the function passed to tf.data.Dataset.map is invoked, at least in the single-threaded case.

Code to reproduce the issue:

    import tensorflow as tf

    def seed_assert(elt):
        seed = tf.get_default_graph().seed
        print("seed is {}".format(seed))
        assert seed is not None, ("Random seed is not set; random graph operations "
                                  "added during mapping will not be reproducible")
        return elt

    seed = 37
    tf.set_random_seed(seed)
    ds = tf.data.Dataset.from_generator(lambda: (yield 0), tf.int64)
    seed_assert(None)    # can run here
    ds.map(seed_assert)  # fails inside Dataset.map

Other info / logs:
I originally saw this issue locally but was able to reproduce it on the Jupyter notebook provided by Google. Here are the logs of the error I see when running the above code:

    seed is 37
    seed is None
    ---------------------------------------------------------------------------
    AssertionError                            Traceback (most recent call last)
    <ipython-input> in <module>()
         15 seed_assert(None)
         16
    ---> 17 ds.map(seed_assert)

    [8 frames]
    <ipython-input> in seed_assert(elt)
          4     seed = tf.get_default_graph().seed
          5     print("seed is {}".format(seed))
    ----> 6     assert seed is not None, ("Random seed is not set; random graph operations added during mapping will not be reproducible")
          7     return elt
          8

    AssertionError: Random seed is not set; random graph operations added during mapping will not be reproducible
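Until the graph-level seed is propagated into the map function's graph, one workaround is to thread an explicit op-level seed into the mapped function yourself (in the real pipeline this would mean passing `seed=` to ops like tf.random_uniform inside the map function). The pure-Python sketch below stands in for that idea; `make_map_fn` and its parameter are hypothetical names for illustration, not TensorFlow APIs.

```python
import random

def make_map_fn(op_seed):
    """Build a map function whose randomness is seeded explicitly per op,
    so it stays reproducible even when no ambient graph seed is visible."""
    def map_fn(elt):
        rng = random.Random(op_seed)  # seeded locally, not via a graph-level seed
        return elt + rng.random()
    return map_fn
```

Two pipelines built with the same op seed then produce identical "random" offsets, which is the reproducibility the reporter loses when relying on tf.set_random_seed alone.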
tensorflow/tensorflow
Drastically different behavior between TF1 and TF2
Bug
I've noticed drastically different behavior of the following code between 2.0.0-alpha0 and 1.13.1:

    import numpy as np
    from tensorflow.keras.datasets import mnist
    from tensorflow.keras.utils import to_categorical
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import Adam

    (x_train_raw, y_train_raw), (x_test_raw, y_test_raw) = mnist.load_data()
    x_train = x_train_raw.reshape(60000, 784)
    x_test = x_test_raw.reshape(10000, 784)
    y_train = to_categorical(y_train_raw)
    y_test = to_categorical(y_test_raw)

    basic_net = Sequential([
        Dense(36, activation='relu', input_shape=(784,)),
        Dense(10, activation='softmax')])

    w_bio = np.load('w_bio.npy')
    basic_net.layers[0].set_weights([w_bio.transpose(), basic_net.layers[0].get_weights()[1]])
    basic_net.layers[0].trainable = False
    basic_net.compile(optimizer=Adam(lr=0.0001), metrics=['accuracy'], loss='categorical_crossentropy')
    basic_net.fit(x=x_train, y=y_train, epochs=15)

In 2.0.0-alpha0 the accuracy of the network consistently reaches 80% in the first few epochs. In 1.13.1 the accuracy consistently reaches only 20% after all 15 epochs. What is going on? Here is the file with the weights: w_bio.npy
tensorflow/tensorflow
There is no tensorflow/contrib/tpu/ops module in commit b211c7a
Bug
Sorry if I'm missing something — it's my first attempt at filing an issue.

In commit b211c7a, tensorflow/contrib/cmake/python_modules.txt contains (line 435):

    tensorflow/contrib/tpu
    tensorflow/contrib/tpu/ops
    tensorflow/contrib/tpu/profiler
    tensorflow/contrib/tpu/python

but this module (tensorflow/contrib/tpu/ops) is not in the directory tensorflow/contrib/tpu.
tensorflow/tensorflow
TF 2.0 beta: custom metric example from the documentation is wrong
Bug
Describe the current behavior:
I implemented the custom metric shown on this page: CategoricalTruePositives. I think it is full of bugs — it even has a comment saying "TODO: fixes". Anyway, here is what's particularly wrong about it:
- The accuracy of the NN approaches 99%, yet this metric says binary_true_positives: 8459.0000 (as shown on the website too). If the number of samples in MNIST is 50,000, then at least 45k of them should be true positives.
- It is unstable. I messed with it once and got 49k true positives, which makes total sense; then I reran it and it returned to 8k.

Describe the expected behavior:
The results are shown on the website: for a low loss of 0.03, the true positives should be close to 50k; however, they're shown to be only 8k.

Code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem.

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
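One plausible explanation for counts like 8459 instead of ~50k is state handling: a Keras-style metric accumulates across update_state calls and only reports per-epoch numbers if its state is reset between epochs (and if update_state really counts per-sample matches). The sketch below illustrates that stateful contract in plain Python; it is a simplified stand-in, not the documentation's CategoricalTruePositives code.

```python
class TruePositives:
    """Minimal stateful metric: accumulate, read, reset (Keras-style contract)."""

    def __init__(self):
        self.true_positives = 0

    def update_state(self, y_true, y_pred):
        # Count samples where both the label and the prediction are positive.
        for t, p in zip(y_true, y_pred):
            if t == 1 and p == 1:
                self.true_positives += 1

    def result(self):
        return self.true_positives

    def reset_states(self):
        # Without this call between epochs, result() mixes epochs together.
        self.true_positives = 0
```

If either the counting in update_state or the reset between epochs is wrong, the reported total can land far away from the dataset size, which matches the unstable 8k-vs-49k behavior described above.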
tensorflow/tensorflow
Error trying TensorFlow Lite: operations not supported by GPU delegate
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: yes, aarch64 Android 8.1
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.12.2
- Python version: 2.7
- Bazel version (if compiling from source): 0.22.0
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior:
I'm trying to run a C++ demo of TFLite OpenGL ES on my aarch64 Android 8.1 board. I have built the benchmark_model tool with Bazel; it runs deeplabv3_257_mv_gpu.tflite successfully on my device. Now I want to integrate a simple TFLite demo, similar to the benchmark, into my code, which is built by CMake. I spent some time extracting all the static libs of tensorflowlite from the bazel-out folder and linking them in my CMake build. I built the code successfully with the NDK standalone toolchain r17c, but when I run this new demo on my device it shows errors like:

    INFO: Created TensorFlow Lite delegate for GPU.
    Applying delegate for GPU.
    Next operations are not supported by GPU delegate:
    AVERAGE_POOL_2D: Expected 1 input tensor(s), but node has 0 runtime input(s).
    CONV_2D: Expected 1 input tensor(s), but node has 0 runtime input(s).
    CONV_2D: Expected 1 input tensor(s), but node has 2 runtime input(s).
    CONV_2D: Expected 1 input tensor(s), but node has 3 runtime input(s).
    DEPTHWISE_CONV_2D: Expected 1 input tensor(s), but node has 2 runtime input(s).
    DEPTHWISE_CONV_2D: Expected 1 input tensor(s), but node has 3 runtime input(s).
    RESIZE_BILINEAR: Expected 1 input tensor(s), but node has 2 runtime input(s).
    First 1 operations will run on the GPU, and the remaining 69 on the CPU.
    TfLiteGpuDelegate Prepare: ReadValue: value is a constant tensor 183
    Node number 70 (TfLiteGpuDelegate) failed to prepare.
    Failed to apply GPU delegate; delegate settings done.
    Node number 70 (TfLiteGpuDelegate) failed to prepare.
    Failed to apply GPU delegate.
    Failed to allocate tensors.

The model here is still deeplabv3_257_mv_gpu.tflite, which has been proven to work on my device. I have also tried building my new demo code with Bazel, and that performs correctly.

Describe the expected behavior:
Build the TFLite OpenGL delegate successfully with CMake, and have it perform correctly on my aarch64 board.

Code to reproduce the issue: use your official benchmark_model.cc.

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
tf.function with input_signature is slow on unseen sequence lengths
Bug
System information:
- Have I written custom code: Yes
- OS platform and distribution: Linux Ubuntu 16.04
- TensorFlow installed from: binary
- TensorFlow version: 2.0.0-dev20190526
- Python version: 3.6.6
- CUDA/cuDNN version: 10.0
- GPU model and memory: GTX 1060

Describe the current behavior:
When running a tf.function on 3-D inputs (a batch of sequences), I find that the execution is slow on unseen sequence lengths, even if a compatible input_signature is set. On a large graph this results in low GPU usage and growing CPU memory usage for several iterations, until most sequence lengths have been seen. It is as if new graphs were compiled internally, even though the function does not seem to be retraced. This effect does not affect eager mode or v1 graph mode, where the execution directly runs at its target speed and memory usage.

Describe the expected behavior:
tf.function with an input_signature should behave like graph mode, with constant memory usage and no warmup phase.

Code to reproduce the issue:
While this issue is very visible on large graphs, I tried to compile a small example to consistently show the effect:

    import time
    import itertools
    import random

    import tensorflow as tf

    def generate_token_based_shapes(num_tokens=4096):
        while True:
            length = random.randint(1, 100)
            batch_size = int(num_tokens / length)
            yield [batch_size, length]

    # Generate 500k tensors of shape [None, None, 512], but with a similar total size.
    shapes = list(itertools.islice(generate_token_based_shapes(), 500000))
    dataset = tf.data.Dataset.from_tensor_slices(shapes)
    dataset = dataset.shuffle(len(shapes))
    dataset = dataset.map(lambda shape: tf.zeros(tf.concat([shape, [512]], axis=0)))
    dataset = dataset.repeat()
    dataset = dataset.prefetch(1)

    # Define a model with some layers.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(1024),
        tf.keras.layers.Dense(1024),
        tf.keras.layers.Dense(1024),
        tf.keras.layers.Dense(1024),
        tf.keras.layers.Dense(1024)])

    @tf.function(input_signature=(tf.TensorSpec([None, None, 512], dtype=tf.float32),))
    def run_step(inputs):
        return model(inputs)

    seen_lengths = set()
    for x in dataset:
        length = x.shape[1]
        start = time.time()
        run_step(x)
        end = time.time()
        print(length in seen_lengths, end - start)
        seen_lengths.add(length)

Other info / logs:
The above code produces the following logs when run on CPU:

    False 0.43003296852111816
    False 0.11496973037719727
    False 0.11308979988098145
    False 0.11620664596557617
    False 0.11439895629882812
    False 0.11322546005249023
    True 0.095062255859375
    False 0.11357808113098145
    False 0.11438512802124023
    False 0.11338496208190918
    False 0.1123197078704834
    False 0.11295366287231445
    False 0.11250948905944824
    False 0.11576318740844727
    False 0.1139533519744873
    False 0.11278915405273438
    False 0.11090493202209473
    True 0.09256935119628906
    False 0.11287093162536621
    False 0.11374545097351074
    False 0.11446619033813477
    False 0.11277508735656738
    False 0.11354255676269531
    False 0.11325383186340332
    False 0.1137855052947998
    False 0.11451315879821777
    False 0.11423110961914062
    True 0.09340834617614746
    False 0.1146705150604248
    False 0.11285781860351562
    False 0.11371898651123047
    True 0.09309053421020508
    True 0.09239482879638672
    True 0.09140896797180176
    False 0.11467862129211426
    False 0.11377716064453125
    False 0.11178278923034668
    False 0.11260485649108887
    True 0.09450674057006836
    True 0.09363818168640137
    True 0.09272456169128418
    False 0.11517977714538574
    False 0.11325454711914062
    True 0.09257698059082031
    False 0.11360836029052734
    True 0.09241485595703125
    False 0.11343145370483398
    True 0.09368515014648438
    False 0.11366653442382812
    True 0.09125065803527832
    False 0.1126089096069336
    False 0.11182904243469238
    True 0.09548735618591309
    True 0.09283709526062012

When the length is unseen it takes about 0.113s, but 0.092s after that. On this example the effect is small, but I'm trying to train a Transformer model with tf.function and it takes very long for the training to reach full speed. The CPU memory usage also keeps growing during this warmup phase. The same model works well when integrated with tf.estimator; I'm trying to move from Estimator to a v2 custom loop.
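A common workaround for the per-new-shape warmup cost is to reduce the number of distinct shapes the function ever sees, by padding each sequence length up to a bucket boundary before calling the tf.function. The helper below sketches that bucketing (name and default are mine, for illustration): with buckets of 8, lengths 1..100 collapse into only 13 distinct padded lengths, so the warmup cost is paid once per bucket instead of once per length.

```python
def bucket_length(length, bucket_size=8):
    """Round a sequence length up to the next multiple of bucket_size,
    so tensors padded to the bucketed length share far fewer shapes."""
    return ((length + bucket_size - 1) // bucket_size) * bucket_size
```

In the reproduction above, this would mean padding `x` along axis 1 to `bucket_length(x.shape[1])` before calling `run_step`, trading a little wasted computation on padding for a much shorter warmup.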
tensorflow/tensorflow
Empty trainable_variables in Keras model (TF 2.0)
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 arch linux tensorflow instal from source or binary binary tensorflow version use command below v1 12 1 2504 g2be59a5191 2 0 0 dev20190522 python version 3 6 8 cuda cudnn version cuda version 10 0 130 gpu model and memory nvidia geforce gtx 1080 ti describe the current behavior I want to create unet use the keras model subclasse api the current code be python class unet keras model unet architecture concatenate encoder and decoder example direct usage testcode x tf one 1 512 512 3 u net unet input re 512 min re 4 kernel size 4 initial filter 64 filter cap 512 channel 3 y u net x print y shape testoutput 1 512 512 3 def init self input re min re kernel size initial filter filter cap channel use dropout encoder true use dropout decoder true dropout prob 0 3 encoder non linearity keras layers leakyrelu decoder non linearity keras layers relu super init layer specification self use dropout encoder use dropout encoder self use dropout decoder use dropout decoder self dropout probability dropout prob self encoder non linearity encoder non linearity self decoder non linearity decoder non linearity self kernel size kernel size encoder layer be a list of list each list be a block this make easy the creation of decoder self encoder layer self decoder layer self concat layer encoder creation encoder layer spec 128 256 512 512 512 512 512 512 decoder layer spec for I filter in enumerate encoder layer spec self encoder layer append self get encoder block filter use bn I 0 decoder creation decoder layer spec 512 512 512 512 512 256 128 for I filter in enumerate decoder layer spec self concat layer append keras layers concatenate self decoder layer append self get decoder block filter use dropout I 3 final layer initializer tf random normal initializer 0 0 0 02 self final layer keras layer conv2dtranspose channel self 
```python
    # ... (snippet continues from the model definition above)
    #     kernel_size, strides=(2, 2), padding='same',
    #     activation=keras.activations.tanh, kernel_initializer=initializer)

    def get_block(self, filters, conv_layer=None, use_bn=True, use_dropout=False,
                  non_linearity=keras.layers.LeakyReLU):
        initializer = tf.random_normal_initializer(0.0, 0.02)
        # conv2d
        block = [conv_layer(filters, self.kernel_size, strides=(2, 2), padding='same',
                            use_bias=False, kernel_initializer=initializer)]
        # batch normalization
        if use_bn:
            block.append(keras.layers.BatchNormalization())
        # dropout
        if use_dropout:
            block.append(keras.layers.Dropout(self.dropout_probability))
        # non linearity
        block.append(non_linearity())
        return block

    def get_encoder_block(self, filters, use_bn=True):
        return self.get_block(filters, conv_layer=keras.layers.Conv2D, use_bn=use_bn,
                              use_dropout=self.use_dropout_encoder,
                              non_linearity=self.encoder_non_linearity)

    def get_decoder_block(self, filters, use_bn=True, use_dropout=False):
        return self.get_block(filters, conv_layer=keras.layers.Conv2DTranspose, use_bn=use_bn,
                              use_dropout=self.use_dropout_decoder and use_dropout,
                              non_linearity=self.decoder_non_linearity)

    def call(self, inputs, training=True):
        # Encoder: evaluate encoder layers
        encoder_layers_eval = []
        x = inputs
        for block in self.encoder_layers:
            for layer in block:
                if isinstance(layer, keras.layers.BatchNormalization) or isinstance(layer, keras.layers.Dropout):
                    x = layer(x, training=training)
                else:
                    x = layer(x)
            encoder_layers_eval.append(x)
        encoder_layers_eval = encoder_layers_eval[:-1]
        for i, block in enumerate(self.decoder_layers):
            for layer in block:
                if isinstance(layer, keras.layers.BatchNormalization) or isinstance(layer, keras.layers.Dropout):
                    x = layer(x, training=training)
                else:
                    x = layer(x)
            x = self.concat_layers[i]([x, encoder_layers_eval[-1 - i]])
        x = self.final_layer(x)
        return x
```

When I evaluate the model using an input and check the trainable variables, they are empty:

```python
x = tf.ones((1, 512, 512, 3))
unet = UNet(input_res=512, min_res=4, kernel_size=4, initial_filters=64,
            filters_cap=512, channels=3)
y = unet(x)
print(unet.trainable_variables)  # it prints []
```

Describe the expected behavior: the code should print the list of trainable variables of the net. The output is correct,
so the call method is correctly called.
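Keras and tf.Module discover variables by walking attributes that are assigned on the object. As a plain-Python sketch (none of these classes are the real Keras ones, and this may not be the exact cause of the report above), here is how layers that are never reachable from a tracked attribute yield an empty `trainable_variables`:

```python
class Layer:
    """Toy layer exposing one named 'variable' (illustration only, not Keras)."""
    def __init__(self, name):
        self.trainable_variables = [name + "/kernel"]

def _collect(value, found):
    # Descend into (possibly nested) lists, mirroring lists of layer blocks.
    if isinstance(value, Layer):
        found.extend(value.trainable_variables)
    elif isinstance(value, list):
        for item in value:
            _collect(item, found)

class Model:
    """Toy model: finds variables by walking instance attributes only."""
    @property
    def trainable_variables(self):
        found = []
        for value in vars(self).values():
            _collect(value, found)
        return found

class Tracked(Model):
    def __init__(self):
        # Assigned to an attribute: discoverable by the attribute walk.
        self.encoder_layers = [[Layer("enc1")], [Layer("enc2")]]

class Untracked(Model):
    def __init__(self):
        blocks = [Layer("enc1")]            # local variable only: invisible
        self.run = lambda x: blocks and x   # hidden inside a closure: still invisible

assert Tracked().trainable_variables == ["enc1/kernel", "enc2/kernel"]
assert Untracked().trainable_variables == []
```

Calling the layers still works in both cases, which matches the symptom above: `call` succeeds while `trainable_variables` comes back empty.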
tensorflowtensorflow
Document the `Ref` keyword from operation input/output types
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. Description of issue (what needs changing): Clear description: some operations, like Assign, use a `Ref(...)` keyword for their inputs and/or outputs (for example in tensorflow/core/ops/state_ops.cc). This keyword is not documented anywhere. Additionally, I haven't found any way to combine `Ref` with lists, i.e. make an operation take a list of mutable tensors as input.
tensorflowtensorflow
What does GradientTape.gradient do when `target` is a list of tensors and not a single tensor?
Bug
URL(s) with the issue: please provide a link to the documentation entry, for example: `gradient`. Description of issue (what needs changing): Clear description: actually, there is no way to know what the `gradient` method does when `target` is not a tensor but a list of tensors, like `target=[loss_1, loss_2]`. Does it compute the sum of the gradients of `loss_1` and `loss_2`? It seems like no, from some tests I did. What does it do then? I tried to follow the source code without success: `GradientTape.gradient` calls `tensorflow.python.eager.imperative_grad.imperative_grad`, which calls `tensorflow.python.pywrap_tensorflow.TFE_Py_TapeGradient`, which is a C function calling `ComputeGradient`, but I couldn't find the code of `ComputeGradient`. I just found it mentioned in `tensorflow/c/eager/tape.h`, but I didn't find `tape.cc`. Usage example: there is no usage example for this case.
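I can't speak to the C++ `ComputeGradient` internals, but the property being asked about is testable mathematically: if `gradient([t1, t2])` behaved as the gradient of the sum, it would have to satisfy linearity. A plain-Python finite-difference sketch of that property (no TensorFlow assumed, toy functions only):

```python
def num_grad(f, x, h=1e-6):
    """Central finite-difference derivative of a scalar function of one variable."""
    return (f(x + h) - f(x - h)) / (2 * h)

loss1 = lambda x: x ** 2      # d/dx = 2x
loss2 = lambda x: 3 * x       # d/dx = 3
total = lambda x: loss1(x) + loss2(x)

x = 1.5
g_sum = num_grad(total, x)                          # gradient of the summed target
g_parts = num_grad(loss1, x) + num_grad(loss2, x)   # sum of individual gradients

# Linearity of differentiation: the two must agree.
assert abs(g_sum - g_parts) < 1e-4
assert abs(g_sum - (2 * x + 3)) < 1e-4
```

A similar comparison against TensorFlow's own output (gradient of the list target vs. the sum of per-target gradients) would be one way to pin down empirically what the documented behavior should say.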
tensorflowtensorflow
"Cache iterator is in an invalid state" error
Bug
System information:
- OS platform and distribution: macOS High Sierra 10.13.6
- TensorFlow (for CPU) installed from: PyPI
- TensorFlow version: v1.13.0-rc2-5-g6612da8951 1.13.1
- Python version: 3.6.6

Describe the current behavior. Minimal non-working example:

```python
import tensorflow as tf
from tensorflow.python.framework.errors_impl import OutOfRangeError

dataset = tf.data.Dataset.range(10)
dataset = dataset.cache('cache1')
dataset = dataset.map(lambda a: a)
dataset = dataset.batch(4)
batch = dataset.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    while True:
        try:
            res = sess.run(batch)
            print(res)
        except OutOfRangeError:
            print('out of range')
            break
```

The code above properly iterates through the dataset only on the first run, when a cache doesn't exist. But when it loads the dataset from the cache file, it crashes with an error:

tensorflow.python.framework.errors_impl.InternalError: Cache iterator is in an invalid state. Perhaps GetNext was called after end of sequence? [[node IteratorGetNext]]

A workaround: it happens because the map operation follows right after the cache. It starts working as expected if some other dataset operation is added between the cache and map steps, for example:

```python
dataset = tf.data.Dataset.range(10)
dataset = dataset.cache('cache1')
dataset = dataset.filter(lambda x: True)  # a fake filter is added
dataset = dataset.map(lambda a: a)
dataset = dataset.batch(4)
```
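For comparison, the semantics a cache stage is expected to have can be sketched in plain Python (this is a toy model, not tf.data): the first full pass materializes the source, later passes replay the stored elements, and a downstream batch stage should behave identically on both runs:

```python
class CachedDataset:
    """Toy cache: materializes the source on the first full iteration,
    replays the stored elements on every later iteration."""
    def __init__(self, source):
        self._source = source
        self._cache = None

    def __iter__(self):
        if self._cache is not None:        # replay path (like reading the cache file)
            yield from self._cache
            return
        filling = []
        for item in self._source:          # first pass: fill while yielding
            filling.append(item)
            yield item
        self._cache = filling              # only commit after a complete pass

def batch(iterable, n):
    """Group consecutive elements into lists of size n (last may be short)."""
    buf = []
    for x in iterable:
        buf.append(x)
        if len(buf) == n:
            yield buf
            buf = []
    if buf:
        yield buf

ds = CachedDataset(range(10))
first = list(batch((x for x in ds), 4))   # first run: reads the source
second = list(batch((x for x in ds), 4))  # second run: served from the cache
assert first == second == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

The bug report above is exactly a violation of this expectation: the replayed (cached) run fails where the first run succeeds.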
tensorflowtensorflow
TFLite GPU delegate model inconsistency (MobileNet v1 1.0)
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 0.0.1-gpu-experimental
- Python version: 3.6
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior: the MobileNet v1 1.0 model in the guide contains a Squeeze operation that isn't supported by the GPU, but the MobileNet v1 1.0 at tensorflow/models does. AFAIK Squeeze isn't a supported operation, but both of them contain at least one. How come the operations are different even when the models are of the same version? Was the one in the guide deliberately modified? If so, would it be a better idea to have it noted in the guide? The page on tensorflow/models claims 569 Mil MACs, but the number of MACs this modified 1.0 has is unclear. What I've found is that the one provided in the guide contains 89 tensors, but the one in tensorflow/models 88.

Describe the expected behavior: models of the same version should be identical; any modification should be explicitly documented.
tensorflowtensorflow
ArithmeticOptimizer fails for Stack with axis=-(R+1) and axis=R
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS 10.14.5
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 1.13.1
- Python version: 3.7.3
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior: in the Stack op, the allowed range for axis is [-(R+1), R+1). However, if you take a strided slice over the result with axis=-(R+1) or axis=R, the ArithmeticOptimizer will fail with a warning. It also fails for scalars with any value of axis. The results are still correct, but this disables the arithmetic optimizer for some/all of the code. See the repro code for an example.

Describe the expected behavior: I would not expect the ArithmeticOptimizer to fail.

Code to reproduce the issue. Scalar example:

```python
import tensorflow as tf
with tf.Session() as sess:
    a = tf.Variable(tf.constant(0))
    b = tf.Variable(tf.constant(1))
    sess.run(tf.initializers.global_variables())
    sess.run(tf.stack([a, b])[-2])
```

Vector example:

```python
import tensorflow as tf
with tf.Session() as sess:
    a = tf.Variable(tf.constant([0, 1]))
    b = tf.Variable(tf.constant([2, 3]))
    sess.run(tf.initializers.global_variables())
    for axis in range(-2, 2):
        print(axis, sess.run(tf.stack([a, b], axis)[-2]))
```

This executes successfully, but the ArithmeticOptimizer fails with a warning:

-2
2019-05-27 11:35:54.164917: W tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice. Error: Pack node (stack) axis attribute is out of bounds: -2
2019-05-27 11:35:54.166279: W tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice. Error: Pack node (stack) axis attribute is out of bounds: -2
-1
0
1
2019-05-27 11:35:54.183014: W tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_3. Error: Pack node (stack_3) axis attribute is out of bounds: 1
2019-05-27 11:35:54.184915: W tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_3. Error: Pack node (stack_3) axis attribute is out of bounds: 1
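For reference, the bound check the Stack op itself applies can be sketched in plain Python (a toy helper, not Grappler code): inputs of rank R produce a rank R+1 result, so any axis in [-(R+1), R] is legal for the op, and the optimizer warnings above fire exactly at the two extremes of that valid range:

```python
def normalize_stack_axis(rank, axis):
    """Valid axis for stacking rank-R tensors lies in [-(R+1), R];
    negative axes wrap around the (R+1)-dimensional result."""
    result_rank = rank + 1
    if not -result_rank <= axis <= rank:
        raise ValueError(f"axis {axis} out of range for rank-{rank} inputs")
    return axis % result_rank

# Vectors (R = 1): every axis the repro loops over is legal for the op itself,
# including the extremes -2 and 1 where the optimizer stage complains.
assert [normalize_stack_axis(1, a) for a in range(-2, 2)] == [0, 1, 0, 1]

# Scalars (R = 0): only axis -1 and 0 are legal, both mapping to axis 0.
assert normalize_stack_axis(0, -1) == 0
assert normalize_stack_axis(0, 0) == 0
```

This makes the reported behavior easy to state: axes the op accepts as in-range are rejected as "out of bounds" by the RemoveStackStridedSliceSameAxis stage.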
tensorflowtensorflow
The output of tf.sigmoid is abnormal when the input has NaN
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04 64-bit (AWS EC2 p3.2xlarge instance)
- TensorFlow installed from (source or binary): pip install tensorflow-gpu
- TensorFlow version (use command below): 1.13.1
- Python version: 3.6.7
- Bazel version (if compiling from source): none
- GCC/compiler version (if compiling from source): none
- CUDA/cuDNN version: 10.1
- GPU model and memory: NVIDIA Tesla V100, 16 GB

Describe the current behavior: the output of the tf.sigmoid function seems abnormal when the input has NaN.

```python
In [1]: import tensorflow as tf
In [2]: tf.enable_eager_execution()
In [3]: a = tf.constant([float('nan'), 5])
In [4]: tf.sigmoid(a)
Out[4]:
```

Describe the expected behavior:

```python
In [4]: tf.sigmoid(a)
Out[4]:
```

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem):

```python
import tensorflow as tf
tf.enable_eager_execution()
a = tf.constant([float('nan'), 5])
b = tf.sigmoid(a)
print(b)
```

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
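Whatever the TF kernel returns here, IEEE-754 arithmetic propagates NaN through every step of the logistic function, so NaN in should give NaN out. A plain-Python sketch of that expectation (this is not the TensorFlow implementation):

```python
import math

def sigmoid(x):
    """Numerically stable logistic function; NaN propagates through each op."""
    if math.isnan(x):
        # exp/add/divide would each propagate NaN anyway; made explicit here
        return float('nan')
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

assert math.isnan(sigmoid(float('nan')))          # NaN in, NaN out
assert abs(sigmoid(5.0) - 0.9933071490757153) < 1e-9
assert abs(sigmoid(0.0) - 0.5) < 1e-12
```

Any finite value (such as 0, 0.5, or 1) returned for a NaN input would break this propagation property, which is presumably what the report means by "abnormal".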
tensorflowtensorflow
One 404 page needs to be fixed
Bug
At the beginning of the subtitle "Train from tf.data datasets", there is a hyperlink pointing to the Datasets API, but when you click it, a 404 page is all you get. Since this API is used very often, it would be better to correct this as soon as possible.
tensorflowtensorflow
Website claims that there is no internet connection
Bug
JavaScript on the website runs some sort of detection to see if there is network connectivity, or tries to establish a connection in a surprising way. This fails, and I get a message "There is no internet connection", which is clearly wrong: I am writing this issue with the same internet connection. The JS console says:

Failed to load. A ServiceWorker passed a promise to FetchEvent.respondWith() that rejected with "TypeError: NetworkError when attempting to fetch resource."
Failed to load. A ServiceWorker passed a promise to FetchEvent.respondWith() that rejected with "TypeError: NetworkError when attempting to fetch resource."
Failed to load. A ServiceWorker passed a promise to FetchEvent.respondWith() that rejected with "TypeError: NetworkError when attempting to fetch resource."
tensorflowtensorflow
TF2.0: tf.feature_column.shared_embeddings traced twice and throws an exception with tf.function
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): OSX 10.13.1 (17B1003)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): tf-nightly-2.0-preview 2.0.0.dev20190506
- Python version: 3.6.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"` 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior: shared_embeddings does not support eager mode, so I call it with tf.function, but it still throws an exception:

ValueError: Variable color_color2_color3_shared_embedding already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?

I found the reason to be that the function is traced twice. After adding a print at the line `df = tf.keras.layers.DenseFeatures(color_column_embed)(color_data)`, the print runs twice.

First-time stack:
embedding_weights (feature_column_v2.py:3247)
_get_dense_tensor_internal (feature_column_v2.py:3326)
get_dense_tensor (feature_column_v2.py:3349)
_call_unconverted (api.py:173)
converted_call (api.py:271)
loop_body (tmpf4q73us7.py:51)
_py_for_stmt (control_flow.py:111)
for_stmt (control_flow.py:102)
tf__call (tmpf4q73us7.py:70)
converted_call (api.py:375)
wrapper (api.py:89)
__call__ (base_layer.py:632)
_call_unconverted (api.py:173)
converted_call (api.py:271)
tf__shared_embedding_column_with_hash_bucket (tmp5d_py4a8.py:16)
converted_call (api.py:375)
wrapper (func_graph.py:705)
wrapped_fn (def_function.py:292)
func_graph_from_py_func (func_graph.py:713)
_create_graph_function (function.py:1529)
_maybe_define_function (function.py:1596)
_get_concrete_function_internal_garbage_collected (function.py:1333)
_initialize (def_function.py:342)
__call__ (def_function.py:399)
(main.py:43)

Second-time call stack:
embedding_weights (feature_column_v2.py:3247)
_get_dense_tensor_internal (feature_column_v2.py:3326)
get_dense_tensor (feature_column_v2.py:3349)
_call_unconverted (api.py:173)
converted_call (api.py:271)
loop_body (tmpf4q73us7.py:51)
_py_for_stmt (control_flow.py:111)
for_stmt (control_flow.py:102)
tf__call (tmpf4q73us7.py:70)
converted_call (api.py:375)
wrapper (api.py:89)
__call__ (base_layer.py:632)
_call_unconverted (api.py:173)
converted_call (api.py:271)
tf__shared_embedding_column_with_hash_bucket (tmp5d_py4a8.py:16)
converted_call (api.py:375)
wrapper (func_graph.py:705)
wrapped_fn (def_function.py:292)
func_graph_from_py_func (func_graph.py:713)
_create_graph_function (function.py:1529)
_maybe_define_function (function.py:1596)
__call__ (function.py:1307)
__call__ (def_function.py:411)
(main.py:43)

The only difference is before _maybe_define_function (function.py:1596). The code the second time fails because it is a new object, so there is no cache in self._embedding_weights, and then variable_scope.get_variable runs with no reuse property:

```python
@property
def embedding_weights(self):
    key = ops.get_default_graph()._graph_key  # pylint: disable=protected-access
    if key not in self._embedding_weights:
        embedding_shape = (self._num_buckets, self._dimension)
        var = variable_scope.get_variable(
            name=self._name,
            shape=embedding_shape,
            dtype=dtypes.float32,
            initializer=self._initializer,
            trainable=self._trainable)
        if self._ckpt_to_load_from is not None:
            to_restore = var
            if isinstance(to_restore, variables.PartitionedVariable):
                to_restore = to_restore._get_variable_list()  # pylint: disable=protected-access
            checkpoint_utils.init_from_checkpoint(
                self._ckpt_to_load_from, {self._tensor_name_in_ckpt: to_restore})
        self._embedding_weights[key] = var
    return self._embedding_weights[key]
```

Describe the expected behavior: being able to use shared_embeddings.

Code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow import feature_column

@tf.function
def shared_embedding_column_with_hash_bucket():
    color_data = {'color': [[2], [5], [-1], [0]],
                  'color2': [[2, 2], [5, 5], [-1, -1], [0, 0]],
                  'color3': [[2, 2, 2], [5, 5, 5], [-1, -1, -1], [0, 0, 0]]}
    color_column = feature_column.categorical_column_with_hash_bucket('color', 5, dtype=tf.int32)
    color_column2 = feature_column.categorical_column_with_hash_bucket('color2', 5, dtype=tf.int32)
    color_column3 = feature_column.categorical_column_with_hash_bucket('color3', 5, dtype=tf.int32)
    color_column_embed = tf.feature_column.shared_embeddings(
        [color_column, color_column2, color_column3], 4, combiner='sum')
    print(color_column_embed)
    print(type(color_column_embed))
    print('use input_layer' + '-' * 40)
    df = tf.keras.layers.DenseFeatures(color_column_embed)(color_data)
    return df

dense = shared_embedding_column_with_hash_bucket()
print(dense)
```

Other info / logs:

Traceback (most recent call last):
  File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1758, in <module> main()
  File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1752, in main globals = debugger.run(setup['file'], None, None, is_module)
  File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1147, in run pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "/Users/lqk/projects/PycharmProjects/tf2/main.py", line 40, in <module> dense = shared_embedding_column_with_hash_bucket()
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 411, in __call__ return self._stateless_fn(*args, **kwds)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1307, in __call__ graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1596, in _maybe_define_function graph_function = self._create_graph_function(args, kwargs)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1529, in _create_graph_function capture_by_value=self._capture_by_value),
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 713, in func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 292, in wrapped_fn return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 705, in wrapper ), args, kwargs)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 375, in converted_call result = converted_f(*effective_args, **kwargs)
  File "/var/folders/4x/njl257j95bx4mt08j65fxqg00000gp/T/tmp5y1g76hx.py", line 16, in tf__shared_embedding_column_with_hash_bucket df = ag__.converted_call(tf.keras.layers.DenseFeatures(color_column_embed), None, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), (color_data,), None)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 271, in converted_call return _call_unconverted(f, args, kwargs)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 173, in _call_unconverted return f(*args)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 632, in __call__ outputs = call_fn(inputs, *args, **kwargs)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 89, in wrapper *args, **kwargs)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 375, in converted_call result = converted_f(*effective_args, **kwargs)
  File "/var/folders/4x/njl257j95bx4mt08j65fxqg00000gp/T/tmpshzfamoc.py", line 70, in tf__call ag__.for_stmt(self._feature_columns, None, loop_body, ...)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py", line 102, in for_stmt return _py_for_stmt(iter_, extra_test, body, init_state)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/autograph/operators/control_flow.py", line 111, in _py_for_stmt state = body(target, *state)
  File "/var/folders/4x/njl257j95bx4mt08j65fxqg00000gp/T/tmpshzfamoc.py", line 51, in loop_body tensor = ag__.converted_call('get_dense_tensor', column, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), (transformation_cache, self._state_manager), None)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 271, in converted_call return _call_unconverted(f, args, kwargs)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 173, in _call_unconverted return f(*args)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py", line 3349, in get_dense_tensor return self._get_dense_tensor_internal(transformation_cache, state_manager)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py", line 3326, in _get_dense_tensor_internal embedding_weights = self.shared_embedding_column_creator.embedding_weights
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/feature_column/feature_column_v2.py", line 3253, in embedding_weights trainable=self._trainable)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1496, in get_variable aggregation=aggregation)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1239, in get_variable aggregation=aggregation)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 562, in get_variable aggregation=aggregation)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 514, in _true_getter aggregation=aggregation)
  File "/Users/lqk/anaconda2/envs/p36t2/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 856, in _get_single_variable raise ValueError(err_msg)
ValueError: Variable color_color2_color3_shared_embedding already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?
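The failure mode described above, a per-object cache that is empty on the second trace while the variable store still remembers the name, can be sketched in plain Python (all class names below are hypothetical stand-ins, not the feature-column code):

```python
class VariableStore:
    """Toy stand-in for the variable scope: refuses to re-create a name."""
    def __init__(self):
        self._vars = {}

    def get_variable(self, name):
        if name in self._vars:
            raise ValueError(f"Variable {name} already exists, disallowed.")
        self._vars[name] = object()
        return self._vars[name]

class EmbeddingCreator:
    """Caches the variable per graph key, like embedding_weights above."""
    def __init__(self, store, name):
        self._store, self._name, self._cache = store, name, {}

    def embedding_weights(self, graph_key):
        if graph_key not in self._cache:
            self._cache[graph_key] = self._store.get_variable(self._name)
        return self._cache[graph_key]

store = VariableStore()
first = EmbeddingCreator(store, "shared_embedding")
first.embedding_weights("graph-1")          # first trace: creates the variable

# Second trace builds a *new* creator object, so its cache is empty, but the
# store still holds the name -> the ValueError from the report.
second = EmbeddingCreator(store, "shared_embedding")
try:
    second.embedding_weights("graph-1")
    raised = False
except ValueError:
    raised = True
assert raised

# Reusing the same creator across traces hits the cache instead of the store.
assert first.embedding_weights("graph-1") is store._vars["shared_embedding"]
```

This matches the diagnosis in the report: the second trace fails not because the graph key changed, but because the caching object itself was recreated.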
tensorflowtensorflow
CPU memory leak when using tf.function with a GPU model
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v1.12.1-2319-g81f2165 2.0.0.dev20190520
- Python version: 3.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 10.1 / 7.1
- GPU model and memory:

Describe the current behavior: when using tf.function, the CPU memory usage increases forever.

Describe the expected behavior: no CPU memory leak when training a model on a GPU device using tf.function.

Code to reproduce the issue (a part of my code, since the whole source code is huge; `dataset_train` in the following code is a tf.data.Dataset object):

```python
data_train_spec = (
    tf.TensorSpec(shape=(None, image_fixed_height, None, 3), dtype=tf.float32),
    tf.TensorSpec(shape=(None,), dtype=tf.int32),
    tf.TensorSpec(shape=(None,), dtype=tf.int32),
    tf.TensorSpec(shape=(None,), dtype=tf.int32),
)

@tf.function(input_signature=[data_train_spec])
def train_step(batch_data):
    imgs, labels, imgs_widths, labels_lengths = batch_data
    with tf.GradientTape() as tape:
        logits = model(imgs, training=True)
        logits_lengths = tf.cast(tf.math.ceil(tf.divide(imgs_widths, block_size)), tf.int32)
        loss, accuracy = loss_fn(labels, logits, labels_lengths, logits_lengths, classes)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss, accuracy

def train(epochs):
    for epoch in epochs:
        print(f'epoch {epoch} training starts')
        for step, batch_data in enumerate(dataset_train.take(num)):
            loss, accuracy = train_step(batch_data)
            if (step + 1) % args.show_per_iterations == 0:
                loss, accuracy = get_numpy(loss, accuracy)
                loss_mean = np.mean(loss)
                accuracy_mean = np.mean(accuracy) * 100
                prefix = f'epoch {epoch} step {step + 1} training'
                print(f'loss of {prefix} is {loss_mean:.6f}')
                print(f'accuracy of {prefix} is {accuracy_mean:.3f}')
        print(f'epoch {epoch} training finished\n')
```
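One mechanism that can produce unbounded host-memory growth is the trace cache: every new input signature keys a new concrete trace. The report above already passes an `input_signature`, so this may not be the actual bug, but the growth mechanism itself can be sketched in plain Python (a toy, not the real tf.function cache):

```python
def run_batch(batch):
    """Stand-in computation for the train step body."""
    return sum(map(sum, batch))

class ToyFunction:
    """Caches one 'trace' per distinct input signature (here: a shape tuple).
    Illustrative only; not the real tf.function implementation."""
    def __init__(self, fn, input_signature=None):
        self._fn = fn
        self._signature = input_signature
        self._traces = {}

    def __call__(self, batch):
        # With an input_signature, every call maps to one relaxed key;
        # without it, each new concrete shape creates a new cached trace.
        key = self._signature or tuple(len(row) for row in batch)
        if key not in self._traces:
            self._traces[key] = ("trace", key)   # stands in for a concrete function
        return self._fn(batch)

step = ToyFunction(run_batch)
for n in range(1, 6):                  # five distinct widths -> five traces
    step([[1] * n])
assert len(step._traces) == 5          # cache (and memory) grows per shape

fixed = ToyFunction(run_batch, input_signature=("batch", None))
for n in range(1, 6):
    fixed([[1] * n])
assert len(fixed._traces) == 1         # fixed signature: a single trace
```

Checking whether the real cache keeps growing (versus some other host-side allocation) is one way to narrow down a leak like the one reported.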
tensorflowtensorflow
LSTM with sample_weight fails with batch_size > 1
Bug
System information:
- Have I written custom code: yes
- OS platform and distribution: Debian 9.9
- TensorFlow installed from: pip
- TensorFlow version: 1.13.1
- Python version: 3.7.3
- GPU model and memory: N/A (tested in CPU mode)

Describe the current behavior: an unexpected error occurs when training an LSTM with sample_weight and batch_size > 1. The error does not occur if batch_size == 1, or if sample_weight is omitted.

Describe the expected behavior: I would expect to be able to train an LSTM using sample_weight and batch_size > 1. As far as I understand it, sample_weight can be used for weighting the loss function, and according to the docs it can be a flat 1-D NumPy array with the same length as the input samples, i.e. the same length as batch_size.

Code to reproduce the issue. Here's a minimal example:

```python
import numpy as np
import tensorflow as tf

batch_size = 32
sequence_length = 1
embedding_size = 100

x_train = np.random.randn(batch_size, sequence_length, embedding_size)
y_train = np.random.randn(batch_size, embedding_size)
sample_weights = np.random.randn(batch_size)

train_input = tf.keras.Input(shape=(sequence_length, embedding_size), batch_size=batch_size)
lstm_layer = tf.keras.layers.LSTM(200, return_sequences=False)(train_input)
dense_layer = tf.keras.layers.Dense(embedding_size)(lstm_layer)
model = tf.keras.models.Model(inputs=train_input, outputs=dense_layer)
model.summary()
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=0.001),
              loss=tf.losses.mean_squared_error)
loss = model.train_on_batch(x_train, y=y_train, sample_weight=sample_weights)
```

Other info / logs. Traceback:

Traceback (most recent call last):
  File "bug_minimal_example.py", line 35, in <module> sample_weight=sample_weights)
  File "/home/john/miniconda3/envs/py_main/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1188, in train_on_batch outputs = self.train_function(ins)  # pylint: disable=not-callable
  File "/home/john/miniconda3/envs/py_main/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 3076, in __call__ run_metadata=self.run_metadata)
  File "/home/john/miniconda3/envs/py_main/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1439, in __call__ run_metadata_ptr)
  File "/home/john/miniconda3/envs/py_main/lib/python3.7/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__ c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Can not squeeze dim[0], expected a dimension of 1, got 32 [[node loss/dense_loss/Squeeze]]
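The documented contract, sample_weight as a flat 1-D array of length batch_size weighting the per-sample loss, can be sketched in plain Python (no Keras assumed; toy function and values). Both batch sizes reduce the same way, so nothing in the contract implies squeezing dim[0] to 1:

```python
def weighted_mse(y_true, y_pred, sample_weight):
    """Per-sample mean squared error, scaled by a 1-D weight of length batch_size."""
    assert len(sample_weight) == len(y_true)  # one weight per sample
    per_sample = [
        sum((t - p) ** 2 for t, p in zip(row_t, row_p)) / len(row_t)
        for row_t, row_p in zip(y_true, y_pred)
    ]
    weighted = [w * l for w, l in zip(sample_weight, per_sample)]
    return sum(weighted) / len(weighted)

# batch_size 2 works exactly like batch_size 1
assert weighted_mse([[1.0], [3.0]], [[0.0], [0.0]], [1.0, 2.0]) == (1.0 + 18.0) / 2
assert weighted_mse([[1.0]], [[0.0]], [2.0]) == 2.0
```

Under this reading, the `Can not squeeze dim[0], expected a dimension of 1, got 32` error looks like the weighting path wrongly assuming a singleton batch dimension.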
tensorflowtensorflow
Is there any C API example for TensorFlow Lite?
Bug
I do not like Java and Swift; I prefer C. However, I seldom see any C API help.
tensorflowtensorflow
Unable to get predictions from TFLite graph
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary): TensorFlow installed using pip
- TensorFlow version (use command below): 1.13.1
- Python version: Python 3.6
- Bazel version (if compiling from source): 0.25.2
- CUDA/cuDNN version: 10.0
- GPU model and memory: NVIDIA GeForce GTX 1080 Ti

Describe the current behavior: I have fine-tuned an SSDLite MobileNet v1 model and have the frozen graph after completion of the training process. I used the below-mentioned command to convert the frozen pb graph into a TFLite version:

```shell
tflite_convert \
  --output_file=test.tflite \
  --graph_def_file=tflite_graph.pb \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
  --input_shapes=1,300,300,3 \
  --allow_custom_ops
```

I want to confirm whether the results on the test data of the TFLite model match the frozen pb file. However, the model would not give out any output at all:

```python
interpreter = tf.lite.Interpreter(model_path="PATH_TO/test.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# image is of format 1x300x300x3 and the values are between -1 to 1
interpreter.set_tensor(input_details[0]['index'], image)
output_data1 = interpreter.get_tensor(output_details[0]['index'])
classes = interpreter.get_tensor(output_details[1]['index'])
confidence_scores = interpreter.get_tensor(output_details[2]['index'])
print(output_data1)
print(classes)
print(confidence_scores)
```

The print statements give no results out. Any help provided would be greatly appreciated.
tensorflowtensorflow
[Translation] Translation for Korean
Bug
We've translated this part of the guidelines to Korean on our repository with GitHub Pages. This is the URL of the Korean repository:
tensorflowtensorflow
TF2.0: tf.keras.estimator.model_to_estimator does not store input names from tf.keras.layers.DenseFeatures
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Google Colab
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): N/A
- TensorFlow version (use command below): v1.12.0-9492-g2c319fb415 2.0.0-alpha0
- Python version: 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"` 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior: I created a Keras model that takes as input 2 feature columns named 'store' and 'loc', and converted it to an estimator. Training it then throws an exception, because it assumes the input names are 'input_1' and 'input_2'.

Describe the expected behavior: training the Keras estimator without errors.

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem): please see this gist.

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow
TensorFlow Lite elementwise operations not working in GPU delegate (ADD, SUB)
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): nil
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: OnePlus 3, Android 8.0.0
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): TF 1.13
- Python version: 3.5
- Bazel version (if compiling from source): 0.24.1
- GCC/compiler version (if compiling from source): 5.4.0
- CUDA/cuDNN version: 5.4.0
- GPU model and memory: nil

Describe the current behavior
When we try to run a TFLite model with element-wise operations such as ADD and SUB (appended via custom code) in an Android application (org.tensorflow:tensorflow-lite:0.0.1-gpu-experimental), it throws an error message and crashes the application. It also fails during the Android benchmark test in GPU mode with the same error message. However, the same model runs in CPU mode in the same Android application, and also clears the Android benchmark test in CPU mode without errors.

We also tried the same model with the tensorflow-lite-gpu nightly version, but it does not give any information about where each node is being run (i.e. CPU or GPU). The problem is with the last couple of element-wise operations at the end of the model, and it looks like they automatically fall back to CPU. With org.tensorflow:tensorflow-lite:0.0.0-nightly, even the CAST operator, which is not supported by the GPU, runs under the nightly TFLite GPU delegate without any warning; we believe it is running on the CPU, based on the speed/time taken to execute those nodes. Unlike the experimental version, the TFLite nightly doesn't give any info on whether a node falls back to CPU or runs on GPU when it encounters an unsupported op; it also looks like errors and warnings are suppressed in the nightly version.

Describe the expected behavior
The TensorFlow Lite model should run without errors on GPU and CPU with element-wise operators.

Code to reproduce the issue
Here are the snapshots of the models which produce the aforementioned error: 1. model 1, 2. model 2, 3. model.zip. Both run in CPU mode but fail on GPU.

Other info / logs
1. Model 1 error log (with ADD and SUB):

adb shell /data/local/tmp/benchmark_model_gpu --graph=/data/local/tmp/work_test_trial_fullt_257op_dm05_257_blend_cast_dbadd6.tflite --use_gpu=true
adb: /opt/intel/intelpython27/lib/libcrypto.so.1.0.0: no version information available (required by adb)
STARTING!
Min num runs: [50], Min runs duration (seconds): [1], Inter-run delay (seconds): [-1], Num threads: [1], Benchmark name: [], Output prefix: [], Min warmup runs: [1], Min warmup runs duration (seconds): [0.5], Graph: [/data/local/tmp/work_test_trial_fullt_257op_dm05_257_blend_cast_dbadd6.tflite], Input layers: [], Input shapes: [], Use nnapi: [0], Use legacy nnapi: [0], Use gpu: [1], Allow fp16: [0]
nnapi error: requires android sdk version to be at least 27
Loaded model /data/local/tmp/work_test_trial_fullt_257op_dm05_257_blend_cast_dbadd6.tflite
resolved reporter
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Next operations are not supported by GPU delegate:
SUB: Incorrect operation type passed.
First 77 operations will run on the GPU, and the remaining 2 on the CPU.
ERROR: TfLiteGpuDelegate Prepare: fuse_auto_input failed
ERROR: Node number 79 (TfLiteGpuDelegate) failed to prepare.
Failed to apply GPU delegate.
Aborted

2. Model 2 error log (with ADD only): the same benchmark invocation with graph work_test_trial_fullt_257op_dm05_257_blend_cast_dbadd8.tflite prints the same startup lines and then fails with:

INFO: Created TensorFlow Lite delegate for GPU.
ERROR: TfLiteGpuDelegate Prepare: fuse_auto_input failed
ERROR: Node number 76 (TfLiteGpuDelegate) failed to prepare.
Failed to apply GPU delegate.
Aborted

Also refer to issue #28606. Is the issue fixed in the latest nightly or experimental version? We tried the following Gradle dependencies in the app: 1. org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly, 2. org.tensorflow:tensorflow-lite:0.0.1-gpu-experimental, but it does not run properly with either version.
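The delegate behaviour reported above — unsupported element-wise ops forcing part of the graph back to CPU — can be illustrated with a small, hypothetical partitioning sketch. This is pure Python; the op names and the "supported" set are made-up stand-ins, not the real GPU delegate logic:

```python
# Hypothetical sketch: contiguous runs of delegate-supported ops go to the
# GPU, and any unsupported op starts a CPU fallback segment (mirroring the
# "first 77 operations will run on the GPU, the remaining 2 on the CPU" log).
def partition_ops(ops, gpu_supported):
    segments = []
    for op in ops:
        target = "GPU" if op in gpu_supported else "CPU"
        if segments and segments[-1][0] == target:
            segments[-1][1].append(op)
        else:
            segments.append((target, [op]))
    return segments

# 77 supported ops followed by the SUB and CAST ops from the log above.
ops = ["CONV_2D"] * 77 + ["SUB", "CAST"]
supported = {"CONV_2D", "ADD", "MUL"}
segments = partition_ops(ops, supported)
# -> [("GPU", [...77 ops...]), ("CPU", ["SUB", "CAST"])]
```

The experimental delegate reports this partition explicitly, which is what makes the silent behaviour of the nightly build hard to diagnose.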
tensorflowtensorflow
tf.distribute.Strategy docs: wrong reduction
Bug
URL(s) with the issue: "Using tf.distribute.Strategy with custom training loops".

Description of issue (what needs changing): in the documentation, the snippet

    def train_step():
      def step_fn(inputs):
        features, labels = inputs
        logits = model(features)
        cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
            logits=logits, labels=labels)
        loss = tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size)
        train_op = optimizer.minimize(loss)
        with tf.control_dependencies([train_op]):
          return tf.identity(loss)

      per_replica_losses = mirrored_strategy.experimental_run(
          step_fn, input_iterator)
      mean_loss = mirrored_strategy.reduce(
          tf.distribute.ReduceOp.MEAN, per_replica_losses)
      return mean_loss

calculates the loss using the distribution strategy. However, the loss for each replica is reduced using tf.distribute.ReduceOp.MEAN. I think the correct loss to return is the one reduced using tf.distribute.ReduceOp.SUM, since every per-replica loss is already a partial mean over the global batch size (tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size)). I think a better snippet would be:

    def train_step():
      def step_fn(inputs):
        features, labels = inputs
        logits = model(features)
        cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
            logits=logits, labels=labels)
        loss = tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size)
        train_op = optimizer.minimize(loss)
        with tf.control_dependencies([train_op]):
          return tf.identity(loss)

      per_replica_losses = mirrored_strategy.experimental_run(
          step_fn, input_iterator)
      loss = mirrored_strategy.reduce(
          tf.distribute.ReduceOp.SUM, per_replica_losses)
      return loss

Am I wrong?

Submit a pull request? Maybe I can submit a pull request to fix this. (Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the doc style guide.)
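The reporter's point can be checked with plain arithmetic. A pure-Python sketch with made-up per-example losses, two replicas, and a global batch of 4:

```python
# Two replicas, each holding 2 of the 4 examples in the global batch.
# Per-example cross-entropy values (illustrative numbers only):
replica_losses = [[2.0, 4.0], [1.0, 3.0]]
global_batch_size = 4

# Each replica computes sum(example_losses) / global_batch_size,
# exactly as in the docs snippet.
per_replica = [sum(ex) / global_batch_size for ex in replica_losses]  # [1.5, 1.0]

# The true mean loss over the whole global batch:
true_mean = sum(sum(ex) for ex in replica_losses) / global_batch_size  # 2.5

# ReduceOp.SUM over replicas recovers the true mean...
assert sum(per_replica) == true_mean
# ...while ReduceOp.MEAN divides by the replica count a second time.
assert sum(per_replica) / len(per_replica) == true_mean / 2
```

Because each replica already divides by the global batch size, averaging the per-replica values undercounts the loss by a factor of the number of replicas, which supports the SUM suggestion.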
tensorflowtensorflow
Update doc.go
Bug
Updates a link that was not working. The issue is related to #28514.
tensorflowtensorflow
Link for migrating from deprecated DNNLinearCombinedRegressor is broken
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: (line L849)

Description of issue (what needs changing): the link is broken. Why is it deprecated? What should I do about it?
tensorflowtensorflow
Can't use padded_batch on a dataset with distribution strategy make_dataset_iterator
Bug
I can't use padded_batch when I'm trying to create a distributed iterator using the following code:

    import tensorflow_datasets as tfds
    import tensorflow as tf
    tf.enable_v2_behavior()

    strategy = tf.distribute.MirroredStrategy()

    def encode(lang1, lang2):
        lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(lang1.numpy()) + [tokenizer_pt.vocab_size + 1]
        lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(lang2.numpy()) + [tokenizer_en.vocab_size + 1]
        return lang1, lang2

    def tf_encode(pt, en):
        return tf.py_function(encode, [pt, en], [tf.int64, tf.int64])

    with strategy.scope():
        examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en',
                                       with_info=True, as_supervised=True)
        train_examples, test_examples = examples['train'], examples['test']
        tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
            (en.numpy() for pt, en in train_examples), target_vocab_size=10000)
        tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
            (pt.numpy() for pt, en in train_examples), target_vocab_size=10000)
        train_dataset = train_examples.map(tf_encode)
        train_dataset = train_dataset.shuffle(200000)
        train_dataset = train_dataset.padded_batch(64, padded_shapes=([-1], [-1]))
        train_dataset = strategy.make_dataset_iterator(train_dataset)

It throws this exception:

    ValueError: Unable to get batched dataset from the input dataset. `batch` `map_and_batch` need to be the last operations on the dataset. The batch operations can be followed by a `prefetch`.

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device: no
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13
- Python version: 3.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA 10.0, cuDNN 7.3.1
- GPU model and memory: NVIDIA Titan V
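The ValueError's rule — the final transformation must be a recognized batching op, optionally followed by prefetch — can be sketched in pure Python. This is an illustrative validation rule, not the real tf.data/strategy code; the op-name sets are assumptions:

```python
# Sketch of the "batch must be last" check implied by the error message:
# ignore trailing ops in `allowed_after` (e.g. prefetch), then require the
# last remaining transformation to be one of the recognized batch ops.
def batching_is_last(ops, batch_ops=("batch", "map_and_batch"),
                     allowed_after=("prefetch",)):
    tail = [op for op in ops if op not in allowed_after]
    return bool(tail) and tail[-1] in batch_ops

assert batching_is_last(["map", "shuffle", "batch", "prefetch"])
# padded_batch is not in the set the check recognizes — hence the ValueError
# even though it is, in fact, the last transformation in the pipeline above:
assert not batching_is_last(["map", "shuffle", "padded_batch"])
```

In other words, the pipeline does end with a batching op, but make_dataset_iterator appears not to treat padded_batch as one, which is the substance of the report.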
tensorflowtensorflow
TypeError in RNNs tutorial
Bug
URL(s) with the issue:

Description of issue (what needs changing): "Build the model".

Clear description: the build_model function results in a TypeError and gives the following trace:

    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>
    ----> 1 model = build_model()

    <ipython-input> in build_model()
          3     layers.Dense(64, activation=tf.nn.relu, input_shape=[64]),
          4     layers.Dense(64, activation=tf.nn.relu),
          5     layers.Dense(1)
          6
          7

    ~/anaconda3/lib/python3.7/site-packages/keras/engine/sequential.py in __init__(self, layers, name)
         91         if layers:
         92             for layer in layers:
    ---> 93                 self.add(layer)
         94
         95     @property

    ~/anaconda3/lib/python3.7/site-packages/keras/engine/sequential.py in add(self, layer)
        130             raise TypeError('The added layer must be '
        131                             'an instance of class Layer. '
        132                             'Found: ' + str(layer))
        133         self.built = False
        134         if not self._layers:

    TypeError: The added layer must be an instance of class Layer. Found:

Parameters defined: are all parameters defined and formatted correctly? Yes; this method does not use any parameters apart from train_dataset, which has been replaced with a constant in my example above.

Raises listed and defined: this raises a TypeError.
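The trace above fails inside Sequential's constructor when an element of the layer list is not a Keras Layer. A minimal pure-Python mock of that check (names simplified; this is not the real Keras code) shows how the TypeError is produced:

```python
class Layer:  # stand-in for keras.layers.Layer
    pass

class Sequential:  # simplified mock of keras.engine.sequential.Sequential
    def __init__(self, layers=None):
        self.layers = []
        for layer in (layers or []):
            self.add(layer)

    def add(self, layer):
        if not isinstance(layer, Layer):
            raise TypeError("The added layer must be an instance of class Layer. "
                            "Found: " + str(layer))
        self.layers.append(layer)

Sequential([Layer(), Layer()])           # fine: every element is a Layer
try:
    Sequential([Layer(), [Layer()]])     # a nested list triggers the TypeError
except TypeError as e:
    caught = str(e)
```

So the fix in the tutorial is to make sure every element passed to Sequential is itself a layer instance, not a list or other object.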
tensorflowtensorflow
RuntimeError when using distribution strategy: distributed training in TensorFlow Keras tutorial throws error
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux CentOS 7
- TensorFlow installed from (source or binary): conda
- TensorFlow version (use command below): 1.13.1
- Python version: 3.7.3
- CUDA/cuDNN version: 10.0 / 7.3.1 (conda-installed cuDNN)
- GPU model and memory: 8x GeForce RTX 2080, 7951 MiB each

Describe the current behavior
The model fails to train, raising a RuntimeError: "Replica-local variables may only be assigned in a replica context." I was able to reproduce this issue just by using the official tutorial, so that's the code given below rather than mine.

Describe the expected behavior
This code should correctly utilize my GPUs.

Code to reproduce the issue (taken straight from the tutorial):

    #!/usr/bin/env python
    import tensorflow as tf
    import tensorflow_datasets as tfds
    import os

    datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
    mnist_train, mnist_test = datasets['train'], datasets['test']

    strategy = tf.distribute.MirroredStrategy()

    num_train_examples = info.splits['train'].num_examples
    num_test_examples = info.splits['test'].num_examples

    BUFFER_SIZE = 10000
    BATCH_SIZE_PER_REPLICA = 64
    BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync

    def scale(image, label):
        image = tf.cast(image, tf.float32)
        image /= 255
        return image, label

    train_dataset = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
    eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)

    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation='relu'),
            tf.keras.layers.Dense(10, activation='softmax')
        ])
        model.compile(loss='sparse_categorical_crossentropy',
                      optimizer=tf.keras.optimizers.Adam(),
                      metrics=['accuracy'])

    # Define the checkpoint directory to store the checkpoints
    checkpoint_dir = './training_checkpoints'
    # Name of the checkpoint files
    checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")

    callbacks = [
        tf.keras.callbacks.TensorBoard(log_dir='./logs'),
        tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
                                           save_weights_only=True),
        tf.keras.callbacks.LearningRateScheduler(decay)
    ]

    model.fit(train_dataset, epochs=10, callbacks=callbacks)

Other info / logs (full output including traceback):

    $ CUDA_VISIBLE_DEVICES=2,3,4,5,6,7 python -m mnist_tf_check
    WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see <...>. If you depend on functionality not listed there, please file an issue.
    WARNING: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer.
    2019-05-22 08:53:17: Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
    2019-05-22 08:53:17: XLA service executing computations on platform Host, StreamExecutor device 0.
    2019-05-22 08:53:20: XLA service executing computations on platform CUDA; StreamExecutor devices 0-5: GeForce RTX 2080, compute capability 7.5.
    2019-05-22 08:53:20: Found devices 0-5 with properties: name GeForce RTX 2080, major 7, minor 5, memoryClockRate 1.71 GHz, totalMemory 7.77 GiB, freeMemory 7.62 GiB (pciBusIDs 0000:08:00.0, 0000:09:00.0, 0000:84:00.0, 0000:85:00.0, 0000:88:00.0, 0000:89:00.0).
    2019-05-22 08:53:20: Adding visible gpu devices: 0, 1, 2, 3, 4, 5. Device interconnect StreamExecutor with strength 1 edge matrix: all pairs N.
    2019-05-22 08:53:20: Created TensorFlow devices /job:localhost/replica:0/task:0/device:GPU:0 through GPU:5, each with 7416 MB memory. (The device enumeration is then repeated a second time in the log.)
    Traceback (most recent call last):
      File "/home/stilljm/conda/envs/tf-1.13/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/stilljm/conda/envs/tf-1.13/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/stilljm/tensorflow-is-terrible/mnist_tf_check.py", line 38, in <module>
        metrics=['accuracy'])
      File ".../site-packages/tensorflow/python/training/checkpointable/base.py", line 442, in _method_wrapper
        method(self, *args, **kwargs)
      File ".../tensorflow/python/keras/engine/training.py", line 499, in compile
        sample_weights=self.sample_weights)
      File ".../tensorflow/python/keras/engine/training.py", line 1844, in _handle_metrics
        return_stateful_result=return_stateful_result))
      File ".../tensorflow/python/keras/engine/training.py", line 1800, in _handle_per_output_metrics
        stateful_metric_result = _call_stateful_fn(stateful_fn)
      File ".../tensorflow/python/keras/engine/training.py", line 1773, in _call_stateful_fn
        fn, y_true, y_pred, weights=weights, mask=mask)
      File ".../tensorflow/python/keras/engine/training_utils.py", line 852, in call_metric_function
        return metric_fn(y_true, y_pred, sample_weight=weights)
      File ".../tensorflow/python/keras/metrics.py", line 438, in __call__
        update_op = self.update_state(*args, **kwargs)
      File ".../tensorflow/python/keras/metrics.py", line 98, in decorated
        update_op = update_state_fn(*args, **kwargs)
      File ".../tensorflow/python/keras/metrics.py", line 651, in update_state
        matches, sample_weight=sample_weight)
      File ".../tensorflow/python/keras/metrics.py", line 604, in update_state
        update_total_op = state_ops.assign_add(self.total, values)
      File ".../tensorflow/python/ops/state_ops.py", line 191, in assign_add
        return ref.assign_add(value)
      File ".../tensorflow/python/distribute/values.py", line 911, in assign_add
        _assert_replica_context()
      File ".../tensorflow/python/distribute/values.py", line 894, in _assert_replica_context
        "Replica-local variables may only be assigned in a replica context.")
    RuntimeError: Replica-local variables may only be assigned in a replica context.
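The guard that fires at the bottom of the traceback can be mimicked in a few lines of pure Python. This is a simplified stand-in for TF's internal _assert_replica_context check, not the real implementation:

```python
# A variable that only allows updates while a "replica context" flag is set,
# mirroring the guard in the traceback above. In real TF the framework
# toggles the context around the per-replica step function.
class ReplicaLocalVariable:
    in_replica_context = False  # assumed framework-managed flag

    def __init__(self, value=0):
        self.value = value

    def assign_add(self, delta):
        if not ReplicaLocalVariable.in_replica_context:
            raise RuntimeError(
                "Replica-local variables may only be assigned in a replica context.")
        self.value += delta

total = ReplicaLocalVariable()
try:
    total.assign_add(1)  # metric update outside a replica context: the bug's path
except RuntimeError as e:
    message = str(e)

ReplicaLocalVariable.in_replica_context = True
total.assign_add(1)      # inside a replica context the update succeeds
```

The bug report amounts to Keras' metric update running in the cross-replica context during compile/fit, which trips exactly this kind of guard.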
tensorflowtensorflow
Limit of classes in label_map.txt or label_map.pbtxt
Bug
Is there any limit on the number of label map classes used in the training config file?
tensorflowtensorflow
libtensorflowlite.so crashes when loading model; just crashes on SessionOptions destructor
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior

Describe the expected behavior

Code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

(The template was left unfilled by the reporter.)
tensorflowtensorflow
Unexpected behaviour of tf.keras model.predict using tf.data Dataset as input
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code: yes
- OS platform and distribution: macOS X / Ubuntu 16.04
- Mobile device: none
- TensorFlow installed from (source or binary): from source
- TensorFlow version (use command below): v1.13.1-0-g6612da8951
- Python version: 3.7.3 (macOS) and 3.6.7 (Ubuntu)
- Bazel version: 0.20.0 (Homebrew, macOS) and 0.20.0 (Ubuntu)
- GCC/compiler version: clang-1000.11.45.5 (macOS) and 7.3.0 (Ubuntu)
- CUDA/cuDNN version: none (macOS) and 10.0 (Ubuntu)
- GPU model and memory: none (macOS) and GeForce GTX 1080 Ti (Ubuntu), 11178 MiB memory

Describe the current behavior
Calling model.predict(x, steps=n) on a tf.keras Model instance with two inputs fails with several errors:
1. If the model is not compiled, I get a RuntimeError as follows: "RuntimeError: You must compile a model before training/testing. Use model.compile(optimizer, loss)."
2. If I do compile the model — even though this should not be necessary, since predict should not compute any loss — I get a ValueError (see ds1 below): "ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: ..."
3. If I further provide a target in the dataset (see ds2 below), the predict method works and produces an output of the correct shape.

Describe the expected behavior
Calling model.predict(x, steps=n) on both an uncompiled (1) and compiled (2) tf.keras Model should produce a list of NumPy arrays corresponding to the outputs of the model, without any errors.

Code to reproduce the issue
This test case is modified from #23702 to include multiple model inputs. The issue described there seems to be related, but not identical, to this one. In particular, the fix introduced in tf-nightly 1.13.0.dev20190213 that is mentioned there does not resolve the issue for this test case.

    import numpy as np
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_test = x_test.reshape(10000, 28, 28, 1).astype(np.float32)
    y_test = tf.keras.utils.to_categorical(y_test)

    x1 = tf.keras.layers.Input((28, 28, 1), dtype=tf.float32)
    conv1 = tf.keras.layers.Conv2D(8, kernel_size=3, activation='relu')
    flat1 = tf.keras.layers.Flatten()
    x2 = tf.keras.layers.Input((28, 28, 1), dtype=tf.float32)
    conv2 = tf.keras.layers.Conv2D(8, kernel_size=3, activation='relu')
    flat2 = tf.keras.layers.Flatten()
    concat = tf.keras.layers.Concatenate()
    proj = tf.keras.layers.Dense(10, activation='softmax')

    m = tf.keras.models.Model(
        [x1, x2], proj(concat([flat1(conv1(x1)), flat2(conv2(x2))])))
    m.compile('adam', 'mse')

    ds = tf.data.Dataset.from_tensor_slices(x_test)
    ds1 = tf.data.Dataset.zip((ds, ds))
    ds2 = tf.data.Dataset.zip((ds1, tf.data.Dataset.from_tensor_slices(y_test)))

    outputs = m.predict(x=ds1.batch(batch_size=10), steps=1000, verbose=True)  # fails with ValueError
    outputs = m.predict(x=ds2.batch(batch_size=10), steps=1000, verbose=True)  # works

Other info / logs

    ValueError                                Traceback (most recent call last)
    <ipython-input> in <module>
         23 ds = ds.batch(batch_size=10)
         24 data = ds.make_one_shot_iterator()
    ---> 25 outputs = m.predict(x=data, steps=1000, verbose=True)

    .virtualenvs/python3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, max_queue_size, workers, use_multiprocessing)
       1094             batch_size=batch_size)
       1095         x, _, _ = self._standardize_user_data(
    -> 1096             x, check_steps=True, steps_name='steps', steps=steps)
       1097
       1098         if self.run_eagerly or (isinstance(x, iterator_ops.EagerIterator) and

    .virtualenvs/python3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle)
       2380             feed_input_shapes,
       2381             check_batch_axis=False,  # Don't enforce the batch size.
    -> 2382             exception_prefix='input')
       2383
       2384         if y is not None:

    .virtualenvs/python3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
        321                              'Expected to see ' + str(len(names)) + ' array(s), '
        322                              'but instead got the following list of ' +
    --> 323                              str(len(data)) + ' arrays: ' + str(data)[:200])
        324         elif len(names) > 1:
        325             raise ValueError(

    ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: ...
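A plausible mechanism for the behaviour above is the input-standardization step mis-splitting a bare (x1, x2) element as an (inputs, targets) pair. A pure-Python sketch of that kind of logic (simplified and assumed — not the real Keras code):

```python
# Sketch: a dataset element may be either `inputs` or an `(inputs, targets)`
# pair; the inputs part must then match the number of model inputs.
def standardize_element(element, num_inputs, has_targets):
    if has_targets:
        inputs, targets = element
    else:
        inputs, targets = element, None
    inputs = list(inputs) if isinstance(inputs, tuple) else [inputs]
    if len(inputs) != num_inputs:
        raise ValueError("Expected to see %d array(s), but instead got %d"
                         % (num_inputs, len(inputs)))
    return inputs, targets

# ds2-style element ((x1, x2), y): the pair is unpacked, both inputs found.
ins, tgt = standardize_element((("img_a", "img_b"), "label"),
                               num_inputs=2, has_targets=True)

# ds1-style element (x1, x2) without targets: if the code wrongly assumes a
# targets slot, the tuple is split as (inputs, targets) and only one input
# remains — reproducing "Expected to see 2 array(s), but instead got ... 1".
try:
    standardize_element(("img_a", "img_b"), num_inputs=2, has_targets=True)
except ValueError as e:
    msg = str(e)
```

This would explain why adding a target to the dataset (ds2) makes predict work: the pair structure then matches what the unpacking code expects.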
tensorflowtensorflow
Receiving InvalidArgument error when feeding inputs of uneven length
Bug
System information
- I have written custom, albeit straightforward, code using keras.Sequential.
- Platform: Google Colab, CPU runtime, Python 3
- TensorFlow installed using: pip install -q tensorflow==2.0.0-alpha0

Current behavior
An InvalidArgument error occurs.

Expected behavior
I should be able to train the network using inputs of uneven length.

Code to reproduce the issue

    !pip install tensorflow==2.0.0-alpha0

    from tensorflow import keras
    from tensorflow.keras import layers
    from tensorflow.keras.optimizers import Adam
    import numpy as np
    import pickle

    model = keras.Sequential([
        layers.Embedding(input_dim=48191, output_dim=300),
        layers.LSTM(300),
        layers.Dense(1, activation='sigmoid')
    ])
    model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])

    dataset = pickle.load(open('dataset.pkl', 'rb'))
    data_generator = ((np.asarray(x), np.asarray(y)) for x, y in dataset)
    model.fit_generator(data_generator, steps_per_epoch=len(dataset))

Error message

    /usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)
    InvalidArgumentError: Trying to stack elements of an empty list with non-fully-defined element_shape: 300
    [node unified_lstm_2/TensorArrayV2Stack/TensorListStack] [Op:__inference_keras_scratch_graph_6215]

The dataset is required to reproduce the issue.
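Independently of the bug itself, feeding variable-length sequences into an LSTM generally requires padding each batch to a common length. A pure-Python sketch of a generator wrapper that does this (the pad value 0 and the batch layout are assumptions, not part of the original report):

```python
def pad_to_max(batch, pad_value=0):
    # Pad every sequence in the batch to the batch's maximum length.
    max_len = max(len(seq) for seq in batch)
    return [list(seq) + [pad_value] * (max_len - len(seq)) for seq in batch]

def padded_batches(samples, batch_size):
    # Yield (padded_inputs, labels) batches from (sequence, label) samples,
    # so every batch is rectangular before it reaches the model.
    for i in range(0, len(samples), batch_size):
        chunk = samples[i:i + batch_size]
        xs = pad_to_max([x for x, _ in chunk])
        ys = [y for _, y in chunk]
        yield xs, ys

batches = list(padded_batches([([1, 2], 0), ([3, 4, 5], 1), ([6], 0)],
                              batch_size=2))
# First batch is padded to length 3; the second batch holds the lone [6].
```

With real data one would convert each padded batch to np.asarray before yielding, which is exactly the step the reporter's generator performs on ragged, unpadded data.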
tensorflowtensorflow
XLA GPU JIT slows down GNMT when large clusters are formed
Bug
System information
- Have I written custom code: used models from NVIDIA OpenSeq2Seq
- OS platform and distribution: Ubuntu 16.04
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): upstream base 779-g83909d2, 1.13.1
- Python version: 3.5.2
- Bazel version (if compiling from source): 0.24.1
- GCC/compiler version (if compiling from source): 5.4.0 20160609
- CUDA/cuDNN version: 10.0
- GPU model and memory: NVIDIA GV100

Code to reproduce the issue
Note: let's assume a context where the deadness analysis is disabled, for forming large clusters. In such a scenario, XLA GPU JIT slows down this GNMT implementation by around 30%.

Reproduction steps:
1. Download the training dataset from the following link; assume it is put under wmt16/.
2. git clone
3. cd OpenSeq2Seq
4. git apply the GNMT config .txt file here; assume the dataset is placed under wmt16/.
5. Run the command:
   TF_XLA_FLAGS="--tf_xla_disable_deadness_safety_checks_for_debugging=true" CUDA_VISIBLE_DEVICES=0 python run.py --config_file=example_configs/text2text/en-de/en-de-gnmt-like-4GPUs.py --benchmark --bench_start 50 --bench_steps 100 --use_xla_jit
6. Remove --use_xla_jit to run with native TensorFlow, and observe the slowdown.

Performance measured on my GV100 (1 GPU):
- with XLA JIT: avg time per step 1.000 s
- with native TensorFlow: avg time per step 0.760 s
This is around a 30% slowdown comparing XLA JIT to native TF.

A theory of why such a slowdown occurs
My theory is as follows. Ideally the TF runtime (host) should be able to run ahead of the device, so that the host overhead can be hidden. To achieve that, the TF runtime usually executes the dataflow ops in parallel on multiple CPUs to feed computation to one GPU stream. For example, when ops related to an LSTM cell are executed, the tf.while ops (e.g. Merge, Switch, etc.) can be executed in parallel so that the loop-related latency is hidden. This, however, is not what is observed with XLA JIT on the forward path of the GNMT.

A possible explanation of why the host does not run ahead is an interaction between the host-device synchronization needed by tf.while and long-latency XlaRun ops. The tf.while ops involve synchronization between host and device, as they need to copy some computation results back to the host to decide when to exit the loop. For example, a typical op sequence of tf.while is Add, Less, LoopCond and Switch: the result of Add is on the device, but TF computes Less, LoopCond and Switch on the host. These synchronization latencies are well hidden in pure TF execution, since there are more parallel ops and each op has low latency. In contrast, scheduling a long-latency op such as XlaRun along with the op sequence of tf.while on the same GPU stream can introduce extra dependencies, established through the runtime scheduling order on the single GPU stream, and force the host to wait for the completion of this long-latency op even when they have no dependency in the dataflow graph. Often this is the case observed.

Other info: further references for the script options.
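The stream-ordering argument in the theory above can be made concrete with a toy timeline. This is a pure-Python sketch with made-up durations, not a model of the actual runtime:

```python
# Toy single-stream timeline (durations in milliseconds, illustrative only).
# The host needs Add's result to evaluate the loop condition, but the copy
# can only complete after everything queued before it on the stream finishes.
def host_wait(stream_ops, needed_op):
    t = 0
    for name, dur in stream_ops:
        t += dur
        if name == needed_op:
            return t
    raise ValueError(needed_op)

# Pure TF: only the small Add sits ahead of the loop-condition input.
assert host_wait([("Add", 1)], "Add") == 1
# XLA: a long XlaRun queued on the same stream delays the host by its whole
# duration, even though Add has no dataflow dependency on it.
assert host_wait([("XlaRun", 50), ("Add", 1)], "Add") == 51
```

This captures the claimed mechanism: single-stream ordering turns an independent long op into an implicit dependency for the host-side loop-exit decision.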
tensorflowtensorflow
TFLiteConverterV2 has no attribute from_saved_model
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below): v1.12.0-9492-g2c319fb415, 2.0.0-alpha0
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 9.0 / 7.1.4
- GPU model and memory:

Describe the current behavior
When running tf.lite.TFLiteConverter.from_saved_model(saved_model_path), an error occurs as described in the title.

Describe the expected behavior
The TF Lite converter should work properly.

Code to reproduce the issue
Just follow the sample code here:

    import tensorflow as tf

    root = tf.train.Checkpoint()
    root.v1 = tf.Variable(3.)
    root.v2 = tf.Variable(2.)
    root.f = tf.function(lambda x: root.v1 * root.v2 * x)

    export_dir = "/tmp/test_saved_model"
    input_data = tf.constant(1., shape=[1, 1])
    to_save = root.f.get_concrete_function(input_data)
    tf.saved_model.save(root, export_dir, to_save)

    converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
    tflite_model = converter.convert()
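When a classmethod like this is missing from an installed release, a script can detect the gap instead of crashing with AttributeError. A generic pure-Python sketch of that feature detection (the class here is a stand-in, not the real converter):

```python
class ConverterV2:  # stand-in for a converter class lacking the classmethod
    pass

def get_converter_factory(converter_cls):
    # Prefer from_saved_model when the installed version provides it;
    # otherwise report which attribute is missing so the caller can fall
    # back to another conversion path for that release.
    factory = getattr(converter_cls, "from_saved_model", None)
    if factory is None:
        return None, ("'%s' has no attribute 'from_saved_model'"
                      % converter_cls.__name__)
    return factory, None

factory, err = get_converter_factory(ConverterV2)
```

This only makes the failure explicit; the underlying report is that the documented sample assumes an API the 2.0.0-alpha0 build does not expose.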
tensorflowtensorflow
Python 3 type annotations do not work with tf.function for-loop (tf.while_loop conversion)
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0-alpha
- Python version: 3.6.7
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**
If you use a Python 3 type annotation such as `x: tf.Tensor = tf.constant(0)` (I alias `tf.Tensor` for various shapes to keep my sanity for reinforcement learning problems) in a `tf.function`, and the function contains a `for` loop to be translated to `tf.while_loop` (the loop doesn't even have to use the annotated tensor), the code will fail as if you did not turn on eager execution.

**Describe the expected behavior**
Python 3 type hinting should not make the code fail.

**Code to reproduce the issue**
```python
@tf.function
def tf_for_tf_break():
    x: tf.Tensor = tf.constant(0)
    for i in tf.range(5):
        x += i
    return x

print(tf_for_tf_break())
```

**Other info / logs**
```
WARNING: Logging before flag parsing goes to stderr.
W0519 21:35:32.992043 140297958307648 tf_logging.py:161] Entity could not be transformed and will be staged without change. Error details can be found in the logs when running with the env variable AUTOGRAPH_VERBOSITY >= 1. Please report this to the AutoGraph team. Cause: AttributeError during conversion: 'NoneType' object has no attribute '_fields'
Traceback (most recent call last):
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/autograph/impl/conversion.py", line 393, in function_to_graph
    node = node_to_graph(node, context)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/autograph/impl/conversion.py", line 436, in node_to_graph
    node = converter.standard_analysis(node, context, is_initial=True)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/autograph/core/converter.py", line 493, in standard_analysis
    graphs = cfg.build(node)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/autograph/pyct/cfg.py", line 813, in build
    visitor.visit(node)
  File "/usr/lib/python3.6/ast.py", line 253, in visit
    return visitor(node)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/autograph/pyct/cfg.py", line 672, in visit_FunctionDef
    self._visit_stmt
  File "/usr/lib/python3.6/ast.py", line 253, in visit
    return visitor(node)
  File "/usr/lib/python3.6/ast.py", line 257, in generic_visit
    for field, value in iter_fields(node):
  File "/usr/lib/python3.6/ast.py", line 171, in iter_fields
    for field in node._fields:
AttributeError: 'NoneType' object has no attribute '_fields'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 369, in converted_call
    experimental_partial_types=partial_types)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 513, in to_graph
    arg_values=arg_values, arg_types=arg_types)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/autograph/impl/conversion.py", line 190, in entity_to_graph
    nodes, name, ns = function_to_graph(o, program_ctx, arg_values, arg_types)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/autograph/impl/conversion.py", line 396, in function_to_graph
    raise errors.InternalError('conversion', e)
tensorflow.python.autograph.pyct.errors.InternalError: AttributeError during conversion: 'NoneType' object has no attribute '_fields'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/jackshi/magneticaccelerator/descrete_optimization/tf_scratch.py", line 12, in <module>
    print(tf_for_tf_break())
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 426, in __call__
    self._initialize(args, kwds, add_initializers_to=initializer_map)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 370, in _initialize
    *args, **kwds)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1313, in _get_concrete_function_internal_garbage_collected
    graph_function = self._maybe_define_function(args, kwargs)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1580, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1512, in _create_graph_function
    capture_by_value=self._capture_by_value)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 694, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 317, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 686, in wrapper
    *args, **kwargs
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 390, in converted_call
    return _call_unconverted(f, args, kwargs)
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 188, in _call_unconverted
    return f(*args, **kwargs)
  File "/home/jackshi/magneticaccelerator/descrete_optimization/tf_scratch.py", line 7, in tf_for_tf_break
    for i in tf.range(5):
  File "/home/jackshi/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 449, in __iter__
    "Tensor objects are only iterable when eager execution is "
TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn.
```
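The inner failure above is a naive AST walk hitting a `None` child while processing the annotated function. As a stdlib-only sketch of that general failure class (not AutoGraph's actual code path, and using a bare `x: int` annotation for illustration), Python's `ast` module shows how annotation nodes introduce `None` children that a visitor must guard against:

```python
import ast

# Parse a function containing a bare variable annotation.
tree = ast.parse(
    "def f():\n"
    "    x: int\n"      # AnnAssign node with value=None
    "    return 0\n"
)

ann = tree.body[0].body[0]
assert isinstance(ann, ast.AnnAssign)
print(ann.value)  # None: there is no assigned expression

# A naive walker that assumes every child is an AST node blows up on None,
# the same class of failure as in the AutoGraph traceback above.
def naive_children(node):
    return [getattr(child, "_fields") for _, child in ast.iter_fields(node)]

try:
    naive_children(ann)
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute '_fields'

# The guarded version skips non-AST children, as ast.NodeVisitor.generic_visit does.
def safe_children(node):
    return [child for _, child in ast.iter_fields(node) if isinstance(child, ast.AST)]

print(len(safe_children(ann)))  # 2: the target and the annotation
```

The `naive_children` / `safe_children` helpers are hypothetical names for illustration; the point is that any CFG or visitor pass over annotated code has to tolerate `None` fields.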
tensorflow/tensorflow
TF2.0 docs sprint prep: error with Dataset example
Bug
**URL(s) with the issue:**

**Description of issue (what needs changing):** The example for `from_generator` fails.
- **Clear description**: This example uses `tf.enable_eager_execution()`, which gives the following error: `AttributeError: module 'tensorflow' has no attribute 'enable_eager_execution'`. If you remove this line of code the example runs, but this identifies the code as deprecated in TF v2.
- **Correct links**: I do not see a link to the source.
- **Parameters defined**: The parameters are defined, but the example only uses three of the four parameters. The last parameter is optional, but it is not shown how it should be used.
- **Returns defined**: The return value is defined correctly as a Dataset.
- **Raises listed and defined**: Are the errors defined, for example what is raised?
- **Usage example**: There is a usage example; this PR is regarding the example.
- **Request visuals, if applicable**: There are no visuals in the example.
- **Submit a pull request?** Are you planning to also submit a pull request to fix the issue? (See the docs contributor guide and the docs style guide.) Yes.
tensorflow/tensorflow
tf.data Dataset.map only processes one example in my dataset
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution: Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.12.2
- Python version: 3.5
- CUDA/cuDNN version: V8.0.61
- GPU model and memory: Tesla P100-PCIE, 12193 MiB

**Describe the current behavior**
I have a dataset containing 592 examples, but `tf.data.Dataset.map` only processes one of those, as evidenced by a global counter which I increment in the function given to `map`. Why is it not processing all examples in the dataset? I run in eager execution, but code inside `map` is not executed eagerly.

**Describe the expected behavior**
All 592 examples in my dataset should be processed, with the counter being 592 afterwards.

**Code to reproduce the issue**
My dataset:
```python
from __future__ import absolute_import, division, print_function
import tensorflow as tf
import numpy as np
import sys
import utilities

counter = 0

def decode_png_mask(image_buffer):
    """Takes a string of bytes encoding a PNG and produces a tensor image."""
    image = tf.squeeze(tf.image.decode_image(image_buffer, channels=1), axis=2)
    image.set_shape([None, None])
    image = tf.greater(image, 0)
    image = tf.cast(image, dtype=tf.uint8)
    return image

def mask_to_onehot(tag_masks, tag_class_indices, num_classes):
    def onehotify(pixel_tag_masks):
        tag_mask_sizes_suppressed = tf.where(
            tf.not_equal(tag_mask_sizes, 0), tag_mask_sizes, tag_mask_sizes + 9999999)
        smallest_mask_index = tf.argmin(tag_mask_sizes_suppressed)
        onehot = tf.one_hot(smallest_mask_index, depth=num_classes, dtype=tf.uint8)
        return onehot
    tag_mask_sizes = tf.reduce_sum(tag_masks, axis=[1, 2])
    image_masks = tf.transpose(tag_masks, perm=[1, 2, 0])
    onehot = tf.map_fn(lambda x: tf.map_fn(onehotify, x), image_masks)
    return onehot

def parse_example(example_proto, width, height, num_classes):
    features = {
        'image/encoded': tf.FixedLenFeature((), tf.string),
        'image/height': tf.FixedLenFeature((), tf.int64),
        'image/width': tf.FixedLenFeature((), tf.int64),
        'image/filename': tf.FixedLenFeature((), tf.string),
        'image/object/bbox/xmin': tf.VarLenFeature(tf.float32),
        'image/object/bbox/xmax': tf.VarLenFeature(tf.float32),
        'image/object/bbox/ymin': tf.VarLenFeature(tf.float32),
        'image/object/bbox/ymax': tf.VarLenFeature(tf.float32),
        'image/object/class/label': tf.VarLenFeature(tf.int64),
        'image/object/class/text': tf.VarLenFeature(tf.string),
        'image/object/mask': tf.VarLenFeature(tf.string),
        'image/depth': tf.FixedLenFeature((), tf.string),
    }
    global counter
    counter += 1
    parsed_example = tf.parse_single_example(example_proto, features)
    # decode image
    image = tf.image.decode_jpeg(parsed_example['image/encoded'])
    parsed_example['image/encoded'] = image
    tag_masks = tf.sparse_to_dense(parsed_example['image/object/mask'], default_value='')
    tag_masks = tf.map_fn(decode_png_mask, tag_masks, dtype=tf.uint8)
    tag_masks = tf.reshape(tag_masks, shape=tf.stack([-1, height, width]), name='tag_masks')
    # All segmentations now have their masks in masks, their labelmap index
    # in class_indices, and their tag names in class_texts.
    tag_class_indices = tf.sparse_to_dense(parsed_example['image/object/class/label'])
    tag_class_names = tf.sparse_to_dense(parsed_example['image/object/class/text'], default_value='')
    onehot = mask_to_onehot(tag_masks, tag_class_indices, num_classes)
    parsed_example['image/label'] = onehot
    return parsed_example

tf.enable_eager_execution()
num_classes = 21
tfrecord_train = "/path/to/tf/record"
dataset_train = tf.data.TFRecordDataset(tfrecord_train)

# read image width/height from the tfrecord file
iterator = dataset_train.make_one_shot_iterator()
next_element = iterator.get_next()
parsed_element = np.fromstring(next_element.numpy(), dtype=np.uint8)
example = tf.train.Example.FromString(parsed_element)
height = example.features.feature['image/height'].int64_list.value[0]
width = example.features.feature['image/width'].int64_list.value[0]

dataset_train = dataset_train.map(lambda x: parse_example(x, width, height, num_classes))
print(counter)
```
tensorflow/tensorflow
tf-nightly-gpu-2.0: very slow on tape.gradient
Bug
While migrating to TensorFlow 2.0, I found a big performance issue.

```python
@tf.function
def ontrainstep(self, data, training=True):
    images, labels = data
    with tf.GradientTape() as tape:
        loss, prediction = self.seq2seq(images, labels, training)
    # calculate the total probability of the output string
    probabilities = tf.nn.softmax(prediction)
    ggn = 0
    if training:
        params = self.seq2seq.trainable_variables
        gradients = tape.gradient(loss, params)
        gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
        ggn = tf.linalg.global_norm(gradients)
        # update op: apply gradients
        self.optimizer.apply_gradients(zip(gradients, params))
    return loss, prediction, probabilities, ggn
```

tensorflow-gpu 1.13.1 takes 0.7 seconds. tensorflow-gpu 2.0 nightly takes 2.4 seconds. Removing `gradients = tape.gradient(loss, params)` and `ggn = tf.linalg.global_norm(gradients)` takes it to 0.3 seconds. That means `tape.gradient(loss, params)` is consuming most of the time.
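For reference, the clipping step in the loop above rescales all gradients by `clip_norm / max(global_norm, clip_norm)`, where the global norm is the L2 norm over every element of every tensor. A plain-Python sketch of that arithmetic (lists standing in for tensors; this illustrates the semantics of `tf.clip_by_global_norm`, not its implementation):

```python
import math

def global_norm(tensors):
    # sqrt of the sum of squares over all elements of all tensors
    return math.sqrt(sum(v * v for t in tensors for v in t))

def clip_by_global_norm(tensors, clip_norm):
    gn = global_norm(tensors)
    scale = clip_norm / max(gn, clip_norm)  # no-op when gn <= clip_norm
    return [[v * scale for v in t] for t in tensors], gn

grads = [[3.0, 4.0], [12.0]]            # global norm = sqrt(9 + 16 + 144) = 13
clipped, gn = clip_by_global_norm(grads, 5.0)
print(gn)                                # 13.0
print(global_norm(clipped))              # 5.0 after clipping
```

Because the scale is computed from the global norm, all tensors shrink by the same factor, preserving the gradient direction.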
tensorflow/tensorflow
TF 2.0 alpha tutorial "Using TFRecords and tf.Example" can't run on tensorflow-2.0.0-gpu
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

**URL(s) with the issue**
- TensorFlow version: tensorflow 2.0 alpha
- Doc link: "Using TFRecords and tf.Example"
- Windows 10 LTSC x64, Python 3.6, CUDA 10.0, cuDNN 7.5

**Description of issue (what needs changing):** If I run the example code, I get an error in the cell `tf_serialize_example(f0, f1, f2, f3)`.
- **Clear description**: For example, why should someone use this method? How is it useful?
- **Correct links**: Is the link to the source code correct?
- **Parameters defined**: Are all parameters defined and formatted correctly?
- **Returns defined**: Are return values defined?
- **Raises listed and defined**:

```
UnknownError                              Traceback (most recent call last)
<ipython-input> in <module>
----> 1 tf_serialize_example(f0, f1, f2, f3)

<ipython-input> in tf_serialize_example(f0, f1, f2, f3)
      3     serialize_example,
      4     (f0, f1, f2, f3),  # pass these args to the above function.
      5     tf.string)         # the return type is tf.string.
----> 6   return tf.reshape(tf_string, ())  # The result is a scalar

f:\anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\script_ops.py in eager_py_func(func, inp, Tout, name)
    387   if func
    388
--> 389   return _internal_py_func(func=func, inp=inp, Tout=Tout, eager=True, name=name)
    390
    391

f:\anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\script_ops.py in _internal_py_func(func, inp, Tout, stateful, eager, is_grad_func, name)
    276   if eager:
    277     result = gen_script_ops.eager_py_func(
--> 278         input=inp, token=token, Tout=Tout, name=name)
    279   else:
    280     if stateful:

f:\anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\gen_script_ops.py in eager_py_func(input, token, Tout, name)
     64     else:
     65       message = e.message
---> 66       six.raise_from(core._status_to_exception(e.code, message), None)
     67   # Add nodes to the TensorFlow graph.
     68   token = _execute.make_str(token, "token")

f:\anaconda3\envs\tf2\lib\site-packages\six.py in raise_from(value, from_value)

UnknownError: RuntimeError: Error copying tensor to device: cpu:0. can't copy 35 bytes of a tensor into another with 32 bytes buffer
Traceback (most recent call last):

  File "f:\anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\script_ops.py", line 205, in __call__
    return func(device, token, args)

  File "f:\anaconda3\envs\tf2\lib\site-packages\tensorflow\python\ops\script_ops.py", line 107, in __call__
    ret = self._func(*args)

  File "<ipython-input>", line 10, in serialize_example
    'feature2': _bytes_feature(feature2),

  File "<ipython-input>", line 8, in _bytes_feature
    value = value.numpy()  # BytesList won't unpack a string from an EagerTensor.

  File "f:\anaconda3\envs\tf2\lib\site-packages\tensorflow\python\framework\ops.py", line 732, in numpy
    return self._cpu_nograd()._numpy()  # pylint: disable=protected-access

  File "f:\anaconda3\envs\tf2\lib\site-packages\tensorflow\python\framework\ops.py", line 899, in _cpu_nograd
    return self._copy_nograd(context.context(), "CPU:0")

  File "f:\anaconda3\envs\tf2\lib\site-packages\tensorflow\python\framework\ops.py", line 847, in _copy_nograd
    new_tensor = self._copy_to_device(context=ctx._handle, device=device_name)

RuntimeError: Error copying tensor to device: cpu:0. can't copy 35 bytes of a tensor into another with 32 bytes buffer [Op:EagerPyFunc]
```
tensorflow/tensorflow
Signature defs lost when calling saved_model_cli convert
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes, using the saved_model_cli tool
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): `cat /etc/os-release`:
```
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.6"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.6 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server"
HOME_URL=
BUG_REPORT_URL=
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"
```
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`:
```
2019-05-17 20:37:38.121490: I tensorflow/stream_executor/platform/default/dso_loader.cc:43] Successfully opened dynamic library libcudart.so.10.1
v1.12.1-142-gcd15418 1.14.0-rc0
```
- Python version: `python --version`: Python 3.6.8 (Anaconda, Inc.)
- Bazel version (if compiling from source): 0.24.1
- GCC/Compiler version (if compiling from source): 7.3.0
- CUDA/cuDNN version: cudatoolkit 10.1.152, cudnn 7.5.1 (CUDA 10.1)
- GPU model and memory: Tesla V100-SXM2, 16130 MiB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

**Describe the current behavior**
The predict signature definition is lost when running the `saved_model_cli convert` command against a ResNet model.

**Describe the expected behavior**
Both the predict signature and the serving_default signature definitions should be in the saved model. Code was removed near these lines in trt_convert.py when the code was restructured in the r1.14 branch (L254). The original code in the v1.13.1 tagged version loops through all of the signature definitions (L292).

**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem. The following steps will show the issue with the converted graph:
```shell
mkdir -p /home/savedmodels
cd /home/savedmodels
wget <model tarball URL>
tar --no-same-owner -xzvf resnet_v1_fp16_savedmodel_NCHW.tar.gz
cd /home
mkdir -p /home/inference/models/resnet_v1_50_fp16
saved_model_cli convert \
  --dir /home/savedmodels/resnet_v1_fp16_savedmodel_NCHW/1538686290 \
  --output_dir /home/inference/models/resnet_v1_50_fp16_cli \
  --tag_set serve \
  tensorrt --precision_mode FP16 --max_batch_size 1 --is_dynamic_op False
# will show the "predict" SavedModel SignatureDef:
saved_model_cli show --dir /home/savedmodels/resnet_v1_fp16_savedmodel_NCHW/1538686290 --all
# the "predict" signature definition does not exist in the converted model:
saved_model_cli show --dir /home/inference/models/resnet_v1_50_fp16_cli --all
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
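The regression described above, where only the default signature survives conversion, is the difference between copying a single key and looping over every entry of the SignatureDef map, as the v1.13.1 code did. A hedged sketch of that difference, with a plain dict standing in for the MetaGraphDef's signature map (the function names and values are illustrative, not TensorFlow's API):

```python
def convert_keep_default_only(signature_defs):
    """Mimics the regressed behavior: only serving_default survives."""
    key = "serving_default"
    return {key: signature_defs[key]}

def convert_keep_all(signature_defs):
    """Mimics the v1.13.1 behavior: loop over every signature def and copy it."""
    return {name: sig for name, sig in signature_defs.items()}

original = {"serving_default": "classify", "predict": "predict_images"}

print(sorted(convert_keep_default_only(original)))  # ['serving_default']
print(sorted(convert_keep_all(original)))           # ['predict', 'serving_default']
```

Running `saved_model_cli show --all` on the output of each variant would make the difference visible, which is exactly the diagnostic the reproduction steps above use.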
tensorflow/tensorflow
Linux: No module named 'tensorflow_estimator.python.estimator.tpu'
Bug
I successfully developed and ran a model using TensorFlow Probability with pruning on a Windows machine, but I get the following error on Linux (Ubuntu 16.04):

```
  File "<stdin>", line 4, in <module>
    import tensorflow_probability as tfp
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow_probability/__init__.py", line 78, in <module>
    from tensorflow_probability.python import *  # pylint: disable=wildcard-import
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow_probability/python/__init__.py", line 21, in <module>
    from tensorflow_probability.python import bijectors
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow_probability/python/bijectors/__init__.py", line 23, in <module>
    from tensorflow_probability.python.bijectors.absolute_value import AbsoluteValue
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow_probability/python/bijectors/absolute_value.py", line 22, in <module>
    from tensorflow_probability.python.bijectors import bijector
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow_probability/python/bijectors/bijector.py", line 31, in <module>
    from tensorflow_probability.python.internal import distribution_util
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow_probability/python/internal/distribution_util.py", line 26, in <module>
    from tensorflow_probability.python.internal import dtype_util
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow_probability/python/internal/dtype_util.py", line 24, in <module>
    from tensorflow.contrib import framework as contrib_framework
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/__init__.py", line 41, in <module>
    from tensorflow.contrib import distributions
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/distributions/__init__.py", line 44, in <module>
    from tensorflow.contrib.distributions.python.ops.estimator import *
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/distributions/python/ops/estimator.py", line 21, in <module>
    from tensorflow.contrib.learn.python.learn.estimators.head import _compute_weighted_loss
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/__init__.py", line 93, in <module>
    from tensorflow.contrib.learn.python.learn import *
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/__init__.py", line 28, in <module>
    from tensorflow.contrib.learn.python.learn import *
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/__init__.py", line 40, in <module>
    from tensorflow.contrib.learn.python.learn.experiment import Experiment
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 39, in <module>
    from tensorflow.contrib.tpu.python.tpu import tpu_estimator
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/tpu/__init__.py", line 77, in <module>
    from tensorflow.contrib.tpu.python.tpu.tpu_config import *
  File "/home/ruben/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/tpu/python/tpu/tpu_config.py", line 22, in <module>
    from tensorflow_estimator.python.estimator.tpu.tpu_config import *
ModuleNotFoundError: No module named 'tensorflow_estimator.python.estimator.tpu'
```

All the packages involved are up to date, including pandas, matplotlib and numpy. A pip/conda reinstall does not solve this issue. I'm running Python 3.6.8 (64 bits), Qt 5.11.2, PyQt5 5.11.3 on Linux, Spyder 3.3.1.
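A generic way to probe for the module the final traceback line complains about, without triggering the whole import chain, is `importlib.util.find_spec`. This is a general diagnostic suggestion of mine, shown with stdlib modules so it is self-contained; for the case above one would probe `"tensorflow_estimator.python.estimator.tpu"` instead:

```python
import importlib.util

def module_available(dotted_name):
    """Return True if `dotted_name` resolves to an importable module."""
    try:
        return importlib.util.find_spec(dotted_name) is not None
    except ModuleNotFoundError:
        # A missing parent package also means the module is unavailable.
        return False

print(module_available("json"))                 # True
print(module_available("json.decoder"))         # True
print(module_available("no_such_package.tpu"))  # False
```

A `False` result for the estimator submodule would point at a mismatched `tensorflow` / `tensorflow_estimator` pair in the environment rather than at tensorflow_probability itself.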
tensorflow/tensorflow
TypeError: Not JSON Serializable, while doing tf.keras model.save and using a keras variable in loss_weights in tf.keras model.compile
Bug
**System information**
- Have I written custom code: N/A
- OS Platform and Distribution: Ubuntu 16.04 LTS
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.12.0
- Python version: 3.5.2
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: release 9.0, V9.0.176
- GPU model and memory: Tesla K80, 12 GB

**Describe the current behavior**
When I try to save my model using `model.save()` (where `model` is a `tf.keras.Model` instance), it throws `TypeError: Not JSON Serializable`. I am using a `tf.keras.backend.variable` in `loss_weights` in `model.compile(optimizer=tf.keras.optimizers.Adam(), ...)`. Interestingly, when I try to save my model weights only, using `model.save_weights()`, it works fine with no error.

**Describe the expected behavior**
It should not throw any type of error during training.

**Code to reproduce the issue**
```python
import tensorflow as tf
import numpy as np
import os
import sys

input_layer = tf.keras.layers.Input(shape=(4, 3), batch_size=1)
layer1 = tf.keras.layers.Dense(20)(input_layer)
layer2 = tf.keras.layers.Dense(1, name='output1')(layer1)
layer3 = tf.keras.layers.Dense(1, name='output2')(layer1)
model = tf.keras.Model(inputs=input_layer, outputs=[layer2, layer3])

alpha = tf.keras.backend.variable(0.25)
model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.001),
              loss={'output1': tf.keras.metrics.binary_crossentropy,
                    'output2': tf.keras.metrics.binary_crossentropy},
              loss_weights=[1.0, alpha])
# model.fit is skipped just to get straight to the error
model.save('weights.h5')
# just execute this code as it is; it will throw the same error
```

**Other info / logs**
```
Traceback (most recent call last):
  File "main_late.py", line 45, in <module>
    max_queue_size=10)
  File "/home/tejal/.local/lib/python3.5/site-packages/tensorflow/python/keras/engine/training.py", line 2177, in fit_generator
    initial_epoch=initial_epoch)
  File "/home/tejal/.local/lib/python3.5/site-packages/tensorflow/python/keras/engine/training_generator.py", line 216, in fit_generator
    callbacks.on_epoch_end(epoch, epoch_logs)
  File "/home/tejal/.local/lib/python3.5/site-packages/tensorflow/python/keras/callbacks.py", line 214, in on_epoch_end
    callback.on_epoch_end(epoch, logs)
  File "/home/tejal/.local/lib/python3.5/site-packages/tensorflow/python/keras/callbacks.py", line 601, in on_epoch_end
    self.model.save(filepath, overwrite=True)
  File "/home/tejal/.local/lib/python3.5/site-packages/tensorflow/python/keras/engine/network.py", line 1363, in save
    save_model(self, filepath, overwrite, include_optimizer)
  File "/home/tejal/.local/lib/python3.5/site-packages/tensorflow/python/keras/engine/saving.py", line 134, in save_model
    default=serialization.get_json_type).encode('utf8')
  File "/usr/lib/python3.5/json/__init__.py", line 237, in dumps
    **kw).encode(obj)
  File "/usr/lib/python3.5/json/encoder.py", line 198, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python3.5/json/encoder.py", line 256, in iterencode
    return _iterencode(o, 0)
  File "/home/tejal/.local/lib/python3.5/site-packages/tensorflow/python/util/serialization.py", line 64, in get_json_type
    raise TypeError('Not JSON Serializable:', obj)
TypeError: ('Not JSON Serializable:', <TypeError>)
```
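The crash above is ultimately the stdlib `json` encoder refusing an object it does not understand: the backend variable inside `loss_weights` ends up in the model config that `save_model` serializes. A self-contained sketch of the failure mode and the usual escape hatch, a `default=` hook (plain stdlib; the `BackendVariableLike` class is a toy stand-in, not Keras's actual serializer):

```python
import json

class BackendVariableLike:
    """Toy stand-in for a keras backend variable holding a float."""
    def __init__(self, value):
        self._value = value
    def item(self):
        return self._value

config = {"loss_weights": [1.0, BackendVariableLike(0.25)]}

# json.dumps fails on unknown objects, as in the issue's traceback.
try:
    json.dumps(config)
except TypeError as e:
    print("failed:", e)

# Supplying default= converts the variable to a plain float first.
def to_json_type(obj):
    if hasattr(obj, "item"):
        return obj.item()
    raise TypeError("Not JSON Serializable: %r" % (obj,))

print(json.dumps(config, default=to_json_type))  # {"loss_weights": [1.0, 0.25]}
```

This also suggests the workaround visible in the report itself: pass plain floats (or `float(K.get_value(alpha))`) in `loss_weights`, or save only the weights with `model.save_weights()`, which skips JSON config serialization.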
tensorflow/tensorflow
TF2.0: can't use Keras validation_data with distribution strategy
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): pip tf-nightly-gpu-2.0-preview
- TensorFlow version (use command below): 2.0.0-dev20190517
- Python version: 3.7.3
- CUDA/cuDNN version: 10 / 7.4.2.24
- GPU model and memory: 4 x V100

**Describe the current behavior**
Using Keras `validation_data` together with `tf.distribute.MirroredStrategy` fails with `'BatchDataset' object is not subscriptable` in TF 2.0 nightly.

**Describe the expected behavior**
Without the distribution strategy, everything works fine.

**Code to reproduce the issue**
```python
import tensorflow as tf
import tensorflow_datasets as tfds

mnist_train, mnist_test = tfds.load(name='mnist',
                                    split=[tfds.Split.TRAIN, tfds.Split.TEST],
                                    as_supervised=True)
strategy = tf.distribute.MirroredStrategy()

def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label

train_dataset = mnist_train.map(scale).shuffle(1000).batch(256)
test_dataset = mnist_test.map(scale).batch(256)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=['accuracy'])

model.fit(train_dataset, validation_data=test_dataset, epochs=10)
```

**Other info / logs**
```
Traceback (most recent call last):
  File "tf_bug.py", line 30, in <module>
    model.fit(train_dataset, validation_data=test_dataset, epochs=10)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py", line 649, in fit
    validation_freq=validation_freq)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_distributed.py", line 143, in fit_distributed
    steps_name='steps_per_epoch')
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_arrays.py", line 145, in model_iteration
    _print_train_info(inputs, val_inputs, steps_per_epoch, verbose)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_arrays.py", line 450, in _print_train_info
    if (hasattr(inputs[0], 'shape') and hasattr(val_inputs[0], 'shape')):
TypeError: 'BatchDataset' object is not subscriptable
```
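The crash comes from code doing `inputs[0]` on a dataset, which is iterable but not subscriptable. A stdlib sketch of that distinction and a guard that avoids it (my own illustration of the failure mode; the eventual fix in Keras itself may differ):

```python
class BatchDatasetLike:
    """Iterable but not subscriptable, like tf.data's BatchDataset."""
    def __iter__(self):
        return iter([("batch0", "labels0")])

ds = BatchDatasetLike()

# Subscripting raises, exactly the failure mode in the traceback above.
try:
    ds[0]
except TypeError as e:
    print(e)  # 'BatchDatasetLike' object is not subscriptable

# Guarding before subscripting sidesteps the crash:
def first_element(inputs):
    if hasattr(inputs, "__getitem__"):
        return inputs[0]
    return next(iter(inputs))  # fall back to iteration

print(first_element(ds))          # ('batch0', 'labels0')
print(first_element(["a", "b"]))  # 'a'
```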
tensorflow/tensorflow
Bug in tf.keras.layers.Conv2D when using dilation_rate
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

```python
import tensorflow as tf  # TF2

class Conv2D_BN_ReLU(tf.keras.Model):
    """Conv2D -> BN -> ReLU"""
    def __init__(self, filters, kernel_size, strides=1, padding='same',
                 dilation_rate=1, use_bias=False,
                 kernel_initializer='he_normal', kernel_regularizer=None,
                 **bn_params):
        super(Conv2D_BN_ReLU, self).__init__()
        self.conv = tf.keras.layers.Conv2D(
            filters, kernel_size, strides=strides, padding=padding,
            dilation_rate=dilation_rate, use_bias=use_bias,
            kernel_initializer=kernel_initializer,
            kernel_regularizer=kernel_regularizer)
        self.bn = tf.keras.layers.BatchNormalization(**bn_params)

    def call(self, x, training=None):
        x = self.conv(x)
        x = tf.nn.relu(self.bn(x, training=training))
        return x

if __name__ == "__main__":
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "6"
    net = Conv2D_BN_ReLU(64, 1, 1, dilation_rate=6)
    x = tf.ones([1, 32, 32, 64])
    y = net(x)
    x = tf.ones([1, 64, 64, 64])
    y = net(x)
```

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below): 2.0
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

**Describe the current behavior**
`tensorflow.python.framework.errors_impl.InvalidArgumentError: padded_shape[0]=68 is not divisible by block_shape[0]=6 [Op:SpaceToBatchND]`

**Describe the expected behavior**
I think the second evaluation should also work, because I use `padding='same'`.

**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
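The error above comes from SpaceToBatchND, which dilated convolutions are lowered to, and which requires the padded spatial size to be a multiple of the block (dilation) size. The divisibility arithmetic for the reported shapes can be checked with plain Python (this is my reconstruction of the constraint stated in the error message, not TensorFlow's exact padding code):

```python
def space_to_batch_ok(padded_size, block_size):
    """SpaceToBatchND requires the padded spatial dim to divide evenly."""
    return padded_size % block_size == 0

def extra_pad_needed(padded_size, block_size):
    """How many more rows/cols of padding would satisfy the constraint."""
    return (-padded_size) % block_size

# The reported failure: a padded shape of 68 with block (dilation) size 6.
print(space_to_batch_ok(68, 6))  # False -> InvalidArgumentError
print(extra_pad_needed(68, 6))   # 4 more would make it 72

# A padded shape that is a multiple of 6 sails through.
print(space_to_batch_ok(72, 6))  # True
```

With `padding='same'` one would expect TF to compute that extra padding itself, which is presumably why the second evaluation failing reads as a bug rather than user error.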
tensorflow/tensorflow
TF2.0: can't use tf.data.Dataset.cache with distribution strategy
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): pip tf-nightly-gpu-2.0-preview
- TensorFlow version (use command below): 2.0.0-dev20190514
- Python version: 3.7.3
- CUDA/cuDNN version: 10 / 7.4.2.24
- GPU model and memory: V100

**Describe the current behavior**
Using `tf.data.Dataset.cache` together with `tf.distribute.MirroredStrategy` fails with `Cache should only be read after it has been completed. [Op:MultiDeviceIteratorInit]` in TF 2.0 nightly.

**Describe the expected behavior**
Using tf-nightly-gpu-1.14.1.dev20190514 (graph mode), `tf.data` cache works with the distribution strategy and doesn't throw an error.

**Code to reproduce the issue**
```python
import tensorflow as tf
import tensorflow_datasets as tfds

mnist_train, mnist_test = tfds.load(name='mnist',
                                    split=[tfds.Split.TRAIN, tfds.Split.TEST],
                                    as_supervised=True)
data_dir = "gs://plumerai/data"
strategy = tf.distribute.MirroredStrategy()

def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label

train_dataset = mnist_train.cache().repeat().map(scale).shuffle(1000).batch(256)
test_dataset = mnist_test.cache().repeat().map(scale).batch(256)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=['accuracy'])

model.fit(train_dataset, validation_data=test_dataset,
          steps_per_epoch=100, validation_steps=10, epochs=10)
```

**Other info / logs**
```
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py", line 646, in fit
    validation_freq=validation_freq)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_distributed.py", line 143, in fit_distributed
    steps_name='steps_per_epoch')
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_arrays.py", line 194, in model_iteration
    val_iterator = _get_iterator(val_inputs, model._distribution_strategy)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_arrays.py", line 512, in _get_iterator
    inputs, distribution_strategy)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/distribute/distributed_training_utils.py", line 529, in get_iterator
    initialize_iterator(iterator, distribution_strategy)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/distribute/distributed_training_utils.py", line 535, in initialize_iterator
    init_op = control_flow_ops.group(iterator.initialize())
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/input_lib.py", line 274, in initialize
    return super(DistributedIteratorV1, self)._initializer
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/input_lib.py", line 259, in _initializer
    init_ops.extend(it.initialize())
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/input_lib.py", line 660, in initialize
    self._iterator._eager_reset()  # pylint: disable=protected-access
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py", line 333, in _eager_reset
    max_buffer_size=self._max_buffer_size)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_dataset_ops.py", line 3118, in multi_device_iterator_init
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InternalError: Cache should only be read after it has been completed. [Op:MultiDeviceIteratorInit]
```
tensorflow/tensorflow
Conv2D breaks when used with Keras model subclassing
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0-alpha, GPU
- Python version: 3.6
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior:
I am trying to use tf.keras model subclassing to define my model, but it fails on the first conv layer. Here is the code I am using:

```python
class CustomModel(tf.keras.Model):
    def __init__(self, **kwargs):
        super(CustomModel, self).__init__(**kwargs)
        self.conv1 = Conv2D(32, (3, 3), padding='same')
        self.conv2 = Conv2D(64, (3, 3), padding='same')
        self.pool = MaxPooling2D(pool_size=(2, 2))
        self.bn = BatchNormalization()
        self.relu = Activation('relu')
        self.softmax = Activation('softmax')
        self.drop1 = Dropout(0.25)
        self.drop2 = Dropout(0.5)
        self.dense1 = Dense(512)
        self.dense2 = Dense(10)
        self.flat = Flatten()

    def call(self, inputs, train):
        z = self.conv1(inputs)
        z = self.bn(z, training=train)
        z = self.relu(z)
        z = self.conv1(z)
        z = self.bn(z, training=train)
        z = self.relu(z)
        z = self.pool(z)
        z = self.drop1(z, training=train)
        z = self.conv2(z)
        z = self.bn(z, training=train)
        z = self.relu(z)
        z = self.conv2(z)
        z = self.bn(z, training=train)
        z = self.relu(z)
        z = self.pool(z)
        z = self.drop1(z, training=train)
        z = self.flat(z)
        z = self.dense1(z)
        z = self.relu(z)
        z = self.drop2(z, training=train)
        z = self.dense2(z)
        z = self.softmax(z)
        return z
```

In order to check whether the model works fine, I pass a random input to the model this way:

```python
random_input = np.random.rand(32, 32, 3).astype(np.float32)
random_input = np.expand_dims(random_input, axis=0)
pred = model(random_input, train=False)
```

But this throws the following error:

```
InvalidArgumentError: input depth must be evenly divisible by filter depth: 32 vs 3 [Op:Conv2D]
```

The same model works fine with the Sequential/functional API, so either I am missing out on something or there is something wrong with the subclassing. Can you please look into it?
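A plain-Python sketch of a likely cause (my reading of the snippet above, not a confirmed diagnosis; `LazyConv2D` is a toy stand-in, not the Keras class): `self.conv1` is applied twice in `call`, and a lazily-built layer fixes its kernel's input depth on the first call. The second application feeds the 32-channel output back into a kernel built for 3 input channels, which matches the "32 vs 3" in the error message.

```python
import numpy as np

class LazyConv2D:
    """Toy stand-in for a lazily-built conv layer (not tf.keras.layers.Conv2D)."""
    def __init__(self, filters):
        self.filters = filters
        self.kernel_in_channels = None  # decided on the first call, like Keras build()

    def __call__(self, x):
        in_channels = x.shape[-1]
        if self.kernel_in_channels is None:
            self.kernel_in_channels = in_channels  # kernel built for this depth
        if self.kernel_in_channels != in_channels:
            raise ValueError(
                "input depth must be evenly divisible by filter depth: "
                "%d vs %d" % (in_channels, self.kernel_in_channels))
        return np.zeros(x.shape[:-1] + (self.filters,))

conv1 = LazyConv2D(32)
x = np.zeros((1, 32, 32, 3))
y = conv1(x)                      # first call: kernel built for depth 3
try:
    conv1(y)                      # reuse on the depth-32 output
    reuse_failed = False
except ValueError:
    reuse_failed = True           # same mismatch as the reported error
print(reuse_failed)  # True
```

If each conv application is meant to have its own weights, the usual fix is one `Conv2D` instance per application in `__init__`.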
tensorflowtensorflow
batch_util::CopySliceToElement does not support uint32/uint64 dtypes
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): master
- Python version: 3.6
- Bazel version (if compiling from source): 0.25.1
- GCC/compiler version (if compiling from source): 5.4.0
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior:
When batch_util::CopySliceToElement gets a dtype of uint32 or uint64, it fails with the error:

```
Unimplemented: CopySliceToElement Unhandled data type: 22
```

(22 is uint32.) This is an issue for uses of TensorSliceDatasetOp, which can call this function and should also support these dtypes.

Describe the expected behavior:
This function should handle these primitive types.

Code to reproduce the issue:
Unit tests in tensor_slice_dataset_op_test.cc fail when these dtypes are added.

Other info / logs: N/A
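A hypothetical sketch of the failure mode (function and table names assumed; the real code is C++ in batch_util): CopySliceToElement effectively dispatches on dtype, and a dispatch with no case for uint32/uint64 surfaces as "Unhandled data type: 22", where 22 is the DataType enum value for uint32.

```python
# Assumed stand-in for a dtype dispatch table; enum values match
# TensorFlow's types.proto (DT_UINT32 = 22, DT_UINT64 = 23).
DTYPE_CODES = {"float32": 1, "int32": 3, "uint8": 4, "int64": 9,
               "uint32": 22, "uint64": 23}
HANDLED = {"float32", "int32", "uint8", "int64"}  # uint32/uint64 missing

def copy_slice_to_element(dtype):
    """Toy model of the dispatch: unhandled dtypes raise, handled ones copy."""
    if dtype not in HANDLED:
        raise NotImplementedError(
            "CopySliceToElement Unhandled data type: %d" % DTYPE_CODES[dtype])
    return "copied"

ok = copy_slice_to_element("int64")
try:
    copy_slice_to_element("uint32")
    error_message = None
except NotImplementedError as e:
    error_message = str(e)
print(ok, error_message)  # copied CopySliceToElement Unhandled data type: 22
```

The fix described in the issue amounts to adding the missing cases to the real switch.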
tensorflowtensorflow
TensorFlow 1.14 changed Keras callback order relative to model build
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): RHEL 7.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): r1.14 branch, built from source
- TensorFlow version (use command below): r1.14
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior:
TensorFlow 1.14 changed the relative call order of building the model and the set_model callback in the tf.keras fit_generator path. As of commit cd701ec in the r1.14 branch, keras/engine/training.py's _make_train_function adds the optimizer update ops (aka the backward-prop phase) and fills out the model's graph. Callbacks that need full graph visibility need their set_model function called after the full graph is created. The order of operations is as follows:

- tf.keras fit: (1) _make_train_function, (2) set_model
- tf.keras fit_generator: (1) set_model, (2) _make_train_function
- keras-team/keras fit: (1) _make_train_function, (2) set_model
- keras-team/keras fit_generator: (1) _make_train_function, (2) set_model

The tf.keras fit_generator path stands out as being different from the rest. A commit fixed this for the non-eager-mode case and added a test case to help prevent future regressions; a later commit (diff 6561418ac6882a842d78dad52731895b) regressed this order and also removed the test case that was intended to catch regressions.

Another side effect of the current code is that _make_train_function is called from model.train_on_batch, so in the fit_generator path it is now being called on every iteration. While the majority of _make_train_function is guarded by a check that won't do actual work, there is something in that method that will run and waste cycles on every iteration.

Describe the expected behavior:
The Keras callback set_model method should be called after the whole model is populated.
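The ordering problem above can be sketched with toy classes (plain-Python stand-ins, not the Keras internals): a callback that inspects the graph in set_model only sees the optimizer/backward ops if _make_train_function has already run.

```python
class FakeModel:
    """Toy stand-in for a Keras model whose graph grows when built."""
    def __init__(self):
        self.graph_ops = ["forward"]

    def make_train_function(self):
        # stand-in for _make_train_function adding optimizer update ops
        self.graph_ops.append("backward")

class GraphInspectingCallback:
    """Toy callback that snapshots the graph when set_model is called."""
    def set_model(self, model):
        self.seen_ops = list(model.graph_ops)

# fit_generator order in TF 1.14: set_model first -> backward ops invisible
m1, cb1 = FakeModel(), GraphInspectingCallback()
cb1.set_model(m1)
m1.make_train_function()
print(cb1.seen_ops)  # ['forward']

# fit order (and keras-team order): build first -> full graph visible
m2, cb2 = FakeModel(), GraphInspectingCallback()
m2.make_train_function()
cb2.set_model(m2)
print(cb2.seen_ops)  # ['forward', 'backward']
```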
tensorflowtensorflow
Statefulness in text generation using an RNN with eager execution
Bug
URL(s) with the issue: (the text-generation RNN tutorial)

Description of issue (what needs changing):

Clear description: two questions regarding the statefulness described in the tutorial.

The input data is originally a long text. It gets cut into sequences of length 100, shuffled, and packed into batches of size 64. However, according to the documentation for statefulness:

> You can set RNN layers to be "stateful", which means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. This assumes a one-to-one mapping between samples in different successive batches.

After preprocessing the data as discussed above, there is no such one-to-one mapping, is there? I.e., the samples in different batches are independent instead of related. This is confirmed by running the following code after the code for creating training batches, which prints out the successive samples with index 0 in the first two batches:

```python
for input_example, target_example in dataset.take(2):
    print('Input data:', repr(''.join(idx2char[input_example[0].numpy()])))
```

Output:

```
Input data: "n that perish vessel the dowry of his\nsister but mark how heavily this befell to the\npoor gentlew"
Input data: "y enrich d\nwith politic grave counsel then the king\nhad virtuous uncle to protect his grace\n\nfir"
```

In addition, if stateful=True, it is suggested to set shuffle=False in model.fit ("specify shuffle=False when calling fit"), which is also missing in the tutorial. The problem above is really confusing, and any discussion is welcome.
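The mapping question can be made concrete with a plain-Python sketch (toy integers standing in for character indices; sequence length and batch size shrunk from 100/64 for readability): with consecutive (or shuffled) fixed-length chunks, sample i of batch b+1 does not continue sample i of batch b, so per-sample state carry-over is meaningless; a stateful-friendly layout needs the text split into batch_size parallel streams.

```python
text = list(range(1000))          # stand-in for a long character sequence
seq_len, batch_size = 10, 4
seqs = [text[i:i + seq_len] for i in range(0, len(text) - seq_len + 1, seq_len)]

# Consecutive batching, as after dataset.batch(batch_size): sample 0 of
# batch 1 is seqs[4], which does not continue sample 0 of batch 0 (seqs[0]).
batch0, batch1 = seqs[0:batch_size], seqs[batch_size:2 * batch_size]
consecutive_continues = (batch1[0][0] == batch0[0][-1] + 1)

# Stateful-friendly layout: split the sequences into batch_size parallel
# streams, so position i of every batch continues position i of the previous.
n_batches = len(seqs) // batch_size
streams = [seqs[i * n_batches:(i + 1) * n_batches] for i in range(batch_size)]
stateful_batch0 = [streams[i][0] for i in range(batch_size)]
stateful_batch1 = [streams[i][1] for i in range(batch_size)]
stateful_continues = (stateful_batch1[0][0] == stateful_batch0[0][-1] + 1)

print(consecutive_continues, stateful_continues)  # False True
```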
tensorflowtensorflow
Name of losses.Reduction.SUM_BY_NONZERO_WEIGHTS is misleading
Bug
Describe the current behavior:
losses.Reduction.SUM_BY_NONZERO_WEIGHTS calculates the mean, not the sum. The behavior is in sync with the docs.

Describe the expected behavior:
I would expect losses.Reduction.SUM_BY_NONZERO_WEIGHTS to calculate the sum, not the mean.
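A small numeric sketch (plain NumPy, not the TensorFlow implementation) of what the docs say this reduction computes: the weighted sum divided by the number of non-zero weights, i.e. a mean over the contributing elements, despite the "SUM" in the name.

```python
import numpy as np

losses = np.array([2.0, 4.0, 6.0, 8.0])
weights = np.array([1.0, 1.0, 0.0, 1.0])   # one element excluded

weighted_sum = float(np.sum(losses * weights))        # 14.0 (a true sum)
num_nonzero = int(np.count_nonzero(weights))          # 3
sum_by_nonzero_weights = weighted_sum / num_nonzero   # 14/3, i.e. a mean

print(weighted_sum, sum_by_nonzero_weights)
```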
tensorflowtensorflow
TensorFlow docs: translate variables.md from English to Chinese
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: please provide a link to the documentation entry.

Description of issue (what needs changing):
- Clear description: for example, why should someone use this method? How is it useful?
- Correct links: is the link to the source code correct?
- Parameters defined: are all parameters defined and formatted correctly?
- Returns defined: are return values defined?
- Raises listed and defined: are the errors defined?
- Usage example: is there a usage example?
- Request visuals, if applicable: are there currently visuals? If not, will they clarify the content?
- Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflowtensorflow
Can't set an initial state for the Bidirectional LSTM layer of tf.keras 2.0 under eager execution mode
Bug
System information:
- Have I written custom code: No
- OS platform and distribution: macOS 10.14.4
- TensorFlow installed from: pip install tensorflow==2.0.0-alpha0
- TensorFlow version: v1.12.0-9492-g2c319fb415 2.0.0-alpha0
- Python version: 3.7.3

Describe the current behavior:
Getting an error when setting an initial state for the Bidirectional LSTM layer of tf.keras 2.0 under eager execution mode.

Describe the expected behavior:
The code should work with eager execution mode.

Code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

hidden_units = 64
time_steps = 50
vocab_size = 100
embed_size = 64
batch_size = 10

embed = tf.keras.layers.Embedding(vocab_size, embed_size,
                                  input_length=time_steps, trainable=False)
lstm = tf.keras.layers.LSTM(hidden_units, return_sequences=False,
                            return_state=True)
bilstm_1 = tf.keras.layers.Bidirectional(lstm, name='bilstm1')
bilstm_2 = tf.keras.layers.Bidirectional(lstm, name='bilstm2')
concat = tf.keras.layers.Concatenate()
dense = tf.keras.layers.Dense(vocab_size, activation='softmax')

def mod(input_1, input_2):
    input_1 = embed(input_1)
    input_2 = embed(input_2)
    output_1, forward_h, forward_c, backward_h, backward_c = bilstm_1(input_1)
    output_2 = bilstm_2(input_2,
                        initial_state=[forward_h, forward_c,
                                       backward_h, backward_c])
    output = dense(concat([output_1, output_2]))
    return output

input_1 = np.random.randint(0, 99, size=(batch_size, time_steps))
input_2 = np.random.randint(0, 99, size=(batch_size, time_steps))
target = np.random.randint(2, size=batch_size)

# test the model
mod(input_1, input_2)
```

Other info / logs: the error details:

```
InvalidArgumentError: Incompatible shapes: [2,256] vs. [50,256] [Op:Add] name: bilstm2/add/
```
tensorflowtensorflow
TF 2.0: custom dynamic RNN with AutoGraph and TensorFlow Datasets fails to run
Bug
When I followed the Effective TensorFlow 2 instructions to try a custom dynamic loop optimized with AutoGraph, I encountered the following error. It seems the sequence length used in the dynamic RNN becomes None. Everything is fine if I disable tf.function and replace DynamicRNN with DynamicRNNV2, which uses eager execution.

```python
# model.py
import tensorflow as tf
from tensorflow.keras import layers


class Model(tf.keras.Model):
    def __init__(self, vocab_size, emb_size, rnn_size):
        super(Model, self).__init__()
        self.embedding = layers.Embedding(vocab_size, emb_size)
        self.rnn = DynamicRNN(rnn_size)

    def call(self, x):
        x = self.embedding(x)
        x = self.rnn(x)
        return x


class DynamicRNN(layers.Layer):
    def __init__(self, rnn_size):
        super(DynamicRNN, self).__init__()
        self.rnn_size = rnn_size
        self.cell = layers.GRU(rnn_size, return_state=True)

    @tf.function
    def call(self, input_data):
        outputs = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
        state = tf.zeros([input_data.shape[0], self.rnn_size], dtype=tf.float32)
        for i in tf.range(input_data.shape[1]):
            print(input_data)
            output, state = self.cell(tf.expand_dims(input_data[:, i], 1), state)
            outputs = outputs.write(i, output)
        return tf.transpose(outputs.stack(), [1, 0, 2]), state


class DynamicRNNV2(layers.Layer):
    def __init__(self, rnn_size):
        super(DynamicRNNV2, self).__init__()
        self.rnn_size = rnn_size
        self.cell = layers.GRU(rnn_size, return_state=True)

    def call(self, input_data):
        state = tf.zeros([input_data.shape[0], self.rnn_size], dtype=tf.float32)
        outputs = []
        for i in range(input_data.shape[1]):
            output, state = self.cell(tf.expand_dims(input_data[:, i], 1), state)
            outputs.append(tf.expand_dims(output, 1))
        return tf.concat(outputs, axis=1), state
```

Logs:

```
2019-05-16 08:15:39.248597: I tensorflow/stream_executor/platform/default/dso_loader.cc:43] Successfully opened dynamic library libcublas.so.10.0
I0516 08:15:39.560903 140231802230528 train.py:55] (256, 120, 16)
Tensor("input_data:0", shape=(256, 160, 16), dtype=float32)
I0516 08:15:40.161331 140231802230528 train.py:55] (256, 160, 16)
Tensor("input_data:0", shape=(256, None, 16), dtype=float32)
Traceback (most recent call last):
  File "train.py", line 59, in <module>
    app.run(main)
  File "/opt/conda/lib/python3.6/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/opt/conda/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "train.py", line 54, in main
    output = model(question)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 678, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/workspace/code/tf_workspace/dataset_test/model.py", line 36, in call
    x = self.rnn(x)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 678, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 424, in __call__
    return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1305, in __call__
    graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1625, in _maybe_define_function
    args, kwargs, override_flat_arg_shapes=relaxed_arg_shapes)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1527, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 713, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 329, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2126, in bound_method_wrapper
    return wrapped_fn(*args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 705, in wrapper
    ), args, kwargs)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 360, in converted_call
    result = converted_f(*effective_args, **kwargs)
  File "/tmp/tmppfaea7ty.py", line 20, in tf__call
    output, state = ag__.for_stmt(ag__.converted_call(range, None, ag__.ConversionOptions(recursive=True, force_conversion=False, optional_features=(), internal_convert_user_code=True), (input_data.shape[1],), None), None, loop_body, (output, state))
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 264, in converted_call
    return _call_unconverted(f, args, kwargs)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 173, in _call_unconverted
    return f(*args)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 1305, in range
    limit = ops.convert_to_tensor(limit, dtype=dtype, name="limit")
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1086, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1144, in convert_to_tensor_v2
    as_ref=False)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1223, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 305, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 246, in constant
    allow_broadcast=True)
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 284, in _constant_impl
    allow_broadcast=allow_broadcast))
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 455, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.
```

```python
# train.py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import sys

from absl import app
from absl import flags
from absl import logging

import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.python.ops import control_flow_util

from model import Model

sys.path.append(os.path.dirname(os.path.dirname(sys.path[0])))
control_flow_util.ENABLE_CONTROL_FLOW_V2 = True

FLAGS = flags.FLAGS
flags.DEFINE_integer('batch_size', 256, 'batch size')


def main(_):
    dataset_train, info = tfds.load(name='squad/bytes', split=tfds.Split.TRAIN,
                                    with_info=True, data_dir='tensorflow_datasets',
                                    batch_size=FLAGS.batch_size)
    vocab_size = info.features['question'].encoder.vocab_size
    model = Model(vocab_size=vocab_size, emb_size=16, rnn_size=16)
    for i in range(1):
        for features in dataset_train:
            question = features['question']
            output = model(question)
            logging.info(output.shape)


if __name__ == '__main__':
    app.run(main)
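The `shape=(256, None, 16)` in the log suggests the loop bound is read from a static dimension. A plain-Python sketch of the distinction (stand-ins, not TensorFlow internals): during tf.function tracing, a dataset-fed time dimension can be statically unknown, so `tf.range(input_data.shape[1])` receives None and `make_tensor_proto` raises "None values not supported"; the runtime `tf.shape(input_data)[1]` is the usual dynamic-shape replacement.

```python
def loop_with_static_shape(static_shape):
    # mimics: for i in tf.range(input_data.shape[1]) -- fails on unknown dims
    if static_shape[1] is None:
        raise ValueError("None values not supported.")
    return list(range(static_shape[1]))

def loop_with_dynamic_shape(runtime_shape):
    # mimics: for i in tf.range(tf.shape(input_data)[1]) -- evaluated at run time
    return list(range(runtime_shape[1]))

dynamic_result = loop_with_dynamic_shape((256, 3, 16))
try:
    loop_with_static_shape((256, None, 16))
    static_error = None
except ValueError as e:
    static_error = str(e)
print(dynamic_result, static_error)  # [0, 1, 2] None values not supported.
```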
tensorflowtensorflow
Voice activity detection implementation
Bug
I found that voice activity detection is implemented in the paper "Small-footprint keyword spotting using deep neural networks", but I did not find anything related to voice activity detection in the paper "Convolutional neural networks for small-footprint keyword spotting". My question is: is voice activity detection implemented in this code or not?
tensorflowtensorflow
Counting parameters from a Keras model doesn't work correctly with a list of layers
Bug
Below is an example where a Keras model is not able to correctly count the number of parameters when a custom Keras layer is inside a list that is defined in another custom layer:

```python
import tensorflow as tf


class TestLayer2(tf.keras.layers.Layer):
    def __init__(self, dim):
        super(TestLayer2, self).__init__()
        self.dense1 = tf.keras.layers.Dense(dim)

    def call(self, x):
        x = self.dense1(x)
        return x


class TestLayer1(tf.keras.layers.Layer):
    def __init__(self, dim):
        super(TestLayer1, self).__init__()
        self.list_of_dense = [TestLayer2(dim) for _ in range(2)]

    def call(self, x):
        for i in range(len(self.list_of_dense)):
            x = self.list_of_dense[i](x)
        return x


class TestModel(tf.keras.Model):
    def __init__(self, dim):
        super(TestModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(dim)
        self.layer = TestLayer1(dim * 2)

    def call(self, x):
        x = self.dense1(x)
        x = self.layer(x)
        return x


t_model = TestModel(512)
tmp = tf.random.normal((64, 512))
t_model(tmp)
t_model.summary()
```

This returns:

```
Layer (type)                 Output Shape              Param #
dense_4728 (Dense)           multiple                  262656
test_layer1_13 (TestLayer1)  multiple                  0

Total params: 262,656
Trainable params: 262,656
Non-trainable params: 0
```

which is not correct. If the nested layers are outside a list, then they are counted correctly:

```python
import tensorflow as tf


class TestLayer2(tf.keras.layers.Layer):
    def __init__(self, dim):
        super(TestLayer2, self).__init__()
        self.dense1 = tf.keras.layers.Dense(dim)

    def call(self, x):
        x = self.dense1(x)
        return x


class TestLayer1(tf.keras.layers.Layer):
    def __init__(self, dim):
        super(TestLayer1, self).__init__()
        self.dense1 = TestLayer2(dim)
        self.dense2 = TestLayer2(dim)

    def call(self, x):
        x = self.dense1(x)
        x = self.dense2(x)
        return x


class TestModel(tf.keras.Model):
    def __init__(self, dim):
        super(TestModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(dim)
        self.layer = TestLayer1(dim * 2)

    def call(self, x):
        x = self.dense1(x)
        x = self.layer(x)
        return x


t_model = TestModel(512)
tmp = tf.random.normal((64, 512))
t_model(tmp)
t_model.summary()
```

which gives:

```
Layer (type)                 Output Shape              Param #
dense_4731 (Dense)           multiple                  262656
test_layer1_14 (TestLayer1)  multiple                  1574912

Total params: 1,837,568
Trainable params: 1,837,568
Non-trainable params: 0
```

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: No
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13
- Python version: 3.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA 10.0, cuDNN 7.3.1
- GPU model and memory: NVIDIA Titan V
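A toy sketch of why layers hidden in a plain Python list can be missed (stand-in classes, not the Keras tracking machinery): a counter that only inspects attributes that are themselves layers skips the list, while one that also looks inside list attributes finds the nested parameters.

```python
class ToyLayer:
    """Stand-in for a layer with a known parameter count."""
    def __init__(self, n_params):
        self.n_params = n_params

class ToyContainer:
    """Stand-in for TestLayer1: one tracked layer plus layers in a list."""
    def __init__(self):
        self.direct = ToyLayer(100)
        self.in_list = [ToyLayer(10), ToyLayer(10)]  # like list_of_dense

def count_params_naive(obj):
    # only sees attributes that are layers themselves
    return sum(v.n_params for v in vars(obj).values()
               if isinstance(v, ToyLayer))

def count_params_recursive(obj):
    # also unwraps list attributes, finding the nested layers
    total = 0
    for v in vars(obj).values():
        if isinstance(v, ToyLayer):
            total += v.n_params
        elif isinstance(v, list):
            total += sum(x.n_params for x in v if isinstance(x, ToyLayer))
    return total

container = ToyContainer()
naive_count = count_params_naive(container)          # 100: listed layers missed
recursive_count = count_params_recursive(container)  # 120
print(naive_count, recursive_count)
```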