repository | issue title | labels | body |
|---|---|---|---|
tensorflow/tensorflow | tf api docs UX: tf.io.FixedLenFeature and tf.FixedLenFeature | Bug | Issue description: in the tf.data pipeline guide on tensorflow.org (called "Importing Data", under "Processing data with Dataset.map") there is an example, "Parsing tf.Example protocol buffer messages", showcasing how to parse tf.Example protocol buffer messages:

```python
def _parse_function(example_proto):
  features = {"image": tf.FixedLenFeature((), tf.string, default_value=""),
              "label": tf.FixedLenFeature((), tf.int64, default_value=0)}
  parsed_features = tf.parse_single_example(example_proto, features)
  return parsed_features["image"], parsed_features["label"]
```

The configuration class for parsing fixed-length input features used in the example, tf.FixedLenFeature, may not be easily identifiable in the API docs, since its description sits under the tf.io module under the alias tf.io.FixedLenFeature. UX: figuring out what tf.FixedLenFeature does requires using the search bar on tensorflow.org vs. the tf Python API site. For your reference: 1.13 core docs, r1.14 docs, r2.0 beta docs. Feature request: change the class in the guide to tf.io.FixedLenFeature, or reference it with a link to tf.io, for better UX.
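The row above revolves around a tf.Example feature spec. As a rough, stdlib-only illustration of the idea behind a FixedLenFeature-style spec (a dtype plus a default value used when a field is missing), here is a sketch; the class and function names deliberately mirror TensorFlow's, but the code is an illustrative assumption, not the real API:

```python
# Stdlib-only sketch of a FixedLenFeature-style feature spec: each named
# feature carries a dtype and a default used when the record lacks the field.
from dataclasses import dataclass
from typing import Any

@dataclass
class FixedLenFeature:
    dtype: type
    default_value: Any

def parse_single_example(record: dict, features: dict) -> dict:
    """Parse one record against a feature spec, filling in defaults."""
    parsed = {}
    for name, spec in features.items():
        value = record.get(name, spec.default_value)
        parsed[name] = spec.dtype(value)
    return parsed

features = {"image": FixedLenFeature(str, ""),
            "label": FixedLenFeature(int, 0)}
print(parse_single_example({"image": "img_bytes"}, features))
# → {'image': 'img_bytes', 'label': 0}
```

The "label" field is absent from the record, so the spec's default (0) is used, which is the behavior the guide's `default_value=0` argument provides.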
tensorflow/tensorflow | tf 2.0 api docs: tf.keras.constraints.MinMaxNorm | Bug | Existing URL containing the issue. Description of the issue (what needs changing): Correct links: yes. Clear description: no, the description does not give recommendations on when and when not to use this symbol. Usage example: no usage example. Params defined: parameters are well defined. Returns defined: returns are not defined. Raises listed and defined: errors are not defined. Visuals, if applicable: no visuals are included.
tensorflow/tensorflow | tf 2.0 api docs: tf.keras.metrics.BinaryAccuracy | Bug | TensorFlow version: 2.0. Existing URL containing the issue. Description of the issue (what needs changing): Correct links: yes. Clear description: no, the description does not give recommendations on when and when not to use this symbol. Usage example: yes. Params defined: parameters are poorly defined and not formatted appropriately. Returns defined: returns are not defined. Raises listed and defined: errors are not defined. Visuals, if applicable: no visuals are included.
tensorflow/tensorflow | cannot run a Process under a Thread when using tf.set_random_seed | Bug | System information: OS platform and distribution: Linux Ubuntu 16 and 18. TensorFlow version (use command below): 1.13.1. Python version: 3.6.8. No issue on TF 1.5.0, Python 3.6.8 and Ubuntu 16. Issue: when using tf.set_random_seed, I cannot run a Process under a Thread. Code:

```python
import tensorflow as tf
from threading import Thread
from multiprocessing import Process

def misc():
    print("misc")

def do():
    p = Process(target=misc)
    p.start()
    p.join()

def test():
    a = Thread(target=do)
    a.start()
    a.join()

if __name__ == "__main__":
    print("main start")
    test()
    print("first test done")
    tf.set_random_seed(0)
    test()
    print("second test done")
```

This script outputs `main start`, `misc`, `first test done`, and then the process blocks. Expected output: `main start`, `misc`, `first test done`, `misc`, `second test done`.
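For contrast with the report above, here is a TensorFlow-free sketch of the same pattern (a multiprocessing.Process launched from inside a threading.Thread), which completes normally on CPython; all function names here are illustrative, and a Queue stands in for the child's printed output:

```python
# Stdlib-only version of the report's pattern: a Process started and joined
# from inside a Thread. Without TF in the picture, both runs complete.
from multiprocessing import Process, Queue
from threading import Thread

def worker(q):
    q.put("misc")  # stand-in for the real work (the report just prints)

def run_process(q):
    p = Process(target=worker, args=(q,))
    p.start()
    p.join()

def run_in_thread(q):
    t = Thread(target=run_process, args=(q,))
    t.start()
    t.join()

if __name__ == "__main__":
    q = Queue()
    run_in_thread(q)   # "first test"
    run_in_thread(q)   # "second test" — does not block here
    print([q.get(timeout=10) for _ in range(2)])
```

This suggests the hang in the report is specific to state that tf.set_random_seed leaves behind in the parent (e.g. locks held across fork), not to the thread-plus-process pattern itself.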
tensorflow/tensorflow | tf.layers.dense deprecation update hint | Bug | Reference: the deprecation hint in the description. Description of issue (what needs changing): I wasn't able to find the referenced `tf.keras.layers.dense`, only `tf.keras.layers.Dense`. Could this be a simple typo, with the latter meant to be pointed to? Submit a pull request: if this is a simple issue, and not missing functionality or a fault of mine in not finding the referenced method, then yes, I would PR it.
tensorflow/tensorflow | cannot find the Placeholder op that is an input to the ReadVariableOp in TF Lite conversion | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: —. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.0 beta. Python version: 3.6.7. Bazel version (if compiling from source): —. GCC/compiler version (if compiling from source): —. CUDA/cuDNN version: CUDA 10. GPU model and memory: GeForce GTX 1060.

Describe the current behavior: I have a pretty complicated model with three different inputs, and I can save and load it, with custom objects, as a Keras model using the model save and load methods. I want to convert it to TF Lite, and I get this error: `ValueError: Cannot find the Placeholder op that is an input to the ReadVariableOp.`

Code to reproduce the issue:

```python
loss_function = get_loss_function()
model = tf.keras.models.load_model(
    save_checkpoint_address,
    custom_objects={"CustomLayer": CustomLayer, "loss_function": loss_function})
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
```

Other info / logs:

```
2019-06-11 19:55:42.378746: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2019-06-11 19:55:42.378868: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
2019-06-11 19:55:42.434841: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
2019-06-11 19:55:42.434867: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: Graph size after: 2213 nodes (393), 3499 edges (663), time = 18.406ms.
2019-06-11 19:55:42.434874: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.785ms.
Traceback (most recent call last):
  File "/home/siavash/programming/ximpa/carim-tensor/convert_to_tflite.py", line 32, in <module>
    convert_save_model()
  File "/home/siavash/programming/ximpa/carim-tensor/convert_to_tflite.py", line 27, in convert_save_model
    tflite_model = converter.convert()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py", line 348, in convert
    self._funcs[0])
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/convert_to_constants.py", line 166, in convert_variables_to_constants_v2
    raise ValueError("Cannot find the Placeholder op that is an input")
ValueError: Cannot find the Placeholder op that is an input to the ReadVariableOp.
```
tensorflow/tensorflow | the documentation has a cat instead of a bridge | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue (fetching the image): please provide a link to the documentation entry, for example. Description of issue (what needs changing): there are two cat images, but the second cat image should be replaced by the bridge image, as the image refers to a bridge. Clear description: the second cat image should be changed to the bridge, as the URL points to a bridge but the cat image has been published. Correct links: is the link to the source code correct? Params defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined? For example: raises. Usage example: is there a usage example? Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request: are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflow/tensorflow | bug in exception handling of tf.histogram_fixed_width_bins | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template)

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): —. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: —. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.0.0-beta0. Python version: python3 (from Colab). Bazel version (if compiling from source): —. GCC/compiler version (if compiling from source): —. CUDA/cuDNN version: —. GPU model and memory: —. You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with: (1) TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; (2) TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`.

Describe the current behavior: when value_range for tf.histogram_fixed_width_bins is [0.0, 0.0], it outputs an index outside nbins. See the code below:

```python
nbins = 5
value_range = [0.0, 0.0]
new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]
indices = tf.histogram_fixed_width_bins(new_values, value_range, nbins=5)
print(indices)
```

The output is `tf.Tensor([0 -2147483648 4 4 4 4], shape=(6,), dtype=int32)`.

Describe the expected behavior: it should show the indices as [0 0 4 4 4 4] and throw a warning saying that the range needs to be updated. Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem): see above. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached.
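To make the expected behavior concrete, here is a minimal pure-Python sketch of fixed-width binning with an explicit guard for a degenerate value_range; the function name and the choice to raise ValueError are illustrative assumptions, not TensorFlow's actual API or behavior:

```python
# Pure-Python sketch of fixed-width binning. Out-of-range values are clamped
# into the edge bins [0, nbins - 1]; a degenerate range is rejected up front
# instead of silently overflowing the bin index.
def fixed_width_bins(values, value_range, nbins):
    lo, hi = value_range
    if hi <= lo:
        raise ValueError(f"value_range must satisfy lo < hi, got {value_range}")
    width = (hi - lo) / nbins
    return [min(max(int((v - lo) / width), 0), nbins - 1) for v in values]

print(fixed_width_bins([-1.0, 0.0, 1.5, 2.0, 5.0, 15.0], [0.0, 5.0], 5))
# → [0, 0, 1, 2, 4, 4]
```

With a non-degenerate range every value lands in a valid bin; with `value_range=[0.0, 0.0]` the guard fires rather than producing an index like -2147483648.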
tensorflow/tensorflow | tf 2.0 api docs: tf.lite.OpHint.OpHintArgumentTracker | Bug | URL(s) with the issue. Description of issue (what needs changing): the link to the TF 2.0 API documentation is broken; it does not link to the documentation of the API, so there is no TF 2.0 documentation for this function.
tensorflow/tensorflow | tf 2.0 api docs: tf.lite.OpHint | Bug | URL(s) with the issue. Description of issue (what needs changing): the link to the TF 2.0 API documentation is broken; it does not link to the documentation of the API, so there is no TF 2.0 documentation for this function.
tensorflow/tensorflow | tf 2.0 api docs: tf.data.experimental.Counter | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example. Description of issue (what needs changing): Clear description: the description is not clear and not provided as in the docs. Raises listed and defined: errors are not defined at all. Request visuals, if applicable: no visuals are included. Submit a pull request: yes.
tensorflow/tensorflow | tf 2.0 api docs: tf.io.write_file | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example. Description of issue (what needs changing): Clear description: the description does not recommend, at all or in any way, when and when not to use this symbol. Correct links: yes. Params defined: yes. Returns defined: yes. Raises listed and defined: the returned object is not well defined, and therefore not correct, complete, and appropriately formatted. Usage example: yes. Request visuals, if applicable: not a single visual is included in the symbol's description; in some instances one would definitely clarify the content presented. Submit a pull request: are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflow/tensorflow | tf 2.0 api docs: tf.data.experimental | Bug | Existing URL containing the issue. Description of the issue: Correct links: all links are correct and well defined. Clear description: the description is not clear about when to use this symbol. Usage example: no usage example is provided. Params defined: parameters are poorly defined and not formatted appropriately. Returns defined: returns are not defined. Raises listed and defined: errors are not defined. Visuals, if applicable: no visuals are included.
tensorflow/tensorflow | tf 2.0 api docs: tf.io.FixedLenSequenceFeature | Bug | URL(s) with the issue. Description of issue (what needs changing): Clear description: the description doesn't give recommendations on when to use, and when not to use, the symbol. Params defined: no parameters defined. Returns defined: no returns defined. Raises listed and defined (are the errors defined? for example): no errors defined. Usage example: no usage example. Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request: yes.
tensorflow/tensorflow | tf 2.0 api docs: tf.io.read_file | Bug | URL(s) with the issue. Description of issue (what needs changing): Clear description: the description does not give recommendations on when and when not to use this symbol. Correct links: the link to the source code file python/ops/gen_io_ops.py is dormant. Raises listed and defined: errors are not defined. Usage example: no usage example provided. Submit a pull request: yes.
tensorflow/tensorflow | tf 2.0 api docs: tf.io.write_file | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example. Description of issue (what needs changing): Clear description: for example, why should someone use this method? How is it useful? Correct links: is the link to the source code correct? Params defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined? For example: raises. Usage example: is there a usage example? Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request: are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflow/tensorflow | tf 2.0 api docs: tf.io.FixedLenFeature | Bug | Existing URL with the issue. Description of issue (what needs changing): a clear description with params, returns, and a usage example. Clear description: correct. Correct links: the link to the Python script where the function is defined is active. Params defined: no parameters defined. Returns defined: no returns defined. Raises listed and defined: the raises are defined, as far as this link goes; errors are not defined. Usage example: a usage example is provided in this link.
tensorflow/tensorflow | SetShapeFn gets nullptr for a custom operator, leading to a crash | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template) System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 1.14.0rc1. Python version: 2.7. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): gcc 7.4.0. CUDA/cuDNN version: N/R. GPU model and memory: N/R. Describe the current behavior: SetShapeFn doesn't work when called for REGISTER_OP for the custom operator; whenever it is called by TF, it gets nullptr instead of a valid value for the InferenceContext argument. Describe the expected behavior: SetShapeFn should get a meaningful pointer. Code to reproduce the issue: it doesn't work with the repro attached; just call run.sh. Other info / logs: N/A.
tensorflow/tensorflow | iPhone 6 crash: EXC_BAD_INSTRUCTION in v1.14.0-rc1 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): compiled on macOS 10.14.5. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: iPhone 6, iOS 11.4. TensorFlow installed from (source or binary): source; the iOS TensorFlow lib was built with the script tensorflow/contrib/makefile/build_all_ios.sh. TensorFlow version (use command below): v1.14.0-rc1. GCC/compiler version (if compiling from source): Xcode 10.2.1, gcc version:

```
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.14.sdk/usr/include/c++/4.2.1
Apple LLVM version 10.0.1 (clang-1001.0.46.4)
Target: x86_64-apple-darwin18.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
```

Describe the current behavior: I have written an iOS app that uses TensorFlow from C++. The app crashes on Session::Run with `Thread 6: EXC_BAD_INSTRUCTION (code=1, subcode=0xd53be053)`. Xcode breaks and highlights an assembler line (329): `0x1014e7f30 <+1308>: mrs x19, CNTVCT_EL0`.

Describe the expected behavior: no crash. The same app built with TensorFlow v1.13.1 runs successfully.

Other info / logs: this appears to have been introduced with commit 9a486811ca07b5ca3f60f6fd96e69fc1df889818, which is not in 1.13.1 but is in 1.14.0-rc0. It introduced the struct KernelTimer, which calls profile_utils::CpuUtils::GetCurrentClockCycle (in cpu_utils.h, L58) on initialization. On aarch64 this calls `asm volatile("mrs %0, cntvct_el0" : "=r"(virtual_timer_value));`, which is the line Xcode breaks on. If I exit that function with an early return (diff: 5f82d595cba7425ac17d71e7615343d9), then the session runs without crashing. After consulting the ARMv8 reference guide, the instruction in question looks right to me, and it can be found in other code samples with a Google search. Unfortunately I don't have any other iOS devices to test on currently. I know little about assembler or ARM architecture and have reached the limit of my debugging ability here. Could this be a hardware bug in the iPhone 6? Any insight appreciated.
tensorflow/tensorflow | switching default models with tf.keras | Bug | URL(s) with the issue: I don't think it's documented, so no link available. Description of issue (what needs changing): there doesn't seem to be any documentation on how to change the default model for the tf.keras backend get_session. I need to do this to properly save my model, I think. Clear description: I have an RNN that I'm training in batches with TBTT using tf.keras. Since my model is stateful, I have to be explicit about the batch size when I create the model. However, at prediction time I want to be able to do predictions on arbitrary-length sequences, and to have a batch size of 1. So I create a new, nearly identical model and then copy the weights via:

```python
predict_model.set_weights(train_model.get_weights())
```

This all works as expected. Now I want to save my prediction model so I can do inference from C. If I follow the directions one finds on the internet (e.g. on this page), I need to do something like:

```python
session = tf.keras.backend.get_session()
graph = session.graph
with graph.as_default():
    # ... do things to serialize the GraphDef to protobuf ...
```

But if I do this, the session graph is the graph I used for training, not my prediction model. It seems tf.keras does some magic to switch sessions or default graphs (not sure which), and I can't find any documentation about what it does or when. I believe I could make one dummy prediction with my predict model to make it current, but that seems super hacky. Is there any way to get the graph that corresponds to a tf.keras model? If not, is there any way to make a model be the current graph without doing something super hacky like making a prediction on data I don't care about? I tried tracing through the model.predict code to see where the session is used, but didn't find it quickly, and figured this should be documented, so I decided to ask here.
tensorflow/tensorflow | on_epoch_end not triggered when workers=0 in fit_generator | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Mac OS X 10.14. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.0.0-beta0; the issue also happens on TF 1.12 and 1.13. Python version: Python 3.6.7. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Describe the current behavior: when running fit_generator on the main thread, i.e. with use_multiprocessing=False and workers=0, on_epoch_end doesn't get triggered.

Describe the expected behavior: having looked at the source code, I would expect on_epoch_end to be triggered at the end of each epoch.

Code to reproduce the issue: I created a simple generator by subclassing tf.keras.utils.Sequence:

```python
class DataGenerator(Sequence):
    def __init__(self, split):
        self.split = split

    def __len__(self):
        return 2

    def __getitem__(self, index):
        print(f"\nsplit: {self.split}, generator index: {index}", flush=True)
        y = np.random.uniform(low=0, high=1, size=(2000, 1))
        y = (y > 0.5).astype(np.int32)
        x = np.random.normal(loc=0, scale=1, size=(2000, 10))
        x = x.astype(np.float32)
        return x, y

    def on_epoch_end(self):
        print(f"on_epoch_end {self.split}", flush=True)
```

I then created a simple model:

```python
model = Sequential()
model.add(Dense(20, activation="relu", input_shape=(10,)))
model.add(Dense(1, activation="sigmoid"))
optimizer = Adam(lr=0.001)
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=["accuracy"])
```

I set up my training and validation generators and ran fit_generator:

```python
training_generator = DataGenerator(split="training")
validation_generator = DataGenerator(split="validation")
model.fit_generator(
    generator=training_generator,
    validation_data=validation_generator,
    use_multiprocessing=False,
    max_queue_size=10,
    epochs=4,
    shuffle=False,
    workers=0,
    verbose=0)
```

I get the following output:

```
split: training, generator index: 0
split: training, generator index: 1
split: validation, generator index: 0
split: validation, generator index: 1
split: training, generator index: 0
split: training, generator index: 1
split: validation, generator index: 0
split: validation, generator index: 1
split: training, generator index: 0
split: training, generator index: 1
split: validation, generator index: 0
split: validation, generator index: 1
split: training, generator index: 0
split: training, generator index: 1
split: validation, generator index: 0
split: validation, generator index: 1
```

If I change workers from 0 to 1, then on_epoch_end is triggered, i.e. the on_epoch_end print statements appear.
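As an illustration of the behavior the report above expects, here is a stdlib-only sketch of a driver loop that calls a Sequence-style on_epoch_end hook after every epoch, even on the single-threaded (workers=0-style) path; the class and method names mirror the Keras Sequence protocol, but the code is an illustrative assumption, not Keras internals:

```python
# Minimal Sequence-style generator plus a driver loop that consumes every
# batch on the main thread and fires on_epoch_end once per epoch.
class ToySequence:
    def __init__(self):
        self.epoch_ends = 0

    def __len__(self):
        return 2  # two batches per epoch, as in the report

    def __getitem__(self, index):
        return index  # stand-in for a real (x, y) batch

    def on_epoch_end(self):
        self.epoch_ends += 1

def fit(seq, epochs):
    for _ in range(epochs):
        for i in range(len(seq)):
            _ = seq[i]      # consume each batch, single-threaded
        seq.on_epoch_end()  # the hook the report says never fires

seq = ToySequence()
fit(seq, epochs=4)
print(seq.epoch_ends)  # → 4
```

With workers=0 semantics the expectation is exactly this: the hook fires once per epoch regardless of whether batches are fetched on the main thread or by worker threads.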
tensorflow/tensorflow | TFLite OpenGL ES delegate gives inaccurate results | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template) System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: internal Android 8.1 board. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): 1.13.1. Python version: 2.7. Bazel version (if compiling from source): 0.22.0. GCC/compiler version (if compiling from source): —. CUDA/cuDNN version: —. GPU model and memory: Mali-T864 GPU. You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with: (1) TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; (2) TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`.

Describe the current behavior: I wrote a simple demo using the TFLite OpenGL ES delegate to run the DeepLab model from the model zoo. I have tried your hosted model deeplabv3_mv2_257_gpu.tflite, and it works perfectly on my device, both on CPU and with the OpenGL ES delegate. However, when I try the DeepLab model with Xception65, TFLite performs differently on CPU and with the OpenGL ES delegate. My input layer is sub_7; my output layer is ResizeBilinear_3. Here are my results: test image (two humans); result from CPU, correct (hp_nonflatten_cpu); result from the OpenGL ES delegate (hp_flatten2_gpu). I believe this issue is related to the operations BATCH_TO_SPACE_ND / SPACE_TO_BATCH_ND, which OpenGL ES does not support, and thus falls back to CPU (another issue of mine describes this in more detail). Flattening the unsupported ops using graph transforms also fails: it gives me the same inaccurate results, or shows messages like:

```
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: TfLiteGpuDelegate Prepare: program is not properly linked: L0005 The number of compute uniform components (1261) is greater than the maximum number allowed (1024)
ERROR: Node number 199 (TfLiteGpuDelegate) failed to prepare.
ERROR: Node number 199 (TfLiteGpuDelegate) failed to prepare.
```

Describe the expected behavior: all models work as well as your hosted model. Code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow | cannot import tensorflow.summary since the 2019-06-08 nightly | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): gLinux. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v1.12.1-3679-g3040de1372, 2.0.0-dev20190608. Python version: 2.7 or 3.6. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Describe the current behavior: `import tensorflow.summary` raises ModuleNotFoundError:

```
$ python -c 'import tensorflow.summary'; echo $?
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorflow.summary'
```

Describe the expected behavior: importing that module should succeed:

```
$ python -c 'import tensorflow.summary'; echo $?
0
```

Code to reproduce the issue:

```
$ activate_virtualenv tf-nightly-2.0-preview-20190606-py2.7
$ python -c 'import tensorflow.summary'; echo $?
0
$ activate_virtualenv tf-nightly-2.0-preview-20190607-py2.7
$ python -c 'import tensorflow.summary'; echo $?
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/buildtools/current/sitecustomize/sitecustomize.py", line 152, in SetupPathsAndImport
    return real_import(name, globals, locals, fromlist, level)
  File ".../tf-nightly-2.0-preview-20190607-py2.7/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 93, in <module>
    from tensorflow.core import ...
AttributeError: 'module' object has no attribute 'compiler'
1
$ activate_virtualenv tf-nightly-2.0-preview-20190608-py2.7
$ python -c 'import tensorflow.summary'; echo $?
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/buildtools/current/sitecustomize/sitecustomize.py", line 152, in SetupPathsAndImport
    return real_import(name, globals, locals, fromlist, level)
ImportError: No module named summary
1
```

Other info / logs: cc @mihaimaruseac — is this related to the recent pip changes?
tensorflow/tensorflow | how do I consistently extract information such as accuracy from the distributed training example? | Bug | I am using your distributed training example. I want to know how to properly extract some properties, such as accuracy, in this custom-training-loop example. I have tried a couple of things, but the documentation regarding this is not too specific. Something I was initially looking at was passing in, or returning, a tf.keras.metrics object, taking the logits and labels within the step_fn, and getting predictions and then accuracy, but I was running into issues with tensor parsing. Should I be using tf.summary.scalar in some way, then? A concrete reply would help.
tensorflow/tensorflow | broken link | Bug | This link is broken.
tensorflow/tensorflow | inconsistency in input pipeline code block for the test dataset | Bug | URL(s) with the issue. Description of issue (what needs changing): the code block below appears under the heading "Input Pipeline". It appears to be an almost, but not quite, semantic copy of the code block above it for the train dataset, except that in this code block for the test dataset we are shuffling the train dataset again:

```python
test_dataset = tf.data.Dataset.list_files(PATH + 'test/*.jpg')
# shuffling so that for every epoch a different image is generated
# to predict and display the progress of our model.
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
test_dataset = test_dataset.map(load_image_test)
test_dataset = test_dataset.batch(1)
```

So I think that this code block should be:

```python
test_dataset = tf.data.Dataset.list_files(PATH + 'test/*.jpg')
# shuffling so that for every epoch a different image is generated
# to predict and display the progress of our model.
test_dataset = test_dataset.shuffle(BUFFER_SIZE)
test_dataset = test_dataset.map(load_image_test)
test_dataset = test_dataset.batch(1)
```

Also note that the code block for the train dataset has `train_dataset = train_dataset.map(load_image_train, num_parallel_calls=tf.data.experimental.AUTOTUNE)`, but the code block for the test dataset only has `test_dataset = test_dataset.map(load_image_test)`. AUTOTUNE is noted here but not explained, so I can't easily comment on what its use, or lack of it, actually means. Submit a pull request: I can, if this is actually incorrect.
tensorflow/tensorflow | The "Load CSV with tf.data" tutorial creates confusion about categorical data | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: [categorical data] Please provide a link to the documentation entry.

Description of issue (what needs changing): The new "Load CSV with tf.data" tutorial is very nice. The tutorial shows users how to load a CSV file into a tf.data.Dataset. However, there are a couple of issues in the tutorial. First, the tutorial shows a very inconsistent and un-scalable way to encode categorical data using regex expressions. A simpler way would be to use the already-developed feature column API, which is more consistent with the existing TensorFlow API. Second, the name of the tutorial is improper English: the tutorial is about loading tf.data with a CSV, not loading a CSV file with tf.data, so that should be fixed.

Clear description:

1. Overly complicated and unscalable explanation of how to encode categorical features. The tutorial takes the user through loading a CSV file into a tf.data.Dataset using the experimental `make_csv_dataset` function in TF 2.0.0-beta. That is all very well done, but the problem arises in the data preprocessing section. The section on categorical data says that data must be converted from text to a numerical encoding before passing the data to the model. However, the method described looks like this:

```python
CATEGORIES = {
    'sex': ['male', 'female'],
    'class': ['First', 'Second', 'Third'],
    'deck': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
    'embark_town': ['Cherbourg', 'Southhampton', 'Queenstown'],
    'alone': ['y', 'n'],
}
```

Then, using this dictionary, the user is asked to:

```python
def process_categorical_data(data, categories):
    """Returns a one-hot encoded tensor representing categorical values."""
    # Remove leading ' '
    data = tf.strings.regex_replace(data, '^ ', '')
    # Remove trailing '.'
    data = tf.strings.regex_replace(data, r'\.$', '')
    # ONE-HOT ENCODE:
    # Reshape data from 1d (a list) to a 2d (a list of one-element lists)
    data = tf.reshape(data, [-1, 1])
    # For each element, create a new list of boolean values the length of
    # categories, where the truth value is whether the element equals the
    # category label
    data = tf.equal(categories, data)
    # Cast booleans to floats
    data = tf.cast(data, tf.float32)
    # The entire encoding can fit on one line:
    # data = tf.cast(tf.equal(categories, tf.reshape(data, [-1, 1])), tf.float32)
    return data
```

Now, this approach will work, but there are a couple of really big problems. First, TensorFlow has a feature column API already developed to handle this type of conversion. If the user simply did something like creating a feature column with a vocabulary list and then wrapping that column in an indicator column, this same code could be resolved in 2 lines instead of 14. This change would also make the code easier to read and provide more insight into how to use the feature column API:

```python
col_sex = tf.feature_column.categorical_column_with_vocabulary_list(
    key='sex', vocabulary_list=['male', 'female'])
fc_sex = tf.feature_column.indicator_column(col_sex)
```

2. Correct the name of the tutorial. The name of the tutorial is "Load CSV with tf.data". This name is actually improper English and a bit confusing: the current name makes it sound like you are loading a tf.data.Dataset into a CSV, which is the opposite of the intent. The tutorial is about taking a CSV file and loading the data into a tf.data.Dataset, so the proper name of the tutorial should be "Load a CSV file into a tf.data.Dataset" or "Populate a tf.data.Dataset with CSV files". This change might help reduce confusion about what the tutorial is trying to demonstrate.

Correct links: Is the link to the source code correct? Yes. Parameters defined: Are all parameters defined and formatted correctly? Yes. Returns defined: Are return values defined? Yes. Raises listed and defined: Are the errors defined? For example, raised errors are not clearly defined. Usage example: Is there a usage example? There is a usage example, but the usage example is very poorly designed, hence my submitting this issue. Visuals (if applicable): Are there currently visuals? If not, will they clarify the content? The visuals are okay. Submit a pull request? I am happy to submit a pull request; I guess let me see the response to this issue. If the development team agrees, then I can work on a pull request to update the documentation. Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
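The vocabulary-based one-hot encoding that both the tutorial's regex approach and the suggested `indicator_column` ultimately produce can be illustrated with a minimal pure-Python sketch (this is not TensorFlow code; the `one_hot` helper and the sample row are illustrative only):

```python
def one_hot(value, vocabulary):
    """Return a one-hot list: 1.0 at the position of `value` in the
    vocabulary, 0.0 everywhere else (all zeros for out-of-vocabulary)."""
    return [1.0 if value == v else 0.0 for v in vocabulary]

# Each categorical column gets its own vocabulary, as in the tutorial's dict.
row = {'sex': 'male', 'alone': 'n'}
encoded = {key: one_hot(row[key], vocab)
           for key, vocab in [('sex', ['male', 'female']),
                              ('alone', ['y', 'n'])]}
```

An indicator column performs this same per-feature lookup internally, which is why two lines of feature-column code can replace the hand-rolled regex/reshape/equal/cast pipeline.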
tensorflow/tensorflow | model.fit does not reshuffle the dataset between epochs | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS 10.13.6. Mobile device (if the issue happens on a mobile device): N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): version 2.0.0-dev20190606, git version v1.12.1-3447-g5a0f1bbfb7. Python version: 3.6.8. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Describe the current behavior: when calling model.fit(dataset, epochs=2) with a finite, shuffled dataset, the model is trained on the same dataset order at each epoch.

Describe the expected behavior: I expected the dataset to be reshuffled after each epoch. Right now it is not, even when I use reshuffle_each_iteration=True in the dataset's shuffle() method. This argument seems to only shuffle between iterations within one epoch.

Code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow import keras
import numpy as np

X = np.arange(6).astype(np.float32).reshape(-1, 1)
y = X * 2
dataset = tf.data.Dataset.from_tensor_slices((X, y))
dataset = dataset.shuffle(100, reshuffle_each_iteration=True)
dataset = dataset.repeat(2)
dataset = dataset.batch(2)

@tf.function
def log_inputs(inputs):
    tf.print(inputs)
    return inputs

model = keras.models.Sequential([
    keras.layers.Lambda(log_inputs),
    keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer="sgd")
model.fit(dataset, epochs=2, verbose=0)
```

Other info / logs: the output is as follows (I've added the comments):

```
[[5] [2]]   # first epoch, first iteration, first batch
[[3] [1]]   # second batch
[[0] [4]]   # third batch
[[0] [3]]   # first epoch, second iteration, first batch
[[1] [5]]   # second batch
[[4] [2]]   # third batch
[[5] [2]]   # second epoch, first iteration
[[3] [1]]
[[0] [4]]
[[0] [3]]   # second epoch, second iteration
[[1] [5]]
[[4] [2]]
```

As you can see, the order of the data is perfectly identical during the 1st and 2nd epochs; it is only reshuffled at each iteration within the same epoch. So the only way to ensure that the data will be reshuffled at each epoch is to use dataset.repeat(n_epochs) and then model.fit(dataset, steps_per_epoch=..., epochs=n_epochs). It feels like unnecessary complexity.
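The ordering behaviour reported above can be mimicked with a small pure-Python sketch (standard library only; the function name and structure are illustrative, not TensorFlow internals): reshuffling each epoch should redraw the permutation, while the reported bug effectively replays the first one.

```python
import random

def epoch_orders(data, n_epochs, reshuffle_each_epoch, seed=0):
    """Return the order in which `data` is visited in each epoch.

    With reshuffle_each_epoch=True a fresh permutation is drawn per epoch
    (the expected behaviour); with False the first permutation is replayed
    every epoch (the behaviour the issue reports for model.fit).
    """
    rng = random.Random(seed)
    first = list(data)
    rng.shuffle(first)
    orders = [first]
    for _ in range(n_epochs - 1):
        if reshuffle_each_epoch:
            nxt = list(data)
            rng.shuffle(nxt)
        else:
            nxt = list(first)
        orders.append(nxt)
    return orders
```

The `repeat(n_epochs)` workaround in the report amounts to forcing the `reshuffle_each_epoch=True` branch by making Keras consume one long stream instead of restarting the (cached-order) dataset each epoch.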
tensorflow/tensorflow | Subclassed tf.keras.models.Model: model.save() saves "h5py" but not "h5"/"hdf5" file types | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: bug_template.

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Google Colab. TensorFlow installed from (source or binary): pip. TensorFlow version (use command below): 2.0.0-beta. Python version: 3.6.7. GPU model and memory: Tesla T4.

Describe the current behavior: using TF 2.0.0-beta GPU on Google Colab, while using the subclassing API for a subclassed layer and model, I was unable to use the model.save() function with "h5" or "hdf5" file types, but I could successfully save and load the model if it was saved with an "h5py" file extension. In the toy example being used it works correctly, although this may not always be the case. Note that the get_config() method is implemented in the custom layer and custom model.

Describe the expected behavior: either the saved model should always work (I believe this is a feature goal) and the documentation should reflect this, or, if the save is likely to produce incorrect results, it should raise an error and the documentation should continue to suggest that custom models can only be saved with the save_weights feature.

Code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow import keras

class ResBlock(keras.layers.Layer):
    def __init__(self, n_layers, n_neurons, **kwargs):
        super().__init__(**kwargs)
        self.hidden = [keras.layers.Dense(n_neurons, activation='elu',
                                          kernel_initializer='he_normal')
                       for _ in range(n_layers)]

    def call(self, inputs):
        z = inputs
        for layer in self.hidden:
            z = layer(z)
        return inputs + z

    def get_config(self):
        base_config = super().get_config()
        return base_config

class ResMod(keras.models.Model):
    def __init__(self, output_dim, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.f1 = keras.layers.Flatten()
        self.hidden1 = keras.layers.Dense(100, activation='elu',
                                          kernel_initializer='he_normal')
        self.b1 = ResBlock(2, 100)
        self.b2 = ResBlock(2, 100)
        self.output1 = keras.layers.Dense(
            output_dim, activation=keras.activations.get(activation))

    def call(self, inputs):
        z = self.f1(inputs)
        z = self.hidden1(z)
        for _ in range(4):
            z = self.b1(z)
        z = self.b2(z)
        return self.output1(z)

    def get_config(self):
        base_config = super().get_config()
        return {**base_config, "output_dim": output_dim,
                "activation": activation}

model = ResMod(10, activation='softmax')
model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam',
              metrics=['accuracy'])
model.fit(train, epochs=25, validation_data=test)

# This is able to save and works correctly, returning the trained model:
model.save('custom_model.h5py')
del model
model = keras.models.load_model('custom_model.h5py',
                                custom_objects={'ResBlock': ResBlock})
```

Other info / logs: this will raise an error that only Sequential or Functional models can be saved:

```python
model.save('custom_model.h5')
```
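The `get_config` round-trip that Keras relies on when reloading custom objects can be sketched in plain Python (a toy stand-in with illustrative names, not the actual Keras machinery):

```python
import json

class ToyResBlock:
    """Toy stand-in for a custom layer implementing get_config/from_config."""
    def __init__(self, n_layers, n_neurons):
        self.n_layers = n_layers
        self.n_neurons = n_neurons

    def get_config(self):
        # Everything needed to rebuild the object must be in this dict.
        return {"n_layers": self.n_layers, "n_neurons": self.n_neurons}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

def save_and_reload(block):
    # Serialize the config (as a saved model file would store it) and rebuild.
    payload = json.dumps(block.get_config())
    return ToyResBlock.from_config(json.loads(payload))
```

Incidentally, the `ResBlock.get_config` in the report returns only the base config without `n_layers`/`n_neurons`, so a config-based reload would be incomplete even if `model.save` accepted the requested format.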
tensorflow/tensorflow | Docs should describe how to add a MetaGraphDef to an existing graph | Bug | URL(s) with the issue: Save and Restore models. Description of issue (what needs changing): clear description and usage example. I've already created several models (trained over several days each) that we're ready to move from local testing to a serving environment. The models were saved using this function:

```python
def save_graph_to_file(sess, graph, graph_file_name):
  """Saves a graph to file, creating a valid quantized one if necessary."""
  output_graph_def = graph_util.convert_variables_to_constants(
      sess, graph.as_graph_def(), [FLAGS.final_tensor_name])
  with gfile.FastGFile(graph_file_name, 'wb') as f:
    f.write(output_graph_def.SerializeToString())
```

(via the image retraining sample script, L853-L859). Now I'm ready to move this to a serving environment via SageMaker, but that just wraps TensorFlow Serving. The error is clear enough:

```
2019-06-04 22:38:53.794056: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2019-06-04 22:38:53.798096: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:259] SavedModel load for tags { serve }; Status: fail. Took 83297 microseconds.
2019-06-04 22:38:53.798132: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: model version: 1} failed: Not found: Could not find meta graph def matching supplied tags: { serve }. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
```

Loading up the graph by adapting the loader from the retrain script (L270-L293), I tried to just append the serve tag to the graph:

```python
def load_graph(model_file):
  # code from v1.6.0 of tensorflow's label_image.py example
  graph = tf.Graph()
  graph_def = tf.GraphDef()
  with open(model_file, "rb") as f:
    graph_def.ParseFromString(f.read())
  with graph.as_default():
    tf.import_graph_def(graph_def)
  return graph

# load the graph
graph = load_graph(modelpath)

import shutil
if os.path.exists(exportdir):
    shutil.rmtree(exportdir)

# add the serve MetaGraph tag
builder = tf.saved_model.builder.SavedModelBuilder(exportdir)
from tensorflow.saved_model import tag_constants

with tf.Session(graph=graph) as sess:
    builder.add_meta_graph_and_variables(
        sess, [tag_constants.SERVING, tag_constants.GPU],
        strip_default_attrs=True)
builder.save()
print('built a SavedModel')
```

...which doesn't work: I still get the same error. I'm not giving up on this, but given that the save code was written out in a TensorFlow example, this seems like an obvious use case to cover.

Submit a pull request? If I'm able to solve this in a timely way, I'll submit a PR for the docs. At the moment it's a few steps away from me being there, however.
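SavedModel loading matches the requested tag set exactly against each MetaGraphDef's tags, which is why a graph exported under a different tag set fails the `{ serve }` lookup. A minimal sketch of that matching logic (plain Python with illustrative names, not the actual loader code):

```python
def find_meta_graph(meta_graphs, requested_tags):
    """Return the first meta graph whose tag set equals `requested_tags`
    exactly; SavedModel lookup is an exact set match, not a subset match."""
    requested = set(requested_tags)
    for mg in meta_graphs:
        if set(mg["tags"]) == requested:
            return mg
    raise LookupError(
        "Could not find meta graph def matching supplied tags: %s"
        % sorted(requested))

# If a model were exported with two tags (e.g. serve *and* gpu), a plain
# {"serve"} lookup by TensorFlow Serving would still fail:
graphs = [{"tags": ["serve", "gpu"], "graph": "..."}]
```

This exact-match rule is also why `saved_model_cli show` is the recommended way to inspect which tag sets a SavedModel actually contains before serving it.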
tensorflow/tensorflow | Loss of shape information when using dilation_rate > 1 in Conv layers | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: bug_template.

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04. Mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.0.0-beta0. Python version: 3.6. CUDA/cuDNN version: 10.0 / 7.5. GPU model and memory: GTX 1080 Ti, 11 GB.

Describe the current behavior: I have been using the code below with 1.12, 1.13.1 and TF 2.0-alpha, but it fails to run in the latest TF 2.0-beta. As you can see, there is nothing fancy going on in the code. This is the block of code that produces the error:

```python
def ASPP(tensor):
    '''atrous spatial pyramid pooling'''
    dims = K.int_shape(tensor)

    y_pool = AveragePooling2D(pool_size=(dims[1], dims[2]),
                              name='average_pooling')(tensor)
    y_pool = Conv2D(filters=256, kernel_size=1, padding='same',
                    kernel_initializer='he_normal', name='pool_1x1conv2d',
                    use_bias=False)(y_pool)
    y_pool = BatchNormalization(name=f'bn_1')(y_pool)
    y_pool = Activation('relu', name=f'relu_1')(y_pool)
    y_pool = Upsample(tensor=y_pool, size=[dims[1], dims[2]])

    y_1 = Conv2D(filters=256, kernel_size=1, dilation_rate=1, padding='same',
                 kernel_initializer='he_normal', name='ASPP_conv2d_d1',
                 use_bias=False)(tensor)
    y_1 = BatchNormalization(name=f'bn_2')(y_1)
    y_1 = Activation('relu', name=f'relu_2')(y_1)

    y_6 = Conv2D(filters=256, kernel_size=3, dilation_rate=6, padding='same',
                 kernel_initializer='he_normal', name='ASPP_conv2d_d6',
                 use_bias=False)(tensor)
    y_6 = BatchNormalization(name=f'bn_3')(y_6)
    y_6 = Activation('relu', name=f'relu_3')(y_6)

    y_12 = Conv2D(filters=256, kernel_size=3, dilation_rate=12, padding='same',
                  kernel_initializer='he_normal', name='ASPP_conv2d_d12',
                  use_bias=False)(tensor)
    y_12 = BatchNormalization(name=f'bn_4')(y_12)
    y_12 = Activation('relu', name=f'relu_4')(y_12)

    y_18 = Conv2D(filters=256, kernel_size=3, dilation_rate=18, padding='same',
                  kernel_initializer='he_normal', name='ASPP_conv2d_d18',
                  use_bias=False)(tensor)
    y_18 = BatchNormalization(name=f'bn_5')(y_18)
    y_18 = Activation('relu', name=f'relu_5')(y_18)

    y = concatenate([y_pool, y_1, y_6, y_12, y_18], name='ASPP_concat')
    y = Conv2D(filters=256, kernel_size=1, dilation_rate=1, padding='same',
               kernel_initializer='he_normal', name='ASPP_conv2d_final',
               use_bias=False)(y)
    y = BatchNormalization(name=f'bn_final')(y)
    y = Activation('relu', name=f'relu_final')(y)
    return y
```

Strangely, the shapes of y_pool and y_1 are correctly inferred (complete code to reproduce the issue is available below); the failure happens at:

```python
y = concatenate([y_pool, y_1, y_6, y_12, y_18], name='ASPP_concat')
```

Describe the expected behavior: the code should work as it does in the earlier releases, i.e. TF 2.0-alpha.

Code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (AveragePooling2D, Lambda, Conv2D,
                                     Conv2DTranspose, Activation, Reshape,
                                     concatenate, Concatenate,
                                     BatchNormalization, ZeroPadding2D)
from tensorflow.keras.applications.resnet50 import ResNet50

def Upsample(tensor, size):
    '''bilinear upsampling'''
    name = tensor.name.split('/')[0] + '_upsample'

    def bilinear_upsample(x, size):
        resized = tf.image.resize(images=x, size=size)
        return resized

    y = Lambda(lambda x: bilinear_upsample(x, size),
               output_shape=size, name=name)(tensor)
    return y

def ASPP(tensor):
    '''atrous spatial pyramid pooling'''
    dims = K.int_shape(tensor)

    y_pool = AveragePooling2D(pool_size=(dims[1], dims[2]),
                              name='average_pooling')(tensor)
    y_pool = Conv2D(filters=256, kernel_size=1, padding='same',
                    kernel_initializer='he_normal', name='pool_1x1conv2d',
                    use_bias=False)(y_pool)
    y_pool = BatchNormalization(name=f'bn_1')(y_pool)
    y_pool = Activation('relu', name=f'relu_1')(y_pool)
    y_pool = Upsample(tensor=y_pool, size=[dims[1], dims[2]])

    y_1 = Conv2D(filters=256, kernel_size=1, dilation_rate=1, padding='same',
                 kernel_initializer='he_normal', name='ASPP_conv2d_d1',
                 use_bias=False)(tensor)
    y_1 = BatchNormalization(name=f'bn_2')(y_1)
    y_1 = Activation('relu', name=f'relu_2')(y_1)

    y_6 = Conv2D(filters=256, kernel_size=3, dilation_rate=6, padding='same',
                 kernel_initializer='he_normal', name='ASPP_conv2d_d6',
                 use_bias=False)(tensor)
    y_6 = BatchNormalization(name=f'bn_3')(y_6)
    y_6 = Activation('relu', name=f'relu_3')(y_6)

    y_12 = Conv2D(filters=256, kernel_size=3, dilation_rate=12, padding='same',
                  kernel_initializer='he_normal', name='ASPP_conv2d_d12',
                  use_bias=False)(tensor)
    y_12 = BatchNormalization(name=f'bn_4')(y_12)
    y_12 = Activation('relu', name=f'relu_4')(y_12)

    y_18 = Conv2D(filters=256, kernel_size=3, dilation_rate=18, padding='same',
                  kernel_initializer='he_normal', name='ASPP_conv2d_d18',
                  use_bias=False)(tensor)
    y_18 = BatchNormalization(name=f'bn_5')(y_18)
    y_18 = Activation('relu', name=f'relu_5')(y_18)

    y = concatenate([y_pool, y_1, y_6, y_12, y_18], name='ASPP_concat')
    y = Conv2D(filters=256, kernel_size=1, dilation_rate=1, padding='same',
               kernel_initializer='he_normal', name='ASPP_conv2d_final',
               use_bias=False)(y)
    y = BatchNormalization(name=f'bn_final')(y)
    y = Activation('relu', name=f'relu_final')(y)
    return y

def DeepLabV3Plus(img_height, img_width):
    base_model = ResNet50(input_shape=(img_height, img_width, 3),
                          weights='imagenet', include_top=False)
    image_features = base_model.get_layer('activation_39').output
    x_a = ASPP(image_features)
    x_a = Upsample(tensor=x_a, size=[img_height // 4, img_width // 4])

    x_b = base_model.get_layer('activation_9').output
    x_b = Conv2D(filters=48, kernel_size=1, padding='same',
                 kernel_initializer='he_normal', name='low_level_projection',
                 use_bias=False)(x_b)
    x_b = BatchNormalization(name=f'bn_low_level_projection')(x_b)
    x_b = Activation('relu', name='low_level_activation')(x_b)

    x = concatenate([x_a, x_b], name='decoder_concat')
    x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu',
               kernel_initializer='he_normal', name='decoder_conv2d_1',
               use_bias=False)(x)
    x = BatchNormalization(name=f'bn_decoder_1')(x)
    x = Activation('relu', name='activation_decoder_1')(x)
    x = Conv2D(filters=256, kernel_size=3, padding='same', activation='relu',
               kernel_initializer='he_normal', name='decoder_conv2d_2',
               use_bias=False)(x)
    x = BatchNormalization(name=f'bn_decoder_2')(x)
    x = Activation('relu', name='activation_decoder_2')(x)
    x = Upsample(x, [img_height, img_width])

    x = Conv2D(1, (1, 1), name='output_layer')(x)
    x = Activation('sigmoid')(x)

    model = Model(inputs=base_model.input, outputs=x, name='DeepLabV3_Plus')
    return model
```

Other info / logs:

```
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 DeepLabV3Plus(512, 512)

... 6 frames ...
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/merge.py in build(self, input_shape)
    389                              'inputs with matching shapes '
    390                              'except for the concat axis. '
    391                              'Got inputs shapes: %s' % (input_shape))
    392
    393   def _merge_function(self, inputs):

ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 32, 32, 256), (None, 32, 32, 256), (None, None, None, 256), (None, None, None, 256), (None, None, None, 256)]
```
tensorflow/tensorflow | model.trainable = False does nothing in tensorflow.keras | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 16.04. Mobile device: N/A. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): 1.13. Python version: 2.7. CUDA/cuDNN version: 10.1.105. GPU model and memory: M60, 8 GB.

It seems that setting model.trainable = False in tensorflow.keras does nothing except print a wrong model summary. Here is the code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

IMG_SHAPE = (160, 160, 3)

# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
base_model.trainable = False
for layer in base_model.layers:
    layer.trainable = False

bc = []  # before compile
ac = []  # after compile

for layer in base_model.layers:
    bc.append(layer.trainable)
print(np.all(bc))  # True
print(base_model.summary())
# this changed to show no trainable parameters, but that is wrong given the
# output of the previous np.all(bc)

base_model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.001),
                   loss='categorical_crossentropy',
                   metrics=['accuracy'])

for layer in base_model.layers:
    ac.append(layer.trainable)
print(np.all(ac))  # True
print(base_model.summary())
# this changed to show no trainable parameters, but that is wrong given the
# output of the previous np.all(ac)
```
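The discrepancy this report describes (summary counts vs. per-layer flags) comes down to which attribute is consulted when collecting trainable weights. A toy pure-Python sketch of that collection logic (illustrative names, not Keras internals):

```python
class ToyLayer:
    def __init__(self, name, n_weights, trainable=True):
        self.name = name
        self.trainable = trainable
        self.weights = ["%s/w%d" % (name, i) for i in range(n_weights)]

class ToyModel:
    def __init__(self, layers, trainable=True):
        self.layers = layers
        self.trainable = trainable

    @property
    def trainable_weights(self):
        # Honour both the model flag and each layer flag: if the model
        # itself is frozen, nothing is trainable regardless of layer flags.
        if not self.trainable:
            return []
        return [w for layer in self.layers
                if layer.trainable for w in layer.weights]

model = ToyModel([ToyLayer("base", 4), ToyLayer("head", 2)])
model.trainable = False
flags = [layer.trainable for layer in model.layers]  # still all True
```

Counting `len(model.trainable_weights)` before and after freezing is therefore a more reliable check than reading `layer.trainable`, since per-layer flags can be untouched by the model-level switch.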
tensorflow/tensorflow | model.trainable = False and layer.trainable = False for each layer give very different performance | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 16.04. Mobile device: N/A. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): 1.13. Python version: 2.7. CUDA/cuDNN version: 10.1.105. GPU model and memory: M60, 8 GB.

`for layer in base_model_seq.layers: layer.trainable = False` and `base_model_seq.trainable = False` give very different results. To my understanding, both are ways to achieve the same behavior.

Data pipeline (this remains exactly the same in both cases; provided for reference):

```python
import tensorflow as tf
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split

img_height, img_width = 224, 224
from random_eraser import get_random_eraser

image_dir = 'data/images'
traincsv = pd.read_csv('data/train.csv')
testcsv = pd.read_csv('data/test_ApKoW4.csv')
traincsv['category'] = traincsv['category'].astype(str)

train_batch_size = 32
val_batch_size = 32
test_batch_size = 32
seed = 43

df_train, df_val = train_test_split(traincsv, stratify=traincsv['category'],
                                    random_state=42, test_size=0.1,
                                    shuffle=True)

train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10, width_shift_range=0.3, height_shift_range=0.3,
    rescale=1. / 255, shear_range=0.0, zoom_range=0.3, horizontal_flip=True,
    fill_mode='nearest',
    preprocessing_function=get_random_eraser(pixel_level=True, s_h=0.4))
val_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_dataframe(
    df_train, directory=image_dir, x_col='image', y_col='category',
    class_mode='categorical', target_size=(img_height, img_width),
    batch_size=train_batch_size, shuffle=True)
validation_generator = val_datagen.flow_from_dataframe(
    df_val, directory=image_dir, x_col='image', y_col='category',
    class_mode='categorical', target_size=(img_height, img_width),
    batch_size=val_batch_size, shuffle=True)

steps_per_epoch = train_generator.n // 32
validation_steps = validation_generator.n // 32
```

Now, the piece that follows is the only part I change when running the two experiments, but I get drastically different results:

```python
from __future__ import absolute_import, division, print_function
import os
import tensorflow as tf
from tensorflow import keras
print("TensorFlow version is", tf.__version__)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.applications import MobileNet, InceptionResNetV2
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.python.keras.applications import densenet

image_dir = 'data/images'
img_height = 224
img_width = 224
IMG_SHAPE = (img_height, img_width, 3)

def step_decay(epoch, lr):
    print(lr)
    drop = 0.96
    return lr * drop

lrate = tf.keras.callbacks.LearningRateScheduler(step_decay)
callbacks = [lrate]

base_model_seq = keras.applications.densenet.DenseNet201(
    input_shape=IMG_SHAPE, include_top=False, weights='imagenet')

# This is the only part I change while training the model:
base_model_seq.trainable = False
# vs.
for layer in base_model_seq.layers:
    layer.trainable = False

model_seq = tf.keras.Sequential([
    base_model_seq,
    keras.layers.Flatten(),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(2048, activation='relu'),
    keras.layers.Dropout(0.5),
    keras.layers.BatchNormalization(),
    keras.layers.Dense(5, activation='softmax'),
])

for layer in base_model_seq.layers:
    layer._name = layer.name + str(1729)

print(len(model_seq.trainable_variables))
model_seq.compile(optimizer=tf.keras.optimizers.Adam(lr=0.001),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
print(model_seq.summary())
epochs = 100
```

When I run it freezing the model with `base_model_seq.trainable = False`, the model starts at 8% validation accuracy and rises to 91% in the first 20 epochs. But when I run it freezing the model with `for layer in base_model_seq.layers: layer.trainable = False`, the validation accuracy starts at 72%, reaches 82% in 20 epochs, and then stalls there. I have repeated the experiment enough times to ensure this cannot be attributed to chance initialization. Please point me to the source of the inconsistency between the approaches I have described.
tensorflow/tensorflow | TF 2.0: constant_folding failed: Invalid argument: Unsupported type: 21 | Bug | System information: TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): tf-nightly-gpu-2.0-preview 2.0.0-dev20190606. Python version: 3.6.5.

Code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.dense = tf.keras.layers.Dense(10)

    def call(self, inputs):
        return self.dense(inputs)

model = Model()

def forward(x):
    batch_size = x.shape[0]
    ys = tf.TensorArray(tf.float32, size=batch_size)
    for i in tf.range(batch_size):
        y = model(x[i][tf.newaxis])
        ys = ys.write(i, y)
    return ys.stack()

def train(x, forward_func):
    with tf.GradientTape() as tape:
        ys = forward_func(x)
        loss = tf.reduce_mean(ys)
    grads = tape.gradient(loss, model.trainable_weights)
    return grads

def big_train(x):
    with tf.GradientTape() as tape:
        batch_size = x.shape[0]
        ys = tf.TensorArray(tf.float32, size=batch_size)
        for i in tf.range(batch_size):
            y = model(x[i][tf.newaxis])
            ys = ys.write(i, y)
        ys = ys.stack()
        loss = tf.reduce_mean(ys)
    grads = tape.gradient(loss, model.trainable_weights)
    return grads

x = np.random.rand(10, 5).astype(np.float32)

code_buggy = [
    'tf.function(train)(x, forward)',
    'tf.function(big_train)(x)',
]
code_normal = [
    'tf.function(train)(x, tf.function(forward))',
    'train(x, tf.function(forward))',
    'train(x, forward)',
    'big_train(x)',
]

def test_code(code):
    tf.print()
    tf.print(f'{code}')
    exec(code)

test_code(code_buggy[0])
test_code(code_buggy[1])
test_code(code_normal[0])
test_code(code_normal[1])
test_code(code_normal[2])
test_code(code_normal[3])
```

Other info / logs:

```
tf.function(train)(x, forward)
2019-06-07 16:46:23.314712: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] constant_folding failed: Invalid argument: Unsupported type: 21
2019-06-07 16:46:23.357137: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] constant_folding failed: Invalid argument: Unsupported type: 21
2019-06-07 16:46:23.460568: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
tf.function(big_train)(x)
2019-06-07 16:46:24.139754: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] constant_folding failed: Invalid argument: Unsupported type: 21
2019-06-07 16:46:24.180814: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] constant_folding failed: Invalid argument: Unsupported type: 21
tf.function(train)(x, tf.function(forward))
train(x, tf.function(forward))
train(x, forward)
big_train(x)
```

Related to #28626.
tensorflow/tensorflow | Error when running make for TensorFlow Lite Micro | Bug | System information: OS platform and distribution: Ubuntu 16.04. TensorFlow installed from (source or binary): source. TensorFlow version: bfc8733ffb. GCC/compiler version (if compiling from source): arm-none-eabi-g++ 8.2.1.

Describe the problem: when running `make -f tensorflow/lite/experimental/micro/tools/make/Makefile generate_projects`, it results in the following error:

```
tensorflow/lite/experimental/micro/examples/micro_vision/Makefile.inc:22: *** missing separator. Stop.
```

Reverting commit d77ccda7569 removes that error but leads to the following error:

```
make: *** No rule to make target 'tensorflow/lite/experimental/micro/tools/make/gen/linux_x86_64/prj/micro_vision_test/make/tensorflow/lite/experimental/micro/examples/micro_vision/no_person_image_data.cc', needed by 'generate_micro_vision_test_make_project'. Stop.
```
tensorflow/tensorflow | TF 2.0 API Docs: tf.io.decode_image | Bug | URL(s) with the issue: (link to the documentation entry). Description of issue (what needs changing): Usage example: no usage example is given. Submit a pull request? Yes.
tensorflow/tensorflow | How to convert a TensorFlow SpaceToBatchND + Conv2D + BatchToSpaceND to a single Conv2D in TFLite | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: bug_template.

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Linux Ubuntu 16.04. Mobile device: N/A. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): 1.13.1. Python version: 2.7. Bazel version (if compiling from source): 0.22.0.

Describe the current behavior: I'm trying to train my own DeepLab model using this code and convert it to TFLite. My target is to get a model similar to the hosted one. However, the model I obtain contains operation sequences like [image: SpaceToBatchND -> Conv2D -> BatchToSpaceND]. SpaceToBatchND and BatchToSpaceND operations are not supported by the TFLite OpenGL backend, and they reduce the model's performance on my device. In your hosted DeepLab model, those three ops are replaced by DEPTHWISE_CONV_2D (v2), which has an option to set the dilation factor. This would be the best solution for me, but I'm not sure how to convert SpaceToBatchND + Conv2D + BatchToSpaceND into a single DEPTHWISE_CONV_2D (v2) with dilation = 2. FYI, I have tried the graph transform tool under tensorflow/tools/graph_transforms to flatten the atrous conv (it upsamples the kernel instead of using space-to-batch/batch-to-space), but this transform leads to much more computation than I can afford.

Describe the expected behavior: convert SpaceToBatchND + Conv2D + BatchToSpaceND into a single DEPTHWISE_CONV_2D (v2) with dilation = 2.

Code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. You can try any model under the DeepLab model zoo, for example.

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
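The two implementations of atrous convolution contrasted in the report, dilated kernel taps versus an upsampled (zero-stuffed) kernel, are mathematically equivalent; the kernel-upsampling route just multiplies by many zeros, which is the extra computation the reporter cannot afford. A 1-D pure-Python sketch of the equivalence (illustrative only, not TFLite code):

```python
def conv1d_valid(x, k, dilation=1):
    """'Valid' 1-D convolution (cross-correlation) with dilated kernel taps."""
    span = (len(k) - 1) * dilation + 1
    return [sum(k[j] * x[i + j * dilation] for j in range(len(k)))
            for i in range(len(x) - span + 1)]

def upsample_kernel(k, dilation):
    """Insert dilation-1 zeros between taps, as the graph-transform
    'flatten atrous conv' rewrite effectively does."""
    out = []
    for j, v in enumerate(k):
        out.append(v)
        if j < len(k) - 1:
            out.extend([0.0] * (dilation - 1))
    return out

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
k = [1.0, 0.0, -1.0]
dilated = conv1d_valid(x, k, dilation=2)                    # dilated taps
stuffed = conv1d_valid(x, upsample_kernel(k, 2), dilation=1)  # zero-stuffed kernel
```

Both paths produce identical outputs; the dilated-tap version touches only `len(k)` input samples per output, while the zero-stuffed version touches the whole span, which is why a native dilated DEPTHWISE_CONV_2D is the cheaper form.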
tensorflow/tensorflow | TF 2.0 API Docs: tf.image.crop_and_resize | Bug | URL(s) with the issue: (link to the documentation entry). Description of issue (what needs changing): Raises listed and defined: raises are not defined. Usage example: no usage example is given. Submit a pull request? Yes.
tensorflow/tensorflow | TF 2.0 nightly: GRU/LSTM layers don't use cuDNN properly | Bug | System information: Have I written custom code: yes. OS platform and distribution: Ubuntu 16.04 (Docker 18.09.6-ce), Arch Linux 5.1.5. TensorFlow installed from: pip install tf-nightly-gpu-2.0-preview. TensorFlow version: 2.0.0-dev20190606, but every nightly since 2.0.0-dev20190319 presents the same behaviour. Python version: 3.6.8. CUDA/cuDNN version: CUDA V10.0.130, cuDNN 7.5.0.56. GPU model and memory: Nvidia GTX 980 Ti 6 GB, Nvidia GTX 1070 8 GB.

Describe the current behavior: GRU/LSTM layers don't use the cuDNN implementation properly, resulting in much worse performance. Let's take for example this toy network:

```python
# imports
import numpy as np
import tensorflow as tf

print('TensorFlow version: ' + str(tf.__version__))

# checks
from tensorflow.python.eager import context
print('Executing eagerly: ' + str(context.executing_eagerly()))
print('Number of GPUs: ' + str(context.num_gpus()))

# generate random data
x = np.random.rand(6720, 700, 3)
y = x[:, -1, -1]
print('shapes:', x.shape, y.shape)

# define toy network
input_shape = x.shape[2]
rnn_state_size = 1
timesteps = x.shape[1]

inputs = tf.keras.layers.Input(shape=(timesteps, input_shape),
                               dtype=np.float32)
output = tf.keras.layers.LSTM(rnn_state_size)(inputs)
model = tf.keras.Model(inputs, output)
model.compile('rmsprop', 'mse')
print(model.summary())

# fit
model.fit(x, y)
```

With the latest nightly, this is what we obtain:

```
TensorFlow version: 2.0.0-dev20190606
Executing eagerly: True
2019-06-06 12:52:23.635654: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-06-06 12:52:23.660930: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1658] Found device 0 with properties: name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.7845 pciBusID: 0000:42:00.0
2019-06-06 12:52:23.661142: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-06-06 12:52:23.661983: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-06-06 12:52:23.662749: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2019-06-06 12:52:23.662937: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2019-06-06 12:52:23.663896: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2019-06-06 12:52:23.664621: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2019-06-06 12:52:23.667023: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-06-06 12:52:23.667936: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1781] Adding visible gpu devices: 0
2019-06-06 12:52:23.668222: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-06-06 12:52:23.756255: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55b18abf73d0 executing computations on platform CUDA. Devices:
2019-06-06 12:52:23.756289: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1
2019-06-06 12:52:23.758641: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3494060000 Hz
2019-06-06 12:52:23.759820: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55b18aff2990 executing computations on platform Host. Devices:
2019-06-06 12:52:23.759845: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0):
2019-06-06 12:52:23.760484: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1658] Found device 0 with properties: name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.7845 pciBusID: 0000:42:00.0
2019-06-06 12:52:23.760515: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-06-06 12:52:23.760527: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-06-06 12:52:23.760537: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2019-06-06 12:52:23.760547: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2019-06-06 12:52:23.760557: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2019-06-06 12:52:23.760567: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2019-06-06 12:52:23.760577: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-06-06 12:52:23.761521: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1781] Adding visible gpu devices: 0
2019-06-06 12:52:23.761549: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-06-06 12:52:23.762256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-06-06 12:52:23.762272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1205]      0
2019-06-06 12:52:23.762280: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1218] 0:   N
2019-06-06 12:52:23.763253: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6407 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:42:00.0, compute capability: 6.1)
Number of GPUs: 1
shapes: (6720, 700, 3) (6720,)
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 700, 3)]          0
_________________________________________________________________
lstm (LSTM)                  (None, 1)                 20
=================================================================
Total params: 20
Trainable params: 20
Non-trainable params: 0
_________________________________________________________________
None
Train on 6720 samples
2019-06-06 12:52:26.219667: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
6720/6720 - 114s 17ms/sample - loss: 0.1441
```

This is much slower than what we obtain with version 2.0.0-dev20190319 and earlier (including version 2.0-alpha):

```
TensorFlow version: 2.0.0-dev20190319
Executing eagerly: True
2019-06-06 13:23:14.360714: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-06-06 13:23:14.379231: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-06-06 13:23:14.500580: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x558a01550ac0 executing computations on platform CUDA. Devices:
2019-06-06 13:23:14.500637: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1
2019-06-06 13:23:14.525050: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3494060000 Hz
2019-06-06 13:23:14.526497: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x558a01662bb0 executing computations on platform Host. Devices:
2019-06-06 13:23:14.526541: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0):
2019-06-06 13:23:14.526816: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1551] Found device 0 with properties: name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.7845 pciBusID: 0000:42:00.0 totalMemory: 7.92GiB freeMemory: 6.59GiB
2019-06-06 13:23:14.526860: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1674] Adding visible gpu devices: 0
2019-06-06 13:23:14.526931: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-06-06 13:23:14.527880: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1082] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-06-06 13:23:14.527903: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1088]      0
2019-06-06 13:23:14.527925: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1101] 0:   N
2019-06-06 13:23:14.528098: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1222] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6407 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:42:00.0, compute capability: 6.1)
Number of GPUs: 1
shapes: (6720, 700, 3) (6720,)
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 700, 3)]          0
_________________________________________________________________
lstm (LSTM)                  (None, 1)                 20
=================================================================
Total params: 20
Trainable params: 20
Non-trainable params: 0
_________________________________________________________________
None
2019-06-06 13:23:16.864613: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
6720/6720 - 6s 884us/sample - loss: 0.1065
```

Other info / logs: I have tried on different computers and was able to reproduce the issue. With the modifications from this pull request, I obtain the same performance in the latest nightly as in 2.0.0-dev20190319, but with the advantage of being able to use cuDNN with masking, which was added by @qlzh727 in this commit (diff a9f256601f2626075300a37eeb4cea5f). I am willing to contribute to solving this issue in a better way if you would like me to. Thanks!
tensorflowtensorflow | tf.function fails to parse for loop in some circumstances | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: gLinux. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.0. Python version: 3.6. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior:

```python
@tf.function
def tf_function_with_loop(num_iter):
    digit_list = []
    for i in tf.range(num_iter):
        digit_list.append(i)
    return tf.add_n(digit_list)

tf_function_with_loop(5)
```

fails with the log posted in the logs section, whereas the same function without @tf.function returns tf.Tensor(10, shape=(), dtype=int32). Describe the expected behavior: tf_function_with_loop(5) should return tf.Tensor(10, shape=(), dtype=int32). Code to reproduce the issue: written above. Other info / logs: ValueError: Tried to capture a tensor from an inner function. This can be caused by accessing a tensor defined inside a loop or conditional body, or a subfunction, from a calling function, without going through the proper return value mechanism. Consider using TensorFlow mechanisms such as TensorArrays to return tensors from inner functions or loop/conditional bodies. Tensor: Tensor("TensorArrayV2Read/TensorListGetItem:0", shape=(), dtype=int32); tensor graph: FuncGraph(name=while_body_40, id=139946430861008); this graph: FuncGraph(name=tf_function_with_loop, id=139946437179024).
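A hedged sketch of the TensorArray-based workaround the error message itself suggests (the function name is mine, not from the report):

```python
import tensorflow as tf

@tf.function
def sum_range(num_iter):
    # Accumulate loop values in a TensorArray instead of a Python list,
    # so AutoGraph can thread the state through the generated while loop.
    ta = tf.TensorArray(tf.int32, size=num_iter)
    for i in tf.range(num_iter):
        ta = ta.write(i, i)
    return tf.reduce_sum(ta.stack())

print(sum_range(5))  # tf.Tensor(10, shape=(), dtype=int32)
```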
tensorflowtensorflow | TensorFlow Debugger run -t fails on Keras | Bug | See the description at [link elided]. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS. TensorFlow installed from (source or binary): binary. TensorFlow version: 1.13.1. Python version: 3.7. Describe the current behavior: an exception is thrown. Describe the expected behavior: run the number of iterations specified in the run -t command. Code to reproduce the issue: see [link elided].
tensorflowtensorflow | add_update in cross-replica mode is broken; BatchNormalization layer impossible to use | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Arch Linux. TensorFlow installed from (source or binary): binary. TensorFlow version: v1.12.1-3374-g9eb67b17bf 2.0.0-dev20190605. Python version: 3.6. CUDA/cuDNN version: 10. GPU model and memory: 1080 Ti. Describe the current behavior: I expected to do a forward pass with a model containing a BatchNormalization layer in training mode while using tf.distribute.MirroredStrategy, but I can't, because it raises the following exception: RuntimeError: add_update was called in a cross-replica context. This is not expected. If you require this feature, please file an issue. Why is it not expected? Describe the expected behavior: it should work. The commit that introduced this behavior: diff 8eb7e20502209f082d0cb15119a50413. Code to reproduce the issue:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(1),
])
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    out = model(tf.zeros([1, 10]), training=True)
```
tensorflowtensorflow | pix2pix tutorial BatchNorm issue | Bug | URL(s) with the issue: [link elided]. Description of issue (what needs changing): the generate_images function:

```python
def generate_images(model, test_input, tar):
    # The training=True is intentional here since we want the batch
    # statistics while running the model on the test dataset. If we use
    # training=False, we will get the accumulated statistics learned
    # from the training dataset (which we don't want).
    prediction = model(test_input, training=True)
    plt.figure(figsize=(15, 15))
    display_list = [test_input[0], tar[0], prediction[0]]
    title = ['Input Image', 'Ground Truth', 'Predicted Image']
    for i in range(3):
        plt.subplot(1, 3, i + 1)
        plt.title(title[i])
        # Getting the pixel values between [0, 1] to plot it.
        plt.imshow(display_list[i] * 0.5 + 0.5)
        plt.axis('off')
    plt.show()
```

generate_images uses the generator model with the flag training=True. The problem is that this way the batch-normalization statistics (moving mean and moving variance) are updated using the test-set statistics; this is wrong. The original pix2pix paper asserts that they evaluate the model using the flag training=True, but they do this in order to normalize using the minibatch (batch size 1) statistics; this is done only in the test phase, and the statistics should not be saved into the model. I think a better approach would be to visualize the data generated during training, not by re-calling the generator, but by using the data generated in order to calculate the loss. Once training is finished, we can evaluate the model. The problem is that in TF 2.0 it is not possible to use the minibatch statistics in the BatchNormalization layer: every time we call batchnorm(input, training=True), the moving mean and moving variance are updated. I think this could be managed in a better way by adding a flag that tells the layer whether to use the minibatch statistics.
tensorflowtensorflow | TensorFlow Lite model accuracy page not found | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: [link elided]. Description of issue (what needs changing): on the post-training quantization page, the link to the TensorFlow Lite model accuracy page is invalid; it changed to a new address. After checking, please fix it. Clear description: for example, why should someone use this method? How is it useful? Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined (for example, raises)? Usage example: is there a usage example? Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request: are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflowtensorflow | 404 | Bug | The TF 2.0 alpha page returns a 404.
tensorflowtensorflow | TF 2.0 CrossShardOptimizer not working with global_step parameter | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. System information: OS platform and distribution: Ubuntu 14.04. TensorFlow installed from (source or binary): binary. TensorFlow version: tensorflow==2.0.0a0. Python version: 3.6. Describe the current behavior: I'm trying to run CrossShardOptimizer with tf.optimizers.Adam; however, tf.optimizers.Adam was updated to v2 with no global_step parameter, while CrossShardOptimizer is still the version-1 implementation and still has that global_step parameter, so CrossShardOptimizer fails badly. Describe the expected behavior: CrossShardOptimizer shouldn't break. Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

```python
def model_fn(features, labels, mode, params):
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(128),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    optimizer = None
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.optimizers.Adam(params.get('learning_rate', 1e-3))
        if params.get('use_tpu', True):
            optimizer = tpu_optimizer.CrossShardOptimizer(optimizer)
    with tf.GradientTape() as tape:
        logits = model(features)
        if mode == tf.estimator.ModeKeys.PREDICT:
            predictions = logits
            return tpu_estimator.TPUEstimatorSpec(mode, predictions=predictions)
        loss = tf.keras.losses.SparseCategoricalCrossentropy(
            from_logits=True)(labels, logits)
    if mode == tf.estimator.ModeKeys.EVAL:
        return tpu_estimator.TPUEstimatorSpec(mode, loss=loss)

    def train_fn():
        assert optimizer is not None
        gradients = tape.gradient(loss, model.trainable_variables)
        global_step = tf.compat.v1.train.get_global_step()
        update_global_step = tf.compat.v1.assign(
            global_step, global_step + 1, name='update_global_step')
        with tf.control_dependencies([update_global_step]):
            apply_grads = optimizer.apply_gradients(
                zip(gradients, model.trainable_variables))
        return apply_grads

    if mode == tf.estimator.ModeKeys.TRAIN:
        return tpu_estimator.TPUEstimatorSpec(mode, loss=loss,
                                              train_op=train_fn())
```

The complete code can be found at [link elided]. Other info / logs: this issue gets fixed when the L173 line is modified to self._opt.apply_gradients(summed_grads_and_vars, name=name) and global_step is updated manually, like lines L75-L78 do. Do you want to contribute: yes.
tensorflowtensorflow | feature_column stock example fails on GPU | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no, this is a stock example; see the Colab notebook here to reproduce. TensorFlow installed from: pip. TensorFlow version: 2.0.0-dev20190605. GPU model and memory: Colab. Describe the current behavior: model.fit fails in the stock example with the following error: InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: Expected D2 of index to be 2, got 3 at position 1 [[node sequential/dense_features_6/age_bucketized_X_thal_indicator/SparseCross (defined at <ipython-input-14>)]] (1) Invalid argument: Expected D2 of index to be 2, got 3 at position 1 [[node sequential/dense_features_6/age_bucketized_X_thal_indicator/SparseCross (defined at <ipython-input-14>)]] [[sequential/dense_features_6/age_bucketized_X_thal_indicator/SparseToDense/_56]] 0 successful operations. 0 derived errors ignored. [Op:__inference_keras_scratch_graph_2134]
tensorflowtensorflow | TF 2.0 API docs: tf.data.experimental.shuffle_and_repeat | Bug | URL(s) with the issue: [link elided]. Description of issue (what needs changing): Clear description: no; it has a warning and it is somewhat abstract about when to use the function. Raises listed and defined: not yet. While comparing against dataset.shuffle(buffer_size, reshuffle_each_iteration=True).repeat(count), the difference is in the actions performed on the dataset. Usage example: no usage example; an example is needed to clearly explain the difference between the two shuffles.
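As a hedged illustration of the kind of usage example the report asks for, here is the non-fused shuffle(...).repeat(...) chain that the report compares against (shuffle_and_repeat fuses these two steps; with the fused version, elements near an epoch boundary may interleave):

```python
import tensorflow as tf

# With reshuffle_each_iteration=True, each of the 2 repetitions is
# shuffled independently; every repetition still contains all 5 elements.
ds = tf.data.Dataset.range(5).shuffle(
    5, reshuffle_each_iteration=True).repeat(2)
values = [int(x) for x in ds]
print(len(values))     # 10
print(sorted(values))  # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
```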
tensorflowtensorflow | unittest and test_session interaction | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow versions: 1.12.1, 1.13.1, 1.14.0rc0. Python versions: 3.5, 3.6, 3.7. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: 10.0.130. GPU model and memory (compute capability): 7.5.0. Environment capture available at [link elided]. Describe the current behavior: additional "ghost" tests are run but skipped when using unittest with the TensorFlow TestCase class. This behavior is present in 1.12.1; when upgrading to 1.13.1 or 1.14.0rc0, the tests are skipped entirely. The ghost tests relate to the test_session method within tensorflow.python.framework.test_util, and unittest believes the tests are not actually tests. Describe the expected behavior: no ghost tests should be run at all in 1.12.1, and the tests should work in 1.13.1 and 1.14.0rc0. Code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np
import unittest

print(tf.__version__)


def get_entries_np(t, indices_d1, indices_d2, batch_size):
    result = np.zeros(batch_size)
    for i in range(batch_size):
        result[i] = t[i, indices_d1[i], indices_d2[i]]
    return result


def get_entries_tf(t, indices_d1, indices_d2, batch_size):
    indices = tf.stack([tf.range(batch_size), indices_d1, indices_d2], axis=1)
    return tf.gather_nd(t, indices)


# Start of region of interest. Please enable and disable this region with
# TensorFlow 1.12.1 and then with 1.13.1 or 1.14.0rc0, and the behaviour
# will be seen.
try:
    delattr(tf.test.TestCase, 'test_session')
except AttributeError:
    pass


class OwnTestCase(tf.test.TestCase):
    pass
# End of region of interest.


class TestCaseT(tf.test.TestCase):

    def test_get_entries(self):
        success = True
        for _ in range(10):
            # Sample input.
            batch_size, d1, d2 = map(
                int, np.random.randint(low=2, high=100, size=3))
            test_input = np.random.random((batch_size, d1, d2))
            test_indices_d1 = np.random.randint(low=0, high=d1 - 1,
                                                size=batch_size)
            test_indices_d2 = np.random.randint(low=0, high=d2 - 1,
                                                size=batch_size)
            # Evaluate the numpy version.
            test_result = get_entries_np(test_input, test_indices_d1,
                                         test_indices_d2, batch_size)
            # Evaluate the tensorflow version.
            with self.cached_session() as sess:
                tf_input = tf.constant(test_input, dtype=tf.float32)
                tf_indices_d1 = tf.constant(test_indices_d1, dtype=tf.int32)
                tf_indices_d2 = tf.constant(test_indices_d2, dtype=tf.int32)
                tf_result = get_entries_tf(tf_input, tf_indices_d1,
                                           tf_indices_d2, batch_size)
                tf_result = sess.run(tf_result)
            # Check that outputs are similar.
            success = success and np.allclose(test_result, tf_result)
        self.assertEqual(success, True)
```
tensorflowtensorflow | TimeDistributed wrapper around DepthwiseConv2D breaks (AttributeError) in 1.13 | Bug | System information: OS platform and distribution: Linux Ubuntu 18.04. TensorFlow installed from: pip3. TensorFlow version: v1.13.1-0-g6612da8951 1.13.1. CUDA/cuDNN version: 10.0.130 / 7.4.1. GPU model and memory: NVIDIA K80, 11 GB. Describe the current behavior: a TimeDistributed wrapper around DepthwiseConv2D fails with AttributeError: 'tuple' object has no attribute 'dims'. Describe the expected behavior: the wrapper should successfully apply to the layer (this previously worked in TF 1.11.0). Code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, DepthwiseConv2D, TimeDistributed

test_td_input = Input(shape=(None, 1, 128, 8))
TimeDistributed(DepthwiseConv2D(depth_multiplier=1,
                                kernel_size=(1, 4),
                                strides=(1, 1)))(test_td_input)
```

Other info / logs (traceback, condensed):
File "time_distributed_bug.py", line 12, in <module> ...
File ".../tensorflow/python/keras/engine/base_layer.py", line 538, in __call__: self._maybe_build(inputs)
File ".../tensorflow/python/keras/engine/base_layer.py", line 1603, in _maybe_build: self.build(input_shapes)
File ".../tensorflow/python/keras/layers/wrappers.py", line 216, in build: self.layer.build(tuple(child_input_shape))
File ".../tensorflow/python/keras/layers/convolutional.py", line 1811, in build: if input_shape.dims[channel_axis].value is None:
AttributeError: 'tuple' object has no attribute 'dims'
tensorflowtensorflow | Python3 issue with Keras custom layer | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. System information: OS platform and distribution: Ubuntu 18.04.2 LTS. TensorFlow installed from (source or binary): binary, on TX2, following this [link elided]. TensorFlow version: 1.13.1. Python version: 3.6.7. CUDA/cuDNN version: CUDA 10.0, cuDNN 7.3. GPU model and memory: TX2 (NVIDIA Jetson). Describe the current behavior: I am trying to load a model (.hdf5) which has a Keras custom layer; I also see the same error when creating the Keras model from scratch (Conv2D, however, works all right). Note: I use Python 3 — is there a modification needed for custom layers? The script fails with the following error (traceback condensed): the initial TypeError is raised in tensor_util.make_tensor_proto ("Expected binary or unicode string, got 1"); during handling of that exception, another exception occurs. The failing call chain starts at out = NetVLADLayer(num_clusters=16)(input_img), goes through the custom layer's build() call to self.add_weight(..., trainable=True), through add_variable_with_custom_getter, make_variable, ResourceVariable creation and a RandomUniform initializer, down to ops.convert_to_tensor on the shape, ending in: File ".../tensorflow/python/framework/tensor_util.py", line 562, in make_tensor_proto: TypeError: Failed to convert object of type <class 'tuple'> to Tensor. Contents: (1, 1, Dimension(256), 16). Consider casting elements to a supported type. Describe the expected behavior / Code to reproduce the issue / Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.
tensorflowtensorflow | TF 2.0 API docs: tf.keras.backend.maximum | Bug | URL(s) with the issue: [link elided]. Description of the issue (what needs changing): Clear description: the description is not clear enough. Correct links: the links are okay. Parameters defined: the parameters are defined. Returns defined: the return value is defined. Raises listed and defined: errors are not defined or listed. Usage example: there is no usage example. Request visuals, if applicable: there are no visuals. Submit a pull request: no.
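A minimal usage example of the kind the report asks for — element-wise semantics assumed to match tf.maximum, which tf.keras.backend.maximum wraps:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

# Element-wise maximum of two tensors (broadcasts like tf.maximum).
a = tf.constant([1.0, 5.0, -2.0])
b = tf.constant([3.0, 2.0, -4.0])
print(K.maximum(a, b).numpy())  # [ 3.  5. -2.]
```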
tensorflowtensorflow | TF 2.0 API docs: tf.data.experimental.scan | Bug | URL(s) with the issue: [link elided]. Description of issue (what needs changing): Raises listed and defined: no; it's hard for a person to tell what raises the errors. Usage example: no usage example; it might be hard for someone to know in what context to use it. Request visuals, if applicable: no.
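A hedged sketch of what a usage example could look like — scan threads a state through the dataset, with scan_func returning (new_state, output) for each element (the running-sum task here is my choice, not from the docs):

```python
import tensorflow as tf

# Running sum over a dataset: one output element is emitted per input
# element, carrying the accumulated state forward.
ds = tf.data.Dataset.range(1, 6).apply(
    tf.data.experimental.scan(
        tf.constant(0, tf.int64),
        lambda state, x: (state + x, state + x)))
print([int(v) for v in ds])  # [1, 3, 6, 10, 15]
```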
tensorflowtensorflow | TF 2.0 API docs: tf.data.experimental.make_saveable_from_iterator | Bug | URL(s) with the issue: [link elided]. Description of issue (what needs changing): Returns defined: the Returns section is missing. Raises listed and defined: raises are neither listed nor defined.
tensorflowtensorflow | TF 2.0 API docs: tf.io.decode_base64 | Bug | Existing URL with the issue: [link elided]. Description of issue (what needs changing): Clear description: for example, why should someone use this method? How is it useful? Correct links: the link to the Python script where the function is defined is inactive. Wrong: python/ops/gen_string_ops.py — the file is not mentioned in the repo. Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: errors are not defined. Usage example: no usage example is provided; as for use cases, the documentation does not explain how or when to use the symbol.
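A minimal usage example of the kind the report asks for — a round trip through the matching encoder, since decode_base64 expects web-safe base64 ('-' and '_' instead of '+' and '/'):

```python
import tensorflow as tf

# encode_base64 produces the web-safe alphabet that decode_base64 expects.
encoded = tf.io.encode_base64(tf.constant(b"hello tf"))
decoded = tf.io.decode_base64(encoded)
print(decoded.numpy())  # b'hello tf'
```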
tensorflowtensorflow | TF 2.0 API docs: tf.io.decode_json_example | Bug | Existing URL containing the issue: [link elided]. Description of issue (what needs changing): Correct links: the link to the Python script where the function is defined is inactive. Wrong: python/ops/gen_parsing_ops.py — the file mentioned here is not in the repo. Usage example: no usage example is provided; as for use cases, the documentation does not explain when to use and when not to use the symbol. Raises listed and defined: errors are not defined.
tensorflowtensorflow | Link does not exist | Bug | URL(s) with the issue: [link elided]. Description of issue (what needs changing): Clear description: when someone is on this page, clicking on the bullet point "r1.13 (stable)" redirects to a 404 Not Found page. Correct links: the bullet point targets [link elided] while it should target [link elided]. Submit a pull request: I tried to fix it, but I cannot find where this link ref is stated; if someone could point me in the right direction, I could fix it — otherwise someone else could do it.
tensorflowtensorflow | 2.0.0a0: AutoGraph/tf.function does not automatically transform nested class methods | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.0.0a0. Python version: 3.6.5. Describe the current behavior: when we define multiple methods for a class and only decorate one of them with @tf.function, the nested methods are not automatically transformed and errors are raised. Describe the expected behavior: we should only need to decorate the outermost method. Code to reproduce the issue:

```python
# -*- coding: utf-8 -*-
# Author: Lin Lan
from __future__ import absolute_import, division, print_function

import tensorflow as tf


class Foo(tf.keras.Model):

    def __init__(self):
        super(Foo, self).__init__()
        self.dense = tf.keras.layers.Dense(20)
        self.embedding = tf.Variable(
            tf.random.normal([100, 5], dtype=tf.float32))

    @tf.function
    def call(self, inputs):
        embedding = tf.nn.embedding_lookup(self.embedding, inputs)
        return self.inner(embedding)

    def inner(self, embedding):
        batch = tf.shape(embedding)[0]
        ta = tf.TensorArray(tf.float32, size=batch)
        for i in tf.range(batch):
            this = self.dense(embedding[i][tf.newaxis])
            ta = ta.write(i, this)
        return ta.stack()


foo = Foo()
res = foo([0, 2, 4, 6, 8])
```

Other info / logs: TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn. Also decorating the method inner eliminates the error.
tensorflowtensorflow | Dropout in GRU/LSTM in TensorFlow 2.0 doesn't reset dropout mask on call | Bug | System information: TensorFlow git version v1.12.1-3283-geff4ae822a, version 2.0.0-dev20190604, Colab environment. Describe the current behavior: in RNNs, the dropout mask should be reset after every call. However, in training mode (where dropout is active), the GRU and LSTM implementations in TensorFlow 2.0 seem to be reusing the same dropout mask, leading to deterministic behavior. SimpleRNN seems to be doing the right thing, resampling the dropout mask after each call. Describe the expected behavior: the expected behaviour should be the same as SimpleRNN — resample the dropout mask on each call. Code to reproduce the issue: the following code produces the correct behaviour in TensorFlow 1.13.1:

```python
from __future__ import absolute_import, division, print_function
import tensorflow as tf
tf.enable_eager_execution()
import numpy as np

print(tf.__version__)
data = np.random.normal(0, 1, (1, 10, 2)).astype(np.float32)
rnn = tf.keras.layers.GRU(units=10, dropout=0.5, recurrent_dropout=0.5)
print(set(rnn(data, training=True).numpy()[0, 0] for _ in range(5)))
# Output (1.13.1):
# {-0.09432551, -0.07633728, -0.03358479, -0.010588642, 0.0}
```

But in TensorFlow 2.0 it doesn't:

```python
from __future__ import absolute_import, division, print_function
import tensorflow as tf
import numpy as np

print(tf.version.GIT_VERSION, tf.version.VERSION)
data = np.random.normal(0, 1, (1, 10, 2)).astype(np.float32)
rnn = tf.keras.layers.GRU(units=10, dropout=0.5, recurrent_dropout=0.5)
print(set(rnn(data, training=True).numpy()[0, 0] for _ in range(5)))
# Output (v1.12.1-3283-geff4ae822a 2.0.0-dev20190604):
# {-0.14212656}
```

A quick fix is to call reset_dropout_mask and reset_recurrent_dropout_mask between calls; however, this looks like a breaking change:

```python
def fixed_rnn():
    rnn.cell.reset_dropout_mask()
    rnn.cell.reset_recurrent_dropout_mask()
    return rnn(data, training=True).numpy()[0, 0]

print(set(fixed_rnn() for _ in range(5)))
# Output: {-0.004232532, 1.1669009, -0.009177759, 3.0901778, 4.5860972}
```
tensorflowtensorflow | TF 2.0 API docs: tf.image.convert_image_dtype | Bug | URL(s) with the issue: [link elided]. Description of issue (what needs changing): Raises listed and defined: raises are not listed. Usage example: a usage example is not provided. Submit a pull request: yes.
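A minimal usage example of the kind the report asks for — the key behavior being that the conversion also rescales between integer and float image ranges:

```python
import tensorflow as tf

# Converts dtype and rescales: uint8 values in [0, 255] become
# float32 values in [0.0, 1.0].
img = tf.constant([[0, 128, 255]], dtype=tf.uint8)
out = tf.image.convert_image_dtype(img, tf.float32)
print(out.numpy())  # [[0. 0.5019608 1.]]
```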
tensorflowtensorflow | Missing documentation for executing TF 1.0 frozen and saved models | Bug | URL(s) with the issue: [link elided]. Description of issue (what needs changing): I cannot find documentation on how to do native inference with a frozen or saved model in TF 2.0. Clear description: we would like to do inference (prediction) with existing models we trained in TensorFlow 1; we therefore have normal frozen and saved models. However, we cannot find documentation on how to load these models and execute them afterwards. We managed to load the graph object, but the graph object does not allow us to do predictions. In the tfjs project the API was clear to us, but in TF 2.0 we struggle a lot. Usage example: is there a usage example?

```python
import os
import tensorflow as tf

# Load the TensorFlow model into memory.
detection_graph = tf.compat.v1.Graph()
with detection_graph.as_default():
    od_graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(PATH_TO_FROZEN_MODEL, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
```

Many thanks, Sebastian.
tensorflow/tensorflow | tensorflow.image.resize_images not upgraded | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS. TensorFlow installed from (source or binary): pip. TensorFlow version (use command below): 2.0.0a0. Python version: 3.7.3. Describe the current behavior: the function tensorflow.image.resize_images is not upgraded by the tf_upgrade_v2 script; it is not replaced with any working function in TF 2. Describe the expected behavior: the function should be converted to tf.image.resize. Code to reproduce the issue: upgrade the following code with tf_upgrade_v2 and run it in TensorFlow 2.0:

import tensorflow
tensorflow.image.resize_images
tensorflow/tensorflow | Batched matrix multiplication is incorrect for large batches | Bug | This bug seems to exist in all versions of TensorFlow from at least 1.13 to the current 2.0 nightly, under multiple configurations (host OS, GPU, Python version) with CUDA 10.0. It can be reproduced fairly easily:

with tf.device('gpu:0'):
    v = tf.ones((2**17, 1, 1), dtype=tf.float32)
    print(tf.reduce_sum(tf.matmul(v, v)))

This prints 65535.0 instead of 131072.0, stemming from the output values at indices >= 2**16 being 0. (Colab link elided.) Some observations: this occurs for batch dimensions greater than or equal to 2**16; this only occurs on the GPU (the CPU version seems fine); this only occurs for fp32 (fp64 seems fine); in TensorFlow 2.0 the bug only manifests on the first run, as shown in the Colab example (subsequent runs produce the correct results).
tensorflow/tensorflow | ValueError: Output tensors to a Model must be the output of a TensorFlow Layer | Bug | System information: OS platform and distribution: macOS Mojave 10.14.4. Mobile device: n/a. TensorFlow installed from (source or binary): pip3 install tensorflow. TensorFlow version: 1.13.1. Python version: 3.7. Installed using virtualenv/pip/conda: no. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. I am currently building a few custom Keras layers; however, when I try to return a tf.keras.Model with outputs that are the output of the last layer, I get the error: ValueError: Output tensors to a Model must be the output of a TensorFlow Layer (thus holding past layer metadata). Found: Tensor("layer_normalization_1/batchnorm/add_1:0", shape=(10,), dtype=float32). Provide the exact sequence of commands/steps that you executed before running into the problem:

import tensorflow as tf

inps = tf.keras.Input(shape=(None, 256), name='inps')
masks = tf.keras.Input(shape=(1, 1, None), name='masks')
m1 = tf.random.uniform(shape=(8, 20))
m2 = tf.random.uniform(shape=(8, 20))
outputs = tf.keras.layers.Dense(units=512)(m1 + m2)
model = tf.keras.Model(inputs=[inps, masks], outputs=outputs, name='test')

These are toy commands that are similar to my actual code. I am confused as to why the output values from a Dense layer aren't considered output from a TensorFlow layer.
tensorflow/tensorflow | CudnnLSTM fails with XLA on 2080 Ti and CUDA 10.1 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 16.04. TensorFlow installed from (source or binary): from the official tensorflow-gpu Docker image. TensorFlow version (use command below): 1.13.1. Python version: 3.5.2 / 2.7. CUDA/cuDNN version: 10.1. GPU model and memory: NVIDIA GeForce 2080 Ti and NVIDIA Tesla T4. Describe the current behavior: when run with XLA on, the code given below fails with the following exception (startup log lines showing the XLA service and the GeForce RTX 2080 Ti device creation omitted):

2019-06-03 14:05:21.949030: E tensorflow/compiler/xla/status_macros.cc:49] INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/ir_emitter_unnested.cc:3171) ShapeUtil::Equal(first_reduce->shape(), inst->shape())
*** Begin stack trace *** (frames through xla::HloInstruction::Visit, xla::HloComputation::Accept, xla::gpu::NVPTXCompiler::RunBackend, xla::Service::BuildExecutable, tensorflow::XlaCompilationCache::Compile, and tensorflow::XlaCompileOp::Compute) *** End stack trace ***
2019-06-03 14:05:21.949546: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at xla_ops.cc:429: Internal: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/ir_emitter_unnested.cc:3171) ShapeUtil::Equal(first_reduce->shape(), inst->shape())
Traceback (most recent call last): ...
tensorflow.python.framework.errors_impl.InternalError: RET_CHECK failure (tensorflow/compiler/xla/service/gpu/ir_emitter_unnested.cc:3171) ShapeUtil::Equal(first_reduce->shape(), inst->shape()) [[node cluster_1_1/xla_compile]]

Reproduced in the following environments: the official TensorFlow Docker image (tensorflow/tensorflow:latest-gpu) and the NVIDIA TensorFlow Docker image (nvcr.io/nvidia/tensorflow:19.03-py3). Fails with the same error on Keras code (the CuDNNLSTM Keras implementation) too. Describe the expected behavior: the given code completes successfully. Code to reproduce the issue:

import numpy as np
import tensorflow as tf

config = tf.ConfigProto()
# No failure when XLA is disabled
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
session = tf.Session(config=config)

steps = 2  # no failure when 1
inputs = tf.placeholder(dtype=tf.float32, shape=(None, steps, 1))
lstm = tf.contrib.cudnn_rnn.CudnnLSTM(num_layers=1, num_units=1, dtype=tf.float32)
lstm.build(inputs.get_shape())
lstm_output, _ = lstm(inputs)
output = tf.concat([lstm_output, inputs], axis=2)
output = tf.reduce_sum(output, axis=1, keepdims=False)
output = tf.nn.l2_normalize(output, axis=1)

session.run(tf.global_variables_initializer())
session.run(output, feed_dict={inputs: np.zeros((1, steps, 1), dtype=np.float32)})

If you remove any of the TF ops above, the error doesn't reproduce.
tensorflow/tensorflow | Docs: broken reference links in basic text classification tutorial | Bug | Doc issue. URL(s) with the issue: Description of issue (what needs changing): broken/wrong links in the basic text classification tutorial need to be updated, removed, or changed. Clear description: in the basic text classification tutorial (IMDB reviews), the reference link for pad_sequences points to one page but you get redirected to another, which is pretty unhelpful. The reference link to the Keras documentation returns a 404, so there is not much information to be gained from following that link.
tensorflow/tensorflow | Suggested KMP_AFFINITY settings harm performance | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes (actually Keras using the TensorFlow backend). OS platform and distribution: RHEL 7.6. TensorFlow installed from (source or binary): binary via Anaconda. TensorFlow version (use command below): 1.13.1 (mkl_py37h54b294f_0). Python version: 3.7.3. CUDA/cuDNN version: n/a. Describe the current behavior: setting KMP_AFFINITY=granularity=fine,verbose,compact,1,0 as per the performance guidelines, with OMP_NUM_THREADS=4, I trained a model and saw the following output on a 44-core (88 hyperthread) system:

OMP: Info #250: KMP_AFFINITY: pid 372422 tid 372422 thread 0 bound to OS proc set {0}
(15 similar lines: each worker process's thread 0 bound to OS proc set {0})
OMP: Info #250: KMP_AFFINITY: pid 372378 tid 372396 thread 1 bound to OS proc set {4}
OMP: Info #250: KMP_AFFINITY: pid 372378 tid 372460 thread 2 bound to OS proc set {8}
OMP: Info #250: KMP_AFFINITY: pid 372378 tid 372461 thread 3 bound to OS proc set {10}
OMP: Info #250: KMP_AFFINITY: pid 372378 tid 372462 thread 4 bound to OS proc set {6}

There are a bunch of processes and threads here; some are worker processes for feeding in data. Only the final 4 are training threads, which are pinned appropriately; the other procs/threads are erroneously pinned to a single hardware thread context. Using KMP_AFFINITY=verbose I get the following output, and substantially greater performance:

OMP: Info #250: KMP_AFFINITY: pid 373414 tid 373414 thread 0 bound to OS proc set {0-87}
(similar lines: every process and thread bound to OS proc set {0-87})

Describe the expected behavior: don't pin non-OMP worker processes to a single hardware thread context. Perhaps the documentation could suggest starting with KMP_AFFINITY=disabled as a baseline. Other info/logs: other performance-related issues in the tracker are likely to have been raised because of this problem; e.g. #29008 and #23238 are currently open, and there are closed issues that were likely caused by this but were closed without proper resolution (#15320 and #22212, but probably many others). At least suggesting KMP_AFFINITY=disabled in such issues might be warranted in the future.
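The baseline the reporter suggests can be expressed as a shell configuration fragment. This is a sketch only: the variable names come from the Intel OpenMP runtime as described in the issue, and the right thread count depends on your machine.

```shell
# Baseline suggested in the issue: no thread pinning at all.
export KMP_AFFINITY=disabled
export OMP_NUM_THREADS=4

# The setting from the TensorFlow performance guidelines that the issue
# reports as harmful when worker processes are present (for comparison):
# export KMP_AFFINITY=granularity=fine,verbose,compact,1,0
```

Measure both configurations on your own workload before committing to either; the issue's point is precisely that the guideline's default is not universally faster.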
tensorflow/tensorflow | Markdown formatting issue in tutorial "Load data: TFRecord" | Bug | URL(s) with the issue: Description of issue (what needs changing): in the context "The tf.train.Feature message type can accept one of the following three types (see the .proto file). Most other generic types can be coerced into one of these." Note: the link to the .proto file is incorrect. It should read: "The tf.train.Feature message type can accept one of the following three types (see the .proto file for reference). Most other generic types can be coerced into one of these."
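The coercion the tutorial sentence describes, mapping generic Python values onto tf.train.Feature's three value types (BytesList, FloatList, Int64List), can be sketched without TensorFlow. The function below is a hypothetical illustration of the dispatch, not the actual proto API:

```python
def feature_kind(value):
    """Map a Python value to the tf.train.Feature list type it would
    typically be coerced into (illustrative sketch, not the real API)."""
    if isinstance(value, (bytes, str)):
        return "bytes_list"            # strings / raw bytes -> BytesList
    if isinstance(value, (bool, int)):
        return "int64_list"            # bool / int / enum -> Int64List
    if isinstance(value, float):
        return "float_list"            # float / double -> FloatList
    raise TypeError(f"no Feature coercion for {type(value).__name__}")

print(feature_kind(b"jpeg bytes"))     # bytes_list
print(feature_kind(True))              # int64_list
print(feature_kind(3.14))              # float_list
```

Note that bool must be checked alongside int: in Python, bool is a subclass of int, and both end up in Int64List.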
tensorflow/tensorflow | tf.config.set_soft_device_placement seems to have no effect | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS 10.13.6. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): VERSION 2.0.0-dev20190527, GIT_VERSION v1.12.1-2821-gc5b8e15064. Python version: 3.5. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: CUDA 10.0 (it's just a Colab GPU instance). GPU model and memory: Tesla P4, 15079 MiB. Describe the current behavior: the tf.config.set_soft_device_placement function seems to have no effect; when I create an integer variable and try to place it on a GPU, I still get an exception. Describe the expected behavior: I expect soft placement to fall back to using the CPU, with no error. Code to reproduce the issue:

import tensorflow as tf

tf.config.set_soft_device_placement(True)
with tf.device('gpu:0'):
    f = tf.Variable(42)

Other info/logs: the code above causes the following exception (the 10 traceback frames through variables.py, variable_scope.py, resource_variable_ops.py, and gen_resource_variable_ops.py are omitted):

NotFoundError: No registered 'VarHandleOp' OpKernel for GPU devices compatible with node {{node VarHandleOp}} (OpKernel was found, but attributes didn't match). Requested attributes: container="", dtype=DT_INT32, shape=[], shared_name="cd2c89b7-88b7-44c8-ad83-06c2a9158347". Registered devices: [CPU, GPU, XLA_CPU, XLA_GPU]. Registered kernels: device='GPU' with dtype in [DT_VARIANT, DT_INT64, DT_COMPLEX128, DT_COMPLEX64, DT_BOOL, DT_DOUBLE, DT_FLOAT, DT_HALF]; device='CPU'; device='XLA_CPU'; device='XLA_GPU'. [[Op:VarHandleOp] name: Variable/]
tensorflow/tensorflow | TF 2.0 API docs: tf.data.Dataset.map examples have web rendering issues | Bug | URL(s) with the issue: Description of issue (what needs changing): the map method docs for each of these classes have an example which is not being rendered properly. This is what is intended by the docstring:

NOTE: The following examples use { ... } to represent the contents of a dataset. Each element is a tf.Tensor object.
a = { 1, 2, 3, 4, 5 }
map_func takes a single argument of type tf.Tensor with the same shape and dtype.
result = a.map(lambda x: ...)

What gets rendered on the web is the same text run together as one line, losing the code formatting; it looks like the backtick references have an issue. Submit a pull request? No; the docstring looks correct, and I don't know how to fix the error.
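For reference, the semantics the docstring example is trying to show can be mirrored with Python's built-in map. This is an analogue only; the real Dataset.map traces map_func into a TensorFlow graph, and the lambda body below is an illustrative choice, since the issue text elides it:

```python
# Analogue of the docstring example: a dataset containing {1, 2, 3, 4, 5}
# mapped elementwise by a one-argument function.
a = [1, 2, 3, 4, 5]
result = list(map(lambda x: x + 1, a))
print(result)   # [2, 3, 4, 5, 6]
```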
tensorflow/tensorflow | TF 2.0 API docs: tf.data.Dataset and tf.data.experimental.CsvDataset list_files | Bug | URL(s) with the issue: (list_files for both classes). Description of issue (what needs changing): the list_files routine for both classes uses the same bit of code to describe how to use the method. The descriptive text is written in the docstring and looks like this:

Example: if we had the following files on our filesystem:
- /path/to/dir/a.txt
- /path/to/dir/b.py
- /path/to/dir/c.py
If we pass "/path/to/dir/*.py" as the directory, the dataset would produce:
- /path/to/dir/b.py
- /path/to/dir/c.py

However, when rendered it is all one line, without the indentation or bullet points, so it looks like this: "If we had the following files on our filesystem: /path/to/dir/a.txt /path/to/dir/b.py /path/to/dir/c.py. If we pass /path/to/dir/*.py as the directory, the dataset would produce: /path/to/dir/b.py /path/to/dir/c.py". This should be corrected.
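The glob matching that the docstring example describes can be approximated with the stdlib fnmatch module. This is an analogue of the pattern semantics only, not the TF implementation (list_files actually scans the filesystem and shuffles by default):

```python
import fnmatch

# The files from the docstring example above.
files = ["/path/to/dir/a.txt", "/path/to/dir/b.py", "/path/to/dir/c.py"]
pattern = "/path/to/dir/*.py"

# list_files keeps only the paths matching the glob pattern.
matched = [f for f in files if fnmatch.fnmatch(f, pattern)]
print(matched)   # ['/path/to/dir/b.py', '/path/to/dir/c.py']
```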
tensorflow/tensorflow | TF 2.0 API docs: tf.data.Dataset | Bug | URL(s) with the issue: Description of issue (what needs changing): needs an overall example of how to use tf.data.Dataset. It is a struggle to relate the dataset to the Keras model code; the model.fit docs describe how to use a tf.data.Dataset as input, but without experimentation it's hard to discern the correct format of the dataset. Correct links (is the link to the source code correct?): yes. Parameters defined (are all parameters defined and formatted correctly?): yes. Returns defined (are return values defined?): yes. Raises listed and defined (are the errors defined?): yes. Usage example (is there a usage example?): no; I added an example for the common use case (for most users) of using the dataset with a simple model.fit call for Keras. Request visuals, if applicable (are there currently visuals?): no; yes, they would be helpful. Submit a pull request (are you planning to also submit a pull request to fix the issue?): yes, "TF 2.0 API docs: tf.data — add pointers to tutorials which work with r2.0" (#29323).
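The dataset format the reporter found hard to discern, an iterable of (features, labels) batches, can be sketched with a plain Python generator. This is illustrative only; a real pipeline would use tf.data.Dataset.from_tensor_slices((x, y)).batch(n) and pass it straight to model.fit:

```python
def batches(features, labels, batch_size):
    """Yield (features, labels) tuples, mirroring the element structure
    model.fit consumes from a tf.data.Dataset (plain-Python sketch)."""
    for i in range(0, len(features), batch_size):
        yield features[i:i + batch_size], labels[i:i + batch_size]

xs = list(range(10))
ys = [x % 2 for x in xs]
for x_batch, y_batch in batches(xs, ys, batch_size=4):
    print(len(x_batch), x_batch, y_batch)   # batches of 4, 4, and 2
```

The key point for model.fit is the pairing: each element the model sees is one (inputs, targets) tuple, batched along the leading dimension.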
tensorflow/tensorflow | TF 2.0 API docs: tf.image.central_crop | Bug | URL(s) with the issue: Description of issue (what needs changing): Usage example: no usage example provided. Submit a pull request? Yes.
tensorflow/tensorflow | TF 2.0 API docs: tf.image.adjust_saturation | Bug | URL(s) with the issue: Description of issue (what needs changing): Raises listed and defined: raises are not listed. Usage example: no usage example has been provided. Submit a pull request? Yes.
tensorflow/tensorflow | TF 2.0 API docs: tf.image.adjust_jpeg_quality | Bug | URL(s) with the issue: Description of issue (what needs changing): Raises listed and defined: raises are not listed and defined. Usage example: no usage example has been provided. Submit a pull request? Yes.
tensorflow/tensorflow | tf.Module doesn't recognise non-trainable variables from Keras layers (TF 2.0) | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 16.04. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): tf-nightly-2.0-preview 2.0.0-dev20190602. Python version: 3.6. Bazel version (if compiling from source): GCC/compiler version (if compiling from source): CUDA/cuDNN version: CPU. When using Keras layers inside a tf.Module, setting trainable=False on a Keras layer doesn't result in non-trainable variables in the tf.Module scope. In the example code below, len(m.trainable_variables) should return 6, but it returns 8. Code to reproduce the issue:

import tensorflow as tf

class M(tf.Module):
    def __init__(self):
        super(M, self).__init__()
        self.lists = []
        self.lists.append([tf.keras.layers.Dense(5, trainable=False), tf.keras.layers.Dense(5)])
        self.lists.append([tf.keras.layers.Dense(5), tf.keras.layers.Dense(5)])

    def __call__(self, inputs):
        outputs = inputs
        for l_list in self.lists:
            for l in l_list:
                outputs = l(outputs)
        return outputs

m = M()
m(tf.ones((10, 10)))
# Got:      print(len(m.trainable_variables))  -> 8
# Expected: print(len(m.trainable_variables))  -> 6
tensorflow/tensorflow | TF 2.0 API docs: tf.image.adjust_hue | Bug | URL(s) with the issue: Description of issue (what needs changing): Raises listed and defined: raises are not listed and defined. Usage example: no usage example has been provided. Submit a pull request? Yes.
tensorflow/tensorflow | TF 2.0 API docs: tf.image.adjust_gamma | Bug | URL(s) with the issue: Description of issue (what needs changing): Usage example: no usage example provided. Submit a pull request? Yes.
tensorflow/tensorflow | TF 2.0 API docs: tf.data | Bug | URL(s) with the issue: Description of issue (what needs changing): add references to three tutorials on using tf.data; since there is a mix of r2.0 and r1.x tutorials on datasets, these three are the ones relevant for 2.0. Correct links: yes. Parameters defined: yes / n.a. Returns defined: yes / n.a. Raises listed and defined: no; perhaps n.a. Usage example: no; this is to add a pointer to usage. Request visuals, if applicable: no; it would be good to have some simple visuals relative to the different classes. Submit a pull request? #29323.
tensorflow/tensorflow | TF 2.0 API docs: tf.image.adjust_contrast | Bug | URL(s) with the issue: Description of issue (what needs changing): Raises listed and defined: raises are not listed and defined. Usage example: no usage example has been provided. Submit a pull request? Yes.
tensorflow/tensorflow | TF 2.0 API docs: tf.image.adjust_brightness | Bug | URL(s) with the issue: Description of issue (what needs changing): Raises listed and defined: raises are not listed and defined. Usage example: no usage example has been provided. Submit a pull request? Yes.
tensorflow/tensorflow | TF 2.0 API docs: tf.identity_n | Bug | URL(s) with the issue: Description of issue (what needs changing): Correct links: the path in the href should be different; also, the file it refers to does not exist. Raises listed and defined: raises are not listed and defined. Submit a pull request? No.
tensorflow/tensorflow | TF 2.0 API docs: tf.identity | Bug | URL(s) with the issue: Description of issue (what needs changing): Raises listed and defined: raises are not listed and defined. Usage example: no usage example provided. Submit a pull request? Yes.
tensorflow/tensorflow | Android build | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example: Description of issue (what needs changing): Clear description: for example, why should someone use this method? How is it useful? Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined? (For example, raises:) Usage example: is there a usage example? Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflow/tensorflow | TF 2.0 API docs: tf.RegisterGradient | Bug | Related: tf.RegisterGradient's __init__ can raise TypeError, so add it to the Raises section.
tensorflow/tensorflow | Keras model load_weights does not consider custom models as layers | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04. TensorFlow installed from (source or binary): conda (Anaconda). TensorFlow version (use command below): 1.13 (the issue still exists in the latest source code). I failed to load the official ResNet .h5 weights with a self-implemented ResNet using tf.keras.Model. I found that the behaviour of load_weights in network.py does not consider sub-tf.keras.Models. For example, the ResNet has a conv block and an identity block; instead of using a function to define them, I used tf.keras.Model to define them, so it follows the latest standard. Instead of simply passing self.layers to load_weights_from_hdf5_group_by_name, the load_weights function in network.py should collect all layers inside the sub-models and then pass them to load_weights_from_hdf5_group_by_name (network.py, line 1415). Suggested naive fix:

def get_all_layers(model: tf.keras.Model):
    layers = []
    for layer in model.layers:
        if isinstance(layer, tf.keras.Model):
            layers.extend(get_all_layers(layer))
        else:
            layers.append(layer)
    return layers
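The recursion in the suggested fix can be exercised in isolation with duck-typed stand-ins. The Model and Layer classes below are hypothetical placeholders; the real check would be isinstance(layer, tf.keras.Model):

```python
class Layer:
    def __init__(self, name):
        self.name = name

class Model(Layer):
    """Stand-in for tf.keras.Model: a layer that contains sub-layers."""
    def __init__(self, name, layers):
        super().__init__(name)
        self.layers = layers

def get_all_layers(model):
    """Flatten nested sub-models into a flat layer list, mirroring the
    naive fix proposed in the issue for load_weights(by_name=True)."""
    flat = []
    for layer in model.layers:
        if isinstance(layer, Model):       # real code: tf.keras.Model
            flat.extend(get_all_layers(layer))
        else:
            flat.append(layer)
    return flat

resnet = Model("resnet", [
    Layer("conv1"),
    Model("conv_block", [Layer("conv2"), Layer("bn2")]),
    Model("identity_block", [Layer("conv3")]),
])
print([l.name for l in get_all_layers(resnet)])
# ['conv1', 'conv2', 'bn2', 'conv3']
```

The flat list is what would then be handed to the by-name HDF5 weight loader, so layers buried inside sub-models can still be matched by name.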
tensorflow/tensorflow | TF 2.0 API docs: tf.RegisterGradient | Bug | URL(s) with the issue: Description of issue (what needs changing): Raises listed and defined: the __init__ method can raise TypeError, but it is not listed.
tensorflow/tensorflow | TF 2.0 API docs: tf.estimator.VocabInfo | Bug | URL(s) with the issue: Description of issue (what needs changing): Clear description: this API documentation does not state when (or when not) to use this symbol; the description lacks details specific to the desired use cases. Correct links: the link to the source code is correct. Parameters defined: all of the parameters are defined and formatted correctly. Returns defined: return values are not defined. Raises listed and defined: errors are not defined. Usage example: the API symbol also doesn't contain well-documented code samples; the current code sample provided is missing concise explanations, for example why you would use the different backup initializers or invocations. Request visuals, if applicable: no visuals specified. Submit a pull request? Yes, I've created a pull request.
tensorflow/tensorflow | TF 2.0 API docs: tf.keras.layers.Maximum | Bug | URL(s) with the issue: Description of issue (what needs changing): clarify the description and add a usage example. Clear description: the description gives no insight into this method's function. Usage example: no usage example provided.
tensorflow/tensorflow | TF 2.0 API docs: tf.data.experimental.Reducer | Bug | URL(s) with the issue: Description of issue (what needs changing): Clear description: a better description could be provided. Parameters defined: no. Returns defined: no. Usage example: no examples are provided.
tensorflow/tensorflow | Issue with usage example in tf.estimator.BaselineEstimator | Bug | Link to issue: So I noticed that in the usage example for tf.estimator.BaselineEstimator, it references tf.contrib.estimator.multi_label_head, and I heard that tf.contrib is being deprecated. There is no replacement yet available for multi_label_head, and I wanted to bring this to your attention.
tensorflow/tensorflow | Issue with usage example in tf.estimator.DNNEstimator | Bug | Link to issue: So I noticed that in the usage example for tf.estimator.DNNEstimator, it references tf.contrib.estimator.multi_label_head, and I heard that tf.contrib is being deprecated. There is no replacement yet available for multi_label_head, and I wanted to bring this to your attention.
tensorflow/tensorflow | TF 2.0 API Docs: tf.data.experimental.OptionalStructure | Bug | URL(s) with the issue. Description of issue (what needs changing): Clear description: no descriptions are provided for any of the defined methods. Usage example: no usage example is provided.
tensorflow/tensorflow | Link not working | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): there is no hyperlink for "Defined in generated file: python/ops/gen_image_ops.py". Clear description: for example, why should someone use this method? How is it useful? Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined? Usage example: is there a usage example? Request visuals, if applicable: are there currently visuals? If not, would they clarify the content? Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflow/tensorflow | TF 2.0 API Docs: tf.data.experimental.Optional | Bug | URL(s) with the issue. Usage example: no usage example provided.
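For the tf.data.experimental.Optional record above, the missing example would need to show the `has_value()` / `get_value()` contract: a value that may or may not be present. A plain-Python sketch of that contract (the `SimpleOptional` class is invented for illustration and is not TensorFlow code):

```python
# Sketch of the has_value()/get_value() contract exposed by
# tf.data.experimental.Optional, in plain Python for illustration only.

class SimpleOptional:
    _EMPTY = object()  # sentinel distinguishing "no value" from None

    def __init__(self, value=_EMPTY):
        self._value = value

    def has_value(self):
        return self._value is not SimpleOptional._EMPTY

    def get_value(self):
        if not self.has_value():
            raise ValueError("optional is empty")
        return self._value

some = SimpleOptional(42)
none = SimpleOptional()
print(some.has_value(), some.get_value(), none.has_value())  # True 42 False
```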
tensorflow/tensorflow | Issue with usage example in tf.estimator.BaselineEstimator | Bug | Link to issue. I noticed that the usage examples for the modules listed above reference tf.contrib.estimator.multi_label_head, and I heard that tf.contrib is being deprecated. There is no replacement yet available for multi_label_head, and I wanted to bring this to your attention.
tensorflow/tensorflow | TF 2.0 API Docs: tf.divide | Bug | URL(s) with the issue. Parameters defined: parameter names are defined, but no documentation is provided regarding how to use them.
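The tf.divide record above flags undocumented parameters (`x`, `y`, and an optional `name`). A doc example would mainly need to show element-wise division semantics; as a hedged plain-Python analogue on equal-length sequences (`elementwise_divide` is an invented helper, not TensorFlow API):

```python
# Illustrative analogue of tf.divide semantics on equal-length sequences:
# element-wise division producing floating-point results.
# Plain Python for clarity, not TensorFlow code.

def elementwise_divide(xs, ys):
    if len(xs) != len(ys):
        raise ValueError("inputs must have the same length")
    return [x / y for x, y in zip(xs, ys)]

print(elementwise_divide([6, 9, 10], [3, 3, 4]))  # [2.0, 3.0, 2.5]
```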
tensorflow/tensorflow | TF 2.0 API Docs: tf.nn.convolution | Bug | URL(s) with the issue. Description of issue (what needs changing): Usage example: needs a usage example.
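To illustrate what a tf.nn.convolution usage example might demonstrate, here is a minimal 1-D "VALID"-padding sketch in plain Python. Like TensorFlow's convolution ops, it computes cross-correlation (the filter is not flipped); the `conv1d_valid` helper is invented for this sketch and is not TensorFlow API:

```python
# Minimal 1-D "VALID"-padding convolution analogue (cross-correlation,
# as TensorFlow computes it). Plain Python for illustration only.

def conv1d_valid(signal, kernel):
    k = len(kernel)
    if k > len(signal):
        raise ValueError("kernel longer than signal")
    # Slide the kernel over every position where it fully overlaps the signal.
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

print(conv1d_valid([1, 2, 3, 4], [1, 1]))  # [3, 5, 7]
```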
tensorflow/tensorflow | TF 2.0 API Docs: tf.control_dependencies | Bug | URL(s) with the issue. Description of issue (what needs changing): Clear description: while the current description is sufficiently clear, it may be worth linking to the API docs it references. Currently: "Wrapper for Graph.control_dependencies() using the default graph." Suggested: the same text, but with Graph.control_dependencies() rendered as a link to its API documentation. Submit a pull request? Yes.
tensorflow/tensorflow | tf.data.Options | Bug | URL(s) with the issue. Description of issue (what needs changing): Usage example: there is no usage example.