repository: string (156 distinct values)
issue title: string (1–1.01k chars)
labels: string (8 distinct values)
body: string (1–270k chars)
tensorflow/tensorflow
tf.keras.layers.SimpleRNN doesn't use GPU
Bug
First, this is an example in TensorFlow 2.0 with Keras:

```python
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense, Embedding

num_words = 10000
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=num_words)

max_len = 500
pad_x_train = pad_sequences(x_train, maxlen=max_len)
pad_x_test = pad_sequences(x_test, maxlen=max_len)

model = Sequential()
model.add(Embedding(input_dim=num_words, output_dim=32))
model.add(SimpleRNN(32, return_sequences=True, dropout=0.15, recurrent_dropout=0.15))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.summary()
history = model.fit(pad_x_train, y_train, batch_size=32, epochs=15, validation_split=0.2)
```

I am using Windows 10, TF 2.0, NVIDIA driver 418.81, CUDA 10.1, and a GeForce GTX 1050. When I use the SimpleRNN layer in Keras, GPU utilization is very low. What happened? [image]

To check whether my GPU works correctly, I ran a Conv2D model. [image] The picture above is a run on the MNIST dataset, which does not seem to require much GPU utilization; I haven't seen 100% usage, but my GPU seems to work fine. Stacking multiple SimpleRNN layers, or using just one, does not raise GPU utilization above 10%. What's wrong? Is it because of the nature of the layer?
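A plausible explanation (my inference, not stated in the report): SimpleRNN has no fused cuDNN kernel, so each timestep launches many tiny GPU ops and utilization stays low. `tf.keras.layers.LSTM`/`GRU` in TF 2.x can dispatch to a fused cuDNN kernel on GPU, but only with default options and no `recurrent_dropout`. A minimal sketch swapping the layer:

```python
import tensorflow as tf

# Same shape of architecture as the report, but with LSTM, which is
# cuDNN-eligible on GPU in TF 2.x when recurrent_dropout=0 and the
# default activations are used.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=32),
    tf.keras.layers.LSTM(32, return_sequences=True, dropout=0.15),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
out = model(tf.zeros([2, 5], dtype=tf.int32))
print(out.shape)  # (2, 1)
```

On a machine with a GPU, the fused path typically shows much higher utilization than SimpleRNN on the same data.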
tensorflow/tensorflow
TF 2.0: "Unknown graph" error when using the tf.function decorator with a pre-trained model
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0-rc0
- Python version: 3.6.9
- CUDA/cuDNN version: 10.0
- GPU model and memory: GTX 1050 Ti

Describe the current behavior: if I remove the @tf.function decorator, the error disappears. I faced this error when creating a loss function in which feature extraction with the pre-trained VGG16 is a step in the pipeline: `ValueError: Unknown graph. Aborting.`

Code to reproduce the issue:

```python
"""Reproduce the "Unknown graph" error of a Keras pre-trained model
wrapped in tf.function in TF 2.0."""
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model
import numpy as np


@tf.function
def extract_feat(feat_extractor, inputs):
    feat = feat_extractor.predict(inputs, steps=1)
    return feat


def main():
    pretrained_vgg16 = VGG16(weights='imagenet', include_top=False)
    feature_extractor = Model(
        inputs=pretrained_vgg16.input,
        outputs=pretrained_vgg16.get_layer('block3_conv3').output)
    # create a dummy input
    inputs = np.random.rand(1, 224, 224, 3) * 0.5 + 0.5
    feat = extract_feat(feature_extractor, inputs)
    print(feat.shape)


if __name__ == '__main__':
    main()
```

Other info / logs (abridged; the repeated "successfully opened dynamic library" and NUMA-node lines are elided):

```
2019-11-05 12:03:34 ... successfully opened dynamic libraries libcuda.so.1,
libcudart.so.10.0, libcublas.so.10.0, libcufft.so.10.0, libcurand.so.10.0,
libcusolver.so.10.0, libcusparse.so.10.0, libcudnn.so.7
... Found device 0 with properties: name: GeForce GTX 1050 Ti, major: 6,
minor: 1, memoryClockRate(GHz): 1.62, pciBusID: 0000:01:00.0
... Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0
with 2787 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti,
pci bus id: 0000:01:00.0, compute capability: 6.1)
Traceback (most recent call last):
  File "/home/biendltb/project/derain_gan/tool/error_reproduce.py", line 29, in <module>
    main()
  File "/home/biendltb/project/derain_gan/tool/error_reproduce.py", line 24, in main
    feat = extract_feat(feature_extractor, inputs)
  File ".../tensorflow_core/python/eager/def_function.py", line 427, in __call__
    self._initialize(args, kwds, add_initializers_to=initializer_map)
  ...
  File ".../tensorflow_core/python/framework/func_graph.py", line 905, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in converted code:

    /home/biendltb/project/derain_gan/tool/error_reproduce.py:13 extract_feat
        feat = feat_extractor.predict(inputs, steps=1)
    ...
    .../tensorflow_core/python/keras/backend.py:3542 __init__
        raise ValueError('Unknown graph. Aborting.')

    ValueError: Unknown graph. Aborting.
```
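A workaround that avoids the error (a sketch, not an official fix): call the model object directly inside the tf.function instead of `model.predict()`, since `predict()` builds its own Keras execution function and cannot run inside another graph. `weights=None` is used here only to skip the download; the report uses `weights='imagenet'`.

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

pretrained = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
feature_extractor = Model(
    inputs=pretrained.input,
    outputs=pretrained.get_layer('block3_conv3').output)


@tf.function
def extract_feat(inputs):
    # Direct __call__ traces the model into the surrounding graph;
    # predict() does not.
    return feature_extractor(inputs, training=False)


feat = extract_feat(tf.random.uniform([1, 224, 224, 3]))
print(feat.shape)  # (1, 56, 56, 256)
```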
tensorflow/tensorflow
AutoGraph error in LSTMCell when using dropout
Bug
System information:
- Have I written custom code: no
- OS platform and distribution: Manjaro Linux Testing
- TensorFlow installed from: pip (binary)
- TensorFlow version: v2.0.0-rc2-26-g64c3d38 (2.0.0)
- Python version: 3.7.4
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior: TensorFlow returns an error when a function calling `LSTMCell.call` is decorated with @tf.function, if dropout is non-zero (the most common scenario):

```
TypeError: An op outside of the function building code is being passed a
"Graph" tensor. It is possible to have Graph tensors leak out of the
function building context by including a tf.init_scope in your function
building code. For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: lstm_cell_1/ones_like:0
```

Describe the expected behavior: compiles the LSTMCell code correctly.

Code to reproduce the issue:

```python
import tensorflow as tf

good_cell = tf.keras.layers.LSTMCell(units=2, dropout=0.0, implementation=1)
bad_cell = tf.keras.layers.LSTMCell(units=2, dropout=0.1, implementation=1)

inputs = tf.ones((1, 2))
states = good_cell.get_initial_state(inputs)


@tf.function
def no_dropout():
    output, _ = good_cell(inputs, states)
    return output


@tf.function
def dropout():
    output, _ = bad_cell(inputs, states)
    return output


print('-' * 50)
print(no_dropout())
print('-' * 50)
print(dropout())
```

Other info / logs: I did a quick investigation and found that setting `self._dropout_mask = None` at the top of the `call` method (line 2210) makes the code work again. This forces a reset of the cached dropout mask used at the previous, first call of LSTMCell.
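Building on the reporter's finding, the same reset is available through the public dropout-mask API on the cell; a hedged workaround sketch (it sidesteps the symptom rather than fixing the underlying bug) clears the cached masks before the traced call:

```python
import tensorflow as tf

cell = tf.keras.layers.LSTMCell(2, dropout=0.1)
inputs = tf.ones((1, 2))
states = [tf.zeros((1, 2)), tf.zeros((1, 2))]  # [h, c]


@tf.function
def step():
    # Clear any dropout masks cached by an earlier eager call, so no stale
    # graph tensor leaks into this trace (the reporter's workaround, via
    # the public reset methods rather than a private attribute).
    cell.reset_dropout_mask()
    cell.reset_recurrent_dropout_mask()
    output, _ = cell(inputs, states, training=True)
    return output


print(step().shape)
```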
tensorflow/tensorflow
module 'tensorflow' has no attribute 'AdamOptimizer'
Bug
Docs URL:
TensorFlow version that I'm using: 2.0.0

Issue: I'm trying to create an estimator using an optimizer with a learning-rate decay. The documentation says to use `tf.AdamOptimizer`, `tf.exponential_decay`, and `tf.get_global_step`; TensorFlow has none of those attributes. I've tried playing around with suggestions from Stack Overflow (i.e. `tf.train.AdamOptimizer`), but I can't figure out where any of those attributes are actually located. Example code:

```python
model = tf.estimator.DNNRegressor(
    feature_columns=feature_columns,
    hidden_units=[100, 100, 100],
    optimizer=lambda: tf.AdamOptimizer(
        learning_rate=tf.exponential_decay(
            learning_rate=0.1,
            global_step=tf.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96)))
```
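For reference, a sketch of the TF 2.x spellings of those TF 1.x names (an assumed mapping; the estimator docs the report cites may differ): the Keras schedule tracks its own step count, so `tf.train.get_global_step` is not needed.

```python
import tensorflow as tf

# tf.train.AdamOptimizer      -> tf.keras.optimizers.Adam
# tf.train.exponential_decay  -> tf.keras.optimizers.schedules.ExponentialDecay
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=10000, decay_rate=0.96)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
print(float(lr_schedule(0)))  # ~0.1 at step 0
```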
tensorflow/tensorflow
Keras fit yields incorrect results when using a custom loss function
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab
- TensorFlow version (use command below): 2.0.0

Describe the current behavior: when using a custom loss function, Keras `fit` produces incorrect results.

Code to reproduce the issue: see this Colab notebook.
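The notebook is not reproduced here, but a common cause of this symptom (an assumption on my part, not confirmed by the report) is a custom loss that reduces over the wrong axes. A custom loss should return one value per sample, shape `(batch,)`, which Keras then averages; a sketch comparing a correctly shaped custom MSE against the built-in:

```python
import tensorflow as tf

def custom_mse(y_true, y_pred):
    # Reduce over the feature axis only, leaving shape (batch,). Reducing
    # over all axes changes how sample weighting and batch averaging apply.
    return tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)

y_true = tf.constant([[0.0, 1.0], [1.0, 1.0]])
y_pred = tf.constant([[0.0, 0.0], [1.0, 0.0]])
print(custom_mse(y_true, y_pred).numpy())                     # [0.5 0.5]
builtin = tf.keras.losses.mean_squared_error(y_true, y_pred)
print(builtin.numpy())                                        # [0.5 0.5]
```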
tensorflow/tensorflow
Unable to get layer by name on custom layer and Lambda layer
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS Mojave
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): pip install tensorflow==2.0.0
- TensorFlow version (use command below): 2.0.0
- Python version: 3.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior: `get_layer` raises an exception: `ValueError: No such layer: embedding`.
Describe the expected behavior: should pass and not raise an exception.

Code to reproduce the issue:

```python
import tensorflow as tf


class L2NormalizeLayer(tf.keras.layers.Layer):
    def __init__(self, name='normalize', **kwargs):
        super(L2NormalizeLayer, self).__init__(name=name, **kwargs)

    def call(self, inputs):
        return tf.keras.backend.l2_normalize(inputs, axis=1)

    def get_config(self):
        config = super(L2NormalizeLayer, self).get_config()
        return config


shape = (224, 224, 3)

# functional model
base_model2 = tf.keras.applications.MobileNetV2(
    include_top=False, weights='imagenet', input_shape=shape)
inputs = tf.keras.Input(shape=shape, name='input')
x = base_model2(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(256, activation='relu')(x)
y = L2NormalizeLayer(name='embedding')(x)
# the Lambda variant fails the same way:
# y = tf.keras.layers.Lambda(
#     lambda k: tf.keras.backend.l2_normalize(k, axis=1), name='embedding')(x)
output = tf.keras.layers.Dense(2, activation='softmax', name='prob')(x)
model2 = tf.keras.Model(inputs=inputs, outputs=output)
```

After training the model I would like to load it and extract 'prob' together with 'embedding':

```python
tf.keras.models.save_model(model2, 'model.h5')
model_l2 = tf.keras.models.load_model('model.h5')
model_load = tf.keras.Model(
    inputs=model_l2.input,
    outputs=[model_l2.get_layer(layer_name).output
             for layer_name in ['prob', 'embedding']])
```

Other info / logs:

```
... Your CPU supports instructions that this TensorFlow binary was not
compiled to use: AVX2 FMA
WARNING:tensorflow: No training configuration found in save file: the model
was *not* compiled. Compile it manually.
Traceback (most recent call last):
  File "save2.py", line 36, in <module>
    outputs=[model_l2.get_layer(layer_name).output
  File "/Users/michallukac/env/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 539, in get_layer
    raise ValueError('No such layer: ' + name)
ValueError: No such layer: embedding
```
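One mitigation (a sketch under the assumption that the named layer is actually connected to the saved model's outputs): pass `custom_objects` to `load_model` so the custom layer class is rebuilt under its original name. The tiny architecture and paths here are illustrative, not the reporter's:

```python
import os
import tempfile

import tensorflow as tf


class L2NormalizeLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        # tf.math.l2_normalize used instead of the backend alias, for
        # portability across TF/Keras versions.
        return tf.math.l2_normalize(inputs, axis=1)


inp = tf.keras.Input(shape=(4,), name='input')
emb = L2NormalizeLayer(name='embedding')(inp)
out = tf.keras.layers.Dense(2, activation='softmax', name='prob')(emb)
model = tf.keras.Model(inp, out)

path = os.path.join(tempfile.mkdtemp(), 'model.h5')
model.save(path)

# Without custom_objects, load_model cannot deserialize L2NormalizeLayer.
loaded = tf.keras.models.load_model(
    path, custom_objects={'L2NormalizeLayer': L2NormalizeLayer})
print(loaded.get_layer('embedding').name)  # embedding
```

Note that in the repro above the 'embedding' branch never feeds the model's output, so it is dropped from the serialized graph; connecting it (as in this sketch) or exporting it as a second output is part of the fix.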
tensorflow/tensorflow
Crash with MultiWorkerMirroredStrategy Keras example from docs
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. Tag: bug_template.

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution: Ubuntu 18.04
- TensorFlow installed from (source or binary): pip install tensorflow-gpu
- TensorFlow version (use command below): 2.0
- Python version: 3.7.3
- CUDA/cuDNN version: 10 / 7.6.4.38
- GPU model and memory: Tesla P100, 16 GB

Describe the current behavior: I can't get the tutorial described here to run. The example crashes with the following TF_CONFIG:

```python
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {'worker': ['server1:23531', 'server2:41660']},
    'task': {'type': 'worker', 'index': 0}
})
```

and on the other machine:

```python
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {'worker': ['server1:23531', 'server2:41660']},
    'task': {'type': 'worker', 'index': 1}
})
```

When the script starts processing the first epoch, it crashes. I tested on a single machine and it works.

Describe the expected behavior: doesn't crash.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

```python
#!/usr/bin/env python
# coding: utf-8
# Multi-worker training
import os
import json

import tensorflow as tf
import tensorflow_datasets as tfds

tfds.disable_progress_bar()

BUFFER_SIZE = 10000
BATCH_SIZE = 64
NUM_WORKERS = 2
# Here the batch size scales up by the number of workers, since
# tf.data.Dataset.batch expects the global batch size. Previously we used
# 64, and now this becomes 128.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS

# Define the TF_CONFIG environment variable. This gives the locations and
# ports available on the servers to perform training.
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {'worker': ['server1:23531', 'server2:41660']},
    'task': {'type': 'worker', 'index': 1}
})

# This needs to be called at program startup.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()


def make_dataset_unbatched():
    # Scale the MNIST data from [0, 255] to [0, 1].
    def scale(image, label):
        image = tf.cast(image, tf.float32)
        image /= 255
        return image, label

    datasets, info = tfds.load(name='mnist', with_info=True,
                               as_supervised=True)
    return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)


def build_and_compile_cnn_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu',
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        metrics=['accuracy'])
    return model


# Define a strategy and distribute training. Dataset creation and model
# building/compiling need to be within strategy.scope().
with strategy.scope():
    train_datasets = make_dataset_unbatched().batch(GLOBAL_BATCH_SIZE)
    multi_worker_model = build_and_compile_cnn_model()

multi_worker_model.fit(x=train_datasets, epochs=3)
```

Other info / logs (abridged):

```
2019-11-04 16:59:37 ... successfully opened dynamic library libcublas.so.10.0
2019-11-04 16:59:38 ... successfully opened dynamic library libcudnn.so.7
2019-11-04 16:59:38 W .../base_collective_executor.cc:216
BaseCollectiveExecutor::StartAbort Internal: Collective Op ... has group_size
8 and group_key 2 but that group has size 4. Additional GRPC error
information: "Error received from peer ..." grpc_status: 13
[[node Allreduce_1/CollectiveReduce_1 (replica_2/strided_slice_7)]]
(the same StartAbort message repeats for several other nodes)
2019-11-04 16:59:40 E .../cuda_dnn.cc:329 Could not create cudnn handle:
CUDNN_STATUS_INTERNAL_ERROR  (repeated six times)
Traceback (most recent call last):
  File "multi_worker_train.py", line 83, in <module>
    multi_worker_model.fit(x=train_datasets, epochs=3)
  ...
tensorflow.python.framework.errors_impl.InternalError: 3 root error(s) found.
  (0) Internal: Collective Op ... has group_size 8 and group_key 2 but that
      group has size 4 ...
  (1) Internal: (same)
  (2) Internal: (same)
0 successful operations. 2 derived errors ignored.
  [[op inference_distributed_function_2500]]
Function call stack: distributed_function -> distributed_function -> distributed_function
2019-11-04 16:59:40 W .../generator_dataset_op.cc:102 Error occurred when
finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled
2019-11-04 16:59:40 W .../eager/context.cc:290 Unable to destroy server_
object, so releasing instead. Servers don't support clean shutdown.
```
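The "group_size 8 ... but that group has size 4" abort suggests the two workers disagree about how many devices participate in the collective (for example, a different GPU count or a different cluster spec on each machine). A minimal consistency sketch for the two TF_CONFIGs (hostnames are the placeholders from the report; only `task.index` may differ between machines):

```python
import json
import os

cluster = {'worker': ['server1:23531', 'server2:41660']}

def tf_config_for(index):
    # Both machines must serialize the *same* cluster spec; only
    # task.index differs (0 on the first machine, 1 on the second).
    return json.dumps({'cluster': cluster,
                       'task': {'type': 'worker', 'index': index}})

os.environ['TF_CONFIG'] = tf_config_for(0)
parsed = json.loads(os.environ['TF_CONFIG'])
print(len(parsed['cluster']['worker']))  # 2
```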
tensorflow/tensorflow
java.lang.IllegalArgumentException: Invalid output Tensor index: 1
Bug
I've trained my own model for object detection with TensorFlow, and I got it working with TensorFlow Mobile for Android. Now, since TensorFlow Lite has been released and is going to replace Mobile in the future, I want to start working with it. The TensorFlow team provides a demo of TFLite for object detection, so I tried to get it working with my model, but I get the error in the title. Here's the logcat:

```
java.lang.IllegalArgumentException: Invalid output Tensor index: 1
    at org.tensorflow.lite.NativeInterpreterWrapper.getOutputTensor(NativeInterpreterWrapper.java:308)
    at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:164)
    at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:296)
    at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:194)
    at org.tensorflow.lite.examples.detection.DetectorActivity$2.run(DetectorActivity.java:181)
```
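On the Python side one can check how many output tensors the converted model actually exposes; the exception means the Java demo requested output index 1 while the model has only one output (the detection demo expects the multiple post-processing outputs of an object-detection export). A sketch with a stand-in single-output model, not the reporter's:

```python
import tensorflow as tf

# Stand-in single-output model (the reporter's detection model is not
# available); convert it and count the output tensors.
model = tf.keras.Sequential([tf.keras.Input(shape=(3,)),
                             tf.keras.layers.Dense(2)])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
# Output index 1 is invalid whenever this list has length 1.
print(len(interpreter.get_output_details()))  # 1
```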
tensorflow/tensorflow
assert_shapes: broken code in documentation
Bug
URL(s) with the issue:
Description of issue (what needs changing): the source code in the example is incorrect. It reads:

```python
tf.assert_shapes([
  (x: ('N', 'Q')),
  (y: ('N', 'D')),
  (param: ('Q',)),
  (scalar: ()),
])
```

It should be:

```python
tf.assert_shapes([
  (x, ('N', 'Q')),
  (y, ('N', 'D')),
  (param, ('Q',)),
  (scalar, ()),
])
```

Note that `(x: ...)` is not allowed in Python to form a 2-tuple.
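For completeness, a runnable version of the corrected example (the concrete sizes here are chosen arbitrarily):

```python
import tensorflow as tf

n, q, d = 3, 2, 4
x = tf.zeros([n, q])
y = tf.zeros([n, d])
param = tf.zeros([q])
scalar = tf.constant(1.0)

# Each entry is a (tensor, shape) 2-tuple, written with a comma, not a colon.
tf.debugging.assert_shapes([
    (x, ('N', 'Q')),
    (y, ('N', 'D')),
    (param, ('Q',)),
    (scalar, ()),
])
print('shapes consistent')
```

The call returns silently when the symbolic dimensions can be bound consistently and raises when they conflict.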
tensorflow/tensorflow
AutoGraph "unexpected indent" in tf-nightly-gpu 2.1.0.dev20191103
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tf-nightly-gpu 2.1.0.dev20191103
- Python version: 3.5.2
- CUDA/cuDNN version: 10.0 / 7.1
- GPU model and memory: GTX 1080 Ti

Describe the current behavior: after upgrading to tf-nightly-gpu 2.1.0.dev20191103 from tensorflow-gpu 2.0.0, I obtain this error when running my code:

```
WARNING:tensorflow: AutoGraph could not transform <...> and will run it
as-is. Please report this to the TensorFlow team. When filing the bug, set
the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach
the full output. Cause: unexpected indent (line 36)
```

I did `export AUTOGRAPH_VERBOSITY=10` but did not observe any other kind of output than the above. It says "unexpected indent", so I guess something changed in the parsing of Python code when building the graph. The problem does not occur on tensorflow-gpu 2.0.0. I installed tf-nightly-gpu 2.1.0.dev20191103 because I need to be able to load weights with `load_weights(pretrained_weights, by_name=True, skip_mismatch=True)`, which is not available in 2.0.0. (There is another bug with this function that I will report in a separate issue.)

Describe the expected behavior: as in TF 2.0.0, no AutoGraph warning.

Code to reproduce the issue: unfortunately my code has a lot of dependencies and I am unable to create a minimal reproducible example, but from the warning message I guess it's easy enough to check in the source code.
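If the warning points at one specific function, a stopgap (my suggestion, not from the report) is to exclude that function from conversion explicitly. It then runs unconverted inside the graph, which is what the warning says happens anyway, but without the noise:

```python
import tensorflow as tf

@tf.function
@tf.autograph.experimental.do_not_convert
def double(x):
    # AutoGraph skips this body entirely, so its parser never sees the
    # (mis-detected) indentation.
    return x * 2

print(int(double(tf.constant(3))))  # 6
```

Note that skipped functions lose AutoGraph rewrites of Python control flow (e.g. data-dependent `if`/`for`), so this only suits functions that do not need them.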
tensorflow/tensorflow
Docker/Kubernetes memory limit not respected (OOMKilled) when deployed to GCP
Bug
check python python version 3 5 6 python branch python build version default aug 26 2018 21 41 56 python compiler version gcc 7 3 0 python implementation cpython check os platform os linux os kernel version 1 smp tue jul 2 22 58 16 utc 2019 os release version 4 9 184 linuxkit os platform linux 4 9 184 linuxkit x86 64 with debian buster sid linux distribution debian buster sid linux os distribution debian buster sid mac version uname uname result system linux node 413655a3a81f release 4 9 184 linuxkit version 1 smp tue jul 2 22 58 16 utc 2019 machine x86 64 processor x86 64 architecture 64bit machine x86 64 be we in docker yes compiler c ubuntu 7 4 0 1ubuntu1 18 04 1 7 4 0 copyright c 2017 free software foundation inc this be free software see the source for copy condition there be no warranty not even for merchantability or fitness for a particular purpose check pip numpy 1 15 2 numpydoc 0 9 1 protobuf 3 9 1 tensorflow 1 14 0 tensorflow estimator 1 14 0 tensorflow hub 0 6 0 tensorflow serve api 1 14 0 check for virtualenv false tensorflow import tf version version 1 14 0 tf version git version v1 14 0 0 g87989f6 tf version compiler version 4 8 4 sanity check array 1 dtype int32 65 find library libpthread so 0 0 search hundred of line follow here env ld library path be unset dyld library path be unset nvidia smi tf env collect sh 1 line 147 nvidia smi command not find cuda lib tensorflow instal from info name tensorflow version 1 14 0 summary tensorflow be an open source machine learning framework for everyone home page author email license apache 2 0 location root miniconda3 lib python3 5 site package require by witwidget tensorflow serve api seldon core python version major minor micro releaselevel serial 3 5 6 final 0 bazel version build label 0 19 0 build time mon oct 29 14 35 30 2018 1540823730 build timestamp 1540823730 build timestamp as int 1540823730 describe the current behavior I m use kubeflow to deploy a simple kera tf model training job two lstm layer 
28000x150x27 input from CSV. The Dockerfile it produces is:

    FROM gcr.io/deeplearning-platform-release/tf-cpu.1-14
    WORKDIR /python_env
    COPY requirements.txt .
    RUN python3 -m pip install -r requirements.txt
    COPY . .

and the output at the top of this issue is from within the resulting container. The container has a CPU limit of 7 cores and a memory limit of 26Gi; the host node has 8 cores and 30Gi. By the 3rd of 20 epochs (after about 50 minutes), the container is killed by Kubernetes with OOMKilled. Looking at a graph of memory usage, it increases linearly over the 50 minutes and evidently ignores the limit. I have previously trained this model on my laptop (8 cores, 16 GB RAM, 20 epochs) without issue, so this looks to be related to the Docker or Kubernetes environment.

Describe the expected behavior: memory limits are respected.

Code to reproduce the issue: the above Dockerfile plus:

    import tensorflow as tf
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Dropout, LSTM
    from urllib.request import urlopen
    import numpy as np
    import importlib
    import os

    # here's how data is loaded
    dataset = np.loadtxt(raw_data, delimiter=",")

    # build model
    model = Sequential()
    model.add(LSTM(32, return_sequences=True))
    model.add(LSTM(32))
    model.add(Dense(32, activation="softmax"))
    opt = tf.keras.optimizers.Adam(lr=1e-3, decay=1e-5)
    model.compile(loss="sparse_categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
    # training data has been loaded via np.loadtxt
    model.fit(training_dataset, training_labels, epochs=20, shuffle=True,
              validation_data=(validation_dataset, validation_labels))
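One common way to keep peak memory bounded regardless of CSV size is to stream the file in batches instead of loading it all at once with np.loadtxt. The sketch below is a generic illustration (the function and file names are mine, not from this issue); with Keras it would typically be wrapped in tf.data.Dataset.from_generator.

```python
import csv

def csv_batches(path, batch_size):
    """Yield fixed-size batches of parsed rows; peak memory is
    proportional to batch_size rather than to the file size."""
    with open(path, newline="") as f:
        batch = []
        for row in csv.reader(f):
            batch.append([float(v) for v in row])
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:  # final partial batch
            yield batch
```

This does not change the container's limit behavior, but it removes the single large allocation that grows with the dataset.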
tensorflow/tensorflow
Model restore error/conflict in TensorFlow
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): sudo pip3 install tensorflow-gpu==2.0.0. Python version: 3.6. CUDA/cuDNN version: 10.0.130 / 7.6.2.

Describe the current behavior: I saved one model with more than two layers using model.save_weights and want to restore its weights and use only the front conv layers. But when I use model.load_weights I get two different results: (1) when I load weights only for the front conv layers, one big model loads successfully but another small model cannot load; (2) the big model loads successfully but the weights of the last layer are wrong, while the weights of every other layer are right.

Describe the expected behavior: I want to load a ckpt into a small model successfully.

Code to reproduce the issue. Here is my big model definition:

    import math
    import tensorflow as tf

    NUM_CLASSES = 10

    def swish(x):
        return x * tf.keras.activations.sigmoid(x)

    def round_filters(filters, multiplier):
        depth_divisor = 8
        min_depth = None
        min_depth = min_depth or depth_divisor
        filters = filters * multiplier
        new_filters = max(min_depth, int(filters + depth_divisor / 2) // depth_divisor * depth_divisor)
        if new_filters < 0.9 * filters:
            new_filters += depth_divisor
        return int(new_filters)
def round repeat repeat multiplier if not multipli return repeat return int math ceil multipli repeat def seblock input input channel ratio 0 25 num reduce filter max 1 int input channel ratio branch tf keras layer globalaveragepooling2d input branch tf keras layers lambda lambda branch tf expand dim input branch axis 1 branch branch tf keras backend expand dim branch 1 branch tf keras backend expand dim branch 1 branch tf keras layers lambda lambda branch tf expand dim input branch axis 1 branch branch tf keras layer conv2d filter num reduce filter kernel size 1 1 stride 1 padding same branch branch swish branch branch tf keras layer conv2d filter input channel kernel size 1 1 stride 1 padding same branch branch tf keras activations sigmoid branch output input branch return output def mbconv in channel out channel expansion factor stride k drop connect rate input training false x tf keras layer conv2d filter in channel expansion factor kernel size 1 1 stride 1 padding same input x tf keras layer batchnormalization x training training x swish x x tf keras layers depthwiseconv2d kernel size k k strides stride padding same x x tf keras layer batchnormalization x training training x seblock x in channel expansion factor x swish x x tf keras layer conv2d filter out channel kernel size 1 1 stride 1 padding same x x tf keras layer batchnormalization x training training if stride 1 and in channel out channel if drop connect rate x tf keras layers dropout rate drop connect rate x training training x tf keras layer add x input return x def build mbconv block input in channel out channel layer stride expansion factor k drop connect rate training x input for I in range layer if I 0 x mbconv in channel in channel out channel out channel expansion factor expansion factor stride stride k k drop connect rate drop connect rate input x training training else x mbconv in channel out channel out channel out channel expansion factor expansion factor stride 1 k k drop connect rate drop 
connect rate input x training training return x def efficientnet input width coefficient depth coefficient dropout rate drop connect rate 0 2 training false feature x tf keras layer conv2d filter round filter 32 width coefficient kernel size 3 3 stride 2 padding same input x tf keras layer batchnormalization x training training x swish x x build mbconv block x in channel round filter 32 width coefficient out channel round filter 16 width coefficient layer round repeat 1 depth coefficient stride 1 expansion factor 1 k 3 drop connect rate drop connect rate training training feature append x x build mbconv block x in channel round filter 16 width coefficient out channel round filter 24 width coefficient layer round repeat 2 depth coefficient stride 1 expansion factor 6 k 3 drop connect rate drop connect rate training training feature append x x build mbconv block x in channel round filter 24 width coefficient out channel round filter 40 width coefficient layer round repeat 2 depth coefficient stride 2 expansion factor 6 k 5 drop connect rate drop connect rate training training feature append x x build mbconv block x in channel round filter 40 width coefficient out channel round filter 80 width coefficient layer round repeat 3 depth coefficient stride 2 expansion factor 6 k 3 drop connect rate drop connect rate training training feature append x x build mbconv block x in channel round filter 80 width coefficient out channel round filter 112 width coefficient layer round repeat 3 depth coefficient stride 1 expansion factor 6 k 5 drop connect rate drop connect rate training training feature append x x build mbconv block x in channel round filter 112 width coefficient out channel round filter 192 width coefficient layer round repeat 4 depth coefficient stride 2 expansion factor 6 k 5 drop connect rate drop connect rate training training feature append x x build mbconv block x in channel round filter 192 width coefficient out channel round filter 320 width coefficient 
layer round repeat 1 depth coefficient stride 1 expansion factor 6 k 3 drop connect rate drop connect rate training training feature append x x tf keras layer conv2d filter round filter 1280 width coefficient kernel size 1 1 stride 1 padding same x x tf keras layer batchnormalization x training training x swish x x tf keras layer globalaveragepooling2d x x tf keras layers dropout rate dropout rate x training training x tf keras layer dense unit 1 activation tf keras activation softmax x return x feature def efficient net b0 input training return efficientnet input width coefficient 1 0 depth coefficient 1 0 dropout rate 0 2 drop connect rate 0 2 training training def up sample input training true x tf keras layer upsampling2d input x tf keras layer batchnormalization x training training x tf keras layers relu x return x def biggermodel input outc training true feature efficient net b0 input input training training 1 2 1 4 1 8 1 8 1 16 output for I name in enumerate feature x feature I if x shape 1 input shape 1 4 continue while x shape 1 input shape 1 4 x up sample x training output append x quater re tf keras layers concatenate output quater re tf keras layer conv2d 512 3 1 same activation tf nn relu quater res quater re tf keras layer conv2d 512 3 1 same activation tf nn relu quater res quater re out tf keras layer conv2d outc 1 1 same name quater activation none quater re half re up sample quater re training half re tf keras layer conv2d 512 3 1 same activation tf nn relu half re half re tf keras layer conv2d 512 3 1 same activation tf nn relu half re half re out tf keras layer conv2d outc 1 1 same name half activation none half re if training return quater re out half re out else return quater re out and here be my small model definition def smallmodel input outc training true quater re tf keras layer conv2d 512 3 1 same activation tf nn relu input quater re tf keras layer conv2d 512 3 1 same activation tf nn relu quater res quater re out tf keras layer conv2d 
outc 1 1 same activation none quater re half re tf keras layer conv2d 512 3 1 same activation tf nn relu quater re half re tf keras layer conv2d 512 3 1 same activation tf nn relu half re half re out tf keras layer conv2d outc 1 1 same activation none half re if training return quater re out half re out else return quater re out about big model first I save weight and print layer name quater weight if name main import os os environ cuda visible device 1 input tf keras input shape 224 224 3 name modelinput output biggermodel input outc 18 1 training true model tf keras model input output model summary model save weight model test test model load weight model test test print model get layer quater get weight 0 0 0 0 4 and I get this image and then I restore weight by set parameter training false and print quater layer weight if name main import os os environ cuda visible device 1 input tf keras input shape 224 224 3 name modelinput output biggermodel input outc 18 1 training false model tf keras model input output model summary model save weight model test test model load weight model test test print model get layer quater get weight 0 0 0 0 4 and get result like this image we can see that this two output be different but I have checkout all other layer weight be same resutlt so it s really weird for small model as like in big model I have do first save weight and print quater layer weight if name main import os os environ cuda visible device 1 input tf keras input shape 224 224 3 name modelinput output smallmodel input outc 18 1 training true model tf keras model input output model summary model save weight model test test model load weight model test test print model get layer quater get weight 0 0 0 0 4 and result be image then I try to restore this and set parameter training false if name main import os os environ cuda visible device 1 input tf keras input shape 224 224 3 name modelinput output smallmodel input outc 18 1 training false model tf keras model input 
output model summary model save weight model test test model load weight model test test print model get layer quater get weight 0 0 0 0 4 I get this error image how can I fix this other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach
tensorflow/tensorflow
NotImplementedError: Layers with arguments in `__init__` must override `get_config`
Bug
Related to #32662.

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS Platform and Distribution: Google Compute Engine (GPU), running Intel(R) Xeon(R) CPU @ 2.30GHz. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.0.0. Python version: 3.6.

Describe the current behavior: the model is unable to be saved using model.save.

Describe the expected behavior: the model is saved successfully.

Code to reproduce the issue: code to reproduce this issue can be found on the related StackOverflow page. Another good reproduction of this (since I'm using an Attention layer) can be found here: [issuecomment-542931948].

Other info / logs: log.txt
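For reference, the serialization contract behind this error: any layer whose __init__ takes arguments should return those arguments from get_config so model.save can rebuild it. A pure-Python sketch of the pattern (the class name is hypothetical; in a real tf.keras.layers.Layer subclass, get_config would also merge super().get_config()):

```python
class AttentionLike:
    """Sketch of the get_config/from_config round-trip Keras expects."""

    def __init__(self, units, use_scale=False):
        self.units = units
        self.use_scale = use_scale

    def get_config(self):
        # Return every __init__ argument so the layer can be rebuilt.
        return {"units": self.units, "use_scale": self.use_scale}

    @classmethod
    def from_config(cls, config):
        return cls(**config)
```

Without the get_config override, Keras has no way to serialize the constructor arguments and raises the NotImplementedError reported here.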
tensorflow/tensorflow
model.summary() / input / output not working properly with the subclassing method of tf.keras
Bug
Edit: after further research I find that in subclassing, many of the model methods and attributes are not working; examples are input, names, output, names, etc.

I am using TF2. I am trying to build a model via the subclassing method. In the Sequential model, methods and attributes such as summary, input, output work:

    import tensorflow as tf
    import tensorflow.keras as keras

    model = keras.Sequential([
        keras.layers.Dense(4, input_shape=(None, 3)),
        keras.layers.Dense(3, activation='relu'),
        keras.layers.Dense(2, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    print(model.summary())
    print(model.input_shape)
    print(model.output_shape)

But in subclassing, those methods and attributes are not working properly:

    class MLP(tf.keras.Model):
        def __init__(self):
            super().__init__()
            self.fc1 = tf.keras.layers.Dense(4, input_shape=(None, 3))
            self.fc2 = tf.keras.layers.Dense(3, activation='relu')
            self.fc3 = tf.keras.layers.Dense(2, activation='softmax')

        def call(self, x):
            x = self.fc1(x)
            x = self.fc2(x)
            x = self.fc3(x)
            return x

    model = MLP()
    model.build((1, 3))
    print(model.summary())
    print(model.input_shape)
    print(model.output_shape)

The summary output in the subclassed model doesn't show the output shapes:

    Layer (type)        Output Shape    Param #
    dense_11 (Dense)    multiple        16
    dense_12 (Dense)    multiple        15
    dense_13 (Dense)    multiple        8
    Total params: 39
    Trainable params: 39
    Non-trainable params: 0

And model.input_shape and model.output_shape return this error:

    AttributeError: The layer has never been called and thus has no defined output shape.
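The "multiple" entries reflect that a subclassed model's shapes are only known after a concrete call: summary() essentially performs the shape propagation sketched below, which it cannot do until the input shape is pinned down. This is a pure-Python illustration for Dense layers, not Keras internals:

```python
def dense_output_shape(input_shape, units):
    # Dense maps (..., in_features) -> (..., units)
    return input_shape[:-1] + (units,)

def propagate_shapes(input_shape, layer_units):
    """Propagate an input shape through a stack of Dense layers,
    returning each layer's output shape in order."""
    shapes = []
    shape = input_shape
    for units in layer_units:
        shape = dense_output_shape(shape, units)
        shapes.append(shape)
    return shapes
```

A common workaround is to call the model once on real data (or on tf.keras.Input) so that these shapes become defined before calling summary().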
tensorflow/tensorflow
Sigmoid is ignored when calculating loss via model.fit
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information: Windows, version v2.0.0-rc2-26-g64c3d382ca 2.0.0.

Describe the current behavior: I get an incorrect loss from the history returned by calling model.fit. You can see the correct and incorrect results by changing the parameter `error` in my code.

Code to reproduce the issue:

    import math
    import numpy as np
    import tensorflow as tf

    error = True
    n_features = 100
    batch = 2

    # model
    x = tf.keras.Input(shape=(n_features,), dtype=tf.float32)
    w = tf.Variable([1.0] * n_features)
    b = tf.Variable(1.0)
    z = tf.reduce_sum(w * x, axis=1, keepdims=True) + b
    # loss is incorrect if error is True
    if error:
        y = tf.sigmoid(z)
    else:
        y = 1.0 / (1.0 + math.e ** (-z))
    m = tf.keras.Model(inputs=x, outputs=y)

    # loss
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
    loss = tf.keras.losses.BinaryCrossentropy()
    m.compile(optimizer=optimizer, loss=loss)

    # train dataset
    x = np.array([[1.0 for i in range(n_features)]] * batch, dtype=np.float32)
    y = np.array([[0.0]] * batch, dtype=np.float32)

    # get correct loss
    logits = m(x)
    l = loss(y, logits)

    # get incorrect loss
    history = m.fit(x, y)

    # history.history['loss'] != l.numpy()
    print(history.history)
    print(l.numpy())
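A plausible explanation (my assumption, not confirmed in this report): when the model output comes directly from tf.sigmoid, Keras's binary cross-entropy can switch to the numerically stable from_logits formulation, while the hand-rolled math.e expression goes through the clipped-probability path. The two formulations, sketched in pure Python:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_from_prob(y, p, eps=1e-7):
    # Probability path: the probability is clipped into [eps, 1-eps]
    # before taking logs, which caps the achievable loss.
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

def bce_from_logits(y, z):
    # Numerically stable logits path (the from_logits=True formulation).
    return max(z, 0.0) - z * y + math.log(1.0 + math.exp(-abs(z)))
```

For moderate logits the two paths agree; for large |z| the clipped-probability path saturates, which is the kind of mismatch between the two computed losses reported here.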
tensorflow/tensorflow
Insert Windows command line in the tutorial
Bug
URL(s) with the issue: [example]. Description of issue (what needs changing): the tutorial uses POSIX commands like `ls {checkpoint_dir}`. Unfortunately I am on Windows, so those commands do not work. Is there no way to condition the command on the OS (`dir {checkpoint_dir}`)? The alternative would be to use Python straightaway:

    onlyfiles = [f for f in os.listdir(mypath) if os.path.isfile(os.path.join(mypath, f))]
    print(onlyfiles)
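Wrapping that suggestion in a function gives a portable replacement for the tutorial's `ls {checkpoint_dir}` that behaves identically on Windows and Linux (the function name is illustrative):

```python
import os

def list_checkpoints(checkpoint_dir):
    """Return the files in checkpoint_dir, sorted; avoids the
    ls-vs-dir platform split entirely."""
    return sorted(
        f for f in os.listdir(checkpoint_dir)
        if os.path.isfile(os.path.join(checkpoint_dir, f))
    )
```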
tensorflow/tensorflow
tf.compat purpose may be misstated
Bug
URL(s) with the issue: [link]. Description of issue (what needs changing): the docs say "Functions for Python 2 vs. 3 compatibility", but on StackOverflow they told me that the documentation for the module should actually be changed. Originally tf.compat only held functions for that purpose, and it was like that until 1.13 (see all module documentation); however, it was later repurposed for TensorFlow version compatibility.
tensorflow/tensorflow
parallel_for: No converter defined for SpaceToBatchND
Bug
computation of jacobian with pfor do not work for tf keras layers convolution2d with dilation rate 1 because there be no converter implement for spacetobatchnd tf version 2 1 0 dev20191030 python import tensorflow as tf nlat 10 nlon 20 n channel in 1 x tf one 1 nlat nlon n channel in layer1 tf keras layers convolution2d 32 kernel size 3 dilation rate 1 layer2 tf keras layers convolution2d 32 kernel size 3 dilation rate 2 with tf gradienttape persistent true as gt gt watch x y1 layer1 x y2 layer2 x j gt jacobian y1 x work j2 gt jacobian y2 x fail with follow error python valueerror traceback most recent call last pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python eager backprop py in jacobian self target source unconnected gradient parallel iteration experimental use pfor 1112 output pfor op pfor loop fn target size 1113 parallel iteration parallel iteration 1114 except valueerror as err pf nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python op parallel for control flow op py in pfor loop fn iter parallel iteration 188 f function defun f 189 return f 190 pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python eager function py in call self args kwargs 2340 with self lock 2341 graph function args kwargs self maybe define function args kwargs 2342 return graph function filter call args kwargs pylint disable protect access pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python eager function py in maybe define function self args kwargs 2675 self function cache miss add call context key 2676 graph function self create graph function args kwargs 2677 self function cache primary cache key graph function pf nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python eager function py in create graph function self args kwargs override flat arg shape 2565 
override flat arg shape override flat arg shape 2566 capture by value self capture by value 2567 self function attribute pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python framework func graph py in func graph from py func name python func args kwargs signature func graph autograph autograph option add control dependency arg name op return value collection capture by value override flat arg shape 957 958 func output python func func args func kwargs 959 pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python framework func graph py in wrapper args kwargs 947 if hasattr e ag error metadata 948 raise e ag error metadata to exception e 949 else valueerror in convert code relative to pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python op parallel for control flow op py 183 f return pfor impl loop fn iter parallel iteration parallel iteration control flow op py 256 pfor impl output append converter convert loop fn output pfor py 1280 convert output self convert helper y pfor py 1460 convert helper y op type y op convert input valueerror no converter define for spacetobatchnd name loop body spacetobatchnd op spacetobatchnd input loop body reshape 2 input loop body spacetobatchnd block shape input loop body spacetobatchnd padding attr key t value type dt float attr key tblock shape value type dt int32 attr key tpadding value type dt int32 input wrappedtensor t be stack true be sparse stack false wrappedtensor t be stack false be sparse stack false wrappedtensor t be stack false be sparse stack false either add a converter or set op conversion fallback to while loop true which may run slow during handling of the above exception another exception occur valueerror traceback most recent call last in 1 j2 gt jacobian y2 x fail pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python eager 
backprop py in jacobian self target source unconnected gradient parallel iteration experimental use pfor 1119 jacobian computation vectorization can be disable by set 1120 experimental use pfor to false 1121 sys exc info 2 1122 else 1123 if context execute eagerly and not self persistent pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package six py in reraise tp value tb 690 value tp 691 if value traceback be not tb 692 raise value with traceback tb 693 raise value 694 finally pf nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python eager backprop py in jacobian self target source unconnected gradient parallel iteration experimental use pfor 1111 try 1112 output pfor op pfor loop fn target size 1113 parallel iteration parallel iteration 1114 except valueerror as err 1115 six reraise pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python op parallel for control flow op py in pfor loop fn iter parallel iteration 187 if context execute eagerly or be under xla context 188 f function defun f 189 return f 190 191 pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python eager function py in call self args kwargs 2339 call a graph function specialize to the input 2340 with self lock 2341 graph function args kwargs self maybe define function args kwargs 2342 return graph function filter call args kwargs pylint disable protect access 2343 pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python eager function py in maybe define function self args kwargs 2674 2675 self function cache miss add call context key 2676 graph function self create graph function args kwargs 2677 self function cache primary cache key graph function 2678 return graph function args kwargs pf nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python eager function py in 
create graph function self args kwargs override flat arg shape 2564 arg name arg name 2565 override flat arg shape override flat arg shape 2566 capture by value self capture by value 2567 self function attribute 2568 tell the concretefunction to clean up its graph once it go out of pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python framework func graph py in func graph from py func name python func args kwargs signature func graph autograph autograph option add control dependency arg name op return value collection capture by value override flat arg shape 956 convert func 957 958 func output python func func args func kwargs 959 960 invariant func output contain only tensor compositetensor pf nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python framework func graph py in wrapper args kwargs 946 except exception as e pylint disable broad except 947 if hasattr e ag error metadata 948 raise e ag error metadata to exception e 949 else 950 raise valueerror in convert code relative to pfs nobackup home s sebsc miniconda3 envs tf2 env lib python3 7 site package tensorflow core python op parallel for control flow op py 183 f return pfor impl loop fn iter parallel iteration parallel iteration control flow op py 256 pfor impl output append converter convert loop fn output pfor py 1280 convert output self convert helper y pfor py 1460 convert helper y op type y op convert input valueerror no converter define for spacetobatchnd name loop body spacetobatchnd op spacetobatchnd input loop body reshape 2 input loop body spacetobatchnd block shape input loop body spacetobatchnd padding attr key t value type dt float attr key tblock shape value type dt int32 attr key tpadding value type dt int32 input wrappedtensor t be stack true be sparse stack false wrappedtensor t be stack false be sparse stack false wrappedtensor t be stack false be sparse stack false either add a converter or set op 
conversion_fallback_to_while_loop=True, which may run slower.

Encountered an exception while vectorizing the jacobian computation. Vectorization can be disabled by setting experimental_use_pfor to False.
tensorflow/tensorflow
InternalError: Blas SGEMM launch failed : m=10, n=1, k=4 [Op:Conv2D] thrown when training Keras model with train_on_batch
Bug
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04. TensorFlow installed from (source or binary): Docker image tensorflow/tensorflow:2.0.0-gpu-py3. TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38 2.0.0. Python version: 3.6.8. CUDA/cuDNN version: nvidia-smi 418.87.00, driver version 418.87.00, CUDA version 10.1. GPU model and memory: GeForce GTX 960M, 2004MiB.

Describe the current behavior: when training a Keras model with train_on_batch, an InternalError is thrown (see stack trace). Training the same model with fit works.

Describe the expected behavior: no InternalError thrown and the attached code successfully executes.

Code to reproduce the issue:

    mkdir tf-issue
    cp run.py tf-issue
    docker run --gpus all -it -v $(pwd)/tf-issue:/tf-issue tensorflow/tensorflow:2.0.0-gpu-py3 python3 /tf-issue/run.py

On my machine the above takes a few minutes when setting up TensorFlow, which is a bit ridiculous, but that's another issue. run.py:

    import tensorflow as tf
    import numpy as np
    from tensorflow.keras.layers import Input, Conv2D
    from tensorflow.keras.optimizers import Adam

    tf.keras.backend.set_image_data_format('channels_first')

    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(filters=1, kernel_size=3, strides=2, input_shape=(224, 224, 3)))
    model.add(tf.keras.layers.Conv2D(filters=1, kernel_size=3, strides=2))
    model.add(tf.keras.layers.Conv2D(filters=1, kernel_size=3, strides=2))
    model.add(tf.keras.layers.Conv2D(filters=1, kernel_size=3, strides=2))
    model.add(tf.keras.layers.Conv2D(filters=1, kernel_size=3, strides=2))
    model.add(tf.keras.layers.Conv2D(filters=1, kernel_size=3, strides=2))
    model.add(tf.keras.layers.Conv2D(filters=1, kernel_size=2, strides=1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy')
    model.summary()

    gpu = tf.test.is_gpu_available()
    print('GPU is available:', gpu)
    assert gpu

    batch_size = 10
    x = np.random.random((batch_size, 224, 224, 3))
    y = np.random.random((batch_size, 1, 1, 1))
    print('x', x.shape)
    print('y', y.shape)

    y_pred = model.predict(x)
    print('predict successful', y_pred.shape)

    print('begin training with fit')
    model.fit(x, y, epochs=10)
    print('fit successful')

    print('begin training with train_on_batch')
    for i in range(10):
        model.train_on_batch(x, y)
    print('on_batch successful')

Other info / logs — output and stack trace:

    Model: "sequential"
    Layer (type)         Output Shape           Param #
    conv2d (Conv2D)      (None, 111, 111, 1)    28
    conv2d_1 (Conv2D)    (None, 55, 55, 1)      10
    conv2d_2 (Conv2D)    (None, 27, 27, 1)      10
    conv2d_3 (Conv2D)    (None, 13, 13, 1)      10
    conv2d_4 (Conv2D)    (None, 6, 6, 1)        10
    conv2d_5 (Conv2D)    (None, 2, 2, 1)        10
    conv2d_6 (Conv2D)    (None, 1, 1, 1)        5
    Total params: 83
    Trainable params: 83
    Non-trainable params: 0

    GPU is available: True
    x (10, 224, 224, 3)
    y (10, 1, 1, 1)
    predict successful (10, 1, 1, 1)
    begin training with fit
    Train on 10 samples
    Epoch 1/10  10/10 - 1s 82ms/sample - loss: 0.6654
    Epoch 2/10  10/10 - 0s 870us/sample - loss: 0.6633
    Epoch 3/10  10/10 - 0s 882us/sample - loss: 0.6613
    Epoch 4/10  10/10 - 0s 1ms/sample - loss: 0.6594
    Epoch 5/10  10/10 - 0s 979us/sample - loss: 0.6574
    Epoch 6/10  10/10 - 0s 902us/sample - loss: 0.6555
    Epoch 7/10  10/10 - 0s 872us/sample - loss: 0.6536
    Epoch 8/10  10/10 - 0s 960us/sample - loss: 0.6517
    Epoch 9/10  10/10 - 0s 905us/sample - loss: 0.6498
    Epoch 10/10 10/10 - 0s 1ms/sample - loss: 0.6479
    fit successful
    begin training with train_on_batch
    (not compiled to use: AVX2 FMA)
    2019-11-01 13:45:13.958471: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2592000000 Hz
    2019-11-01 13:45:13.959294: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4595560 executing computations on platform Host. Devices:
    2019-11-01 13:45:13.959336: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
    2019-11-01 13:45:14.000846: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-11-01 13:45:14.001574: I
tensorflow compiler xla service service cc 168 xla service 0x45973c0 execute computation on platform cuda device 2019 11 01 13 45 14 001596 I tensorflow compiler xla service service cc 175 streamexecutor device 0 geforce gtx 960 m compute capability 5 0 2019 11 01 13 45 14 001719 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 01 13 45 14 002377 I tensorflow core common runtime gpu gpu device cc 1618 find device 0 with property name geforce gtx 960 m major 5 minor 0 memoryclockrate ghz 1 0975 pcibusid 0000 01 00 0 2019 11 01 13 45 14 002406 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 0 2019 11 01 13 45 14 002417 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 0 2019 11 01 13 45 14 002427 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 0 2019 11 01 13 45 14 002444 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 0 2019 11 01 13 45 14 002454 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 0 2019 11 01 13 45 14 002463 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 0 2019 11 01 13 45 14 002473 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2019 11 01 13 45 14 002522 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 01 13 45 14 003154 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have 
negative value 1 but there must be at least one numa node so return numa node zero 2019 11 01 13 45 14 003750 I tensorflow core common runtime gpu gpu device cc 1746 add visible gpu device 0 2019 11 01 13 45 14 003777 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 0 2019 11 01 13 45 14 004647 I tensorflow core common runtime gpu gpu device cc 1159 device interconnect streamexecutor with strength 1 edge matrix 2019 11 01 13 45 14 004660 I tensorflow core common runtime gpu gpu device cc 1165 0 2019 11 01 13 45 14 004667 I tensorflow core common runtime gpu gpu device cc 1178 0 n 2019 11 01 13 45 14 004779 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 01 13 45 14 005421 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 01 13 45 14 006043 I tensorflow core common runtime gpu gpu device cc 1304 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 1742 mb memory physical gpu device 0 name geforce gtx 960 m pci bus i d 0000 01 00 0 compute capability 5 0 2019 11 01 13 45 15 207106 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 01 13 45 15 207755 I tensorflow core common runtime gpu gpu device cc 1618 find device 0 with property name geforce gtx 960 m major 5 minor 0 memoryclockrate ghz 1 0975 pcibusid 0000 01 00 0 2019 11 01 13 45 15 207787 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 0 2019 11 01 13 45 15 207801 I tensorflow stream executor platform default dso loader cc 44 successfully 
open dynamic library libcubla so 10 0 2019 11 01 13 45 15 207813 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 0 2019 11 01 13 45 15 207824 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 0 2019 11 01 13 45 15 207836 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 0 2019 11 01 13 45 15 207847 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 0 2019 11 01 13 45 15 207859 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2019 11 01 13 45 15 207910 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 01 13 45 15 208535 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 01 13 45 15 209130 I tensorflow core common runtime gpu gpu device cc 1746 add visible gpu device 0 2019 11 01 13 45 15 209154 I tensorflow core common runtime gpu gpu device cc 1159 device interconnect streamexecutor with strength 1 edge matrix 2019 11 01 13 45 15 209161 I tensorflow core common runtime gpu gpu device cc 1165 0 2019 11 01 13 45 15 209168 I tensorflow core common runtime gpu gpu device cc 1178 0 n 2019 11 01 13 45 15 209276 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 01 13 45 15 209931 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so 
returning NUMA node zero
2019-11-01 13:45:15.210556: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/device:GPU:0 with 1742 MB memory) -> physical GPU (device: 0, name: GeForce GTX 960M, pci bus id: 0000:01:00.0, compute capability: 5.0)
2019-11-01 13:45:15.339203: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-11-01 13:45:17.264840: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-11-01 13:45:17.421437: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2019-11-01 13:45:17.421474: W tensorflow/stream_executor/stream.cc:1919] attempting to perform BLAS operation using StreamExecutor without BLAS support
Traceback (most recent call last):
  File "test_sample_weight.py", line 41, in <module>
    model.train_on_batch(x, y)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 973, in train_on_batch
    class_weight=class_weight, reset_metrics=reset_metrics)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 264, in train_on_batch
    output_loss_metrics=model._output_loss_metrics)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_eager.py", line 311, in train_on_batch
    output_loss_metrics=output_loss_metrics)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_eager.py", line 252, in _process_single_batch
    training=training)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_eager.py", line 127, in _model_loss
    outs = model(inputs, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/sequential.py", line 256, in call
    return super(Sequential, self).call(inputs, training=training, mask=mask)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py", line 708, in call
    convert_kwargs_to_constants=base_layer_utils.call_context().saving)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py", line 860, in _run_internal_graph
    output_tensors = layer(computed_tensors, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/layers/convolutional.py", line 197, in call
    outputs = self._convolution_op(inputs, self.kernel)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/nn_ops.py", line 1134, in __call__
    return self.conv_op(inp, filter)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/nn_ops.py", line 639, in __call__
    return self.call(inp, filter)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/nn_ops.py", line 238, in __call__
    name=self.name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/nn_ops.py", line 2010, in conv2d
    name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/gen_nn_ops.py", line 1031, in conv2d
    data_format=data_format, dilations=dilations, name=name, ctx=_ctx)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/gen_nn_ops.py", line 1130, in conv2d_eager_fallback
    ctx=_ctx, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InternalError: Blas SGEMM launch failed : m=10, n=1, k=4 [Op:Conv2D]

docker version:
Client: Docker Engine - Community
 Version: 19.03.4
 API version: 1.40
 Go version: go1.12.10
 Git commit: 9013bf583a
 Built: Fri Oct 18 15:54:09 2019
 OS/Arch: linux/amd64
 Experimental: false
Server: Docker Engine - Community
 Engine:
  Version: 19.03.4
  API version: 1.40 (minimum version 1.12)
  Go version: go1.12.10
  Git commit: 9013bf583a
  Built: Fri Oct 18 15:52:40 2019
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.2.10
  GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version: 1.0.0-rc8+dev
  GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version: 0.18.0
  GitCommit: fec3683
tensorflow/tensorflow
Have a default value for monitor on the EarlyStopping callback
Bug
tensorflow/tensorflow
Writing a TFLite file from a Keras model throws "Cycle found! We already encountered that input array"
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below): v1.12.1-7396-g12481e7e74 1.15.0-dev20190730
- Python version: 3.6.8
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: 9.1 / 10.0 / 10.1
- GPU model and memory: GTX 1050 Ti, 4 GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior:
I'm trying to convert a pure Keras model to a .pb file in order to write it to .tflite. It's a few conv layers and some end logic. To do so, I freeze it with convert_variables_to_constants and reload it using TFLiteConverter.from_frozen_graph. However, I get the following error:

F tensorflow/lite/toco/tooling_util.cc:1182] Cycle found! We already encountered that input array: decoded_predictions/loop_over_batch/while/NextIteration. Early in the above trace we expected graphs to be acyclic (even RNNs). Let us know if some graphs actually need to have cycles, but first, please check if it really is an inference graph; training graphs are out of scope for toco.
Fatal Python error: Aborted

Describe the expected behavior:
I would expect convert_variables_to_constants to write a frozen graph readable by the from_frozen_graph method, and to simply convert my model. I've tried many things, like writing it with different methods, setting the learning phase, changing the final layer (which is causing the problem), etc.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

def freeze_session(session, model, keep_var_names=None, clear_devices=None):
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = [out.op.name for out in model.outputs]
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(session, input_graph_def, output_names, freeze_var_names)
        return frozen_graph

def save_model(model, iteration):
    K.clear_session()
    K.set_learning_phase(0)
    new_model = get_out_model(model)
    frozen_graph = freeze_session(K.get_session(), new_model)
    with tf.gfile.GFile('model/model.pb', 'wb') as f:
        f.write(frozen_graph.SerializeToString())
    path = tf.train.write_graph(frozen_graph, 'model', 'model_' + str(iteration) + '.pb')
    converter = tf.lite.TFLiteConverter.from_frozen_graph(
        'model/model.pb',
        input_arrays=[new_model.input.name],
        output_arrays=['predictions/concat'])
    output_arrays = ['decoded_predictions/loop_over_batch/TensorArrayStack/TensorArrayGatherV3']
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_ops = [tf.lite.OpsSet.SELECT_TF_OPS]
    converter.allow_custom_ops = True
    converter.drop_control_dependency = False
    tflite_model = converter.convert()
    with open('model.tflite', 'wb') as tfile:
        tfile.write(tflite_model)
    K.set_learning_phase(1)

Other info / logs (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached):

Current thread 0x00007f8956f0f740 (most recent call first):
  File "/home/oliver/.local/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 52 in execute
  File "/home/oliver/.local/lib/python3.6/site-packages/absl/app.py", line 250 in _run_main
  File "/home/oliver/.local/lib/python3.6/site-packages/absl/app.py", line 299 in run
  File "/home/oliver/.local/lib/python3.6/site-packages/tensorflow_core/python/platform/app.py", line 40 in run
  File "/home/oliver/.local/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 89 in main
  File "/home/oliver/.local/bin/toco_from_protos", line 11 in <module>
Aborted (core dumped)

Traceback (most recent call last):
  File "/home/oliver/projects/datagen/retrainer/retrainer.py", line 189, in <module>
    loss = train_save_model(model, socket)
  File "/home/oliver/projects/datagen/retrainer/retrainer.py", line 168, in train_save_model
    save_model(model, i)
  File "/home/oliver/projects/datagen/retrainer/retrainer.py", line 121, in save_model
    tflite_model = converter.convert()
  File "/home/oliver/.local/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 983, in convert
    **converter_kwargs)
  File "/home/oliver/.local/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 449, in toco_convert_impl
    enable_mlir_converter=enable_mlir_converter)
  File "/home/oliver/.local/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 200, in toco_convert_protos
    raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
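Toco's complaint above is, at heart, a plain cycle check over the frozen graph's arrays: it aborts the moment a depth-first walk revisits an array that is still on the current path. The sketch below is illustrative only (not toco's actual implementation), and the array names are made up to mirror the error message.

```python
def has_cycle(graph):
    """graph: dict mapping array name -> list of input array names."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / done
    state = {node: WHITE for node in graph}

    def visit(node):
        if state[node] == GREY:           # "we already encountered that input array"
            return True
        if state[node] == BLACK:
            return False
        state[node] = GREY
        if any(visit(dep) for dep in graph.get(node, [])):
            return True
        state[node] = BLACK
        return False

    return any(visit(n) for n in graph)

# A feed-forward graph converts; a while-loop body feeding back into
# itself (as NextIteration does) is what trips the check.
acyclic = {
    "input": [],
    "conv": ["input"],
    "predictions/concat": ["conv"],
}
looped = {
    "input": [],
    "conv": ["input", "loop_over_batch/while/NextIteration"],
    "predictions/concat": ["conv"],
    "loop_over_batch/while/NextIteration": ["predictions/concat"],
}

print(has_cycle(acyclic))  # False
print(has_cycle(looped))   # True
```

This is why the error points at the while/NextIteration node: the loop in the decoding layer makes the inference graph cyclic from toco's point of view.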
tensorflow/tensorflow
Update the sample session in the build-from-source instructions doc
Bug
URL(s) with the issue:

Description of issue (what needs changing): the sample session given in the docs is old (Bazel 0.15.1, TensorFlow 1.11, Python 2.7). A new session copy with the more recent TensorFlow 2.x and Bazel 0.26.1 would be more helpful. Also, Python 2.7 is reaching end of life in Jan 2020.

Submit a pull request? (Are you planning to also submit a pull request to fix the issue?) Yes.
tensorflow/tensorflow
"Copy link to this section" feature not working for certain sections
Bug
URL(s) with the issue:

Browser used: tried on Microsoft Edge and Mozilla Firefox (up-to-date versions).

Description of issue (what needs changing): the "copy link to this section" feature is not working for the sample configuration session sections of the Linux and Windows build-from-source instruction docs. Clear description: the sections causing the issue are located under these sections: "Sample session" (under "Configure the build", for both Linux and Windows).

Submit a pull request? (Are you planning to also submit a pull request to fix the issue?) Yes, I can fix the markdown to correct the issue.
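One plausible cause (an assumption, not confirmed in the report) is that the duplicated "Sample session" headings under the Linux and Windows parts collide when heading text is turned into anchor ids. A minimal sketch of how such anchors are commonly generated and de-duplicated; the scheme and the helper name are hypothetical, not the docs site's actual code.

```python
import re

def heading_anchors(headings):
    """Slugify headings the way many doc generators do, suffixing
    duplicates so each anchor id stays unique (hypothetical scheme)."""
    seen = {}
    anchors = []
    for h in headings:
        slug = re.sub(r"[^a-z0-9]+", "-", h.lower()).strip("-")
        if slug in seen:
            seen[slug] += 1
            slug = f"{slug}-{seen[slug]}"
        else:
            seen[slug] = 0
        anchors.append(slug)
    return anchors

print(heading_anchors(["Sample session", "Configure the build", "Sample session"]))
# ['sample-session', 'configure-the-build', 'sample-session-1']
```

If the page instead emits the same id for both headings, a "copy link" button can only ever scroll to the first one, which would match the reported behavior.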
tensorflow/tensorflow
Eager context device issue: segmentation fault after the context's ServerDef is re-set
Bug
System information:
- OS Platform and Distribution: Ubuntu 18.04
- TensorFlow installed from (source or binary): binary (whl)
- TensorFlow version (use command below): tensorflow-gpu 2.0.0
- Python version: 3.6.8
- CUDA/cuDNN version: 10.0 / 7.6.4
- GPU model and memory: GeForce GTX 1080 Ti

Describe the current behavior:

import tensorflow as tf
from tensorflow.core.protobuf.tensorflow_server_pb2 import ServerDef
from tensorflow.python.eager import context
from tensorflow.python.training.server_lib import ClusterSpec

cluster_def = ClusterSpec({"worker": ["127.0.0.1:15293"]}).as_cluster_def()
# 15293 is just some random available port
server_def = ServerDef(cluster=cluster_def, job_name="worker", task_index=0, protocol="grpc")

v = tf.Variable(3)
print(v.device)  # /job:localhost/replica:0/task:0/device:CPU:0
context.set_server_def(server_def)
print(v.device)  # Segmentation fault (core dumped)

Describe the expected behavior: should API users expect the variable to be re-placed and re-initialized on the new server?

Code to reproduce the issue: see above.
tensorflow/tensorflow
tf.image.per_image_standardization returns wrong values
Bug
TensorFlow version: 1.15. tf.image.per_image_standardization does not standardize values to [-1, 1]. I tested in TensorFlow 1.12 and it works correctly.

import tensorflow as tf
import numpy as np

print(tf.__version__)
a = tf.constant([1, 2, 3, 4, 5, 6])
b = tf.image.per_image_standardization(a)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(b))

x = np.asarray([1, 2, 3, 4, 5, 6], dtype=np.float64)

def normalize_meanstd(a, axis=None):
    # axis param denotes axes along which mean & std reductions are to be performed
    mean = np.mean(a, axis=axis, keepdims=True)
    std = np.sqrt(((a - mean) ** 2).mean(axis=axis, keepdims=True))
    return (a - mean) / std

print(normalize_meanstd(x))

Output:
1.15.0
[-6 -3 -1  1  3  6]
[-1.46385011 -0.87831007 -0.29277002  0.29277002  0.87831007  1.46385011]
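For reference, per_image_standardization is documented to produce zero mean and unit variance, scaling by max(stddev, 1/sqrt(num_elements)) rather than mapping into [-1, 1]. A pure-Python version of that documented formula reproduces the NumPy values computed above (the function here is a sketch for checking, not TensorFlow's implementation):

```python
import math

def per_image_standardization(pixels):
    """Pure-Python version of the documented formula:
    (x - mean) / max(stddev, 1/sqrt(num_elements))."""
    n = len(pixels)
    mean = sum(pixels) / n
    stddev = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)
    adjusted = max(stddev, 1.0 / math.sqrt(n))
    return [(p - mean) / adjusted for p in pixels]

out = per_image_standardization([1, 2, 3, 4, 5, 6])
print([round(v, 8) for v in out])
# [-1.46385011, -0.87831007, -0.29277002, 0.29277002, 0.87831007, 1.46385011]
```

That the 1.15 op instead returns integer-looking values for an int32 input suggests the regression is tied to the input dtype rather than to the formula itself.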
tensorflow/tensorflow
Bug in saving a model in HDF5 format
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Mojave version 10.14.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0
- Python version:
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior: when trying to save the below model in Keras format, we get the following error:

ValueError: Unable to create group (name already exists)

This happens as this model has three layers with names as below:
tf_op_layer_Pad/paddings_0
tf_op_layer_Pad/paddings
tf_op_layer_Pad
Such names cause errors in Keras, as described here.

Describe the expected behavior: model saving should not fail.

Code to reproduce the issue:

import tensorflow as tf
from tensorflow import keras

x = keras.Input(shape=(None, 10), dtype='int32', name='input')
t = tf.shape(x)[0]
to_pad = t - 2
y = tf.pad(x, [[0, to_pad], [0, 0], [0, 0]])
model = keras.Model(inputs=x, outputs=y)
model.save('model.h5')

Other info / logs: a fix for this was checked into Keras a few days ago, it seems, but TF has its own copy of this HDF5 saving code, so the fix will also have to be made there (L624).
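If, as the linked Keras fix suggests, the clash comes from "/" characters in the layer names, the failure mode can be simulated without h5py: HDF5 treats "/" in a group name as a path separator, so creating "tf_op_layer_Pad/paddings" implicitly creates a parent group "tf_op_layer_Pad", and a later attempt to create "tf_op_layer_Pad" itself then collides. A toy simulation (the dict-based "file" and the helper are illustrative, not h5py's API):

```python
def create_group(root, path):
    """Mimic HDF5 group creation: '/' nests groups, and creating an
    already-existing group fails."""
    parts = path.split("/")
    node = root
    for part in parts[:-1]:          # implicit parent groups
        node = node.setdefault(part, {})
    leaf = parts[-1]
    if leaf in node:
        raise ValueError(f"Unable to create group (name already exists): {path!r}")
    node[leaf] = {}
    return root

f = {}
create_group(f, "tf_op_layer_Pad/paddings_0")
create_group(f, "tf_op_layer_Pad/paddings")   # also creates parent "tf_op_layer_Pad"
try:
    create_group(f, "tf_op_layer_Pad")
except ValueError as e:
    print(e)   # Unable to create group (name already exists): 'tf_op_layer_Pad'
```

De-duplicating or sanitizing layer names before writing the HDF5 groups (as the Keras-side fix does) avoids the collision.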
tensorflow/tensorflow
GRU layer fails on a single GPU when using MirroredStrategy
Bug
System information:
- Have I written custom code: yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows and Linux
- TensorFlow installed from: binary
- TensorFlow version: 2.0
- Python version: 3.7

Describe the current behavior: using a single GPU and a tf.keras.layers.GRU layer, model.fit with MirroredStrategy fails with the error "var and delta do not have the same shape". tf.keras.layers.LSTM works fine on one GPU. On machines with more GPUs, the GRU layer works fine. I know it does not make much sense to run the mirrored strategy on only one GPU, but it is good for testing your code before uploading it to a multi-GPU machine. It worked in TF 1.5.

Code to reproduce the issue (on Google Colab, using a GPU runtime, run this code):

%tensorflow_version 2.x
# %tensorflow_version 1.x  # here it works
import tensorflow as tf

print(f"Tensorflow ver. {tf.__version__}")
print(f"Num GPUs Available: {len(tf.config.experimental.list_physical_devices('GPU'))}")

ds = tf.data.Dataset.from_tensor_slices({"input": tf.zeros([64, 4]), "target": tf.zeros([64, 5])})
ds = ds.batch(3)

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    p_input = tf.keras.Input(shape=(4,), name="input")
    p_target = tf.keras.Input(shape=(5,), name="target")
    # gru = tf.keras.layers.LSTM(8)  # this works
    gru = tf.keras.layers.GRU(8)
    x = p_input
    x = tf.expand_dims(x, axis=1)
    x = gru(x)
    x = tf.keras.layers.Dense(5, activation="tanh")(x)
    model = tf.keras.Model([p_input, p_target], x)
    model.add_loss(tf.keras.losses.MSE(p_target, x))
    model.compile(optimizer=tf.keras.optimizers.SGD())
model.fit(ds)

Other info / logs:

Tensorflow ver. 2.0.0
Num GPUs Available: 1
WARNING:tensorflow:Output dense_9 missing from loss dictionary. We assume this was done on purpose. The fit and evaluate APIs will not be expecting any data to be passed to dense_9.
1/Unknown - 3s 3s/step
---------------------------------------------------------------------------
InvalidArgumentError (Traceback, most recent call last)
     32
     33     model.compile(optimizer=tf.keras.optimizers.SGD())
---> 34 model.fit(ds)
(11 frames)
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: var and delta do not have the same shape[8,24] [2,24]
     [[node SGD/SGD/update_1/update_0/ResourceApplyGradientDescent (defined at /tensorflow-2.0.0/python3.6/tensorflow_core/python/framework/ops.py:1751)]]
  (1) Invalid argument: var and delta do not have the same shape[8,24] [2,24]
     [[node SGD/SGD/update_1/update_0/ResourceApplyGradientDescent (defined at /tensorflow-2.0.0/python3.6/tensorflow_core/python/framework/ops.py:1751)]]
     [[ArithmeticOptimizer/ReorderCastLikeAndValuePreserving_int64_Cast_3/_18]]
0 successful operations. 0 derived errors ignored. [Op:__inference_distributed_function_24603]
Function call stack: distributed_function -> distributed_function
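The mismatched shapes in the error are consistent with the weight shapes of a GRU with units=8: with the cuDNN-compatible layout (reset_after=True, the TF 2.0 default) the bias is a (2, 3*units) tensor, which matches the (2, 24) delta, while (8, 24) matches the (units, 3*units) recurrent kernel; the error thus looks like a gradient being applied to the wrong variable. A small shape calculator (the helper is hypothetical; the shape formulas follow Keras's GRU weight layout):

```python
def gru_weight_shapes(input_dim, units, reset_after=True):
    """Weight shapes of a Keras GRU layer. With reset_after=True
    (cuDNN-compatible, the TF 2.0 default) the bias has an extra
    leading dimension of 2 (input bias and recurrent bias)."""
    return {
        "kernel": (input_dim, 3 * units),
        "recurrent_kernel": (units, 3 * units),
        "bias": (2, 3 * units) if reset_after else (3 * units,),
    }

shapes = gru_weight_shapes(input_dim=4, units=8)
print(shapes)
# {'kernel': (4, 24), 'recurrent_kernel': (8, 24), 'bias': (2, 24)}
```

Both shapes from the error message, (8, 24) and (2, 24), appear here, which points at the GRU-specific (2, 3*units) bias as the ingredient that LSTM (whose bias is flat) does not share.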
tensorflow/tensorflow
I am trying to edit the TensorFlow Lite docs but am not able to find the source code
Bug
Link which I want to edit:

The description of the digit classifier is incorrect.
tensorflow/tensorflow
tf-nightly: unable to load a saved functional model
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Mojave
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): pip install tf-nightly
- TensorFlow version (use command below): tf-nightly 2.1.0.dev20191029
- Python version: 3.7
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior: loading a saved functional model raises an exception.
Describe the expected behavior: loading a functional model should pass in late versions of TF2; everything works great in tensorflow 2.0.0.

Code to reproduce the issue:

import tensorflow as tf

shape = (224, 224, 3)

# Sequential model
model1 = tf.keras.Sequential([
    tf.keras.Input(shape=shape, name='input'),
    tf.keras.applications.MobileNetV2(include_top=False, weights='imagenet', input_shape=shape),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation='relu', name='descriptor'),
    tf.keras.layers.Dense(2, activation='softmax', name='probs'),
])

# Functional model
base_model2 = tf.keras.applications.MobileNetV2(include_top=False, weights='imagenet', input_shape=shape)
inputs = tf.keras.Input(shape=shape, name='input')
x = base_model2(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(256, activation='relu', name='descriptor')(x)
outputs = tf.keras.layers.Dense(2, activation='softmax', name='probs')(x)
model2 = tf.keras.Model(inputs=inputs, outputs=outputs)

tf.saved_model.save(model1, 'test1')
tf.saved_model.save(model2, 'test2')
model2.save('test2', include_optimizer=False, save_format='tf')

model_1 = tf.keras.models.load_model('test1')
# this raises an exception:
model_2 = tf.keras.models.load_model('test2')

Other info / logs:

2019-10-31 09:13:04.371282: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary is not compiled to use: AVX2 FMA
2019-10-31 09:13:04.384631: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fcafda7be50 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2019-10-31 09:13:04.384655: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2019-10-31 09:13:17.924297: W tensorflow/python/util/util.cc:309] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:tensorflow:From /Users/michallukac/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1785: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers.
Traceback (most recent call last):
  File "save.py", line 32, in <module>
    model_2 = tf.keras.models.load_model('test2')
  File "/Users/michallukac/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py", line 150, in load_model
    return saved_model_load.load(filepath, compile)
  File "/Users/michallukac/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/load.py", line 89, in load
    model = tf_load.load_internal(path, loader_cls=KerasObjectLoader)
  File "/Users/michallukac/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/saved_model/load.py", line 543, in load_internal
    export_dir)
  File "/Users/michallukac/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/load.py", line 119, in __init__
    self._finalize()
  File "/Users/michallukac/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/load.py", line 157, in _finalize
    created_layers={layer.name: layer for layer in node.layers}
  File "/Users/michallukac/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 1885, in reconstruct_from_config
    process_node(layer, node_data)
  File "/Users/michallukac/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 1833, in process_node
    output_tensors = layer(input_tensors, **kwargs)
  File "/Users/michallukac/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 773, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/Users/michallukac/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 712, in call
    raise NotImplementedError('When subclassing the `Model` class, you should'
NotImplementedError: When subclassing the `Model` class, you should implement a `call` method.
tensorflow/tensorflow
Segfault on multiple writes to a dynamically sized TensorArray inside tf.function
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colab
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below): 1.15
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

This code reliably produces a segmentation fault:

%tensorflow_version 1.x
import faulthandler
faulthandler.enable()

import tensorflow as tf
tf.enable_control_flow_v2()
from tensorflow.python.ops.tensor_array_ops import build_ta_with_new_flow as ta_like

num_iterations = 5
chunk_size = 2  # chunk_size = 1 is the non-breaking case

if True:  # dynamic sizing; this is the breaking case
    initial_xs = tf.TensorArray(tf.float32, size=0, dynamic_size=True, clear_after_read=False)
else:
    initial_xs = tf.TensorArray(tf.float32, size=chunk_size * num_iterations, dynamic_size=False, clear_after_read=False)

z = tf.constant(0.0)

@tf.function(autograph=False)
def body_fn(i, flow):
    xs = ta_like(initial_xs, flow)
    # write to the TensorArray multiple times, in consecutive locations
    for j in range(chunk_size):
        xs = xs.write(i * chunk_size + j, z)
    return i + 1, xs.flow

i, xs = tf.constant(0), initial_xs
for _ in range(num_iterations):
    i, flow = body_fn(i, xs.flow)
    xs = ta_like(initial_xs, flow)

# the second call to tf.gradients causes a segfault
tf.gradients(tf.reduce_mean(xs.stack()), z)
tf.gradients(tf.reduce_mean(xs.stack()), z)

The loop code is pretty hairy, but in plain Python it would read like this:

xs = []
for i in range(num_iterations):
    for j in range(chunk_size):
        xs.append(0)

The issue seems to be the combination of dynamic sizing of the TensorArray and multiple appends per iteration. Using faulthandler as above gives the following traceback at the point of the segfault:

  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1607 in _create_c_op
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1770 in __init__
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3426 in _create_op_internal
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3357 in create_op
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507 in new_func
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794 in _apply_op_helper
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_functional_ops.py", line 672 in stateful_partitioned_call
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/ops/functional_ops.py", line 859 in partitioned_call
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 540 in call
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1230 in _call_flat
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 697 in _rewrite_forward_and_call_backward
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 715 in register_grad_fn
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/ops/gradients_util.py", line 679 in <lambda>
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/ops/gradients_util.py", line 350 in _MaybeCompile
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/ops/gradients_util.py", line 679 in _GradientsHelper
  File "/home/tim/miniconda3/envs/py3/lib/python3.6/site-packages/tensorflow_core/python/ops/gradients_impl.py", line 158 in gradients
  File "/home/tim/memoryhole/segfault.py", line 39 in <module>
Segmentation fault (core dumped)

P.S. My use case: I'm trying to convert an existing while loop into an unrolled sequence of tf.function calls in order to reduce graph size without giving up on second-order derivatives. It looks promising so far, but do let me know if this is wrong-headed.
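The write pattern in body_fn can be sanity-checked in plain Python: iteration i writes chunk_size consecutive slots starting at i*chunk_size, so every slot in [0, num_iterations*chunk_size) is written exactly once, and the indices themselves cannot be the source of the crash:

```python
num_iterations, chunk_size = 5, 2

# Plain-Python equivalent of the write pattern in body_fn: iteration i
# writes slots i*chunk_size + j for j in range(chunk_size).
written = []
for i in range(num_iterations):
    for j in range(chunk_size):
        written.append(i * chunk_size + j)

print(written)            # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(len(written))       # 10
print(len(set(written)))  # 10 -> every slot written exactly once, none repeated
```

That the indices are dense and non-overlapping is what makes the combination reported above (dynamic_size=True plus more than one write per traced function) the interesting variable.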
tensorflow/tensorflow
Outdated docs for tf.keras.Model.save
Bug
URL(s) with the issue: .../save (the tf.keras.Model.save docs)

Description of issue (what needs changing): the following sentence is outdated and recommends using a deprecated method: "The 'tf' option is currently disabled; use tf.keras.experimental.export_saved_model instead."
tensorflow/tensorflow
TF Lite GPU delegate: "E libEGL: call to OpenGL ES API with no current context (logged once per thread)"
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): followed this document and used this project in Android Studio
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy): virtual device (Pixel 2) in Android Studio
- TensorFlow installed from (source or binary): Java packages
- TensorFlow version (use command below): org.tensorflow:tensorflow-lite:0.0.0-nightly, org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA 10.0
- GPU model and memory: GPU 1080

Describe the current behavior: my ssd_mobilenet_v2 model, retrained with my own dataset and called detect.tflite, has input shape:

{'name': 'normalized_input_image_tensor', 'index': 308, 'shape': array([1, 300, 300, 3], dtype=int32), 'dtype': numpy.float32, 'quantization': (0.0, 0)}

and output shapes:

{'name': 'TFLite_Detection_PostProcess',   'index': 300, 'shape': array([1, 10, 4], dtype=int32), 'dtype': numpy.float32, 'quantization': (0.0, 0)}
{'name': 'TFLite_Detection_PostProcess:1', 'index': 301, 'shape': array([1, 10], dtype=int32),    'dtype': numpy.float32, 'quantization': (0.0, 0)}
{'name': 'TFLite_Detection_PostProcess:2', 'index': 302, 'shape': array([1, 10], dtype=int32),    'dtype': numpy.float32, 'quantization': (0.0, 0)}
{'name': 'TFLite_Detection_PostProcess:3', 'index': 303, 'shape': array([1], dtype=int32),        'dtype': numpy.float32, 'quantization': (0.0, 0)}

It could run the object detection app using the project; I just modified the paths to the .tflite file and the label map. However, we want to use the GPU delegate, so I used the downloaded mobile_ssd_v2_float_coco.tflite, which has input details:

{'name': 'normalized_input_image_tensor', 'index': 306, 'shape': array([1, 320, 320, 3], dtype=int32), 'dtype': numpy.float32, 'quantization': (0.0, 0)}

and output details:

{'name': 'raw_outputs/box_encodings',     'index': 307, 'shape': array([1, 2034, 4], dtype=int32),  'dtype': numpy.float32, 'quantization': (0.0, 0)}
{'name': 'raw_outputs/class_predictions', 'index': 308, 'shape': array([1, 2034, 91], dtype=int32), 'dtype': numpy.float32, 'quantization': (0.0, 0)}

But it reports this error in Android Studio when using the virtual device (Pixel 2) to debug:

E libEGL: call to OpenGL ES API with no current context (logged once per thread)
E AndroidRuntime: FATAL EXCEPTION: main
  Process: org.tensorflow.lite.examples.detection, PID: 5063
  java.lang.RuntimeException: java.lang.IllegalArgumentException: Internal error: Failed to apply delegate:
  OpenCL library not loaded - dlopen failed: library "libOpenCL-pixel.so" not found; falling back to OpenGL
  TfLiteGpuDelegate Init: GL_INVALID_ENUM: An unacceptable value is specified for an enumerated argument: glGetBufferParameteri64v in tensorflow/lite/delegates/gpu/gl/gl_buffer.cc:46
  TfLiteGpuDelegate Prepare: delegate is not initialized
  Node number 116 (TfLiteGpuDelegateV2) failed to prepare.

If I use my detect.tflite, the error is:

Delegate: CUSTOM TFLite_Detection_PostProcess operation is not supported; first 114 operations will run on the GPU, and the remaining 1 on the CPU.
OpenCL library not loaded - dlopen failed: library "libOpenCL-pixel.so" not found

And I modified TFLiteObjectDetectionAPIModel.java to use the GPU delegate:

private ByteBuffer imgData;
private Interpreter tfLite;
private static GpuDelegate delegate = new GpuDelegate();
private static Interpreter.Options options = new Interpreter.Options().addDelegate(delegate);

private TFLiteObjectDetectionAPIModel() {}

public static Classifier create(...) {
    ...
    d.tfLite = new Interpreter(loadModelFile(assetManager, modelFilename), options);
    ...
}

Describe the expected behavior:
1. Why are the mobile_ssd_v2_float_coco.tflite input and output shapes different from the model retrained using the Object Detection API?
2. Is the code I modified (TFLiteObjectDetectionAPIModel.java) to use the GPU delegate right?
tensorflow/tensorflow
Optimizer state gets automatically restored when loading weights from a checkpoint and does not change when you compile the model
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: none
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): TF 2.0.0 (stable)
- Python version: 3.6
- Bazel version (if compiling from source): none
- GCC/Compiler version (if compiling from source): none

Describe the current behavior
The optimizer state gets restored while loading weights from a model checkpoint (tf.keras.callbacks.ModelCheckpoint). Later, when you compile the model with a different learning rate (different from the restored optimizer state), it doesn't get updated.

Describe the expected behavior
Compiling the model should update the optimizer state.

Code to reproduce the issue

```python
import os

import numpy as np
import tensorflow as tf
from tensorflow.python.keras import backend as K


def get_data():
    images = np.zeros((64, 224))
    labels = np.zeros((64, 5))
    return images, labels


def create_model():
    input_layer = tf.keras.layers.Input(name='images', shape=(224,), dtype='float32')
    model = tf.keras.layers.Dense(5)(input_layer)
    model = tf.keras.layers.Activation('softmax', name='output_softmax')(model)
    model = tf.keras.models.Model(inputs=input_layer, outputs=model)
    return model


os.makedirs('checkpoints')
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    'checkpoints/ckpt', monitor='loss', save_weights_only=True, save_freq='epoch')
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor='loss', factor=0.1, patience=1, verbose=1)
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='loss', min_delta=0, patience=5, verbose=1)

model = create_model()
data = get_data()
optimizer = tf.keras.optimizers.Adadelta(learning_rate=1.0)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
print('learning rate:', float(K.get_value(model.optimizer.lr)))

model.fit(data[0], data[1], batch_size=16, epochs=10,
          callbacks=[checkpoint, reduce_lr, early_stop])
print('learning rate:', float(K.get_value(model.optimizer.lr)))

model = create_model()
model.load_weights('checkpoints/ckpt')
optimizer = tf.keras.optimizers.Adadelta(learning_rate=0.1)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
print('learning rate:', float(K.get_value(model.optimizer.lr)))
# the final learning rate should be 0.1, but it equals the restored
# learning rate from the checkpoint
```
tensorflow/tensorflow
tensorflow.linalg.norm documentation missing
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: (#L426)

Description of issue (what needs changing): I can't find tensorflow.linalg.norm in the official v2.0 API documentation.
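For reference while the API page is missing: tf.linalg.norm's default behavior can be sketched in plain NumPy. The snippet below is my own illustration of what the defaults compute (it mirrors the TF argument names but is not the TF implementation); with `axis=None` and `ord='euclidean'`, the whole tensor is treated as a flat vector and the Frobenius/L2 norm is returned.

```python
import numpy as np

def norm(tensor, ord='euclidean', axis=None):
    """Minimal NumPy sketch of tf.linalg.norm's default behavior."""
    x = np.asarray(tensor, dtype=np.float64)
    if axis is None:
        # flatten and compute the norm of the whole tensor
        x = x.ravel()
        axis = 0
    if ord in ('euclidean', 2):
        return np.sqrt(np.sum(x * x, axis=axis))
    if ord == 1:
        return np.sum(np.abs(x), axis=axis)
    if ord == np.inf:
        return np.max(np.abs(x), axis=axis)
    raise ValueError('unsupported ord: %r' % (ord,))

m = [[3.0, 4.0], [0.0, 0.0]]
print(norm(m))          # flat L2 (Frobenius) norm: sqrt(9 + 16) = 5.0
print(norm(m, axis=1))  # row-wise L2 norms: [5.0, 0.0]
```
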
tensorflow/tensorflow
Typo in TensorFlow Core tutorial: Image classification
Bug
URL(s) with the issue:

Description of issue (what needs changing): typo. Clear description:

"Create the model: The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 512 units on top of it thatr is activated by a relu activation function."

In the "Create the model" section, "thatr" should be "that":

"Create the model: The model consists of three convolution blocks with a max pool layer in each of them. There's a fully connected layer with 512 units on top of it that is activated by a relu activation function."

Submit a pull request?
tensorflow/tensorflow
Calling train in SavedModelEstimator gives "ValueError: At least two variables have the same name"
Bug
System information
- TensorFlow version (use command below): 1.12.0-rc2 (v1.12.0-rc2-3-ga6d8ffae09)
- Python version: Python 3.6.9

Describe the current behavior
- Define a model using tensorflow.keras
- Convert the compiled model to an Estimator
- Train the Estimator using train_and_evaluate
- Export all saved models (for train, eval, and predict)

When I create a SavedModelEstimator from the previously exported data, I am able to call evaluate and predict successfully. However, if I call the method train, I get an error:

```
ValueError: At least two variables have the same name: dense_1/bias/Adam
```

I can warm start successfully if I change the line at #L349 in _get_grouped_variables to collect `ops.GraphKeys.TRAINABLE_VARIABLES` with `scope=v`.

Code to reproduce the issue

```python
import os

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.estimator import model_to_estimator
import keras.backend as K


def model_fn():
    input_layer = Input(shape=(10,), name='input', dtype=tf.float32)
    dense_1 = Dense(units=10, activation='relu', name='dense_1')(input_layer)
    dense_2 = Dense(units=1, activation='linear', name='outputs',
                    dtype=tf.float32)(dense_1)
    model = Model(inputs=input_layer, outputs=dense_2)
    model.compile(optimizer=tf.train.AdamOptimizer(), loss='mse')
    return model


def synthetic_input_fn(num_examples, num_features):
    # dummy data
    return tf.data.Dataset.from_tensor_slices(
        ({'input': np.random.random((num_examples, num_features))},
         np.random.randint(10, size=num_examples))
    ).shuffle(512).batch(32).repeat(10)


if __name__ == '__main__':
    model_dir = 'model'
    saved_model_dir = os.path.join(model_dir, 'saved')

    def cust_train_input_fn():
        return synthetic_input_fn(1000, 10)

    def cust_eval_input_fn():
        return synthetic_input_fn(100, 10)

    tf.logging.set_verbosity(tf.logging.INFO)

    # define model and convert to estimator
    model = model_fn()
    estimator = model_to_estimator(keras_model=model, model_dir=model_dir)
    train_spec = tf.estimator.TrainSpec(input_fn=cust_train_input_fn)
    eval_spec = tf.estimator.EvalSpec(input_fn=cust_eval_input_fn)

    # train and evaluate dummy model
    tf.estimator.train_and_evaluate(estimator, train_spec=train_spec,
                                    eval_spec=eval_spec)

    # export all modes
    features_spec = {'input': tf.placeholder(tf.float64, [None, 10])}
    labels_spec = tf.placeholder(dtype=tf.int64)
    serving_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(features_spec)
    training_fn = tf.contrib.estimator.build_raw_supervised_input_receiver_fn(
        features_spec, labels_spec)
    rcvr_fn_map = {
        tf.estimator.ModeKeys.TRAIN: training_fn,
        tf.estimator.ModeKeys.EVAL: training_fn,
        tf.estimator.ModeKeys.PREDICT: serving_fn,
    }
    tf.contrib.estimator.export_all_saved_models(
        estimator, export_dir_base=saved_model_dir,
        input_receiver_fn_map=rcvr_fn_map)

    tf.keras.backend.clear_session()

    saved_estimator = tf.contrib.estimator.SavedModelEstimator(
        os.path.join(saved_model_dir, os.listdir(saved_model_dir)[0]))
    saved_estimator.train(cust_train_input_fn)
```

Other info / logs

```
Using TensorFlow backend.
INFO:tensorflow:Using default config.
INFO:tensorflow:Using the Keras model provided.
2019-10-29 14:26:59.280066: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
INFO:tensorflow:Using config: {'_model_dir': 'model', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } }, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': ..., '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps None or save_checkpoints_secs 600.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='model/keras/keras_model.ckpt', vars_to_warm_start='.*', var_name_to_vocab_info={}, var_name_to_prev_var_name={})
INFO:tensorflow:Warm-starting from: model/keras/keras_model.ckpt
INFO:tensorflow:Warm-starting variable: dense_1/kernel; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: dense_1/bias; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: outputs/kernel; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: outputs/bias; prev_var_name: Unchanged
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into model/model.ckpt.
INFO:tensorflow:loss = 32.57418, step = 1
INFO:tensorflow:global_step/sec: 1201.8
INFO:tensorflow:loss = 24.11268, step = 101 (0.083 sec)
INFO:tensorflow:global_step/sec: 1799.1
INFO:tensorflow:loss = 12.514932, step = 201 (0.056 sec)
INFO:tensorflow:global_step/sec: 1810.48
INFO:tensorflow:loss = 9.915621, step = 301 (0.055 sec)
INFO:tensorflow:Saving checkpoints for 320 into model/model.ckpt.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2019-10-29-14:27:00
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from model/model.ckpt-320
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [10/100]
INFO:tensorflow:Evaluation [20/100]
INFO:tensorflow:Evaluation [30/100]
INFO:tensorflow:Evaluation [40/100]
INFO:tensorflow:Finished evaluation at 2019-10-29-14:27:00
INFO:tensorflow:Saving dict for global step 320: global_step = 320, loss = 8.093195
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 320: model/model.ckpt-320
INFO:tensorflow:Loss for final step: 10.114872.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: None
INFO:tensorflow:Signatures INCLUDED in export for Train: ['train']
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from model/model.ckpt-320
WARNING:tensorflow:From anaconda3/envs/tf_abalone/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py:1044: calling SavedModelBuilder.add_meta_graph_and_variables (from tensorflow.python.saved_model.builder_impl) with legacy_init_op is deprecated and will be removed in a future version.
Instructions for updating:
Pass your op to the equivalent parameter main_op instead.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: None
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval']
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from model/model.ckpt-320
WARNING:tensorflow:From anaconda3/envs/tf_abalone/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py:1046: calling SavedModelBuilder.add_meta_graph (from tensorflow.python.saved_model.builder_impl) with legacy_init_op is deprecated and will be removed in a future version.
Instructions for updating:
Pass your op to the equivalent parameter main_op instead.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default']
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
INFO:tensorflow:Restoring parameters from model/model.ckpt-320
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: model/saved/temp-b'1572359223'/saved_model.pb
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /var/folders/8s/tjdwdq296s5fdljp3p6jd9z40000gn/T/tmp7c0vhh49
INFO:tensorflow:Using config: {'_model_dir': '/var/folders/8s/tjdwdq296s5fdljp3p6jd9z40000gn/T/tmp7c0vhh49', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true graph_options { rewrite_options { meta_optimizer_iterations: ONE } }, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': ..., '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Checking available modes for SavedModelEstimator.
INFO:tensorflow:Available modes for Estimator: ['train', 'eval', 'infer']
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Warm-starting with WarmStartSettings: WarmStartSettings(ckpt_to_initialize_from='model/saved/1572359223/variables/variables', vars_to_warm_start=['dense_1/bias', 'dense_1/bias/Adam', 'dense_1/bias/Adam_1', 'dense_1/kernel', 'dense_1/kernel/Adam', 'dense_1/kernel/Adam_1', 'global_step', 'outputs/bias', 'outputs/bias/Adam', 'outputs/bias/Adam_1', 'outputs/kernel', 'outputs/kernel/Adam', 'outputs/kernel/Adam_1', 'training/TFOptimizer/beta1_power', 'training/TFOptimizer/beta2_power'], var_name_to_vocab_info={}, var_name_to_prev_var_name={})
INFO:tensorflow:Warm-starting from: model/saved/1572359223/variables/variables
INFO:tensorflow:Warm-starting variable: dense_1/bias; prev_var_name: Unchanged
INFO:tensorflow:Warm-starting variable: dense_1/bias/Adam; prev_var_name: Unchanged
Traceback (most recent call last):
  File "main.py", line 83, in <module>
    saved_estimator.train(cust_train_input_fn)
  File "anaconda3/envs/tf_abalone/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 354, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "anaconda3/envs/tf_abalone/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1207, in _train_model
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "anaconda3/envs/tf_abalone/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1241, in _train_model_default
    saving_listeners)
  File "anaconda3/envs/tf_abalone/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1360, in _train_with_estimator_spec
    warm_starting_util.warm_start(*self._warm_start_settings)
  File "anaconda3/envs/tf_abalone/lib/python3.6/site-packages/tensorflow/python/training/warm_starting_util.py", line 463, in warm_start
    _warm_start_var(variable, ckpt_to_initialize_from, prev_var_name)
  File "anaconda3/envs/tf_abalone/lib/python3.6/site-packages/tensorflow/python/training/warm_starting_util.py", line 170, in _warm_start_var
    current_var_name = _infer_var_name(var)
  File "anaconda3/envs/tf_abalone/lib/python3.6/site-packages/tensorflow/python/training/warm_starting_util.py", line 142, in _infer_var_name
    name_to_var_dict = saver.BaseSaverBuilder.OpListToDict(var)
  File "anaconda3/envs/tf_abalone/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 572, in OpListToDict
    name)
ValueError: At least two variables have the same name: dense_1/bias/Adam
```
tensorflow/tensorflow
Using metric SparseTopKCategoricalAccuracy on an RNN results in rank mismatch
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, the code is given below to reproduce the issue
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): binary (using pip)
- TensorFlow version: v2.0.0-rc2-26-g64c3d38 2.0.0
- Python version: 3.6
- CUDA/cuDNN version: 10.1 / 7.6.2
- GPU model and memory: GeForce GTX 1070, 8 GB

Describe the current behavior
When I compile an RNN model with the metric SparseTopKCategoricalAccuracy, the following error results. No error occurs if I use SparseCategoricalAccuracy instead.

```
Traceback (most recent call last):
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1610, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape must be rank 2 but is rank 3 for 'metrics/sparse_top_k_categorical_accuracy/in_top_k/InTopKV2' (op: 'InTopKV2') with input shapes: [?,10,10], [?,10], [].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "report.py", line 12, in <module>
    metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy()])
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 366, in compile
    masks=self._prepare_output_masks())
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2063, in _handle_metrics
    target, output, output_mask))
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2014, in _handle_per_output_metrics
    metric_fn, y_true, y_pred, weights=weights, mask=mask)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1067, in call_metric_function
    return metric_fn(y_true, y_pred, sample_weight=weights)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/metrics.py", line 193, in __call__
    replica_local_fn, *args, **kwargs)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/distribute/distributed_training_utils.py", line 1135, in call_replica_local_fn
    return fn(*args, **kwargs)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/metrics.py", line 176, in replica_local_fn
    update_op = self.update_state(*args, **kwargs)  # pylint: disable=not-callable
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/metrics_utils.py", line 75, in decorated
    update_op = update_state_fn(*args, **kwargs)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/metrics.py", line 581, in update_state
    matches = self._fn(y_true, y_pred, **self._fn_kwargs)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/metrics.py", line 2805, in sparse_top_k_categorical_accuracy
    nn.in_top_k(y_pred, math_ops.cast(y_true, 'int32'), k), K.floatx())
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/ops/nn_ops.py", line 4843, in in_top_k
    return gen_nn_ops.in_top_kv2(predictions, targets, k, name=name)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_nn_ops.py", line 5043, in in_top_kv2
    "InTopKV2", predictions=predictions, targets=targets, k=k, name=name)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 793, in _apply_op_helper
    op_def=op_def)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/framework/func_graph.py", line 548, in create_op
    compute_device)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3429, in _create_op_internal
    op_def=op_def)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1773, in __init__
    control_input_ops)
  File "/home/pi/venv/tf2/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1613, in _create_c_op
    raise ValueError(str(e))
ValueError: Shape must be rank 2 but is rank 3 for 'metrics/sparse_top_k_categorical_accuracy/in_top_k/InTopKV2' (op: 'InTopKV2') with input shapes: [?,10,10], [?,10], [].
```

Describe the expected behavior
I expected no error. I supposed SparseTopKCategoricalAccuracy could be used in exactly the same way as SparseCategoricalAccuracy.

Code to reproduce the issue

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Embedding(input_dim=1000, output_dim=64),
    layers.LSTM(128, return_sequences=True),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy()])

data = np.random.randint(0, 1000, (32, 10))  # batch_size=32, seq_length=10
labels = np.random.randint(0, 10, (32, 10))
model.fit(data, labels, epochs=1, batch_size=32)
```

Other info / logs
I used this custom metric to get around the problem:

```python
class InTopK(tf.keras.metrics.Mean):
    def __init__(self, k, name='in_top_k', **kwargs):
        super(InTopK, self).__init__(name=name, **kwargs)
        self.k = k

    def update_state(self, y_true, y_pred, sample_weight=None):
        # flatten tensors
        matches = tf.nn.in_top_k(
            targets=tf.reshape(tf.cast(y_true, tf.int32), [-1]),
            predictions=tf.reshape(y_pred, [-1, y_pred.shape[-1]]),
            k=self.k)
        return super(InTopK, self).update_state(matches, sample_weight=sample_weight)
```
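The reshape in the workaround can be sanity-checked with a small NumPy stand-in for in_top_k (my own illustration, not TF's kernel; tie handling may differ from the real op): flatten the targets to rank 1 and the predictions to rank 2, then test top-k membership per row.

```python
import numpy as np

def in_top_k_flat(y_true, y_pred, k):
    """NumPy stand-in for the workaround's reshape + tf.nn.in_top_k.

    y_true: integer labels of any shape, e.g. (batch, seq_len)
    y_pred: scores of shape y_true.shape + (num_classes,)
    Returns a flat boolean array: is the true class among the k
    highest-scoring classes for each (batch, timestep) position?
    """
    num_classes = y_pred.shape[-1]
    targets = np.reshape(y_true, [-1]).astype(np.int64)   # rank 1
    preds = np.reshape(y_pred, [-1, num_classes])         # rank 2
    topk = np.argsort(-preds, axis=1)[:, :k]              # top-k indices per row
    return np.any(topk == targets[:, None], axis=1)

y_pred = np.array([[[0.1, 0.7, 0.2],
                    [0.5, 0.4, 0.1]]])   # shape (1, 2, 3): batch=1, seq=2
y_true = np.array([[2, 1]])              # class 2, then class 1
print(in_top_k_flat(y_true, y_pred, k=2))  # [ True  True]
```
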
tensorflow/tensorflow
Code and tutorial description mismatch in CNN tutorial
Bug
URL(s) with the issue:

Description of issue (what needs changing): two mismatches between the output shapes in model.summary() and the tutorial text.

1. "To complete our model, you will feed the last output tensor from the convolutional base (of shape (3, 3, 64)) ..." should be changed to "To complete our model, you will feed the last output tensor from the convolutional base (of shape (4, 4, 64)) ...".

2. "As you can see, our (3, 3, 64) outputs are flattened into vectors of shape (576,) before going through two Dense layers." should be changed to "As you can see, our (4, 4, 64) outputs are flattened into vectors of shape (1024,) before going through two Dense layers."
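The corrected numbers can be verified with plain shape arithmetic, assuming the tutorial's stack of three 3x3 'valid' convolutions and two 2x2 max-pools applied to 32x32 input images:

```python
def conv_out(size, kernel=3):
    # 'valid' convolution, stride 1: output shrinks by kernel - 1
    return size - kernel + 1

def pool_out(size, pool=2):
    # 2x2 max pooling, stride 2 (floor division)
    return size // pool

s = 32              # spatial size of the input images
s = conv_out(s)     # Conv2D(32, (3, 3)) -> 30
s = pool_out(s)     # MaxPooling2D       -> 15
s = conv_out(s)     # Conv2D(64, (3, 3)) -> 13
s = pool_out(s)     # MaxPooling2D       -> 6
s = conv_out(s)     # Conv2D(64, (3, 3)) -> 4
print((s, s, 64))   # (4, 4, 64)
print(s * s * 64)   # flattened length: 1024
```

So the last convolutional output is (4, 4, 64), and flattening gives a vector of length 4 * 4 * 64 = 1024, matching model.summary().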
tensorflow/tensorflow
Missing TF 2.0 low-level API guide
Bug
URL(s) with the issue:

Description of issue (what needs changing): In the TF1 docs, a very helpful low-level API guide was provided, which helped those of us interested in using TensorFlow for applications other than NN-style models. This appears to be entirely missing from the TF 2.0 documentation. Is this omission on purpose, and if so, how do we teach people how to use the low-level API?

Thanks,
Chris
tensorflow/tensorflow
Normalization in cosine similarity
Bug
URL(s) with the issue:

Description of issue (what needs changing): The documentation for the cosine similarity does not state whether y_true and y_pred are expected to be normalized vectors. The provided equation,

loss = -sum(y_true * y_pred)

suggests they need to be, but looking at the source, they are normalized as part of the computation:

```python
y_true = nn.l2_normalize(y_true, axis=axis)
y_pred = nn.l2_normalize(y_pred, axis=axis)
return -math_ops.reduce_sum(y_true * y_pred, axis=axis)
```

As a special case, the docs do not state what happens if either vector is zero. Also, isn't the above implementation suboptimal in terms of speed, as each element is divided by the norm instead of simply dividing the result once?
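A NumPy sketch of the computation above makes the zero-vector case concrete. This assumes l2_normalize guards the division with a small epsilon (as tf.math.l2_normalize does), so an all-zero vector normalizes to zeros and the resulting loss is 0 rather than NaN; this is an illustration of the quoted source, not the Keras implementation itself.

```python
import numpy as np

def cosine_similarity_loss(y_true, y_pred, axis=-1, epsilon=1e-12):
    """NumPy sketch of the quoted Keras cosine-similarity loss."""
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)

    def l2_normalize(x):
        # epsilon guard: a zero vector maps to zeros instead of dividing by 0
        square_sum = np.sum(np.square(x), axis=axis, keepdims=True)
        return x / np.sqrt(np.maximum(square_sum, epsilon))

    return -np.sum(l2_normalize(y_true) * l2_normalize(y_pred), axis=axis)

print(cosine_similarity_loss([1.0, 0.0], [1.0, 0.0]))  # -1.0: identical direction
print(cosine_similarity_loss([1.0, 0.0], [0.0, 1.0]))  # 0 (i.e. -0.0): orthogonal
print(cosine_similarity_loss([0.0, 0.0], [1.0, 0.0]))  # 0: zero vector -> zero similarity
```
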
tensorflow/tensorflow
The relationship among optimize_for_inference.py, quantize_graph.py, and the Graph Transform tool
Bug
Is the Graph Transform tool the new tool to optimize and quantize a .pb, given that I find TF 1.11 still has quantize_graph.py?
tensorflow/tensorflow
"Data adapters should be mutually exclusive for handling input, found multiple adapters" error when calling model.fit with ImageDataGenerator
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04 (aarch64)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 2.0.0
- Python version: 3.6.8
- Bazel version (if compiling from source): 0.29.0
- GCC/Compiler version (if compiling from source): 7.4
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
When fitting a model with an ImageDataGenerator, it raises this error:

```
Data adapters should be mutually exclusive for handling input, found multiple adapters (GeneratorDataAdapter, KerasSequenceAdapter) to handle input
```

Describe the expected behavior
1. Log a warning message if multiple data adapters are found, instead of raising an error.
2. Use the first available data adapter.

Code to reproduce the issue
Please refer to the link below; I connected to my local Jupyter instance with the Colab UI.

```python
batch_size = 100
IMG_SHAPE = 150

# train_dir, val_dir, total_train, and total_val are defined earlier
# in the notebook
image_gen = ImageDataGenerator(rescale=1. / 255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(
    batch_size=batch_size, directory=train_dir, shuffle=True,
    target_size=(IMG_SHAPE, IMG_SHAPE))
val_data_gen = image_gen.flow_from_directory(
    batch_size=batch_size, directory=val_dir, shuffle=True,
    target_size=(IMG_SHAPE, IMG_SHAPE))

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

epochs = 100
model.fit(
    train_data_gen,
    steps_per_epoch=int(np.ceil(total_train / float(batch_size))),
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=int(np.ceil(total_val / float(batch_size))))
```

To avoid this issue, I have to manually exclude KerasSequenceAdapter before calling model.fit:

```python
from tensorflow.python.keras.engine import data_adapter
from tensorflow.python.keras.engine.data_adapter import ListsOfScalarsDataAdapter
from tensorflow.python.keras.engine.data_adapter import TensorLikeDataAdapter
from tensorflow.python.keras.engine.data_adapter import GenericArrayLikeDataAdapter
from tensorflow.python.keras.engine.data_adapter import DatasetAdapter
from tensorflow.python.keras.engine.data_adapter import GeneratorDataAdapter
from tensorflow.python.keras.engine.data_adapter import CompositeTensorDataAdapter

data_adapter.ALL_ADAPTER_CLS = [
    ListsOfScalarsDataAdapter,
    TensorLikeDataAdapter,
    GenericArrayLikeDataAdapter,
    DatasetAdapter,
    GeneratorDataAdapter,
    # tensorflow.python.keras.engine.data_adapter.KerasSequenceAdapter,  # excluded
    CompositeTensorDataAdapter,
]
```

Other info / logs
N/A
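For context, the failure comes from Keras' adapter selection, which can be sketched roughly like this (a simplified stand-in of my own, not the actual implementation): each adapter class reports whether it can handle the input, and if more than one matches, selection raises instead of picking one. A DirectoryIterator looks like both a generator and a Keras Sequence, so it matches two adapters.

```python
class GeneratorDataAdapter:
    @staticmethod
    def can_handle(x):
        # anything iterable like a generator
        return hasattr(x, '__next__') or hasattr(x, 'next')

class KerasSequenceAdapter:
    @staticmethod
    def can_handle(x):
        # anything indexable with a length, like keras.utils.Sequence
        return hasattr(x, '__getitem__') and hasattr(x, '__len__')

ALL_ADAPTER_CLS = [GeneratorDataAdapter, KerasSequenceAdapter]

def select_data_adapter(x):
    adapter_cls = [cls for cls in ALL_ADAPTER_CLS if cls.can_handle(x)]
    if not adapter_cls:
        raise ValueError('Failed to find data adapter that can handle input')
    if len(adapter_cls) > 1:
        raise RuntimeError(
            'Data adapters should be mutually exclusive for handling input, '
            'found multiple adapters {} to handle input'.format(adapter_cls))
    return adapter_cls[0]

class FakeDirectoryIterator:
    # looks like a Sequence *and* an iterator, like DirectoryIterator does
    def __getitem__(self, i): return i
    def __len__(self): return 1
    def __next__(self): return None

try:
    select_data_adapter(FakeDirectoryIterator())
except RuntimeError as e:
    print('raised:', e)
```

This is why removing KerasSequenceAdapter from the list sidesteps the error: only GeneratorDataAdapter then matches.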
tensorflow/tensorflow
MirroredStrategy compared to OneDeviceStrategy: slower and much weaker learning
Bug
System information
- OS Platform and Distribution: Arch Linux 5.3.7-arch1-1
- TensorFlow installed from: binary
- TensorFlow version: 2.0.0
- Keras version: 2.2.4-tf
- Python version: 3.7.4
- CUDA/cuDNN version: CUDA 10.1.243, cuDNN 7.6.2.24
- GPU model and memory: 2x GTX 1080 Ti, 11 GB

Describe the current behavior
If the model is trained with OneDeviceStrategy on one GPU, an accuracy of 0.9988 is reached after 150 epochs in 5h 24min. If the model is trained with MirroredStrategy on two GPUs, an accuracy of 0 is reached after 150 epochs in 5h; the loss does not drop significantly.

Describe the expected behavior
With MirroredStrategy the same accuracy should be reached as when training on one GPU, in a shorter time, ideally in half the time. This might be related to issue #33767.

Code to reproduce the issue
The complete code with data is available in a git repo.

Model:

```python
class FeatureExtraction(Layer):
    def __init__(self, conv_filters, pool_size, name='feature_extraction', **kwargs):
        super(FeatureExtraction, self).__init__(name=name, **kwargs)
        self.conv1 = Conv2D(filters=conv_filters, kernel_size=(3, 3), padding='same',
                            activation='relu', kernel_initializer='he_normal', name='conv1')
        self.conv2 = Conv2D(filters=conv_filters, kernel_size=(3, 3), padding='same',
                            activation='relu', kernel_initializer='he_normal', name='conv2')
        self.max1 = MaxPooling2D(pool_size=(pool_size, pool_size), name='max1')
        self.max2 = MaxPooling2D(pool_size=(pool_size, pool_size), name='max2')

    def call(self, inputs):
        x = self.conv1(inputs)
        x = self.max1(x)
        x = self.conv2(x)
        return self.max2(x)

    def get_config(self):
        return super(FeatureExtraction, self).get_config()


class FeatureReduction(Layer):
    def __init__(self, img_w, img_h, pool_size, conv_filters,
                 name='feature_reduction', **kwargs):
        super(FeatureReduction, self).__init__(name=name, **kwargs)
        target_shape = (img_w // pool_size ** 2,
                        img_h // pool_size ** 2 * conv_filters)
        self.reshape = Reshape(target_shape=target_shape, name='reshape')
        self.dense = Dense(32, activation='relu', name='dense')

    def call(self, inputs):
        x = self.reshape(inputs)
        return self.dense(x)

    def get_config(self):
        return super(FeatureReduction, self).get_config()


class SequentialLearn(Layer):
    def __init__(self, name='sequential_learner', **kwargs):
        super(SequentialLearn, self).__init__(name=name, **kwargs)
        self.gru_1a = GRU(512, return_sequences=True,
                          kernel_initializer='he_normal', name='gru_1a')
        self.gru_1b = GRU(512, return_sequences=True, go_backwards=True,
                          kernel_initializer='he_normal', name='gru_1b')
        self.gru_2a = GRU(512, return_sequences=True,
                          kernel_initializer='he_normal', name='gru_2a')
        self.gru_2b = GRU(512, return_sequences=True, go_backwards=True,
                          kernel_initializer='he_normal', name='gru_2b')

    def call(self, inputs):
        x_1a = self.gru_1a(inputs)
        x_1b = self.gru_1b(inputs)
        x = add([x_1a, x_1b])
        x_2a = self.gru_2a(x)
        x_2b = self.gru_2b(x)
        return concatenate([x_2a, x_2b])

    def get_config(self):
        return super(SequentialLearn, self).get_config()


class Output(Layer):
    def __init__(self, output_size, name='output', **kwargs):
        super(Output, self).__init__(name=name, **kwargs)
        self.dense = Dense(output_size, kernel_initializer='he_normal', name='dense')
        self.softmax = Activation('softmax', name='softmax')

    def call(self, inputs):
        x = self.dense(inputs)
        return self.softmax(x)

    def get_config(self):
        return super(Output, self).get_config()


class OCRNet(Model):
    def __init__(self, output_size, img_w, img_h, max_text_len, name='OCRNet', **kwargs):
        # parameters
        conv_filters = 16
        pool_size = 2

        # define layers
        feature_extraction = FeatureExtraction(conv_filters=conv_filters,
                                               pool_size=pool_size)
        sequential_learner = SequentialLearn()
        feature_reduction = FeatureReduction(img_w=img_w, img_h=img_h,
                                             pool_size=pool_size,
                                             conv_filters=conv_filters)
        output = Output(output_size)

        # NHWC: channels last, NCHW: channels first -- initialize input shape
        if K.image_data_format() == 'channels_first':
            input_shape = (1, img_w, img_h)
        else:
            input_shape = (img_w, img_h, 1)

        inputs = Input(name='the_input', shape=input_shape, dtype='float32')
        labels = Input(name='the_labels', shape=[max_text_len], dtype='float32')
        input_length = Input(name='input_length', shape=[1], dtype='int64')
        label_length = Input(name='label_length', shape=[1], dtype='int64')

        # call layers
        x = feature_extraction(inputs)
        x = feature_reduction(x)
        x = sequential_learner(x)
        predictions = output(x)

        # Keras doesn't currently support loss funcs with extra parameters,
        # so CTC loss is implemented in a Lambda layer
        loss_out = Lambda(self.ctc_lambda_func, output_shape=(1,), name='ctc')(
            [predictions, labels, input_length, label_length])

        super(OCRNet, self).__init__(
            inputs=[inputs, labels, input_length, label_length],
            outputs=loss_out, name=name, **kwargs)

        # CTC decoder
        flattened_input_length = K.reshape(input_length, (-1,))
        top_k_decoded, _ = K.ctc_decode(predictions, flattened_input_length)
        self.decoder = K.function([inputs, flattened_input_length],
                                  [top_k_decoded[0]])

    # loss and train functions, network architecture
    def ctc_lambda_func(self, args):
        predictions, labels, input_length, label_length = args
        # the 2 is critical here since the first couple outputs of the RNN
        # tend to be garbage
        predictions = predictions[:, 2:, :]
        return K.ctc_batch_cost(labels, predictions, input_length, label_length)
```

Training script:

```python
strategy = (tf.distribute.MirroredStrategy() if 1 < ngpus
            else tf.distribute.OneDeviceStrategy(device='/gpu:1'))
batch_size = BATCH_SIZE * strategy.num_replicas_in_sync
with strategy.scope():
    model = OCRNet(train_gen.output_size, img_w, img_h, max_text_len)
    model.summary()
    adam = Adam(lr=lr, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
    model.compile(loss={'ctc': lambda y_true, y_pred: y_pred},
                  optimizer=adam, metrics=['accuracy'])

# callbacks are set up elsewhere in the repo
start = time.perf_counter()
model.fit(train_gen, validation_data=val_gen, epochs=epochs, shuffle=False,
          use_multiprocessing=True, workers=6, callbacks=callbacks)
elapsed = time.perf_counter() - start
logging.info('elapsed: {0:.3f}'.format(elapsed))
```

Other info / logs

Output using OneDeviceStrategy:

Train for 700 steps, validate for 150 steps
Epoch 1/150
WARNING:tensorflow:Entity <initialize_variables at 0x7fb63f70d170> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: module 'gast' has no attribute 'Num'
2019-10-29 06:07:28,887 WARNING Entity <initialize_variables at 0x7fb63f70d170> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: module 'gast' has no attribute
num 2019 10 29 06 07 31 225908 e tensorflow core grappler optimizer meta optimizer cc 502 function optimizer fail invalid argument node ocrnet sequential learner gru 1a statefulpartitionedcall statefulpartitionedcall 2 26 connect to invalid output 31 of source node ocrnet sequential learner gru 1a statefulpartitionedcall which have 31 output 2019 10 29 06 07 31 774256 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2019 10 29 06 07 31 965277 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 699 700 eta 0s loss 41 7196 accuracy 0 0000e 002019 10 29 06 09 19 892252 w tensorflow core grappler optimizer implementation selector cc 310 skip optimization due to error while loading function librarie invalid argument function inference cudnn gru with fallback 16142 specialize for ocrnet sequential learner gru 2b statefulpartitionedcall at inference distribute function 16574 and inference cudnn gru with fallback 16142 both implement gru 939794c2 9fc6 48f9 8d2a e319e349d493 but their signature do not match 700 700 134 191ms step loss 41 6829 accuracy 0 0000e 00 val loss 15 7647 val accuracy 0 0000e 00 epoch 2 150 700 700 129 185ms step loss 15 9857 accuracy 0 0000e 00 val loss 15 0192 val accuracy 0 0000e 00 epoch 3 150 700 700 129 184ms step loss 14 3529 accuracy 0 0000e 00 val loss 13 8274 val accuracy 0 0000e 00 epoch 4 150 700 700 129 185ms step loss 13 4774 accuracy 0 0000e 00 val loss 13 1987 val accuracy 0 0000e 00 epoch 5 150 700 700 129 185ms step loss 12 9877 accuracy 0 0000e 00 val loss 12 8102 val accuracy 0 0000e 00 epoch 145 150 700 700 130s 185ms step loss 0 0059 accuracy 0 9985 val loss 0 0119 val accuracy 0 9952 epoch 146 150 700 700 130s 185ms step loss 0 0058 accuracy 0 9985 val loss 0 0118 val accuracy 0 9953 epoch 147 150 700 700 130s 185ms step loss 0 0056 accuracy 0 9987 val loss 0 0116 val accuracy 0 9953 epoch 148 150 700 700 
129 185ms step loss 0 0055 accuracy 0 9988 val loss 0 0114 val accuracy 0 9953 epoch 149 150 700 700 129 185ms step loss 0 0053 accuracy 0 9988 val loss 0 0113 val accuracy 0 9954 epoch 150 150 700 700 129 185ms step loss 0 0052 accuracy 0 9988 val loss 0 0111 val accuracy 0 9954 2019 10 28 04 33 15 026 info elapse 19429 327 output use mirroredstrategy train for 350 step validate for 75 step epoch 1 150 info tensorflow batch all reduce 20 all reduce with algorithm nccl num pack 1 agg small grad max bytes 0 and agg small grad max group 10 2019 10 28 21 41 23 061 info batch all reduce 20 all reduce with algorithm nccl num pack 1 agg small grad max bytes 0 and agg small grad max group 10 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 2019 10 28 21 41 23 323 info reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 2019 10 28 21 41 23 328 info reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 2019 10 28 21 41 24 338 info reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 2019 10 28 21 41 24 341 info reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 warn tensorflow entity initialize variable at 0x7f787c47b170 could not be transform and will be execute as be please report this to the autograph team when file the bug set the verbosity to 10 on 
linux export autograph verbosity 10 and attach the full output cause module gast have no attribute num 2019 10 28 21 41 24 385 warning entity initialize variable at 0x7f787c47b170 could not be transform and will be execute as be please report this to the autograph team when file the bug set the verbosity to 10 on linux export autograph verbosity 10 and attach the full output cause module gast have no attribute num info tensorflow batch all reduce 20 all reduce with algorithm nccl num pack 1 agg small grad max bytes 0 and agg small grad max group 10 2019 10 28 21 41 28 827 info batch all reduce 20 all reduce with algorithm nccl num pack 1 agg small grad max bytes 0 and agg small grad max group 10 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 2019 10 28 21 41 29 117 info reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 2019 10 28 21 41 29 120 info reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 2019 10 28 21 41 29 128 info reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 2019 10 28 21 41 29 130 info reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 2019 10 28 21 41 29 440363 e tensorflow core grappler optimizer meta optimizer cc 502 function optimizer fail invalid argument node replica 1 ocrnet sequential learner gru 1a 
statefulpartitionedcall replica 1 statefulpartitionedcall 2 26 connect to invalid output 31 of source node replica 1 ocrnet sequential learner gru 1a statefulpartitionedcall which have 31 output 2019 10 28 21 41 30 730864 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2019 10 28 21 41 31 078427 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 349 350 eta 0s loss 89 5114 accuracy 0 0000e 002019 10 28 21 43 16 332927 w tensorflow core grappler optimizer implementation selector cc 310 skip optimization due to error while loading function librarie invalid argument function inference standard gru 26258 and inference cudnn gru with fallback 26349 specialize for ocrnet sequential learner gru 2b statefulpartitionedcall at inference distribute function 28787 both implement gru d38ba96e e1cb 43bd a1af 2f107f6aab80 but their signature do not match 350 350 138 395ms step loss 89 5000 accuracy 0 0000e 00 val loss 86 4455 val accuracy 0 0000e 00 epoch 2 150 350 350 120s 342ms step loss 83 2679 accuracy 0 0000e 00 val loss 80 2358 val accuracy 0 0000e 00 epoch 3 150 350 350 120s 342ms step loss 76 8871 accuracy 0 0000e 00 val loss 73 5664 val accuracy 0 0000e 00 epoch 4 150 350 350 120s 342ms step loss 69 5524 accuracy 0 0000e 00 val loss 65 3586 val accuracy 0 0000e 00 epoch 5 150 350 350 120s 342ms step loss 61 0491 accuracy 0 0000e 00 val loss 57 3255 val accuracy 0 0000e 00 epoch 145 150 350 350 120s 343ms step loss 9 9171 accuracy 0 0000e 00 val loss 9 8855 val accuracy 0 0000e 00 epoch 146 150 350 350 120s 343ms step loss 9 8615 accuracy 0 0000e 00 val loss 9 8293 val accuracy 0 0000e 00 epoch 147 150 350 350 120s 342ms step loss 9 8055 accuracy 0 0000e 00 val loss 9 7728 val accuracy 0 0000e 00 epoch 148 150 350 350 120s 343ms step loss 9 7491 accuracy 0 0000e 00 val loss 9 7160 val accuracy 0 0000e 00 epoch 149 150 350 350 120s 343ms step loss 9 
6923 accuracy 0 0000e 00 val loss 9 6588 val accuracy 0 0000e 00 epoch 150 150 350 350 120s 342ms step loss 9 6351 accuracy 0 0000e 00 val loss 9 6010 val accuracy 0 0000e 00 2019 10 29 02 41 31 149 info elapse 18013 239
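Not part of the original report: for readers unfamiliar with the decoding step the model above builds (K.ctc_decode), the greedy variant of CTC decoding — argmax per timestep, collapse repeated labels, drop blanks — can be sketched in NumPy. The blank index and the toy logits below are illustrative assumptions, not the issue author's code (K.ctc_decode actually uses num_classes - 1 as the blank).

```python
import numpy as np

def greedy_ctc_decode(logits, blank=0):
    """Greedy CTC decoding: argmax per timestep, collapse repeats, drop blanks.

    logits: array of shape (timesteps, num_classes).
    blank: index of the CTC blank label (assumed 0 in this sketch).
    """
    best_path = np.argmax(logits, axis=-1)          # (timesteps,)
    decoded = []
    prev = None
    for label in best_path:
        if label != prev and label != blank:        # collapse repeats, skip blanks
            decoded.append(int(label))
        prev = label
    return decoded

# Toy example: 6 timesteps, 4 classes (class 0 is the blank)
logits = np.array([
    [0.1, 0.8, 0.05, 0.05],   # -> 1
    [0.1, 0.8, 0.05, 0.05],   # -> 1 (repeat, collapsed)
    [0.9, 0.03, 0.03, 0.04],  # -> blank
    [0.1, 0.8, 0.05, 0.05],   # -> 1 (new emission after a blank)
    [0.05, 0.05, 0.1, 0.8],   # -> 3
    [0.9, 0.03, 0.03, 0.04],  # -> blank
])
print(greedy_ctc_decode(logits))  # [1, 1, 3]
```

A blank between two identical labels is what allows the same character to be emitted twice, which is why the decode above yields [1, 1, 3] rather than [1, 3].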
tensorflow/tensorflow
TensorFlow pre-built packages not available for download
Bug
Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template

System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): N/A
- TensorFlow version: 2.0
- Python version: 3.6
- Installed using virtualenv? pip? conda?: pip, using wheel
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version:
- GPU model and memory:

Describe the problem
We are not able to download the wheels given on the page. We get the following error when we try to get the .whl in a browser:

NoSuchKey
The specified key does not exist.
No such object: tensorflow/linux/gpu/tensorflow_gpu-2.0.0-cp36-cp36m-linux_x86_64.whl

The issue occurs with all the links. We tried the wget command on Linux and got a 404 error.

Provide the exact sequence of commands / steps that you executed before running into the problem:
wget

Any other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
TF 2.0.0 + Python 3.8: TypeError: _logger_find_caller() takes from 0 to 1 positional arguments but 2 were given
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow see script from tensorflow training session and uploaded file below nb there be no error with tf2 0 0 and python 3 6 or 3 7 the error occur with tf2 0 0 and python 3 8 os platform and distribution e g linux ubuntu 16 04 ubuntu 18 04 tensorflow instal from source or binary source tensorflow version use command below 2 0 0 python version 3 8 bazel version if compile from source 0 26 1 gcc compiler version if compile from source 7 4 0 cuda cudnn version cuda 10 cudnn 7 6 4 gpu model and memory nvidia rtx 2080 ti and 2080 maxq describe the current behavior after run the code below with the attached file you get the follow error assertionerror traceback most recent call last tf38 lib python3 8 site package tensorflow core python autograph impl api py in convert call f args kwargs caller fn scope option 525 option option autograph module tf inspect getmodule convert call 526 convert f conversion convert target entity program ctx 527 tf38 lib python3 8 site package tensorflow core python autograph impl conversion py in convert entity program ctx 324 325 convert entity info convert with cache entity program ctx 326 free nonglobal var name tf38 lib python3 8 site package tensorflow core python autograph impl conversion py in convert with cache entity program ctx free nonglobal var name 238 239 node convert name entity info convert entity to ast 240 entity program ctx tf38 lib python3 8 site package tensorflow core python autograph impl conversion py in convert entity to ast o program ctx 474 elif tf inspect ismethod o 475 node name entity info convert func to ast o program ctx 476 elif hasattr o class tf38 lib python3 8 site package tensorflow core python autograph impl conversion py in convert func to ast f program ctx do rename 672 context converter entitycontext namer entity info program ctx new name 673 node node to graph node context 674 tf38 lib python3 8 site 
package tensorflow core python autograph impl conversion py in node to graph node context 702 node converter standard analysis node context be initial true 703 node converter apply node context function scope 704 node converter apply node context arg default tf38 lib python3 8 site package tensorflow core python autograph core converter py in apply node context converter module 408 node standard analysis node context 409 node converter module transform node context 410 return node tf38 lib python3 8 site package tensorflow core python autograph converter function scope py in transform node ctx 119 def transform node ctx 120 return functionbodytransformer ctx visit node tf38 lib python3 8 site package tensorflow core python autograph core converter py in visit self node 345 try 346 return super base self visit node 347 finally tf38 lib python3 8 site package tensorflow core python autograph pyct transformer py in visit self node 479 if not anno hasanno node anno basic skip processing 480 result super base self visit node 481 self ctx current origin parent origin usr local lib python3 8 ast py in visit self node 359 visitor getattr self method self generic visit 360 return visitor node 361 tf38 lib python3 8 site package tensorflow core python autograph converter function scope py in visit functiondef self node 101 102 wrap body template replace 103 template tf38 lib python3 8 site package tensorflow core python autograph pyct template py in replace template replacement 268 for node in nodes 269 node replacetransformer replacement visit node 270 if isinstance node list tuple usr local lib python3 8 ast py in visit self node 359 visitor getattr self method self generic visit 360 return visitor node 361 usr local lib python3 8 ast py in generic visit self node 435 if isinstance value ast 436 value self visit value 437 if value be none usr local lib python3 8 ast py in visit self node 359 visitor getattr self method self generic visit 360 return visitor node 361 usr 
local lib python3 8 ast py in generic visit self node 444 elif isinstance old value ast 445 new node self visit old value 446 if new node be none usr local lib python3 8 ast py in visit self node 359 visitor getattr self method self generic visit 360 return visitor node 361 usr local lib python3 8 ast py in generic visit self node 435 if isinstance value ast 436 value self visit value 437 if value be none usr local lib python3 8 ast py in visit self node 359 visitor getattr self method self generic visit 360 return visitor node 361 tf38 lib python3 8 site package tensorflow core python autograph pyct template py in visit name self node 199 200 new node self prepare replacement node node i d 201 tf38 lib python3 8 site package tensorflow core python autograph pyct template py in prepare replacement self replace key 138 139 new node ast util copy clean repl preserve anno self preserve anno 140 if isinstance new nodes gast ast tf38 lib python3 8 site package tensorflow core python autograph pyct ast util py in copy clean node preserve anno 75 76 return cleancopier preserve anno copy node 77 tf38 lib python3 8 site package tensorflow core python autograph pyct ast util py in copy self node 53 if not f startswith and hasattr node f 54 new field f self copy getattr node f 55 new node type node new field tf38 lib python3 8 site package tensorflow core python autograph pyct ast util py in copy self node 40 if isinstance node list 41 return self copy n for n in node 42 elif isinstance node tuple tf38 lib python3 8 site package tensorflow core python autograph pyct ast util py in 0 40 if isinstance node list 41 return self copy n for n in node 42 elif isinstance node tuple tf38 lib python3 8 site package tensorflow core python autograph pyct ast util py in copy self node 54 new field f self copy getattr node f 55 new node type node new field 56 tf38 lib python3 8 site package gast gast py in create node self args kwargs 9 nbparam len args len kwargs 10 assert nbparam in 0 
len field 11 bad argument number for expect assertionerror bad argument number for keyword 1 expect 2 during handling of the above exception another exception occur typeerror traceback most recent call last in 1 tf model fit xs train 0 1 y train reshape 1 1 tf38 lib python3 8 site package tensorflow core python eager def function py in call self args kwd 566 xla context exit 567 else 568 result self call args kwd 569 570 if trace count self get trace count tf38 lib python3 8 site package tensorflow core python eager def function py in call self args kwd 613 this be the first call of call so we have to initialize 614 initializer 615 self initialize args kwd add initializer to initializer 616 finally 617 at this point we know that the initialization be complete or less tf38 lib python3 8 site package tensorflow core python eager def function py in initialize self args kwd add initializer to 494 self graph deleter functiondeleter self lift initializer graph 495 self concrete stateful fn 496 self stateful fn get concrete function internal garbage collect pylint disable protect access 497 args kwd 498 tf38 lib python3 8 site package tensorflow core python eager function py in get concrete function internal garbage collect self args kwargs 2363 args kwargs none none 2364 with self lock 2365 graph function self maybe define function args kwargs 2366 return graph function 2367 tf38 lib python3 8 site package tensorflow core python eager function py in maybe define function self args kwargs 2671 2672 self function cache miss add call context key 2673 graph function self create graph function args kwargs 2674 self function cache primary cache key graph function 2675 return graph function args kwargs tf38 lib python3 8 site package tensorflow core python eager function py in create graph function self args kwargs override flat arg shape 2551 arg name base arg name miss arg name 2552 graph function concretefunction 2553 func graph module func graph from py func 2554 self name 
2555 self python function tf38 lib python3 8 site package tensorflow core python framework func graph py in func graph from py func name python func args kwargs signature func graph autograph autograph option add control dependency arg name op return value collection capture by value override flat arg shape 956 convert func 957 958 func output python func func args func kwargs 959 960 invariant func output contain only tensor compositetensor tf38 lib python3 8 site package tensorflow core python eager def function py in wrap fn args kwd 437 wrap allow autograph to swap in a converted function we give 438 the function a weak reference to itself to avoid a reference cycle 439 return weak wrap fn wrap args kwd 440 weak wrap fn weakref ref wrap fn 441 tf38 lib python3 8 site package tensorflow core python eager function py in bind method wrapper args kwargs 3179 however the replacer be still responsible for attach self properly 3180 todo mdan be it possible to do it here instead 3181 return wrap fn args kwargs 3182 weak bind method wrapper weakref ref bind method wrapper 3183 tf38 lib python3 8 site package tensorflow core python framework func graph py in wrapper args kwargs 935 todo mdan push this block high in tf function s call stack 936 try 937 return autograph convert call 938 original func 939 args tf38 lib python3 8 site package tensorflow core python autograph impl api py in convert call f args kwargs caller fn scope option 552 cause s target entity e 553 else 554 log warn 555 autograph could not transform s and will run it as be n 556 please report this to the tensorflow team when file the bug set tf38 lib python3 8 site package tensorflow core python autograph util ag log py in warn msg args kwargs 144 145 def warn msg args kwargs 146 log warn msg args kwargs 147 if echo log to stdout 148 output to stdout warning msg args kwargs tf38 lib python3 8 site package tensorflow core python platform tf log py in warn msg args kwargs 159 tf export v1 log warn 160 def 
warn(msg, *args, **kwargs)
    161     get_logger().warning(msg, *args, **kwargs)
    162
    163

/usr/local/lib/python3.8/logging/__init__.py in warning(self, msg, *args, **kwargs)
   1444
   1445         if self.isEnabledFor(WARNING):
-> 1446             self._log(WARNING, msg, args, **kwargs)
   1447
   1448     def warn(self, msg, *args, **kwargs):

/usr/local/lib/python3.8/logging/__init__.py in _log(self, level, msg, args, exc_info, extra, stack_info, stacklevel)
   1563                 #IronPython can use logging.
   1564             try:
-> 1565                 fn, lno, func, sinfo = self.findCaller(stack_info, stacklevel)
   1566             except ValueError: # pragma: no cover
   1567                 fn, lno, func = "(unknown file)", 0, "(unknown function)"

TypeError: _logger_find_caller() takes from 0 to 1 positional arguments but 2 were given

Describe the expected behavior
There should be no error. It works fine with TF 2.0.0 and Python 3.6 or Python 3.7.

Code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem:

import tensorflow as tf
import numpy as np
import gzip
import json
from sklearn.model_selection import ShuffleSplit

with gzip.open('small_data/cal_housing.json.gz', 'r') as fin:
    housing = json.load(fin)

for train, test in ShuffleSplit(1, 0.2, random_state=42).split(housing['data']):
    X_train = np.array(housing['data'])[train].astype(np.float32)
    y_train = np.array(housing['target'])[train].astype(np.float32)
    X_test = np.array(housing['data'])[test].astype(np.float32)
    y_test = np.array(housing['target'])[test].astype(np.float32)

X_mean = X_train.mean(axis=0)
X_std = X_train.std(axis=0)
Xs_train = (X_train - X_mean) / X_std
Xs_test = (X_test - X_mean) / X_std

class LinearRegressionTF:
    def __init__(self, eta=0.1):
        self.w = tf.Variable(0.0)
        self.b = tf.Variable(0.0)
        self.opt = tf.keras.optimizers.SGD(learning_rate=eta)

    def loss(self, x, y, return_func=False):
        def loss_():
            return tf.reduce_mean(tf.square(x * self.w + self.b - y))
        if not return_func:
            return loss_()
        return loss_

    @tf.function
    def fit(self, x, y, steps=1):
        for _ in range(steps):
            self.opt.minimize(self.loss(x, y, return_func=True), [self.w, self.b])

tf_model = LinearRegressionTF()
tf_model.fit(Xs_train[:, 0:1], y_train.reshape(-1, 1))

cal_housing.json.gz

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Nil.
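Not from the report: the underlying cause is that Python 3.8 added a stacklevel parameter to Logger.findCaller, so logging's _log now calls findCaller(stack_info, stacklevel) with two positional arguments, while TF 2.0's tf_logging installs an override written against the older one-argument signature. A stdlib-only sketch of the mismatch (the two *_find_caller functions below are illustrative stand-ins, not TF's actual code):

```python
import inspect
import logging

# On Python 3.8+, Logger._log calls self.findCaller(stack_info, stacklevel),
# so the method's signature now includes a 'stacklevel' parameter.
params = inspect.signature(logging.Logger.findCaller).parameters
print('stacklevel' in params)  # True on Python 3.8+

# An override written against the 3.7 signature (one optional positional
# argument, as in TF 2.0's tf_logging) breaks when called the 3.8 way:
def old_style_find_caller(stack_info=False):
    return ('file.py', 1, 'func', None)

def new_style_find_caller(stack_info=False, stacklevel=1):
    return ('file.py', 1, 'func', None)

try:
    old_style_find_caller(False, 1)   # what logging does on Python 3.8
except TypeError as e:
    print(e)  # takes from 0 to 1 positional arguments but 2 were given

print(new_style_find_caller(False, 1))  # accepting stacklevel fixes it
```

This matches the TypeError in the traceback above: the fix is for the override to accept (and ignore or forward) the extra stacklevel argument.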
tensorflow/tensorflow
TF 2.0 API docs: tf.keras.layers.SimpleRNN
Bug
URL(s) with the issue:

Description of issue (what needs changing):

Clear description: the documentation mentions nothing about the call argument `inputs` when it takes [batch, timesteps, features] + [batch, states]. In case `inputs` is a list, the elements inputs[1:] work as initial states in each batch.

Example:

import tensorflow as tf
from tensorflow.keras.layers import *

class Foo(tf.keras.Model):
    def __init__(self, rnn_units, dense_units, **kwargs):
        super().__init__(**kwargs)
        self.r1 = SimpleRNN(rnn_units)
        self.r2 = SimpleRNN(rnn_units)
        self.flat = tf.keras.layers.Flatten()
        self.d1 = Dense(rnn_units)
        self.d2 = Dense(dense_units)

    def call(self, inputs, **kwargs):
        x = self.r1(inputs)
        states = self.d1(self.flat(x))
        x = self.r2([inputs, states])
        x = self.d2(x)
        return x

train_inputs = tf.random.normal(shape=(6, 5, 10))
train_targets = tf.random.normal(shape=(6, 8))
a = Foo(10, 8)
a.compile(tf.keras.optimizers.SGD(0.01), loss=tf.keras.losses.MeanSquaredError())
a.fit(train_inputs, train_targets)

b = SimpleRNN(10)(train_inputs)
states = Dense(10)(tf.reshape(b, (tf.shape(b)[0], -1)))
b = SimpleRNN(10)([train_inputs, states])

It should also be noted that the initial_state argument should be None when inputs have states. Other recurrent layers have the same issue.

Submit a pull request? If this issue is not intended (or is a bug), I'm planning to submit a pull request to fix the doc issue in a week. May I?
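Beyond the report, the semantics being documented can be made concrete without TensorFlow: a SimpleRNN step is h_t = tanh(x_t · W + h_{t-1} · U + b), so passing [inputs, states] merely seeds h_0 with states instead of zeros. A NumPy sketch (the weights and shapes below are made-up illustrations, not Keras's initialization):

```python
import numpy as np

def simple_rnn(x, W, U, b, initial_state=None):
    """x: (timesteps, input_dim); returns the last hidden state, shape (units,).

    initial_state plays the role of SimpleRNN's seeded h_0; when None, a
    zero state is used, matching the Keras default.
    """
    h = np.zeros(U.shape[0]) if initial_state is None else initial_state
    for x_t in x:                      # one tanh cell applied per timestep
        h = np.tanh(x_t @ W + h @ U + b)
    return h

# Tiny deterministic weights: 2 input features, 3 units, 4 timesteps
W = np.full((2, 3), 0.1)
U = np.eye(3) * 0.5
b = np.zeros(3)
x = np.ones((4, 2))

h_zeros = simple_rnn(x, W, U, b)                        # default zero state
h_seeded = simple_rnn(x, W, U, b, initial_state=np.ones(3))
print(np.allclose(h_zeros, h_seeded))  # False: the initial state matters
```

Running the same sequence with two different initial states produces different final states, which is exactly the behaviour the inputs[1:] convention (or the initial_state argument) exposes.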
tensorflow/tensorflow
TFLiteConverter.from_keras_model: TypeError: call() got an unexpected keyword argument 'training'
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code: custom code
- OS Platform and Distribution: Arch Linux, kernel 5.3.7
- TensorFlow installed from: via pip
- TensorFlow version: v2.0.0-rc2-26-g64c3d38 2.0.0
- Python version: 3.7.4

Describe the current behavior
When trying to convert a Keras CNN to a .tflite file, I get an error at the following line:

converter = tf.lite.TFLiteConverter.from_keras_model(model)

Describe the expected behavior
A .tflite file is expected to be created and written to the local directory.

Code to reproduce the issue

# Import the Keras libraries and packages
from keras.models import Sequential, save_model
from keras.layers.core import Dense, Dropout, Flatten
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.preprocessing import image
from sklearn.metrics import classification_report, confusion_matrix
import tensorflow as tf
import numpy as np

imageResX, imageResY = 256, 256

def cnnModel():
    classifier = Sequential()
    classifier.add(Conv2D(filters=128, kernel_size=(3, 3), activation='relu', input_shape=(imageResX, imageResY, 3)))
    classifier.add(MaxPooling2D(pool_size=(3, 3)))
    classifier.add(Flatten())
    classifier.add(Dense(units=128, activation='relu'))
    classifier.add(Dropout(rate=0.5))
    classifier.add(Dense(units=4, activation='softmax'))
    classifier.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
    return classifier

# Create data generators
train_datagen = image.ImageDataGenerator(rescale=1./255)
test_datagen = image.ImageDataGenerator(rescale=1./255)
training_set = train_datagen.flow_from_directory('my_data/training', target_size=(imageResX, imageResY), batch_size=64, class_mode='categorical')
evaluate_set = train_datagen.flow_from_directory('my_data/evaluation', target_size=(imageResX, imageResY), batch_size=64, class_mode='categorical')
test_set = test_datagen.flow_from_directory('my_data/testing', target_size=(imageResX, imageResY), batch_size=64, class_mode='categorical', shuffle=False)

step_size_train = training_set.n // training_set.batch_size
step_size_evaluate = evaluate_set.n // evaluate_set.batch_size
step_size_test = test_set.n // test_set.batch_size

model = cnnModel()
history = model.fit_generator(generator=training_set, steps_per_epoch=step_size_train, epochs=1, validation_data=evaluate_set, validation_steps=step_size_evaluate)

labels = training_set.class_indices
labels = dict((v, k) for k, v in labels.items())

# Save Keras model
modelname = 'st_ai'
save_model(model, str(modelname) + '.h5')

# Convert Keras model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open(str(modelname) + '.tflite', 'wb').write(tflite_model)

predictions = model.predict_generator(generator=test_set, verbose=1)
classes = test_set.classes[test_set.index_array]
predicted_class_indices = np.argmax(predictions, axis=1)
target_names = [labels[k] for k in range(len(training_set.class_indices))]

# Print confusion matrix
print('Confusion matrix:')
print(confusion_matrix(test_set.classes[test_set.index_array], predicted_class_indices))

# Print classification report
print(sum(predicted_class_indices == classes) / len(test_set.classes))
print(classification_report(test_set.classes[test_set.index_array], predicted_class_indices, target_names=target_names))

Other info / logs

Traceback (most recent call last):
  File "/home/user/code/classifier.py", line 98, in <module>
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/lite/python/lite.py", line 383, in from_keras_model
    concrete_func = func.get_concrete_function()
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 776, in get_concrete_function
    self._initialize(args, kwargs, add_initializers_to=initializer_map)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 408, in _initialize
    *args, **kwds)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1848, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2150, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2041, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py", line 915, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 358, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/usr/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saving_utils.py", line 143, in _wrapped_model
    outputs_list = nest.flatten(model(inputs=inputs, training=False))
  File "/usr/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py", line 75, in symbolic_fn_wrapper
    return func(*args, **kwargs)
  File "/usr/lib/python3.7/site-packages/keras/engine/base_layer.py", line 489, in __call__
    output = self.call(inputs, **kwargs)
TypeError: call() got an unexpected keyword argument 'training'
[Finished in 666.7s with exit code 1]
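Not part of the report, but the traceback suggests a mechanism that can be shown without TensorFlow: the tf.lite tracing path calls the model with training=False, which tf.keras layers accept, while the standalone keras base layer in the traceback forwards the keyword to a call() that does not take it. A toy sketch of the keyword mismatch (class and function names here are invented stand-ins, not the real base layers):

```python
# Toy stand-ins for the two libraries' base layers (illustrative only).
class StandaloneKerasLayer:
    def call(self, inputs):                  # no 'training' parameter
        return inputs

    def __call__(self, inputs, **kwargs):
        return self.call(inputs, **kwargs)   # forwards the unexpected kwarg

class TfKerasLayer:
    def call(self, inputs, training=None):   # accepts 'training'
        return inputs

    def __call__(self, inputs, **kwargs):
        return self.call(inputs, **kwargs)

def converter_trace(model):
    # Mimics what the saving/convert path does: model(x, training=False)
    return model([1, 2, 3], training=False)

print(converter_trace(TfKerasLayer()))       # works: [1, 2, 3]
try:
    converter_trace(StandaloneKerasLayer())  # TypeError, as in the report
except TypeError as e:
    print(e)
```

Consistent with this, the likely fix (an assumption, not confirmed in the report) is to build the model from tensorflow.keras imports rather than standalone keras before handing it to tf.lite.TFLiteConverter.from_keras_model.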
tensorflow/tensorflow
Using tf.keras.backend.zeros in a while loop
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Arch Linux
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0
- Python version: 3.7
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
The effects of tf.zeros and tf.keras.backend.zeros are not the same, resulting in some inconsistent behaviour. This also holds for other functions such as tf.keras.backend.ones and others like it. Specifically, using tf.keras.backend.zeros in a tf.map_fn function (or something similar) breaks, because tf.keras.backend.zeros has a tf.init_scope which causes it to be created outside the context of the while loop.

Describe the expected behavior
Expected behaviour would be that tf.zeros and tf.keras.backend.zeros are identical, and that following the usage of tf.zeros means not changing the scope in which they are created.

Code to reproduce the issue

import tensorflow as tf
tf.compat.v1.disable_v2_behavior()

# Works, because we don't change the scope
def works(inputs):
    return tf.zeros(tf.keras.backend.shape(inputs)[0:0])

# Works, because both tf.zeros and its input are in the same scope.
# This function helps explain the root of the issue.
def works2(inputs):
    with tf.init_scope():
        return tf.zeros(tf.keras.backend.shape(inputs)[0:0])

# Breaks, because tf.keras.backend.zeros is being created in a different
# scope from tf.keras.backend.shape
def breaks(inputs):
    return tf.keras.backend.zeros(tf.keras.backend.shape(inputs)[0:0])

# Breaks, because the shape is created outside of the context of the
# tf.zeros. This function helps explain the root of the issue; it is an
# extract of how tf.keras.backend.zeros is implemented.
def breaks2(inputs):
    shape = tf.keras.backend.shape(inputs)[0:0]
    with tf.init_scope():
        return tf.zeros(shape)

inputs = tf.keras.layers.Input(shape=(5, 5))

# This works when using tf.zeros, because it doesn't change the scope
tf.map_fn(works, elems=inputs, dtype=tf.keras.backend.floatx())

# This works when tf.zeros and its input are in the same scope, using tf.init_scope
tf.map_fn(works2, elems=inputs, dtype=tf.keras.backend.floatx())

# This breaks when using tf.keras.backend.zeros, because it creates the zeros in a new scope
try:
    tf.map_fn(breaks, elems=inputs, dtype=tf.keras.backend.floatx())
except ValueError as e:
    print('Caught error: {}'.format(e))

# This breaks when using tf.zeros when its input is defined in a different scope
try:
    tf.map_fn(breaks2, elems=inputs, dtype=tf.keras.backend.floatx())
except ValueError as e:
    print('Caught error: {}'.format(e))

Other info / logs
This change was introduced by @rjpower. It would be great if I could get some feedback on why this init_scope got added there, and if it should be changed.
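Beyond the report: the "created in a different scope" failure can be mimicked with a toy graph tracker, where an init-scope escapes to the outermost graph and an op refuses inputs that belong to another graph. Every name below is invented for illustration; this is not how TensorFlow is implemented, only an analogy for why breaks() and breaks2() fail.

```python
import contextlib

class ToyGraph:
    def __init__(self, name):
        self.name = name

class Tensor:
    def __init__(self, graph):
        self.graph = graph

_graph_stack = [ToyGraph('outer')]

@contextlib.contextmanager
def loop_scope():
    # Entering a while_loop/map_fn body pushes an inner graph
    _graph_stack.append(ToyGraph('inner'))
    try:
        yield
    finally:
        _graph_stack.pop()

@contextlib.contextmanager
def init_scope():
    # Like tf.init_scope: temporarily jump back to the outermost graph
    _graph_stack.append(_graph_stack[0])
    try:
        yield
    finally:
        _graph_stack.pop()

def shape(_):
    return Tensor(_graph_stack[-1])        # op created in the current graph

def zeros(shape_tensor):
    current = _graph_stack[-1]
    if shape_tensor.graph is not current:  # input from a foreign graph
        raise ValueError('input from graph %r used in graph %r'
                         % (shape_tensor.graph.name, current.name))
    return Tensor(current)

with loop_scope():
    zeros(shape('x'))                      # fine: both ops in the inner graph

failed = False
with loop_scope():
    s = shape('x')                         # created in the inner graph
    try:
        with init_scope():                 # analogous to what K.zeros does
            zeros(s)                       # outer graph, inner input -> error
    except ValueError:
        failed = True
print(failed)  # True
```

The analogy also shows why works2() succeeds: putting both the shape op and the zeros op under the same (outer) scope keeps their graphs consistent.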
tensorflow/tensorflow
Cannot import tf.keras.engine
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 1.14 and 2.0 (GPU)
- Python version: 3.6.1
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: 10 / 7.4
- GPU model and memory: RTX 2060, 6 GB

After switching my code to tf.keras, I cannot import tf.keras.engine.

With Keras:
import keras.layers as KL
import keras.models as KM
import keras.engine as KE
# OK

With tf.keras:
import tensorflow.keras.layers as KL
import tensorflow.keras.models as KM
import tensorflow.keras.engine as KE

ModuleNotFoundError                       Traceback (most recent call last)
      1 import tensorflow.keras.layers as KL
      2 import tensorflow.keras.models as KM
----> 3 import tensorflow.keras.engine as KE

ModuleNotFoundError: No module named 'tensorflow.keras.engine'
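Not from the report: tf.keras does not expose an `engine` submodule as public API, so when porting it helps to probe which dotted import paths actually resolve. A small stdlib-only helper (generic; the TensorFlow paths in the trailing comment are the ones one would probe in an environment with TF installed):

```python
import importlib.util

def module_exists(dotted_name):
    """Return True if a dotted module path can be resolved.

    Uses importlib.util.find_spec, which locates the module without
    executing the leaf module's code.
    """
    try:
        return importlib.util.find_spec(dotted_name) is not None
    except ModuleNotFoundError:
        # A parent in the path doesn't exist or isn't a package
        return False

# Stdlib demonstration of the probe:
print(module_exists('json.decoder'))      # True
print(module_exists('json.no_such_mod'))  # False
# With TF installed, one would probe e.g. 'tensorflow.keras.engine'
# (absent) versus the private 'tensorflow.python.keras.engine'.
```

Relying on a private path like tensorflow.python.keras.engine is a workaround rather than supported API; code that only needs the base Layer class can usually import it from tensorflow.keras.layers instead.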
tensorflowtensorflow
example of audio recognition trained in TF 1.15.0 isn't able to recognize the sound "yes" on SparkFun Edge development board
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 ubuntu 18 04 tensorflow instal from source or binary create a docker container from docker image tensorflow tensorflow 1 15 0 gpu py3 jupyter inside the container get tensorflow source tree through the follow step mkdir tensorflow cd tensorflow git init git remote add origin git ls remote tag origin grep 1 15 0 38ea9bbfea423eb968fcc70bc454471277c9537c refs tag v1 15 0 git pull origin ref tag v1 15 0 tensorflow version use command below v1 15 0 python version 3 6 8 bazel version if compile from source neither instal nor use gcc compiler version if compile from source gcc version 7 4 0 ubuntu 7 4 0 1ubuntu1 18 04 1 cuda cudnn version compute capability 5 2 gpu model and memory geforce gtx 960 you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version v1 15 0 rc3 22 g590d6ee 1 15 0 describe the current behavior go through the follow step for all the step in detail please see code to reproduce the issue sparkfun edge board be be flash successfully but it wasn t able to recognize the sound yes describe the expect behavior if flash the board with the binary tf model tensorflow tensorflow lite experimental micro examples micro speech simple feature tiny conv simple feature model datum cc that tf source tree bring the sound yes be recognize perfectly could you let we know the exact environment and procedure that your tiny conv simple feature model datum cc be generate in code to reproduce the issue provide a 
reproducible test case that be the bare minimum necessary to generate the problem 1 training model refer to training model on your local machine use your local machine 1 1 in the same container that we get in system information 1 2 python tensorflow tensorflow example speech command train py model architecture tiny conv window stride 20 preprocess micro want word yes no silence percentage 25 unknown percentage 25 quantize 1 verbosity info how many training step 15000 3000 learning rate 0 001 0 0001 summary dir tmp retrain log datum dir tmp speech dataset train dir tmp speech command train the training reach 90 accuracy see 191028 log as attach 1 3 python tensorflow tensorflow example speech command freeze py model architecture tiny conv window stride 20 preprocess micro want word yes no quantize 1 output file tmp tiny conv pb start checkpoint tmp speech command train tiny conv ckpt 18000 1 4 toco graph def file tmp tiny conv pb output file tmp tiny conv tflite input shape 1 49 40 1 input array reshape 2 output array label softmax inference type quantize uint8 mean value 0 std dev value 9 8077 1 5 xxd I tmp tiny conv tflite tmp tiny conv micro feature model datum cc 1 6 modify the binary model file to tiny conv micro feature model datum new cc as attach put it into tensorflow lite experimental micro examples micro speech micro feature 2 flash the model refer to deploy to sparkfun edge deploy to sparkfun edge 2 1 make f tensorflow lite experimental micro tool make makefile target sparkfun edge tag cmsis micro speech bin 2 2 cp tensorflow lite experimental micro tool make download ambiqsuite rel2 0 0 tool apollo3 script key info0 py tensorflow lite experimental micro tool make download ambiqsuite rel2 0 0 tool apollo3 script key info py 2 3 python3 tensorflow lite experimental micro tool make download ambiqsuite rel2 0 0 tool apollo3 script create cust image blob py bin tensorflow lite experimental micro tool make gen sparkfun edge cortex m4 bin micro speech bin load 
address 0xc000 magic num 0xcb o main nonsecure ota version 0x0 2 4 python3 tensorflow lite experimental micro tool make download ambiqsuite rel2 0 0 tool apollo3 script create cust wireupdate blob py load address 0x20000 bin main nonsecure ota bin i 6 o main nonsecure wire option 0x1 2 5 walk through ai on a microcontroller with tensorflow lite and sparkfun edge 0 flash our program and bootloader into the board test it then other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach
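Step 1.5 above uses xxd -i to turn the .tflite flatbuffer into a C source file. A minimal pure-Python equivalent of that conversion (the array name is an illustrative placeholder, not the name TensorFlow's tooling emits):

```python
# Convert a binary model file's bytes into a C array declaration, the same
# shape of output that ends up in tiny_conv_micro_features_model_data.cc.
def to_c_array(data: bytes, name: str = "g_model") -> str:
    body = ", ".join("0x%02x" % b for b in data)
    return (
        "const unsigned char %s[] = {%s};\n"
        "const unsigned int %s_len = %u;\n" % (name, body, name, len(data))
    )

snippet = to_c_array(b"\x00\x01\xff")
```

In practice you would read the .tflite file in binary mode and write the returned string to the .cc file.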
tensorflowtensorflow
tensorflow_probability distributions.HiddenMarkovModel does not work with tf.function
Bug
use tensorflow 2 0 on cpu the follow function work with no error when execute in eager mode but if it be decorate with tf function I get the error at the bottom python def generate data size p 0 p 0 0 p 1 1 mu0 s0 mu1 s1 state tfd categorical prob p 0 1 p 0 transition tfd categorical prob p 0 0 1 p 0 0 1 p 1 1 p 1 1 emission tfd normal loc mu0 mu1 scale s0 s1 model tfd hiddenmarkovmodel states transition emission size return model sample datum generate datum 50 0 01 0 3 0 7 10 2 10 3 print datum and the error traceback most recent call last file home abaka abaka fin apis test py line 21 in datum generate datum 50 0 01 0 3 0 7 10 2 10 3 file home abaka venv abaka fin apis lib python3 6 site package tensorflow core python eager def function py line 457 in call result self call args kwd file home abaka venv abaka fin apis lib python3 6 site package tensorflow core python eager def function py line 503 in call self initialize args kwd add initializer to initializer map file home abaka venv abaka fin apis lib python3 6 site package tensorflow core python eager def function py line 408 in initialize args kwd file home abaka venv abaka fin apis lib python3 6 site package tensorflow core python eager function py line 1848 in get concrete function internal garbage collect graph function self maybe define function args kwargs file home abaka venv abaka fin apis lib python3 6 site package tensorflow core python eager function py line 2150 in maybe define function graph function self create graph function args kwargs file home abaka venv abaka fin apis lib python3 6 site package tensorflow core python eager function py line 2041 in create graph function capture by value self capture by value file home abaka venv abaka fin apis lib python3 6 site package tensorflow core python framework func graph py line 915 in func graph from py func func output python func func args func kwargs file home abaka venv abaka fin apis lib python3 6 site package tensorflow core python eager def 
function py line 358 in wrap fn return weak wrap fn wrap args kwd file home abaka venv abaka fin apis lib python3 6 site package tensorflow core python framework func graph py line 905 in wrapper raise e ag error metadata to exception e valueerror in convert code relative to home abaka abaka fin apis test py 19 generate datum return model sample venv abaka fin apis lib python3 6 site package tensorflow probability python distributions distribution py 848 sample return self call sample n sample shape seed name kwargs venv abaka fin apis lib python3 6 site package tensorflow probability python distributions distribution py 826 call sample n sample self sample n n seed kwargs venv abaka fin apis lib python3 6 site package tensorflow probability python distribution hide markov model py 406 sample n lambda init state tf newaxis venv abaka fin apis lib python3 6 site package tensorflow probability python internal prefer static py 176 cond return true fn venv abaka fin apis lib python3 6 site package tensorflow probability python distribution hide markov model py 397 scan multiple step initializer init state venv abaka fin apis lib python3 6 site package tensorflow core python op functional op py 508 scan maximum iteration n venv abaka fin apis lib python3 6 site package tensorflow core python op control flow op py 2675 while loop back prop back prop venv abaka fin apis lib python3 6 site package tensorflow core python op while v2 py 234 while loop len orig loop var expand composite true venv abaka fin apis lib python3 6 site package tensorflow core python op while v2 py 1068 check shape compat specify a less specific shape input t name shape t shape valueerror input tensor hiddenmarkovmodel 1 sample reshape 0 enter the loop with shape 1 1 but have shape 1 none after one iteration to allow the shape to vary across iteration use the shape invariant argument of tf while loop to specify a less specific shape
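For readers unfamiliar with what model.sample() computes here: a hidden Markov model draws an initial state, walks the transition matrix, and emits one observation per step. A seeded pure-Python sketch of that sampling loop for the two-state chain above (parameter names mirror the snippet, but this is illustrative code, not TFP's implementation):

```python
# Sample `size` Gaussian observations from a 2-state hidden Markov chain.
import random

def sample_hmm(size, p_0, p_0_0, p_1_1, mus, sigmas, seed=0):
    rng = random.Random(seed)
    # P(next state is 0 | current state): stay-prob for 0, leave-prob for 1.
    to_zero = {0: p_0_0, 1: 1.0 - p_1_1}
    state = 0 if rng.random() < p_0 else 1
    observations = []
    for _ in range(size):
        observations.append(rng.gauss(mus[state], sigmas[state]))
        state = 0 if rng.random() < to_zero[state] else 1
    return observations

obs = sample_hmm(50, 0.01, 0.3, 0.7, mus=[0.0, 10.0], sigmas=[1.0, 1.0])
```

The tf.function failure above is not about this math; it is the while_loop shape invariant tripping when the sample tensor's static shape changes across iterations.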
tensorflowtensorflow
LSTMCell names are ignored in trainable_variables when wrapped in keras.layers.RNN
Bug
I created a model with several LSTMCell cells wrapped in keras.layers.RNN. When I print trainable_variables, the cell names are ignored, which results in duplicate variable names (several identical recurrent_kernel variables, etc.) and confuses TensorBoard. For example: Colab notebook to reproduce. Environment: TensorFlow 2 release.
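For context, Keras normally avoids such collisions by uniquifying layer names per graph ("lstm_cell", "lstm_cell_1", ...); the bug is that user-supplied cell names are dropped before that step, so every cell falls back to the same default. An illustrative pure-Python sketch of that uniquification (not Keras' actual implementation):

```python
# Generate unique names by suffixing a per-base counter, the way Keras
# disambiguates repeated layer names.
from collections import defaultdict

_counts = defaultdict(int)

def unique_name(base):
    n = _counts[base]
    _counts[base] += 1
    return base if n == 0 else "%s_%d" % (base, n)

names = [unique_name("lstm_cell") for _ in range(3)]
```

When the user's name survives to this step, the three cells get distinct variable prefixes instead of three identical recurrent_kernel entries.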
tensorflowtensorflow
autograph error transforming entity: AssertionError: Bad argument number for Name: 3, expecting 4
Bug
system information os platform and distribution arch linux 5 3 7 arch1 1 arch tensorflow instal from source or binary binary tensorflow version use command below 2 0 0 kerasversion use command below 2 2 4 tf python version 3 7 4 cuda cudnn version cuda 10 1 243 cudnn 7 6 2 24 gpu model and memory 2x gtx 1080 ti 11 gb various error warning as entity initialize variable at 0x7f0f2846c320 could not be transform and will be execute as be 2019 10 27 22 54 04 017 info error transforming entity initialize variable at 0x7f0f2846c320 traceback most recent call last file usr lib python3 7 site package tensorflow core python autograph impl api py line 506 in convert call convert f conversion convert target entity program ctx file usr lib python3 7 site package tensorflow core python autograph impl conversion py line 322 in convert free nonglobal var name file usr lib python3 7 site package tensorflow core python autograph impl conversion py line 240 in convert with cache entity program ctx file usr lib python3 7 site package tensorflow core python autograph impl conversion py line 469 in convert entity to ast node name entity info convert func to ast o program ctx file usr lib python3 7 site package tensorflow core python autograph impl conversion py line 669 in convert func to ast node node to graph node context file usr lib python3 7 site package tensorflow core python autograph impl conversion py line 698 in node to graph node converter standard analysis node context be initial true file usr lib python3 7 site package tensorflow core python autograph core converter py line 383 in standard analysis node qual name resolve node file usr lib python3 7 site package tensorflow core python autograph pyct qual name py line 254 in resolve return qnresolver visit node file usr lib python3 7 ast py line 262 in visit return visitor node file usr lib python3 7 ast py line 317 in generic visit value self visit value file usr lib python3 7 ast py line 262 in visit return visitor node 
file usr lib python3 7 ast py line 317 in generic visit value self visit value file usr lib python3 7 ast py line 262 in visit return visitor node file usr lib python3 7 ast py line 326 in generic visit new node self visit old value file usr lib python3 7 ast py line 262 in visit return visitor node file usr lib python3 7 ast py line 317 in generic visit value self visit value file usr lib python3 7 ast py line 262 in visit return visitor node file usr lib python3 7 site package tensorflow core python autograph pyct qual name py line 236 in visit subscript if isinstance s value gast num attributeerror module gast have no attribute num warning tensorflow entity initialize variable at 0x7f0f2846c320 could not be transform and will be execute as be please report this to the autograph team when file the bug set the verbosity to 10 on linux export autograph verbosity 10 and attach the full output cause module gast have no attribute num 2019 10 27 22 54 04 017 warning entity initialize variable at 0x7f0f2846c320 could not be transform and will be execute as be please report this to the autograph team when file the bug set the verbosity to 10 on linux export autograph verbosity 10 and attach the full output cause module gast have no attribute num 2019 10 27 22 54 06 670890 w tensorflow core grappler optimizer implementation selector cc 310 skip optimization due to error while loading function librarie invalid argument function inference backward standard gru 10450 11011 and inference backward cudnn gru with fallback 9300 9441 specialize for statefulpartitionedcall 1 at inference distribute function 12398 both implement gru ee50c0e8 e326 45b7 b98e 88e06e2f6f01 but their signature do not match 2019 10 27 22 54 01 401 info error transforming entity traceback most recent call last file usr lib python3 7 site package tensorflow core python autograph impl api py line 506 in convert call convert f conversion convert target entity program ctx file usr lib python3 7 site package 
tensorflow core python autograph impl conversion py line 322 in convert free nonglobal var name file usr lib python3 7 site package tensorflow core python autograph impl conversion py line 240 in convert with cache entity program ctx file usr lib python3 7 site package tensorflow core python autograph impl conversion py line 471 in convert entity to ast node name entity info convert func to ast o program ctx file usr lib python3 7 site package tensorflow core python autograph impl conversion py line 669 in convert func to ast node node to graph node context file usr lib python3 7 site package tensorflow core python autograph impl conversion py line 699 in node to graph node converter apply node context function scope file usr lib python3 7 site package tensorflow core python autograph core converter py line 409 in apply node converter module transform node context file usr lib python3 7 site package tensorflow core python autograph converter function scope py line 120 in transform return functionbodytransformer ctx visit node file usr lib python3 7 site package tensorflow core python autograph core converter py line 346 in visit return super base self visit node file usr lib python3 7 site package tensorflow core python autograph pyct transformer py line 480 in visit result super base self visit node file usr lib python3 7 ast py line 262 in visit return visitor node file usr lib python3 7 site package tensorflow core python autograph converter function scope py line 87 in visit functiondef node self generic visit node file usr lib python3 7 ast py line 317 in generic visit value self visit value file usr lib python3 7 site package tensorflow core python autograph core converter py line 346 in visit return super base self visit node file usr lib python3 7 site package tensorflow core python autograph pyct transformer py line 480 in visit result super base self visit node file usr lib python3 7 ast py line 262 in visit return visitor node file usr lib python3 7 site 
package tensorflow core python autograph converter function scope py line 44 in visit return value node value file usr lib python3 7 site package tensorflow core python autograph pyct template py line 261 in replace replacement k convert to ast replacement k file usr lib python3 7 site package tensorflow core python autograph pyct template py line 223 in convert to ast return gast name i d n ctx none annotation none file usr lib python3 7 site package gast gast py line 19 in create node format name nbparam len field assertionerror bad argument number for name 3 expect 4 warning tensorflow entity could not be transform and will be execute as be please report this to the autograph team when file the bug set the verbosity to 10 on linux export autograph verbosity 10 and attach the full output cause bad argument number for name 3 expect 4 2019 10 27 22 54 01 401 warning entity could not be transform and will be execute as be please report this to the autograph team when file the bug set the verbosity to 10 on linux export autograph verbosity 10 and attach the full output cause bad argument number for name 3 expect 4 warning tensorflow from usr lib python3 7 site package tensorflow core python keras backend py 5783 sparse to dense from tensorflow python op sparse op be deprecate and will be remove in a future version
tensorflowtensorflow
unnecessary tf.distribute.MirroredStrategy scope in the distributed custom training tutorial
Bug
URL(s) with the issue: -. Description of issue (what needs changing): with strategy.scope() appears almost everywhere, which gives the impression that it is required in all those locations. However, when checking this example I found that the scope is not required at all; strategy.experimental_run_v2 and strategy.reduce alone suffice. Therefore I would propose modifying the tutorial code to keep only the minimum amount of with strategy.scope(), i.e., only where it is necessary, e.g., when defining the model. Submit a pull request: yes, I can create a PR, but only once you have approved that this is valid. Thanks.
tensorflowtensorflow
inclusion of a model re-training example
Bug
URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): since model re-training is quite vital in both applied and research-based environments, I think it would be great to include an example of it in this tutorial. Clear description: the tutorial shows how to save and load models using various options. It does mention that, using the model checkpoint, it is possible to train the model from the point it was left off; however, there is currently no section in the tutorial that shows how to do that in the correct way. For example: why should someone use this method, and how is it useful? There are several instances where a model may have to be retrained: there is new data and the model needs to be re-trained on it; or, if we are on a local machine and there is a power failure or bottleneck that causes the training process to stop, we can always load up the latest checkpoint and re-train the model from there.
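The resume-from-checkpoint workflow being requested can be sketched in pure Python, independent of Keras: persist training state each epoch, then reload it and continue where training stopped. File format and field names below are illustrative (in Keras this role is played by ModelCheckpoint files):

```python
# Save/restore a tiny training state dict and resume the epoch loop from it.
import json, os, tempfile

def save_checkpoint(path, state):
    with open(path, "w") as f:
        json.dump(state, f)

def load_checkpoint(path):
    if not os.path.exists(path):
        return {"epoch": 0}          # fresh run starts at epoch 0
    with open(path) as f:
        return json.load(f)

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
state = load_checkpoint(ckpt)
for epoch in range(state["epoch"], 3):   # train, checkpointing each epoch
    save_checkpoint(ckpt, {"epoch": epoch + 1})

resumed = load_checkpoint(ckpt)          # a new process picks up at epoch 3
```

A tutorial section would do the same with model weights: load the latest checkpoint, then call fit again with initial_epoch set from the restored state.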
tensorflowtensorflow
docs do not link to source
Bug
One really useful feature in the sklearn docs is that each and every item has a link back to its source on GitHub. For example, see RandomForestRegressor: class sklearn.ensemble.RandomForestRegressor(n_estimators='warn', criterion='mse', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False) [source: #L1046]. Would be nice if the TF docs could do the same.
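Those [source] links are generated mechanically: take an object's module path and source line and map them onto a GitHub blob URL. A minimal sketch of that mapping (repository URL and tag below are illustrative placeholders, not sklearn's actual doc-build configuration):

```python
# Build a GitHub source link from a dotted module path and a line number.
def source_url(module, line, repo="https://github.com/example/repo", tag="master"):
    path = module.replace(".", "/") + ".py"
    return "%s/blob/%s/%s#L%d" % (repo, tag, path, line)

url = source_url("sklearn.ensemble.forest", 1046)
```

A docs generator can obtain the module and line via Python's inspect module at build time, so every API page gets the link for free.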
tensorflowtensorflow
hyperlink in codelab redirects to a 404 page
Bug
32417. URL(s) with the issue: 1. Description of issue (what needs changing): update the link to redirect to the latest version of TensorFlow. Clear description: it is always better to redirect to a page that helps out a user rather than keep linking to a 404 page. Correct links: is the link to the source code correct? Raises listed and defined: the server raises a 404 page, which is an error. Usage example: it isn't exactly a use case, just a QoL change. Request visuals, if applicable: are there currently visuals? If not, will it clarify the content? Submit a pull request: yes, I love contributing in small ways since I'm still a rather newbie developer; errors, even small ones, should be dealt with, I presume.
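The fix being requested amounts to rewriting links pinned to a stale docs version so they point at a current one instead of 404ing. A sketch of that rewrite (the version strings and URL layout here are illustrative assumptions, not the codelab's actual links):

```python
# Retarget a versioned docs URL from an old version segment to a new one.
def retarget(url, old="r1.9", new="r2.0"):
    return url.replace("/versions/%s/" % old, "/versions/%s/" % new)

fixed = retarget("https://www.tensorflow.org/versions/r1.9/api_docs/python/tf")
```

Linking to an unversioned path, when one exists, avoids the problem entirely.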
tensorflowtensorflow
copying one graph to another
Bug
I want to copy a load graph to another one here be what I m try to do import tensorflow as tf import numpy as np import cv2 input name image tensor pb fname1 user vedanshu frozen graph ssd tomato l1 freeze graph pb def get frozen graph graph file read frozen graph file from disk with tf gfile fastgfile graph file rb as f graph def tf graphdef graph def parsefromstring f read return graph def trt graph1 get freeze graph pb fname1 detection graph1 tf graph with detection graph1 as default tf import graph def trt graph1 name tf sess1 tf session graph detection graph1 tf input1 tf sess1 graph get tensor by name input name 0 0 tf scores1 tf sess1 graph get tensor by name detection score 0 tf boxes1 tf sess1 graph get tensor by name detection box 0 tf classes1 tf sess1 graph get tensor by name detection class 0 tf num detections1 tf sess1 graph get tensor by name num detection 0 now I want to copy tf input1 tf scores1 tf boxes1 tf num detections1 to another graph currently I m try to use copy op to graph depricate as follow import sys detection graph2 tf graph namespace ve copy variable sys setrecursionlimit 10000000 tf num detections1 copy tf contrib copy graph copy op to graph tf num detections1 detection graph2 copy variable namespace but the python kernel be get kill without any error system information os mac os 10 13 6 tf veriosn 1 13 1 ram 8 gb
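copy_op_to_graph essentially walks an op's inputs and recreates each node in the destination graph under a namespace; done recursively, that walk can be extremely deep for a detection model, which is why the snippet above raises sys.setrecursionlimit (and why the kernel can die anyway). An illustrative pure-Python sketch of the same walk over a toy {node: [inputs]} graph, written iteratively so it cannot exhaust the stack:

```python
# Copy the subgraph reachable from `output`, remapping names into `namespace`.
def copy_subgraph(graph, output, namespace):
    copied, stack = {}, [output]
    while stack:
        node = stack.pop()
        if node in copied:
            continue
        copied[node] = "%s/%s" % (namespace, node)
        stack.extend(graph.get(node, []))
    return copied

g = {"num_detections": ["postprocess"], "postprocess": ["conv"], "conv": []}
new_names = copy_subgraph(g, "num_detections", "ns")
```

For real TF graphs, re-importing the GraphDef into the destination graph with tf.import_graph_def(..., name=namespace) achieves the same renamed copy without recursion.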
tensorflowtensorflow
TypeError: An op outside of the function building code is being passed a "Graph" tensor
Bug
system information have I write custom code yes os platform and distribution mac os catalina 10 15 19a602 tensorflow instal from binary tensorflow version 2 0 0 python version 3 7 4 gpu model and memory intel iris pro 1536 mb describe the current behavior I be get the error tensorflow python eager core symbolicexception input to eager execution function can not be keras symbolic tensor but find after have get the exception typeerror an op outside of the function building code be be pass a graph tensor see the detailed traceback below describe the expect behavior no error code to reproduce the issue from future import print function import tensorflow as tf import tensorflow probability as tfp tf compat v1 disable eager execution def get bayesian model input shape none num class 10 model tf keras sequential model add tf keras layers input shape input shape model add tfp layer convolution2dflipout 6 kernel size 5 padding same activation tf nn relu model add tf keras layer flatten model add tfp layer denseflipout 84 activation tf nn relu model add tfp layer denseflipout num class return model def get mnist datum normalize true img row img col 28 28 x train y train x test y test tf keras datasets mnist load datum if tf keras backend image datum format channel first x train x train reshape x train shape 0 1 img row img col x test x test reshape x test shape 0 1 img row img col input shape 1 img row img col else x train x train reshape x train shape 0 img row img col 1 x test x test reshape x test shape 0 img row img col 1 input shape img row img col 1 x train x train astype float32 x test x test astype float32 if normalize x train 255 x test 255 return x train y train x test y test input shape def train hyper parameter batch size 128 num class 10 epoch 1 get the training datum x train y train x test y test input shape get mnist datum get the model model get bayesian model input shape input shape num class num class prepare the model for training model compile optimizer 
tf keras optimizer adam loss sparse categorical crossentropy metric accuracy train the model model fit x train y train batch size batch size epoch epoch verbose 1 model evaluate x test y test verbose 0 if name main train other info log warn tensorflow from user nbro desktop my project venv lib python3 7 site package tensorflow probability python layers util py 104 layer add variable from tensorflow python keras engine base layer be deprecate and will be remove in a future version instruction for update please use layer add weight method instead 2019 10 25 20 38 32 504579 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2019 10 25 20 38 32 517426 I tensorflow compiler xla service service cc 168 xla service 0x7fe25e59f290 execute computation on platform host device 2019 10 25 20 38 32 517438 I tensorflow compiler xla service service cc 175 streamexecutor device 0 host default version train on 60000 sample traceback most recent call last 128 60000 eta 7 32 file user nbro desktop my project venv lib python3 7 site package tensorflow core python eager execute py line 61 in quick execute num output typeerror an op outside of the function building code be be pass a graph tensor it be possible to have graph tensor leak out of the function building context by include a tf init scope in your function build code for example the follow function will fail tf function def have init scope my constant tf constant 1 with tf init scope add my constant 2 the graph tensor have name conv2d flipout divergence kernel 0 during handling of the above exception another exception occur traceback most recent call last file user nbro desktop my project my module py line 63 in train file user nbro desktop my project my module py line 58 in train model fit x train y train batch size batch size epoch epoch verbose 1 file user nbro desktop my project venv lib python3 7 site package tensorflow core python 
keras engine training py line 728 in fit use multiprocesse use multiprocesse file user nbro desktop my project venv lib python3 7 site package tensorflow core python keras engine training v2 py line 324 in fit total epoch epoch file user nbro desktop my project venv lib python3 7 site package tensorflow core python keras engine training v2 py line 123 in run one epoch batch out execution function iterator file user nbro desktop my project venv lib python3 7 site package tensorflow core python keras engine training v2 util py line 86 in execution function distribute function input fn file user nbro desktop my project venv lib python3 7 site package tensorflow core python eager def function py line 457 in call result self call args kwd file user nbro desktop my project venv lib python3 7 site package tensorflow core python eager def function py line 520 in call return self stateless fn args kwd file user nbro desktop my project venv lib python3 7 site package tensorflow core python eager function py line 1823 in call return graph function filter call args kwargs pylint disable protect access file user nbro desktop my project venv lib python3 7 site package tensorflow core python eager function py line 1141 in filter call self capture input file user nbro desktop my project venv lib python3 7 site package tensorflow core python eager function py line 1224 in call flat ctx args cancellation manager cancellation manager file user nbro desktop my project venv lib python3 7 site package tensorflow core python eager function py line 511 in call ctx ctx file user nbro desktop my project venv lib python3 7 site package tensorflow core python eager execute py line 75 in quick execute tensor but find format keras symbolic tensor tensorflow python eager core symbolicexception input to eager execution function can not be keras symbolic tensor but find the problem be apparently relate to the layer tfp layer convolution2dflipout I know that if I use tf compat v1 disable eager 
execution() after having imported tensorflow, I do not get the mentioned error anymore, but I would like to use TensorFlow's eager execution, avoiding sessions and placeholders. I opened the same issue here.
tensorflowtensorflow
tf.keras fails to concatenate batches in predict when using a distribute strategy
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 hpc cluster mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary via pip tensorflow version use command below 2 0 0 python version 3 7 1 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version cudnn 7 6 4 cuda 10 0 130 gpu model and memory 8x geforce gtx 1080 ti with 10479 mb memory each describe the current behavior when use the distribute mirror strategy to train the network on multiple gpu s the predict step fail with this error code file cluster home user localization base model py line 76 in predict prediction self model predict x use multiprocesse false file cluster home cspreche local lib python3 7 site package tensorflow core python keras engine training py line 909 in predict use multiprocesse use multiprocesse file cluster home user local lib python3 7 site package tensorflow core python keras engine training v2 py line 462 in predict step step callback callback kwargs file cluster home user local lib python3 7 site package tensorflow core python keras engine training v2 py line 444 in model iteration total epoch 1 file cluster home user local lib python3 7 site package tensorflow core python keras engine training v2 py line 161 in run one epoch batch out aggregate predict result strategy batch out model file cluster home user local lib python3 7 site package tensorflow core python keras engine training v2 py line 631 in aggregate predict result dist util concat along batch dimension nest flatten nest out file cluster home user local lib python3 7 site package tensorflow core python keras distribute distribute training util py line 1205 in concat along batch dimension return np concatenate output valueerror all the input array dimension except for the concatenation axis must match 
exactly. When I print the dimensions of the elements in the concatenation, I get the following output: (32, 320, 320, 1) twelve times, then (10, 320, 320, 1), then (0, 0, 0, 1) three times. It seems that the empty arrays do not inherit the dimensions of the full ones, which therefore leads to a crash in the numpy concatenate function. I could solve the issue by replacing return np.concatenate(outputs) with: out = [z for z in outputs if z.shape[0] != 0]; return np.concatenate(out). Code to reproduce the issue: ds_train = tfds.load(...); ds_val = tfds.load(...); strategy = tf.distribute.MirroredStrategy(); num_gpus = strategy.num_replicas_in_sync; ds_train = ds_train.batch(32 * num_gpus); ds_val = ds_val.batch(32 * num_gpus); with strategy.scope(): input_image = Input(...); output = ...; model = tf.keras.Model(inputs=input_image, outputs=output); model.compile(...); model.fit(ds_train, validation_data=ds_val); model.predict(ds_val, use_multiprocessing=True)
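The workaround described above can be shown in a pure-Python form: per-replica predict outputs may include zero-length batches whose trailing dimensions were never filled in, so drop empty outputs before concatenating. Illustrative sketch using plain lists (the real code operates on NumPy arrays and np.concatenate):

```python
# Merge per-replica batches, skipping empty ones so shape mismatches in the
# non-batch dimensions of empty outputs cannot break the concatenation.
def concat_batches(outputs):
    non_empty = [batch for batch in outputs if len(batch) > 0]
    merged = []
    for batch in non_empty:
        merged.extend(batch)
    return merged

result = concat_batches([[1, 2, 3], [], [4], []])
```
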
tensorflowtensorflow
is it possible to run TensorFlow Lite with threading disabled (without pthreads, only 1 thread)?
Bug
I am trying to compile TensorFlow Lite for Emscripten. I am aware of TensorFlow.js, and pthreads are currently disabled. Is there a way to use it without pthreads?
tensorflowtensorflow
relu6
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): clear description (for example, why should someone use this method? how is it useful?); correct links (is the link to the source code correct?); parameters defined (are all parameters defined and formatted correctly?); returns defined (are return values defined?); raises listed and defined (are the errors defined? for example, raises); usage example (is there a usage example? see the API guide on how to write testable usage examples); request visuals, if applicable (are there currently visuals? if not, will it clarify the content?); submit a pull request? (are you planning to also submit a pull request to fix the issue? see the docs contributor guide, docs API guide, and the docs style guide).
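Since this issue is about the relu6 documentation entry, a usage example of the sort the template asks for is easy to give. relu6(x) = min(max(x, 0), 6) is a ReLU clipped at 6, used in mobile architectures such as MobileNet; a pure-Python reference implementation (illustrative, not TensorFlow's kernel):

```python
# Clip activations into [0, 6]: zero below 0, identity in between, 6 above.
def relu6(x):
    return min(max(x, 0.0), 6.0)

values = [relu6(v) for v in (-3.0, 2.5, 9.0)]
```

In TensorFlow the same computation is tf.nn.relu6, applied elementwise to a tensor.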
tensorflowtensorflow
Could not find valid device for node. Node:{{node NonMaxSuppressionV4}}
Bug
system information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04; TensorFlow installed from (source or binary): binary; TensorFlow version (use command below): tensorflow 1.14.0; Python version: 3.7.3. I am trying to use tf.image.non_max_suppression_padded, but I get an error. Maybe I wrote the code wrong; I hope somebody can help. Test code (python): import tensorflow as tf; import numpy as np; np.random.seed(0); num_objs_per_img = 10; boxes = np.sort(np.random.rand(num_objs_per_img, 4)); scores = np.random.rand(num_objs_per_img); idxs = tf.image.non_max_suppression_padded(boxes, scores, max_output_size=7, iou_threshold=0.7); print(idxs). Terminal log (text): Traceback (most recent call last): File "draft.py", line 11, in <module>: idxs = tf.image.non_max_suppression_padded(boxes, scores, max_output_size=7, iou_threshold=0.7); File "/home/xiefangyuan/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/image_ops_impl.py", line 2646, in non_max_suppression_padded: pad_to_max_output_size; File "/home/xiefangyuan/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/gen_image_ops.py", line 2561, in non_max_suppression_v4: six.raise_from(core._status_to_exception(e.code, message), None); File "<string>", line 3, in raise_from; tensorflow.python.framework.errors_impl.InternalError: Could not find valid device for node. Node:{{node NonMaxSuppressionV4}}. All kernels registered for op NonMaxSuppressionV4: device='XLA_GPU'; T in [DT_FLOAT, DT_HALF]; device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_HALF]; device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_HALF]; device='XLA_CPU'; T in [DT_FLOAT, DT_HALF]; device='CPU'; T in [DT_HALF]; device='CPU'; T in [DT_FLOAT]; [[NonMaxSuppressionV4]]
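The kernel listing in the log suggests the cause: NonMaxSuppressionV4 is registered only for float32/float16, while np.random.rand yields float64, so casting the inputs (e.g. with astype(np.float32)) is the likely fix. For readers unfamiliar with the op itself, here is what non-max suppression computes, sketched in pure Python for 1-D "boxes" (intervals) so the greedy keep/suppress loop is easy to follow; real boxes are [y1, x1, y2, x2], and this is illustrative code, not TensorFlow's kernel:

```python
# Greedy NMS: visit boxes by descending score, keep a box only if its overlap
# (IoU) with every already-kept box is at or below the threshold.
def iou(a, b):
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, max_output_size, iou_threshold):
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
        if len(keep) == max_output_size:
            break
    return keep

kept = nms([[0.0, 1.0], [0.05, 1.05], [2.0, 3.0]], [0.9, 0.8, 0.7], 7, 0.7)
```

The second interval overlaps the first almost entirely, so it is suppressed; the third is disjoint and kept.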
tensorflowtensorflow
TF 2.0 nightly: TensorFlow 2.0 nightly builds don't work with Colab TPUs
Bug
TPUStrategy can't be instantiated in Colab. I am aware that this issue might be more suitable for the Colaboratory GitHub, but many issues there are not getting responses very fast, so I think it would be better to ask you about a potential solution: whether there is a bug related to the latest build (and not the Google Colab environment), or whether it is planned that Colab will have this support in the near future. The following link demonstrates the issue: the TPU runtime is selected and the version of the installed TF 2.0 is 2.1.0-dev20191024. Thanks.
tensorflowtensorflow
TF Lite GPU delegate should use the same linker script as the main library
Bug
System information:
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Android / Ubuntu
- TensorFlow installed from (source or binary): source
- TensorFlow version: 1.15.0
- Python version: 3.6.8
- Bazel version (if compiling from source): 0.24.1
- GCC/Compiler version (if compiling from source): NDK r17b (clang)

Describe the problem:
Currently libtensorflowlite.so is built with `-Wl,--version-script` (location: tensorflow/lite/tflite_version_script.lds). The same is not done for the GPU delegate. Is there any particular reason for that? I believe it should be applied to the GPU delegate library too. I don't believe any symbols other than TFLite-related symbols are necessary. If hiding visibility by default is OK (since I think we only need the C API from the GPU delegate), then shouldn't that be the default flag for the library? Inconsistent symbol hiding makes it confusing when building both the main library and the GPU delegate. Would it be acceptable to hide symbols by default, or to use a linker script as in the main TFLite library?

Provide the exact sequence of commands / steps that you executed before running into the problem.

Any other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow
TF 2.0: LSTMCell.get_initial_state goes wrong when batch_size is a tensor
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device not clear tensorflow instal from source or binary binary tensorflow version use command below 2 0 0 python version 3 6 8 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version 10 0 gpu model and memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior lstmcell get initial state go wrong when batch size be a tensor describe the expect behavior I think it should give a good result code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem import tensorflow as tf a tf keras input shape 10 batch size 2 batch size tf shape a 0 cell tf keras layers lstmcell unit 1024 b cell get initial state batch size batch size dtype tf float32 other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach traceback most recent call last file usr local lib python3 6 dist package tensorflow core python op array op py line 2355 in zero tensor shape tensorshape shape file usr local lib python3 6 dist package tensorflow core python framework tensor shape py line 776 in init self dim as dimension d for d in dim iter file usr local lib python3 6 dist package 
tensorflow core python framework tensor shape py line 776 in self dim as dimension d for d in dim iter file usr local lib python3 6 dist package tensorflow core python framework tensor shape py line 718 in as dimension return dimension value file usr local lib python3 6 dist package tensorflow core python framework tensor shape py line 193 in init self value int value typeerror int argument must be a string a bytes like object or a number not tensor during handling of the above exception another exception occur traceback most recent call last file usr local lib python3 6 dist package tensorflow core python framework op py line 1610 in create c op c op c api tf finishoperation op desc tensorflow python framework error impl invalidargumenterror duplicate node name in graph zero pack during handling of the above exception another exception occur traceback most recent call last file test7 py line 10 in b cell get initial state batch size batch size dtype tf float32 file usr local lib python3 6 dist package tensorflow core python keras layers recurrent py line 2314 in get initial state self input batch size dtype file usr local lib python3 6 dist package tensorflow core python keras layers recurrent py line 2752 in generate zero fill state for cell return generate zero fill state batch size cell state size dtype file usr local lib python3 6 dist package tensorflow core python keras layers recurrent py line 2768 in generate zero fill state return nest map structure create zero state size file usr local lib python3 6 dist package tensorflow core python util nest py line 535 in map structure structure 0 func x for x in entry file usr local lib python3 6 dist package tensorflow core python util nest py line 535 in structure 0 func x for x in entry file usr local lib python3 6 dist package tensorflow core python keras layers recurrent py line 2765 in create zero return array op zeros init state size dtype dtype file usr local lib python3 6 dist package tensorflow core python 
op array op py line 2358 in zero shape op convert to tensor shape dtype dtype int32 file usr local lib python3 6 dist package tensorflow core python framework op py line 1184 in convert to tensor return convert to tensor v2 value dtype prefer dtype name file usr local lib python3 6 dist package tensorflow core python framework op py line 1242 in convert to tensor v2 as ref false file usr local lib python3 6 dist package tensorflow core python framework op py line 1296 in internal convert to tensor ret conversion func value dtype dtype name name as ref as ref file usr local lib python3 6 dist package tensorflow core python op array op py line 1278 in autopacke conversion function return autopacke helper v dtype name or pack file usr local lib python3 6 dist package tensorflow core python op array op py line 1214 in autopacke helper return gen array op pack elem as tensor name scope file usr local lib python3 6 dist package tensorflow core python ops gen array op py line 6304 in pack pack value value axis axis name name file usr local lib python3 6 dist package tensorflow core python framework op def library py line 793 in apply op helper op def op def file usr local lib python3 6 dist package tensorflow core python framework func graph py line 548 in create op compute device file usr local lib python3 6 dist package tensorflow core python framework op py line 3429 in create op internal op def op def file usr local lib python3 6 dist package tensorflow core python framework op py line 1773 in init control input op file usr local lib python3 6 dist package tensorflow core python framework op py line 1613 in create c op raise valueerror str e valueerror duplicate node name in graph zero pack
tensorflowtensorflow
Conv3D nodes not converted to float16 although automatic mixed precision is on
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 window 10 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device no tensorflow instal from source or binary pip install tf nightly gpu tensorflow version use command below tf nightly gpu 2 1 0 dev20191023 python version python 3 7 4 cuda cudnn version cuda 10 0 cudnn 7 6 4 gpu model and memory 2x nvidia titan rtx describe the current behavior when automatic precision be active 3d convolution op be not convert to float16 and no performance gain be observe in the output of the provide example we can see that most of the node be not convert to float16 convert 9 854 node to float16 precision use 2 cast s to float16 exclude const and variable cast apart from the warmup delay during the first epoch of the training without amp there be no difference in the duration of epoch with and without amp describe the expect behavior accord to tensorflow tensorflow core grappler optimizer auto mixed precision list h and cudnn release note 3d convolution operation on volta architecture should benefit from amp training with the test version of tf cuda and cudnn when debug with tf cpp min vlog level 2 I can see why the op be not optimize I tensorflow core grappler optimizer auto mixed precision cc 1076 skip readvariableop node model conv3d conv3d readvariableop because it must be preserve I can t figure out why those item must be preserve in the source code I think they be mark as node to preserve in grappleritem nodestopreserve in tensorflow core grappler grappler item cc but I could not find out the exact reason code to reproduce the issue import time import numpy as np import tensorflow as tf from tensorflow keras model import sequential from tensorflow keras layer import from tensorflow python client import device lib print tensorflow version be tf version x train y train x test y test tf 
keras datasets cifar10 load datum num sample 1000 x train x train num sample y train y train num sample x test x test num sample y test y test num sample def fake3d x return np repeat x np newaxis 8 axis 1 x train fake3d x train x test fake3d x test num class np max y train 1 y train tf keras util to categorical y train num class y test tf keras util to categorical y test num class def normalize ndarray ndarray ndarray astype float32 ndarray ndarray 255 0 return ndarray x train normalize x train x test normalize x test def create model num class 10 model parameter act relu pad same ini he uniform model tf keras model sequential conv3d 128 3 3 3 activation act padding pad kernel initializer ini input shape 8 32 32 3 conv3d 256 3 3 3 activation act padding pad kernel initializer ini conv3d 256 3 3 3 activation act padding pad kernel initializer ini conv3d 256 3 3 3 activation act padding pad kernel initializer ini maxpooling3d pool size 2 2 2 conv3d 256 3 3 3 activation act padding pad kernel initializer ini conv3d 256 3 3 3 activation act padding pad kernel initializer ini conv3d 512 3 3 3 activation act padding pad kernel initializer ini conv3d 512 3 3 3 activation act padding pad kernel initializer ini maxpooling3d pool size 2 2 2 conv3d 256 3 3 3 activation act padding pad kernel initializer ini conv3d 256 3 3 3 activation act padding pad kernel initializer ini conv3d 256 3 3 3 activation act padding pad kernel initializer ini conv3d 128 3 3 3 activation act padding pad kernel initializer ini maxpooling3d pool size 2 4 4 flatten batchnormalization dense 512 activation relu dense num class activation softmax return model model create model num class model summary batch size 320 n epoch 6 opt tf keras optimizer sgd learn rate 0 02 momentum 0 5 def train model mixed precision optimizer model create model num class if mixed precision import tensorflow optimizer tf train experimental enable mixed precision graph rewrite optimizer model compile loss categorical 
crossentropy optimizer optimizer metric accuracy train start time time train log model fit x train y train batch size batch size epoch n epoch use multiprocesse true worker 2 train end time time result train time train end train start train log train log return result fp32 result train model mixed precision false optimizer opt train time round fp32 result train time 1 print achieve in train time second tf keras backend clear session time sleep 10 mp result train model mixed precision true optimizer opt train time round mp result train time 1 print achieve in train time second
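The VLOG line in the report says certain nodes are skipped because they "must be preserved". As a rough illustration of how an auto-mixed-precision pass decides dtypes, here is a toy sketch: ops on an allow list are flipped to float16 unless they are marked as nodes to preserve. All names here are hypothetical; the real logic lives in tensorflow/core/grappler/optimizers/auto_mixed_precision.cc and is far more involved (it also reasons about graph connectivity):

```python
# Toy allow list: ops considered numerically safe and fast in float16.
ALLOW = {"Conv3D", "MatMul"}

def plan_dtypes(graph, preserve):
    """Assign a dtype to each node: float16 for allow-listed ops not marked preserved."""
    plan = {}
    for node, op in graph.items():
        if op in ALLOW and node not in preserve:
            plan[node] = "float16"
        else:
            plan[node] = "float32"
    return plan

graph = {"conv": "Conv3D", "read": "ReadVariableOp", "softmax": "Softmax"}
plan = plan_dtypes(graph, preserve={"read"})
assert plan == {"conv": "float16", "read": "float32", "softmax": "float32"}
```

In this sketch the preserved ReadVariableOp stays float32 while the Conv3D is converted; the open question in the report is why the real pass marks those ReadVariableOp nodes as preserved in the first place.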
tensorflowtensorflow
Potential redundancy in using np.array
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: please provide a link to the documentation entry.

Description of the issue (what needs changing): In this example we are already performing `original_img = np.array(original_img)` at the beginning. Then what is the point of repeating it here: `img = tf.constant(np.array(original_img))`, and here: `shift_down, shift_right, img_rolled = random_roll(np.array(original_img), 512)`?
tensorflowtensorflow
An issue with the code in the "Custom training with tf.distribute.Strategy" tutorial
Bug
url s with the issue description of issue what need change I be not sure if this be a bug or an issue with the tutorial but when I apply the custom training with tf distribute strategy 1 tutorial to the image segmentation 2 I just copy paste the code of the two tutorial the scale of the training loss be not good epoch 1 loss 35478 49609375 accuracy 48 560935974121094 test loss 0 8764073252677917 test accuracy 57 97665023803711 epoch 2 loss 20161 634765625 accuracy 74 82583618164062 test loss 0 6519305109977722 test accuracy 77 33595275878906 epoch 3 loss 15657 2880859375 accuracy 81 60499572753906 test loss 0 5801540017127991 test accuracy 79 94847106933594 epoch 4 loss 13322 1689453125 accuracy 84 52685546875 test loss 0 5113006830215454 test accuracy 82 27192687988281 epoch 5 loss 11845 38671875 accuracy 85 9767837524414 test loss 0 4614977538585663 test accuracy 83 19354248046875 epoch 6 loss 10827 380859375 accuracy 86 9468002319336 test loss 0 43975135684013367 test accuracy 83 65667724609375 epoch 7 loss 10006 4892578125 accuracy 87 75154113769531 test loss 0 4181833863258362 test accuracy 83 8880386352539 epoch 8 loss 9534 9345703125 accuracy 88 15916442871094 test loss 0 40620285272598267 test accuracy 84 22107696533203 epoch 9 loss 8993 767578125 accuracy 88 73575592041016 test loss 0 3957768976688385 test accuracy 84 42972564697266 epoch 10 loss 8425 7080078125 accuracy 89 38662719726562 test loss 0 37987643480300903 test accuracy 84 94923400878906 the full code to reproduce be append below it look like the training loss be scale by image height image width code import sys os import tensorflow as tf from tensorflow example model pix2pix import pix2pix import tensorflow dataset as tfds import time def normalize input image input mask input image tf cast input image tf float32 255 0 input mask 1 return input image input mask tf function def load image train datapoint input image tf image resize datapoint image 128 128 input mask tf image resize datapoint 
segmentation mask 128 128 if tf random uniform 0 5 input image tf image flip leave right input image input mask tf image flip leave right input mask input image input mask normalize input image input mask return input image input mask def load image test datapoint input image tf image resize datapoint image 128 128 input mask tf image resize datapoint segmentation mask 128 128 input image input mask normalize input image input mask return input image input mask def main dataset info tfds load oxford iiit pet 3 0 0 with info true train length info split train num example batch size 192 buffer size 1000 step per epoch train length batch size train dataset train map load image train num parallel call tf data experimental autotune test dataset test map load image test train dataset train cache shuffle buffer size batch batch size repeat train dataset train dataset prefetch buffer size tf datum experimental autotune test dataset test batch batch size output channel 3 strategy tf distribute mirroredstrategy with strategy scope base model tf keras applications mobilenetv2 input shape 128 128 3 include top false use the activation of these layer layer name block 1 expand relu 64x64 block 3 expand relu 32x32 block 6 expand relu 16x16 block 13 expand relu 8x8 block 16 project 4x4 layer base model get layer name output for name in layer name create the feature extraction model down stack tf keras model inputs base model input output layer down stack trainable false up stack pix2pix upsample 512 3 4x4 8x8 pix2pix upsample 256 3 8x8 16x16 pix2pix upsample 128 3 16x16 32x32 pix2pix upsample 64 3 32x32 64x64 def unet model output channel this be the last layer of the model last tf keras layer conv2dtranspose output channel 3 stride 2 padding same activation softmax 64x64 128x128 input tf keras layers input shape 128 128 3 x input downsample through the model skip down stack x x skip 1 skip reverse skip 1 upsampling and establish the skip connection for up skip in zip up stack 
skip x up x concat tf keras layers concatenate x concat x skip x last x return tf keras model inputs input output x model unet model output channel model compile optimizer adam loss sparse categorical crossentropy metric accuracy model compile optimizer tf keras optimizer adam 3e 4 loss tf keras loss sparsecategoricalcrossentropy tf keras loss reduction none metric sparse categorical accuracy optimizer tf keras optimizer adam 3e 4 metric tf keras metrics meaniou num class 3 epoch 2 validation step info split test num example batch size print start training start time time epoch epoch step step per epoch global batch size batch size distribute the dataset train dist dataset strategy experimental distribute dataset train dataset test dist dataset strategy experimental distribute dataset test dataset with strategy scope set reduction to none so we can do the reduction afterwards and divide by global batch size loss object tf keras loss sparsecategoricalcrossentropy reduction tf keras loss reduction none or loss fn tf keras loss sparse categorical crossentropy def compute loss label prediction per example loss loss object label prediction return tf nn compute average loss per example loss global batch size global batch size with strategy scope test loss tf keras metric mean name test loss train accuracy tf keras metric sparsecategoricalaccuracy name train accuracy test accuracy tf keras metric sparsecategoricalaccuracy name test accuracy with strategy scope def train step input image label input with tf gradienttape as tape prediction model image train true loss compute loss label prediction gradient tape gradient loss model trainable variable optimizer apply gradient zip gradient model trainable variable train accuracy update state label prediction return loss def test step input image label input prediction model image training false t loss loss object label prediction test loss update state t loss test accuracy update state label prediction with strategy scope 
experimental run v2 replicate the provide computation and run it with the distribute input tf function def distribute train step dataset input per replica loss strategy experimental run v2 train step args dataset input return strategy reduce tf distribute reduceop sum per replica loss axis none tf function def distribute test step dataset input return strategy experimental run v2 test step args dataset input train iter iter train dataset for epoch in range epoch train loop total loss 0 0 num batch 0 while true x next train iter total loss distribute train step x num batch 1 if num batch step break train loss total loss num batch test loop for x in test dist dataset distribute test step x if epoch 2 0 checkpoint save checkpoint prefix template epoch loss accuracy test loss test accuracy print template format epoch 1 train loss train accuracy result 100 test loss result test accuracy result 100 test loss reset state train accuracy reset states test accuracy reset states if name main main 1 2
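The observation that the training loss looks scaled by image_height * image_width fits how reduction=NONE behaves on segmentation targets: the loss object returns one value per pixel, and summing those values per example (rather than first averaging over the spatial dimensions) inflates the reported loss by a factor of H*W. A tiny pure-Python sketch of that scale difference, with hypothetical numbers:

```python
# Hypothetical per-pixel loss maps for a batch of 2 images of size 4x4.
h, w = 4, 4
per_pixel = [[0.5] * (h * w), [0.25] * (h * w)]

# With reduction=NONE on a segmentation target, the loss comes back per pixel.
# Summing per example inflates the reported loss by a factor of h*w:
summed = [sum(p) for p in per_pixel]              # [8.0, 4.0]

# Averaging over the spatial dimensions first keeps the per-example scale:
averaged = [sum(p) / (h * w) for p in per_pixel]  # [0.5, 0.25]

assert summed[0] / averaged[0] == h * w
```

If this diagnosis is right, averaging the per-example loss over its spatial dimensions before passing it to tf.nn.compute_average_loss would restore the expected scale; I have not verified that against the tutorial code.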
tensorflowtensorflow
Simple custom metric causes tf.function retracing when training on multiple GPUs
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary binary tensorflow version use command below v2 0 0 rc2 26 g64c3d38 2 0 0 python version 3 5 2 cuda cudnn version 9 2 gpu model and memory gtx 1080 ti describe the current behavior use the follow custom metric event without the comment part import tensorflow as tf class meaniouignorelabel tf keras metrics meaniou mean intersection over union with an ignore label def init self num class ignore label none name miou ignore label kwargs super meaniouignorelabel self init num class name name kwargs self ignore label ignore label def update state self y true y pre sample weight none y pre tf argmax y pre axis 1 if self ignore label be not none if sample weight be not none sample weight tf where y true self ignore label 0 sample weight sample weight tf where tf math equal y true self ignore label 0 sample weight else sample weight y true self ignore label sample weight tf math not equal y true self ignore label return super meaniouignorelabel self update state y true y pre sample weight I obtain the follow warn 2019 10 23 22 27 58 600906 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 warning tensorflow 5 out of the last 9 call to trigger tf function retracing tracing be expensive and the excessive number of tracing be likely due to pass python object instead of tensor also tf function have experimental relax shape true option that relax argument shape that can avoid unnecessary retracing please refer to python or tensor args and for more detail I be use tf distribute mirroredstrategy to train on multiple gpu I guess this be a bug thank
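The retracing warning usually means plain Python objects, whose values change every call, are flowing into a tf.function, so each call presents a new argument signature. A rough stdlib analogy of signature-keyed trace caching; all names are hypothetical and this is only an illustration, not the actual tf.function mechanism:

```python
class FakeFunction:
    """Caches one 'trace' (stand-in for a concrete graph) per argument signature."""

    def __init__(self):
        self.traces = {}
        self.trace_count = 0

    def __call__(self, signature):
        if signature not in self.traces:
            self.trace_count += 1             # an expensive "retrace"
            self.traces[signature] = object()  # stand-in for a built graph
        return self.traces[signature]

# Distinct Python values (e.g. a plain int changing every step) trigger a
# trace per call:
f = FakeFunction()
for step in range(5):
    f(step)
assert f.trace_count == 5

# A stable signature (same dtype/shape, as tensors of fixed shape would give)
# traces only once:
g = FakeFunction()
for step in range(5):
    g(("float32", (None, 10)))
assert g.trace_count == 1
```

If something in the metric's update_state is being called with fresh Python values under MirroredStrategy, that would explain the warning; passing tensors (or using experimental_relax_shapes, as the warning suggests) avoids the per-call retrace.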
tensorflowtensorflow
ValueError: Unknown metric function: CustomMetric when using a custom metric and loading a SavedModel-format model with tf.keras.models.load_model
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 tensorflow instal from source or binary binary tensorflow version use command below 2 0 0 python version 3 7 describe the current behavior valueerror unknown metric function custommetric occur when try to load a tf save model use tf keras model load model with a custom metric if you look at the code for load model it be clear the load model currently ignore the custom object dict for the tf save model format describe the expect behavior load model load the custom metric successfully either just implicitly or through the custom object dict code to reproduce the issue import tensorflow as tf from tensorflow import kera from tensorflow keras import layer from tensorflow keras metric import metric import numpy as np class custommetric metric def init self name score dtype tf float32 super custommetric self init name name self true positive self add weight true positive shape 10 initializer zero dtype self dtype def update state self y true y pre sample weight none pass def result self return 0 def get config self return the serializable config of the metric config base config super custommetric self get config return dict list base config item list config item def reset state self self true positive assign np zeros self num class np float32 self weight intermediate assign np zero self num class np float32 input keras input shape 784 name digit x layer dense 64 activation relu name dense 1 input x layer dense 64 activation relu name dense 2 x output layer dense 10 activation softmax name prediction x model keras model inputs input output output name 3 layer mlp model compile loss sparse categorical crossentropy 
optimizer tf keras optimizer adam lr 001 metric custommetric model save model save format tf new model keras model load model model tf keras model load model score custommetric other info log traceback most recent call last file home sentim website model prediction test load save model py line 46 in new model keras model load model model custom object score custommetric file home sentim anaconda3 envs py37 lib python3 7 site package tensorflow core python keras save save py line 150 in load model return save model load load filepath compile file home sentim anaconda3 envs py37 lib python3 7 site package tensorflow core python keras save save model load py line 93 in load model training config pylint disable protect access file home sentim anaconda3 envs py37 lib python3 7 site package tensorflow core python training tracking base py line 457 in method wrapper result method self args kwargs file home sentim anaconda3 envs py37 lib python3 7 site package tensorflow core python keras engine training py line 356 in compile self cache output metric attribute metric weight metric file home sentim anaconda3 envs py37 lib python3 7 site package tensorflow core python keras engine training py line 1901 in cache output metric attribute metric self output name output shape self loss function file home sentim anaconda3 envs py37 lib python3 7 site package tensorflow core python keras engine training util py line 813 in collect per output metric info metric name get metric name metric be weight file home sentim anaconda3 envs py37 lib python3 7 site package tensorflow core python keras engine training util py line 987 in get metric name metric metric module get metric file home sentim anaconda3 envs py37 lib python3 7 site package tensorflow core python keras metrics py line 2857 in get return deserialize identifi file home sentim anaconda3 envs py37 lib python3 7 site package tensorflow core python keras metrics py line 2851 in deserialize printable module name metric function 
file home sentim anaconda3 envs py37 lib python3 7 site package tensorflow core python keras util generic util py line 180 in deserialize keras object config module object custom object printable module name file home sentim anaconda3 envs py37 lib python3 7 site package tensorflow core python keras util generic util py line 165 in class and config for serialized keras object raise valueerror unknown printable module name class name valueerror unknown metric function custommetric
tensorflowtensorflow
Update tutorial for TensorFlow Object Detection API: generate_tfrecord is giving a tough time
Bug
Update the generate_tfrecord file according to the running version of TensorFlow.
tensorflowtensorflow
Output names lost when loading a Keras model in SavedModel format
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: Linux 5.0.0-25-generic x86_64 with Ubuntu 18.04 (Bionic)
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.0.0
- Python version: 3.7.4

Describe the current behavior:
When loading a Keras model saved in the SavedModel format, the output names are lost. Losing the output names causes loading to fail if dictionaries are used for configuring losses or metrics.

Describe the expected behavior:
Output names should be restored when loading the model, and dictionaries for losses should work when loading the model.

Code to reproduce the issue:

Output names are not restored, and outputs are given new auto-generated names:

```python
import tensorflow as tf

i = tf.keras.layers.Input((1,))
x = tf.keras.layers.Dense(1, name='my_output')(i)
m = tf.keras.Model(i, x)
m.compile(loss='mse')
m.save('my_saved_model')
m2 = tf.keras.models.load_model('my_saved_model')
assert m2.output_names[0] == 'my_output'  # AssertionError
```

The model fails to load when using a dictionary for the loss:

```python
import tensorflow as tf

i = tf.keras.layers.Input((1,))
x = tf.keras.layers.Dense(1, name='my_output')(i)
m = tf.keras.Model(i, x)
m.compile(loss={'my_output': 'mse'})
assert m.output_names[0] == 'my_output'
m.save('my_saved_model')
m2 = tf.keras.models.load_model('my_saved_model')
# ValueError: Unknown entry in loss dictionary: my_output.
# Only expected following keys: ['output_1']
```
tensorflowtensorflow
tf.strings.split bug
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 1.15.0
- Python version: 3.6.8

Describe the current behavior:
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to_sparse'

Describe the expected behavior:
The op should return a SparseTensor or RaggedTensor.

Code to reproduce the issue:

```python
import tensorflow as tf
tf.strings.split(["a b"])
```

Other info / logs: see the Colab.
tensorflowtensorflow
tf.pow overflow
Bug
Env: Colab with GPU, on TF 2.0, Python 3.

Running tf.pow with an integer power greater than 5 returns a wrong result, and to me it looks like an overflow:

```python
a = tf.constant(50)
b = tf.constant(6)
tf.pow(a, b)
```

The above returns `tf.Tensor(1554869184, shape=(), dtype=int32)`, which is mathematically wrong. Here is the correct result when using Python's math library:

```python
import math
math.pow(50, 6)
```

The above returns `15625000000.0`. Using Python's math library gives correct results for higher powers, which I'd expect tf.pow to do for integer inputs. Any explanation for this discrepancy?
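The discrepancy is consistent with int32 two's-complement wraparound: tf.pow keeps the dtype of its inputs (int32 for plain Python-int constants), while math.pow computes in float64. The reported magnitude matches 50**6 reduced modulo 2**32; the sketch below demonstrates the arithmetic (the `wrap_int32` helper is my own, not a TensorFlow function, and whether TF printed a sign may have been lost in this report):

```python
INT32_MAX = 2**31 - 1

def wrap_int32(n):
    """Reduce an arbitrary integer to its two's-complement 32-bit value."""
    return ((n + 2**31) % 2**32) - 2**31

exact = 50**6                        # 15_625_000_000
assert exact > INT32_MAX             # does not fit in int32, hence the overflow
assert wrap_int32(exact) == -1554869184

# Power 5 still fits, matching the report that only powers > 5 go wrong:
assert wrap_int32(50**5) == 50**5    # 312_500_000 < 2**31 - 1
```

Casting the inputs to int64 or float (e.g. `tf.constant(50, dtype=tf.int64)`) should avoid the wraparound for this range.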
tensorflowtensorflow
Distributed training with MirroredStrategy crashes when input shape is variable
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): pip (binary)
- TensorFlow version (use command below): tf 2.0.0
- Python version: 3.7
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: CUDA 10.0, cuDNN 7.6.4
- GPU model and memory: RTX TITAN, 24 GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior:
TensorFlow raises a ValueError ("when input_signature is provided, all inputs to the Python function must be convertible to tensors") while checking the input signature of a tf.function, if input shapes are variable in a distributed training environment. Training works without any error when I train with a single GPU, when inputs have fixed shapes, or when I delete the CUDA path from my environment PATH.

Describe the expected behavior: training should run without the error.

Code to reproduce the issue:

```python
import tensorflow as tf
import tensorflow.keras as keras
import random
import os

os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'


class Model(keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.emb = keras.layers.Embedding(51, 100)
        self.layer = keras.layers.Dense(51)

    def call(self, x):
        x = self.emb(x)
        x = self.layer(x)
        return x


@tf.function(input_signature=(tf.TensorSpec(shape=[None, None], dtype=tf.int64),
                              tf.TensorSpec(shape=[None, None], dtype=tf.int64)))
def multi_gpu_step(x, y):
    def example_update_step(x, y):
        with tf.GradientTape() as tape:
            y_ = model(x)
            batch_loss = keras.losses.sparse_categorical_crossentropy(
                y_true=y, y_pred=y_, from_logits=True)
            loss = batch_loss / strategy.num_replicas_in_sync
        step_grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(step_grads, model.trainable_variables))
        return tf.reduce_mean(batch_loss, -1)

    example_losses = strategy.experimental_run_v2(example_update_step, args=(x, y))
    loss_sum = strategy.reduce(tf.distribute.ReduceOp.SUM, example_losses, axis=0)
    return loss_sum


strategy = tf.distribute.MirroredStrategy()
data = [[i for i in range(random.randint(10, 50))] for j in range(400)]


def iterator():
    for i in range(len(data)):
        yield data[i], data[i]


with strategy.scope():
    model = Model()
    optimizer = keras.optimizers.Adam()
    dataset = tf.data.Dataset.from_generator(iterator, output_types=(tf.int64, tf.int64))
    batchfier = dataset.padded_batch(4, padded_shapes=([None], [None]))
    batchfier = strategy.experimental_distribute_dataset(batchfier)
    for x, y in batchfier:
        l = multi_gpu_step(x, y)
```

Provide a reproducible test case that is the bare minimum necessary to generate the problem.

Other info / logs:

ssh :2222 /home/bj1123/anaconda3/bin/python -u /home/bj1123/pycharm/gpt2/multi_test_variable.py
2019-10-22 14:21:01.879166: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2019-10-22 14:21:02.010621: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: TITAN RTX major: 7 minor: 5 memoryClockRate(GHz): 1.77 pciBusID: 0000:b2:00.0
2019-10-22 14:21:02.011936: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 1 with properties: name: TITAN RTX major: 7 minor: 5 memoryClockRate(GHz): 1.77 pciBusID: 0000:db:00.0
2019-10-22 14:21:02.012174: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-10-22 14:21:02.013811: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-10-22 14:21:02.015315: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] successfully opened dynamic
library libcufft so 10 0 2019 10 22 14 21 02 015650 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 0 2019 10 22 14 21 02 017535 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 0 2019 10 22 14 21 02 019038 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 0 2019 10 22 14 21 02 023539 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2019 10 22 14 21 02 028529 I tensorflow core common runtime gpu gpu device cc 1746 add visible gpu device 0 1 2019 10 22 14 21 02 028935 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 avx512f fma 2019 10 22 14 21 02 058997 I tensorflow core platform profile util cpu util cc 94 cpu frequency 2200000000 hz 2019 10 22 14 21 02 062760 I tensorflow compiler xla service service cc 168 xla service 0x55d0eb1f8ad0 execute computation on platform host device 2019 10 22 14 21 02 062799 I tensorflow compiler xla service service cc 175 streamexecutor device 0 host default version 2019 10 22 14 21 02 415212 I tensorflow compiler xla service service cc 168 xla service 0x55d0eb25ba20 execute computation on platform cuda device 2019 10 22 14 21 02 415251 I tensorflow compiler xla service service cc 175 streamexecutor device 0 titan rtx compute capability 7 5 2019 10 22 14 21 02 415260 I tensorflow compiler xla service service cc 175 streamexecutor device 1 titan rtx compute capability 7 5 2019 10 22 14 21 02 417153 I tensorflow core common runtime gpu gpu device cc 1618 find device 0 with property name titan rtx major 7 minor 5 memoryclockrate ghz 1 77 pcibusid 0000 b2 00 0 2019 10 22 14 21 02 418575 I tensorflow core common runtime gpu gpu device cc 1618 find device 1 with property name titan rtx major 7 minor 5 
memoryclockrate ghz 1 77 pcibusid 0000 db 00 0 2019 10 22 14 21 02 418622 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 0 2019 10 22 14 21 02 418637 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 0 2019 10 22 14 21 02 418652 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 0 2019 10 22 14 21 02 418667 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 0 2019 10 22 14 21 02 418682 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 0 2019 10 22 14 21 02 418697 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 0 2019 10 22 14 21 02 418714 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2019 10 22 14 21 02 424652 I tensorflow core common runtime gpu gpu device cc 1746 add visible gpu device 0 1 2019 10 22 14 21 02 424697 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 0 2019 10 22 14 21 02 428615 I tensorflow core common runtime gpu gpu device cc 1159 device interconnect streamexecutor with strength 1 edge matrix 2019 10 22 14 21 02 428630 I tensorflow core common runtime gpu gpu device cc 1165 0 1 2019 10 22 14 21 02 428636 I tensorflow core common runtime gpu gpu device cc 1178 0 n n 2019 10 22 14 21 02 428641 I tensorflow core common runtime gpu gpu device cc 1178 1 n n 2019 10 22 14 21 02 432235 I tensorflow core common runtime gpu gpu device cc 1304 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 22846 mb memory physical gpu device 0 name titan rtx pci bus i d 0000 b2 00 0 compute capability 7 5 2019 10 22 14 21 02 434469 I tensorflow 
core common runtime gpu gpu device cc 1304 create tensorflow device job localhost replica 0 task 0 device gpu 1 with 22846 mb memory physical gpu device 1 name titan rtx pci bus i d 0000 db 00 0 compute capability 7 5 2019 10 22 14 21 02 445038 I tensorflow core common runtime gpu gpu device cc 1618 find device 0 with property name titan rtx major 7 minor 5 memoryclockrate ghz 1 77 pcibusid 0000 b2 00 0 2019 10 22 14 21 02 448745 I tensorflow core common runtime gpu gpu device cc 1618 find device 1 with property name titan rtx major 7 minor 5 memoryclockrate ghz 1 77 pcibusid 0000 db 00 0 2019 10 22 14 21 02 448816 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 0 2019 10 22 14 21 02 448856 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 0 2019 10 22 14 21 02 448907 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 0 2019 10 22 14 21 02 448953 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 0 2019 10 22 14 21 02 449004 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 0 2019 10 22 14 21 02 449045 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 0 2019 10 22 14 21 02 449095 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2019 10 22 14 21 02 459964 I tensorflow core common runtime gpu gpu device cc 1746 add visible gpu device 0 1 2019 10 22 14 21 02 460089 I tensorflow core common runtime gpu gpu device cc 1159 device interconnect streamexecutor with strength 1 edge matrix 2019 10 22 14 21 02 460116 I tensorflow core common runtime gpu gpu device cc 1165 0 1 2019 10 22 14 21 02 460136 I tensorflow core common runtime gpu gpu 
device cc 1178 0 n n 2019 10 22 14 21 02 460156 I tensorflow core common runtime gpu gpu device cc 1178 1 n n 2019 10 22 14 21 02 466303 I tensorflow core common runtime gpu gpu device cc 1304 create tensorflow device device gpu 0 with 22846 mb memory physical gpu device 0 name titan rtx pci bus i d 0000 b2 00 0 compute capability 7 5 2019 10 22 14 21 02 468400 I tensorflow core common runtime gpu gpu device cc 1304 create tensorflow device device gpu 1 with 22846 mb memory physical gpu device 1 name titan rtx pci bus i d 0000 db 00 0 compute capability 7 5 warn tensorflow efficient allreduce be not support for 1 indexedslice traceback most recent call last file home bj1123 anaconda3 lib python3 7 site package tensorflow core python eager function py line 1704 in convert input to signature value dtype hint spec dtype file home bj1123 anaconda3 lib python3 7 site package tensorflow core python framework op py line 1184 in convert to tensor return convert to tensor v2 value dtype prefer dtype name file home bj1123 anaconda3 lib python3 7 site package tensorflow core python framework op py line 1242 in convert to tensor v2 as ref false file home bj1123 anaconda3 lib python3 7 site package tensorflow core python framework op py line 1296 in internal convert to tensor ret conversion func value dtype dtype name name as ref as ref file home bj1123 anaconda3 lib python3 7 site package tensorflow core python framework constant op py line 286 in constant tensor conversion function return constant v dtype dtype name name file home bj1123 anaconda3 lib python3 7 site package tensorflow core python framework constant op py line 227 in constant allow broadcast true file home bj1123 anaconda3 lib python3 7 site package tensorflow core python framework constant op py line 235 in constant impl t convert to eager tensor value ctx dtype file home bj1123 anaconda3 lib python3 7 site package tensorflow core python framework constant op py line 96 in convert to eager tensor return op 
eagertensor value ctx device name dtype valueerror attempt to convert a value perreplica 0 job localhost replica 0 task 0 device gpu 0 1 job localhost replica 0 task 0 device gpu 1 with an unsupported type to a tensor during handling of the above exception another exception occur traceback most recent call last file home bj1123 pycharm gpt2 multi test variable py line 54 in l multi gpu step x y file home bj1123 anaconda3 lib python3 7 site package tensorflow core python eager def function py line 457 in call result self call args kwd file home bj1123 anaconda3 lib python3 7 site package tensorflow core python eager def function py line 520 in call return self stateless fn args kwd file home bj1123 anaconda3 lib python3 7 site package tensorflow core python eager function py line 1822 in call graph function args kwargs self maybe define function args kwargs file home bj1123 anaconda3 lib python3 7 site package tensorflow core python eager function py line 2107 in maybe define function args kwargs file home bj1123 anaconda3 lib python3 7 site package tensorflow core python eager function py line 1650 in canonicalize function input self flat input signature file home bj1123 anaconda3 lib python3 7 site package tensorflow core python eager function py line 1710 in convert input to signature format error message input input signature valueerror when input signature be provide all input to the python function must be convertible to tensor input perreplica 0 job localhost replica 0 task 0 device gpu 0 tf tensor 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 shape 2 49 dtype int64 1 job localhost replica 0 task 0 device gpu 1 tf tensor 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 0 0 0 0 0 0 0 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 
shape 2 27 dtype int64 perreplica 0 job localhost replica 0 task 0 device gpu 0 tf tensor 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 shape 2 49 dtype int64 1 job localhost replica 0 task 0 device gpu 1 tf tensor 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 0 0 0 0 0 0 0 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 shape 2 27 dtype int64 input signature tensorspec shape none none dtype tf int64 name none tensorspec shape none none dtype tf int64 name none process finish with exit code 1 include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach
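For readers trying to understand the failure mode: `experimental_distribute_dataset` yields `PerReplica` containers (one value per GPU) rather than single tensors, and a `tf.function` with an `input_signature` tries to convert every argument to a tensor up front, which fails. The following is a minimal pure-Python sketch of that mechanism only — `PerReplica` and `convert_to_tensor` here are illustrative stand-ins, not the real TensorFlow classes:

```python
# Illustrative sketch: why a tf.function with an input_signature rejects
# distributed inputs. PerReplica is a stand-in for tf.distribute's
# per-device value container, not the real class.
class PerReplica:
    def __init__(self, values):
        self.values = values  # one value per replica/GPU


def convert_to_tensor(value):
    # Stand-in for tf.convert_to_tensor: accepts plain nested lists/numbers,
    # raises on container types it does not understand.
    if isinstance(value, (int, float, list)):
        return value
    raise ValueError(
        "When input_signature is provided, all inputs to the Python "
        "function must be convertible to tensors; got %r" % (value,))


def checked_step(x):
    # With an input_signature, every argument is converted before tracing.
    return convert_to_tensor(x)


per_replica = PerReplica([[1, 2], [3, 4]])  # a batch split across two GPUs
try:
    checked_step(per_replica)
except ValueError as e:
    print("conversion failed:", e)

# Without the up-front signature check, each replica's value is handled
# separately -- which is why dropping input_signature avoids the error.
for replica_value in per_replica.values:
    print("per-replica value:", checked_step(replica_value))
```

This suggests the workaround of removing `input_signature` from the outer distributed step (or attaching it to the inner per-replica function instead).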
tensorflow/tensorflow
person_detection.zip not present after update (in reference to issue #33552 and PR #33579)
Bug
This issue is made in reference to issue #33552 and PR #33579. When I click on the person_detection.zip link after the update, this pops out:

"This XML file does not appear to have any style information associated with it. The document tree is shown below."
NoSuchKey: The specified key does not exist. No such object: tensorflow-nightly/github/tensorflow/tensorflow/lite/experimental/micro/tools/make/gen/arduino_x86_64/prj/person_detection/person_detection.zip
tensorflow/tensorflow
TFLite: support int8 quantization for PACK with the TFLITE_BUILTINS_INT8 OpsSet
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.14
- Python version: 3.6

Describe the current behavior:
Similar to the UNPACK node issue, in the new TFLiteConverter post-training quantization flow (as described in "full integer quantization of weights and activations"), quantization of the PACK (stack) operation is not supported when only integer ops are requested in the output model. When such a conversion is attempted, the following error is reported:

RuntimeError: Quantization not yet supported for op: PACK

Code to reproduce the issue. For example, the script below:

```python
import tensorflow as tf
import numpy as np


def representative_dataset_gen():
    input_1 = np.ones((1, 10), dtype=np.float32)
    input_2 = np.ones((1, 10), dtype=np.float32)
    for _ in range(10):
        yield [input_1, input_2]


# TF graph
foo = tf.compat.v1.placeholder('float32', (1, 10))
bar = tf.compat.v1.placeholder('float32', (1, 10))
out_stack = tf.stack([foo, bar], axis=0)

with tf.compat.v1.Session() as sess:
    tf.io.write_graph(tf.compat.v1.get_default_graph(), '.', 'pack.pb', as_text=False)

input_names = ['Placeholder', 'Placeholder_1']
output_names = ['stack']
tflite_model_name = 'int8_pack.tflite'

converter = tf.lite.TFLiteConverter.from_frozen_graph('pack.pb', input_names, output_names)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.representative_dataset = representative_dataset_gen
tflite_model = converter.convert()
open(tflite_model_name, 'wb').write(tflite_model)

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(tflite_model_name)
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test the model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
```

produces errors as follows:

2019-10-21 14:02:33.682706: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-21 14:02:33.708278: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz
2019-10-21 14:02:33.708892: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x33f5a60 executing computations on platform Host. Devices:
2019-10-21 14:02:33.708924: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0):
2019-10-21 14:02:33.717107: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2019-10-21 14:02:33.717228: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
INFO: Initialized TensorFlow Lite runtime.
Traceback (most recent call last):
  File "pack_example.py", line 26, in <module>
    tflite_model = converter.convert()
  File "/home/jaszha02/work/venvs/audio/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 908, in convert
    inference_output_type)
  File "/home/jaszha02/work/venvs/audio/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 200, in _calibrate_quantize_model
    inference_output_type, allow_float)
  File "/home/jaszha02/work/venvs/audio/lib/python3.6/site-packages/tensorflow/lite/python/optimize/calibrator.py", line 78, in calibrate_and_quantize
    np.dtype(output_type.as_numpy_dtype()).num, allow_float)
  File "/home/jaszha02/work/venvs/audio/lib/python3.6/site-packages/tensorflow/lite/python/optimize/tensorflow_lite_wrap_calibration_wrapper.py", line 115, in QuantizeModel
    return _tensorflow_lite_wrap_calibration_wrapper.CalibrationWrapper_QuantizeModel(self, input_py_type, output_py_type, allow_float)
RuntimeError: Quantization not yet supported for op: PACK

Both kTfLiteUInt8 and kTfLiteInt8 versions of the PACK operator are already implemented in TFLite (see pack.cc), so it should be straightforward to support PACK in the TFLite converter as well.
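Supporting PACK in the converter should be cheap because stacking performs no arithmetic: it only requires that all inputs and the output share one scale and zero point. A hedged numpy sketch of that requantization-free behavior (illustrative only, not the converter's actual code):

```python
import numpy as np


def quantize_int8(x, scale, zero_point):
    # Affine int8 quantization: q = round(x / scale) + zero_point.
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)


def dequantize_int8(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale


def pack_int8(tensors, scale, zero_point, axis=0):
    # PACK/stack on quantized tensors is just np.stack, provided every
    # input shares the same (scale, zero_point); since no arithmetic is
    # performed, no requantization is needed.
    return np.stack(tensors, axis=axis)


scale, zp = 0.05, 0
a = np.linspace(-1.0, 1.0, 10).astype(np.float32)
b = np.linspace(1.0, -1.0, 10).astype(np.float32)
qa = quantize_int8(a, scale, zp)
qb = quantize_int8(b, scale, zp)
packed = pack_int8([qa, qb], scale, zp)
recovered = dequantize_int8(packed, scale, zp)
# Round-trip error is bounded by half a quantization step (scale / 2).
print(np.max(np.abs(recovered - np.stack([a, b]))))
```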
tensorflow/tensorflow
Failure to load and remap a 2-D tensor from checkpoint when a variance_scaling initializer is used
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Darwin; kernel version: Darwin Kernel Version 17.7.0: Wed Feb 27 00:43:23 PST 2019; root:xnu-4570.71.35~1/RELEASE_X86_64; OS release version 17.7.0; OS platform Darwin-17.7.0-x86_64-i386-64bit; Mac version 10.13.6 (x86_64)
- TensorFlow installed from source: no
- TensorFlow version (use command below): tf.VERSION = 1.14.0; tf.GIT_VERSION = v1.14.0-rc1-22-gaf24dc91b5; compiler version 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.11.45.5)
- Python version: 2.7.15 (final)
- Bazel / GCC / CUDA / cuDNN / GPU: no

Describe the current behavior:
I train with the Estimator API and I wish to warm-start a 2-D variable which I use as an embedding. My old model has a different embedding vocabulary than the new one, so I pass a WarmStartSettings structure including tf.estimator.VocabInfo. If I pass tf.estimator.VocabInfo without specifying a backup_initializer, the zeros initializer is used as default, and the warm start and training finish successfully. However, when I pass a backup_initializer like tf.compat.v1.initializers.variance_scaling, I get a TypeError. It looks like this is caused because the initializer requires its shape to be a non-tensor, but gen_checkpoint_ops.generate_vocab_remapping returns tensors, which are passed to the initializer as the shape.

Describe the expected behavior:
The ability to load and remap a 2-D tensor from a checkpoint when a custom initializer is used, so I can eventually warm-start whenever my new model's embeddings have a different vocabulary than the old one.

Code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.python.training import checkpoint_ops

if __name__ == '__main__':
    initializer = tf.compat.v1.initializers.variance_scaling(
        scale=0.175, mode='fan_in', distribution='uniform')
    old_row_vocab_file = 'old_vocab_path'  # text file with 2 rows
    new_row_vocab_file = 'new_vocab_path'  # text file with 2 rows
    checkpoint_ops._load_and_remap_matrix(
        'ckpt_path',
        old_tensor_name='old_tensor_name',
        new_row_vocab_offset=0,
        num_rows_to_load=2,
        new_col_vocab_size=3,
        initializer=initializer,
        old_row_vocab_size=-1,
        old_row_vocab_file=old_row_vocab_file,
        new_row_vocab_file=new_row_vocab_file)
```

Other info / logs:

  File "env/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1365, in _train_with_estimator_spec
    warm_starting_util.warm_start(*self._warm_start_settings)
  File "env/lib/python2.7/site-packages/tensorflow/python/training/warm_starting_util.py", line 460, in warm_start
    axis=vocab_info.axis)
  File "env/lib/python2.7/site-packages/tensorflow/python/training/warm_starting_util.py", line 301, in _warm_start_var_with_vocab
    init, shape=v_shape, partition_info=partition_info)
  File "env/lib/python2.7/site-packages/tensorflow/python/training/checkpoint_ops.py", line 414, in _initializer
    max_rows_in_memory=max_rows_in_memory)
  File "env/lib/python2.7/site-packages/tensorflow/python/training/checkpoint_ops.py", line 179, in _load_and_remap_matrix
    num_rows_present, num_cols_present, -1)
  File "env/lib/python2.7/site-packages/tensorflow/python/ops/init_ops.py", line 515, in __call__
    fan_in, fan_out = _compute_fans(scale_shape)
  File "env/lib/python2.7/site-packages/tensorflow/python/ops/init_ops.py", line 1447, in _compute_fans
    return int(fan_in), int(fan_out)
TypeError: int() argument must be a string or a number, not 'Tensor'
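The crash site is the fan computation, which calls `int()` on each shape entry; a symbolic Tensor (as produced by the remapping op) is not int-convertible. Here is a small pure-Python sketch of that logic — my reconstruction for illustration, not TensorFlow's exact source, with `SymbolicDim` standing in for a Tensor dimension:

```python
def compute_fans(shape):
    # Simplified version of the fan computation used by variance_scaling:
    # for a 2-D weight matrix, fan_in is rows and fan_out is columns.
    if len(shape) < 1:
        fan_in = fan_out = 1
    elif len(shape) == 1:
        fan_in = fan_out = shape[0]
    else:
        fan_in, fan_out = shape[-2], shape[-1]
    # int() is where a symbolic shape entry blows up with TypeError.
    return int(fan_in), int(fan_out)


print(compute_fans((2, 3)))  # static shape: fine -> (2, 3)


class SymbolicDim:
    # Stand-in for a Tensor dimension with no static value.
    pass


try:
    compute_fans((SymbolicDim(), 3))
except TypeError as e:
    print("fails on symbolic shapes:", e)
```

This is why the zeros default works (it never inspects the shape values statically) while variance_scaling does not.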
tensorflow/tensorflow
BatchMatMul on GPU: wrong results
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- OS platform and distribution: Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): pip install tensorflow-gpu
- TensorFlow version (use command below): 2.0
- Python version: 3.7.3
- CUDA/cuDNN version:
- GPU model and memory: Quadro P4000, 8 GB

Describe the current behavior: BatchMatMul on GPU gives wrong results.

Describe the expected behavior: BatchMatMul should give correct results.

Code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

h, w = 480, 640
xs = tf.random.normal((h, w, 3))
m = tf.random.normal((3, 3))

with tf.device('/gpu:0'):
    mvgpu = tf.linalg.matvec(m[None, None, None], xs[None])
with tf.device('/cpu:0'):
    mvcpu = tf.linalg.matvec(m[None, None, None], xs[None])

diff = mvgpu - mvcpu
print(diff)  # a lot of large differences
assert not np.allclose(diff, 0)
```

Other info / logs:

$ python diff.py
tf.Tensor(
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
 2.7437444 2.3577256 3.3067198 1.0874022 0.7325933 2.4926019
 2.3672104 1.6497741 2.1795664 1.4023315 0.68545413 1.9975882
 0.44164103 1.6766946 0.6477224 2.5612988 0.7603723 4.286979
 0.9761815 1.5332515 0.12404668 1.871354 0.73664904 1.0545558
 1.2774239 1.6571116 2.36473 2.4847538 1.5389507 1.5169847
 2.2117858 3.3593345 0.15468699 0.82464755 2.0175495 0.11506134
 2.8697317 0.54362947 0.10824746 1.6531861 4.2324843 0.97695297
 4.3966837 3.0250916 1.8032213 4.5854487 2.0374475 1.1027236
 2.7206469 0.864337 1.3582373 0.98220587 0.53226703 4.277097]
shape=(1, 480, 640, 3), dtype=float32
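For triage, it helps to have a TensorFlow-independent reference for what the broadcast matrix-vector product should return. The numpy sketch below (with small stand-in dimensions for the 480x640 image in the report) checks an explicit per-pixel loop against the equivalent einsum, which is the value both the CPU and GPU kernels should reproduce:

```python
import numpy as np

h, w = 4, 6  # small stand-ins for the 480x640 image in the report
xs = np.random.randn(h, w, 3).astype(np.float32)
m = np.random.randn(3, 3).astype(np.float32)

# Reference 1: explicit loop over pixels, m @ v for each 3-vector.
loop = np.empty_like(xs)
for i in range(h):
    for j in range(w):
        loop[i, j] = m @ xs[i, j]

# Reference 2: einsum over the last axis, which is what the broadcast
# matvec is defined to compute.
ein = np.einsum('ab,hwb->hwa', m, xs)

print(np.abs(loop - ein).max())  # ~0, exact up to float rounding
assert np.allclose(loop, ein, atol=1e-5)
```

Comparing either device's output against this reference isolates which kernel is wrong, rather than only showing the two devices disagree.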
tensorflow/tensorflow
Incorrect Arduino person_detection zip package
Bug
URL(s) with the issue: the "Obtain and import the library" section.

Description of issue (what needs changing): under the heading "Obtain and import the library", the link incorrectly refers to the micro_speech zip package instead of a person_detection example.
tensorflow/tensorflow
tf.keras model.fit ignores class_weight when passed a tf.data.Dataset
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): macOS 10.14.6, or Colab
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0 / 1.15.0-rc3
- Python version: 3.7
- CUDA/cuDNN version:
- GPU model and memory: none, or Colab

Describe the current behavior:
tf.keras model.fit ignores class_weight when passed a tf.data.Dataset. When passed an instance of tf.data.Dataset and a class_weight dictionary with nonsensical label keys, it runs without exception, whereas it correctly raises ValueError when passed a pair of np.ndarrays.

Describe the expected behavior:
tf.keras model.fit should apply class_weight when passed a tf.data.Dataset, and it should raise an exception for incorrect class_weight keys.

Code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

features = np.array([[1], [1]], dtype=np.float32)
labels = np.array([0, 1], dtype=np.int32)
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='sgd', loss='binary_crossentropy')

class_weight = {'bad_negative_label': 0.5, 'bad_positive_label': 0.5}
fit_ops = {
    'tf.data.Dataset': lambda: model.fit(dataset, class_weight=class_weight, verbose=0),
    'np.ndarray': lambda: model.fit(features, labels, class_weight=class_weight, verbose=0),
}

for key, fit_op in fit_ops.items():
    try:
        print(f'fitting {key} with bad class_weight labels')
        fit_op()
    except ValueError as e:
        print('failed, as it should have')
        raise ValueError from e
    else:
        print('succeeded, but should have failed')
```

Other info / logs:

fit tf.data.Dataset with bad class_weight labels: succeeded, but should have failed
fit np.ndarray with bad class_weight labels: failed, as it should have
Traceback (most recent call last):
  File "scratch.py", line 22, in <module>
    fit_op()
  File "scratch.py", line 17, in <lambda>
    'np.ndarray': lambda: model.fit(features, labels, class_weight=class_weight, verbose=0),
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 728, in fit
    use_multiprocessing=use_multiprocessing)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 224, in fit
    distribution_strategy=strategy)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 547, in _process_training_inputs
    use_multiprocessing=use_multiprocessing)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 594, in _process_inputs
    steps=steps)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 2530, in _standardize_user_data
    feed_sample_weight_modes)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 2529, in <listcomp>
    for (ref, sw, cw, mode) in zip(y, sample_weights, class_weights, ...)
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 946, in standardize_weights
    class_weight, existing_classes, existing_class_weight)
ValueError: `class_weight` must contain all classes in the data. The classes {0, 1} exist in the data but not in `class_weight`.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "scratch.py", line 25, in <module>
    raise ValueError from e
ValueError

Process finished with exit code 1
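Until `fit` honors `class_weight` for dataset inputs, a common workaround is to fold the class weights into per-example sample weights yourself (e.g. inside `dataset.map`). The weight lookup itself, including the missing-class check that `fit` performs for array inputs, reduces to something like this (the `class_weight` values below are hypothetical, for illustration):

```python
import numpy as np


def class_to_sample_weights(labels, class_weight):
    # Map each integer label to its class weight, raising on classes the
    # dict does not cover -- mirroring the check fit does for array inputs.
    missing = set(np.unique(labels)) - set(class_weight)
    if missing:
        raise ValueError(
            '`class_weight` must contain all classes in the data; '
            'missing: %s' % sorted(missing))
    return np.array([class_weight[int(y)] for y in labels], dtype=np.float32)


labels = np.array([0, 1, 1, 0], dtype=np.int32)
print(class_to_sample_weights(labels, {0: 0.25, 1: 0.75}))

try:
    class_to_sample_weights(labels, {'bad_negative_label': 0.5})
except ValueError as e:
    print('rejected, as fit should:', e)
```

The resulting array can be passed as the third element of each dataset tuple, which `fit` does consume as `sample_weight`.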
tensorflow/tensorflow
Errors following the steps in "Building Standard TensorFlow ModelServer"
Bug
URL(s) with the issue:

Description of issue (what needs changing): when I attempt to execute the `tools/run_in_docker.sh python tensorflow_serving/example/mnist_saved_model.py --training_iteration=100 --model_version=1 /tmp/mnist` step in the tutorial, I get this error:

```
serving$ tools/run_in_docker.sh python tensorflow_serving/example/mnist_saved_model.py --training_iteration=100 --model_version=1 /tmp/mnist
== Pulling docker image: tensorflow/serving:nightly-devel
nightly-devel: Pulling from tensorflow/serving
22e816666fd6: Already exists
079b6d2a1e53: Already exists
11048ebae908: Already exists
c58094023a2e: Already exists
e9d1145448f7: Pull complete
3b2b266356de: Pull complete
9f9b2b982b72: Pull complete
ede8854b3a01: Pull complete
7bb55a638df9: Pull complete
bdd9b510b8a7: Pull complete
90a5454f6928: Pull complete
1941316fdbd3: Pull complete
c9c9a434ee49: Pull complete
Digest: sha256:3b52152115c73a6be79a86cda94c4c94569df9b490a3e40c2530d5a9a007afac
Status: Downloaded newer image for tensorflow/serving:nightly-devel
docker.io/tensorflow/serving:nightly-devel
== Running cmd: sh -c "cd user/remove/github/serving; python tensorflow_serving/example/mnist_saved_model.py --training_iteration=100 --model_version=1 /tmp/mnist"
Traceback (most recent call last):
  File "tensorflow_serving/example/mnist_saved_model.py", line 39, in <module>
    tf.app.flags.DEFINE_integer('training_iteration', 1000, ...)
AttributeError: 'module' object has no attribute 'app'
```

I am running this on macOS Catalina with 8 GB of RAM allocated to the Docker engine.
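The script relies on the TF1-only `tf.app.flags` API, which is absent when the container resolves a TF2-style `tensorflow` module. A sketch of porting the two flags to the standard library — the real fix is pinning a TF1 image or using `tf.compat.v1.app`/absl-py, so treat this only as an illustration of the flag semantics:

```python
import argparse

# Equivalent of: tf.app.flags.DEFINE_integer('training_iteration', 1000, ...)
parser = argparse.ArgumentParser(description='mnist_saved_model flags (sketch)')
parser.add_argument('--training_iteration', type=int, default=1000,
                    help='number of training iterations')
parser.add_argument('--model_version', type=int, default=1,
                    help='version number of the exported model')

# Same invocation as in the tutorial command line.
args = parser.parse_args(['--training_iteration', '100', '--model_version', '1'])
print(args.training_iteration, args.model_version)  # 100 1
```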
tensorflow/tensorflow
Broken link (pybind11 mirror)
Bug
Hi, the following URL seems to be down. Bazel uses it to download pybind11, and for some unknown reason the fallback links are never used as an alternative (is there a fallback, btw?).

Build log:

```
Analyzing: target //tensorflow/tools/pip_package:build_pip_package (192 packages loaded, 2655 targets configured)
INFO: Call stack for the definition of repository 'pybind11' which is a tf_http_archive (rule definition at):
  <root>/tensorflow-1.15.0/third_party/repo.bzl:124:19
  <root>/tensorflow-1.15.0/tensorflow/workspace.bzl:925:5
  <root>/tensorflow-1.15.0/WORKSPACE:19:1
ERROR: An error occurred during the fetch of repository 'pybind11':
   java.io.IOException: Error downloading to /root/.cache/bazel/_bazel_root/a0cf5ef42c8f00571631f8815d38246b/external/pybind11/v2.3.0.tar.gz: All mirrors are down: [GET returned 404 Not Found, connect timed out]
```

Any workaround?

System information:
- OS platform and distribution: Ubuntu 19.10
- TensorFlow version: 1.15.0
- Python version: 3.7
- Bazel version: 0.26.1
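One possible workaround (an untested sketch, not an official fix) exploits the fact that Bazel keeps the first definition of a repository name: predeclare `pybind11` in your own WORKSPACE, before TensorFlow's workspace rules run, pointing straight at the GitHub archive instead of the dead mirror. The `strip_prefix` and the need for the archive's real sha256 are assumptions you should verify against `tensorflow/workspace.bzl`:

```starlark
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# Declared before TensorFlow's workspace() so this definition wins.
http_archive(
    name = "pybind11",
    urls = ["https://github.com/pybind/pybind11/archive/v2.3.0.tar.gz"],
    strip_prefix = "pybind11-2.3.0",
    # sha256 = "<fill in the archive checksum from tensorflow/workspace.bzl>",
)
```

Alternatively, pre-downloading the tarball and pointing Bazel at the directory with `--distdir` avoids contacting any mirror at all.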
tensorflow/tensorflow
Error while trying to use tf.broadcast_weights
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux (Google Colab)
- Mobile device: n/a (Google Colab)
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0
- Python version: 3.x

Describe the current behavior: unable to import tf.broadcast_weights in TF 2.0.

Describe the expected behavior: should be able to import tf.broadcast_weights in TF 2.0.

Code to reproduce the issue:

Method 1 (plain Python, TF 2.0):

```python
import tensorflow as tf  # __version__ == '2.0'
tf.broadcast_weights
# throws AttributeError: module 'tensorflow' has no attribute 'broadcast_weights'
```

Method 2 (codelab): I found this error in a recent TF 2.0 Keras tutorial (search for broadcast_weights in this codelab). Run all cells before this one, then modify the code

```python
m.update_state([0, 1, 1, 1], [0, 1, 0, 0])
```

to

```python
m.update_state([0, 1, 1, 1], [0, 1, 0, 0], sample_weight=[0.1, 0.2, 0.3, 0.4])
```

Running this cell throws AttributeError: module 'tensorflow' has no attribute 'broadcast_weights'.
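As the error shows, `broadcast_weights` is simply not in the public `tensorflow` namespace in 2.0; the metric machinery performs the broadcast internally when you pass `sample_weight` to `update_state`. The underlying operation is ordinary broadcasting of the weights against the values, sketched here in numpy (illustrative, not the TensorFlow implementation):

```python
import numpy as np


def broadcast_weights(weights, values):
    # Broadcast `weights` to the shape of `values`, the way a metric
    # consumes sample_weight; multiplying by ones_like is the classic trick.
    return np.asarray(weights) * np.ones_like(values, dtype=np.float32)


values = np.array([1.0, 1.0, 0.0, 0.0], dtype=np.float32)  # per-example matches
weights = [0.1, 0.2, 0.3, 0.4]

w = broadcast_weights(weights, values)
# Weighted reduction of the kind update_state performs:
print((values * w).sum() / w.sum())
```

Since the per-example weights here already match the value shape, the broadcast is a no-op; the same helper also stretches a scalar or a lower-rank weight array across a batch.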
tensorflow/tensorflow
TF 2.0: using Keras metrics in TPU training results in an error
Bug
I am trying to train a BERT model on TPU in Google Colab. I changed the metrics list passed to the model in the compile method to:

```python
bert_model.compile(optimizer=optimizer, loss=loss_fn, metrics=get_metrics())
```

where `get_metrics` is a function which returns a list of metrics: 'accuracy', plus instances of the built-in tensorflow.keras Recall and Precision metrics:

```python
from tensorflow.keras.metrics import Recall, Precision

def get_metrics():
    return ['accuracy', Recall(), Precision()]
```

Training results in the following error after one epoch ends, before validation statistics are displayed:

I1018 16:34:07.313311 140541208393600 remote.py:151] Entering into master device scope: /job:worker/replica:0/task:0
2019-10-18 16:34:07.359467: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2019-10-18 16:34:07.465723: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2019-10-18 16:34:07.465842: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (7b6f1b4d4089): /proc/driver/nvidia/version does not exist
2019-10-18 16:34:07.466260: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-10-18 16:34:07.472748: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300000000 Hz
2019-10-18 16:34:07.473076: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3172f40 executing computations on platform Host. Devices:
2019-10-18 16:34:07.473114: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
2019-10-18 16:34:07.475920: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:258] Initialize GrpcChannelCache for job worker -> {0 -> 10.29.203.98:8470}
2019-10-18 16:34:07.475955: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:258] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:30501}
2019-10-18 16:34:07.476742: I tensorflow
core distribute runtime rpc grpc server lib cc 365 start server with target grpc localhost 30501 2019 10 18 16 34 07 497844 I tensorflow core distribute runtime rpc grpc channel cc 258 initialize grpcchannelcache for job worker 0 10 29 203 98 8470 2019 10 18 16 34 07 497905 I tensorflow core distribute runtime rpc grpc channel cc 258 initialize grpcchannelcache for job localhost 0 localhost 30501 info tensorflow initialize the tpu system 10 29 203 98 8470 i1018 16 34 07 499603 140541208393600 tpu strategy util py 70 initialize the tpu system 10 29 203 98 8470 info tensorflow clear out eager cache i1018 16 34 15 119202 140541208393600 tpu strategy util py 94 clear out eager cache info tensorflow finish initialize tpu system i1018 16 34 15 121769 140541208393600 tpu strategy util py 114 finish initialize tpu system info tensorflow find tpu system i1018 16 34 15 128222 140541208393600 tpu system metadata py 148 find tpu system info tensorflow num tpu core 8 i1018 16 34 15 128440 140541208393600 tpu system metadata py 149 num tpu core 8 info tensorflow num tpu worker 1 i1018 16 34 15 129121 140541208393600 tpu system metadata py 150 num tpu worker 1 info tensorflow num tpu core per worker 8 i1018 16 34 15 129209 140541208393600 tpu system metadata py 152 num tpu core per worker 8 info tensorflow available device deviceattribute job localhost replica 0 task 0 device cpu 0 cpu 0 0 i1018 16 34 15 129295 140541208393600 tpu system metadata py 154 available device deviceattribute job localhost replica 0 task 0 device cpu 0 cpu 0 0 info tensorflow available device deviceattribute job localhost replica 0 task 0 device xla cpu 0 xla cpu 0 0 i1018 16 34 15 129720 140541208393600 tpu system metadata py 154 available device deviceattribute job localhost replica 0 task 0 device xla cpu 0 xla cpu 0 0 info tensorflow available device deviceattribute job worker replica 0 task 0 device cpu 0 cpu 0 0 i1018 16 34 15 129811 140541208393600 tpu system metadata py 154 available device 
deviceattribute job worker replica 0 task 0 device cpu 0 cpu 0 0 info tensorflow available device deviceattribute job worker replica 0 task 0 device tpu 0 tpu 0 0 i1018 16 34 15 129892 140541208393600 tpu system metadata py 154 available device deviceattribute job worker replica 0 task 0 device tpu 0 tpu 0 0 info tensorflow available device deviceattribute job worker replica 0 task 0 device tpu 1 tpu 0 0 i1018 16 34 15 129969 140541208393600 tpu system metadata py 154 available device deviceattribute job worker replica 0 task 0 device tpu 1 tpu 0 0 info tensorflow available device deviceattribute job worker replica 0 task 0 device tpu 2 tpu 0 0 i1018 16 34 15 130045 140541208393600 tpu system metadata py 154 available device deviceattribute job worker replica 0 task 0 device tpu 2 tpu 0 0 info tensorflow available device deviceattribute job worker replica 0 task 0 device tpu 3 tpu 0 0 i1018 16 34 15 130121 140541208393600 tpu system metadata py 154 available device deviceattribute job worker replica 0 task 0 device tpu 3 tpu 0 0 info tensorflow available device deviceattribute job worker replica 0 task 0 device tpu 4 tpu 0 0 i1018 16 34 15 130197 140541208393600 tpu system metadata py 154 available device deviceattribute job worker replica 0 task 0 device tpu 4 tpu 0 0 info tensorflow available device deviceattribute job worker replica 0 task 0 device tpu 5 tpu 0 0 i1018 16 34 15 130281 140541208393600 tpu system metadata py 154 available device deviceattribute job worker replica 0 task 0 device tpu 5 tpu 0 0 info tensorflow available device deviceattribute job worker replica 0 task 0 device tpu 6 tpu 0 0 i1018 16 34 15 130358 140541208393600 tpu system metadata py 154 available device deviceattribute job worker replica 0 task 0 device tpu 6 tpu 0 0 info tensorflow available device deviceattribute job worker replica 0 task 0 device tpu 7 tpu 0 0 i1018 16 34 15 130436 140541208393600 tpu system metadata py 154 available device deviceattribute job worker replica 0 
task 0 device tpu 7 tpu 0 0 info tensorflow available device deviceattribute job worker replica 0 task 0 device tpu system 0 tpu system 0 0 i1018 16 34 15 130511 140541208393600 tpu system metadata py 154 available device deviceattribute job worker replica 0 task 0 device tpu system 0 tpu system 0 0 info tensorflow available device deviceattribute job worker replica 0 task 0 device xla cpu 0 xla cpu 0 0 i1018 16 34 15 130593 140541208393600 tpu system metadata py 154 available device deviceattribute job worker replica 0 task 0 device xla cpu 0 xla cpu 0 0 i1018 16 34 15 248266 140541208393600 train py 212 training use tf 2 0 kera compile fit api with distrubute strategy warning tensorflow expect a shuffle dataset but input dataset x be not shuffle please invoke shuffle on input dataset w1018 16 35 33 236943 140541208393600 training util py 1547 expect a shuffle dataset but input dataset x be not shuffle please invoke shuffle on input dataset train on 129 step validate on 65 step epoch 1 5 2019 10 18 16 38 03 018892 I tensorflow core profiler lib profiler session cc 184 profiler session start 2019 10 18 16 38 03 020371 e tensorflow core platform default device tracer cc 70 cuda error cuda error no device 1 129 eta 5 12 28 loss 1 0083 accuracy 0 2031 recall 0 1719 precision 0 2619warne tensorflow method on train batch end be slow compare to the batch update 1 610206 check your callback w1018 16 38 06 456197 140541208393600 callback py 244 method on train batch end be slow compare to the batch update 1 610206 check your callback 128 129 eta 1s loss 0 5022 accuracy 0 7563 recall 0 5862 precision 0 81392019 10 18 16 38 45 271991 e tensorflow core distribute runtime eager remote tensor handle datum cc 50 unable to destroy remote tensor handle unable to find the relevant tensor remote handle op i d 55877 output num 0 additional grpc error information create 1571416725 271891392 description error receive from peer file external grpc src core lib surface call cc file line 
1039 grpc message unable to find the relevant tensor remote handle op i d 55877 output num 0 grpc status 3 2019 10 18 16 38 45 272429 e tensorflow core distribute runtime eager remote tensor handle datum cc 50 unable to destroy remote tensor handle unable to find the relevant tensor remote handle op i d 55877 output num 1 additional grpc error information create 1571416725 272350919 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message unable to find the relevant tensor remote handle op i d 55877 output num 1 grpc status 3 2019 10 18 16 38 45 272841 e tensorflow core distribute runtime eager remote tensor handle datum cc 50 unable to destroy remote tensor handle unable to find the relevant tensor remote handle op i d 55877 output num 2 additional grpc error information create 1571416725 272756237 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message unable to find the relevant tensor remote handle op i d 55877 output num 2 grpc status 3 2019 10 18 16 38 45 273165 e tensorflow core distribute runtime eager remote tensor handle datum cc 50 unable to destroy remote tensor handle unable to find the relevant tensor remote handle op i d 55877 output num 3 additional grpc error information create 1571416725 273105048 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message unable to find the relevant tensor remote handle op i d 55877 output num 3 grpc status 3 traceback most recent call last file usr lib python3 6 runpy py line 193 in run module as main main mod spec file usr lib python3 6 runpy py line 85 in run code exec code run global file gdrive my drive deeplearningbert nn train py line 340 in app run main file usr local lib python3 6 dist package absl app py line 299 in run run main main args file usr local lib python3 6 dist package absl app py line 250 in run main sys exit main argv file gdrive 
my drive deeplearningbert nn train py line 332 in main run bert strategy input meta datum file gdrive my drive deeplearningbert nn train py line 287 in run bert use keras compile fit flag use keras compile fit file gdrive my drive deeplearningbert nn train py line 226 in run bert classifier custom callback none file gdrive my drive deeplearningbert nn train py line 143 in run keras compile fit callback custom callback file usr local lib python3 6 dist package tensorflow core python keras engine training py line 728 in fit use multiprocesse use multiprocesse file usr local lib python3 6 dist package tensorflow core python keras engine training distribute py line 685 in fit step name step per epoch file usr local lib python3 6 dist package tensorflow core python keras engine training array py line 439 in model iteration step name validation step file usr local lib python3 6 dist package tensorflow core python keras engine training array py line 299 in model iteration batch out f actual input file usr local lib python3 6 dist package tensorflow core python keras distribute distribute training util py line 878 in execution function return out numpy for out in distribute function input fn file usr local lib python3 6 dist package tensorflow core python eager def function py line 457 in call result self call args kwd file usr local lib python3 6 dist package tensorflow core python eager def function py line 526 in call return self concrete stateful fn filter call canon args canon kwd pylint disable protect access file usr local lib python3 6 dist package tensorflow core python eager function py line 1141 in filter call self capture input file usr local lib python3 6 dist package tensorflow core python eager function py line 1224 in call flat ctx args cancellation manager cancellation manager file usr local lib python3 6 dist package tensorflow core python eager function py line 511 in call ctx ctx file usr local lib python3 6 dist package tensorflow core python eager 
execute py line 67 in quick execute six raise from core status to exception e code message none file line 3 in raise from tensorflow python framework error impl unimplementederror compilation failure ask to propagate a dynamic dimension from hlo tuple 5198 pre f32 4 2 1 0 tuple pre convert 5196 f32 4 2 1 0 add 5004 metadata op type if op name metric precision assert great equal assert assertguard 1 0 to hlo conditional 5209 pre conditional pred convert 5196 pre f32 4 2 1 0 tuple 5198 pre f32 4 2 1 0 tuple 5198 true computation metric precision assert great equal assert assertguard true 127733 const 0 5199 false computation metric precision assert great equal assert assertguard false 127734 const 0 5204 metadata op type if op name metric precision assert great equal assert assertguard which be not implement tpu compilation fail node tpu compile succeed assert 6193329545322784681 7 additional grpc error information create 1571416725 270929013 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message compilation failure ask to propagate a dynamic dimension from hlo tuple 5198 pre f32 4 2 1 0 tuple pre convert 5196 f32 4 2 1 0 add 5004 metadata op type if op name metric precision assert great equal assert assertguard 1 0 to hlo conditional 5209 pre conditional pred convert 5196 pre f32 4 2 1 0 tuple 5198 pre f32 4 2 1 0 tuple 5198 true computation metric precision assert great equal assert assertguard true 127733 const 0 5199 false computation metric precision assert great equal assert assertguard false 127734 const 0 5204 metadata op type if op name metric precision assert great equal assert assertguard which be not implement n ttpu compilation fail n t node tpu compile succeed assert 6193329545322784681 7 grpc status 12 op inference distribute function 127913 function call stack distribute function distribute function 2019 10 18 16 38 53 401848 e tensorflow core distribute runtime rpc eager grpc eager client cc 72 
Remote EagerContext with id 6450803200035565614 does not seem to exist.

With only `'accuracy'` in the metrics list it works well and finishes all epochs. With a custom metric like:

```python
def precision(y_true, y_pred):
    y_pred = tf.math.rint(y_pred)
    tp = tf.math.reduce_sum(y_pred * y_true)
    fp = tf.math.reduce_sum(y_pred * (1 - y_true))
    precision = tf.math.divide(tp, tp + fp + eps)
    return precision
```

it works as well, but the values returned are not correct. I suppose this is happening because on the TPU there are X steps per loop computed, and somehow (I didn't dig too much into it) this messes up the output metrics. I tried the built-in functions to verify the behavior, but that resulted in the error previously mentioned.

Snippet of the training call (the function being called is `run_keras_compile_fit` in the GitHub link I provided; it can be found in BERT's `run_classifier.py`, with almost no custom code added):

```python
with strategy.scope():
    training_dataset = train_input_fn()
    evaluation_dataset = eval_input_fn()
    bert_model, sub_model = model_fn()
    optimizer = bert_model.optimizer

    if init_checkpoint:
        checkpoint = tf.train.Checkpoint(model=sub_model)
        checkpoint.restore(init_checkpoint).assert_existing_objects_matched()

    bert_model.compile(optimizer=optimizer, loss=loss_fn, metrics=get_metrics())

    summary_dir = os.path.join(model_dir, 'summaries')
    summary_callback = tf.keras.callbacks.TensorBoard(summary_dir)
    checkpoint_path = os.path.join(model_dir, 'checkpoint')
    checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
        checkpoint_path, save_weights_only=True, save_best_only=True, mode='min')

    if custom_callbacks is not None:
        custom_callbacks += [summary_callback, checkpoint_callback]
    else:
        custom_callbacks = [summary_callback, checkpoint_callback]

    bert_model.fit(
        x=training_dataset,
        validation_data=evaluation_dataset,
        steps_per_epoch=steps_per_epoch,
        epochs=epochs,
        validation_steps=eval_steps,
        callbacks=custom_callbacks)
return bert_model
```

In Colab I installed the stable release of TensorFlow 2.0, as the nightly versions don't work well with Colab's TPUs for now.

Are the Keras metrics supposed to work with TPUs, or is this not yet a feature?
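The arithmetic of the custom `precision` metric shown above can be sanity-checked off-TPU in plain Python, and the reference values then compared against what the per-loop TPU computation reports. A minimal sketch (illustration only, not TPU code; `precision_ref` is my name):

```python
def precision_ref(y_true, y_pred, eps=1e-7):
    # Same computation as the custom metric above: round predictions to {0, 1},
    # then true positives / (true positives + false positives + eps).
    y_hat = [round(p) for p in y_pred]
    tp = sum(h * t for h, t in zip(y_hat, y_true))
    fp = sum(h * (1 - t) for h, t in zip(y_hat, y_true))
    return tp / (tp + fp + eps)

print(round(precision_ref([1, 0, 1, 1], [0.9, 0.8, 0.2, 0.6]), 4))  # 0.6667
```

Here two of the three positive predictions are correct, so the reference precision is 2/3.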
tensorflowtensorflow
Dataset.map with tf.data.experimental.AUTOTUNE runs out of memory when using batch size 1
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 19.04 / Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: not tried on mobile devices
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0
- Python version: 3.7
- Bazel version (if compiling from source): no
- GCC/Compiler version (if compiling from source): no
- CUDA/cuDNN version: CUDA 10.0, cuDNN 7.6
- GPU model and memory: RTX 2070, 8 GB

Describe the current behavior:
I use `Dataset.map` to normalize images. When using `tf.data.experimental.AUTOTUNE` and a batch size of 1, memory consumption grows until the program is killed. The most intriguing part is that when the batch size is set to greater than 1, the program works correctly. This issue happens both with `tensorflow==2.0.0` and `tensorflow-gpu==2.0.0`.

Describe the expected behavior:
The code should work for a batch size of 1.

Code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.keras.layers import Flatten, Dense, Reshape
from tensorflow.keras.losses import MeanSquaredError

(train_images, train_labels), _ = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')

# Setting this to True will break the code (toggles the error)
toggle_error = True
if toggle_error:
    batch_size = 1
else:
    batch_size = 3

def map_function(train_image):
    return (train_image - 127.5) / 127.5

train_dataset = tf.data.Dataset.from_tensor_slices(train_images)
train_dataset = train_dataset.repeat()
train_dataset = train_dataset.map(map_function, tf.data.experimental.AUTOTUNE)
train_dataset = train_dataset.batch(batch_size)
train_dataset = train_dataset.prefetch(64)

class AutoEncoder(tf.keras.Model):
    def __init__(self):
        super(AutoEncoder, self).__init__()
        self.flatten = Flatten()
        self.dense_1 = Dense(128, activation='relu')
        self.dense_2 = Dense(784, activation='relu')
        self.reshape = Reshape((28, 28, 1))

    @tf.function
    def call(self, inputs):
        flattened = self.flatten(inputs)
        encoded = self.dense_1(flattened)
        decoded = self.dense_2(encoded)
        return self.reshape(decoded)

auto_encoder = AutoEncoder()
mse = MeanSquaredError()
optimizer = tf.keras.optimizers.Adam(1e-5)

@tf.function
def train_step(batch):
    with tf.GradientTape() as tape:
        auto_encoded = auto_encoder(batch)
        loss = mse(batch, auto_encoded)
    grads = tape.gradient(loss, auto_encoder.trainable_variables)
    optimizer.apply_gradients(zip(grads, auto_encoder.trainable_variables))
    return loss

for step, image_batch in enumerate(train_dataset):
    loss = train_step(image_batch)
    if step % 1000 == 0:
        print(loss)
```

Other info / logs:
Might be related to issue #32052.
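The `map_function` in the repro is a standard [-1, 1] rescaling; independent of the memory issue, the same arithmetic can be checked in plain Python, mapping 0 to -1, 127.5 to 0 and 255 to 1:

```python
def normalize(pixel):
    # Same arithmetic as map_function above: rescale [0, 255] to [-1, 1].
    return (pixel - 127.5) / 127.5

print([normalize(p) for p in (0, 127.5, 255)])  # [-1.0, 0.0, 1.0]
```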
tensorflowtensorflow
Please fix docs: tf.lite.TFLiteConverter.from_keras_model(model) doesn't work
Bug
Please update the docs — this example doesn't work.

Convert a Keras model: the following example shows how to convert a tf.keras model into a TensorFlow Lite FlatBuffer.

```python
import tensorflow as tf

# Create a simple Keras model.
x = [-1, 0, 1, 2, 3, 4]
y = [-3, -1, 1, 3, 5, 7]

model = tf.keras.models.Sequential(
    [tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(x, y, epochs=50)

# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
```

TF version: 1.15.0-rc3

Output in Colab:

```
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
     11
     12 # Convert the model.
---> 13 converter = tf.lite.TFLiteConverter.get_input_arrays(model)
     14 tflite_model = converter.convert()

/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/lite.py in get_input_arrays(self)
   1001       List of strings.
   1002     """
   1003     if self._has_valid_tensors():
   1004       return [_get_tensor_name(tensor) for tensor in self._input_tensors]
   1005     else:

AttributeError: 'Sequential' object has no attribute '_has_valid_tensors'
```
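For context (my note, not from the report): `from_keras_model` is the TF 2.x constructor, while TF 1.x exposes `from_keras_model_file` instead, so the documented 2.x example fails against a 1.15 install. A hypothetical version guard illustrating the split:

```python
def converter_entry_point(tf_version):
    # from_keras_model(model) exists in TF 2.x;
    # TF 1.x uses from_keras_model_file(path) instead.
    major = int(tf_version.split('.')[0])
    return 'from_keras_model' if major >= 2 else 'from_keras_model_file'

print(converter_entry_point('1.15.0'))  # from_keras_model_file
print(converter_entry_point('2.0.0'))   # from_keras_model
```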
tensorflowtensorflow
TFLite slower than Keras on RPi 4
Bug
When doing inference on a Raspberry Pi 4 with Keras (TensorFlow installed using pip), the inference time is slower using TFLite. The initial Keras model is about 20 MB; after converting it to TFLite it is about 2.4 MB. During inference the Keras model processes a sample in about 50 ms and TFLite does it in about 80 ms. Initially the pip-installed version of TensorFlow caused errors with TFLite, so I installed tflite_runtime using this information. During inference in TFLite I use the following snippet:

```python
interpreter = tflite.Interpreter(model_path=checkpoint)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']
for im in images:
    t = time.time()
    inp = np.expand_dims(np.expand_dims(im, -1), 0)
    interpreter.set_tensor(input_index, inp)
    interpreter.invoke()
    predictions = interpreter.get_tensor(output_index)
    print('{}'.format(time.time() - t), end='\r')
```

Does anybody have any experience with TFLite on a Raspberry Pi? Am I missing something in order to accelerate inference? So far it seems wrong that inference should be faster in Keras with 10x the model size.
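One thing worth ruling out when timing per-sample like this (my suggestion, not from the report): the first few `invoke()` calls can include one-off setup cost, so a warm-up phase plus an averaged measurement gives a fairer per-sample number than per-iteration prints. A minimal sketch (`avg_latency` is my name):

```python
import time

def avg_latency(fn, warmup=5, runs=50):
    # Discard warm-up calls, then average wall-clock time over the rest.
    for _ in range(warmup):
        fn()
    t0 = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - t0) / runs
```

For example, `avg_latency(interpreter.invoke)` after setting the input tensor once would time the interpreter call in isolation.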
tensorflowtensorflow
CTC TensorFlow Lite conversion problem
Bug
System information:
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 19.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or github SHA if from source): 2.0

Provide the text output from tflite_convert:

```
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime
and are not recognized by TensorFlow. If you have a custom implementation for them you can
disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling
tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: [...].
Here is a list of operators for which you will need custom implementations: CTCBeamSearchDecoder.
```

Also, please include a link to a GraphDef or the model if possible — here is the code:

```python
import tensorflow as tf

class BasicModel(tf.Module):
    def __init__(self):
        self.const = None

    @tf.function(input_signature=[
        tf.TensorSpec(shape=[None, 500, 28], dtype=tf.float32),
        tf.TensorSpec(shape=[None], dtype=tf.int32)])
    def decoder(self, logits, seq_len):
        decoded, log_probabilities = tf.nn.ctc_beam_search_decoder(logits, seq_len)
        return decoded

# Create the tf.Module object.
root = BasicModel()

# Get the concrete function.
concrete_func = root.decoder.get_concrete_function()

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.allow_custom_ops = False
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open('ctc_greedy_decoder.tflite', 'wb').write(tflite_model)
```

Any other info / logs:
But according to this link, CTC_BEAM_SEARCH_DECODER is registered as a TFLite op.
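For readers unfamiliar with the op being converted: CTC decoding collapses repeated labels and removes blanks from a per-frame label path (beam search additionally keeps multiple candidate paths). A minimal greedy sketch of that collapse rule in plain Python (illustration only, not the TF implementation):

```python
def ctc_collapse(path, blank=0):
    # Collapse consecutive repeats, then drop blank labels.
    out, prev = [], None
    for label in path:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

print(ctc_collapse([0, 1, 1, 0, 2, 2, 2, 0]))  # [1, 2]
```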
tensorflowtensorflow
Out-of-date BERT tutorial link
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue:

Description of issue (what needs changing): out-of-date link to the BERT tutorial.

Clear description: the above URL contains an out-of-date link ("BERT tutorial"). This is the correct link:

Are you planning to also submit a pull request to fix the issue? Yes.
tensorflowtensorflow
Issue with tf.keras.mixed_precision.experimental.LossScaleOptimizer
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 2.0
- Python version: 3.6.8
- Bazel version (if compiling from source): -
- GCC/Compiler version (if compiling from source): -
- CUDA/cuDNN version: 10.0 / 7.5
- GPU model and memory: Titan RTX, 24 GB

Describe the current behavior:
When attempting to use tf.keras.mixed_precision.experimental.LossScaleOptimizer, it fails to cast a matmul to float16.

Describe the expected behavior:
The matmul should be cast to float16.

Code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np
from tensorflow.keras.datasets import mnist

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        print(e)

tf.keras.backend.set_floatx('float16')
tf.keras.backend.set_epsilon(1e-4)

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
model.summary()

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype(np.float16) / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype(np.float16) / 255
train_labels = tf.keras.utils.to_categorical(train_labels, dtype='float16')
test_labels = tf.keras.utils.to_categorical(test_labels, dtype='float16')

opt = tf.keras.optimizers.RMSprop()
opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(opt, 'dynamic')
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(tf.dtypes.cast(train_images, tf.float16),
          tf.dtypes.cast(train_labels, tf.float16),
          epochs=50, batch_size=64, steps_per_epoch=200)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
```

Other info / logs:
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback; large logs and files should be attached.
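A side note on why the snippet raises the Keras epsilon to 1e-4 (my explanation, not from the report): float16 has a machine epsilon of roughly 9.8e-4, so Keras's default fuzz factor of 1e-7 vanishes in float16 arithmetic around 1.0. NumPy shows this directly:

```python
import numpy as np

# float16 spacing around 1.0 is ~9.8e-4, far larger than Keras's default 1e-7.
print(np.finfo(np.float16).eps)
# Adding 1e-7 to 1.0 in float16 is a no-op:
print(np.float16(1.0) + np.float16(1e-7) == np.float16(1.0))  # True
```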
tensorflowtensorflow
In the pix2pix tutorial the dimensions of upsample within up_stack look wrong: only the image size is doubled, not the filters
Bug
tensorflowtensorflow
ResNet50 imagenet weights are different in tf.keras vs keras
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Win10, PyCharm (same issue from Colab)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): PyCharm
- TensorFlow version (use command below): 2.0 with Keras 2.3.0 (same issue with TF 1.15 and Keras 2.2.5 in Google Colab)
- Python version: 3.7

Describe the current behavior:
The weights of the kernels in layers 13 and 14 are different in tf.keras and keras: the weights of layer 13 in tf.keras are those of layer 14 in keras, and the weights of layer 14 in tf.keras are those of layer 13 in keras.

With tf.keras:

```
layer 13: conv2_block1_0_conv
0.00460704 0.0613995  0.04595907 0.12616335 0.00781816 0.03271283
0.00736084 0.00832207 0.00591875 0.16227128 0.00581011 0.01718325
layer 14: conv2_block1_3_conv
0.00412396 0.01779881 0.01002417 0.0397268  0.01897338 0.00012411
0.01601992 0.00197976 0.01605847 0.06464136 0.0353195  0.02405972
```

With keras:

```
layer 13: res2a_branch2c
0.00412396 0.01779881 0.01002417 0.0397268  0.01897338 0.00012411
0.01601992 0.00197976 0.01605847 0.06464136 0.0353195  0.02405972
layer 14: res2a_branch1
0.00460704 0.0613995  0.04595907 0.12616335 0.00781816 0.03271283
0.00736084 0.00832207 0.00591875 0.16227128 0.00581011 0.01718325
```

This is with include_top=True, but include_top=False gives the same thing.

Describe the expected behavior:
I would expect the same weights (not sure which weights are the right ones).

Code to reproduce the issue:

```python
from tensorflow.python.keras.applications import ResNet50
# from keras.applications import ResNet50  # for the plain-keras run

image_size = 224
model = ResNet50(input_shape=(image_size, image_size, 3),
                 include_top=True, weights='imagenet')
print(model.layers[13].name)
weights = model.layers[13].get_weights()[0]
print(weights)
print(model.layers[14].name)
weights = model.layers[14].get_weights()[0]
print(weights)
```

Other info / logs:
The difference comes from the different paths used to load the h5 file: in tf.keras vs. in keras.
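The swap described above is easy to verify mechanically once both implementations' kernels are exported. A small helper sketch (names are mine) that checks whether two implementations' layers 13/14 are cross-matched:

```python
import numpy as np

def layers_swapped(a13, a14, b13, b14, tol=1e-6):
    # True when implementation A's layer 13 holds B's layer 14 kernel
    # and vice versa -- the situation reported above.
    return (np.allclose(a13, b14, atol=tol)
            and np.allclose(a14, b13, atol=tol))

k1, k2 = np.arange(4.0), np.arange(4.0) + 10
print(layers_swapped(k1, k2, k2, k1))  # True
```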
tensorflowtensorflow
1.15: discrepancy between documentation and behaviour in tf.keras model saving
Bug
URL(s) with the issue:

In the 1.15 changelog:

> tf.keras.model.save_model and model.save now default to saving a TensorFlow SavedModel.

In the 1.15 docstring:

> save(filepath, ...): String, path to SavedModel or H5 file to save the model.
> overwrite: Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.
> include_optimizer: If True, save optimizer's state together.
> save_format: Either 'tf' or 'h5', indicating whether to save the model to Tensorflow SavedModel or HDF5. The default is currently 'h5', but will switch to 'tf' in TensorFlow 2.0. The 'tf' option is currently disabled (use tf.keras.experimental.export_saved_model instead).

Description of issue (what needs changing):

The changelog states that tf.keras models are saved using the 'tf' format by default. The docstring states that the default save format in TF 1.x is HDF5 and that 'tf' is disabled. In fact, the 'tf' save format is not disabled and can be passed as a parameter (L92): we can still save using the 'tf' format with tf.keras model.save(). However, you cannot load a 'tf'-format model back.

Usage example:

This is not a critical issue, but it can be confusing: users will read the changelog and the docstring, see the conflicting statements about whether 'tf' behaviour is enabled by default, be confused between the behaviours, think they need to update their codebase to switch to 1.15, and then see that the 'tf' format doesn't work in TF 1.15 (saving works but not reloading), which "confirms" that the 'tf' save format doesn't work.

```python
>>> i = tf.keras.layers.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(2)(i)
>>> o = tf.keras.layers.Activation('softmax')(x)
>>> m = tf.keras.Model(inputs=i, outputs=o)
>>> m.save('test_model_tf', save_format='tf')
>>> m2 = tf.keras.models.load_model('test_model_tf')
>>> m2.summary()
```

```
Layer (type)                 Output Shape              Param #
=================================================================
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 m2.summary()

/opt/miniconda3/envs/py36-tf1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py in summary(self, line_length, positions, print_fn)
   1459         line_length=line_length,
   1460         positions=positions,
-> 1461         print_fn=print_fn)
   1462
   1463   def _validate_graph_inputs_and_outputs(self):

/opt/miniconda3/envs/py36-tf1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/layer_utils.py in print_summary(model, line_length, positions, print_fn)
    224   for i in range(len(layers)):
    225     if sequential_like:
--> 226       print_layer_summary(layers[i])
    227     else:
    228       print_layer_summary_with_connections(layers[i])

/opt/miniconda3/envs/py36-tf1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/layer_utils.py in print_layer_summary(layer)
    182     name = layer.name
    183     cls_name = layer.__class__.__name__
--> 184     fields = [name + ' (' + cls_name + ')', output_shape, layer.count_params()]
    185     print_row(fields, positions)
    186

/opt/miniconda3/envs/py36-tf1.15/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in count_params(self)
   1632                        'but the layer isn\'t built. '
   1633                        'You can build it manually via: `' +
-> 1634                        self.name + '.build(batch_input_shape)`.')
   1635     return int(sum(np.prod(w.shape.as_list()) for w in self.weights))
   1636

ValueError: You tried to call `count_params` on input_1, but the layer isn't built. You can build it manually via: `input_1.build(batch_input_shape)`.
```

This works:

```python
>>> i = tf.keras.layers.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(2)(i)
>>> o = tf.keras.layers.Activation('softmax')(x)
>>> m = tf.keras.Model(inputs=i, outputs=o)
>>> m.save('test_model.hdf5', save_format='hdf5')
>>> m2 = tf.keras.models.load_model('test_model.hdf5')
>>> m2.summary()
```

```
Layer (type)                 Output Shape              Param #
=================================================================
input_4 (InputLayer)         [(None, 10)]              0
_________________________________________________________________
dense_3 (Dense)              (None, 2)                 22
_________________________________________________________________
activation_3 (Activation)    (None, 2)                 0
=================================================================
Total params: 22
Trainable params: 22
Non-trainable params: 0
```
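As an aside, the parameter count in the working summary is easy to verify by hand: a Dense layer's parameters are its kernel (input_dim × units) plus one bias per unit. A one-line sketch:

```python
def dense_param_count(input_dim, units):
    # Kernel weights plus one bias per unit.
    return input_dim * units + units

print(dense_param_count(10, 2))  # 22, matching the Dense layer in the summary
```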
tensorflowtensorflow
IPython tab completion causes logging warnings
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.15.0 (git version v1.15.0-rc3-22-g590d6ee)
- Python version: 3.6.6

Describe the current behavior
Tab completion in IPython spams the console with "The name ... is deprecated" log warnings.

Code to reproduce the issue
I am using Python 3.6.6 with IPython 7.8.0. If you start ipython, import tensorflow, then type "tf." and press Tab:

    Python 3.6.6 | packaged by conda-forge | (default, Oct 12 2018, 14:43:46)
    Type 'copyright', 'credits' or 'license' for more information
    IPython 7.8.0 -- An enhanced Interactive Python. Type '?' for help.

    In [1]: import tensorflow as tf

    In [2]: tf.

After pressing Tab you get dozens of warnings like:

    WARNING:tensorflow:From /opt/anaconda/envs/ctrldev/lib/python3.6/site-packages/jedi/evaluate/compiled/access.py:347: The name tf.parse_example is deprecated. Please use tf.io.parse_example instead.
    WARNING:tensorflow:From /opt/anaconda/envs/ctrldev/lib/python3.6/site-packages/jedi/evaluate/compiled/access.py:347: The name tf.parse_single_example is deprecated. Please use tf.io.parse_single_example instead.
    WARNING:tensorflow:From /opt/anaconda/envs/ctrldev/lib/python3.6/site-packages/jedi/evaluate/compiled/access.py:347: The name tf.parse_single_sequence_example is deprecated. Please use tf.io.parse_single_sequence_example instead.
    WARNING:tensorflow:From /opt/anaconda/envs/ctrldev/lib/python3.6/site-packages/jedi/evaluate/compiled/access.py:347: The name tf.parse_tensor is deprecated. Please use tf.io.parse_tensor instead.

You can actually reproduce this with the Docker image as well, although that produces fewer deprecation warnings than the release package does:

    docker run -it tensorflow/tensorflow:1.15.0rc1-py3-jupyter ipython

    Python 3.6.8 (default, Aug 20 2019, 17:12:48)
    Type 'copyright', 'credits' or 'license' for more information
    IPython 7.8.0 -- An enhanced Interactive Python. Type '?' for help.

    In [1]: import tensorflow as tf

    In [2]: tf.
    WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/jedi/evaluate/compiled/access.py:347: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.

Unfortunately I wasn't able to find a more up-to-date official Docker image.
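Not part of the original report, but one possible way to silence the spam while waiting for a fix is to raise the threshold of the `tensorflow` logger via the standard-library `logging` module. This is only a sketch; it assumes the deprecation messages are routed through a logger named "tensorflow" (as the `WARNING:tensorflow:` prefix suggests):

```python
import logging

# Assumption: TF 1.15 emits the deprecation messages through the
# logger named "tensorflow". Raising its level to ERROR drops them.
logging.getLogger("tensorflow").setLevel(logging.ERROR)

# Demonstration with a capturing handler: WARNING records are now
# filtered before reaching any handler, ERROR records still arrive.
records = []
handler = logging.Handler()
handler.emit = records.append  # capture records instead of printing

tf_logger = logging.getLogger("tensorflow")
tf_logger.addHandler(handler)

tf_logger.warning("The name tf.parse_example is deprecated.")  # filtered out
tf_logger.error("real problems still get through")             # kept

print([r.levelname for r in records])  # -> ['ERROR']
```

This does not fix the underlying interaction between jedi's completion and TF's deprecation wrappers, it only hides the output.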
tensorflow/tensorflow
TensorFlow eager execution not working with tf.math.unsorted_segment_max gradient (outputs is None)
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 Professional Edition
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary, installed using conda
- TensorFlow version (use command below): 1.14.0 (git version: unknown)
- Python version: 3.7.3
- CUDA/cuDNN version: 10.0 / 7.6
- GPU model and memory: T1000, 4 GB VRAM

Describe the current behavior
When using tf.math.unsorted_segment_max with TensorFlow eager execution and GradientTape, the source code (see below) produces the following error:

    Traceback (most recent call last):
      File "c:\projects\iotmap\segment_max_error.py", line 80, in <module>
        grads = tape.gradient(loss_value, model.trainable_weights)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\backprop.py", line 980, in gradient
        unconnected_gradients=unconnected_gradients)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\imperative_grad.py", line 76, in imperative_grad
        compat.as_str(unconnected_gradients.value))
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\eager\backprop.py", line 137, in _gradient_function
        return grad_fn(mock_op, *out_grads)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\math_grad.py", line 349, in _UnsortedSegmentMaxGrad
        return _UnsortedSegmentMinOrMaxGrad(op, grad)
      File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\math_grad.py", line 326, in _UnsortedSegmentMinOrMaxGrad
        _GatherDropNegatives(op.outputs[0], op.inputs[1])
    TypeError: 'NoneType' object is not subscriptable

The operations tf.math.segment_max, tf.math.segment_mean and tf.math.unsorted_segment_mean are working OK, though. I need the unsorted version because in a more complex code base I am using several segment aggregations and concatenating them, so I need to have fixed sizes.

Describe the expected behavior
It should work without throwing errors.

Code to reproduce the issue
The code is here.

Other info / logs
The exception thrown is mentioned above.
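For readers unfamiliar with the op, the fixed-size property the reporter relies on can be illustrated in plain Python. This is only a sketch of what tf.math.unsorted_segment_max computes (not TF code): the output always has `num_segments` entries, regardless of how many segments actually appear in `segment_ids`, which is what makes the unsorted variant safe to concatenate:

```python
def unsorted_segment_max(data, segment_ids, num_segments):
    """Plain-Python sketch of tf.math.unsorted_segment_max semantics.

    Output length is always num_segments; segments that receive no
    elements keep the smallest possible value, mirroring TF's behavior.
    """
    out = [float("-inf")] * num_segments
    for value, seg in zip(data, segment_ids):
        if value > out[seg]:
            out[seg] = value
    return out

# Two segments receive data, the third stays empty but is still present:
print(unsorted_segment_max([1.0, 3.0, 2.0, 5.0], [0, 0, 1, 1], num_segments=3))
# -> [3.0, 5.0, -inf]
```

The sorted tf.math.segment_max, by contrast, sizes its output from the largest segment id seen, which is why mixing aggregations of varying coverage requires the unsorted version.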
tensorflow/tensorflow
TensorFlow Lite: TensorListFromTensor, TensorListReserve, TensorListStack, While
Bug
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Arch Linux
- TensorFlow installed from (source or binary): binary (from pacman)
- TensorFlow version (or github SHA if from source): 2.0.0

Provide the text output from tflite_convert:

    Some of the operators in the model are not supported by the standard TensorFlow Lite runtime and are not recognized by TensorFlow. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CONCATENATION, CONV_2D, EXPAND_DIMS, FILL, LOGISTIC, MAXIMUM, MAX_POOL_2D, MINIMUM, MUL, PACK, PAD, RELU, RESHAPE, RESIZE_NEAREST_NEIGHBOR, SHAPE, STRIDED_SLICE, SUB, TILE, TRANSPOSE. Here is a list of operators for which you will need custom implementations: TensorListFromTensor, TensorListReserve, TensorListStack, While.

The model is very similar to the one I am converting with target_spec = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS], so my problem could be solved by adding TensorListFromTensor, TensorListReserve, TensorListStack and While to the whitelist here and using the standard TensorFlow implementations.
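For context, the select-TF-ops fallback mentioned above is configured on the converter like this. This is only a sketch of the documented TF 2.0 converter settings, not taken from the report; `saved_model_dir` is a placeholder path:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# Allow the flex (select TF ops) fallback so operators such as
# TensorListReserve, TensorListStack and While run via the TensorFlow
# kernels instead of failing conversion.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # standard TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF kernels
]
tflite_model = converter.convert()
```

Note that a model converted this way needs the select-TF-ops runtime library on the device, not just the standard TFLite runtime.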
tensorflow/tensorflow
TFLite 2.0 GPU delegate error when input is resized
Bug
Hi, for the GPU delegate, if resizeInput is called and then runForMultipleInputsOutputs, there is an exception.

Mobile: Samsung S9, Mali-G72 GPU
TensorFlow Lite installed from: binary, version 2.0
Development env: Android Studio on Fedora 29

Example code:

    fun resizeInput() {
        val options: Interpreter.Options = Interpreter.Options()
        val del = GpuDelegate()
        options.addDelegate(del)
        val inter = Interpreter(assetLoader.loadMappedBytes("good_model_shape_4x416x224_float32.tflite"), options)
        val sh = inter.getInputTensor(0).shape()
        inter.resizeInput(0, intArrayOf(1, 224, 416, 4))
        var inNumBytes = inter.getInputTensor(0).numBytes()
        var inputs = Array(1) { ByteBuffer.allocateDirect(inNumBytes).order(ByteOrder.nativeOrder()) }
        val outputs1 = Array(inter.outputTensorCount) {
            val tensor = inter.getOutputTensor(it)
            ByteBuffer.allocateDirect(tensor.numBytes()).order(ByteOrder.nativeOrder())
        }
        val outputBuffers = outputs1.mapIndexed { index, byteBuffer -> index to byteBuffer }.toMap()
        inter.modifyGraphWithDelegate(del)
        inter.runForMultipleInputsOutputs(inputs, outputBuffers)
        defaultLogger.verbose("tflite inference: ${inter.lastNativeInferenceDurationNanoseconds}\n")
        inter.resetVariableTensors()
        inter.resizeInput(0, intArrayOf(1, 416, 224, 4))
        inNumBytes = inter.getInputTensor(0).numBytes()
        inputs = Array(1) { ByteBuffer.allocateDirect(inNumBytes).order(ByteOrder.nativeOrder()) }
        inter.runForMultipleInputsOutputs(inputs, outputBuffers)
        defaultLogger.verbose("after resize tflite inference: ${inter.lastNativeInferenceDurationNanoseconds}\n")
    }

Exception:

    java.lang.IllegalStateException: Internal error: Unexpected failure when preparing tensor allocations: TfLiteGpuDelegate Init: index is out of range
    TfLiteGpuDelegate Prepare: delegate is not initialized
    Node number 70 (TfLiteGpuDelegateV2) failed to prepare.
    Restored previous execution plan after delegate application failure.
        at org.tensorflow.lite.NativeInterpreterWrapper.allocateTensors(Native Method)

Is this a bug, or is this sequence of operations not supported for the GPU delegate? I am using the Java API.

Regards,
Naveen
tensorflow/tensorflow
Eager mode not being disabled (TF 1.14.0)
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
Describe the expected behavior
Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem)
Other info / logs (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached)

Trying out an example from TensorFlow Probability with the following code:

    import tensorflow as tf
    import tensorflow_probability as tfp
    import numpy as np

    tfd = tfp.distributions
    dtype = np.float32

    with tf.Session(graph=tf.Graph()) as sess:
        # Set up random seed for the optimizer
        tf.set_random_seed(42)

        true_mean = dtype([0, 0, 0])
        true_cov = dtype([[1, 0.25, 0.25], [0.25, 1, 0.25], [0.25, 0.25, 1]])

        # Loss is defined through the Cholesky decomposition
        chol = tf.linalg.cholesky(true_cov)
        var_1 = tf.Variable(name='var_1', initial_value=[1., 1.])
        var_2 = tf.Variable(name='var_2', initial_value=[1.])

        def loss_fn():
            var = tf.concat([var_1, var_2], axis=-1)
            loss_part = tf.linalg.cholesky_solve(chol, tf.expand_dims(var, -1))
            return tf.linalg.matvec(loss_part, var, transpose_a=True)

        # Set up the learning rate with a polynomial decay
        step = tf.Variable(0, dtype=tf.int64)
        starter_learning_rate = .3
        end_learning_rate = 1e-4
        decay_steps = 1e4
        learning_rate = tf.compat.v1.train.polynomial_decay(
            starter_learning_rate, step, decay_steps, end_learning_rate, power=1.)

        # Set up the optimizer
        optimizer_kernel = tfp.optimizer.StochasticGradientLangevinDynamics(
            learning_rate=learning_rate, preconditioner_decay_rate=0.99)
        optimizer_kernel.iterations = step
        optimizer = optimizer_kernel.minimize(loss_fn, var_list=[var_1, var_2])

        # Number of training steps
        training_steps = 5000
        # Record the steps as we treat them as samples
        samples = [np.zeros([training_steps, 2]), np.zeros([training_steps, 1])]

        sess.run(tf.compat.v1.global_variables_initializer())
        for step in range(training_steps):
            sess.run(optimizer)
            sample = [sess.run(var_1), sess.run(var_2)]
            samples[0][step, :] = sample[0]
            samples[1][step, :] = sample[1]

        samples_ = np.concatenate(samples, axis=-1)
        sample_mean = np.mean(samples_, 0)
        print('sample mean', sample_mean)

Getting the following error:

      File "/Users/shashank.gupta/miniconda2/lib/python2.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 318, in minimize
        loss, var_list=var_list, grad_loss=grad_loss)
      File "/Users/shashank.gupta/miniconda2/lib/python2.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 351, in _compute_gradients
        tape.watch(var_list)
      File "/Users/shashank.gupta/miniconda2/lib/python2.7/site-packages/tensorflow/python/eager/backprop.py", line 816, in watch
        for t in nest.flatten(tensor):
    AttributeError: 'RefVariable' object has no attribute '_id'

It looks like eager mode is enabled. I have added a command to disable it, but still it's getting activated.
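Not part of the original report, but one direction sometimes suggested for this symptom: the optimizer_v2 code path that calls tape.watch expects resource variables rather than RefVariables. A sketch of a possible change, untested against this exact script and offered only as an assumption:

```python
# Sketch (assumption, not from the report): create resource variables so
# the v2 optimizer's GradientTape can watch them in graph mode.
var_1 = tf.compat.v1.get_variable('var_1', initializer=[1., 1.], use_resource=True)
var_2 = tf.compat.v1.get_variable('var_2', initializer=[1.], use_resource=True)
```

Whether this resolves the AttributeError in TF 1.14.0 with this TFP example would need to be confirmed.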