Dataset schema:
- repository: string (156 classes)
- issue title: string (length 1 to 1.01k)
- labels: string (8 classes)
- body: string (length 1 to 270k)
tensorflow/tensorflow
tf.keras MobileNetV2 with weights=None fails to train
Bug
System information:
- Google Colab notebook
- TensorFlow version: 2.1.0-rc1
- Python version: 3.7
- CUDA/cuDNN version: 10.0.130
- GPU model and memory: Tesla T4, 12 GB

Describe the current behavior:
Based on the tutorial, format the data, then run the cells inside the tutorial notebook:

```python
# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights=None)
base_model.trainable = True
```

Then train the model for 10 epochs with the parameters specified in the tutorial. The validation loss does not go down and the validation accuracy remains stuck. The results of training:

```
Epoch 1/10
582/582 - 87s 149ms/step - loss: 0.6606 - accuracy: 0.5788 - val_loss: 0.6953 - val_accuracy: 0.5216
Epoch 2/10
582/582 - 80s 138ms/step - loss: 0.6157 - accuracy: 0.6425 - val_loss: 0.7064 - val_accuracy: 0.5216
Epoch 3/10
582/582 - 81s 139ms/step - loss: 0.5765 - accuracy: 0.6769 - val_loss: 0.7014 - val_accuracy: 0.5216
Epoch 4/10
582/582 - 81s 139ms/step - loss: 0.5378 - accuracy: 0.7143 - val_loss: 0.7488 - val_accuracy: 0.4784
Epoch 5/10
582/582 - 81s 139ms/step - loss: 0.5072 - accuracy: 0.7368 - val_loss: 0.8380 - val_accuracy: 0.4784
Epoch 6/10
582/582 - 80s 138ms/step - loss: 0.4777 - accuracy: 0.7601 - val_loss: 0.9534 - val_accuracy: 0.4784
Epoch 7/10
582/582 - 81s 138ms/step - loss: 0.4354 - accuracy: 0.7894 - val_loss: 1.0138 - val_accuracy: 0.4784
Epoch 8/10
582/582 - 81s 138ms/step - loss: 0.3937 - accuracy: 0.8110 - val_loss: 1.2038 - val_accuracy: 0.4784
Epoch 9/10
582/582 - 80s 138ms/step - loss: 0.3593 - accuracy: 0.8288 - val_loss: 1.7442 - val_accuracy: 0.4784
Epoch 10/10
582/582 - 81s 139ms/step - loss: 0.3166 - accuracy: 0.8547 - val_loss: 1.6888 - val_accuracy: 0.4784
```

Describe the expected behavior:
If MobileNet V1 is used instead, with the same weight initialization and the same training parameters, the results are the following:

```
Epoch 1/10
582/582 - 74s 126ms/step - loss: 0.6596 - accuracy: 0.5840 - val_loss: 0.7098 - val_accuracy: 0.5216
Epoch 2/10
582/582 - 70s 120ms/step - loss: 0.6310 - accuracy: 0.6248 - val_loss: 0.6099 - val_accuracy: 0.6483
Epoch 3/10
582/582 - 71s 122ms/step - loss: 0.6102 - accuracy: 0.6479 - val_loss: 0.6191 - val_accuracy: 0.6858
Epoch 4/10
582/582 - 70s 121ms/step - loss: 0.5850 - accuracy: 0.6729 - val_loss: 0.5983 - val_accuracy: 0.6634
Epoch 5/10
582/582 - 71s 122ms/step - loss: 0.5620 - accuracy: 0.6954 - val_loss: 0.6043 - val_accuracy: 0.6573
Epoch 6/10
582/582 - 71s 122ms/step - loss: 0.5383 - accuracy: 0.7128 - val_loss: 0.5575 - val_accuracy: 0.6935
Epoch 7/10
582/582 - 71s 122ms/step - loss: 0.5179 - accuracy: 0.7291 - val_loss: 0.6238 - val_accuracy: 0.7220
Epoch 8/10
582/582 - 70s 121ms/step - loss: 0.4906 - accuracy: 0.7491 - val_loss: 0.5965 - val_accuracy: 0.6905
Epoch 9/10
582/582 - 70s 121ms/step - loss: 0.4636 - accuracy: 0.7711 - val_loss: 0.5580 - val_accuracy: 0.7310
Epoch 10/10
582/582 - 70s 120ms/step - loss: 0.4292 - accuracy: 0.7894 - val_loss: 0.5737 - val_accuracy: 0.7233
```

In this case the loss and the accuracy are moving in the right direction.

Code to reproduce the issue:
Open the Google Colab notebook and run all the cells up to the section named "Create the base model from the pre-trained convnets". Modify the first cell under this heading to the following:

```python
IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3)

# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               # weights='imagenet')
                                               weights=None)
base_model.trainable = True
```

Then proceed by running the following cells inside the notebook, which create the classification head:

```python
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
feature_batch_average = global_average_layer(feature_batch)
print(feature_batch_average.shape)

prediction_layer = keras.layers.Dense(1)
prediction_batch = prediction_layer(feature_batch_average)
print(prediction_batch.shape)

model = tf.keras.Sequential([
    base_model,
    global_average_layer,
    prediction_layer
])
```

Compile the model as in the tutorial, with the same parameters:

```python
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=base_learning_rate),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
```

Then train the model (the initial loss is 0.69 and the initial accuracy is 0.51):

```python
num_train, num_val, num_test = (
    metadata.splits['train'].num_examples * weight / 10
    for weight in SPLIT_WEIGHTS
)

initial_epochs = 10
steps_per_epoch = round(num_train) // BATCH_SIZE
validation_steps = 20

loss0, accuracy0 = model.evaluate(validation_batches, steps=validation_steps)

history = model.fit(train_batches,
                    epochs=initial_epochs,
                    validation_data=validation_batches)
```
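The plateau described above is easy to confirm programmatically. As a small aside (this helper is illustrative and not part of the issue), Keras-style progress lines like the ones quoted can be parsed into floats:

```python
import re

# Illustrative helper (not from the issue): extract metric values from
# Keras-style progress lines like those quoted above.
METRIC_RE = re.compile(r"(\w+): (\d+\.\d+)")

def parse_metrics(line):
    """Return a dict mapping metric name -> float for one progress line."""
    return {name: float(value) for name, value in METRIC_RE.findall(line)}

line = ("582/582 - 87s 149ms/step - loss: 0.6606 - accuracy: 0.5788 "
        "- val_loss: 0.6953 - val_accuracy: 0.5216")
print(parse_metrics(line)["val_accuracy"])  # 0.5216
```

Running this over each epoch line makes the stuck val_accuracy (0.5216 / 0.4784) immediately visible.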
tensorflow/tensorflow
Request for documentation on using TFLite with the Android NDK
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: -. Description of issue (what needs changing): there are no resources anywhere about TFLite usage via the Android NDK. Clear description: I have a TFLite model which involves a lot of post-processing. After following "Load and run a model in C++", I have written the code in C++ and tested it on Linux and in an adb shell. Now I want to use this code in the sample Android APK. Is there any documentation / sample usage of TFLite C++ on Android? I have only found this, but was not able to follow it. Any help would be desperately appreciated.
tensorflow/tensorflow
Add new documentation on how to interact with astronomy with animations
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. Description of issue (what needs changing): we need new documentation on interacting with astronomy. Clear description: since we think astronomical animations are good for research, we need new documentation on how to interact with PHOEBE, or with other tools, through TensorFlow.
tensorflow/tensorflow
TensorFlow 1.12 hangs at MapAndBatchDatasetOp::Dataset::Iterator RunnerThread
Bug
System information:
- OS platform and distribution: aarch64
- TensorFlow installed from (source or binary): source
- TensorFlow version: 1.12
- Python version: 3.6.1
- Bazel version (if compiling from source): -
- GCC/compiler version (if compiling from source): -
- The preprocessing script uses MapAndBatchDataset + PrefetchDataset

When we run training for a long time, we find that the training hangs. When it hangs, the value of `batch_results_.front()->num_calls` in the following logic of map_and_batch_dataset_op.cc is always 1:

```cpp
Status GetNextInternal(IteratorContext* ctx, std::vector<Tensor>* out_tensors,
                       bool* end_of_sequence) override {
  std::shared_ptr<BatchResult> result;
  {
    mutex_lock l(mu_);
    EnsureRunnerThreadStarted(ctx);
    while (batch_results_.empty() ||
           batch_results_.front()->num_calls > 0) {
      RecordStop(ctx);
      cond_var_.wait(l);
      RecordStart(ctx);
    }
    std::swap(result, batch_results_.front());
    batch_results_.pop_front();
    cond_var_.notify_all();
  }
  return ProcessResult(ctx, result, out_tensors, end_of_sequence);
}
```

We found another function which decreases `result->num_calls` by one. If we change the `num_calls` member of `result` to be atomic, the problem goes away. Does this mean that this code is missing a memory barrier?

```cpp
void CallCompleted(const std::shared_ptr<BatchResult>& result)
    LOCKS_EXCLUDED(mu_) {
  mutex_lock l(mu_);
  num_calls_--;
  result->num_calls--;
  cond_var_.notify_all();
}
```

One more thing to note: we only see the problem on ARM (aarch64), not on Linux Ubuntu x86. We want to know if there is a bug in this snippet of TensorFlow on aarch64, and how to fix it.

Here is the call stack when it hangs:

```
Thread 374 (LWP 20847):
#0  0x0000ffffb997d22c in pthread_cond_wait@@GLIBC_2.17 () from target:/lib/aarch64-linux-gnu/libpthread.so.0
#1  0x0000ffffb11adb30 in std::condition_variable::wait(std::unique_lock<std::mutex>&) () from target:/usr/lib/aarch64-linux-gnu/libstdc++.so.6
#2  0x0000ffffb5501c5c in nsync::nsync_mu_semaphore_p_with_deadline(nsync::nsync_semaphore_s_*, timespec) () from target:/usr/local/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
#3  0x0000ffffb55012d4 in nsync::nsync_sem_wait_with_cancel_(nsync::waiter*, timespec, nsync::nsync_note_s_*) () from target:/usr/local/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
#4  0x0000ffffb54fe7f0 in nsync::nsync_cv_wait_with_deadline_generic(nsync::nsync_cv_s_*, void*, void (*)(void*), void (*)(void*), timespec, nsync::nsync_note_s_*) () from target:/usr/local/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
#5  0x0000ffffb3c962d4 in tensorflow::data::(anonymous namespace)::MapAndBatchDatasetOp::Dataset::Iterator::RunnerThread(std::shared_ptr<...> const&) () from target:/usr/local/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
#6  0x0000ffffb11b3e14 in ?? () from target:/usr/lib/aarch64-linux-gnu/libstdc++.so.6
#7  0x0000ffffb9977088 in start_thread () from target:/lib/aarch64-linux-gnu/libpthread.so.0
#8  0x0000ffffb98054ec in ?? () from target:/lib/aarch64-linux-gnu/libc.so.6

Thread 372 (LWP 20844):
#0  0x0000ffffb997d22c in pthread_cond_wait@@GLIBC_2.17 () from target:/lib/aarch64-linux-gnu/libpthread.so.0
#1  0x0000ffffb11adb30 in std::condition_variable::wait(std::unique_lock<std::mutex>&) () from target:/usr/lib/aarch64-linux-gnu/libstdc++.so.6
#2  0x0000ffffb5501c5c in nsync::nsync_mu_semaphore_p_with_deadline(nsync::nsync_semaphore_s_*, timespec) () from target:/usr/local/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
#3  0x0000ffffb55012d4 in nsync::nsync_sem_wait_with_cancel_(nsync::waiter*, timespec, nsync::nsync_note_s_*) () from target:/usr/local/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
#4  0x0000ffffb54fe7f0 in nsync::nsync_cv_wait_with_deadline_generic(nsync::nsync_cv_s_*, void*, void (*)(void*), void (*)(void*), timespec, nsync::nsync_note_s_*) () from target:/usr/local/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
#5  0x0000ffffb3c938e4 in tensorflow::data::(anonymous namespace)::MapAndBatchDatasetOp::Dataset::Iterator::GetNextInternal(tensorflow::data::IteratorContext*, std::vector<...>*, bool*) () from target:/usr/local/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
#6  0x0000ffffb3c41918 in tensorflow::data::DatasetBaseIterator::GetNext(tensorflow::data::IteratorContext*, std::vector<...>*, bool*) () from target:/usr/local/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
#7  0x0000ffffb3d25644 in std::_Function_handler<...>::_M_invoke(std::_Any_data const&) () from target:/usr/local/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
#8  0x0000ffffb11b3e14 in ?? () from target:/usr/lib/aarch64-linux-gnu/libstdc++.so.6
#9  0x0000ffffb9977088 in start_thread () from target:/lib/aarch64-linux-gnu/libpthread.so.0
#10 0x0000ffffb98054ec in ?? () from target:/lib/aarch64-linux-gnu/libc.so.6
```
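The consumer-side wait/notify logic quoted in this report can be sketched in Python (an illustration only; class and method names are made up, this is not TensorFlow code). The key point is that the consumer must re-check both conditions under the lock on every wakeup, and the decrement must happen under the same lock that guards the wait:

```python
import threading
from collections import deque

# Illustrative sketch (not TensorFlow code) of the GetNextInternal /
# CallCompleted interaction: the consumer waits until the front batch
# has no outstanding calls; the producer decrements under the same lock.
class BatchQueue:
    def __init__(self):
        self._cond = threading.Condition()
        self._results = deque()  # items: {"num_calls": int, "data": ...}

    def add_batch(self, data, num_calls):
        with self._cond:
            self._results.append({"num_calls": num_calls, "data": data})
            self._cond.notify_all()

    def call_completed(self):
        # Mirrors CallCompleted: decrement while holding the lock, notify.
        with self._cond:
            self._results[0]["num_calls"] -= 1
            self._cond.notify_all()

    def get_next(self):
        with self._cond:
            # Re-check the predicate on every wakeup, as the C++ loop does.
            while not self._results or self._results[0]["num_calls"] > 0:
                self._cond.wait()
            return self._results.popleft()["data"]

q = BatchQueue()
q.add_batch("batch0", num_calls=1)
t = threading.Thread(target=q.call_completed)
t.start()
print(q.get_next())  # batch0
t.join()
```

In Python the Condition's lock provides the visibility guarantee; the report's question is whether the C++ code gives `num_calls` the same guarantee on aarch64's weaker memory model.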
tensorflow/tensorflow
Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Manjaro, kernel 5.3.18
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de 2.1.0
- Python version: 3.7.4
- Bazel version (if compiling from source): -
- GCC/compiler version (if compiling from source): -
- CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5
- GPU model and memory: RTX 2070, 8 GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior:
Import a ResNet .pb model and run it. The following error occurs, and setting allow_growth or not does not help:

```
2020-01-18 12:56:02.661655: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-01-18 12:56:02.666829: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
Traceback (most recent call last):
  File "/home/abcdabcd987/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1367, in _do_call
    return fn(*args)
  File "/home/abcdabcd987/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1352, in _run_fn
    target_list, run_metadata)
  File "/home/abcdabcd987/.pyenv/versions/3.7.4/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1445, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
  (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
     [[node resnet_v1_50/conv1/Conv2D]]
     [[output/_3]]
  (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
     [[node resnet_v1_50/conv1/Conv2D]]
0 successful operations.
0 derived errors ignored.
```

Describe the expected behavior:
It runs successfully.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

```python
import os
import numpy as np
import tensorflow as tf

MODEL_PB_FILENAME = 'resnet_v1_50.pb'
INPUT_TENSOR_NAME = 'input:0'
OUTPUT_TENSOR_NAME = 'output:0'

def main():
    with tf.device('/gpu:0'):
        with tf.compat.v1.gfile.GFile(MODEL_PB_FILENAME, 'rb') as f:
            graph_def = tf.compat.v1.GraphDef()
            graph_def.ParseFromString(f.read())
            g_in = tf.import_graph_def(graph_def, name='')
    config = tf.compat.v1.ConfigProto()
    config.allow_soft_placement = True
    config.log_device_placement = True
    config.gpu_options.allow_growth = True
    sess = tf.compat.v1.Session(graph=g_in, config=config)
    batch_x = np.random.randn(1, 224, 224, 3)
    output = sess.run(OUTPUT_TENSOR_NAME,
                      feed_dict={INPUT_TENSOR_NAME: batch_x})[0]
    print(output)
    print(tf.test.is_gpu_available())
    print(config)
    list(map(hex, config.SerializeToString()))

main()
```

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached: test_tf.log
tensorflow/tensorflow
TensorFlow Lite new converter does not allow using inference_input_type and inference_output_type with v2 APIs
Bug
System information:
- OS platform and distribution: Linux / macOS
- TensorFlow installed from: official Python wheels
- TensorFlow version: v2.1.0 (Python)

```python
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.experimental_new_converter = True
converter.experimental_new_quantizer = True
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
```

The above code segment is expected to produce uint8 inference_input_type and inference_output_type. Yet there is no uint8 conversion of the input layer, and the model ends up being identified as a float model when used in, for example, label_image.cc (L226):

```
mobilenetv2_1.00_224/global_average_pooling2d/Mean
mobilenetv2_1.00_224/global_average_pooling2d/Mean/reduction_indices
mobilenetv2_1.00_224/out_relu/Relu
input_1
Identity
```

Model used: mobilenet_v2_1.0_224_quant.tgz

Failure details (if the conversion is successful but the generated model is wrong, state what is wrong): unable to produce true uint8 input/output layers.
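As an aside, the `representative_data_gen` referenced in the converter code above is not defined in the issue. A typical representative-dataset generator looks roughly like the following sketch (sample count and input shape are illustrative, not the reporter's; real calibration data should come from the training or validation set):

```python
import numpy as np

# Sketch of a representative-dataset generator like the one referenced
# above. The converter calls it to collect calibration statistics; it
# must yield lists with one array per model input, batch dim included.
def representative_data_gen():
    for _ in range(100):
        sample = np.random.rand(1, 224, 224, 3).astype(np.float32)
        yield [sample]

batches = list(representative_data_gen())
print(len(batches), batches[0][0].shape)  # 100 (1, 224, 224, 3)
```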
tensorflow/tensorflow
Error optimizing my TFLite model for GPU usage
Bug
System information:
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10 Pro for Workstations, build 18363.592
- TensorFlow installed from (source or binary): `pip install tf-nightly`
- TensorFlow version (or github SHA if from source): 2.2.0-dev20200115

Command used to run the converter, or code if you're using the Python API (tf_lite_converter.py):

```python
import tensorflow as tf
from tensorflow_core.lite.python.interpreter import Interpreter
from tensorflow_core.lite.python.lite import TFLiteConverter, Optimize

# model_path and tflite_model_path are set elsewhere in the script
converter = TFLiteConverter.from_keras_model_file(model_path)
converter.optimizations.append(Optimize.DEFAULT)
converter.target_spec.supported_types.append(tf.float16)
converter.experimental_new_converter = True

tflite_model = converter.convert()
open(tflite_model_path, 'wb').write(tflite_model)
```

The output from the converter invocation:

```
(tf-n) D:\Development\tensorflow-anpr>python tf_lite_converter.py
2020-01-18 14:20:47.702258: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
optimizer: adagrad, model path: output/adagrad_glpr_model.h5
2020-01-18 14:20:53.240667: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-01-18 14:20:53.279154: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1558] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce RTX 2070 computeCapability: 7.5 coreClock: 1.83GHz coreCount: 36 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2020-01-18 14:20:53.287304: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-01-18 14:20:53.361116: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-01-18 14:20:53.421716: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-01-18 14:20:53.447055: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-01-18 14:20:53.530899: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-01-18 14:20:53.570409: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-01-18 14:20:53.702842: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-01-18 14:20:53.708873: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1700] Adding visible gpu devices: 0
2020-01-18 14:20:53.715765: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-01-18 14:20:53.723989: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1558] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce RTX 2070 computeCapability: 7.5 coreClock: 1.83GHz coreCount: 36 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2020-01-18 14:20:53.734035: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-01-18 14:20:53.739179: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-01-18 14:20:53.744328: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-01-18 14:20:53.749625: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-01-18 14:20:53.754539: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-01-18 14:20:53.759888: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-01-18 14:20:53.764633: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-01-18 14:20:53.768758: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1700] Adding visible gpu devices: 0
2020-01-18 14:20:56.460164: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1099] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-18 14:20:56.464829: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1105]      0
2020-01-18 14:20:56.467340: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1118] 0:   N
2020-01-18 14:20:56.485286: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1244] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6302 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070, pci bus id: 0000:01:00.0, compute capability: 7.5)
WARNING:tensorflow:No training configuration found in save file: the model was not compiled. Compile it manually.
2020-01-18 14:20:58.024625: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2020-01-18 14:20:58.029712: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-01-18 14:20:58.039753: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1558] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce RTX 2070 computeCapability: 7.5 coreClock: 1.83GHz coreCount: 36 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2020-01-18 14:20:58.049235: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-01-18 14:20:58.055409: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-01-18 14:20:58.061247: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-01-18 14:20:58.066364: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-01-18 14:20:58.072194: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-01-18 14:20:58.077274: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-01-18 14:20:58.081844: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-01-18 14:20:58.086684: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1700] Adding visible gpu devices: 0
2020-01-18 14:20:58.090252: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1099] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-18 14:20:58.094291: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1105]      0
2020-01-18 14:20:58.097674: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1118] 0:   N
2020-01-18 14:20:58.101224: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1244] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6302 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070, pci bus id: 0000:01:00.0, compute capability: 7.5)
2020-01-18 14:20:58.210900: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:815] Optimization results for grappler item: graph_to_optimize
2020-01-18 14:20:58.215597: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: Graph size after: 402 nodes (0), 510 edges (0), time = 12.053ms.
2020-01-18 14:20:58.223386: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: Graph size after: 402 nodes (0), 510 edges (0), time = 3.914ms.
2020-01-18 14:20:58.228466: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:815] Optimization results for grappler item: model_1_bidirectional_1_forward_gru_1_while_body_1537
2020-01-18 14:20:58.234325: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.239123: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.243351: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:815] Optimization results for grappler item: model_1_bidirectional_1_forward_gru_1_while_cond_1536
2020-01-18 14:20:58.249016: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-01-18 14:20:58.253361: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.257939: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:815] Optimization results for grappler item: model_1_bidirectional_1_backward_gru_1_while_body_1693
2020-01-18 14:20:58.263220: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.267683: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.271917: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:815] Optimization results for grappler item: model_1_bidirectional_backward_gru_while_body_1380
2020-01-18 14:20:58.277337: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.281620: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-01-18 14:20:58.286374: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:815] Optimization results for grappler item: model_1_bidirectional_backward_gru_while_cond_1379
2020-01-18 14:20:58.291542: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.296176: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.300253: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:815] Optimization results for grappler item: model_1_bidirectional_forward_gru_while_body_1224
2020-01-18 14:20:58.305713: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.309768: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.314440: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:815] Optimization results for grappler item: model_1_bidirectional_forward_gru_while_cond_1223
2020-01-18 14:20:58.319815: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.323996: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.328567: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:815] Optimization results for grappler item: model_1_bidirectional_1_backward_gru_1_while_cond_1692
2020-01-18 14:20:58.333697: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-01-18 14:20:58.338136: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:817]   function_optimizer: function_optimizer did nothing. time = 0ms.
Traceback (most recent call last):
  File "tf_lite_converter.py", line 30, in <module>
    tflite_model = converter.convert()
  File "C:\Users\Andreas\Anaconda3\lib\site-packages\tensorflow_core\lite\python\lite.py", line 1051, in convert
    **converter_kwargs)
  File "C:\Users\Andreas\Anaconda3\lib\site-packages\tensorflow_core\lite\python\convert.py", line 476, in toco_convert_impl
    enable_mlir_converter=enable_mlir_converter)
  File "C:\Users\Andreas\Anaconda3\lib\site-packages\tensorflow_core\lite\python\convert.py", line 215, in toco_convert_protos
    raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: See console for info.
2020-01-18 14:20:59.719318: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-01-18 14:21:01.906381: W tensorflow/compiler/mlir/lite/python/graphdef_to_tfl_flatbuffer.cc:108] Ignored output_format.
2020-01-18 14:21:01.906537: W tensorflow/compiler/mlir/lite/python/graphdef_to_tfl_flatbuffer.cc:111] Ignored drop_control_dependency.
2020-01-18 14:21:02.036186: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-01-18 14:21:02.040149: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-01-18 14:21:02.059114: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1558] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce RTX 2070 computeCapability: 7.5 coreClock: 1.83GHz coreCount: 36 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2020-01-18 14:21:02.059504: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-01-18 14:21:02.062864: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-01-18 14:21:02.065500: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-01-18 14:21:02.066696: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-01-18 14:21:02.069723: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-01-18 14:21:02.071795: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-01-18 14:21:02.076283: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-01-18 14:21:02.076923: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1700] Adding visible gpu devices: 0
2020-01-18 14:21:02.582665: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1099] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-18 14:21:02.582866: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1105]      0
2020-01-18 14:21:02.582978: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1118] 0:   N
2020-01-18 14:21:02.583637: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1244] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6287 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070, pci bus id: 0000:01:00.0, compute capability: 7.5)
2020-01-18 14:21:05.868343: E tensorflow/lite/tools/optimize/quantize_weights.cc:474] Quantize weights tool only supports tflite models with one subgraph.
Traceback (most recent call last):
  File "C:\Users\Andreas\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\Andreas\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Andreas\Anaconda3\Scripts\toco_from_protos.exe\__main__.py", line 7, in <module>
  File "C:\Users\Andreas\Anaconda3\lib\site-packages\tensorflow_core\lite\toco\python\toco_from_protos.py", line 93, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "C:\Users\Andreas\Anaconda3\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "C:\Users\Andreas\Anaconda3\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "C:\Users\Andreas\Anaconda3\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "C:\Users\Andreas\Anaconda3\lib\site-packages\tensorflow_core\lite\toco\python\toco_from_protos.py", line 56, in execute
    enable_mlir_converter)
Exception: Quantize weights transformation failed.
```

Also, please include a link to the saved model or GraphDef: download keras model / download saved model.

Failure details / any other info / logs: without using the GPU optimization I can convert the Keras model to a TFLite model. This works correctly, but under Android unfortunately only very slowly. GitHub project.
tensorflow/tensorflow
Custom layer: go_backwards does not work
Bug
I am implementing a custom recurrent class that inherits from tf.keras.layers.Layer. When I use the Bidirectional wrapper I get the error:

KeyError                                  Traceback (most recent call last)
<ipython-input> in <module>
----> 1 a = TimeDistributed(Bidirectional(CharRecurrentCell(...)))

/opt/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/wrappers.py in __init__(self, layer, merge_mode, weights, backward_layer, **kwargs)
    434     if backward_layer is None:
    435       self.backward_layer = self._recreate_layer_from_config(
--> 436           layer, go_backwards=True)
    437     else:
    438       self.backward_layer = backward_layer

/opt/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/wrappers.py in _recreate_layer_from_config(self, layer, go_backwards)
    493     config = layer.get_config()
    494     if go_backwards:
--> 495       config['go_backwards'] = not config['go_backwards']
    496     if 'custom_objects' in tf_inspect.getfullargspec(
    497         layer_class.from_config).args:

KeyError: 'go_backwards'

This is the code for the layer itself:

class RecurrentConfig(BaseLayer):
    """Basic configurable recurrent layer."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.layers: List[layers.Layer] = stack_layers(self.params,
                                                       self.num_layers,
                                                       self.layer_name)

    def call(self, inputs: np.ndarray) -> layers.Layer:
        """Sequential/functional call through this layer's logic.

        Args:
            inputs: array to be processed within this layer
        Returns:
            inputs processed through this layer
        """
        processed = inputs
        for layer in self.layers:
            processed = layer(processed)
        return processed

    @staticmethod
    def default_params() -> Dict[Any, Any]:
        return {'units': 32,
                'recurrent_initializer': 'glorot_uniform',
                'dropout': 0,
                'recurrent_dropout': 0,
                'activation': None,
                'return_sequences': True}

I have attempted to add go_backwards to the config that is retrieved when get_config is called, but this results in another error:

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 a = TimeDistributed(Bidirectional(CharRecurrentCell(...)))

/opt/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/wrappers.py in __init__(self, layer, merge_mode, weights, backward_layer, **kwargs)
    430     # Recreate the forward layer from the original layer config, so that it
    431     # will not carry over any state from the layer.
--> 432     self.forward_layer = self._recreate_layer_from_config(layer)
    433
    434     if backward_layer is None:

/opt/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/wrappers.py in _recreate_layer_from_config(self, layer, go_backwards)
    506       return layer_class.from_config(config, custom_objects=custom_objects)
    507     else:
--> 508       return layer_class.from_config(config)
    509
    510

/opt/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in from_config(cls, config)
    517         A layer instance.
    518     """
--> 519     return cls(**config)
    520
    521   def compute_output_shape(self, input_shape):

nlpv3/general_nlp_lib/src/main/python/mosaix_py/mosaix_learn/layers/recurrent_layer.py in __init__(self, *args, **kwargs)
     12     """Basic configurable recurrent layer."""
     13     def __init__(self, *args, **kwargs):
---> 14         super().__init__(*args, **kwargs)
     15         self.layers: List[layers.Layer] = stack_layers(self.params,
     16                                                        self.num_layers,

nlpv3/general_nlp_lib/src/main/python/mosaix_py/mosaix_learn/layers/base_layer.py in __init__(self, params, mode, layer_name, num_layers, cust_name, **kwargs)
     17                  cust_name: str = '',
     18                  **kwargs):
---> 19         super().__init__(params, mode, **kwargs)
     20         self.layer_name = layer_name
     21         self.cust_name = cust_name

nlpv3/general_nlp_lib/src/main/python/mosaix_py/mosaix_learn/configurable.py in __init__(self, params, mode, **kwargs)
     61
     62     def __init__(self, params: Dict[AnyStr, Any], mode: ModeKeys, **kwargs):
---> 63         super().__init__(**kwargs)  # type: ignore
     64         self.params = parse_params(params, self.default_params())
     65         self.mode = mode

/opt/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

/opt/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __init__(self, trainable, name, dtype, dynamic, **kwargs)
    184
    185     # Validate optional keyword arguments.
--> 186     generic_utils.validate_kwargs(kwargs, allowed_kwargs)
    187
    188     # Mutable properties

/opt/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/generic_utils.py in validate_kwargs(kwargs, allowed_kwargs, error_message)
    716   for kwarg in kwargs:
    717     if kwarg not in allowed_kwargs:
--> 718       raise TypeError(error_message, kwarg)

TypeError: ('Keyword argument not understood:', 'go_backwards')

Version info:
tf version: 2.1.0-dev20191125
git version: v1.12.1-19144-gf39f4ea3fa
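For reference, a minimal sketch (assuming TF 2.x) of the contract the Bidirectional wrapper relies on: it recreates the backward layer from the wrapped layer's config and flips config['go_backwards'], so the wrapped layer must carry go_backwards in get_config() and accept it in __init__ — which the built-in Keras RNN layers do, while a plain tf.keras.layers.Layer subclass does not:

```python
import numpy as np
import tensorflow as tf

# Built-in RNN layers expose go_backwards in their config, which is what
# Bidirectional flips to build the backward pass.
lstm = tf.keras.layers.LSTM(8, return_sequences=True)
assert 'go_backwards' in lstm.get_config()

# Wrapping an RNN layer therefore works; forward and backward outputs are
# concatenated on the last axis by default.
bidi = tf.keras.layers.Bidirectional(lstm)
out = bidi(np.zeros((2, 5, 3), dtype=np.float32))
print(out.shape)  # (2, 5, 16)
```

A custom layer would have to replicate the same contract (a real go_backwards attribute that reverses its time iteration, present in its config) for the wrapper to be meaningful.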
tensorflow/tensorflow
Compiling tf.keras model with hub.KerasLayer fails in distribute scope
Bug
System information
- Have I written custom code: yes
- OS Platform and Distribution: Debian 9.11 (Google Cloud tf2-latest-gpu image)
- TensorFlow installed from: preinstalled on Google Cloud image
- TensorFlow version: v2.1.0-rc2-17-ge5bf8de
- Python version: 3.5.3
- CUDA/cuDNN version: N/A
- GPU model and memory: none; using Google Cloud TPU v3-8

Describe the current behavior
When compiling a tf.keras model that includes a hub.KerasLayer (tensorflow_hub), it fails to compile in a distribution strategy scope:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
     16     optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
     17     loss=tf.keras.losses.binary_crossentropy,
---> 18     metrics=['accuracy'])
     19

1 frames
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, distribute, **kwargs)
    469             'with strategy.scope():\n'
    470             '  model=_create_model()\n'
--> 471             '  model.compile(...)' % (v, strategy))
    472
    473   @trackable.no_automatic_dependency_tracking

ValueError: Variable (...) was not created in the distribution strategy scope of (...). It is most likely due to not all layers or the model or optimizer being created outside the distribution strategy scope. Try to make sure your code looks similar to the following.
with strategy.scope():
  model=_create_model()
  model.compile(...)

Describe the expected behavior
The model should be able to compile.

Code to reproduce the issue
Code used to create the scope (python3):

tpu_address = 'grpc://10.0.0.2:8470'
with tf.compat.v1.Session(tpu_address) as session:
    print('TPU devices:')
    pprint.pprint(session.list_devices())
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu_address)
try:
    tf.config.experimental_connect_to_cluster(resolver)
except tf.errors.UnimplementedError as uie:
    print(uie, 'This appears to be caused by the TPU already being connected; ignoring.', sep='\n')
tf.tpu.experimental.initialize_tpu_system(resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(resolver)

Code used to compile the model (python3):

with tpu_strategy.scope():
    in_id = tf.keras.layers.Input(shape=(max_seq_length,), name='input_ids', dtype=np.int32)
    in_mask = tf.keras.layers.Input(shape=(max_seq_length,), name='input_mask', dtype=np.int32)
    in_segment = tf.keras.layers.Input(shape=(max_seq_length,), name='segment_ids', dtype=np.int32)
    bert_inputs = {'input_ids': in_id, 'input_mask': in_mask, 'segment_ids': in_segment}
    bert_layer = hub.KerasLayer(bert_model_hub, signature='tokens', output_key='pooled_output')(bert_inputs)
    bert_layer.trainable = True
    dense = tf.keras.layers.Dense(256, activation='relu')(bert_layer)
    pred = tf.keras.layers.Dense(len(unique_labels), activation='sigmoid')(dense)
    model = tf.keras.models.Model(inputs=bert_inputs, outputs=pred)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
                  loss=tf.keras.losses.binary_crossentropy,
                  metrics=['accuracy'])

Other info / logs
Previously opened an issue with tf-hub here. The Google Cloud TPU v3-8 is running TPU software 2.1.
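For comparison, the pattern the error message asks for can be sketched without a TPU or tensorflow_hub (the layer sizes here are illustrative), using whatever strategy is available — the key point is that the model is both built and compiled inside the same scope:

```python
import tensorflow as tf

# Stand-in for TPUStrategy: the default strategy, so the sketch runs anywhere.
strategy = tf.distribute.get_strategy()

# Build AND compile the model inside the strategy scope, so every variable
# (including any the optimizer creates later) belongs to the same strategy.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(4, activation='relu', input_shape=(8,)),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss=tf.keras.losses.binary_crossentropy,
                  metrics=['accuracy'])

print(model.count_params())  # 41
```

In the report above the model already is created and compiled inside tpu_strategy.scope(), so the failure appears to be specific to variables created by hub.KerasLayer.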
tensorflow/tensorflow
Text classification RNN tutorial doesn't run under TF 2.1
Bug
URL(s) with the issue:

Description of issue (what needs changing):
The sample for LSTM-based text classification doesn't run under TensorFlow 2.1 anymore. The lines

train_dataset = train_dataset.padded_batch(BATCH_SIZE, train_dataset.output_shapes)

fail with the error:

AttributeError: 'ShuffleDataset' object has no attribute 'output_shapes'
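A sketch of the change that makes the snippet run on TF 2.1 (the dataset contents here are a toy stand-in for the tutorial's tokenized text): the .output_shapes attribute was removed from Dataset, but the shapes can still be obtained via the compat helper (and in newer releases padded_shapes can be omitted entirely):

```python
import tensorflow as tf

# Toy stand-in for the tutorial's (tokens, label) dataset of variable length.
ds = tf.data.Dataset.from_generator(
    lambda: (([1] * n, 0) for n in (3, 5, 2)),
    output_types=(tf.int64, tf.int64),
    output_shapes=([None], []))

# TF 2.x replacement for the removed ds.output_shapes attribute.
shapes = tf.compat.v1.data.get_output_shapes(ds)
batched = ds.padded_batch(2, padded_shapes=shapes)

for tokens, labels in batched.take(1):
    print(tokens.shape)  # (2, 5): padded to the longest sequence in the batch
```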
tensorflow/tensorflow
tf.function stable docs typo issue
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue:
Please provide a link to the documentation entry, for example: ...

Description of issue (what needs changing):
Original: "It also restricts the dhape and datatypes of Tensors that can be used."
I think it should be updated to: "It also restricts the shape and datatypes of Tensors that can be used."

Clear description
Found a typo. (For example: why should someone use this method? How is it useful?)

Correct links
Is the link to the source code correct?

Parameters defined
Are all parameters defined and formatted correctly?

Returns defined
Are return values defined?

Raises listed and defined
Are the errors defined? For example, raises: ...

Usage example
Is there a usage example? See the API guide on how to write testable usage examples.

Request visuals, if applicable
Are there currently visuals? If not, will it clarify the content?

Submit a pull request?
Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.
tensorflow/tensorflow
Interpreter.invoke() of tflite model causes abort (core dumped) despite successful tflite conversion under TensorFlow version 1.14.0
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Kubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): pip3
- TensorFlow version (use command below): 1.14.0
- Python version: 3.6.9
- Bazel version (if compiling from source): -
- GCC/Compiler version (if compiling from source): -
- CUDA/cuDNN version: -
- GPU model and memory: -

Describe the current behavior
The MobileNetVLAD model is successfully converted to a TF Lite model by the sample code provided on the TensorFlow website. To achieve this I added the parameter input_shapes to the from_saved_model call:

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir, input_shapes={'image': [1, 640, 480, None]})

But if inference is tested with the according sample code ("Load and run a model in Python") from the TensorFlow website, the program is aborted and the core dumps.

Describe the expected behavior
Successful inference and output of a 1x4096-sized image descriptor.

Code to reproduce the issue
1. Download the MobileNetVLAD model.
2. With TensorFlow 1.14.0 (TF2 will not work) convert the saved model to tflite. The input shape is set to [None, None, None, 1] in the model description, but should be 640x480 according to the paper (PDF):

import tensorflow as tf

saved_model_dir = 'hierarchical_loc/global_loc/models/mobilenetvlad_depth-0.35'
converter = tf.lite.TFLiteConverter.from_saved_model(
    saved_model_dir, input_shapes={'image': [1, 640, 480, None]})  # only change of code besides filenames
tflite_model = converter.convert()
open('converted_model_1_640_480.tflite', 'wb').write(tflite_model)

3. Run the inference with the sample code ("Load and run a model in Python"):

import numpy as np
import tensorflow as tf

print(tf.__version__)

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path='converted_model_1_640_480.tflite')
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)
print(output_details)

# Test the model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
print(input_data.shape)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()

output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)

Other info / logs
Output of the tflite conversion (edit: I had copied the wrong output, sorry; and don't worry about the virtualenv name "tensorflow_1_15" — I initially wanted to install TF 1.15.0 but only TF 1.14.0 worked):

[A series of NumPy FutureWarnings — "Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'" — repeated for np.qint8, np.quint8, np.qint16, np.quint16, np.qint32 and np.resource, from both tensorflow/python/framework/dtypes.py and tensorboard/compat/tensorflow_stub/dtypes.py]
2020-01-22 14:00:03.226815: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-22 14:00:03.250753: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2194920000 Hz
2020-01-22 14:00:03.251151: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4a4df80 executing computations on platform Host. Devices:
2020-01-22 14:00:03.251185: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0)
WARNING:tensorflow: From .../tensorflow/lite/python/convert_saved_model.py:60: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
WARNING:tensorflow: From .../tensorflow/python/training/saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version. Instructions for updating: Use standard file APIs to check for files with this prefix.
2020-01-22 14:00:06.393038: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
2020-01-22 14:00:10.226966: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2020-01-22 14:00:10.227122: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
2020-01-22 14:00:10.347863: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
2020-01-22 14:00:10.347910: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.002ms.
2020-01-22 14:00:10.347930: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
WARNING:tensorflow: From .../tensorflow/lite/python/util.py:238: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow: From .../tensorflow/python/framework/graph_util_impl.py:270: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.extract_sub_graph
2020-01-22 14:00:11.150707: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2020-01-22 14:00:11.151014: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
2020-01-22 14:00:12.806255: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
2020-01-22 14:00:12.806306: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant_folding: Graph size after: 554 nodes (-265), 570 edges (-265), time = 1091.39502ms.
2020-01-22 14:00:12.806326: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   constant_folding: Graph size after: 554 nodes (0), 570 edges (0), time = 372.614ms.

The created model is then used within tfliteinference.py (the same script as under step 3 above, with model_path='converted_model_1_640_480_test.tflite').

Output of gdb debugging (gdb -ex r --args python3 tfliteinference.py):

Thread 1 "python3" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) py-list
 104
 105    def AllocateTensors(self):
 106        return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
 107
 108    def Invoke(self):
>109        return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_Invoke(self)
 110
 111    def InputIndices(self):
 112        return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_InputIndices(self)
 113
 114    def OutputIndices(self):

I hope anyone can help me with this. Cheers, Alex
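One alternative worth trying (a sketch using a tiny stand-in model, since the real case would load the MobileNetVLAD .tflite file): instead of baking the shape in at conversion time, the Python interpreter can resize a dynamic input before allocating tensors:

```python
import numpy as np
import tensorflow as tf

# Stand-in model with input shape (None, 3); the real case would pass
# model_path='converted_model.tflite' for MobileNetVLAD instead.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(3,))])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_details = interpreter.get_input_details()

# Resize the input tensor first, then allocate and invoke as usual.
interpreter.resize_tensor_input(input_details[0]['index'], [2, 3])
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'],
                       np.zeros((2, 3), dtype=np.float32))
interpreter.invoke()
out = interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
print(out.shape)  # (2, 4)
```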
tensorflow/tensorflow
Error while generating TensorFlow Lite file
Bug
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary):
- TensorFlow version (or github SHA if from source):

Provide the text output from tflite_convert:

Exception ignored in: <...>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/util.py", line 244, in __del__
    .format(pretty_printer.node_names([node_id]))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/util.py", line 93, in node_names
    path_to_root[node_id] + (child.local_name,)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/object_identity.py", line 76, in __getitem__
    return self._storage[self._wrap_key(key)]
KeyError: ...

ConverterError                            Traceback (most recent call last)
<ipython-input> in <module>
     13 # Convert the model to a standard TensorFlow Lite model
     14 converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
---> 15 converted_tflite_model = converter.convert()
     16 open('tflite_model', 'wb').write(converted_tflite_model)

2 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str)
    170       stderr = _try_convert_to_unicode(stderr)
--> 171       raise ConverterError(
    172           "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
    173   finally:
    174     # Must manually cleanup files.

ConverterError: TOCO failed. See console for info.

[A series of NumPy FutureWarnings — "Passing (type, 1) or '1type' as a synonym of type is deprecated..." — repeated for np.qint8, np.quint8, np.qint16, np.quint16, np.qint32 and np.resource, from both tensorflow/python/framework/dtypes.py and tensorboard/compat/tensorflow_stub/dtypes.py]
2020-01-17 11:53:28.108739: I tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: IdentityN
2020-01-17 11:53:28.144111: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 707 operators, 1294 arrays (0 quantized)
2020-01-17 11:53:28.168667: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 707 operators, 1294 arrays (0 quantized)
2020-01-17 11:53:28.306950: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 128 operators, 326 arrays (0 quantized)
2020-01-17 11:53:28.316129: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 125 operators, 321 arrays (0 quantized)
2020-01-17 11:53:28.319357: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 3: 124 operators, 319 arrays (0 quantized)
2020-01-17 11:53:28.322587: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 124 operators, 319 arrays (0 quantized)
2020-01-17 11:53:28.325002: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 124 operators, 319 arrays (0 quantized)
2020-01-17 11:53:28.333533: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 11139584 bytes, theoretical optimal value: 8297856 bytes.
2020-01-17 11:53:28.334861: E tensorflow/lite/toco/toco_tooling.cc:462] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at ... and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: AVERAGE_POOL_2D, CONCATENATION, CONV_2D, FULLY_CONNECTED, MAX_POOL_2D, MEAN, RESHAPE, SOFTMAX. Here is a list of operators for which you will need custom implementations: IdentityN.

Traceback (most recent call last):
  File "/usr/local/bin/toco_from_protos", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 59, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 33, in execute
    output_str = tensorflow_wrap_toco.TocoConvert(model_str, toco_str, input_str)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops ... [same message as above, again listing IdentityN as needing a custom implementation]

Also, please include a link to a GraphDef or the model if possible.

Any other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
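As a hedged sketch of the two workarounds the converter message suggests (using a tiny stand-in model, since the original graph isn't attached):

```python
import tensorflow as tf

# Tiny stand-in model; the real case would convert the graph containing IdentityN.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])

# Option 1: allow falling back to TF ops for anything TFLite builtins can't
# express (requires the Flex/select-TF-ops runtime on device).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_flex = converter.convert()

# Option 2: keep unsupported ops as custom ops you implement yourself.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.allow_custom_ops = True
tflite_custom = converter.convert()

print(len(tflite_flex), len(tflite_custom))
```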
tensorflow/tensorflow
What's the difference between Keras Applications and the TF Model Garden?
Bug
URL(s) with the issue:

Description of issue (what needs changing):
There are the pretrained models for Keras found in tf.keras.applications, and there are those models found on GitHub in the Model Garden (under models, as linked above). Now, from reading the docs both for the applications as well as for those in the Model Garden, I don't get the difference. Why do we have those two different model repos?
tensorflow/tensorflow
AttributeError: 'Tensor' object has no attribute '_datatype_enum'
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac OS X Catalina 10.15.2
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d382ca 2.0.0
- Python version: 3.7.5
- GPU model and memory: Intel Iris Pro 1536 MB

Describe the current behavior
I get the error tensorflow.python.eager.core._FallbackException: This function does not handle the case of the path where all inputs are not already EagerTensors, then AttributeError: 'Tensor' object has no attribute '_datatype_enum', and then AttributeError: 'ProgbarLogger' object has no attribute 'log_values' when I add the following callback to the list of callbacks of my model.fit:

my_callback = tf.keras.callbacks.LambdaCallback(on_batch_begin=lambda batch, logs: tf.print(my_model.losses))

Describe the expected behavior
No error.

Code to reproduce the issue

import tensorflow as tf

def get_model():
    inp = tf.keras.layers.Input(shape=(1,))
    x = tf.keras.layers.Dense(8, activity_regularizer=tf.keras.regularizers.l1(0.01))(inp)
    x = tf.keras.layers.Dense(16, activity_regularizer=tf.keras.regularizers.l1(0.01))(x)
    out = tf.keras.layers.Dense(1)(x)
    model = tf.keras.Model(inputs=inp, outputs=out)
    return model

def train():
    my_model = get_model()
    my_model.compile(optimizer='adam', loss='mse')
    my_callback = tf.keras.callbacks.LambdaCallback(
        on_batch_begin=lambda batch, logs: tf.print(my_model.losses))
    my_model.fit([1, 2, 3, 4], [0.1, 0.2, 0.4, 0.2], callbacks=[my_callback])

if __name__ == '__main__':
    train()

This issue may be related to , and note that if I don't use any regularizer, tf.print prints an empty list and no error occurs.
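A possible workaround (a sketch, not a fix for the underlying bug) is to read the per-batch loss from the logs dict in a regular Callback subclass, instead of tf.print-ing the symbolic model.losses tensors from eager callback code:

```python
import tensorflow as tf

class LossLogger(tf.keras.callbacks.Callback):
    # logs['loss'] is a plain float, so nothing symbolic is touched here.
    def on_train_batch_end(self, batch, logs=None):
        print('batch', batch, 'loss', (logs or {}).get('loss'))

inp = tf.keras.layers.Input(shape=(1,))
out = tf.keras.layers.Dense(
    1, activity_regularizer=tf.keras.regularizers.l1(0.01))(inp)
model = tf.keras.Model(inp, out)
model.compile(optimizer='adam', loss='mse')
history = model.fit([[1.0], [2.0]], [[0.1], [0.2]],
                    epochs=1, verbose=0, callbacks=[LossLogger()])
```

The logged loss already includes the activity-regularization terms, which may or may not be what the original tf.print(my_model.losses) was after.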
tensorflow/tensorflow
Incorrect mention of dataset
Bug
URL(s) with the issue:
Screenshot from 2020-01-16 18:37:30

Description of issue (what needs changing):
I think this exercise doesn't make use of the cats-vs-dogs dataset, right?

Clear description
In place of the cats-vs-dogs dataset, the rock-paper-scissors dataset should be mentioned.

Submit a pull request?
Yes, shortly.
tensorflow/tensorflow
tf.cast on native python float to dtype=tf.float64 leads to loss of precision
Bug
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution: Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): binary (pip install)
- TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de 2.1.0 / v2.0.0-rc2-26-g64c3d38 2.0.0
- Python version: 3.7

Describe the current behavior
tf.cast on a python float (e.g. a literal float constant, not a numpy float array type) does an implicit conversion to the TensorFlow default dtype tf.float32, which can result in loss of precision when intending to cast to tf.float64:

tf.cast(0.2, tf.float64)

Describe the expected behavior
tf.cast(0.2, tf.float64) should preserve the full float64 value.

Code to reproduce the issue
See above.

Other info / logs
This was discovered as a bug in GPflow, which we built a workaround for in , but this is a pervasive issue and it would be good to fix this upstream instead of having to write and use a gpflow cast everywhere just to work around potentially passing in a python float.
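The workaround pattern (a sketch of what a library-side cast helper can do, independent of GPflow's actual implementation) is to go through tf.convert_to_tensor with an explicit dtype, so a python float never round-trips through float32:

```python
import tensorflow as tf

def cast64(value):
    # Passing dtype directly to convert_to_tensor keeps the full float64
    # value of a python float, avoiding tf.cast's float32 round trip.
    return tf.convert_to_tensor(value, dtype=tf.float64)

bad = tf.cast(0.2, tf.float64)   # converts 0.2 -> float32 first
good = cast64(0.2)

print(float(bad), float(good))
```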
tensorflow/tensorflow
GPU support instructions for CUDA 10 / Ubuntu 16.04 refer to non-existent package
Bug
The current GPU support instructions for CUDA 10 on Ubuntu 16.04 refer to a package version that does not exist in the nvidia-ml repo: libnvinfer5=6.0.1-1+cuda10.1

# Ubuntu 16.04, CUDA 10
# Install TensorRT. Requires that libcudnn7 is installed above.
sudo apt-get install -y --no-install-recommends libnvinfer5=6.0.1-1+cuda10.1 libnvinfer-dev=6.0.1-1+cuda10.1

This results in an error when trying to install:

Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Version '6.0.1-1+cuda10.1' for 'libnvinfer5' was not found

Changing this to libnvinfer6=6.0.1-1+cuda10.1 seems to work and runs happily.
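The corrected install line would then look like this (a sketch for the Ubuntu 16.04 / CUDA 10 nvidia-ml repo; package versions may have moved since):

```shell
# TensorRT 6 ships as libnvinfer6; there is no 6.x build of libnvinfer5.
sudo apt-get update
sudo apt-get install -y --no-install-recommends \
    libnvinfer6=6.0.1-1+cuda10.1 \
    libnvinfer-dev=6.0.1-1+cuda10.1
```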
tensorflow/tensorflow
Error when setting converter.experimental_new_converter = True
Bug
System information: OS platform and distribution (e.g. Linux Ubuntu 16.04): Ubuntu 18.04. TensorFlow installed from (source or binary): pip install. TensorFlow version (or github SHA if from source): tf-nightly 2.1.0.dev20200101. Command used to run the converter, or code if you're using the Python API:

```python
def model():
    input_image = keras.layers.Input(shape=...)
    c3, c4, c5 = resnet_graph(input_image)
    p3, p4, p5, p6, p7 = fpn_graph(c3, c4, c5)
    loc_data, conf_data, mask_data = prediction_graph(p3, p4, p5, p6, p7)
    proto_data = protonet(p3)
    anchors = get_priors(config.image_shape)
    refined_boxes = decode_boxes(loc_data, anchors)
    # batch_multiclass_non_max_suppression is in object_detection/core/post_processing.py
    boxes, scores, class_ids, mask_coef, num_detection = batch_multiclass_non_max_suppression(...)
    masks = assembly_mask(mask_coef, proto_out, boxes)
    model = keras.models.Model(input_image, [boxes, class_ids, scores, masks, num_detection])
    return model

model = model()
model.load_weights('...')  # .h5 checkpoint path elided in the report
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
tflite_model = converter.convert()
```

The output from the converter invocation:

pcibusid 0000 01 00 0 name geforce rtx 2080 ti computecapability 7 5 coreclock 1 65ghz corecount 68 devicememorysize 10 76gib devicememorybandwidth 573 69gib s 2020 01 16 14 11 43 696917 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2020 01 16 14 11 43 696927 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2020 01 16 14 11 43 696936 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 2020 01 16 14 11 43 696944 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 2020 01 16 14 11 43 696952 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 2020 01 16 14 11 43 696960 I tensorflow stream executor platform default dso loader
cc 44 successfully open dynamic library libcusparse so 10 2020 01 16 14 11 43 696968 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 01 16 14 11 43 697006 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 11 43 697323 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 11 43 697611 I tensorflow core common runtime gpu gpu device cc 1700 add visible gpu device 0 2020 01 16 14 11 43 697635 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2020 01 16 14 11 43 698527 I tensorflow core common runtime gpu gpu device cc 1099 device interconnect streamexecutor with strength 1 edge matrix 2020 01 16 14 11 43 698539 I tensorflow core common runtime gpu gpu device cc 1105 0 2020 01 16 14 11 43 698544 I tensorflow core common runtime gpu gpu device cc 1118 0 n 2020 01 16 14 11 43 698621 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 11 43 698944 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 11 43 699248 I tensorflow core common runtime gpu gpu device cc 1244 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 8422 mb memory physical gpu device 0 name geforce rtx 2080 ti pci bus i d 0000 01 00 0 compute capability 7 5 2020 01 16 14 11 58 365227 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs 
have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 11 58 365704 I tensorflow core grappler device cc 55 number of eligible gpu core count 8 compute capability 0 0 1 2020 01 16 14 11 58 365842 I tensorflow core grappler cluster single machine cc 356 start new session 2020 01 16 14 11 58 366283 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 11 58 366583 I tensorflow core common runtime gpu gpu device cc 1558 find device 0 with property pcibusid 0000 01 00 0 name geforce rtx 2080 ti computecapability 7 5 coreclock 1 65ghz corecount 68 devicememorysize 10 76gib devicememorybandwidth 573 69gib s 2020 01 16 14 11 58 366616 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2020 01 16 14 11 58 366627 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2020 01 16 14 11 58 366636 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 2020 01 16 14 11 58 366645 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 2020 01 16 14 11 58 366655 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 2020 01 16 14 11 58 366664 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 2020 01 16 14 11 58 366674 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 01 16 14 11 58 366709 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 
01 16 14 11 58 367016 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 11 58 367298 I tensorflow core common runtime gpu gpu device cc 1700 add visible gpu device 0 2020 01 16 14 11 58 367315 I tensorflow core common runtime gpu gpu device cc 1099 device interconnect streamexecutor with strength 1 edge matrix 2020 01 16 14 11 58 367319 I tensorflow core common runtime gpu gpu device cc 1105 0 2020 01 16 14 11 58 367323 I tensorflow core common runtime gpu gpu device cc 1118 0 n 2020 01 16 14 11 58 367372 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 11 58 367682 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 11 58 367971 I tensorflow core common runtime gpu gpu device cc 1244 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 8422 mb memory physical gpu device 0 name geforce rtx 2080 ti pci bus i d 0000 01 00 0 compute capability 7 5 2020 01 16 14 11 58 687280 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item graph to optimize 2020 01 16 14 11 58 687309 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer graph size after 1511 node 0 3113 edge 0 time 51 53ms 2020 01 16 14 11 58 687313 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer graph size after 1511 node 0 3113 edge 0 time 55 71ms 2020 01 16 14 11 58 687315 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item yolact yolact detection map while cond 24503 2020 01 16 14 11 58 687319 I tensorflow core 
grappler optimizer meta optimizer cc 817 function optimizer function optimizer do nothing time 0 001ms 2020 01 16 14 11 58 687322 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer function optimizer do nothing time 0ms 2020 01 16 14 11 58 687324 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item yolact yolact detection map while body 24504 2020 01 16 14 11 58 687327 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer function optimizer do nothing time 0 002ms 2020 01 16 14 11 58 687330 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer function optimizer do nothing time 0ms 2020 01 16 14 12 00 043387 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 12 00 043733 I tensorflow core grappler device cc 55 number of eligible gpu core count 8 compute capability 0 0 1 2020 01 16 14 12 00 043796 I tensorflow core grappler cluster single machine cc 356 start new session 2020 01 16 14 12 00 044138 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 12 00 044457 I tensorflow core common runtime gpu gpu device cc 1558 find device 0 with property pcibusid 0000 01 00 0 name geforce rtx 2080 ti computecapability 7 5 coreclock 1 65ghz corecount 68 devicememorysize 10 76gib devicememorybandwidth 573 69gib s 2020 01 16 14 12 00 044485 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2020 01 16 14 12 00 044496 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2020 01 16 14 12 00 044505 I tensorflow stream executor platform default dso loader cc 
44 successfully open dynamic library libcufft so 10 2020 01 16 14 12 00 044515 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 2020 01 16 14 12 00 044524 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 2020 01 16 14 12 00 044533 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 2020 01 16 14 12 00 044542 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 01 16 14 12 00 044572 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 12 00 044887 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 12 00 045176 I tensorflow core common runtime gpu gpu device cc 1700 add visible gpu device 0 2020 01 16 14 12 00 045194 I tensorflow core common runtime gpu gpu device cc 1099 device interconnect streamexecutor with strength 1 edge matrix 2020 01 16 14 12 00 045199 I tensorflow core common runtime gpu gpu device cc 1105 0 2020 01 16 14 12 00 045202 I tensorflow core common runtime gpu gpu device cc 1118 0 n 2020 01 16 14 12 00 045251 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 12 00 045555 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 12 00 045994 I tensorflow core common runtime gpu gpu device cc 1244 create tensorflow device job 
localhost replica 0 task 0 device gpu 0 with 8422 mb memory physical gpu device 0 name geforce rtx 2080 ti pci bus i d 0000 01 00 0 compute capability 7 5 2020 01 16 14 12 06 926724 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item graph to optimize 2020 01 16 14 12 06 926752 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 955 node 556 2636 edge 445 time 208 331ms 2020 01 16 14 12 06 926756 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 955 node 0 2636 edge 0 time 80 347m 2020 01 16 14 12 06 926759 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item yolact yolact detection map while cond 24503 frozen 2020 01 16 14 12 06 926762 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 19 node 0 8 edge 0 time 0 333ms 2020 01 16 14 12 06 926765 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 19 node 0 8 edge 0 time 0 201ms 2020 01 16 14 12 06 926767 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item yolact yolact detection map while body 24504 frozen 2020 01 16 14 12 06 926770 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 4210 node 252 5496 edge 407 time 6253 95508m 2020 01 16 14 12 06 926773 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 4210 node 0 5496 edge 0 time 99 047m traceback most recent call last file home chengxu deeplearning tensorflow 2 0 study yolact py line 1567 in tflite model converter convert file home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core lite python lite py line 490 in convert converter kwargs file home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core lite python convert py line 476 in 
toco convert impl enable mlir converter enable mlir converter file home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core lite python convert py line 215 in toco convert protos raise convertererror see console for info n s n s n stdout stderr tensorflow lite python convert convertererror see console for info warn tensorflow fall back to tensorflow client its recommend to install the cloud tpu client directly with pip install cloud tpu client 2020 01 16 14 12 08 249581 w tensorflow compiler mlir lite python graphdef to tfl flatbuffer cc 108 ignore output format 2020 01 16 14 12 08 249608 w tensorflow compiler mlir lite python graphdef to tfl flatbuffer cc 114 ignore drop control dependency 2020 01 16 14 12 08 674483 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2020 01 16 14 12 08 698096 I tensorflow core platform profile util cpu util cc 101 cpu frequency 3600000000 hz 2020 01 16 14 12 08 698483 I tensorflow compiler xla service service cc 168 xla service 0x55bf05e1f380 initialize for platform host this do not guarantee that xla will be use device 2020 01 16 14 12 08 698496 I tensorflow compiler xla service service cc 176 streamexecutor device 0 host default version 2020 01 16 14 12 08 700080 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcuda so 1 2020 01 16 14 12 08 752405 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 12 08 752789 I tensorflow compiler xla service service cc 168 xla service 0x55bf05e3f110 initialize for platform cuda this do not guarantee that xla will be use device 2020 01 16 14 12 08 752801 I tensorflow compiler xla service service cc 176 streamexecutor device 0 geforce rtx 2080 ti compute capability 7 5 2020 01 16 14 12 08 
752929 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 12 08 753249 I tensorflow core common runtime gpu gpu device cc 1558 find device 0 with property pcibusid 0000 01 00 0 name geforce rtx 2080 ti computecapability 7 5 coreclock 1 65ghz corecount 68 devicememorysize 10 76gib devicememorybandwidth 573 69gib s 2020 01 16 14 12 08 753413 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2020 01 16 14 12 08 754500 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2020 01 16 14 12 08 755559 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 2020 01 16 14 12 08 755751 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 2020 01 16 14 12 08 756823 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 2020 01 16 14 12 08 757307 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 2020 01 16 14 12 08 759489 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 01 16 14 12 08 759597 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 12 08 759997 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 12 08 760301 I tensorflow core common runtime gpu gpu device cc 1700 add visible gpu device 0 2020 01 16 14 12 08 
760338 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2020 01 16 14 12 08 760970 I tensorflow core common runtime gpu gpu device cc 1099 device interconnect streamexecutor with strength 1 edge matrix 2020 01 16 14 12 08 760980 I tensorflow core common runtime gpu gpu device cc 1105 0 2020 01 16 14 12 08 760999 I tensorflow core common runtime gpu gpu device cc 1118 0 n 2020 01 16 14 12 08 761074 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 12 08 761435 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 12 08 761758 I tensorflow core common runtime gpu gpu device cc 1244 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 8039 mb memory physical gpu device 0 name geforce rtx 2080 ti pci bus i d 0000 01 00 0 compute capability 7 5 loc callsite yolact yolact detection map tensorarrayv2 5 home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python ops map fn py 425 0 at callsite home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python util deprecation py 574 0 at callsite home chengxu deeplearning tensorflow 2 0 study util shape util py 228 0 at callsite home chengxu deeplearning tensorflow 2 0 study core post process py 476 0 at callsite home chengxu deeplearning tensorflow 2 0 study yolact py 1118 0 at callsite home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python autograph impl api py 308 0 at callsite home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python keras engine base layer py 785 0 at callsite home chengxu anaconda3 envs tf nightly lib python3 7 
site package tensorflow core python keras engine network py 918 0 at callsite home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python keras engine network py 744 0 at home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python keras engine base layer py 785 0 error operand type tensor be not compatible with precede operand expect rank 1 traceback most recent call last file home chengxu anaconda3 envs tf nightly bin toco from protos line 8 in sys exit main file home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core lite toco python toco from protos py line 93 in main app run main execute argv sys argv 0 unparse file home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python platform app py line 40 in run run main main argv argv flag parser parse flag tolerate undef file home chengxu anaconda3 envs tf nightly lib python3 7 site package absl app py line 299 in run run main main args file home chengxu anaconda3 envs tf nightly lib python3 7 site package absl app py line 250 in run main sys exit main argv file home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core lite toco python toco from protos py line 56 in execute enable mlir converter exception home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python ops map fn py 425 7 error operand type tensor be not compatible with precede operand expect rank 1 name name home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python util deprecation py 574 7 note call from return func args kwargs home chengxu deeplearning tensorflow 2 0 study util shape util py 228 9 note call from return tf map fn fn elem dtype parallel iteration back prop home chengxu deeplearning tensorflow 2 0 study core post process py 476 7 note call from parallel iteration parallel iteration home chengxu deeplearning tensorflow 2 0 study yolact py 1118 
98 note call from mask mask datum home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python autograph impl api py 308 7 note call from return func args kwargs home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python keras engine base layer py 785 19 note call from output call fn cast input args kwargs home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python keras engine network py 918 11 note call from output tensor layer compute tensor kwargs home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python keras engine network py 744 9 note call from convert kwargs to constant base layer util call context save home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core python keras engine base layer py 785 19 note call from output call fn cast input args kwargs process finish with exit code 1 if remove converter experimental new converter true warning absl please consider switch to use new converter by set experimental new converter to true old converter toco be deprecate and flow will be switch on by default to use new converter soon traceback most recent call last file home chengxu deeplearning tensorflow 2 0 study yolact py line 1567 in tflite model converter convert file home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core lite python lite py line 490 in convert converter kwargs file home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core lite python convert py line 476 in toco convert impl enable mlir converter enable mlir converter file home chengxu anaconda3 envs tf nightly lib python3 7 site package tensorflow core lite python convert py line 215 in toco convert protos raise convertererror see console for info n s n s n stdout stderr tensorflow lite python convert convertererror see console for info warn tensorflow fall back to tensorflow client its recommend to 
install the cloud tpu client directly with pip install cloud tpu client 2020 01 16 14 54 09 455905 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2020 01 16 14 54 09 477996 I tensorflow core platform profile util cpu util cc 101 cpu frequency 3600000000 hz 2020 01 16 14 54 09 478678 I tensorflow compiler xla service service cc 168 xla service 0x55641b1a91a0 initialize for platform host this do not guarantee that xla will be use device 2020 01 16 14 54 09 478691 I tensorflow compiler xla service service cc 176 streamexecutor device 0 host default version 2020 01 16 14 54 09 480333 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcuda so 1 2020 01 16 14 54 09 539873 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 54 09 540262 I tensorflow compiler xla service service cc 168 xla service 0x55641b23d750 initialize for platform cuda this do not guarantee that xla will be use device 2020 01 16 14 54 09 540278 I tensorflow compiler xla service service cc 176 streamexecutor device 0 geforce rtx 2080 ti compute capability 7 5 2020 01 16 14 54 09 540443 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 54 09 540744 I tensorflow core common runtime gpu gpu device cc 1558 find device 0 with property pcibusid 0000 01 00 0 name geforce rtx 2080 ti computecapability 7 5 coreclock 1 65ghz corecount 68 devicememorysize 10 76gib devicememorybandwidth 573 69gib s 2020 01 16 14 54 09 540885 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2020 01 16 14 54 09 542169 I tensorflow stream 
executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2020 01 16 14 54 09 543344 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 2020 01 16 14 54 09 543537 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 2020 01 16 14 54 09 544789 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 2020 01 16 14 54 09 545460 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 2020 01 16 14 54 09 548110 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 01 16 14 54 09 548224 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 54 09 548649 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 54 09 548942 I tensorflow core common runtime gpu gpu device cc 1700 add visible gpu device 0 2020 01 16 14 54 09 548974 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2020 01 16 14 54 09 549580 I tensorflow core common runtime gpu gpu device cc 1099 device interconnect streamexecutor with strength 1 edge matrix 2020 01 16 14 54 09 549591 I tensorflow core common runtime gpu gpu device cc 1105 0 2020 01 16 14 54 09 549598 I tensorflow core common runtime gpu gpu device cc 1118 0 n 2020 01 16 14 54 09 549669 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 
01 16 14 54 09 550038 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 16 14 54 09 550342 I tensorflow core common runtime gpu gpu device cc 1244 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 8020 mb memory physical gpu device 0 name geforce rtx 2080 ti pci bus i d 0000 01 00 0 compute capability 7 5 2020 01 16 14 54 09 621430 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorlistreserve 2020 01 16 14 54 09 621470 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 21 2020 01 16 14 54 09 621481 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorlistreserve 2020 01 16 14 54 09 621489 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 21 2020 01 16 14 54 09 621497 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorlistreserve 2020 01 16 14 54 09 621505 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 21 2020 01 16 14 54 09 621513 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorlistreserve 2020 01 16 14 54 09 621520 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 21 2020 01 16 14 54 09 621527 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorlistreserve 2020 01 16 14 54 09 621535 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 21 2020 01 16 14 54 09 621542 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorlistfromtensor 2020 01 16 14 54 09 621553 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 21 2020 01 16 14 54 09 622587 I tensorflow lite toco import tensorflow cc 
```
2020-01-16 14:54:09.622602: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: TensorListFromTensor
2020-01-16 14:54:09.622657: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op -- 21
[... the two messages above alternate several more times ...]
2020-01-16 14:54:09.622759: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: While
[... "Unsupported data type in placeholder op -- 21" repeated several times ...]
2020-01-16 14:54:09.622833: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: TensorListStack
[... repeated five times in total ...]
2020-01-16 14:54:09.641969: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 655 operators, 1254 arrays (0 quantized)
2020-01-16 14:54:09.652164: I .../graph_transformations.cc:39] Before general graph transformations: 655 operators, 1254 arrays (0 quantized)
2020-01-16 14:54:09.701043: I .../graph_transformations.cc:39] After general graph transformations pass 1: 236 operators, 568 arrays (0 quantized)
2020-01-16 14:54:09.704528: I .../graph_transformations.cc:39] After general graph transformations pass 2: 233 operators, 562 arrays (0 quantized)
2020-01-16 14:54:09.707893: I .../graph_transformations.cc:39] After general graph transformations pass 3: 233 operators, 562 arrays (0 quantized)
2020-01-16 14:54:09.711235: I .../graph_transformations.cc:39] Before Group bidirectional sequence lstm/rnn: 233 operators, 562 arrays (0 quantized)
2020-01-16 14:54:09.713664: I .../graph_transformations.cc:39] Before dequantization graph transformations: 233 operators, 562 arrays (0 quantized)
2020-01-16 14:54:09.715367: I .../graph_transformations.cc:39] Before Identify nearest upsample: 233 operators, 562 arrays (0 quantized)
2020-01-16 14:54:09.726982: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 18249152 bytes, theoretical optimal value: 11829248 bytes.
2020-01-16 14:54:09.727774: I tensorflow/lite/toco/toco_tooling.cc:471] Number of parameters: 8641389
2020-01-16 14:54:09.729089: E tensorflow/lite/toco/toco_tooling.cc:498] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at [...] and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST, CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, DIV, EXP, EXPAND_DIMS, FULLY_CONNECTED, GREATER_EQUAL, LESS, LOGISTIC, MAXIMUM, MINIMUM, MUL, PACK, REDUCE_MAX, REDUCE_MIN, RELU, RESHAPE, RESIZE_BILINEAR, STRIDED_SLICE, SUB, SUM, TANH, TRANSPOSE. Here is a list of operators for which you will need custom implementations: TensorListFromTensor, TensorListReserve, TensorListStack, While.

Traceback (most recent call last):
  File "/home/chengxu/anaconda3/envs/tf-nightly/bin/toco_from_protos", line 8, in <module>
    sys.exit(main())
  File "/home/chengxu/anaconda3/envs/tf-nightly/lib/python3.7/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 93, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/chengxu/anaconda3/envs/tf-nightly/lib/python3.7/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/chengxu/anaconda3/envs/tf-nightly/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/chengxu/anaconda3/envs/tf-nightly/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/chengxu/anaconda3/envs/tf-nightly/lib/python3.7/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 56, in execute
    enable_mlir_converter)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. [same message and operator lists as above]

Process finished with exit code 1
```
tensorflow/tensorflow
Conv2DTranspose shape becomes None when exporting SavedModel
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): TF 2.1
- Python version: 3.6
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: 10.2 / 7.6
- GPU model and memory: V100, 32 GB

**Describe the current behavior**
The shape information after a Conv2DTranspose layer is incomplete: the spatial dimensions are missing when exporting the SavedModel.

**Describe the expected behavior**
The shape information should include the spatial dimensions when exporting the SavedModel.

**Code to reproduce the issue**

```python
import tensorflow as tf

def crop_and_concat(inputs, residual_input):
    factor = inputs.get_shape().dims[1].value / residual_input.get_shape().dims[1].value
    return tf.concat([inputs, tf.image.central_crop(residual_input, factor)], axis=-1)

class UNet(tf.keras.Model):
    def __init__(self, name=''):
        super(UNet, self).__init__(name=name)
        self.conv1 = tf.keras.layers.Conv2D(filters=8, kernel_size=(3, 3), activation=tf.nn.relu)
        self.conv2 = tf.keras.layers.Conv2D(filters=8, kernel_size=(3, 3), activation=tf.nn.relu)
        self.maxpool = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=2)
        self.deconv = tf.keras.layers.Conv2DTranspose(filters=16, kernel_size=(2, 2),
                                                      strides=(2, 2), padding='same',
                                                      activation=tf.nn.relu)
        self.conv3 = tf.keras.layers.Conv2D(filters=8, kernel_size=(3, 3), activation=tf.nn.relu)

    @tf.function
    def call(self, x):
        print('input shape:', x.shape)
        out = self.conv1(x)
        print('conv1 shape:', out.shape)
        skip = self.conv2(out)
        print('conv2 shape:', skip.shape)
        out = self.maxpool(skip)
        print('maxpool shape:', out.shape)
        out = self.deconv(out)
        # The deconv shape will be (None, None, None, 16) when exporting the SavedModel
        print('deconv shape:', out.shape)
        out = self.conv3(out)
        out = crop_and_concat(out, skip)
        return out

model = UNet()
dummy_res = model.predict(tf.ones((1, 400, 400, 1)))
print('finish prediction')
tf.keras.models.save_model(model, 'result_savedmodel', save_format='tf',
                           overwrite=True, include_optimizer=False)
```

**Other info / logs**

```
input shape: (None, 400, 400, 1)
conv1 shape: (None, 398, 398, 8)
conv2 shape: (None, 396, 396, 8)
maxpool shape: (None, 198, 198, 8)
deconv shape: (None, 396, 396, 16)
input shape: (None, 400, 400, 1)
conv1 shape: (None, 398, 398, 8)
conv2 shape: (None, 396, 396, 8)
maxpool shape: (None, 198, 198, 8)
deconv shape: (None, 396, 396, 16)
finish prediction
input shape: (None, 400, 400, 1)
conv1 shape: (None, 398, 398, 8)
conv2 shape: (None, 396, 396, 8)
maxpool shape: (None, 198, 198, 8)
deconv shape: (None, None, None, 16)
```

The above log shows that during prediction all shapes are correctly inferred, but when exporting the SavedModel the shape becomes incomplete and the spatial info is lost only after the deconv (Conv2DTranspose); all the other layers still look fine, with correct shape info. So we have two questions:

1. Why does the shape need to be calculated again when exporting the SavedModel?
2. Why is the spatial info lost only after the deconv? This one looks like a bug.

FYI @nluehr
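The asymmetry in the logs above (the pooling layers keep their static spatial dims while the deconv loses them) can be illustrated with the static shape arithmetic for a transposed convolution. The sketch below is a simplified, hypothetical version of that inference (it is not TensorFlow's actual implementation, and names like `deconv_output_length` are my own): once any input spatial dim is statically unknown, the output dim is unknown too.

```python
def deconv_output_length(input_length, kernel_size, stride, padding="same"):
    """Static output length of a transposed convolution (simplified sketch).

    If the input length is statically unknown (None), the output length
    cannot be inferred either -- which is how a (None, None, None, 16)
    shape can appear at export time even though concrete calls infer
    (None, 396, 396, 16).
    """
    if input_length is None:
        return None
    if padding == "same":
        return input_length * stride
    # "valid" padding
    return input_length * stride + max(kernel_size - stride, 0)

# Matches the logs above: maxpool output 198 -> deconv output 396 ...
print(deconv_output_length(198, kernel_size=2, stride=2))   # 396
# ... but an unknown spatial dim stays unknown:
print(deconv_output_length(None, kernel_size=2, stride=2))  # None
```

This suggests the bug is not in the arithmetic itself but in whichever export path re-traces the model with a fully unknown input shape.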
tensorflow/tensorflow
TypeError: object of type 'NoneType' has no len()
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): binary (from pip)
- TensorFlow version (use command below): 2.1
- Python version: 3.7
- CUDA/cuDNN version: no
- GPU model and memory: no

**Describe the current behavior and code to reproduce**
I am running this code to train an RBM by contrastive divergence. Briefly, I am using the Keras subclassing API and I view the multiple Monte Carlo samples as layers. However, it produces the following error:

```
TypeError: object of type 'NoneType' has no len()
```

Please find the full error message and stack trace as a comment on the above gist link.
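Independent of TensorFlow, this class of error appears whenever `len()` is applied to a value that is `None`, typically because a function that should return a sequence returned nothing. A minimal stdlib-only sketch (hypothetical function names, not the reporter's RBM code):

```python
def broken_outputs():
    # Forgets the return statement -> implicitly returns None
    samples = [0.1, 0.2, 0.3]

def fixed_outputs():
    samples = [0.1, 0.2, 0.3]
    return samples

try:
    len(broken_outputs())
except TypeError as err:
    # Same message the reporter sees
    print(err)

assert len(fixed_outputs()) == 3
```

In the Keras subclassing API, the usual suspects for the `None` are a `call()` or a custom layer/loss that falls off the end without returning, so the framework then calls `len()` on `None`.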
tensorflow/tensorflow
Documentation on TensorFlow 2.1 with TPUs
Bug
Are you going to update your TPU documentation with the 2.1 release?
tensorflow/tensorflow
Deprecation notice for collections.Sequence in recurrent.py
Bug
@qlzh727

**Describe the current behavior**
When running unit tests, the following deprecation notice appears:

```
lib/python3.7/site-packages/tensorflow_core/python/keras/layers/recurrent.py:808: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
  if isinstance(inputs, collections.Sequence):
```

**Describe the expected behavior**
No deprecation notice. Line 808 needs a minor modification:

```python
if isinstance(inputs, collections.abc.Sequence):
```

See the Python 3 docs for `collections.abc.Sequence`.
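The one-line fix can be verified with the standard library alone; `collections.abc.Sequence` accepts the same concrete types the old `collections.Sequence` alias did (the alias was ultimately removed in Python 3.10):

```python
import warnings
from collections import abc

# The replacement suggested above: collections.abc.Sequence
assert isinstance([1, 2, 3], abc.Sequence)        # lists qualify
assert isinstance((1, 2), abc.Sequence)           # tuples qualify
assert not isinstance({"a": 1}, abc.Sequence)     # mappings do not
assert not isinstance((i for i in range(3)), abc.Sequence)  # generators do not

# Importing through collections.abc emits no DeprecationWarning:
# turning warnings into errors here would raise if it did.
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    isinstance([1], abc.Sequence)
```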
tensorflow/tensorflow
load_model fails on inference
Bug
**System information**
- TF version: 2.1.0 (I have also had this error on tf-nightly 2.2.0.dev20200114)
- Python: 3.7

**Describe the current behavior**
Similar to #35527: when I save and then load my model, it fails upon actually using the model, citing that the inputs are different to what was expected. When I run my code (see below), I get the following error:

```
ValueError: Could not find matching function to call loaded from the SavedModel. Got:
  Positional arguments (3 total):
    * ...
    * False
    * None
  Keyword arguments: {}

Expected these arguments to match one of the following 4 option(s):

Option 1:
  Positional arguments (3 total):
    * [TensorSpec(shape=(None, 10), dtype=tf.int32, name='input_ids'),
       TensorSpec(shape=(None, 10), dtype=tf.int32, name='attention_mask')]
    * True
    * None
  Keyword arguments: {}

Option 2:
  (same specs as Option 1, with False instead of True)

Option 3:
  Positional arguments (3 total):
    * [TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/0'),
       TensorSpec(shape=(None, 10), dtype=tf.int32, name='inputs/1')]
    * True
    * None
  Keyword arguments: {}

Option 4:
  (same specs as Option 3, with False instead of True)
```

My reasoning for making this issue when #35527 already exists is that my code to reproduce the issue is more succinct, and so should hopefully be easier to troubleshoot.

**Describe the expected behavior**
The model should load and behave in the exact same manner in which it was saved, i.e. not crash when doing inference on data. This is currently not the case.

**Code to reproduce the issue**

```python
import tensorflow as tf

# Create a model using TF and the popular transformers NLP package
class TagModelCreator:
    def __init__(self, language_model):
        self.language_model = language_model

    def create(self, num_classes, max_seq_len, get_token_type_ids=False):
        input_modules = []
        input_modules.append(tf.keras.layers.Input(
            shape=(max_seq_len,), dtype='int32', name='input_ids'))
        input_modules.append(tf.keras.layers.Input(
            shape=(max_seq_len,), dtype='int32', name='attention_mask'))
        lang_layer = self.language_model(input_modules)
        linear_layer = tf.keras.layers.TimeDistributed(
            tf.keras.layers.Dense(num_classes), name='classifier')(lang_layer[0])
        model = tf.keras.Model(inputs=input_modules, outputs=linear_layer)
        return model

from transformers import TFAutoModel
model_name = 'bert-base-uncased'
language_model = TFAutoModel.from_pretrained(model_name)
tagging_model_creator = TagModelCreator(language_model)

arbitrary_class_num = 2
arbitrary_sequence_length = 10
tagging_model = tagging_model_creator.create(arbitrary_class_num,
                                             arbitrary_sequence_length)

# Create some spoof data to see how the model handles the data
def data_generator():
    yield [0] * arbitrary_sequence_length, [1] * arbitrary_sequence_length

input_types = (tf.int32, tf.int32)
input_shapes = (tf.TensorShape([None]), tf.TensorShape([None]))
tf_dataset = tf.data.Dataset.from_generator(
    data_generator, input_types, input_shapes).batch(7)

# Use the spoof data on the model to confirm that it does inference
# on the data without error
for example_input in tf_dataset:
    test_output = tagging_model(example_input)
    break
print(test_output)
print('Inference was done correctly before re-loading the model.')

# Save and reload the model
tf.keras.models.save_model(model=tagging_model,
                           filepath='test_model_save_tf',
                           save_format='tf',
                           include_optimizer=True)
reloaded_model = tf.keras.models.load_model(filepath='test_model_save_tf')

# Try to repeat the inference as above
for example_input in tf_dataset:
    test_output = reloaded_model(example_input)
    break
```

**Other info / logs**
The full output of the above code, including printouts and the error stack trace, is:

```
tf.Tensor(
[[[0.6191008  0.12756673]
  [0.89110005 0.06499487]
  [0.8666591  0.02111167]
  [0.8456675  0.08551306]
  [0.853022   0.15643758]
  [0.8632274  0.20486367]
  [0.8571876  0.24682882]
  [0.8400811  0.2774819 ]
  [0.8864943  0.32766515]
  [0.8612056  0.3529073 ]]], shape=(1, 10, 2), dtype=float32)
Inference was done correctly before re-loading the model.
WARNING:tensorflow:From C:/Users/Peter/AppData/Roaming/Python/Python37/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
INFO:tensorflow:Assets written to: test_model_save_tf/assets

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
     80 # Try to repeat the inference as above
     81 for example_input in tf_dataset:
---> 82     test_output = reloaded_model(example_input)
     83     break

.../tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    820       with base_layer_utils.autocast_context_manager(
    821           self._compute_dtype):
--> 822         outputs = self.call(cast_inputs, *args, **kwargs)

.../tensorflow_core/python/keras/saving/saved_model/utils.py in return_outputs_and_add_losses(*args, **kwargs)
     59       outputs, losses = fn(inputs, *args, **kwargs)

[... further frames through wrap_with_training_arg, smart_cond,
    def_function.__call__, function._maybe_define_function,
    function._create_graph_function, func_graph_from_py_func ...]

.../tensorflow_core/python/saved_model/function_deserialization.py in restored_function_body(*args, **kwargs)
    260           .format(_pretty_format_positional(args), kwargs,
    261                   len(saved_function.concrete_functions),
--> 262                   "\n\n".join(signature_descriptions)))

ValueError: Could not find matching function to call loaded from the SavedModel. Got:
  Positional arguments (3 total):
    * ...
    * False
    * None
  Keyword arguments: {}

Expected these arguments to match one of the following 4 option(s):
[the same four options as listed above]
```
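The failure mode -- a call that must exactly match one of a fixed set of input signatures recorded at save time -- can be mimicked without TensorFlow. The sketch below is a stdlib-only analogy (my own names, not TF's SavedModel code) of why the reloaded model raises instead of adapting to the slightly different argument structure:

```python
class ConcreteFunctionSet:
    """Toy analogue of a SavedModel's saved concrete functions:
    a call succeeds only if its argument 'spec' exactly matches
    one of the specs recorded at save time."""

    def __init__(self):
        self._options = {}  # spec tuple -> function

    def register(self, spec, fn):
        self._options[spec] = fn

    def __call__(self, *args):
        # Dispatch on the structure of the arguments, not their values.
        spec = tuple(type(a).__name__ for a in args)
        if spec not in self._options:
            raise ValueError(
                "Could not find matching function to call. Got {}; "
                "expected one of {}".format(spec, sorted(self._options)))
        return self._options[spec](*args)

fns = ConcreteFunctionSet()
fns.register(("list", "list"), lambda ids, mask: len(ids) + len(mask))

# A call matching a recorded signature works ...
assert fns([1, 2], [1, 1]) == 4
# ... while a structurally different call (e.g. tuples instead of lists)
# fails with a ValueError, like the report's "4 option(s)" error.
```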
tensorflow/tensorflow
AlphaDropout + mixed_float16: op has type float32 that does not match type float16
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Fedora 31
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): v2.1.0-1-ga9af83a149 2.1.0
- Python version: 3.7.5
- Bazel version (if compiling from source): 0.29.1
- GCC/Compiler version (if compiling from source): 8.3.1
- CUDA/cuDNN version: 10.2.89 / 7.6.5.33
- GPU model and memory: NVIDIA GeForce GTX 1070 Ti, 8 GB

**Describe the current behavior**

```
Traceback (most recent call last):
  File "/home/phemmer/.local/lib/python3.7/site-packages/tensorflow_core/python/framework/op_def_library.py", line 468, in _apply_op_helper
    preferred_dtype=default_dtype)
  File "/home/phemmer/.local/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1290, in convert_to_tensor
    (dtype.name, value.dtype.name, value))
ValueError: Tensor conversion requested dtype float16 for Tensor with dtype float32

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/test.py", line 6, in <module>
    dropout = tf.keras.layers.AlphaDropout(0.5)(inputs)
  File ".../tensorflow_core/python/keras/engine/base_layer.py", line 773, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File ".../tensorflow_core/python/keras/layers/noise.py", line 202, in call
    return K.in_train_phase(dropped_inputs, inputs, training=training)
  File ".../tensorflow_core/python/keras/backend.py", line 4303, in in_train_phase
    x = switch(training, x, alt)
  File ".../tensorflow_core/python/keras/backend.py", line 4236, in switch
    x = control_flow_ops.cond(condition, then_expression_fn, else_expression_fn)
  File ".../tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File ".../tensorflow_core/python/ops/control_flow_ops.py", line 1174, in cond
    return cond_v2.cond_v2(pred, true_fn, false_fn, name)
  File ".../tensorflow_core/python/ops/cond_v2.py", line 83, in cond_v2
    op_return_value=pred)
  File ".../tensorflow_core/python/framework/func_graph.py", line 978, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File ".../tensorflow_core/python/keras/layers/noise.py", line 197, in dropped_inputs
    x = inputs * kept_idx + alpha_p * (1 - kept_idx)
  File ".../tensorflow_core/python/ops/math_ops.py", line 902, in binary_op_wrapper
    return func(x, y, name=name)
  File ".../tensorflow_core/python/ops/math_ops.py", line 1201, in _mul_dispatch
    return gen_math_ops.mul(x, y, name=name)
  File ".../tensorflow_core/python/ops/gen_math_ops.py", line 6125, in mul
    "Mul", x=x, y=y, name=name)
  File ".../tensorflow_core/python/framework/op_def_library.py", line 504, in _apply_op_helper
    inferred_from[input_arg.type_attr]))
TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type float16 of argument 'x'.
```

**Describe the expected behavior**
No exception.

**Code to reproduce the issue**

```python
import tensorflow as tf
from tensorflow.keras.mixed_precision import experimental as mixed_precision

mixed_precision.set_policy(mixed_precision.Policy('mixed_float16'))

inputs = tf.keras.Input(shape=(1,))
dropout = tf.keras.layers.AlphaDropout(0.5)(inputs)
```
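The traceback boils down to a binary op receiving one float16 and one float32 operand: under the mixed-precision policy the inputs are cast to float16, but a constant created inside the layer stays float32. The stdlib-only sketch below illustrates that invariant with toy "tensors" of my own invention (not TensorFlow objects); it mirrors TF's strict-dtype behavior and the cast-the-constant fix:

```python
class Tensor:
    """Toy tensor carrying a value and a dtype tag."""

    def __init__(self, value, dtype):
        self.value, self.dtype = value, dtype

    def __mul__(self, other):
        # Mimics TF's strict-dtype binary ops: no implicit promotion.
        if self.dtype != other.dtype:
            raise TypeError(
                "Input 'y' of 'Mul' Op has type {} that does not match "
                "type {} of argument 'x'.".format(other.dtype, self.dtype))
        return Tensor(self.value * other.value, self.dtype)

def cast(t, dtype):
    return Tensor(t.value, dtype)

x = Tensor(2.0, "float16")      # layer input, cast by the policy
alpha = Tensor(3.0, "float32")  # a constant created in float32

try:
    _ = x * alpha               # fails, like the Mul inside AlphaDropout
except TypeError as err:
    print(err)

_ = x * cast(alpha, "float16")  # casting the constant to the compute dtype fixes it
```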
tensorflow/tensorflow
MultiWorkerMirroredStrategy Keras example hangs
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0
- Python version: 3.7.3
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: 10.0
- GPU model and memory: Tesla K80, 1 GPU per worker, 2 workers

**Describe the current behavior**
When I run the distributed-training example for MultiWorkerMirroredStrategy with Keras (documented here), approximately half the time the training is successful; otherwise the workers hang after the first epoch.

**Describe the expected behavior**
The training should successfully complete every time with no hangs.

**Code to reproduce the issue**

```python
import os
import json

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {'worker': ['x.x.x.x:2000', 'x.x.x.x:2000']},
    'task': {'type': 'worker', 'index': 0}
})

import tensorflow_datasets as tfds
import tensorflow as tf

tf.config.optimizer.set_jit(True)
tfds.disable_progress_bar()

# NCCL vs RING
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.NCCL)
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))

BUFFER_SIZE = 10000
BATCH_SIZE = 64
NUM_WORKERS = 2

def make_datasets_unbatched():
    # Scale MNIST data from [0, 255] to [0, 1]
    def scale(image, label):
        image = tf.cast(image, tf.float32)
        image /= 255
        return image, label

    datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
    return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)

train_datasets = make_datasets_unbatched().batch(BATCH_SIZE)

def build_and_compile_cnn_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        metrics=['accuracy'])
    return model

GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
with strategy.scope():
    train_datasets = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
    multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=5)
```

**Other info / logs**

```
2020-01-14 20:53:41.329726: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-01-14 20:53:41.394307: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235 pciBusID: 0000:00:1e.0
[... libcudart/libcublas/libcufft/libcurand/libcusolver/libcusparse/libcudnn successfully opened; repeated "successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero" messages elided ...]
2020-01-14 20:53:41.406161: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-01-14 20:53:41.406769: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2020-01-14 20:53:41.414319: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300070000 Hz
2020-01-14 20:53:41.414709: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x56414dcdbd50 executing computations on platform Host. Devices:
2020-01-14 20:53:41.414747: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
2020-01-14 20:53:41.723344: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x56414dcf9870 executing computations on platform CUDA. Devices:
2020-01-14 20:53:41.723406: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Tesla K80, Compute Capability 3.7
2020-01-14 20:53:41.727650: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-14 20:53:41.727681: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0
2020-01-14 20:53:41.727699: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N
2020-01-14 20:53:41.729792: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10805 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)
2020-01-14 20:53:41.735829: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:worker/replica:0/task:1/device:GPU:0 with 10805 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)
[... grpc environment / polling-engine debug lines elided ("Skipping epollex because it is not supported", "grpc epoll fd: 22", "Using polling engine: epoll1", "Using native dns resolver", "SO_REUSEPORT unavailable on compiling system") ...]
2020-01-14 20:53:41.737822: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:258] Initialize GrpcChannelCache for job worker -> {0 -> x.x.x.x:2000, 1 -> localhost:2000}
2020-01-14 20:53:41.738932: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:365] Started server with target: grpc://localhost:2000 (number of devices: 2)
I0114 20:53:43.050377745   13518 subchannel.cc:1025] New connected subchannel at 0x7f2a04006d60 for subchannel 0x7f2a18008070
WARNING:tensorflow:`eval_fn` is not passed in. The `worker_fn` will be used if an "evaluator" task exists in the cluster.
WARNING:tensorflow:`eval_fn` is not passed in. The `worker_fn` will be used if an "evaluator" task exists in the
```
cluster. WARNING:tensorflow: `eval_strategy` is not passed in. No distribution strategy will be used for evaluation. WARNING:tensorflow: `eval_strategy` is not passed in. No distribution strategy will be used for evaluation. WARNING:tensorflow: ModelCheckpoint callback is not provided. Workers will need to restart training if any fails. WARNING:tensorflow: ModelCheckpoint callback is not provided. Workers will need to restart training if any fails. Train for 5 steps Epoch 1/3 2020-01-14 20:53:46.911991: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0 2020-01-14 20:53:48.937723: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 The worker hangs indefinitely after "Successfully opened dynamic library libcudnn.so.7".
tensorflowtensorflow
Add usage example for MaxPool2D
Bug
Add usage example for MaxPool2D.
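Since the request is precisely for a usage example, here is a minimal one (my own sketch, not from the docs): a 2x2 MaxPool2D with stride 2 applied to a tiny single-channel NHWC input.

```python
import numpy as np
import tensorflow as tf

# A 4x4 single-channel image, batch of 1 (NHWC layout)
x = np.arange(16, dtype=np.float32).reshape(1, 4, 4, 1)

# A 2x2 pooling window with stride 2 keeps the maximum of each window,
# halving both spatial dimensions
pool = tf.keras.layers.MaxPool2D(pool_size=2, strides=2)
y = pool(x)

print(y.numpy().reshape(2, 2))
# [[ 5.  7.]
#  [13. 15.]]
```

Each output value is the maximum of one non-overlapping 2x2 block of the input.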
tensorflowtensorflow
Keras estimator fails on regression task while underlying model works
Bug
System information:
- Have I written custom code: yes
- OS platform and distribution: Windows 10
- TensorFlow installed from: binary
- TensorFlow version: same issue using tf 2.0.0-beta1 (CPU) and tf 1.14.0 (GPU)
- Python version: 3.6.9
- CUDA/cuDNN version: CUDA 10.1.168, cuDNN 7.6.2
- GPU model and memory: NVIDIA GeForce RTX 2060, 6 GB dedicated memory

Describe the current behavior: A convolutional regression network (the last layer has linear activation and one neuron) built with tf.keras is shown to fit the MNIST dataset (I know that MNIST is a classification task; this is just an example) when converted to a dataset. When the same model is packaged into an estimator using tf.keras.estimator.model_to_estimator, no error message occurs; however, the model no longer fits: the loss does not decrease. I have made a StackOverflow question about this, with no traction whatsoever. After some more tries to get it to work, I believe it is a bug.

Describe the expected behavior: The Keras estimator should have the same behaviour as the underlying model. Change the use_estimator variable to see that the underlying model works.

Code to reproduce the issue:

# Python 3.6, tested with tensorflow-gpu 1.14 and tensorflow-cpu 2.0
import tensorflow as tf
import numpy as np

def get_model():
    im_width = 28
    num_color_channels = 1
    # Create a very simple convolutional neural network using a tf.keras functional model
    input = tf.keras.Input(shape=(im_width, im_width, num_color_channels))
    x = tf.keras.layers.Conv2D(32, 3, activation='relu')(input)
    x = tf.keras.layers.MaxPooling2D(3)(x)
    x = tf.keras.layers.Conv2D(64, 3, activation='relu')(x)
    x = tf.keras.layers.MaxPooling2D(3)(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(64, activation='relu')(x)
    output = tf.keras.layers.Dense(1, activation='linear')(x)
    model = tf.keras.Model(inputs=input, outputs=output)
    model.compile(optimizer='adam', loss='mae', metrics=['mae'])
    model.summary()
    return model

def input_fun(train=True):
    # Load MNIST and return the training or test set as a tf.data.Dataset
    # (a valid input function for tf.estimator)
    (train_images, train_labels), (eval_images, eval_labels) = tf.keras.datasets.mnist.load_data()
    train_images = train_images.reshape((60000, 28, 28, 1)).astype(np.float32) / 255
    eval_images = eval_images.reshape((10000, 28, 28, 1)).astype(np.float32) / 255
    # These two lines don't affect behaviour: for a neural network with one neuron
    # in the final layer, it doesn't seem to matter if the target data is float or int.
    train_labels = train_labels.astype(np.float32)
    eval_labels = eval_labels.astype(np.float32)

    if train:
        dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
        dataset = dataset.shuffle(buffer_size=100).repeat(None).batch(32).prefetch(1)
    else:
        dataset = tf.data.Dataset.from_tensor_slices((eval_images, eval_labels))
        dataset = dataset.batch(32).prefetch(1)  # note: prefetching does not affect behaviour
    return dataset

model = get_model()
train_input_fn = lambda: input_fun(train=True)
eval_input_fn = lambda: input_fun(train=False)

num_epochs, steps_per_epoch = 4, 1875  # 1875 = number of training images (60,000) / batch size (32)
use_estimator = False  # change this to compare model/estimator; the estimator performs much worse for no apparent reason

if use_estimator:
    estimator = tf.keras.estimator.model_to_estimator(
        keras_model=model, model_dir="model_directory",
        config=tf.estimator.RunConfig(save_checkpoints_steps=200, save_summary_steps=200))
    train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=steps_per_epoch * num_epochs)
    eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn, throttle_secs=0)
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

    print("Training complete. Evaluating estimator:")
    print(estimator.evaluate(eval_input_fn))
    # final training loss with estimator: ~2.5 (mean abs error)
else:
    dataset = train_input_fn()
    model.fit(dataset, steps_per_epoch=steps_per_epoch, epochs=num_epochs)
    print("Training complete. Evaluating Keras model:")
    print(model.evaluate(eval_input_fn()))
    # final training loss with Keras model: ~0.4 (mean abs error)
tensorflowtensorflow
Bug in transfer learning / distributed training
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 16.04
- TensorFlow installed from: binary
- TensorFlow version: 2.0
- Python version: 3.7
- CUDA/cuDNN version: CUDA 10.0, cuDNN 7
- GPU model and memory: Quadro P6000, 24 GB

Describe the problem: Under a distributed environment, if the model is updated, the trainable weights of the model are not updated in the distributed training loop. Please see the following code to reproduce the bug. I first create 2 Conv2D layers. I create the first model using only 1 Conv2D layer, and it works fine. Then I update the model to create a 2-Conv2D model, and there's the bug: outside the training loop there are 4 trainable weights (2 kernels, 2 biases), but inside the training loop there are only 2 trainable weights.

Source code / logs:

import numpy as np
import tensorflow as tf
from tensorflow import keras

tf.config.set_soft_device_placement(True)
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

x_in = np.random.randn(2, 64, 64, 3).astype(np.float32)
gt = np.random.randn(2, 64, 64, 3).astype(np.float32)

layer1 = keras.layers.Conv2D(input_shape=(None, None, None, 3), filters=3,
                             kernel_size=3, strides=1, padding='same', name='conv1')
layer2 = keras.layers.Conv2D(input_shape=(None, None, None, 3), filters=3,
                             kernel_size=3, strides=1, padding='same', name='conv2')

strategy = tf.distribute.MirroredStrategy()

@tf.function
def train():
    def train_step():
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean((model(x_in) - gt) ** 2)
        grads = tape.gradient(loss, model.trainable_weights)
        tf.print('length of trainable_weights:', len(model.trainable_weights),
                 'length of grads:', len(grads))
        optimizer.apply_gradients(zip(grads, model.trainable_weights))
    strategy.experimental_run_v2(train_step)

print('first model')
with strategy.scope():
    x = keras.Input((64, 64, 3))
    model = keras.Model(inputs=x, outputs=layer1(x))
    optimizer = keras.optimizers.Adam(0.1, amsgrad=True)

print('length of trainable_weights:', len(model.trainable_weights))
print(model.trainable_weights[-1].values[0].name, model.trainable_weights[-1].values[0].numpy())
# length of trainable_weights: 2, conv1/bias: [0. 0. 0.]

for i in range(2):
    train()
# length of trainable_weights: 2, length of grads: 2
# length of trainable_weights: 2, length of grads: 2

print('length of trainable_weights:', len(model.trainable_weights))
print(model.trainable_weights[-1].values[0].name, model.trainable_weights[-1].values[0].numpy())
# length of trainable_weights: 2, conv1/bias: [-0.03003622 0.03190297 -0.02856238]

print('changed model')
with strategy.scope():
    x = keras.Input((64, 64, 3))
    model = keras.Model(inputs=x, outputs=layer2(layer1(x)))

print('length of trainable_weights:', len(model.trainable_weights))
print(model.trainable_weights[-1].values[0].name, model.trainable_weights[-1].values[0].numpy())
# length of trainable_weights: 4, conv2/bias: [0. 0. 0.]

for i in range(2):
    train()
# length of trainable_weights: 2, length of grads: 2
# length of trainable_weights: 2, length of grads: 2

print('length of trainable_weights:', len(model.trainable_weights))
print(model.trainable_weights[-1].values[0].name, model.trainable_weights[-1].values[0].numpy())
# length of trainable_weights: 4, conv2/bias: [0. 0. 0.]
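The behaviour above looks consistent with how tf.function tracing works: the traced function keeps the variables it captured at trace time, so rebinding the `model` name afterwards does not change what the already-traced `train()` sees. A minimal, non-distributed sketch of that capture behaviour (my own illustration, not from the report):

```python
import tensorflow as tf

v = tf.Variable(1.0)

@tf.function
def read():
    # `read` captures the Variable object bound to `v` at trace time
    return v.read_value()

first = float(read())   # traces here, using the original Variable
v = tf.Variable(5.0)    # rebind the Python name to a brand-new Variable
second = float(read())  # no retrace: the traced function still holds the old Variable
print(first, second)  # 1.0 1.0
```

If the report's `train()` were re-created (or given the model as an argument) after the model is rebuilt, a fresh trace would pick up all four weights.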
tensorflowtensorflow
LayerNormalization dtype issue with mixed precision policy 'mixed_float16'
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: Google Colab notebook
- TensorFlow version: tf-nightly 2.2.0.dev20200113
- GPU model and memory: GPU 0: Tesla T4

I'm making small edits to the Colab notebook here. When using mixed precision computation via

policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)

LayerNormalization doesn't appear to work:

inputs = keras.Input(shape=(784,), name='digits')
if tf.config.list_physical_devices('GPU'):
    print('The model will run with 4096 units on a GPU')
    num_units = 4096
else:
    # Use fewer units on CPUs so the model finishes in a reasonable amount of time
    print('The model will run with 64 units on a CPU')
    num_units = 64
dense1 = layers.Dense(num_units, activation='relu', name='dense_1')
x = dense1(inputs)
dense2 = layers.Dense(num_units, activation='relu', name='dense_2')
x = dense2(x)
layer_norm = layers.LayerNormalization()
x = layer_norm(x)

Output:

The model will run with 4096 units on a GPU
TypeError Traceback (most recent call last)
<ipython-input> in <module>
     12 x = dense2(x)
     13 layer_norm = layers.LayerNormalization()
---> 14 x = layer_norm(x)
5 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/op_def_library.py in _SatisfiesTypeConstraint(dtype, attr_def, param_name)
     59           "allowed values: %s" %
     60           (param_name, dtypes.as_dtype(dtype).name,
---> 61            ", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
     62
     63 TypeError: Value passed to parameter 'scale' has DataType float16 not in list of allowed values: float32

Replacing LayerNormalization with BatchNormalization works without issue.
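One possible workaround (my suggestion, not from the report; it uses the newer non-experimental mixed-precision API names) is to pin just the normalization layer to float32 while the rest of the model keeps the mixed policy, via the per-layer dtype argument:

```python
import tensorflow as tf

# Assumed setup: mixed precision enabled globally, as in the report
tf.keras.mixed_precision.set_global_policy('mixed_float16')

inputs = tf.keras.Input(shape=(8,))
x = tf.keras.layers.Dense(8, activation='relu')(inputs)     # computes in float16
x = tf.keras.layers.LayerNormalization(dtype='float32')(x)  # pinned to float32
model = tf.keras.Model(inputs, x)
print(model.output.dtype)  # the normalized output is float32

tf.keras.mixed_precision.set_global_policy('float32')  # restore the default policy
```

The per-layer override casts the layer's inputs to float32, so the float16-only 'scale' constraint is never hit.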
tensorflowtensorflow
Batch images first or format them first?
Bug
Under "Create a dataset from images and labels" we have this code:

train_batches = train_examples.cache().shuffle(num_examples // 4).batch(BATCH_SIZE).map(format_example).prefetch(1)

and similarly for the validation and test examples. Is this a right practise? Shouldn't we first format the raw images and then batch them together, rather than the opposite way?
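For comparison, here is a small sketch of the order the question proposes, running the per-image format function with map() before batch(). The format function and shapes are my own toy stand-ins, not the tutorial's:

```python
import numpy as np
import tensorflow as tf

def format_example(image, label):
    # Toy per-image formatting: cast and scale to [0, 1]
    image = tf.cast(image, tf.float32) / 255.0
    return image, label

images = np.random.randint(0, 256, size=(10, 4, 4, 3), dtype=np.uint8)
labels = np.arange(10)

ds = (tf.data.Dataset.from_tensor_slices((images, labels))
      .map(format_example)   # format first, one example at a time
      .batch(4)              # then batch the already-formatted examples
      .prefetch(1))

batch_images, batch_labels = next(iter(ds))
print(batch_images.shape, batch_images.dtype)  # (4, 4, 4, 3) float32
```

Both orders work when the function is element-wise; mapping after batch() can actually be faster because the function then runs once per batch instead of once per example, which is presumably why the tutorial batches first.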
tensorflowtensorflow
Building the PoseNet example on iOS is failing
Bug
System information:
- OS platform and distribution: macOS 10.15 and iPadOS 13.3
- Mobile device: iPad 2018
- TensorFlow installed from: pod

Describe the problem: I did all the steps from the README, but got the next error:

[!] CocoaPods could not find compatible versions for pod "TensorFlowLiteSwift":
  In snapshot (Podfile.lock):
    TensorFlowLiteSwift (= 0.0.1-nightly)
  In Podfile:
    TensorFlowLiteSwift (~> 0.0.1-nightly)
None of your spec sources contain a spec satisfying the dependencies: TensorFlowLiteSwift (= 0.0.1-nightly), TensorFlowLiteSwift (~> 0.0.1-nightly).
You have either:
  * out-of-date source repos which you can update with `pod repo update` or with `pod install --repo-update`.
  * mistyped the name or version.
  * not added the source repo that hosts the Podspec to your Podfile.

The solution to make it work: I just removed the Podfile.lock file and ran the `pod install` command again. Maybe the source repos should also be updated. Thanks. P.S. Initially it was posted there.
tensorflowtensorflow
Typo error in tflite_c02_transfer_learning.ipynb
Bug
URL(s) with the issue:
Description of issue (what needs changing): Screenshot from 2020-01-13 12:53:58. In the description it should be "cats vs dogs".
Submit a pull request?: Yes
tensorflowtensorflow
Incompatible shapes when using tf.keras.backend.ctc_decode
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Debian Buster
- Mobile device: n/a
- TensorFlow installed from: binary
- TensorFlow version: v2.1.0-rc2-17-ge5bf8de 2.1.0
- Python version: 3.7
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior: When using tf.keras.backend.ctc_decode with a batch size smaller than the size of the model input, a ValueError is raised, related to a failure to broadcast input shapes.

Describe the expected behavior: I expect shapes to be consistent and therefore no ValueError to be raised.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

import tensorflow as tf
import numpy as np

def ctc_decoder():
    def decoder(y_pred):
        input_shape = tf.keras.backend.shape(y_pred)
        input_length = tf.ones(shape=input_shape[0]) * tf.keras.backend.cast(input_shape[1], 'float32')
        return tf.keras.backend.ctc_decode(y_pred, input_length)[0][0]
    return tf.keras.layers.Lambda(decoder, name='decode')

input_layer = tf.keras.layers.Input((48, 37))
x = ctc_decoder()(input_layer)
model = tf.keras.Model(inputs=input_layer, outputs=x)

# This never raises a ValueError (the batch size is equal to the length of the input):
y = model.predict(np.random.uniform(size=(100, 48, 37)), batch_size=100)
# This usually raises a ValueError:
y = model.predict(np.random.uniform(size=(100, 48, 37)), batch_size=32)
# This always raises a ValueError:
y = model.predict(np.random.uniform(size=(100, 48, 37)), batch_size=1)

Other info / logs: Here is the full traceback for an example exception:

ValueError Traceback (most recent call last)
<ipython-input> in <module>
----> 1 y = model
predict np random uniform size 100 48 37 batch size 1 usr src venv lib python3 7 site package tensorflow core python keras engine train py in predict self x batch size verbose step callback max queue size worker use multiprocesse 1011 max queue size max queue size 1012 worker worker 1013 use multiprocesse use multiprocesse 1014 1015 def reset metric self usr src venv lib python3 7 site package tensorflow core python keras engine training v2 py in predict self model x batch size verbose step callback max queue size worker use multiprocesse kwargs 496 model modekey predict x x batch size batch size verbose verbose 497 step step callback callback max queue size max queue size 498 worker worker use multiprocesse use multiprocesse kwargs 499 500 usr src venv lib python3 7 site package tensorflow core python keras engine training v2 py in model iteration self model mode x y batch size verbose sample weight step callback max queue size worker use multiprocesse kwargs 473 mode mode 474 training context training context 475 total epoch 1 476 cbks make logs model epoch log result mode 477 usr src venv lib python3 7 site package tensorflow core python keras engine training v2 py in run one epoch model iterator execution function dataset size batch size strategy step per epoch num sample mode training context total epoch 177 batch out 178 batch start step batch size 179 batch end step batch size current batch size 180 cbks make logs model batch log batch out mode 181 step 1 usr src venv lib python3 7 site package tensorflow core python keras engine training util py in aggregate self batch out batch start batch end 345 batch out nest flatten up to self structure batch out 346 for batch element result in zip batch out self result 347 result aggregate batch element batch start batch end 348 349 def finalize self usr src venv lib python3 7 site package tensorflow core python keras engine training util py in aggregate self batch element batch start batch end 278 num element np prod 
batch_element.shape) 279 if num_elements < self._BINARY_SIZE_THRESHOLD: --> 280 self.results[batch_start:batch_end] = batch_element 281 else: 282 is_finished = threading.Event() ValueError: could not broadcast input array from shape (1,46) into shape (1,48)
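One way to sidestep the predict() aggregation entirely (my suggestion, not from the report) is to decode with the underlying tf.nn.ctc_greedy_decoder on the full prediction array; the decoder returns one SparseTensor covering the whole batch, so differently-sized per-batch dense outputs never have to be stitched together:

```python
import numpy as np
import tensorflow as tf

# Hypothetical logits: batch of 4, 48 timesteps, 37 classes (batch-major)
y_pred = np.random.uniform(size=(4, 48, 37)).astype(np.float32)

# tf.nn.ctc_greedy_decoder wants time-major input and per-example lengths
inputs = tf.transpose(y_pred, [1, 0, 2])
seq_len = np.full((4,), 48, dtype=np.int32)
(decoded,), log_prob = tf.nn.ctc_greedy_decoder(inputs, seq_len)

# One SparseTensor for the whole batch; densify once at the end
dense = tf.sparse.to_dense(decoded, default_value=-1)
print(dense.shape[0])  # 4
```

Decoding outside the Keras Lambda avoids the per-batch output aggregation step that raises the broadcast error.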
tensorflowtensorflow
TFLite only utilizes a single core of the CPU
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 18.04 and Raspbian 10
- Mobile device: Raspberry Pi 4
- TensorFlow installed from: binary
- TensorFlow version: for Ubuntu, v1.14.0-rc1-22-gaf24dc9 1.14.0; for Raspberry Pi, v1.12.1-14948-g43dcb71 1.14.0
- Python version: 3.7

Describe the current behavior: TFLite only utilizes a single core of the CPU. I tested it on a PC and on a Raspberry Pi 4.

Code to reproduce the issue:

interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'], inp)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])

The model I used is here: 1578756226.tar.gz
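In newer TensorFlow releases, the Python Interpreter accepts a num_threads argument, which is the usual way to enable multi-core CPU inference. A sketch (the converted model here is a trivial stand-in, since the reporter's archive isn't reproduced):

```python
import numpy as np
import tensorflow as tf

# Build a trivial graph and convert it, just to have valid model bytes to load
@tf.function(input_signature=[tf.TensorSpec(shape=(1, 3), dtype=tf.float32)])
def double(x):
    return x * 2.0

converter = tf.lite.TFLiteConverter.from_concrete_functions([double.get_concrete_function()])
tflite_model = converter.convert()

# num_threads lets the interpreter run supported kernels on several CPU cores
interpreter = tf.lite.Interpreter(model_content=tflite_model, num_threads=4)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.set_tensor(input_details[0]['index'], np.ones((1, 3), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])
print(result)  # [[2. 2. 2.]]
```

Whether extra threads actually help still depends on the ops in the model; some kernels are single-threaded regardless.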
tensorflowtensorflow
model._function_kwargs are silently ignored
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): see below
- OS platform and distribution: Windows, Linux
- TensorFlow installed from: pip
- TensorFlow version: 2.1.0
- Python version: 3.7.6

Describe the current behavior: In this example no error is raised, although Session kwargs are not supported in eager mode; the kwargs are simply silently ignored.

import tensorflow as tf
from tensorflow import keras

fetches = lambda: "Whatever I write here is ignored"
var = tf.Variable(3.0)
model = keras.models.Sequential([keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss="mse", optimizer="adam")
model._function_kwargs = {"fetches": fetches}  # should fail, ignored as well
model.fit([[7.0]], [[9.0]], epochs=2)

Describe the expected behavior: An error should be raised. I am aware that _function_kwargs is not part of the public API, but keras (as opposed to tf.keras) does raise an error here:

import keras
import tensorflow as tf

fetches = lambda: "Whatever I write here is ignored"
var = tf.Variable(3.0)
model = keras.models.Sequential([keras.layers.Dense(1, input_shape=(1,))])
model.compile(loss="mse", optimizer="adam")
model._function_kwargs = {"fetches": fetches}  # should fail, ignored as well
model.fit([[7.0]], [[9.0]], epochs=2)

Output:

Exception has occurred: ValueError
Session keyword arguments are not supported during eager execution. You passed: {'fetches': <function <lambda> at 0x7f94681be3b0>}  # should fail, ignored as well
File "/home/bersbersber/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow_core/python/keras/backend.py", line 3759, in function

Somewhat related:
tensorflowtensorflow
Autograph failure with "\"
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- OS platform and distribution: Ubuntu 18.04
- TensorFlow installed from: binary
- TensorFlow version: v2.1.0-rc2-17-ge5bf8de 2.1.0
- Python version: 3.6
- GPU model and memory: NVIDIA RTX 2080

Describe the current behavior: TensorFlow shows a warning about a failure of AutoGraph:

WARNING:tensorflow: AutoGraph could not transform <...> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: expected exactly one node node, found []

The warning seems to be caused by the backslash.

Describe the expected behavior: There should be no such warning.

Code to reproduce the issue:

import tensorflow as tf

class C(object):
    def f(self):
        # The error disappears if "\" in the following line is removed
        a = \
            1
        return a

obj = C()

@tf.function
def func():
    mem = obj.f()
    return mem

def main():
    print(func())

if __name__ == '__main__':
    main()
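A workaround until AutoGraph handles this (my suggestion, not from the report): use implicit line continuation with parentheses instead of a backslash. Python treats the two forms identically, and no "\" is left in the source for AutoGraph's analysis to trip over.

```python
# Backslash continuation, as in the report:
a = \
    1

# Equivalent implicit continuation with parentheses:
b = (
    1
)
print(a == b)  # True
```

PEP 8 also recommends implicit continuation over backslashes, so the workaround doubles as a style fix.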
tensorflowtensorflow
TF 2.1: k parameter is ignored in tf.linalg.diag_part
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: Windows 10 64-bit (also happens in WSL)
- TensorFlow installed from: binary
- TensorFlow version: 2.1
- Python version: 3.7
- CUDA/cuDNN version: 10.1 (also happens in tensorflow-cpu)
- GPU model and memory: GTX 1050, 2 GB

Describe the current behavior: Using the k parameter of tf.linalg.diag_part does not affect anything; the result is still the same.

Describe the expected behavior: It should select the super- or subdiagonal depending on k.

Code to reproduce the issue:

import tensorflow as tf
import numpy as np

input = np.array([[[1, 2, 3, 4],   # input shape: (2, 3, 4)
                   [5, 6, 7, 8],
                   [9, 8, 7, 6]],
                  [[5, 4, 3, 2],
                   [1, 2, 3, 4],
                   [5, 6, 7, 8]]])

# This works as expected:
tf.linalg.diag_part(input)
# --> [[1, 6, 7],
#      [5, 2, 7]]   (output shape: (2, 3))

# This does not:
tf.linalg.diag_part(input, k=1)
# --> [[1, 6, 7],
#      [5, 2, 7]]   (still returns the same output)
# [[2, 7, 6], [4, 3, 8]] is the expected output

This example is taken from the documentation, which is also incorrect; see #35760.
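As a cross-check of the expected result, the superdiagonal that k=1 should select can be computed with NumPy's diagonal, where offset plays the role of k (my own comparison, not from the report):

```python
import numpy as np

batch = np.array([[[1, 2, 3, 4],
                   [5, 6, 7, 8],
                   [9, 8, 7, 6]],
                  [[5, 4, 3, 2],
                   [1, 2, 3, 4],
                   [5, 6, 7, 8]]])

# offset=1 selects the superdiagonal of each matrix in the batch,
# i.e. what tf.linalg.diag_part(batch, k=1) is documented to return
super_diag = np.diagonal(batch, offset=1, axis1=-2, axis2=-1)
print(super_diag)
# [[2 7 6]
#  [4 3 8]]
```

This matches the expected output quoted in the report, confirming the TF result with k=1 is wrong.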
tensorflowtensorflow
diag_part documentation outdated
Bug
URL(s) with the issue:
Description of issue (what needs changing): The diag_part documentation contains old examples with non-existent APIs. It uses tf.matrix_diag_part in the examples, which does not exist.
tensorflowtensorflow
TFLite Micro softmax op is still version 1
Bug
TensorFlow Micro system information:
- Host OS platform and distribution: Ubuntu 18.04
- TensorFlow installed from: source
- TensorFlow version: 4b3c1199a97cb36b8866d98e7036f4ec3e70abd6
- Target platform: Apollo3

Describe the problem: The TFLite Micro softmax op in tensorflow/lite/micro/kernels/softmax.cc already has int8 input support. From what I understand, this should be version 2 in tensorflow/lite/micro/kernels/all_ops_resolver.cc#L26.
tensorflowtensorflow
Add a warning that tfds.load cannot be used for one's own dataset
Bug
URL(s) with the issue:
Description of issue (what needs changing): Add a warning that tfds.load cannot be used for the user's own dataset, i.e. one that they created themselves. To a new user trying to load a dataset from a set of files, it is not obvious that this method is only for pre-made, immutable datasets. Although it does say "Loads the named dataset into a tf.data.Dataset", I initially interpreted it such that my own dataset can be assigned a name. I was looking for a way to split a dataset into training and validation subsets and stumbled upon this documentation (I was redirected from a page which comes up as one of the most prominent search results when searching for "tensorflow dataset split"). Result: a user who visits will not spend an hour trying to understand all the documentation, but will immediately realize that this is only for immutable, pre-made datasets.
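What the reporter was actually trying to do, splitting their own data into training and validation subsets, is done with plain tf.data rather than tfds. A minimal sketch (the names and the 80/20 split are my own):

```python
import tensorflow as tf

# Stand-in for "my own dataset": 100 examples.
# reshuffle_each_iteration=False keeps take/skip consistent across iterations,
# so the two subsets stay disjoint.
dataset = tf.data.Dataset.range(100).shuffle(100, seed=0, reshuffle_each_iteration=False)

train_ds = dataset.take(80)  # first 80 shuffled examples
val_ds = dataset.skip(80)    # remaining 20
print(len(list(train_ds)), len(list(val_ds)))  # 80 20
```

With the default reshuffle_each_iteration=True, each iteration reshuffles independently and the take/skip halves could overlap, so the flag matters here.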
tensorflowtensorflow
Multi-GPU training issue (TensorFlow 2.0.0)
Bug
I am trying to train my code on multiple GPUs and have followed the tutorial online on using MirroredStrategy, using the MNIST dataset. Below is the error:

ValueError: `handle` is not available outside the replica context or a `tf.distribute.Strategy.update()` call.

I have a feeling that the issue is due to the fact that, when I call on my GPUs, they are named "XLA", and not "/device:GPU:...". This leads me to believe it is a bug. Below is my full code, and following that the full error. I am currently using TensorFlow 2.0.0, CUDA 10.1 and CentOS 7.6.1810.

My code is below:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import Xception
from tensorflow.keras.utils import multi_gpu_model
import numpy as np

num_samples = 1000
height = 224
width = 224
num_classes = 1000

# This puts the model weights on the CPU
with tf.device(device_names[1]):
    model = Xception(weights=None, input_shape=(height, width, 3), classes=num_classes)

# This splits up the training amongst multiple GPUs
mirrored_strategy = tf.distribute.MirroredStrategy(devices=[device_names[3], device_names[4]])

from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
import os

print(tf.__version__)  # 2.0.0

datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']

# This splits up the training amongst multiple specific GPUs
strategy = tf.distribute.MirroredStrategy(devices=[device_names[2], device_names[3]])
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
print('Names of devices are', device_names[2], 'and', device_names[3])
# Number of devices: 2
# Names of devices are /device:XLA_GPU:0 and /device:XLA_GPU:1

# The benefit of using multiple GPUs is that you can train with a larger batch size;
# then you can just tweak the learning rate accordingly.
num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples

BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64

def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label

train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)

with strategy.scope():
    # The scope portion indicates which part of the code will be distributed
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])

import datetime
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

model.compile(loss='sparse_categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])

def decay(epoch):
    if epoch < 3:
        return 1e-3
    elif epoch >= 3 and epoch < 7:
        return 1e-4
    else:
        return 1e-5

checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")

class PrintLR(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        print('\nLearning rate for epoch {} is {}'.format(epoch + 1, model.optimizer.lr.numpy()))

callbacks = [
    tensorboard_callback,
    tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix, save_weights_only=True),
    tf.keras.callbacks.LearningRateScheduler(decay),
    PrintLR()
]

model.fit(train_dataset, epochs=30, callbacks=callbacks)

Full error:

ValueError Traceback (most recent call last)
<ipython-input> in <module>
----> 1 model.fit(train_dataset, epochs=30, callbacks=callbacks)

~/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    726         max_queue_size=max_queue_size,
    727         workers=workers,
--> 728         use_multiprocessing=use_multiprocessing)
    729
    730   def evaluate(self,

~/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data,
...shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)

~/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
    123             batch_outs = execution_function(iterator)
~/.../keras/engine/training_v2_utils.py in execution_function(input_fn)
     86     return execution_function
~/.../eager/def_function.py in __call__(self, *args, **kwds)
    457       result = self._call(*args, **kwds)
~/.../eager/def_function.py in _call(self, *args, **kwds)
    503       self._initialize(args, kwds, add_initializers_to=initializer_map)
~/.../eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
    407     self._concrete_stateful_fn = (
    408         self._stateful_fn._get_concrete_function_internal_garbage_collected(*args, **kwds))
~/.../eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   1848       graph_function, _, _ = self._maybe_define_function(args, kwargs)
~/.../eager/function.py in _maybe_define_function(self, args, kwargs)
   2150         graph_function = self._create_graph_function(args, kwargs)
~/.../eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   2041             capture_by_value=self._capture_by_value),
~/.../framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    915     func_outputs = python_func(*func_args, **func_kwargs)
~/.../eager/def_function.py in wrapped_fn(*args, **kwds)
    358         return weak_wrapped_fn().__wrapped__(*args, **kwds)
~/.../keras/engine/training_v2_utils.py in distributed_function(input_iterator)
     72       outputs = strategy.experimental_run_v2(
     73           per_replica_function, args=(model, x, y, sample_weights))
~/.../distribute/distribute_lib.py in experimental_run_v2(self, fn, args, kwargs)
    760       return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
~/.../distribute/distribute_lib.py in call_for_each_replica(self, fn, args, kwargs)
   1787       return self._call_for_each_replica(fn, args, kwargs)
~/.../distribute/distribute_lib.py in _call_for_each_replica(self, fn, args, kwargs)
   2132       return fn(*args, **kwargs)
~/.../autograph/impl/api.py in wrapper(*args, **kwargs)
    292       return func(*args, **kwargs)
~/.../keras/engine/training_v2_utils.py in train_on_batch(model, x, y, sample_weight, class_weight, reset_metrics)
    264       output_loss_metrics=model._output_loss_metrics)
~/.../keras/engine/training_eager.py in train_on_batch(model, inputs, targets, sample_weights, output_loss_metrics)
    311       output_loss_metrics=output_loss_metrics)
~/.../keras/engine/training_eager.py in _process_single_batch(model, inputs, targets, output_loss_metrics, sample_weights, training)
    272         model.optimizer.apply_gradients(zip(grads, trainable_weights))
~/.../keras/optimizer_v2/optimizer_v2.py in apply_gradients(self, grads_and_vars, name)
    439         functools.partial(self._distributed_apply, apply_state=apply_state),
    440         args=(grads_and_vars,),
    441         kwargs={"name": name})
~/.../distribute/distribute_lib.py in merge_call(self, merge_fn, args, kwargs)
   1917     return self._merge_call(merge_fn, args, kwargs)
~/.../distribute/distribute_lib.py in _merge_call(self, merge_fn, args, kwargs)
   1924       return merge_fn(self._strategy, *args, **kwargs)
~/.../keras/optimizer_v2/optimizer_v2.py in _distributed_apply(self, distribution, grads_and_vars, name, apply_state)
    482           scope_name, distribution.extended.colocate_vars_with(var)):
~/anaconda3/envs/tf/lib/python3.6/contextlib.py in __enter__(self)
     81             return next(self.gen)
~/.../framework/ops.py in _colocate_with_for_gradient(self, op, gradient_uid, ignore_existing)
   4220     with self.colocate_with(op, ignore_existing):
~/anaconda3/envs/tf/lib/python3.6/contextlib.py in __enter__(self)
     81             return next(self.gen)
~/.../framework/ops.py in colocate_with(self, op, ignore_existing)
   4269     op = _op_to_colocate_with(op, self)
~/.../framework/ops.py in _op_to_colocate_with(v, graph)
   6603   if hasattr(v, "handle") and hasattr(v.handle, "op") and isinstance(
   6604       v.handle.op, Operation):
~/.../distribute/values.py in handle(self)
    717       raise ValueError("handle is not available outside the replica context"
    718                        " or a tf.distribute.Strategy.update() call")

ValueError: handle is not available outside the replica context or a tf.distribute.Strategy.update() call
tensorflowtensorflow
disable_eager_execution resets a random seed set before it
Bug
This is on TF 2.1 from pip on Windows 10.

Describe the current behavior:

import tensorflow.compat.v1 as tf1
tf1.random.set_random_seed(0)
tf1.disable_eager_execution()
print(tf1.keras.backend.get_session().run(tf1.random.uniform((), 0, 1)))

This prints a different number every time. I have a similar example with more TF2-relevant code where the same thing happens:

import tensorflow as tf
import tensorflow.compat.v1 as tf1
tf.random.set_seed(0)
tf1.disable_eager_execution()
print(tf1.keras.backend.get_session().run(tf.random.uniform((), 0, 1)))

A workaround is to set the seed after disabling eager execution.
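The workaround mentioned at the end of the report can be sketched as follows; this is a minimal illustration of the ordering fix, not code from the report, and it assumes that in graph mode a graph-level seed set after disabling eager execution makes each fresh session reproduce the same first draw.

```python
import tensorflow as tf
import tensorflow.compat.v1 as tf1

tf1.disable_eager_execution()  # disable eager FIRST
tf.random.set_seed(0)          # THEN set the seed, so it seeds the new graph

sample = tf.random.uniform((), 0, 1)

# Each fresh session restarts from the same graph-level seed, so the
# first value drawn is reproducible across sessions.
with tf1.Session() as sess:
    first = sess.run(sample)
with tf1.Session() as sess:
    second = sess.run(sample)
```

Setting the seed before `disable_eager_execution()` seeds the eager context that is about to be discarded, which matches the non-deterministic behavior described above.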
tensorflowtensorflow
Converting saved model to TFLite model using TF 2.0
Bug
System information: Google Colab, TensorFlow 2.0.0.

I am working on converting a custom object detection model (trained using SSD and an Inception network) to a quantized TFLite model. I was able to convert the custom object detection model from a frozen graph to a quantized TFLite model using the following code snippet (using TensorFlow 1.4):

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    args.model,
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]},
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess',
                   'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2',
                   'TFLite_Detection_PostProcess:3'])
converter.allow_custom_ops = True
converter.post_training_quantize = True
tflite_model = converter.convert()
open(args.output, 'wb').write(tflite_model)

However, the tf.lite.TFLiteConverter.from_frozen_graph class method is not available in TensorFlow 2.0 (refer to this link: "Export a SavedModel"), so I tried to convert the model using the tf.lite.TFLiteConverter.from_saved_model class method. The code snippet is shown below:

converter = tf.lite.TFLiteConverter.from_saved_model('/content/<path-to-saved-model-directory>')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

The above code snippet throws the following error:

ValueError: None is only supported in the 1st dimension. Tensor 'image_tensor' has invalid shape '[None, None, None, 3]'.

I tried to pass the input shape as an argument:

converter = tf.lite.TFLiteConverter.from_saved_model('/content/<path-to-saved-model-directory>', input_shapes={'image_tensor': [1, 300, 300, 3]})

but it throws the following error:

TypeError: from_saved_model() got an unexpected keyword argument 'input_shapes'

Am I missing something? Please feel free to correct me.
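One way to pin the dynamic input dimensions in TF 2.x is to trace a concrete function with a fully specified `tf.TensorSpec` and convert that instead of the SavedModel directly. The sketch below uses a small stand-in Keras model rather than the detection SavedModel from the report, and only demonstrates the shape-pinning step; the real detection graph may still hit other conversion issues (e.g. custom ops).

```python
import tensorflow as tf

# Hypothetical stand-in for the detection model: a conv net whose
# height/width are unknown (None) like 'image_tensor' above.
model = tf.keras.Sequential(
    [tf.keras.layers.Conv2D(4, 3, padding="same", input_shape=(None, None, 3))])

# Trace with a fully specified shape so the converter never sees
# None outside the first dimension.
concrete = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec([1, 300, 300, 3], tf.float32))

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```

The resulting `tflite_model` is the serialized flatbuffer (bytes) with a fixed 1x300x300x3 input.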
tensorflowtensorflow
Wrong window size while training the model
Bug
URL(s) with the issue:
Description of issue (what needs changing): window_size should be 30 instead of 20 in the description under "Forecasting with machine learning".
Clear description: (Screenshot from 2020-01-10 12:05:13.) As we can see, under the linear model window_size = 30, while in the description it is mentioned as "model forecasts given previous 20 steps".
Submit a pull request? Yes.
tensorflowtensorflow
Recurrent dropout is wrong
Bug
I've reviewed one design in depth and two others superficially, but Keras/TF's recurrent dropout does not implement any of their publications (links below).

1. I see some potentially severe problems with TF's implementation in light of the papers I've read, which explicitly advocate against the used scheme. This raises the question: what is TensorFlow/Keras's justification/rationale for its own implementation?
2. The implementation is inconsistent (see below). The docstring only mentions a performance difference, but there's also a reproducibility and design difference: implementation 1 uses a different mask per gate, whereas implementation 2 uses a shared mask. The two are neither theoretically nor practically identical.

The second is fixable via a docstring, but the first involves significant changes to the recurrent dropout logic for LSTM, GRU, and maybe other RNNs. This raises the question: is TensorFlow/Keras open to changing its base implementation of recurrent dropout? If so, I can go ahead and clarify (1) in detail, and maybe even do the reimplementation myself in a PR, per paper 1.

Inconsistency, implementation 1 vs. implementation 2:

# implementation 1
if 0. < self.recurrent_dropout < 1.:
    h_tm1_i = h_tm1 * rec_dp_mask[0]
    h_tm1_f = h_tm1 * rec_dp_mask[1]
    h_tm1_c = h_tm1 * rec_dp_mask[2]
    h_tm1_o = h_tm1 * rec_dp_mask[3]

# implementation 2
if 0. < self.recurrent_dropout < 1.:
    h_tm1 = h_tm1 * rec_dp_mask[0]

Source code: keras L2014, tf.keras L2391.
Publications: 1. Recurrent Dropout without Memory Loss; 2. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks; 3. RNNDROP: A novel dropout for RNNs.
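The per-gate vs. shared-mask distinction above can be illustrated in plain NumPy; this is a didactic sketch of inverted dropout, not the actual Keras mask-generation code.

```python
import numpy as np

rng = np.random.RandomState(0)
h_tm1 = rng.randn(2, 4)   # [batch, units] recurrent state
rate = 0.5

def dropout_mask(rng, shape, rate):
    # Inverted dropout: kept entries are scaled by 1/(1-rate),
    # dropped entries are zero.
    keep = (rng.uniform(size=shape) >= rate).astype(float)
    return keep / (1.0 - rate)

# "implementation 1": an independent mask per gate (i, f, c, o)
masks = [dropout_mask(rng, h_tm1.shape, rate) for _ in range(4)]
h_i, h_f, h_c, h_o = (h_tm1 * m for m in masks)

# "implementation 2": one mask shared by all four gates
shared = dropout_mask(rng, h_tm1.shape, rate)
h_shared = h_tm1 * shared
```

With per-gate masks the four gate inputs generally see different zero patterns in a single step; with a shared mask every gate sees the same dropped units, so the two schemes are not equivalent even in distribution over a whole sequence.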
tensorflowtensorflow
parallel_for: No converter defined for MatrixSolve
Bug
I need to calculate the Jacobian of a function where tf.linalg.solve is part of the function. I can usually use parallel_for, but pfor does not support the MatrixSolve op, requiring a fallback to a slow while_loop. It would be great to add a converter for MatrixSolve to parallel_for. Note that my example below can easily be simplified mathematically, but in my actual use case it cannot be.

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.14.0
- Python version: 3.7.3

Code to reproduce the issue

import tensorflow as tf
from tensorflow.python.ops.parallel_for.gradients import jacobian
import numpy as np

sess = tf.InteractiveSession()
print(tf.__version__)
x = tf.compat.v1.placeholder(tf.float64, shape=(3,))
y = tf.compat.v1.placeholder(tf.float64, shape=(3,))
z = tf.reshape(tf.linalg.solve(tf.linalg.diag(x), tf.reshape(y, (-1, 1))), (-1,))
jac = jacobian(z, x)
print(sess.run(jac, feed_dict={x: np.array([1., 1., 1.]), y: np.array([1., 2., 3.])}))

Describe the current behavior

ValueError: No converter defined for MatrixSolve
name: "loop_body/gradients/MatrixSolve_grad/MatrixSolve"
op: "MatrixSolve"
input: "MatrixDiag"
input: "loop_body/gradients/Reshape_1_grad/Reshape"
attr { key: "T" value { type: DT_DOUBLE } }
attr { key: "adjoint" value { b: true } }
inputs: [WrappedTensor(t=..., is_stacked=False, is_sparse_stacked=False), WrappedTensor(t=..., is_stacked=True, is_sparse_stacked=False)]
Either add a converter or set --op_conversion_fallback_to_while_loop=True, which may run slower.

Describe the expected behavior
I should get array([[1., 0., 0.], [0., 2., 0.], [0., 0., 3.]]).
tensorflowtensorflow
TensorFlow 1.15 documentation redirects to GitHub
Bug
URL(s) with the issue: for example,
Description of issue (what needs changing): The link above currently redirects to GitHub; the 1.15 link works. I have projects using TensorFlow 1.14, so I would like to use the 1.14 docs for reference. Will the 1.14 docs be back up?
tensorflowtensorflow
tf.distribute.experimental.TPUStrategy doesn't render "stable" docs correctly
Bug
URL(s) with the issue:
Description of issue (what needs changing): On the documentation page for tf.distribute.experimental.TPUStrategy, the stable documentation is shown as raw text; it looks like a combination of HTML and Markdown (example below). Clicking "See nightly" renders it correctly, but after clicking "See stable" again it still shows the raw text.
tensorflowtensorflow
tf.range fails when limit is of type tf.int32 and dtype is tf.int64
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Catalina
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.1.0
- Python version: 3.7
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
The behavior of tf.range changed between 2.0.0 and 2.1.0 such that tf.range(limit, dtype=dtype) fails when limit is of type tf.int32 and dtype is tf.int64. Not sure if this is a bug or a feature, but I would expect this to still work. Neither the documentation nor the 2.1.0 release notes explicitly mention anything about this.

Describe the expected behavior
The behavior as it was in 2.0.0, i.e. no exception is raised.

Code to reproduce the issue

import tensorflow as tf
tf.range(tf.constant(4, dtype=tf.int32), dtype=tf.int64)

Other info / logs

With TensorFlow 2.1.0:

$ python -c "import tensorflow as tf; print(tf.__version__); print(tf.range(tf.constant(4, dtype=tf.int32), dtype=tf.int64))"
2.1.0
2020-01-09 16:45:39.137901: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-09 16:45:39.151651: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fa652c190b0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-01-09 16:45:39.151667: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/hartikainen/conda/envs/bae/lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 1430, in range
    limit = ops.convert_to_tensor(limit, dtype=dtype, name="limit")
  File "/Users/hartikainen/conda/envs/bae/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1290, in convert_to_tensor
    (dtype.name, value.dtype.name, value))
ValueError: Tensor conversion requested dtype int64 for Tensor with dtype int32

With TensorFlow 2.0.0:

$ python -c "import tensorflow as tf; print(tf.__version__); print(tf.range(tf.constant(4, dtype=tf.int32), dtype=tf.int64))"
2.0.0
2020-01-09 16:40:11.425955: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-09 16:40:11.439063: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f8c6ccdfd00 executing computations on platform Host. Devices:
2020-01-09 16:40:11.439079: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
tf.Tensor([0 1 2 3], shape=(4,), dtype=int64)
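A simple way to keep the 2.0.0 behavior under 2.1.0 is to cast the limit to the requested dtype before calling tf.range; this is a workaround sketch, not code from the report.

```python
import tensorflow as tf

limit = tf.constant(4, dtype=tf.int32)

# Casting the limit up front sidesteps the int32 -> int64
# conversion error raised inside tf.range in 2.1.0.
r = tf.range(tf.cast(limit, tf.int64), dtype=tf.int64)
```

This yields the same `tf.Tensor([0 1 2 3], shape=(4,), dtype=int64)` that 2.0.0 produced.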
tensorflowtensorflow
Correction in calculation of error for naive forecasting
Bug
URL(s) with the issue:
Description of issue (what needs changing): It should be mean absolute error instead of squared error for naive forecasting. (Screenshot from 2020-01-09 12:10:09.)
Submit a pull request? Yes, I'll be submitting one shortly.
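For context, the two error measures differ as follows for a naive (previous-value) forecast; the series values here are made up for illustration.

```python
import numpy as np

series = np.array([10.0, 12.0, 11.0, 15.0, 14.0])

# Naive forecast: predict each step with the previous observed value.
forecast = series[:-1]
actual = series[1:]

mae = np.mean(np.abs(actual - forecast))   # mean absolute error
mse = np.mean((actual - forecast) ** 2)    # mean squared error
```

For these values the per-step errors are [2, -1, 4, -1], giving MAE = 2.0 and MSE = 5.5, so mislabeling one as the other is not a cosmetic mistake.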
tensorflowtensorflow
RuntimeError: Encountered unresolved custom op: Enter. Node number 8 (Enter) failed to prepare.
Bug
I have converted a tf.keras model to tf.lite successfully; however, when I use it for inference I get an error. Is there anyone who can resolve it? Thanks.

Code:

interpreter = tf.lite.Interpreter(model_path="E:/object detection/efficientdet/region anchor/opt mbconv head ckpt tflites/ckpt b0 image size 768 mbconv se head 1e-5 unfreeze backbone freeze bn csv/04 0.6736 0.7484 opt.tflite")
interpreter.allocate_tensors()

Error:

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
      1 interpreter = tf.lite.Interpreter(model_content=tflite_model)
      2 interpreter.allocate_tensors()
      3 # help(tf.lite.Interpreter)

AppData/Roaming/Python/Python36/site-packages/tensorflow_core/lite/python/interpreter.py in allocate_tensors(self)
    242   def allocate_tensors(self):
    243     self._ensure_safe()
    244     return self._interpreter.AllocateTensors()

AppData/Roaming/Python/Python36/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py in AllocateTensors(self)
    104
    105   def AllocateTensors(self):
    106     return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)

RuntimeError: Encountered unresolved custom op: Enter. Node number 8 (Enter) failed to prepare.
tensorflowtensorflow
Wrapper.from_config mutates its input
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): platform independent
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.1
- Python version: 3.7

Describe the current behavior
tf.keras.layers.Wrapper.from_config modifies its config parameter, which can cause unexpected side effects in calling code (L83-L87). Specifically, the config.pop in line 86 mutates the config dict in a way that persists outside the from_config call. Elsewhere, e.g. in tf.keras.layers.Bidirectional.from_config, this is avoided by copying the config dict (L743-L745).

Describe the expected behavior
Being able to call tf.keras.layers.Wrapper.from_config(config) without config changing. I have a use case where I subclass the Wrapper class and rely on its from_config method. My workaround is to call from_config(config.copy()), but I don't think this should be required.

Code to reproduce the issue

import tensorflow as tf

class MyWrapper(tf.keras.layers.Wrapper):
    def call(self, inputs, *args, **kwargs):
        return self.layer(inputs, *args, **kwargs)

wrapper = MyWrapper(tf.keras.layers.Dense(1))
config = wrapper.get_config()
config_copy = config.copy()
assert config == config_copy

wrapper = MyWrapper.from_config(config)
new_config = wrapper.get_config()
assert new_config == config_copy
assert config == config_copy  # fails: the 'layer' key has been popped from config
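The copy workaround the reporter mentions can be sketched as follows; a shallow `dict.copy()` suffices because only the top-level `'layer'` key is popped. This is an illustration of the workaround, not a fix for the underlying mutation.

```python
import tensorflow as tf

class MyWrapper(tf.keras.layers.Wrapper):
    def call(self, inputs, *args, **kwargs):
        return self.layer(inputs, *args, **kwargs)

wrapper = MyWrapper(tf.keras.layers.Dense(1))
config = wrapper.get_config()
config_copy = config.copy()

# Hand from_config a throwaway copy so its config.pop('layer')
# cannot mutate the dict we keep around.
restored = MyWrapper.from_config(config.copy())
```

After this, `config` still equals `config_copy`, whereas passing `config` directly would have stripped its `'layer'` entry.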
tensorflowtensorflow
TensorFlowLite_LSTM_Keras_Tutorial.ipynb update
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: (please provide a link to the documentation entry, for example)

Description of issue (what needs changing): The example script on how to make LSTM layers ready for TF Lite is outdated and no longer works, because the required tf-nightly package causes issues. I would like an updated tutorial or a better alternative. Using the TFLite converter with the experimental flag set to True works with LSTM layers, but does not allow post-training quantization. As a general question, I would like to know whether that would be possible with the model that is built in the example script.
tensorflowtensorflow
Explanation regarding seed parameter
Bug
URL(s) with the issue: L307: def white_noise(time, noise_level=1, seed=None)
Description of issue (what needs changing): I think we should add an explanation of the seed parameter here, since it's quite an important one.
Clear description: Some explanation about how seed affects the generation of random numbers on each call, along with a link for reference, could be added.
Submit a pull request? Yes.
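A sketch of the kind of docstring the issue asks for, using the signature quoted above; the function body and docstring wording here are my assumptions about the tutorial helper, not the notebook's actual text.

```python
import numpy as np

def white_noise(time, noise_level=1, seed=None):
    """Gaussian noise for each time step.

    seed: int or None. Passing an int makes the draw reproducible --
    the same seed always yields the same noise sequence, which is
    useful for comparable experiments. With None, a fresh,
    unpredictable sequence is generated on every call.
    """
    rnd = np.random.RandomState(seed)
    return rnd.randn(len(time)) * noise_level

time = np.arange(100)
a = white_noise(time, seed=42)
b = white_noise(time, seed=42)  # identical to a
c = white_noise(time)           # different on every call
```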
tensorflowtensorflow
tf.keras.estimator.model_to_estimator: InvalidArgumentError while setting up XLA_GPU_JIT device number 2
Bug
I get an error similar to the linked issue. When using tf.keras.estimator.model_to_estimator in TF 1.14 and 1.15, I get the InvalidArgumentError described below. The problem doesn't occur in TF 1.13.2. I use Linux Mint with Python 3.6 and 2 GPUs (NVIDIA GTX 1080 Ti). To be complete: there are 3 NVIDIA video cards in the machine, which seems relevant if I understand the error correctly.

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>
      2     keras_model=model_f,
      3     custom_objects={'merge': merge},
      4     model_dir='data/estimator')

python3.6/site-packages/tensorflow_core/python/keras/estimator/__init__.py in model_to_estimator(keras_model, keras_model_path, custom_objects, model_dir, config, checkpoint_format)
    105       config=config,
    106       checkpoint_format=checkpoint_format,
    107       use_v2_estimator=False)

python3.6/site-packages/tensorflow_estimator/python/estimator/keras.py in model_to_estimator(keras_model, keras_model_path, custom_objects, model_dir, config, checkpoint_format, use_v2_estimator)
    574   if keras_model._is_graph_network:
    575     warm_start_path = _save_first_checkpoint(keras_model, custom_objects,
    576                                              config, save_object_ckpt)
    577   elif keras_model.built:
    578     logging.warning('You are creating an Estimator from a Keras model manually...')

python3.6/site-packages/tensorflow_estimator/python/estimator/keras.py in _save_first_checkpoint(keras_model, custom_objects, config, save_object_ckpt)
    391   # save to checkpoint
    392   with session.Session(config=config.session_config) as sess:
    393     if keras_weights:
    394       model.set_weights(keras_weights)

python3.6/site-packages/tensorflow_core/python/client/session.py in __init__(self, target, graph, config)
   1583     """...protocol buffer with configuration options for the session."""
   1585     super(Session, self).__init__(target, graph, config=config)

python3.6/site-packages/tensorflow_core/python/client/session.py in __init__(self, target, graph, config)
    698         # pylint: disable=protected-access
    699         self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
    700         # pylint: enable=protected-access

InvalidArgumentError: Invalid device ordinal value (2). Valid range is: [0, 1], while setting up XLA_GPU_JIT device number 2
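A possible workaround (my suggestion, not from the report) is to hide the extra video card from TensorFlow before it initializes, so the XLA_GPU_JIT device setup only ever sees the two training GPUs. This assumes the unused third card is CUDA ordinal 2.

```python
import os

# Must run before TensorFlow is imported: expose only the two
# GTX 1080 Ti cards (assumed here to be ordinals 0 and 1), so the
# XLA device setup never reaches the third video card.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
```

With this in place, device ordinal 2 no longer exists from the process's point of view, so the "Invalid device ordinal value (2)" path cannot be hit.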
tensorflowtensorflow
assign() got an unexpected keyword argument 'validate_shape'
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):

import tensorflow as tf
a = tf.Variable(2)
a.assign(5)
assert a.numpy() == 5
# ValueError: Shapes () and (2,) are incompatible:
a.assign([1, 2])
# TypeError: assign() got an unexpected keyword argument 'validate_shape':
a.assign([1, 2], validate_shape=False)
# ValueError: Shapes () and (2,) are incompatible:
tf.compat.v1.assign(a, [1, 2], validate_shape=False)

- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10, Linux
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.0.0, 2.1.0
- Python version: 3.7

Describe the current behavior
tf.assign has a validate_shape parameter that Variable.assign seems to be missing. In addition, the docs say: "If you want to change the shape of a variable later you have to use an assign Op with validate_shape=False." How should one change the shape of a variable?

Code to reproduce the issue
See above.
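In TF 2.x the supported way to get shape-changing assigns is to declare the variable with an unspecified shape up front, rather than passing validate_shape=False at assign time; a minimal sketch:

```python
import tensorflow as tf

# shape=tf.TensorShape(None) marks the variable's shape as unknown,
# so later assigns may change it -- the TF2 replacement for the old
# tf.assign(..., validate_shape=False) pattern.
a = tf.Variable(2.0, shape=tf.TensorShape(None))
a.assign([1.0, 2.0])   # scalar -> vector, no shape error
```

A variable created without the shape argument keeps the strict behavior shown in the report: every assign must match the initial shape.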
tensorflowtensorflow
Bug when converting to TFLite model
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04, GPU GTX 1660
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.1.0
- Python version: 3.7

Describe the current behavior
Conversion to TFLite fails.

Code to reproduce the issue

from tensorflow.python.ops import gen_math_ops
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.framework import constant_op
import tensorflow as tf
import numpy as np

class RepeatLayer(tf.keras.layers.Layer):
    def __init__(self, axis=0):
        super(RepeatLayer, self).__init__()
        self.axis = axis

    def _all_dimensions(self, x):
        if isinstance(x, ops.Tensor) and x.get_shape().ndims is not None:
            return constant_op.constant(np.arange(x.get_shape().ndims), dtype=tf.int32)
        if isinstance(x, sparse_tensor.SparseTensor) and x.dense_shape.get_shape().is_fully_defined():
            r = x.dense_shape.get_shape().dims[0].value
            return constant_op.constant(np.arange(r), dtype=tf.int32)
        return gen_math_ops._range(0, tf.rank(x), 1)

    def _tile_one_dimension(self, data, axis, multiple):
        if data.shape.ndims is not None:
            multiples = [1] * data.shape.ndims
            multiples[axis] = multiple
        else:
            ones_value = tf.ones(tf.rank(data), tf.int32)
            multiples = tf.concat([ones_value[:axis], [multiple], ones_value[axis + 1:]], axis=0)
        return tf.tile(data, multiples)

    def _repeat_with_axis(self, data, repeats, axis):
        data = tf.convert_to_tensor(data, name="data")  # [b, max_len, d]
        repeats = tf.cast(tf.convert_to_tensor(repeats, name="repeats"), tf.int32)  # [b, max_len]
        data_shape = tf.shape(data)
        max_repeat = gen_math_ops.maximum(0, gen_math_ops._max(repeats, self._all_dimensions(repeats)))
        mask = tf.sequence_mask(repeats, max_repeat)  # [b, max_len, max_value_of_repeats]
        expanded = tf.expand_dims(data, axis + 1)  # [b, max_len, 1, d]
        tiled = self._tile_one_dimension(expanded, axis + 1, max_repeat)  # [b, max_len, max_value_of_repeats, d]
        masked = tf.boolean_mask(tiled, mask)
        result_shape = tf.concat([data_shape[:axis], [-1], data_shape[axis + 1:]], axis=0)
        result = tf.reshape(masked, result_shape)
        return result

    def call(self, encoder_h, repeats):
        return self._repeat_with_axis(data=encoder_h, repeats=repeats, axis=self.axis)

repeat = RepeatLayer(axis=1)
a = tf.keras.Input(shape=(35, 384), dtype=tf.float32)
b = tf.keras.Input(shape=(35,), dtype=tf.int32)
output = repeat(a, b)
model = tf.keras.Model([a, b], outputs=output)
model.summary()

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape_1 = input_details[0]['shape']
input_shape_2 = input_details[1]['shape']
input_data_1 = np.array(np.random.random_sample(input_shape_1), dtype=np.float32)
input_data_2 = np.array(np.random.random_sample(input_shape_2), dtype=np.int32)
interpreter.set_tensor(input_details[0]['index'], input_data_1)
interpreter.set_tensor(input_details[1]['index'], input_data_2)
interpreter.invoke()

Other info / logs

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
      1 interpreter.invoke()

anaconda3/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py in invoke(self)
    491
    492     self._ensure_safe()
    493     self._interpreter.Invoke()
    494
    495   def reset_all_variables(self):

anaconda3/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py in Invoke(self)
    111
    112   def Invoke(self):
    113     return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_Invoke(self)
    114
    115   def InputIndices(self):

RuntimeError: tensorflow/lite/kernels/range.cc:39 ((start >= limit && delta < 0) || (start <= limit && delta > 0)) was not true. Node number 6 (RANGE) failed to invoke.
tensorflowtensorflow
InvalidArgumentError if np.ndarray is registered as a Sequence type
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): CentOS Linux
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): pip install
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38 2.0.0
- Python version: 3.6.9
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory: CPU

Describe the current behavior
For reference: NumPy is planning to register numpy.ndarray as a Sequence. The sample ResNet50 code runs fine, but if ndarray is registered as a Sequence, then TF2 throws an InvalidArgumentError.

Code to reproduce the issue

from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np
import typing

typing.Sequence.register(np.ndarray)

model = ResNet50(weights='imagenet')
img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
# Predicted: [(u'n02504013', u'Indian_elephant', 0.82658225), (u'n01871265', u'tusker', 0.1122357), (u'n02504458', u'African_elephant', 0.061040461)]

The two extra lines added to the sample ResNet50 code are:

import typing
typing.Sequence.register(np.ndarray)

The error is:

2020-01-07 13:48:16.421816: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: The first dimension of paddings must be the rank of inputs[4, 2] [[{{node conv1_pad/Pad}}]]
Traceback (most recent call last):
  File "test_d3m_import.py", line 16, in <module>
    preds = model.predict(x)
  File "/data/dsbox/kyao/miniconda3/envs/dsbox-eval-2019-winter/lib/python3.6/site-packages/keras/engine/training.py", line 1462, in predict
    callbacks=callbacks)
  File ".../keras/engine/training_arrays.py", line 324, in predict_loop
    batch_outs = f(ins_batch)
  File ".../tensorflow_core/python/keras/backend.py", line 3740, in __call__
    outputs = self._graph_fn(*converted_inputs)
  File ".../tensorflow_core/python/eager/function.py", line 1081, in __call__
    return self._call_impl(args, kwargs)
  File ".../tensorflow_core/python/eager/function.py", line 1121, in _call_impl
    return self._call_flat(args, self.captured_inputs, cancellation_manager)
  File ".../tensorflow_core/python/eager/function.py", line 1224, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File ".../tensorflow_core/python/eager/function.py", line 511, in call
    ctx=ctx)
  File ".../tensorflow_core/python/eager/execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: The first dimension of paddings must be the rank of inputs[4, 2] [[node conv1_pad/Pad (defined at .../tensorflow_core/python/framework/ops.py:1751)]] [Op:__inference_keras_scratch_graph_10370]

Function call stack: keras_scratch_graph
tensorflow/tensorflow
Malformed "trainable attribute note" in batch normalization documentation
Bug
URL(s) with the issue: [batch normalization documentation]

Description of issue (what needs changing):
After the references list there is a stray malformed tag, "trainable attribute note". I suspect that this is supposed to resolve to the note that moving_mean and moving_variance are placed in UPDATE_OPS and need to be executed alongside the training op. This note is present in the tf.layers docs, but without knowing what this tag refers to I can't really say for sure. The tag is present in only three places in the TensorFlow codebase, and shows up malformed on the website in each case.
tensorflow/tensorflow
tf.strings.lower and tf.strings.upper have TODO docstrings
Bug
URL(s) with the issue: [tf.strings.lower], [tf.strings.upper]

Description of issue (what needs changing):
Clear description: the first line of the docstring for these functions is the auto-generated "TODO: add doc" instead of an actual summary.

Submit a pull request? Yes, I'll be submitting one shortly.
tensorflow/tensorflow
TPU support in TF 2.1 release candidate 2
Bug
I'm trying to run a prototype of code for training a model on TPU with bfloat16 precision. I'm doing it in a Google Colab notebook. To do it, I installed tensorflow 2.1.0rc2 and ran the following code:

    import tensorflow as tf
    import os
    from tensorflow.keras.mixed_precision import experimental as mixed_precision

    def create_model():
        model = tf.keras.models.Sequential()
        model.add(tf.keras.layers.Conv2D(128, (3, 3), input_shape=(32, 32, 3)))
        model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
        model.add(tf.keras.layers.Activation('elu'))
        model.add(tf.keras.layers.Flatten())
        model.add(tf.keras.layers.Dense(10))
        model.add(tf.keras.layers.Activation('softmax', dtype='float32'))
        return model

    # this is for bfloat16 precision
    policy = mixed_precision.Policy('mixed_float16')
    mixed_precision.set_policy(policy)

    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
    tf.config.experimental_connect_to_host(resolver.master())
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.experimental.TPUStrategy(resolver)

    with strategy.scope():
        model = create_model()
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                      loss=tf.keras.losses.sparse_categorical_crossentropy,
                      metrics=[tf.keras.metrics.sparse_categorical_accuracy])

After running this code I receive an error:

    NotFoundError                             Traceback (most recent call last)
    <ipython-input> in <module>()
         18 resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
         19 tf.config.experimental_connect_to_host(resolver.master())
         20 tf.tpu.experimental.initialize_tpu_system(resolver)
         21 strategy = tf.distribute.experimental.TPUStrategy(resolver)
         22

    3 frames
    /usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

    NotFoundError: '__inference__tpu_init_fn_14' is neither a type of a primitive operation nor a name of a function registered in the binary running on n-88be52b9-w-0. Make sure the operation or function is registered in the binary running in this process.

Here is a notebook with the code. This code works on TF 2.0, but only without bfloat16 support (see the block with the comment in the code). What can I do if I need a TPU on TF2 with bfloat16 support?
tensorflow/tensorflow
How many items does tf.gradients return?
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: please provide a link to the documentation entry, for example: [tf.gradients]

Description of issue (what needs changing):
It's unclear how many list items are returned from tf.gradients. The second paragraph states that it returns "a list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys". The Returns section says "a list of sum(dy/dx) for each x in xs". So which one is it: sum(dy/dx) for x in xs, or sum(dy/dx) for y in ys?

Besides the inconsistency, the summation notation in this documentation is ambiguous. When it says sum(dy/dx), does that mean dy/dx is summed over the ys axis, with one element produced for each x in xs, or the other way around? A clarifying example would help, as would a statement along the lines of "returns a list with as many elements as xs" (or ys, I don't know which).

Clear description: for example, why should someone use this method? How is it useful?
Correct links: are the links to the source code correct?
Parameters defined: are all parameters defined and formatted correctly?
Returns defined: are return values defined?
Raises listed and defined: are the errors defined?
Usage example: is there a usage example? See the API guide on how to write testable usage examples.
Request visuals, if applicable: are there currently visuals? If not, will they clarify the content?
Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.
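For what it's worth, the reading consistent with the behavior (as far as I can tell) is the first one: the result has len(xs) entries, and entry i is the derivative of the sum of all ys with respect to xs[i]. A plain-Python finite-difference sketch of that semantics (hypothetical helper name, no TensorFlow involved):

```python
def grad_sum_over_ys(ys_fns, xs, eps=1e-6):
    """For each x, numerically estimate d(sum of all ys)/dx."""
    base = sum(f(*xs) for f in ys_fns)
    grads = []
    for i in range(len(xs)):
        bumped = list(xs)
        bumped[i] += eps
        bump = sum(f(*bumped) for f in ys_fns)
        grads.append((bump - base) / eps)
    return grads

y1 = lambda a, b: a * a        # dy1/da = 2a, dy1/db = 0
y2 = lambda a, b: 3 * a + b    # dy2/da = 3,  dy2/db = 1

g = grad_sum_over_ys([y1, y2], [2.0, 5.0])
# Two xs -> two entries, each summed over both ys: approximately [2*2 + 3, 0 + 1] = [7, 1]
print(g)
```

That is, the sum runs over ys, and the length of the returned list tracks xs, which is what the phrase "length len(xs) where each tensor is the sum(dy/dx) for y in ys" is trying to say.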
tensorflow/tensorflow
Typo error in l07c01_saving_and_loading_models.ipynb
Bug
URL(s) with the issue: [notebook, line 579]: "The differnece in output should be zero"

Description of issue (what needs changing): "differnece" should be "difference".

Submit a pull request? Yes.
tensorflow/tensorflow
lite/micro/kernels/cmsis-nn kernels fail to compile
Bug
TensorFlow Micro system information:
- Host OS platform and distribution (e.g., Linux Ubuntu 16.04): 18.04
- TensorFlow installed from (source or binary): source
- TensorFlow version (commit SHA if source): 2274eacd794d7e501849567811637b7921e52820
- Target platform (e.g., Arm Mbed OS, Arduino Nano 33, etc.): Arm CMSIS-NN

Describe the problem:
Issue reported by ON Device AI Co., Ltd. on the TensorFlow SIG Micro Gitter: the lite/micro examples using the CMSIS-NN kernels no longer compile. Root cause: the lite kernel utils refactoring PR (#27019) did not refactor the CMSIS-NN-specific add and mul kernels. I will submit a PR with the missing changes.

Please provide the exact sequence of commands/steps when you ran into the problem:

make -f tensorflow/lite/micro/tools/make/Makefile TARGET=sparkfun_edge TAGS=cmsis-nn micro_speech_bin

arm-none-eabi-g++ -O3 -DNDEBUG -std=c++11 -g -DTF_LITE_STATIC_MEMORY -fno-rtti -DPART_APOLLO3 -DAM_PACKAGE_BGA -DAM_PART_APOLLO3 -DGEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK -DTF_LITE_STATIC_MEMORY -DNDEBUG -DTF_LITE_MCU_DEBUG_LOG -D__FPU_PRESENT=1 -DARM_MATH_CM4 -fno-rtti -fmessage-length=0 -fno-exceptions -fno-unwind-tables -fno-builtin -ffunction-sections -fdata-sections -funsigned-char -MMD -mcpu=cortex-m4 -mthumb -mfpu=fpv4-sp-d16 -mfloat-abi=hard -std=gnu++11 -Wvla -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-sign-compare -fno-delete-null-pointer-checks -fomit-frame-pointer -fpermissive -nostdlib -ggdb -O3 -DARM_MATH_DSP -DARM_MATH_LOOPUNROLL -I. -Itensorflow/lite/micro/tools/make/downloads -Itensorflow/lite/micro/tools/make/downloads/gemmlowp -Itensorflow/lite/micro/tools/make/downloads/flatbuffers/include -isystem tensorflow/lite/micro/tools/make/downloads/cmsis/CMSIS/Core/Include -isystem tensorflow/lite/micro/tools/make/downloads/cmsis/CMSIS/DSP/Include -Itensorflow/lite/micro/tools/make/downloads/cmsis_ext -Itensorflow/lite/micro/tools/make/downloads/gcc_embedded/arm-none-eabi -Itensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.0.0/mcu/apollo3 -Itensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.0.0/CMSIS/AmbiqMicro/Include -Itensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.0.0/boards/SparkFun_TensorFlow_Apollo3_BSP/bsp -Itensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.0.0/devices -Itensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.0.0/utils -Itensorflow/lite/micro/tools/make/downloads/cmsis/CMSIS/Core/Include -Itensorflow/lite/micro/tools/make/downloads/cmsis/CMSIS/NN/Include -Itensorflow/lite/micro/tools/make/downloads/cmsis/CMSIS/DSP/Include -Itensorflow/lite/micro/tools/make/downloads/kissfft -Itensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.0.0/boards/SparkFun_TensorFlow_Apollo3_BSP/examples/example1_edge_test/src/tf_accelerometer -Itensorflow/lite/micro/tools/make/downloads/AmbiqSuite-Rel2.0.0/boards/SparkFun_TensorFlow_Apollo3_BSP/examples/example1_edge_test/src/tf_adc -c tensorflow/lite/micro/kernels/cmsis-nn/add.cc -o tensorflow/lite/micro/tools/make/gen/sparkfun_edge_cortex-m4/obj/tensorflow/lite/micro/kernels/cmsis-nn/add.o

tensorflow/lite/micro/kernels/cmsis-nn/add.cc: In function 'TfLiteStatus tflite::ops::micro::add::CalculateOpData(TfLiteContext*, TfLiteAddParams*, const TfLiteTensor*, const TfLiteTensor*, TfLiteTensor*, tflite::ops::micro::add::OpData*)':
tensorflow/lite/micro/kernels/cmsis-nn/add.cc:89:7: error: 'CalculateActivationRangeUint8' was not declared in this scope
       CalculateActivationRangeUint8(params->activation, output, ...)
tensorflow/lite/micro/kernels/cmsis-nn/add.cc:89:7: note: suggested alternative: 'CalculateActivationRange'
       CalculateActivationRangeUint8(params->activation, output, ...)
       CalculateActivationRange
tensorflow/lite/micro/kernels/cmsis-nn/add.cc:93:7: error: 'CalculateActivationRangeInt8' was not declared in this scope
       CalculateActivationRangeInt8(params->activation, output, ...)
tensorflow/lite/micro/kernels/cmsis-nn/add.cc:93:7: note: suggested alternative: 'CalculateActivationRange'
       CalculateActivationRangeInt8(params->activation, output, ...)
       CalculateActivationRange
tensorflow/tensorflow
XLA/TPU: "It should not be possible to run out of vmem, please file a bug against XLA"
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below): v1.12.1-21643-g03f1214 2.1.0-dev20200105
- Python version:
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior:
I modified this script, and got:

2020-01-06 08:30:26.371423: W tensorflow/core/distributed_runtime/eager/remote_tensor_handle_data.cc:75] Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph got an error: {{function_node __inference_tpu_function_106662}} Compilation failure: Ran out of memory in memory space vmem. It should not be possible to run out of vmem, please file a bug against XLA.

Largest program allocations in vmem:

  XLA label: %fusion.5840 = f32[512,1024]{1,0:T(8,128)} fusion(s32[512]{0:T(512)}, f32[512,1024]{1,0:T(8,128)}, f32[]{:T(256)}, f32[]{:T(256)}), kind=kCustom, calls=%fused_computation.5742
  Allocation type: scoped

  (the same allocation is reported three times)

  TPU compilation failed
  [[{{node tpu_compile_succeeded_assert/_7693574632605057830/_9}}]]

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Traceback (most recent call last):
  File "train_eval.py", line 482, in <module>
    main()
  File "train_eval.py", line 458, in main
    global_step, tr_loss = train(args, model_class, tokenizer, config, strategy)
  File "train_eval.py", line 129, in train
    running_loss = smooth * running_loss + (1 - smooth) * float(loss)
  File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 867, in __float__
    return float(self.numpy())
  File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 918, in numpy
    six.raise_from(core._status_to_exception(e.code, e.message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.ResourceExhaustedError: {{function_node __inference_tpu_function_106662}} Compilation failure: Ran out of memory in memory space vmem. It should not be possible to run out of vmem, please file a bug against XLA. (followed by the same three vmem allocation reports, the TPU compilation failure and tpu_compile_succeeded_assert node, and the report_tensor_allocations_upon_oom hint as above)

I debugged this error a bit and found that using a custom optimizer causes the error, that is:

    optimizer = optimization.create_optimizer(args.learning_rate, num_steps_per_epoch * args.num_train_epochs, warmup_steps)

With a default optimizer the program runs fine:

    optimizer = tf.keras.optimizers.Adam(learning_rate=args.learning_rate)

Describe the expected behavior:
Custom optimizers should work, and the error message should be more specific.

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem):
I was not able to create a minimal example to reproduce this, but the two lines above make the difference between a failed and a successful compilation.

Other info / logs:
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
Census tutorial documentation refers to Python 2, but code uses Python 3
Bug
URL(s) with the issue: [census tutorial notebook, "Python check, imports and globals" cell]

Description of issue (what needs changing):
The text says "First we'll make sure that we're using Python 2, and then go ahead and install and import the stuff we need", but the code below indicates we need to use Python 3.

Clear description: the text states "First we'll make sure that we're using Python 2, and then go ahead and install and import the stuff we need". The code example says:

    import sys
    # Confirm that we're using Python 3
    assert sys.version_info.major is 3, 'Oops, not running Python 3. Use Runtime > Change runtime type'
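So the documentation text, not the code, needs to change. For reference, a sketch of the version guard the notebook performs (note that `==` is the safe comparison here; `is 3` only happens to work because of CPython's small-integer caching):

```python
import sys

# Confirm that we're running Python 3, as the notebook's code cell intends.
assert sys.version_info.major == 3, 'Oops, not running Python 3. Use Runtime > Change runtime type'
print('Running Python %d.%d' % sys.version_info[:2])
```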
tensorflow/tensorflow
SparkFun TensorFlow codelab: update code
Bug
The codelab still links to the experimental folder for the Makefile, which is incorrect.

Visuals: [image]

Update the codelab at this link (step 3). The code should read:

    make -f tensorflow/lite/micro/tools/make/Makefile TARGET=sparkfun_edge micro_speech_bin

URL(s) with the issue: [codelab, step 3]
tensorflow/tensorflow
The "Report a mistake" link does not work
Bug
URL(s) with the issue: please provide a link to the documentation entry, for example: [page, step 3]

Description of issue (what needs changing):
Clear description: the "Report a mistake" link at the bottom left does not work and goes to a GitHub 404 instead.

[image]

Currently it links to: [broken URL]
tensorflow/tensorflow
tf.linalg.expm is incompatible with vectorized_map
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS 10.15.2
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.1.0-dev20191221
- Python version: 3.7

Describe the current behavior:
Vectorizing tf.linalg.expm throws the error: UnrecognizedFlagError: Unknown command line flag 'f'

Describe the expected behavior:
No error.

Code to reproduce the issue:

    tf.vectorized_map(lambda x: tf.linalg.expm(x), tf.reshape(tf.range(8.0), (2, 2, 2)))

Other info / logs:
This may not be related specifically to expm, as it occurs with other functions as well.
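For context on what the op computes per 2x2 slice: expm(A) is the matrix power series sum over k of A^k / k!. A dependency-free sketch over nested lists (illustration only, not a workaround for the bug; tf.linalg.expm itself uses a scaling-and-squaring Pade method rather than a raw truncated series):

```python
import math

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def expm(a, terms=30):
    """Truncated Taylor series exp(A) = I + A + A^2/2! + ... for a small square matrix."""
    n = len(a)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity, the k=0 term
    power = [row[:] for row in result]                              # running A^k
    for k in range(1, terms):
        power = mat_mul(power, a)
        result = mat_add(result, [[x / math.factorial(k) for x in row] for row in power])
    return result

print(expm([[0.0, 0.0], [0.0, 0.0]]))  # exp of the zero matrix is the identity
print(expm([[1.0, 0.0], [0.0, 2.0]]))  # exp of diag(1, 2) is diag(e, e^2)
```

The reported failure is purely in the vectorized_map/pfor machinery, not in this math; a single-slice tf.linalg.expm call works.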
tensorflow/tensorflow
A bug with inaccurate metric (accuracy) when using a closure loss
Bug
I just want to report a bug with closure losses in TF that gives us inaccurate accuracy results. Let me start by providing a simple code sample that successfully builds an image classifier on the MNIST dataset. In (1) I train the model for 1 epoch and I get a very decent result:

    60000/60000 [==============================] - 10s 168us/sample - loss: 0.2097 - acc: 0.9362 - val_loss: 0.0459 - val_acc: 0.9850

However, when I change the loss to use a closure loss as in (2), I get a different result with regard to the accuracy:

    60000/60000 [==============================] - 10s 165us/sample - loss: 0.1980 - acc: 0.0989 - val_loss: 0.0433 - val_acc: 0.0995

Indeed, I found the reported accuracies (acc and val_acc) are very inaccurate, which I believe is a bug. You can check the prediction outputs for some examples, which are highly accurate; meanwhile the loss is computed correctly, I think.

Code for (1):

    from __future__ import print_function
    import tensorflow as tf
    import tensorflow.keras as keras
    from tensorflow.keras.datasets import mnist
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Dropout, Flatten
    from tensorflow.keras.layers import Conv2D, MaxPooling2D
    from tensorflow.keras import backend as K
    import numpy as np

    num_classes = 10
    epochs = 1

    # input image dimensions
    img_rows, img_cols = 28, 28

    # the data, split between train and test sets
    (x_train, y_train), (x_test, y_test) = mnist.load_data()

    if K.image_data_format() == 'channels_first':
        x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
        x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
        input_shape = (1, img_rows, img_cols)
    else:
        x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
        x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
        input_shape = (img_rows, img_cols, 1)

    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    x_train /= 255
    x_test /= 255

    def create_model():
        input_sequence = tf.keras.layers.Input(dtype='float32', shape=input_shape, name='input_sequence')
        conv1 = Conv2D(32, kernel_size=(3, 3), activation='relu')(input_sequence)
        conv2d = Conv2D(64, (3, 3), activation='relu')(conv1)
        conv2d = Dropout(0.25)(MaxPooling2D(pool_size=(2, 2))(conv2d))
        conv2d = Flatten()(conv2d)
        conv2d = Dropout(0.5)(Dense(128, activation='relu')(conv2d))
        output = Dense(num_classes, activation='softmax')(conv2d)
        model = tf.keras.models.Model(inputs=input_sequence, outputs=output)
        model.compile(loss=tf.keras.losses.sparse_categorical_crossentropy,
                      optimizer=keras.optimizers.Adam(),
                      metrics=['accuracy'])
        return model

    model = create_model()
    batch_size = 64
    model.fit(np.array(x_train), np.array(y_train), verbose=1,
              batch_size=batch_size, epochs=epochs,
              validation_data=(x_test, np.array(y_test)))
    a = model.predict(x_test)
    for x, y in zip(a[:20], y_test[:20]):
        print(x, np.argmax(x), y)

Code for (2):

    from __future__ import print_function
    import tensorflow as tf
    import tensorflow.keras as keras
    from tensorflow.keras.datasets import mnist
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Dropout, Flatten
    from tensorflow.keras.layers import Conv2D, MaxPooling2D
    from tensorflow.keras import backend as K
    import numpy as np

    num_classes = 10
    epochs = 1

    # input image dimensions
    img_rows, img_cols = 28, 28

    # the data, split between train and test sets
    (x_train, y_train), (x_test, y_test) = mnist.load_data()

    if K.image_data_format() == 'channels_first':
        x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
        x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
        input_shape = (1, img_rows, img_cols)
    else:
        x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
        x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
        input_shape = (img_rows, img_cols, 1)

    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    x_train /= 255
    x_test /= 255

    def create_model():
        input_sequence = tf.keras.layers.Input(dtype='float32', shape=input_shape, name='input_sequence')
        conv1 = Conv2D(32, kernel_size=(3, 3), activation='relu')(input_sequence)
        conv2d = Conv2D(64, (3, 3), activation='relu')(conv1)
        conv2d = Dropout(0.25)(MaxPooling2D(pool_size=(2, 2))(conv2d))
        conv2d = Flatten()(conv2d)
        conv2d = Dropout(0.5)(Dense(128, activation='relu')(conv2d))
        output = Dense(num_classes, activation='softmax')(conv2d)
        model = tf.keras.models.Model(inputs=input_sequence, outputs=output)

        def custom_loss():
            def loss(y_true, y_pred):
                return tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
            return loss

        model.compile(loss=custom_loss(),
                      optimizer=keras.optimizers.Adam(),
                      metrics=['accuracy'])
        return model

    model = create_model()
    batch_size = 64
    model.fit(np.array(x_train), np.array(y_train), verbose=1,
              batch_size=batch_size, epochs=epochs,
              validation_data=(x_test, np.array(y_test)))
    a = model.predict(x_test)
    for x, y in zip(a[:20], y_test[:20]):
        print(x, np.argmax(x), y)
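My reading of what happens here (inferred behavior, not a quote from the Keras source): with metrics=['accuracy'], Keras picks the concrete metric by inspecting the loss. A recognized sparse_categorical_crossentropy maps the string to sparse_categorical_accuracy; wrapping it in a closure hides that, and the fallback compares integer labels against raw probabilities, which would explain why the loss stays correct while accuracy collapses to roughly 0.1. A stdlib-only sketch of the two metrics on a toy batch:

```python
# Toy predictions (per-class probabilities) and integer labels.
preds = [[0.05, 0.9, 0.05], [0.8, 0.1, 0.1], [0.2, 0.1, 0.7]]
labels = [1, 0, 2]

def sparse_categorical_accuracy(y_true, y_pred):
    # compares argmax of the probabilities against the integer label
    hits = [max(range(len(p)), key=p.__getitem__) == t for t, p in zip(y_true, y_pred)]
    return sum(hits) / len(hits)

def binary_style_accuracy(y_true, y_pred):
    # crude stand-in for the fallback: rounds a raw probability and compares
    # it directly to the integer label
    hits = [round(p[0]) == t for t, p in zip(y_true, y_pred)]
    return sum(hits) / len(hits)

print(sparse_categorical_accuracy(labels, preds))  # 1.0: the model is perfect
print(binary_style_accuracy(labels, preds))        # 0.0: wrong metric chosen
```

If this reading is right, passing the metric explicitly (metrics=['sparse_categorical_accuracy'] or the metric object) should restore the correct numbers even with the closure loss.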
tensorflow/tensorflow
Typo error in l05c03_exercise_flowers_with_data_augmentation.ipynb
Bug
URL(s) with the issue: [notebook]

Description of issue (what needs changing): in the directory structure it should be "daisy" instead of "diasy".

[Screenshot from 2020-01-03 18:39:11]

Submit a pull request? Yes.
tensorflow/tensorflow
Line break missing in tf.keras.Model.fit documentation
Bug
URL(s) with the issue: [tf.keras.Model.fit documentation]

Description of issue (what needs changing): in the validation_data part of Model.fit, the third alternative reads "dataset For the first two cases, batch_size must be provided. For the last case, validation_steps must be provided." I feel a line break should be inserted after "dataset".
tensorflow/tensorflow
TFLite experimental_new_converter errors with tf.keras Bidirectional wrapper or attribute go_backwards=True
Bug
system information os platform and distribution linux ubuntu 18 04 tensorflow instal from source or binary tensorflow version or github sha if from source use to build model 2 0 0 use to run converter 2 1 0 dev20191227 command use to run the converter or code if you re use the python api import tensorflow as tf import os os environ cuda visible device 1 print tf version model tf keras model load model home amish pycharmproject myproject script temp h5 model summary converter tf lite tfliteconverter from keras model model converter target spec support op tf lite opsset tflite builtin tf lite opsset select tf op converter experimental new converter true add this line tflite model converter convert the output from the converter invocation warn tensorflow fall back to tensorflow client its recommend to install the cloud tpu client directly with pip install cloud tpu client 2 1 0 dev20191227 2020 01 03 02 45 22 187710 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcuda so 1 2020 01 03 02 45 22 193838 e tensorflow stream executor cuda cuda driver cc 313 fail call to cuinit cuda error no device no cuda capable device be detect 2020 01 03 02 45 22 193908 I tensorflow stream executor cuda cuda diagnostic cc 169 retrieve cuda diagnostic information for host enigma 2020 01 03 02 45 22 193928 I tensorflow stream executor cuda cuda diagnostic cc 176 hostname enigma 2020 01 03 02 45 22 207575 I tensorflow stream executor cuda cuda diagnostic cc 200 libcuda report version be 430 50 0 2020 01 03 02 45 22 207706 I tensorflow stream executor cuda cuda diagnostic cc 204 kernel report version be 430 50 0 2020 01 03 02 45 22 207730 I tensorflow stream executor cuda cuda diagnostic cc 310 kernel version seem to match dso 430 50 0 2020 01 03 02 45 22 208264 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2020 01 03 02 45 22 384583 I tensorflow 
core platform profile util cpu util cc 101 cpu frequency 2496000000 hz 2020 01 03 02 45 22 391769 I tensorflow compiler xla service service cc 168 xla service 0x55cbc700ede0 initialize for platform host this do not guarantee that xla will be use device 2020 01 03 02 45 22 391868 I tensorflow compiler xla service service cc 176 streamexecutor device 0 host default version model sequential 14 layer type output shape param mask 14 mask none 50 36 0 bidirectional 6 bidirection none 50 256 168960 dropout 16 dropout none 50 256 0 dense 16 dense none 50 10 2570 total param 171 530 trainable param 171 530 non trainable param 0 2020 01 03 02 45 24 787867 I tensorflow core grappler device cc 55 number of eligible gpu core count 8 compute capability 0 0 0 2020 01 03 02 45 24 788386 I tensorflow core grappler cluster single machine cc 356 start new session 2020 01 03 02 45 25 204179 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item graph to optimize 2020 01 03 02 45 25 204249 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer graph size after 255 node 0 310 edge 0 time 121 027m 2020 01 03 02 45 25 204268 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer graph size after 255 node 0 310 edge 0 time 9 261ms 2020 01 03 02 45 25 204281 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item sequential 14 bidirectional 6 forward lstm 56 while body 3164 2020 01 03 02 45 25 204300 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer function optimizer do nothing time 0 004ms 2020 01 03 02 45 25 204316 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer function optimizer do nothing time 0 001ms 2020 01 03 02 45 25 204332 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item sequential 14 bidirectional 6 forward lstm 56 while cond 3163 2020 01 03 02 45 
25 204345 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer function optimizer do nothing time 0 002ms 2020 01 03 02 45 25 204360 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer function optimizer do nothing time 0 001ms 2020 01 03 02 45 25 204376 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item sequential 14 bidirectional 6 backward lstm 56 while body 3360 2020 01 03 02 45 25 204391 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer function optimizer do nothing time 0 002ms 2020 01 03 02 45 25 204406 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer function optimizer do nothing time 0 001ms 2020 01 03 02 45 25 204422 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item sequential 14 bidirectional 6 backward lstm 56 while cond 3359 2020 01 03 02 45 25 204440 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer function optimizer do nothing time 0 002ms 2020 01 03 02 45 25 204460 I tensorflow core grappler optimizer meta optimizer cc 817 function optimizer function optimizer do nothing time 0ms 2020 01 03 02 45 25 373374 I tensorflow core grappler device cc 55 number of eligible gpu core count 8 compute capability 0 0 0 2020 01 03 02 45 25 373537 I tensorflow core grappler cluster single machine cc 356 start new session 2020 01 03 02 45 25 447623 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item graph to optimize 2020 01 03 02 45 25 447671 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 177 node 78 225 edge 85 time 50 718m 2020 01 03 02 45 25 447678 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 177 node 0 225 edge 0 time 3 89ms 2020 01 03 02 45 25 447702 I tensorflow core grappler optimizer meta 
optimizer cc 815 optimization result for grappler item sequential 14 bidirectional 6 backward lstm 56 while body 3360 frozen 2020 01 03 02 45 25 447707 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 71 node 1 96 edge 0 time 2 225ms 2020 01 03 02 45 25 447730 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 71 node 0 96 edge 0 time 1 116ms 2020 01 03 02 45 25 447752 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item sequential 14 bidirectional 6 forward lstm 56 while body 3164 frozen 2020 01 03 02 45 25 447757 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 71 node 1 96 edge 0 time 2 118m 2020 01 03 02 45 25 447761 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 71 node 0 96 edge 0 time 1 179ms 2020 01 03 02 45 25 447766 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item sequential 14 bidirectional 6 backward lstm 56 while cond 3359 frozen 2020 01 03 02 45 25 447771 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 17 node 0 4 edge 0 time 0 339ms 2020 01 03 02 45 25 447795 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 17 node 0 4 edge 0 time 0 214ms 2020 01 03 02 45 25 447800 I tensorflow core grappler optimizer meta optimizer cc 815 optimization result for grappler item sequential 14 bidirectional 6 forward lstm 56 while cond 3163 freeze 2020 01 03 02 45 25 447817 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 17 node 0 4 edge 0 time 0 311ms 2020 01 03 02 45 25 447822 I tensorflow core grappler optimizer meta optimizer cc 817 constant folding graph size after 17 node 0 4 edge 0 time 0 213ms traceback most recent call last file convert py line 15 in tflite model 
    converter.convert()
  File "/home/amish/anaconda3/lib/python3.7/site-packages/tensorflow_core/lite/python/lite.py", line 490, in convert
    **converter_kwargs)
  File "/home/amish/anaconda3/lib/python3.7/site-packages/tensorflow_core/lite/python/convert.py", line 476, in toco_convert_impl
    enable_mlir_converter=enable_mlir_converter)
  File "/home/amish/anaconda3/lib/python3.7/site-packages/tensorflow_core/lite/python/convert.py", line 215, in toco_convert_protos
    raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: See console for info.
WARNING: Falling back to TensorFlow client; its recommended to install the cloud tpu client directly with pip install cloud-tpu-client.
2020-01-03 02:45:27.257206: W tensorflow/compiler/mlir/lite/python/graphdef_to_tfl_flatbuffer.cc:108] Ignored output_format.
2020-01-03 02:45:27.257267: W tensorflow/compiler/mlir/lite/python/graphdef_to_tfl_flatbuffer.cc:114] Ignored drop_control_dependency.
2020-01-03 02:45:27.478616: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-03 02:45:27.500564: I tensorflow/core/platform/profile_utils/cpu_utils.cc:101] CPU Frequency: 2496000000 Hz
2020-01-03 02:45:27.500991: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x555ae8caec30 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-01-03 02:45:27.501033: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-01-03 02:45:27.502780: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-01-03 02:45:27.505700: E tensorflow/stream_executor/cuda/cuda_driver.cc:313] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2020-01-03 02:45:27.505725: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: enigma
2020-01-03 02:45:27.505732: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: enigma
2020-01-03 02:45:27.505775: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 430.50.0
2020-01-03 02:45:27.505796: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 430.50.0
2020-01-03 02:45:27.505801: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 430.50.0
loc(callsite("sequential_14/bidirectional_6/backward_lstm_56/ReverseV2_1" at callsite("/home/amish/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py":853:0 at callsite("/home/amish/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py":947:0 at callsite("/home/amish/anaconda3/lib/python3.7/site-packages/tensorflow_core/lite/python/lite.py":409:0 at "convert.py":9:0))))):
error: 'tfl.reverse_v2' op operand #0 must be tensor of 32-bit float or 16-bit integer or 32-bit integer or 64-bit integer values, but got 'tensor<50x1x1xi1>'
Traceback (most recent call last):
  File "/home/amish/anaconda3/bin/toco_from_protos", line 8, in <module>
    sys.exit(main())
  File "/home/amish/anaconda3/lib/python3.7/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 93, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/amish/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/amish/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/amish/anaconda3/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/amish/anaconda3/lib/python3.7/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 56, in execute
    enable_mlir_converter)
Exception: /home/amish/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py:853:9: error: 'tfl.reverse_v2' op operand #0 must be tensor of 32-bit float or 16-bit integer or 32-bit integer or 64-bit integer values, but got 'tensor<50x1x1xi1>'
    self._initialize(args, kwargs, add_initializers_to=initializers)
/home/amish/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py:947:5: note: called from
    concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
/home/amish/anaconda3/lib/python3.7/site-packages/tensorflow_core/lite/python/lite.py:409:5: note: called from
    concrete_func = func.get_concrete_function()
convert.py:9:1: note: called from
    converter = tf.lite.TFLiteConverter.from_keras_model(model)

This issue is caused by the Bidirectional wrapper; the same error also occurs when go_backwards=True is set on the LSTM layer. Please suggest a workaround, if any, so that I can fix this temporarily.
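For anyone trying to reproduce this, here is a minimal sketch; it is not the reporter's exact code. The layer sizes and the (50, 8) input shape are made up for illustration (the reported 'tensor<50x1x1xi1>' suggests a sequence length of 50). On affected TF versions the Bidirectional wrapper makes the backward LSTM emit a ReverseV2 on a boolean tensor, which the TFLite converter rejects; on fixed versions the conversion simply succeeds.

```python
import tensorflow as tf

# Hypothetical minimal model: a Bidirectional LSTM followed by a Dense
# head. The backward direction of the wrapped LSTM introduces the
# ReverseV2 op that the TFLite converter complained about.
model = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(16), input_shape=(50, 8)),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
try:
    tflite_model = converter.convert()
    print("converted OK (%d bytes)" % len(tflite_model))
except Exception as exc:  # ConverterError on affected versions
    print("conversion failed:", type(exc).__name__)
```

As the report notes, the same error appears with a plain LSTM when go_backwards=True, so on affected versions a forward-only LSTM (dropping the Bidirectional wrapper) is one way to sidestep the op, at the cost of losing the backward pass.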
tensorflow/tensorflow
Deprecation warnings just from trying to see packages in IPython
Bug
Please make sure that this is a bug as per our GitHub policy; we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux RedHat 7.6
- TensorFlow installed from (source or binary): binary, from conda install keras tensorflow
- TensorFlow version (use command below): 1.15.0
- Python version: 3.7.5
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: using the CPU version, so N/A

Describe the current behavior
While I was in IPython, I did a standard import tensorflow and then pressed Tab to see what packages were there. After a second I was given an immediate deluge of warning messages, before I had even pressed Enter or tried to import anything. After the crazy deluge I got prompted to press Enter seven times for something, after which it finally gave me my IPython line back and I was able to run away from whatever terrible thing I had just done. Funnily enough, if I'm in the same IPython session and try to do this again, I get no warnings and I can browse package names like normal. If I restart the session, I have to go through it all over again.

Describe the expected behavior
I expected none of that to happen. The only thing I expected to happen was for a list of top-level modules in TensorFlow to appear, like what happens for every other library. I hadn't even imported anything.

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem)
import tensorflow, then press Tab in an IPython terminal.

Other info / logs
If it helps, I'm using an Anaconda environment. I know this isn't the latest TensorFlow, but I'm using the version that came with my Keras install. Here's the full dump it gives me; it's a doozy. Every warning has the same form:

WARNING:tensorflow:From /path/to/anaconda3/envs/thermo/lib/python3.7/site-packages/IPython/core/completerlib.py:153: The name <deprecated> is deprecated. Please use <replacement> instead.

with these (deprecated -> replacement) pairs:

tf.AUTO_REUSE -> tf.compat.v1.AUTO_REUSE
tf.AttrValue -> tf.compat.v1.AttrValue
tf.COMPILER_VERSION -> tf.version.COMPILER_VERSION
tf.CXX11_ABI_FLAG -> tf.sysconfig.CXX11_ABI_FLAG
tf.ConditionalAccumulator -> tf.compat.v1.ConditionalAccumulator
tf.ConditionalAccumulatorBase -> tf.compat.v1.ConditionalAccumulatorBase
tf.ConfigProto -> tf.compat.v1.ConfigProto
tf.Dimension -> tf.compat.v1.Dimension
tf.Event -> tf.compat.v1.Event
tf.FIFOQueue -> tf.queue.FIFOQueue
tf.FixedLenFeature -> tf.io.FixedLenFeature
tf.FixedLenSequenceFeature -> tf.io.FixedLenSequenceFeature
tf.GIT_VERSION -> tf.version.GIT_VERSION
tf.GPUOptions -> tf.compat.v1.GPUOptions
tf.GRAPH_DEF_VERSION -> tf.version.GRAPH_DEF_VERSION
tf.GRAPH_DEF_VERSION_MIN_CONSUMER -> tf.version.GRAPH_DEF_VERSION_MIN_CONSUMER
tf.GRAPH_DEF_VERSION_MIN_PRODUCER -> tf.version.GRAPH_DEF_VERSION_MIN_PRODUCER
tf.GraphDef -> tf.compat.v1.GraphDef
tf.GraphKeys -> tf.compat.v1.GraphKeys
tf.GraphOptions -> tf.compat.v1.GraphOptions
tf.HistogramProto -> tf.compat.v1.HistogramProto
tf.InteractiveSession -> tf.compat.v1.InteractiveSession
tf.LogMessage -> tf.compat.v1.LogMessage
tf.MONOLITHIC_BUILD -> tf.sysconfig.MONOLITHIC_BUILD
tf.MetaGraphDef -> tf.compat.v1.MetaGraphDef
tf.NameAttrList -> tf.compat.v1.NameAttrList
tf.NoGradient -> tf.no_gradient
tf.NodeDef -> tf.compat.v1.NodeDef
tf.NotDifferentiable -> tf.no_gradient
tf.OpError -> tf.errors.OpError
tf.OptimizerOptions -> tf.compat.v1.OptimizerOptions
tf.PaddingFIFOQueue -> tf.queue.PaddingFIFOQueue
tf.PriorityQueue -> tf.queue.PriorityQueue
tf.QUANTIZED_DTYPES -> tf.dtypes.QUANTIZED_DTYPES
tf.QueueBase -> tf.queue.QueueBase
tf.RandomShuffleQueue -> tf.queue.RandomShuffleQueue
tf.ReaderBase -> tf.compat.v1.ReaderBase
tf.RunMetadata -> tf.compat.v1.RunMetadata
tf.RunOptions -> tf.compat.v1.RunOptions
tf.Session -> tf.compat.v1.Session
tf.SessionLog -> tf.compat.v1.SessionLog
tf.SparseConditionalAccumulator -> tf.compat.v1.SparseConditionalAccumulator
tf.SparseFeature -> tf.io.SparseFeature
tf.SparseTensorValue -> tf.compat.v1.SparseTensorValue
tf.Summary -> tf.compat.v1.Summary
tf.SummaryMetadata -> tf.compat.v1.SummaryMetadata
tf.TensorInfo -> tf.compat.v1.TensorInfo
tf.VERSION -> tf.version.VERSION
tf.VarLenFeature -> tf.io.VarLenFeature
tf.VariableScope -> tf.compat.v1.VariableScope
tf.accumulate_n -> tf.math.accumulate_n
tf.add_check_numerics_ops -> tf.compat.v1.add_check_numerics_ops
tf.add_to_collection -> tf.compat.v1.add_to_collection
tf.add_to_collections -> tf.compat.v1.add_to_collections
tf.angle -> tf.math.angle
tf.arg_max -> tf.argmax
tf.arg_min -> tf.argmin
tf.assert_equal -> tf.compat.v1.assert_equal
tf.assert_greater -> tf.compat.v1.assert_greater
tf.assert_greater_equal -> tf.compat.v1.assert_greater_equal
tf.assert_integer -> tf.compat.v1.assert_integer
tf.assert_less -> tf.compat.v1.assert_less
tf.assert_less_equal -> tf.compat.v1.assert_less_equal
tf.assert_near -> tf.compat.v1.assert_near
tf.assert_negative -> tf.compat.v1.assert_negative
tf.assert_non_negative -> tf.compat.v1.assert_non_negative
tf.assert_non_positive -> tf.compat.v1.assert_non_positive
tf.assert_none_equal -> tf.compat.v1.assert_none_equal
tf.assert_positive -> tf.compat.v1.assert_positive
tf.assert_proper_iterable -> tf.debugging.assert_proper_iterable
tf.assert_rank -> tf.compat.v1.assert_rank
tf.assert_rank_at_least -> tf.compat.v1.assert_rank_at_least
tf.assert_rank_in -> tf.compat.v1.assert_rank_in
tf.assert_same_float_dtype -> tf.debugging.assert_same_float_dtype
tf.assert_scalar -> tf.compat.v1.assert_scalar
tf.assert_type -> tf.compat.v1.assert_type
tf.assert_variables_initialized -> tf.compat.v1.assert_variables_initialized
tf.assign -> tf.compat.v1.assign
tf.assign_add -> tf.compat.v1.assign_add
tf.assign_sub -> tf.compat.v1.assign_sub
tf.batch_to_space_nd -> tf.batch_to_space
tf.betainc -> tf.math.betainc
tf.bincount -> tf.math.bincount
tf.ceil -> tf.math.ceil
tf.check_numerics -> tf.debugging.check_numerics
tf.cholesky -> tf.linalg.cholesky
tf.cholesky_solve -> tf.linalg.cholesky_solve
tf.confusion_matrix -> tf.math.confusion_matrix
tf.conj -> tf.math.conj
tf.container -> tf.compat.v1.container
tf.control_flow_v2_enabled -> tf.compat.v1.control_flow_v2_enabled
tf.convert_to_tensor_or_indexed_slices -> tf.compat.v1.convert_to_tensor_or_indexed_slices
tf.convert_to_tensor_or_sparse_tensor -> tf.compat.v1.convert_to_tensor_or_sparse_tensor
tf.cross -> tf.linalg.cross
tf.cumprod -> tf.math.cumprod
tf.decode_base64 -> tf.io.decode_base64
tf.decode_compressed -> tf.io.decode_compressed
tf.decode_csv -> tf.io.decode_csv
tf.decode_json_example -> tf.io.decode_json_example
tf.delete_session_tensor -> tf.compat.v1.delete_session_tensor
tf.depth_to_space -> tf.compat.v1.depth_to_space
tf.dequantize -> tf.quantization.dequantize
tf.deserialize_many_sparse -> tf.io.deserialize_many_sparse
tf.diag -> tf.linalg.tensor_diag
tf.diag_part -> tf.linalg.tensor_diag_part
tf.digamma -> tf.math.digamma
tf.dimension_at_index -> tf.compat.dimension_at_index
tf.dimension_value -> tf.compat.dimension_value
tf.disable_control_flow_v2 -> tf.compat.v1.disable_control_flow_v2
tf.disable_eager_execution -> tf.compat.v1.disable_eager_execution
tf.disable_tensor_equality -> tf.compat.v1.disable_tensor_equality
tf.disable_v2_behavior -> tf.compat.v1.disable_v2_behavior
tf.disable_v2_tensorshape -> tf.compat.v1.disable_v2_tensorshape
tf.div_no_nan -> tf.math.divide_no_nan
tf.enable_control_flow_v2 -> tf.compat.v1.enable_control_flow_v2
tf.enable_eager_execution -> tf.compat.v1.enable_eager_execution
tf.enable_resource_variables -> tf.compat.v1.enable_resource_variables
tf.enable_tensor_equality -> tf.compat.v1.enable_tensor_equality
tf.enable_v2_behavior -> tf.compat.v1.enable_v2_behavior
tf.enable_v2_tensorshape -> tf.compat.v1.enable_v2_tensorshape
tf.encode_base64 -> tf.io.encode_base64
tf.erf -> tf.math.erf
tf.erfc -> tf.math.erfc
tf.expm1 -> tf.math.expm1
tf.fake_quant_with_min_max_args -> tf.quantization.fake_quant_with_min_max_args
tf.fake_quant_with_min_max_args_gradient -> tf.quantization.fake_quant_with_min_max_args_gradient
tf.fake_quant_with_min_max_vars -> tf.quantization.fake_quant_with_min_max_vars
tf.fake_quant_with_min_max_vars_gradient -> tf.quantization.fake_quant_with_min_max_vars_gradient
tf.fake_quant_with_min_max_vars_per_channel -> tf.quantization.fake_quant_with_min_max_vars_per_channel
tf.fake_quant_with_min_max_vars_per_channel_gradient -> tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient
tf.fft -> tf.signal.fft
tf.fft2d -> tf.signal.fft2d
tf.fft3d -> tf.signal.fft3d
tf.fixed_size_partitioner -> tf.compat.v1.fixed_size_partitioner
tf.floor_div -> tf.math.floordiv
tf.floordiv -> tf.math.floordiv
tf.floormod -> tf.math.floormod
tf.get_collection -> tf.compat.v1.get_collection
tf.get_collection_ref -> tf.compat.v1.get_collection_ref
tf.get_default_graph -> tf.compat.v1.get_default_graph
tf.get_default_session -> tf.compat.v1.get_default_session
tf.get_local_variable -> tf.compat.v1.get_local_variable
tf.get_seed -> tf.compat.v1.get_seed
tf.get_session_handle -> tf.compat.v1.get_session_handle
tf.get_session_tensor -> tf.compat.v1.get_session_tensor
tf.get_variable -> tf.compat.v1.get_variable
tf.get_variable_scope -> tf.compat.v1.get_variable_scope
tf.global_norm -> tf.linalg.global_norm
tf.global_variables -> tf.compat.v1.global_variables
tf.global_variables_initializer -> tf.compat.v1.global_variables_initializer
tf.ifft -> tf.signal.ifft
tf.ifft2d -> tf.signal.ifft2d
tf.ifft3d -> tf.signal.ifft3d
tf.igamma -> tf.math.igamma
tf.igammac -> tf.math.igammac
tf.imag -> tf.math.imag
tf.invert_permutation -> tf.math.invert_permutation
tf.is_finite -> tf.math.is_finite
tf.is_inf -> tf.math.is_inf
tf.is_nan -> tf.math.is_nan
tf.is_non_decreasing -> tf.math.is_non_decreasing
tf.is_numeric_tensor -> tf.debugging.is_numeric_tensor
tf.is_strictly_increasing -> tf.math.is_strictly_increasing
tf.is_variable_initialized -> tf.compat.v1.is_variable_initialized
tf.lbeta -> tf.math.lbeta
tf.lgamma -> tf.math.lgamma
tf.lin_space -> tf.linspace
tf.local_variables -> tf.compat.v1.local_variables
tf.local_variables_initializer -> tf.compat.v1.local_variables_initializer
tf.log -> tf.math.log
tf.log1p -> tf.math.log1p
tf.log_sigmoid -> tf.math.log_sigmoid
tf.logical_xor -> tf.math.logical_xor
WARNING:tensorflow:From /path/to/anaconda3/envs
thermo lib python3 7 site package ipython core completerlib py 153 the name tf make template be deprecate please use tf compat v1 make template instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf make tensor proto be deprecate please use tf compat v1 make tensor proto instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf matching file be deprecate please use tf io match file instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf matrix band part be deprecate please use tf linalg band part instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf matrix determinant be deprecate please use tf linalg det instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf matrix diag be deprecate please use tf linalg diag instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf matrix diag part be deprecate please use tf linalg diag part instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf matrix inverse be deprecate please use tf linalg inv instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf matrix set diag be deprecate please use tf linalg set diag instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf matrix solve be deprecate please use tf linalg solve instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf matrix solve ls be 
deprecate please use tf linalg lstsq instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf matrix transpose be deprecate please use tf linalg matrix transpose instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf matrix triangular solve be deprecate please use tf linalg triangular solve instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf min max variable partitioner be deprecate please use tf compat v1 min max variable partitioner instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf mod be deprecate please use tf math mod instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf model variable be deprecate please use tf compat v1 model variable instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf move average variable be deprecate please use tf compat v1 move average variable instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf no regularizer be deprecate please use tf compat v1 no regularizer instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf op scope be deprecate please use tf compat v1 op scope instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf parse example be deprecate please use tf io parse example instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf parse single example be deprecate please use 
tf io parse single example instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf parse single sequence example be deprecate please use tf io parse single sequence example instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf parse tensor be deprecate please use tf io parse tensor instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf placeholder be deprecate please use tf compat v1 placeholder instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf placeholder with default be deprecate please use tf compat v1 placeholder with default instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf polygamma be deprecate please use tf math polygamma instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf qr be deprecate please use tf linalg qr instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf quantize be deprecate please use tf quantization quantize instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf quantize concat be deprecate please use tf quantization quantize concat instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf random crop be deprecate please use tf image random crop instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf random gamma be deprecate please use tf random gamma instead warn tensorflow from path to 
anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf random normal be deprecate please use tf random normal instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf random poisson be deprecate please use tf random poisson instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf random shuffle be deprecate please use tf random shuffle instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf random uniform be deprecate please use tf random uniform instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf read file be deprecate please use tf io read file instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf real be deprecate please use tf math real instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf reciprocal be deprecate please use tf math reciprocal instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf regex replace be deprecate please use tf string regex replace instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf report uninitialized variable be deprecate please use tf compat v1 report uninitialized variable instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf reset default graph be deprecate please use tf compat v1 reset default graph instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 
the name tf resource variable enable be deprecate please use tf compat v1 resource variable enable instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf reverse v2 be deprecate please use tf reverse instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf rint be deprecate please use tf math rint instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf rsqrt be deprecate please use tf math rsqrt instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf scatter add be deprecate please use tf compat v1 scatter add instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf scatter div be deprecate please use tf compat v1 scatter div instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf scatter max be deprecate please use tf compat v1 scatter max instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf scatter min be deprecate please use tf compat v1 scatter min instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf scatter mul be deprecate please use tf compat v1 scatter mul instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf scatter nd add be deprecate please use tf compat v1 scatter nd add instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf scatter nd sub be deprecate please use tf compat v1 scatter nd sub instead warn 
tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf scatter nd update be deprecate please use tf compat v1 scatter nd update instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf scatter sub be deprecate please use tf compat v1 scatter sub instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf scatter update be deprecate please use tf compat v1 scatter update instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf segment max be deprecate please use tf math segment max instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf segment mean be deprecate please use tf math segment mean instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf segment min be deprecate please use tf math segment min instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf segment prod be deprecate please use tf math segment prod instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf segment sum be deprecate please use tf math segment sum instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf self adjoint eig be deprecate please use tf linalg eigh instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf self adjoint eigval be deprecate please use tf linalg eigvalsh instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core 
completerlib py 153 the name tf serialize many sparse be deprecate please use tf compat v1 serialize many sparse instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf serialize sparse be deprecate please use tf compat v1 serialize sparse instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf serialize tensor be deprecate please use tf io serialize tensor instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf set random seed be deprecate please use tf compat v1 set random seed instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf space to batch nd be deprecate please use tf space to batch instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf space to depth be deprecate please use tf compat v1 space to depth instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse fill empty row be deprecate please use tf sparse fill empty row instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse mask be deprecate please use tf sparse mask instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse matmul be deprecate please use tf linalg matmul instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse maximum be deprecate please use tf sparse maximum instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse 
minimum be deprecate please use tf sparse minimum instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse placeholder be deprecate please use tf compat v1 sparse placeholder instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse reorder be deprecate please use tf sparse reorder instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse reset shape be deprecate please use tf sparse reset shape instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse reshape be deprecate please use tf sparse reshape instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse retain be deprecate please use tf sparse retain instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse segment mean be deprecate please use tf compat v1 sparse segment mean instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse segment sqrt n be deprecate please use tf compat v1 sparse segment sqrt n instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse segment sum be deprecate please use tf compat v1 sparse segment sum instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse slice be deprecate please use tf sparse slice instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse softmax be deprecate please use tf 
sparse softmax instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse tensor dense matmul be deprecate please use tf sparse sparse dense matmul instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse tensor to dense be deprecate please use tf sparse to dense instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse to indicator be deprecate please use tf sparse to indicator instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf sparse transpose be deprecate please use tf sparse transpose instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf squared difference be deprecate please use tf math square difference instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf string join be deprecate please use tf string join instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf string strip be deprecate please use tf string strip instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf string to hash bucket be deprecate please use tf string to hash bucket instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf string to hash bucket fast be deprecate please use tf string to hash bucket fast instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf string to hash bucket strong be deprecate please use tf string to hash 
bucket strong instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf string to number be deprecate please use tf string to number instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf svd be deprecate please use tf linalg svd instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf table initializer be deprecate please use tf compat v1 table initializer instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf tensor scatter add be deprecate please use tf tensor scatter nd add instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf tensor scatter sub be deprecate please use tf tensor scatter nd sub instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf tensor scatter update be deprecate please use tf tensor scatter nd update instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf trace be deprecate please use tf linalg trace instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf trainable variable be deprecate please use tf compat v1 trainable variable instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf truncated normal be deprecate please use tf random truncated normal instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf unsorted segment max be deprecate please use tf math unsorted segment max instead warn tensorflow 
from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf unsorted segment mean be deprecate please use tf math unsorted segment mean instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf unsorted segment min be deprecate please use tf math unsorted segment min instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf unsorted segment prod be deprecate please use tf math unsorted segment prod instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf unsorted segment sqrt n be deprecate please use tf math unsorted segment sqrt n instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf unsorted segment sum be deprecate please use tf math unsorted segment sum instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf variable axis size partitioner be deprecate please use tf compat v1 variable axis size partitioner instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf variable op scope be deprecate please use tf compat v1 variable op scope instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf variable scope be deprecate please use tf compat v1 variable scope instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf variable initializer be deprecate please use tf compat v1 variable initializer instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf verify tensor 
all finite be deprecate please use tf compat v1 verify tensor all finite instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf where v2 be deprecate please use tf compat v2 where instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf wrap function be deprecate please use tf compat v1 wrap function instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf write file be deprecate please use tf io write file instead warn tensorflow from path to anaconda3 envs thermo lib python3 7 site package ipython core completerlib py 153 the name tf zeta be deprecate please use tf math zeta instead press enter to continue press enter to continue press enter to continue press enter to continue press enter to continue press enter to continue press enter to continue press enter to continue
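Until the spam above is fixed at the source, it can usually be silenced by raising the level of the "tensorflow" logger, which TensorFlow's deprecation wrappers log through. A minimal sketch using only the standard `logging` module (whether this catches every message in a given TF build is an assumption):

```python
import logging

# Deprecation notices like "The name tf.X is deprecated..." are emitted
# through the standard "tensorflow" logger at WARNING level, so raising
# the logger's threshold hides them while keeping real errors visible.
logging.getLogger("tensorflow").setLevel(logging.ERROR)

# WARNING-level records are now filtered out for this logger:
assert not logging.getLogger("tensorflow").isEnabledFor(logging.WARNING)
```

This has to run before the code that triggers the warnings (here, before the IPython completer touches the deprecated attributes).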
tensorflowtensorflow
cloud tpu console spam on every tensorflow import
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): gLinux (Debian-like)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v1.12.1-21487-g2e8d5e5 2.1.0-dev20200102
- Python version: Python 3.7.5rc1
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior
Importing tensorflow prints an unnecessary and unhelpful warning:

WARNING:tensorflow:Falling back to TensorFlow client; it's recommended to install the cloud tpu client directly with: pip install cloud-tpu-client.

Describe the expected behavior
`import tensorflow` should not print any messages about Cloud TPUs. This is a normal desktop installation that doesn't have anything to do with TPUs and doesn't need them.

Code to reproduce the issue

python -c 'import tensorflow' 2>&1 | diff -u /dev/null -

Other info / logs
Likely introduced by 5364121e858b.
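For context, a hedged sketch of the kind of fallback-import pattern that produces such a warning (the names here are illustrative, not the actual TensorFlow source): the optional package is tried first, and the warning fires whenever it is absent, which means every plain `import tensorflow` on a machine without the package prints it.

```python
import importlib
import logging

def load_client(preferred="cloud_tpu_client", fallback=None):
    """Try an optional dependency; warn and fall back if it is absent.

    Illustrative sketch only: the real module names and fallback object
    in TensorFlow may differ.
    """
    try:
        return importlib.import_module(preferred)
    except ImportError:
        # Warning emitted unconditionally on the fallback path -- this is
        # what turns a missing optional package into import-time spam.
        logging.getLogger("tensorflow").warning(
            "Falling back to TensorFlow client; it's recommended to install "
            "the cloud tpu client directly with: pip install cloud-tpu-client.")
        return fallback

# On a machine without the optional package, every import hits this path:
client = load_client("definitely_not_installed_pkg", fallback="bundled-stub")
print(client)
```

Demoting the message to DEBUG, or warning only when a TPU is actually requested, would avoid the spam for desktop installs.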
tensorflowtensorflow
signal module import
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.0.0
- Python version: 3.6.8

Describe the current behavior
The signal module is not found when doing `from tensorflow.signal import ...`. I get this error:

ModuleNotFoundError Traceback (most recent call last)
----> 1 from tensorflow.signal import fft2d
ModuleNotFoundError: No module named 'tensorflow.signal'

Describe the expected behavior
I would like `from tensorflow.signal import ...` to work.

Code to reproduce the issue

from tensorflow.signal import fft2d, ifft2d

Other info / logs
In version 1.14 this was working. Also, I can still do the following:

import tensorflow as tf
fft2d = tf.signal.fft2d
ifft2d = tf.signal.ifft2d

but it's obviously not very handy. I have also opened an SO question, but it didn't get a lot of attention.
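The likely mechanism (as I understand it; the toy names below are a stand-in, not TF internals) is that in TF 2.0 `tensorflow.signal` is exposed as an attribute of a lazily-built API module rather than registered as a real submodule in `sys.modules`, so attribute access works while a submodule import fails:

```python
import importlib
import sys
import types

# Toy package mimicking the TF 2.0 API layout: "signal" is an attribute
# on the top-level module, not a registered submodule.
pkg = types.ModuleType("pkg")

class _LazySignal:
    def __getattr__(self, name):
        if name == "fft2d":
            return lambda x: ("fft2d", x)  # stand-in for the real op
        raise AttributeError(name)

pkg.signal = _LazySignal()
sys.modules["pkg"] = pkg  # note: "pkg.signal" is NOT in sys.modules

# Attribute access works, just like tf.signal.fft2d:
assert pkg.signal.fft2d(1) == ("fft2d", 1)

# ...but importing it as a submodule fails, mirroring the reported error:
try:
    importlib.import_module("pkg.signal")
except ModuleNotFoundError as e:
    print(e)
```

This is why `tf.signal.fft2d` remains a workable spelling even when `from tensorflow.signal import fft2d` raises ModuleNotFoundError.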
tensorflowtensorflow
deadlock on recursive tf function decorate function
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 19.10
- TensorFlow installed from (source or binary): binary (conda)
- TensorFlow version (use command below): unknown 2.0.0
- Python version: 3.7

Describe the current behavior
A recursive call to a tf.function-decorated Python function results in a deadlock. A minimal example is provided below. It gets stuck by recursively calling _maybe_define_function, which internally requires a lock (line 2118 in tensorflow_core/python/eager/function.py). This seems to deadlock when trying to create a graph function again while invoking itself.

About use cases: yes, there are some, and yes, in principle there are workarounds, but I assume this is not in general intended to produce a deadlock. And if it is, I would rather propose a loud failure.

Code to reproduce the issue

@tf.function(autograph=False)
def func1(depth=0):
    if depth == 1:
        return depth
    else:
        return func1(depth + 1)

func1(0)
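The deadlock mechanism can be reproduced without TensorFlow: a decorator that guards its body with a plain (non-reentrant) lock blocks forever when the wrapped function calls itself, because the inner call waits on the lock the outer call already holds. This is only a sketch of the mechanism, not TF's actual tracing code:

```python
import threading

def traced(fn, lock=None):
    """Toy decorator guarding each call with a lock, like a tracing cache."""
    lock = lock or threading.Lock()  # non-reentrant, as in the report

    def wrapper(*args):
        with lock:  # the second (recursive) acquisition blocks forever
            return fn(*args)
    return wrapper

@traced
def func1(depth=0):
    if depth == 1:
        return depth
    return func1(depth + 1)

# Run the recursive call on a daemon thread so we can observe the hang
# instead of hanging ourselves:
result = []
t = threading.Thread(target=lambda: result.append(func1(0)), daemon=True)
t.start()
t.join(timeout=0.5)
print("finished:", bool(result))  # the call never completes

# With a reentrant lock the same recursion terminates normally:
def func2(depth=0):
    if depth == 1:
        return depth
    return func2(depth + 1)

func2 = traced(func2, lock=threading.RLock())
assert func2(0) == 1
```

A reentrant lock (or detecting re-entry and raising loudly, as the reporter suggests) would turn the silent hang into defined behavior.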
tensorflowtensorflow
model crash in distribute strategy in tf2 1
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.1.0-rc2
- Python version: 3.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 10.1 / 7.6
- GPU model and memory: RTX Titan, 24 GB

Describe the current behavior: when the model is trained in a distributed setting, it crashes with the message "Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled". When it is run on a single GPU, everything works fine.

Describe the expected behavior: training should run on multiple GPUs without crashing.

Code to reproduce the issue:

```python
import tensorflow as tf
import tensorflow.keras as keras
import random
import os

os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'

class Model(keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.emb = keras.layers.Embedding(51, 100)
        self.layer = keras.layers.Dense(51)

    def call(self, x):
        x = self.emb(x)
        x = self.layer(x)
        return x

strategy = tf.distribute.MirroredStrategy()
data = [[i for i in range(random.randint(10, 50))] for j in range(400)]

def iterator():
    for i in range(len(data)):
        yield data[i], data[i]

with strategy.scope():
    model = Model()
    optimizer = keras.optimizers.Adam()
    dataset = tf.data.Dataset.from_generator(iterator, output_types=(tf.int64, tf.int64))
    batchfied = dataset.padded_batch(4, padded_shapes=([None], [None]))
    batchfied = strategy.experimental_distribute_dataset(batchfied)

    @tf.function(input_signature=batchfied.element_spec)
    def multi_gpu_step(x, y):
        def example_update_step(x, y):
            with tf.GradientTape() as tape:
                y_ = model(x)
                batch_loss = keras.losses.sparse_categorical_crossentropy(
                    y_true=y, y_pred=y_, from_logits=True)
                loss = batch_loss / strategy.num_replicas_in_sync
            step_grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(step_grads, model.trainable_variables))
            return tf.reduce_mean(batch_loss, -1)

        example_losses = strategy.experimental_run_v2(example_update_step, args=(x, y))
        loss_sum = strategy.reduce(tf.distribute.ReduceOp.SUM, example_losses, axis=0)
        return loss_sum

    for x, y in batchfied:
        multi_gpu_step(x, y)
```

Provide a reproducible test case that is the bare minimum necessary to generate the problem.

Other info / logs:

```
2020-01-02 13:56:10.710246: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled
2020-01-02 13:56:10.711123: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled
Traceback (most recent call last):
  File "/home/bj1123/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2240, in _convert_inputs_to_signature
    value, dtype_hint=spec.dtype)
  File "/home/bj1123/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1314, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/bj1123/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 317, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/bj1123/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 258, in constant
    allow_broadcast=True)
  File "/home/bj1123/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 266, in _constant_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/home/bj1123/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Attempt to convert a value (PerReplica:{0 /job:localhost/replica:0/task:0/device:GPU:0, 1 /job:localhost/replica:0/task:0/device:GPU:1}) with an unsupported type to a Tensor.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "multi_test_variable.py", line 56, in <module>
    multi_gpu_step(x, y)
  File "/home/bj1123/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 568, in __call__
    result = self._call(*args, **kwds)
  File "/home/bj1123/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 632, in _call
    return self._stateless_fn(*args, **kwds)
  File "/home/bj1123/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2362, in __call__
    graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
  File "/home/bj1123/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2661, in _maybe_define_function
    args, kwargs)
  File "/home/bj1123/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2185, in canonicalize_function_inputs
    self._flat_input_signature)
  File "/home/bj1123/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2246, in _convert_inputs_to_signature
    format_error_message(inputs, input_signature))
ValueError: When input_signature is provided, all inputs to the Python function must be convertible to tensors:
  inputs: (
    PerReplica:{
      0 /job:localhost/replica:0/task:0/device:GPU:0: tf.Tensor(
        [[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
          24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42  0  0  0  0  0]
         [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
          24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47]], shape=(2, 48), dtype=int64),
      1 /job:localhost/replica:0/task:0/device:GPU:1: tf.Tensor(
        [[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
          24  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
         [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20  0  0  0
           0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]], shape=(2, 48), dtype=int64)},
    PerReplica:{
      0 /job:localhost/replica:0/task:0/device:GPU:0: tf.Tensor(
        [[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
          24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42  0  0  0  0  0]
         [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
          24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47]], shape=(2, 48), dtype=int64),
      1 /job:localhost/replica:0/task:0/device:GPU:1: tf.Tensor(
        [[ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
          24  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
         [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20  0  0  0
           0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]], shape=(2, 48), dtype=int64)})
  input_signature: (
    TensorSpec(shape=(None, None), dtype=tf.int64, name=None),
    TensorSpec(shape=(None, None), dtype=tf.int64, name=None))
```
tensorflow/tensorflow
Correction in the course material
Bug
Description of issue (what needs changing): do we still need the `steps_per_epoch` parameter while fitting the model to the training set? In the TensorFlow tutorial, which is very similar to the MNIST tutorial of the Intro to Deep Learning course, there is no such parameter.

URL(s) with the issue: Udacity course notebook (#scrollTo=S5uhzT6vVIB2)
[Screenshot from 2020-01-01 23:27:10]
TensorFlow tutorial
[Screenshot from 2020-01-01 23:26:50]

Parameters defined: the `steps_per_epoch` parameter in `model.fit` should be removed.

Submitting a pull request: I'll submit a PR right away if this issue is relevant.
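For context on what the disputed parameter controls: `steps_per_epoch` is just the number of batches drawn per epoch, which Keras can infer on its own for a finite dataset. A minimal sketch of the usual formula (plain Python, not the tutorial's code; the sample count 60000 is the Fashion-MNIST training split size used in that tutorial):

```python
import math

def steps_per_epoch(num_examples, batch_size):
    """Number of batches needed to see every example once per epoch."""
    return math.ceil(num_examples / batch_size)

print(steps_per_epoch(60000, 32))
```

Note that when the input pipeline repeats indefinitely (e.g. `dataset.repeat()`), `fit` cannot infer an epoch boundary and `steps_per_epoch` is still required, which may be why one tutorial keeps the parameter and the other drops it.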
tensorflow/tensorflow
TFLite GPU delegate on iOS: failed assertion "Cannot create a buffer of zero length"
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): iPadOS 13.2.2
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: iPad Pro 2018
- TensorFlow installed from (source or binary): binary (TFLiteGpuExperimental)
- TensorFlow version (use command below):
- Python version: N/A
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: iPad GPU

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior: when running the model on the iPad using the CPU, we are able to get the output, but when using the GPU delegate we get the following error: "failed assertion: Cannot create a buffer of zero length".

Describe the expected behavior: no error.

Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem):

```cpp
delegate = NewGpuDelegate(nullptr);
interpreter->ModifyGraphWithDelegate(delegate);
```

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
cuDNN-based LSTM implementation not used when eager execution is disabled
Bug
System information:
- OS: Debian 10 x64
- GPU: GeForce GTX 1060, 6 GB
- Python 3.7.3
- TensorFlow 2.0.0, installed from source (git tag v2.0.0)
- Bazel 0.26.1
- GCC 6.5.0
- CUDA 10.1, cuDNN 7.6.4

Sample code:

```python
import numpy
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, LSTM

# Uncommenting this causes the performance issue:
# tf.compat.v1.disable_eager_execution()

i = Input(shape=(1024, 32))
o = LSTM(units=32)(i)
m = Model(i, o)
m.compile('sgd', 'mse')
m.fit(numpy.zeros((512, 1024, 32)), numpy.zeros((512, 32)))
```

Output with eager execution enabled:

```
Train on 512 samples
2019-12-30 17:06:25.988147: W tensorflow/core/grappler/optimizers/implementation_selector.cc:310] Skipping optimization due to error while loading function libraries: Invalid argument: Functions '__inference___backward_cudnn_lstm_with_fallback_2026_2206' and '__inference___backward_cudnn_lstm_with_fallback_2026_2206_specialized_for_StatefulPartitionedCall_at___inference_distributed_function_2818' both implement 'lstm_c56c9d1d-36f6-4e94-9410-ecd020c8700a' but their signatures do not match.
512/512 - 2s 3ms/sample - loss: 0.0000e+00
```

Output with eager execution disabled:

```
WARNING:tensorflow:From /home/virtualenvs/tensorflow-2.0.0/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
Train on 512 samples
512/512 - 8s 15ms/sample - loss: 0.0000e+00
```

Discussion: note how the runtime is very significantly slower when eager execution is disabled. It seems that the cuDNN-based implementation of LSTM is not used whenever eager execution is disabled, as can be seen on this line in the code (#L1066), where `self.could_use_cudnn` ends up being False. This seems wrong to me, as there's no reason not to use cuDNN in that situation. The warning about the skipped optimization is discussed in #30263 and apparently can be ignored, as per @qlzh727's comment.
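For readers wondering when the fused kernel is picked at all: the Keras LSTM only dispatches to cuDNN when a set of layer-configuration constraints holds. The toy predicate below sketches those conditions; the `eager_enabled` flag models the behaviour reported in this issue, and the exact internal logic (in `recurrent_v2.py`) may differ from this simplification:

```python
def could_use_cudnn(activation="tanh", recurrent_activation="sigmoid",
                    recurrent_dropout=0.0, unroll=False, use_bias=True,
                    eager_enabled=True):
    # Sketch of the constraints under which Keras can use the fused cuDNN
    # LSTM kernel; `eager_enabled` reflects the reported TF 2.0 behaviour
    # where disabling eager execution also disables the cuDNN path.
    return (activation == "tanh"
            and recurrent_activation == "sigmoid"
            and recurrent_dropout == 0.0
            and not unroll
            and use_bias
            and eager_enabled)
```

With the defaults the predicate is true, which matches the fast run above; flipping any one condition (including the eager flag) forces the slow generic implementation.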
tensorflow/tensorflow
Distributed data I/O problem, and I want to know why
Bug
Hi, I found some bugs. The code is:

```python
import tensorflow as tf

if __name__ == '__main__':
    def gen():
        for i in range(10):
            yield (i, 2, 3, 4, 5, 6, 7, 8)

    dataset = tf.data.Dataset.from_generator(
        gen,
        output_types=(tf.float32, tf.float32, tf.int32, tf.int32,
                      tf.int32, tf.float32, tf.int32, tf.int32),
        output_shapes=None, args=None)

    for one_batch in dataset:
        print('one_batch', one_batch)
    print('end')

    num_gpus = 1
    devices = ['/device:GPU:{}'.format(i) for i in range(num_gpus)]
    strategy = tf.distribute.MirroredStrategy(devices)
    input_context = tf.distribute.InputContext(
        num_input_pipelines=1, input_pipeline_id=0, num_replicas_in_sync=1)

    with strategy.scope():
        def dataset_fn(input_context):
            dataset = tf.data.Dataset.from_generator(
                gen,
                output_types=(tf.float32, tf.float32, tf.int32, tf.int32,
                              tf.int32, tf.float32, tf.int32, tf.int32),
                output_shapes=None, args=None)
            return dataset.shard(input_context.num_input_pipelines,
                                 input_context.input_pipeline_id)

        train_dist_dataset = strategy.experimental_distribute_datasets_from_function(dataset_fn)

        for one_batch in train_dist_dataset:
            print('one_batch', one_batch)
```

The code can be run, but in the distributed version, `for one_batch in train_dist_dataset` errors at the last batch:

```
Traceback (most recent call last):
  File "/usr/local/python35/lib/python3.5/pdb.py", line 1665, in main
    pdb._runscript(mainpyfile)
  File "/usr/local/python35/lib/python3.5/pdb.py", line 1546, in _runscript
    self.run(statement)
  File "/usr/local/python35/lib/python3.5/bdb.py", line 431, in run
    exec(cmd, globals, locals)
  File "<string>", line 1, in <module>
  File "/search/speech/hubo/git/tf-code-acoustics/tf2.0-model/io_test.py", line 45, in <module>
    for one_batch in train_dist_dataset:
  File "/usr/local/python35/lib/python3.5/site-packages/tensorflow_core/python/distribute/input_lib.py", line 275, in __next__
    return self.get_next()
  File "/usr/local/python35/lib/python3.5/site-packages/tensorflow_core/python/distribute/input_lib.py", line 304, in get_next
    global_has_value, replicas = _get_next_as_optional(self, self._strategy)
  File "/usr/local/python35/lib/python3.5/site-packages/tensorflow_core/python/distribute/input_lib.py", line 200, in _get_next_as_optional
    iterator._iterators[i].get_next_as_list(new_name))  # pylint: disable=protected-access
  File "/usr/local/python35/lib/python3.5/site-packages/tensorflow_core/python/distribute/input_lib.py", line 878, in get_next_as_list
    lambda: _dummy_tensor_fn(data.value_structure),
  File "/usr/local/python35/lib/python3.5/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/usr/local/python35/lib/python3.5/site-packages/tensorflow_core/python/ops/control_flow_ops.py", line 1204, in cond
    result = false_fn()
  File "/usr/local/python35/lib/python3.5/site-packages/tensorflow_core/python/distribute/input_lib.py", line 878, in <lambda>
    lambda: _dummy_tensor_fn(data.value_structure),
  File "/usr/local/python35/lib/python3.5/site-packages/tensorflow_core/python/distribute/input_lib.py", line 801, in _dummy_tensor_fn
    result.append(create_dummy_tensor(feature_shape, feature_type))
  File "/usr/local/python35/lib/python3.5/site-packages/tensorflow_core/python/distribute/input_lib.py", line 784, in create_dummy_tensor
    for dim in feature_shape:
TypeError: 'NoneType' object is not iterable
Uncaught exception. Entering post mortem debugging.
Running 'cont' or 'step' will restart the program.
```

I want to know why.
tensorflow/tensorflow
Why doesn't tf.keras.losses.binary_crossentropy raise an error?
Bug
```python
import numpy as np
import tensorflow as tf

x = np.arange(10, dtype=np.float64).reshape(10, 1)
x.shape  # (10, 1)
y = np.arange(10, dtype=np.float64)
y.shape  # (10,)

tf.keras.losses.binary_crossentropy(y_true=y, y_pred=x)  # this line doesn't raise an error
tf.keras.metrics.BinaryAccuracy()(y_true=y, y_pred=x)    # this line neither
tf.keras.metrics.Precision()(y_true=y, y_pred=x)         # this line raises an error
```

I think `binary_crossentropy` and `BinaryAccuracy` should raise a ValueError like `tf.keras.metrics.Precision`:

```
ValueError: Shapes (128, 1) and (128,) are incompatible
```
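The silent behaviour most likely comes from NumPy-style broadcasting: a `(10,)` array combined elementwise with a `(10, 1)` array broadcasts to `(10, 10)`, so the loss is computed over every label/prediction pair instead of failing. A minimal NumPy sketch of the mechanism (illustrative, not the Keras internals):

```python
import numpy as np

y_true = np.arange(10, dtype=np.float64)                  # shape (10,)
y_pred = np.arange(10, dtype=np.float64).reshape(10, 1)   # shape (10, 1)

# Elementwise ops broadcast instead of raising: (10,) vs (10, 1) -> (10, 10).
diff = y_true - y_pred
print(diff.shape)
```

This is why an explicit shape check (as `Precision` performs) is valuable: without it the loss quietly averages over a 10x10 grid of mismatched pairs.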
tensorflow/tensorflow
Variational autoencoder code sample error in TF 2.0
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: "Putting it all together: an end-to-end example"

Description of issue (what needs changing): the code given in the guide does not run if the latent dimension is set to 1. It runs fine for every latent dimension > 1.

Clear description: when the model is built and trained with

```python
vae = VariationalAutoEncoder(784, 64, 1)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
```

it throws the error:

```
ValueError: The last dimension of the inputs to `Dense` should be defined. Found `None`.
```

Correct links: N/A

Parameters defined: latent_dim = 1 (`vae = VariationalAutoEncoder(784, 64, 1)`)

Returns defined: N/A

Raises listed and defined: `ValueError: The last dimension of the inputs to Dense should be defined. Found None.`

Usage example (code in the guide):

```python
class Sampling(layers.Layer):
    """Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""

    def call(self, inputs):
        z_mean, z_log_var = inputs
        batch = tf.shape(z_mean)[0]
        dim = tf.shape(z_mean)[1]
        epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon


class Encoder(layers.Layer):
    """Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""

    def __init__(self, latent_dim=32, intermediate_dim=64, name='encoder', **kwargs):
        super(Encoder, self).__init__(name=name, **kwargs)
        self.dense_proj = layers.Dense(intermediate_dim, activation='relu')
        self.dense_mean = layers.Dense(latent_dim)
        self.dense_log_var = layers.Dense(latent_dim)
        self.sampling = Sampling()

    def call(self, inputs):
        x = self.dense_proj(inputs)
        z_mean = self.dense_mean(x)
        z_log_var = self.dense_log_var(x)
        z = self.sampling((z_mean, z_log_var))
        return z_mean, z_log_var, z


class Decoder(layers.Layer):
    """Converts z, the encoded digit vector, back into a readable digit."""

    def __init__(self, original_dim, intermediate_dim=64, name='decoder', **kwargs):
        super(Decoder, self).__init__(name=name, **kwargs)
        self.dense_proj = layers.Dense(intermediate_dim, activation='relu')
        self.dense_output = layers.Dense(original_dim, activation='sigmoid')

    def call(self, inputs):
        x = self.dense_proj(inputs)
        return self.dense_output(x)


class VariationalAutoEncoder(tf.keras.Model):
    """Combines the encoder and decoder into an end-to-end model for training."""

    def __init__(self, original_dim, intermediate_dim=64, latent_dim=32,
                 name='autoencoder', **kwargs):
        super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)
        self.original_dim = original_dim
        self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)
        self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)

    def call(self, inputs):
        z_mean, z_log_var, z = self.encoder(inputs)
        reconstructed = self.decoder(z)
        # Add KL divergence regularization loss.
        kl_loss = -0.5 * tf.reduce_mean(
            z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
        self.add_loss(kl_loss)
        return reconstructed


vae = VariationalAutoEncoder(784, 64, 1)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
```

Requested visuals, if applicable: N/A

Submitting a pull request: N/A
tensorflow/tensorflow
SequenceEnqueuer doesn't work
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0
- Python version: 3.7
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior: a SequenceEnqueuer object takes a Sequence object, which can then be used with multiprocessing for a fast data pipeline. But as soon as you start an enqueuer, it throws NotImplementedError.

Describe the expected behavior: it should produce batches of data using multiprocessing.

Code to reproduce the issue:

```python
# pip install tensorflow
import numpy as np
import tensorflow as tf
from tensorflow.keras.utils import Sequence, to_categorical, SequenceEnqueuer

class DataGenerator(Sequence):
    def __init__(self, batch_size=32):
        self.batch_size = batch_size
        self.indices = np.arange(1024)

    def __len__(self):
        return 1024

    def __getitem__(self, idx):
        x = np.random.rand(self.batch_size, 32, 32, 3).astype(np.float32)
        y = np.random.randint(10, size=self.batch_size)
        return x, y

ds = DataGenerator(batch_size=32)
enqueuer = SequenceEnqueuer(ds, use_multiprocessing=True)
enqueuer.start(workers=2)
```

Other info / logs:

```
NotImplementedError                       Traceback (most recent call last)
in
      1 enqueuer = SequenceEnqueuer(ds, use_multiprocessing=True)
----> 2 enqueuer.start(workers=2)

1 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/utils/data_utils.py in start(self, workers, max_queue_size)
    727
    728     if self.use_multiprocessing:
--> 729       self.executor_fn = self._get_executor_init(workers)
    730     else:
    731       # We do not need the init since it's threads.

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/utils/data_utils.py in _get_executor_init(self, workers)
    778         Function, a Function to initialize the pool
    779     """
--> 780     raise NotImplementedError
    781
    782   @abstractmethod

NotImplementedError:
```
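The traceback shows the error coming from an abstract method: `SequenceEnqueuer` leaves `_get_executor_init` unimplemented, and the concrete subclass normally used with a `Sequence` is `OrderedEnqueuer`. To illustrate what an enqueuer does conceptually, here is a hypothetical thread-based sketch (an educational stand-in, not the Keras implementation):

```python
import queue
import threading

class SimpleEnqueuer:
    """Toy enqueuer: worker threads pull batches from a Sequence-like
    object (anything indexable with a length) and push them onto a
    bounded queue for the consumer to drain."""

    def __init__(self, sequence, workers=2, max_queue_size=10):
        self.sequence = sequence
        self.workers = workers
        self.queue = queue.Queue(maxsize=max_queue_size)
        self._index = iter(range(len(sequence)))
        self._lock = threading.Lock()
        self._threads = []

    def _work(self):
        while True:
            with self._lock:
                idx = next(self._index, None)  # claim the next batch index
            if idx is None:
                return  # all batches claimed
            self.queue.put(self.sequence[idx])

    def start(self):
        for _ in range(self.workers):
            t = threading.Thread(target=self._work, daemon=True)
            t.start()
            self._threads.append(t)

    def get(self, n):
        """Retrieve n batches (blocking until workers produce them)."""
        return [self.queue.get() for _ in range(n)]
```

In the real API, replacing `SequenceEnqueuer` with `OrderedEnqueuer` in the reproduction script is the usual fix, since the base class is not meant to be instantiated directly.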
tensorflow/tensorflow
DCGAN tutorial's BatchNormalization is maybe wrong
Bug
URL(s) with the issue:

Description of issue (what needs changing): in your tutorial, BatchNormalization should be ActNormalization, as in Glow.

Clear description: we need to correct the BatchNormalization usage.

Usage example: we should input the shape when BatchNormalization is initialized.
tensorflow/tensorflow
tf.profiler calculates incorrect FLOPs in MobileNet-SSD v2
Bug
The script below gives correct results for MobileNet-SSD v1 FLOPs, but fails in calculating MobileNet-SSD v2 FLOPs: MobileNet-SSD v2 FLOPs come out to be only 185 million.

```python
import tensorflow as tf
import numpy as np
import os
import time
import cv2

sess = tf.Session()
graph = tf.get_default_graph()

image_np = cv2.imread('41566_large.jpg')
image = cv2.resize(image_np, (300, 300), interpolation=cv2.INTER_CUBIC)
image = np.expand_dims(image, axis=0)
print(image)

for i in range(1):
    tf.reset_default_graph()
    st = '/device:CPU:' + str(i)
    with tf.device(st):
        with graph.as_default():
            with sess.as_default():
                # Restore the model.
                run_metadata = tf.RunMetadata()
                saver = tf.train.import_meta_graph('ssd_mobilenet_v2_coco_2018_03_29/model.ckpt.meta')
                saver.restore(sess, tf.train.latest_checkpoint('ssd_mobilenet_v2_coco_2018_03_29/'))

                ops = tf.get_default_graph().get_operations()
                all_tensor_names = {output.name for op in ops for output in op.outputs}
                tensor_dict = {}
                for key in ['num_detections', 'detection_boxes', 'detection_scores',
                            'detection_classes', 'detection_masks']:
                    tensor_name = key + ':0'
                    if tensor_name in all_tensor_names:
                        tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(tensor_name)

                if 'detection_masks' in tensor_dict:
                    detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                    detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                    real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
                    detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
                    detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
                    detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
                        detection_masks, detection_boxes, image.shape[1], image.shape[2])
                    detection_masks_reframed = tf.cast(
                        tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                    tensor_dict['detection_masks'] = tf.expand_dims(detection_masks_reframed, 0)

                image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')

                # Run inference.
                opts = tf.profiler.ProfileOptionBuilder.trainable_variables_parameter()
                for i in range(1):
                    print('inference flag')
                    start = time.time()
                    output_dict = sess.run(tensor_dict,
                                           options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
                                           feed_dict={image_tensor: image},
                                           run_metadata=run_metadata)
                    end = time.time()
                    # print(run_metadata)
                    print(end - start)

                opts = tf.profiler.ProfileOptionBuilder.trainable_variables_parameter()
                flops = tf.profiler.profile(sess.graph, run_meta=run_metadata,
                                            options=tf.profiler.ProfileOptionBuilder.float_operation())
                params = tf.profiler.profile(sess.graph, run_meta=run_metadata,
                                             cmd='op', options=opts)
                print('flops before freezing', flops.total_float_ops)
                print('parametrs', params.total_parameters)

                output_dict['num_detections'] = int(output_dict['num_detections'][0])
                output_dict['detection_classes'] = output_dict['detection_classes'][0].astype(np.int64)
                output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
                output_dict['detection_scores'] = output_dict['detection_scores'][0]
                if 'detection_masks' in output_dict:
                    output_dict['detection_masks'] = output_dict['detection_masks'][0]
```

The model is taken from the TensorFlow API GitHub repo; this is the link.
tensorflow/tensorflow
Integrating odeint in TensorFlow 2.0
Bug
Can't find `odeint` or `integrate`. Can't import `contrib` from `tf.compat.v1`. Where is `odeint`?
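`tf.contrib` was removed in TensorFlow 2.0, taking `tf.contrib.integrate.odeint` with it; TensorFlow Probability's `tfp.math.ode` module is one place ODE solvers resurfaced. As a self-contained illustration of what `odeint` computes, here is a tiny fixed-step Runge-Kutta 4 integrator in plain Python (an educational stand-in, not the TensorFlow implementation, which used adaptive step sizes):

```python
def rk4_odeint(func, y0, ts):
    """Integrate dy/dt = func(y, t) from y0 across the time grid ts
    using classical fixed-step RK4; returns y at each point of ts."""
    ys = [y0]
    y = y0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h = t1 - t0
        k1 = func(y, t0)
        k2 = func(y + 0.5 * h * k1, t0 + 0.5 * h)
        k3 = func(y + 0.5 * h * k2, t0 + 0.5 * h)
        k4 = func(y + h * k3, t1)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        ys.append(y)
    return ys
```

For example, integrating dy/dt = y from y(0) = 1 over [0, 1] should give values close to e^t at each grid point.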
tensorflow/tensorflow
EMNIST link broken
Bug
The link to the EMNIST page in TensorFlow's docs is broken. The existing link should be replaced.
tensorflow/tensorflow
Which is the correct type for tf.debugging.assert_shapes?
Bug
URL(s) with the issue:

Description of issue (what needs changing): update the documentation or the implementation of `tf.debugging.assert_shapes`. The `shapes` argument requires a list of tuples, not a dictionary.

Clear description:

Your documentation (Python 4's syntax?):

```python
x = tf.random.normal((128, 32, 32, 1))
tf.debugging.assert_shapes(
    x: (128, 32, 32, 1),
)
# SyntaxError: invalid syntax
```

With a dictionary (Python 3 syntax):

```python
tf.debugging.assert_shapes({x: (128, 32, 32, 1)})
```

```
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 713, in __hash__
    raise TypeError("Tensor is unhashable if Tensor equality is enabled. ")
TypeError: Tensor is unhashable if Tensor equality is enabled. Instead, use tensor.experimental_ref() as the key.
```

With a list of tuples (Python 3 syntax), not a dictionary:

```python
tf.debugging.assert_shapes([(x, (128, 32, 32, 1))])
# nothing -- correct
```
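To make the list-of-tuples contract concrete, here is a toy checker mirroring the shape-assertion behaviour described above (a hypothetical helper, not the TensorFlow implementation, which also supports symbolic dimensions and `None` wildcards):

```python
import numpy as np

def assert_shapes(shapes):
    """Each entry is (array_like, expected_shape); silent on match,
    raises ValueError on mismatch -- like the list-of-tuples TF call."""
    for tensor, expected in shapes:
        actual = tuple(tensor.shape)
        if tuple(expected) != actual:
            raise ValueError(
                f"shape {actual} does not match expected {tuple(expected)}")

x = np.zeros((128, 32, 32, 1))
assert_shapes([(x, (128, 32, 32, 1))])  # passes silently
```

Note the key design difference from the dictionary form: tuples don't require hashing the tensor, which is exactly what fails with eager tensors in the dictionary example above.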
tensorflow/tensorflow
tf.keras.layers.Reshape not working as expected
Bug
System information:
- Ubuntu 18.04.3 LTS
- TensorFlow installed using pip
- TensorFlow version: 2.0.0 (CPU version)
- Python version: 3.6.9

Describe the current behavior: the sample code shown below gives the following partial output and exception:

```
(400, 100)
(2, 200, 100)
lambda reshape works

InvalidArgumentError                      Traceback (most recent call last)
in
     12 print(out_1.shape)
     13 print('lambda reshape works')
---> 14 reshape_layer(inp)

code/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    889           with base_layer_utils.autocast_context_manager(
    890               self._compute_dtype):
--> 891             outputs = self.call(cast_inputs, *args, **kwargs)
    892           self._handle_activity_regularization(inputs, outputs)
    893           self._set_mask_metadata(inputs, outputs, input_masks)

code/venv/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/core.py in call(self, inputs)
    469   def call(self, inputs):
    470     return array_ops.reshape(inputs,
--> 471         (array_ops.shape(inputs)[0],) + self.target_shape)
    472
    473   def get_config(self):

code/venv/lib/python3.6/site-packages/tensorflow_core/python/ops/array_ops.py in reshape(tensor, shape, name)
    129     A `Tensor`. Has the same type as `tensor`.
    130   """
--> 131   result = gen_array_ops.reshape(tensor, shape, name)
    132   tensor_util.maybe_set_static_shape(result, shape)
    133   return result

code/venv/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_array_ops.py in reshape(tensor, shape, name)
   8104       try:
   8105         return reshape_eager_fallback(
-> 8106             tensor, shape, name=name, ctx=_ctx)
   8107       except _core._SymbolicException:
   8108         pass  # Add nodes to the TensorFlow graph.

code/venv/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_array_ops.py in reshape_eager_fallback(tensor, shape, name, ctx)
   8142   _attrs = ("T", _attr_T, "Tshape", _attr_Tshape)
   8143   _result = _execute.execute(b"Reshape", 1, inputs=_inputs_flat, attrs=_attrs,
-> 8144                              ctx=ctx, name=name)
   8145   _execute.record_gradient(
   8146       "Reshape", _inputs_flat, _attrs, _result, name)

code/venv/lib/python3.6/site-packages/tensorflow_core/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     65     else:
     66       message = e.message
---> 67       six.raise_from(core._status_to_exception(e.code, message), None)
     68   except TypeError as e:
     69     # Keras symbolic tensors

code/venv/lib/python3.6/site-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: Input to reshape is a tensor with 40000 values, but the requested shape has 8000000 [Op:Reshape]
```

Describe the expected behavior: the defined Reshape layer should reshape the input of shape (400, 100) to a tensor of shape (2, 200, 100). Wrapping `tf.reshape` in a Lambda layer works as an alternative solution.

Code to reproduce the issue:

```python
from tensorflow.keras.layers import Reshape, Lambda
import tensorflow as tf
import numpy as np

max_len = 200
char_hidden_dim = 50

reshape_layer = Reshape((max_len, 2 * char_hidden_dim))
lambda_reshape_layer = Lambda(
    lambda x: tf.reshape(x, shape=(-1, max_len, 2 * char_hidden_dim)))

n_samples = 400
dim = 100
inp = np.zeros((n_samples, dim), dtype=np.float32)
print(inp.shape)
out_1 = lambda_reshape_layer(inp)
print(out_1.shape)
print('lambda reshape works')
reshape_layer(inp)
```

Do I have a flawed understanding of the Reshape layer, or can someone confirm that it is indeed a bug? Please help.
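A likely explanation of the numbers in the error, sketched in plain NumPy: `keras.layers.Reshape` reshapes each sample and preserves the batch dimension, while `tf.reshape` with a leading -1 reinterprets the whole buffer and lets the batch size change. This is an interpretation of the traceback above, not an official statement of intent:

```python
import numpy as np

max_len, hidden = 200, 50
x = np.zeros((400, 100), dtype=np.float32)

# What Lambda(tf.reshape(..., (-1, max_len, 2 * hidden))) does:
# reshape the whole 40,000-element buffer, batch dimension becomes 2.
whole = x.reshape(-1, max_len, 2 * hidden)
print(whole.shape)

# What Reshape((max_len, 2 * hidden)) asks for: keep the batch of 400
# and give EACH sample shape (200, 100), i.e. 400 * 20,000 = 8,000,000
# values -- but only 40,000 exist, hence the InvalidArgumentError.
```

Under this reading the layer behaves as documented (target shape excludes the batch dimension), and the Lambda workaround is the right tool when the batch dimension itself must change.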
tensorflow/tensorflow
TF 2.0 ParameterServerStrategy and CentralStorageStrategy don't work with Keras when using GPU, even though they work well with CPU
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): I used the code in the MultiWorkerMirroredStrategy tutorial, and I only changed MultiWorkerMirroredStrategy to ParameterServerStrategy and turned off eager mode.
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.2 LTS
- TensorFlow installed from (source or binary): pip3
- TensorFlow version (use command below): tensorflow 2.0.0, tensorflow-gpu 2.0.0
- Python version: Python 3.6.8
- CUDA/cuDNN version: CUDA 10.0
- GPU model and memory: Titan Xp

Code to reproduce the issue:

```python
from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow_datasets as tfds
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

strategy = tf.distribute.experimental.ParameterServerStrategy()

BUFFER_SIZE = 10000
BATCH_SIZE = 64
NUM_WORKERS = 2
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS

def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label

datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
train_datasets_unbatched = datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE)
train_datasets = train_datasets_unbatched.batch(GLOBAL_BATCH_SIZE).repeat()

def build_and_compile_cnn_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        metrics=['accuracy'])
    return model

with strategy.scope():
    multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3, steps_per_epoch=938)
```

Describe the current behavior: the above code works well with CPU, but when using GPU it produces the error below. My TF_CONFIG variable is like this:

```python
TF_CONFIG = {
    'cluster': {
        'worker': ['localhost:7779'],
        'ps': ['localhost:7777']
    },
    'task': {'index': 0, 'type': 'ps'}
}
```

It also produces the same error when I try to apply CentralStorageStrategy.

```
Traceback (most recent call last):
  File "temp.py", line 48, in <module>
    multi_worker_model = build_and_compile_cnn_model()
  File "temp.py", line 35, in build_and_compile_cnn_model
    tf.keras.layers.Dense(10, activation='softmax')
  File "/home/elzino/tf2/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/elzino/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/sequential.py", line 114, in __init__
    self.add(layer)
  File "/home/elzino/tf2/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/elzino/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/sequential.py", line 178, in add
    layer(x)
  File "/home/elzino/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 817, in __call__
    self._maybe_build(inputs)
  File "/home/elzino/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 2141, in _maybe_build
    self.build(input_shapes)
  File "/home/elzino/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/convolutional.py", line 165, in build
    dtype=self.dtype)
  File "/home/elzino/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 2311, in __setattr__
    if val.trainable:
  File "/home/elzino/tf2/lib/python3.6/site-packages/tensorflow_core/python/ops/variables.py", line 477, in trainable
    raise NotImplementedError
NotImplementedError
```

Describe the expected behavior: it should work well, like when using CPU.

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
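For completeness, the cluster layout shown above is normally handed to ParameterServerStrategy through the `TF_CONFIG` environment variable, set before the strategy is constructed in each process. A sketch with the ports and topology copied from this issue (the multi-process launch itself is out of scope here):

```python
import json
import os

# One parameter server and one worker, both on localhost, matching the
# issue's setup; each process would set its own "task" entry.
tf_config = {
    "cluster": {
        "worker": ["localhost:7779"],
        "ps": ["localhost:7777"],
    },
    "task": {"type": "ps", "index": 0},
}

# The strategy reads this variable at construction time, so it must be
# exported before tf.distribute.experimental.ParameterServerStrategy().
os.environ["TF_CONFIG"] = json.dumps(tf_config)
```

A worker process would use the same `cluster` dictionary but `{"type": "worker", "index": 0}` as its task.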
tensorflow/tensorflow
keras.layers.concatenate does not work when saving a model
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow a custom model os platform and distribution e g linux ubuntu 16 04 windows7 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device no tensorflow instal from source or binary tensorflow version use command below 2 0 0 python version 3 75 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version 10 0 1 gpu model and memory quadro k620 m describe the current behavior I build a custom model as follow class c3br tf keras model 3d convolution batch normalisation relu def init self filternum ksize strsize padmode super c3br self init self conv layer conv3d filter filternum kernel size ksize stride strsize padding padmode datum format channel first self bn layer batchnormalization axis 1 def call self input iftrain true x self conv input if iftrain true x self bn x return activation relu x def build model self input shape a work around to define dimension of signal through the nn self build input shape input tf keras input shape input shape 1 self call input class simpleunet1 tf keras model serialise basic unit so as to build up a double layered encoder decoder u net input indim for initialisation modaility channel tensor dimensions classnum background include name name for the net input 5d tf tensor of mbsize modaility channel tensor dimension input must be organise into channel first order input shape a 1x5 tuple mbsize modaility channel tensor dimension iftrain true for training and false for validation and testing return output 5d tf tensor of mbsize classnum tensor dimension def init self indim classnum name simpleunet kwarg super simpleunet1 self init name name kwarg self indim indim self classnum classnum 
```python
# (from the subclassed model's __init__)
dimenst1end = np.array(indim[1:]) - 2 - 2
dimenst2ed = dimenst1end // 2 - 2 - 2
dimbridgeend = (dimenst2ed // 2 - 2 - 2) * 2
dimdestd1end = (dimbridgeend - 2 - 2) * 2
self.outdim = dimdestd1end - 2 - 2
temp = ((dimenst2ed - dimbridgeend) / 2).astype('int32')
crop3d1 = tuple(np.tile(temp, (2, 1)).T)
temp = ((dimenst1end - dimdestd1end) / 2).astype('int32')
crop3d2 = tuple(np.tile(temp, (2, 1)).T)

self.en_st1_cbr1 = c3br(32, 3, 1, 'valid')
self.en_st1_cbr2 = c3br(64, 3, 1, 'valid')
self.en_st2_mp = layers.MaxPooling3D(pool_size=(2, 2, 2), strides=(2, 2, 2),
                                     padding='valid', data_format='channels_first')
self.en_st2_cbr1 = c3br(64, 3, 1, 'valid')
self.en_st2_cbr2 = c3br(128, 3, 1, 'valid')
self.bridge_mp = layers.MaxPooling3D(pool_size=(2, 2, 2), strides=(2, 2, 2),
                                     padding='valid', data_format='channels_first')
self.bridge_cbr1 = c3br(128, 3, 1, 'valid')
self.bridge_cbr2 = c3br(256, 3, 1, 'valid')
self.bridge_tconv1 = layers.Conv3DTranspose(256, 2, strides=2, padding='valid',
                                            data_format='channels_first')
self.de_3dcrop1 = layers.Cropping3D(crop3d1, data_format='channels_first')
self.de_st1_concat = layers.Concatenate(axis=1)
self.de_st1_cbr1 = c3br(256, 3, 1, 'valid')
self.de_st1_cbr2 = c3br(128, 3, 1, 'valid')
self.de_st1_tconv1 = layers.Conv3DTranspose(128, 2, strides=2, padding='valid',
                                            data_format='channels_first')
self.de_3dcrop2 = layers.Cropping3D(crop3d2, data_format='channels_first')
self.de_st2_concat = layers.Concatenate(axis=1)
self.de_st2_cbr1 = c3br(64, 3, 1, 'valid')
self.de_st2_cbr2 = c3br(64, 3, 1, 'valid')
self.final_conv3d = layers.Conv3D(filters=self.classnum, kernel_size=3, strides=1,
                                  padding='valid', data_format='channels_first')
```

The methods:

```python
@tf.function
def call(self, inputs, iftrain=True):
    x0 = self.en_st1_cbr1(inputs, iftrain)
    xenst1end = self.en_st1_cbr2(x0, iftrain)
    x1 = self.en_st2_mp(xenst1end)
    x2 = self.en_st2_cbr1(x1, iftrain)
    xenst2ed = self.en_st2_cbr2(x2, iftrain)
    x3 = self.bridge_mp(xenst2ed)
    x4 = self.bridge_cbr1(x3, iftrain)
    x5 = self.bridge_cbr2(x4, iftrain)
    xbridgeend = self.bridge_tconv1(x5)
    xcrop1 = self.de_3dcrop1(xenst2ed)
    print(xbridgeend.shape)
    print(xcrop1.shape)
    x6 = self.de_st1_concat([xbridgeend, xcrop1])
    print(x6.shape)
    x7 = self.de_st1_cbr1(x6, iftrain)
    x8 = self.de_st1_cbr2(x7, iftrain)
    xdest1end = self.de_st1_tconv1(x8)
    xcrop2 = self.de_3dcrop2(xenst1end)
    x9 = self.de_st2_concat([xdest1end, xcrop2])
    x10 = self.de_st2_cbr1(x9, iftrain)
    x11 = self.de_st2_cbr2(x10, iftrain)
    x12 = self.final_conv3d(x11)
    outputs = activations.softmax(x12, axis=1)
    return outputs

def build_model(self, input_shape):
    # a workaround to define the dimensions of the signals through the NN
    self.build(input_shape)
    inputs = tf.keras.Input(shape=input_shape[1:])
    self.call(inputs)

def compute_output_shape(self, input_shape):
    # override this function if one expects to use the subclassed model in
    # Keras's fit() method; otherwise it is optional
    return tf.TensorShape(np.append(self.classnum, self.outdim))
```

Please pay attention to the following two definitions. If I use the `Concatenate` layer in this way (the "C" is a capitalised one), then when I try and save the model, it works:

```python
self.de_st1_concat = layers.Concatenate(axis=1)
self.de_st2_concat = layers.Concatenate(axis=1)
```

For instance:

```python
modelindim = (4, 64, 64, 64)
classnum = 2
mbsize = 2
tunet = simpleunet1(modelindim, classnum)
tunet.build_model(input_shape=(mbsize, *modelindim))
x = tf.random.uniform((mbsize, 4, 64, 64, 64))
y = tunet(x)
tunet.summary()
tunet._set_inputs(x)
tunet.save(r'ttweight', save_format='tf')
```

But if, as per this page, I use `layers.concatenate` (small "c") to generate the signals `x6` and `x9` respectively, that is:

```python
x6 = layers.concatenate([xbridgeend, xcrop1], axis=1)
x9 = layers.concatenate([xdest1end, xcrop2], axis=1)
```

then saving the model in the same way as above raises an error:

```
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis.
Got inputs shapes: [(None, 256, None, None, None), (None, 128, 18, 18, 18)]
```

The detailed log is here. It took me almost an entire day to figure out the root cause. In conclusion, I think the second form may have to be deprecated; otherwise users like me will be confused and misguided.
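As a sanity check on the `18`s in that error, the constructor's valid-convolution/crop bookkeeping can be replayed in plain Python. This is my own sketch of the arithmetic, not the author's code; it assumes each `c3br` block is a single 3×3×3 'valid' conv that shaves 2 voxels per axis:

```python
def unet_dims(indim):
    """Replay the spatial-size bookkeeping of the 3D U-Net above for one axis."""
    enc1 = indim - 2 - 2                # two valid 3x3x3 convs in encoder stage 1
    enc2 = enc1 // 2 - 2 - 2            # max-pool /2, then two valid convs
    bridge = (enc2 // 2 - 2 - 2) * 2    # pool, two convs, transpose conv x2
    dec1 = (bridge - 2 - 2) * 2         # two convs, transpose conv x2
    out = dec1 - 2 - 2                  # final valid convs
    crop1 = (enc2 - bridge) // 2        # Cropping3D margin for the first skip
    crop2 = (enc1 - dec1) // 2          # Cropping3D margin for the second skip
    return out, bridge, crop1, crop2

# With the 64-voxel input from the example, the cropped skip tensor is 18 voxels
# per axis -- matching the (None, 128, 18, 18, 18) shape in the error message.
print(unet_dims(64))   # (24, 18, 4, 16)
```

This confirms the cropped encoder tensor and the transposed-conv output do line up spatially, so the shape mismatch reported for the small-`c` `layers.concatenate` path is about unknown (`None`) static shapes, not actual sizes.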
tensorflow/tensorflow
tf.metrics mean metrics are miscalculated
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary): pip3
- TensorFlow version (use command below): 2.0.0
- Python version: 3.7.4
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

**Describe the current behavior**
`tf.metrics.MeanAbsoluteError` (and others) compute the mean of means.

**Describe the expected behavior**
They should compute the mean of all the data, to support iterating over an evaluation dataset.

**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem:

```python
import tensorflow as tf
m = tf.metrics.MeanAbsoluteError()
m.update_state(y_true=[0, 0], y_pred=[1, 1])
m.update_state(y_true=[0], y_pred=[2])
assert m.result() == 2
```

**Other info / logs**
The fix is simple: instead of the `mean_squared_error` function, use a new `sum_squared_error` function in the metric and pass that to `MeanMetricWrapper`.
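The discrepancy the report describes — a running mean of per-batch means versus a mean over all elements — can be shown without TensorFlow. This is a plain-Python illustration of the claim, treating the two `update_state` calls above as batches of absolute errors:

```python
def mean(xs):
    return sum(xs) / len(xs)

# absolute errors produced by the two update_state calls above
batches = [[1.0, 1.0], [2.0]]

mean_of_means = mean([mean(b) for b in batches])       # (1 + 2) / 2 = 1.5
pooled_mean = mean([e for b in batches for e in b])    # (1 + 1 + 2) / 3 ~= 1.33

print(mean_of_means, pooled_mean)
```

With equally sized batches the two quantities agree, which is why the difference only surfaces when an evaluation dataset is consumed in batches of unequal size (e.g. a ragged final batch).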
tensorflow/tensorflow
ReLU layer doesn't handle integer dtype
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Debian GNU/Linux 10 (buster)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.1.0-rc0-47-g064e153 2.1.0-rc1
- Python version: 3.7.6
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: none
- GPU model and memory: N/A

**Describe the current behavior**
An exception during `tf.keras.layers.ReLU` construction with an integer dtype and `max_value`.

**Describe the expected behavior**
The layer is properly constructed and functional.

**Code to reproduce the issue**

```python
import tensorflow as tf
inputs = tf.keras.layers.Input(shape=(), name='x', dtype='int64')
y = tf.keras.layers.ReLU(max_value=100, dtype='int64')(inputs)
```

**Other info / logs**

```
ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_util.py in _AssertCompatible(values, dtype)
    323   try:
--> 324     fn(values)
    325   except ValueError as e:

.../tensorflow_core/python/framework/tensor_util.py in inner(values)
    262   def inner(values):
    263     _ = [_check_failed(v) for v in nest.flatten(values)
--> 264          if not isinstance(v, expected_types)]
    265   return inner

.../tensorflow_core/python/framework/tensor_util.py in <listcomp>(.0)
    263     _ = [_check_failed(v) for v in nest.flatten(values)
--> 264          if not isinstance(v, expected_types)]
    265

.../tensorflow_core/python/framework/tensor_util.py in _check_failed(v)
    247   # it is safe to use here.
--> 248   raise ValueError(v)
    249

ValueError: 0.0

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
      1 inputs = tf.keras.layers.Input(shape=(), name='x', dtype='int64')
----> 2 y = tf.keras.layers.ReLU(max_value=100, dtype='int64')(inputs)

.../tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    771             not base_layer_utils.is_in_eager_or_tf_function()):
    772           with auto_control_deps.AutomaticControlDependencies() as acd:
--> 773             outputs = call_fn(cast_inputs, *args, **kwargs)
    774             # Wrap Tensors in `outputs` in `tf.identity` to avoid
    775             # circular dependencies.

.../tensorflow_core/python/keras/layers/advanced_activations.py in call(self, inputs)
    317                       alpha=self.negative_slope,
    318                       max_value=self.max_value,
--> 319                       threshold=self.threshold)
    320
    321   def get_config(self):

.../tensorflow_core/python/keras/backend.py in relu(x, alpha, max_value, threshold)
   4373   if clip_max:
   4374     max_value = _constant_to_tensor(max_value, x.dtype.base_dtype)
-> 4375     zero = _constant_to_tensor(0., x.dtype.base_dtype)
   4376     x = clip_ops.clip_by_value(x, zero, max_value)
   4377

.../tensorflow_core/python/keras/backend.py in _constant_to_tensor(x, dtype)
    676       A tensor.
    677   """
--> 678   return constant_op.constant(x, dtype=dtype)
    679
    680

.../tensorflow_core/python/framework/constant_op.py in constant(value, dtype, shape, name)
    256
--> 257   return _constant_impl(value, dtype, shape, name, verify_shape=False,
    258                         allow_broadcast=True)
    259
    260

.../tensorflow_core/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
    294       tensor_util.make_tensor_proto(
    295           value, dtype=dtype, shape=shape, verify_shape=verify_shape,
--> 296           allow_broadcast=allow_broadcast))
    297   dtype_value = attr_value_pb2.AttrValue(type=tensor_value.tensor.dtype)
    298   const_tensor = g.create_op_internal(  # pylint: disable=protected-access

.../tensorflow_core/python/framework/tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape, allow_broadcast)
    449       nparray = np.empty(shape, dtype=np_dt)
    450     else:
--> 451       _AssertCompatible(values, dtype)
    452       nparray = np.array(values, dtype=np_dt)
    453       # check to them.

.../tensorflow_core/python/framework/tensor_util.py in _AssertCompatible(values, dtype)
    329     else:
--> 330       raise TypeError("Expected %s, got %s of type '%s' instead." %
    331                       (dtype.name, repr(mismatch), type(mismatch).__name__))
    332
    333

TypeError: Expected int64, got 0.0 of type 'float' instead.
```

`tf.constant(0., dtype='int64')` fails as well, but with a different backtrace. tf-nightly (2.1.0.dev20191226) is affected too.
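The root cause visible in the traceback is that `backend.relu` builds its lower clip bound from the float literal `0.` while the layer dtype is `int64`. A stripped-down, TensorFlow-free sketch of that failure mechanism (`strict_constant` and `relu_clip_bounds` are hypothetical helper names; only the strict dtype check from `make_tensor_proto` is modelled):

```python
def strict_constant(value, dtype):
    # mimic make_tensor_proto's compatibility check: an integer dtype
    # rejects a Python float outright instead of casting it
    if dtype.startswith("int") and isinstance(value, float):
        raise TypeError(
            f"Expected {dtype}, got {value!r} of type '{type(value).__name__}' instead.")
    return value

def relu_clip_bounds(max_value, dtype):
    mv = strict_constant(max_value, dtype)   # an integer max_value passes...
    zero = strict_constant(0.0, dtype)       # ...but the float literal 0. does not
    return zero, mv

try:
    relu_clip_bounds(100, "int64")
except TypeError as e:
    print(e)   # Expected int64, got 0.0 of type 'float' instead.
```

Under this reading, casting the zero constant with the layer's dtype (or casting the inputs to float before the layer) would sidestep the crash, which matches the observation that `tf.constant(0., dtype='int64')` fails on its own.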
tensorflow/tensorflow
"x/Unknown" while training the first epoch
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): code was run on Google Colab; my machine is Windows 10 version 1903, if that matters
- TensorFlow installed from (source or binary): no knowledge; used the already installed version on Google Colab
- TensorFlow version (use command below): v2.1.0-rc1-0-g064e1535a7 2.1.0-rc1
- Python version: 3.6.9
- GCC/Compiler version (if compiling from source): 8.3.0
- GPU model and memory: no knowledge; used the given Google Colab hardware

**Describe the current behavior**
When fitting the model, during the first epoch it counts steps like `337/Unknown`. The `Unknown` stays there until the first epoch's fitting is over at step 582. The issue fixes itself once the first epoch is done: the text turns into `582/582`, and it shows like `x/582` on the next epochs.

**Describe the expected behavior**
It should say `x/582` during the first epoch too.

**Code to reproduce the issue**
Try the "Train the model" part of the tutorial; you should encounter this problem when you start fitting the model.
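What the report describes is the progress bar printing a placeholder until the epoch length is known, since a `tf.data` pipeline may not expose its cardinality before one full pass. A plain-Python sketch of that display logic (a hypothetical function, not Keras' actual implementation; supplying `steps_per_epoch` to `fit`, or a dataset with known cardinality, is what provides `total` from the start):

```python
def format_progress(step, total=None):
    # Until the epoch length is known, show "step/Unknown"; once one full
    # epoch has run (or the length was supplied), show "step/total".
    target = str(total) if total is not None else "Unknown"
    return f"{step}/{target}"

print(format_progress(337))        # 337/Unknown  (first epoch, length not yet known)
print(format_progress(337, 582))   # 337/582      (subsequent epochs)
```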
tensorflow/tensorflow
Bijectors crash TF2.1 autograph in distribute mirrored multi-GPU mode
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.3 LTS
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.1.0-rc1-58-g9837ece 2.1.0-rc2 (`python3 -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`)
- Python version: Python 3.6.8
- CUDA/cuDNN version: driver version 440.33.01, CUDA version 10.2, cuDNN 7.6.2
- GPU model and memory: Tesla V100-SXM2, 16 GB

**Describe the current behavior**
I'm using bijectors as a flexible prior for a VAE in TF2.1 autograph distribute mirrored mode (L1152). I am getting `google.protobuf.message.DecodeError: Error parsing message` when running with multiple GPUs, but not when running with a single GPU.

```python
bijectors = []
for i in range(16):
    bijectors.append(tfb.MaskedAutoregressiveFlow(
        shift_and_log_scale_fn=tfb.masked_autoregressive_default_template(
            hidden_layers=[1024, 1024], name=scope + '/maf_' + str(i))))
    bijectors.append(tfb.BatchNormalization(
        batchnorm_layer=tf.layers.BatchNormalization(name=scope + '/batch_norm_' + str(i)),
        name=scope + '/batch_norm_bijector_' + str(i)))
    permutation = tf.get_variable('permutation_' + str(i),
                                  initializer=np.random.permutation(out_channels).astype('int32'),
                                  trainable=False)
    bijectors.append(tfb.Permute(permutation))
flow_bijector = tfb.Chain(list(reversed(bijectors[:-1])))
```

(L190)

**Describe the expected behavior**
Should not crash.

**Code to reproduce the issue**
TF2.x code (L190).

**Other info / logs**

```
Traceback (most recent call last):
  File "main.py", line 134, in <module>
    main()
  File "main.py", line 121, in main
    gan.train()
  File "/app/home/ubuntu/SPADE-TensorFlow/tf2/SPADE.py", line 1379, in train
    train_loop()
  File "/app/home/ubuntu/SPADE-TensorFlow/tf2/SPADE.py", line 1336, in train_loop
    counter, result_inputs, result_loss_det, result_outputs_det, result_outputs_resampled_det, result_outputs_random_det, result_outputs_random_gen_det, train_det_grads_both, global_step = self.train_main(inputs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 568, in __call__
    result = self._call(*args, **kwds)
  File ".../tensorflow_core/python/eager/def_function.py", line 615, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File ".../tensorflow_core/python/eager/def_function.py", line 497, in _initialize
    *args, **kwds)
  File ".../tensorflow_core/python/eager/function.py", line 2389, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File ".../tensorflow_core/python/eager/function.py", line 2703, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File ".../tensorflow_core/python/eager/function.py", line 2599, in _create_graph_function
    shared_func_graph=False)
  File ".../tensorflow_core/python/eager/function.py", line 1511, in __init__
    func_graph, attrs, self._garbage_collector)
  File ".../tensorflow_core/python/eager/function.py", line 601, in __init__
    self._func_graph, inputs, self._func_graph.outputs, attrs)
  File ".../tensorflow_core/python/eager/function.py", line 466, in __init__
    function_def.ParseFromString(compat.as_bytes(proto_data))
google.protobuf.message.DecodeError: Error parsing message
```

Attached: train_celebamask-hq_tf21_4xgpu_maf.log
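Unrelated to the crash itself, the `list(reversed(bijectors[:-1]))` in the snippet reflects how `tfb.Chain` composes: the last bijector in the list is applied first. A TensorFlow-free sketch of that convention (plain function composition, my own illustration):

```python
def chain(fns):
    # Like tfb.Chain: forward(x) applies the LAST function in fns first,
    # i.e. chain([f, g])(x) == f(g(x)).
    def forward(x):
        for f in reversed(fns):
            x = f(x)
        return x
    return forward

inc_after_double = chain([lambda x: x + 1, lambda x: x * 2])
print(inc_after_double(3))   # 3 * 2 = 6, then 6 + 1 = 7
```

Reversing a flow built front-to-back therefore makes the first-appended bijector act first in the forward pass, which is the usual idiom when stacking MAF layers.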
tensorflow/tensorflow
VGG19 preprocess_input doesn't work in TF2.1 autograph mode
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.3 LTS
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.1.0-rc1-58-g9837ece 2.1.0-rc2 (`python3 -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`)
- Python version: Python 3.6.8
- CUDA/cuDNN version: driver version 440.33.01, CUDA version 10.2, cuDNN 7.6.2
- GPU model and memory: Tesla V100-SXM2, 16 GB

**Describe the current behavior**
Running `tensorflow.keras.applications.vgg19.preprocess_input` inside a `tf.function` results in an exception:

```
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: VGGLoss/Const:0
```

**Describe the expected behavior**
Should work the same as in TF1.x (L15).

**Code to reproduce the issue**

```python
self.vgg_loss = VGGLoss()                                        # (L631)
g_nondet_vgg_loss = self.vgg_loss(fake_nondet_x_output, real_x)  # (L776)
g_det_vgg_loss = self.vgg_loss(real_x, fake_det_x_stats[0][0])   # (L784)
# this is the line that fails (L15):
x_vgg, y_vgg = self.vgg(preprocess_input(x)), self.vgg(preprocess_input(y))
```

I also tried to patch the code for `preprocess_input` myself (L62); it somewhat works, but judging by the scale of the VGG loss, some input normalization is off.

**Other info / logs**

```
Traceback (most recent call last):
  File "main.py", line 134, in <module>
    main()
  File "main.py", line 121, in main
    gan.train()
  File "/app/home/ubuntu/SPADE-TensorFlow/tf2/SPADE.py", line 1180, in train
    build()
  File "/home/ubuntu/.local/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 568, in __call__
    result = self._call(*args, **kwds)
  File ".../tensorflow_core/python/eager/def_function.py", line 632, in _call
    return self._stateless_fn(*args, **kwds)
  File ".../tensorflow_core/python/eager/function.py", line 2363, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File ".../tensorflow_core/python/eager/function.py", line 1611, in _filtered_call
    self.captured_inputs)
  File ".../tensorflow_core/python/eager/function.py", line 1692, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File ".../tensorflow_core/python/eager/function.py", line 545, in call
    ctx=ctx)
  File ".../tensorflow_core/python/eager/execute.py", line 76, in quick_execute
    raise e
  File ".../tensorflow_core/python/eager/execute.py", line 61, in quick_execute
    num_outputs)
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: VGGLoss/Const:0
```
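On the "some input normalization is off" point: VGG19's `preprocess_input` uses the 'caffe' convention, i.e. an RGB→BGR channel flip plus subtraction of the ImageNet channel means, with no scaling into [0, 1]. A dependency-free, per-pixel sketch of that convention (my own reimplementation for illustration, not the patched code from the report):

```python
# ImageNet means for the B, G, R channels, as used by Keras' 'caffe' mode
IMAGENET_BGR_MEAN = (103.939, 116.779, 123.68)

def vgg_preprocess_pixel(rgb):
    """Apply VGG 'caffe' preprocessing to one (R, G, B) pixel in [0, 255]."""
    bgr = (rgb[2], rgb[1], rgb[0])                      # RGB -> BGR
    return tuple(c - m for c, m in zip(bgr, IMAGENET_BGR_MEAN))

# A pixel equal to the channel means maps exactly to (0, 0, 0):
print(vgg_preprocess_pixel((123.68, 116.779, 103.939)))
```

A home-grown patch that scales to [0, 1] or [-1, 1] instead would feed the pretrained VGG inputs roughly 100x smaller than it expects, which would show up exactly as a VGG loss with the wrong scale.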
tensorflow/tensorflow
ValueError: Flattening a PerReplica to components is not supported in replica context
Bug
**System information**
- OS Platform and Distribution: Windows
- TensorFlow installed from: conda
- TensorFlow version: 2.0
- Python version: 3.7.4
- CUDA/cuDNN version: 10.2 / 7.6
- GPU model and memory: 2x NVIDIA RTX 2070S, 8 GB

**Describe the current behavior**
I followed the distributed training tutorial to change my code (custom model, custom training loop), but when I run the script it shows the mistake:

```
ValueError: Flattening a PerReplica to components is not supported in replica context.
```

I don't understand why this happens: it can run in the train step, but it can't run in the test step.

**Describe the expected behavior**

**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem:

```python
# I used this strategy before, but it makes the same mistake:
# strategy = tf.distribute.MirroredStrategy(
#     cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.ReductionToOneDevice(reduce_to_device="cpu:0"))

def train_step_fn(rgb, spec):
    with tf.GradientTape() as tape:
        fake_spec = model(rgb, training=True)
        loss = compute_loss(spec, fake_spec)
    gradients = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
    # update metrics
    rmse1.update_state(spec, fake_spec)
    rmse2.update_state(spec, fake_spec)
    rrmse1.update_state(spec, fake_spec)
    rrmse2.update_state(spec, fake_spec)
    sam.update_state(spec, fake_spec)
    return loss

def test_step_fn(rgb, spec):
    fake_spec = model(rgb, training=False)
    loss = loss_object(spec, fake_spec)
    # update metrics
    rmse1.update_state(spec, fake_spec)
    rmse2.update_state(spec, fake_spec)
    rrmse1.update_state(spec, fake_spec)
    rrmse2.update_state(spec, fake_spec)
    sam.update_state(spec, fake_spec)
    return loss

@tf.function
def distributed_train_step(rgb, spec):
    per_replica_losses = strategy.experimental_run_v2(train_step_fn, args=(rgb, spec))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

@tf.function
def distributed_test_step(rgb, spec):
    return strategy.experimental_run_v2(test_step_fn, args=(rgb, spec))

for epoch in range(parser.epochs):
    # train
    for step, (rgb, spec) in enumerate(data[0]):  # train
        mean_loss = distributed_train_step(rgb, spec)
        step += 1
        ckpt.step.assign(step)
        if step == 50:
            break
    # val
    rmse1.reset_states()
    rmse2.reset_states()
    rrmse1.reset_states()
    rrmse2.reset_states()
    sam.reset_states()
    for step, (rgb, spec) in enumerate(data[1]):  # test
        mean_loss = distributed_test_step(rgb, spec)
```

**Other info / logs**

```
ValueError: in converted code:

    mytraint.py:163 distributed_test_step  *
        return strategy.experimental_run_v2(test_step_fn, args=(rgb, spec))
    C:\Users\zhangstation\Anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\distribute\distribute_lib.py:760 experimental_run_v2
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    mytraint.py:144 test_step_fn
        loss = loss_object(spec, fake_spec)
    ...\tensorflow_core\python\keras\losses.py:125 __call__
        with K.name_scope(scope_name or self.__class__.__name__), graph_ctx:
    ...\lib\contextlib.py:112 __enter__
        return next(self.gen)
    ...\tensorflow_core\python\keras\utils\tf_utils.py:435 graph_context_for_symbolic_tensors
        if any(is_symbolic_tensor(v) for v in list(args) + list(kwargs.values())):
    ...\tensorflow_core\python\keras\utils\tf_utils.py:435 <genexpr>
        if any(is_symbolic_tensor(v) for v in list(args) + list(kwargs.values())):
    ...\tensorflow_core\python\keras\utils\tf_utils.py:345 is_symbolic_tensor
        return tensor._is_graph_tensor  # pylint: disable=protected-access
    ...\tensorflow_core\python\framework\composite_tensor.py:119 _is_graph_tensor
        components = self._type_spec._to_components(self)  # pylint: disable=protected-access
    ...\tensorflow_core\python\distribute\values.py:500 _to_components
        raise ValueError("Flattening a PerReplica to components is not supported in replica "

    ValueError: Flattening a PerReplica to components is not supported in replica context.
```
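For reference, the scaling convention behind the `ReduceOp.SUM` in `distributed_train_step` can be checked in plain Python: if each replica divides its summed per-example losses by the global batch size (as `tf.nn.compute_average_loss` does in the distributed training tutorial), then summing the per-replica results recovers the overall mean. A dependency-free sketch (my own illustration, not the reporter's code):

```python
def per_replica_scaled_loss(per_example_losses, global_batch_size):
    # each replica: sum of its per-example losses / GLOBAL batch size
    return sum(per_example_losses) / global_batch_size

def reduce_sum_across_replicas(replica_losses):
    # what strategy.reduce(tf.distribute.ReduceOp.SUM, ..., axis=None)
    # does to one scalar per replica
    return sum(replica_losses)

replica_batches = [[1.0, 2.0], [3.0, 4.0]]   # per-example losses on 2 replicas
global_batch = sum(len(b) for b in replica_batches)

total = reduce_sum_across_replicas(
    per_replica_scaled_loss(b, global_batch) for b in replica_batches)
print(total)   # 2.5 == mean of [1, 2, 3, 4]
```

Note that `distributed_test_step` above returns the raw `PerReplica` result without such a `strategy.reduce`, unlike the train step; mirroring the train step's reduction in the test step keeps the two paths consistent.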