repository | issue title | labels | body |
|---|---|---|---|
tensorflow/tensorflow | When subclassing the Model class, you should implement a call method | Bug | TF version: 2.1.0. Repro (Python): `import tensorflow as tf`; `def sub_test_model(): x_input = tf.keras.Input(5, name='input'); x = tf.keras.layers.Dense(8)(x_input); x = tf.keras.layers.Softmax()(x); return tf.keras.Model(x_input, x)`; `def test_create_model(): x_input = tf.keras.Input(3, name='input'); x = tf.keras.layers.Dense(5)(x_input); x = tf.keras.layers.Softmax()(x); x = sub_test_model()(x); return tf.keras.Model(x_input, x)`; `test_model = test_create_model(); test_model.save('checkpoint_test_model'); test_model_restored = tf.keras.models.load_model('checkpoint_test_model')`. Without the `sub_test_model` sub-model, save/load works fine; with the sub-model, loading raises the error shown in the attached image. |
tensorflow/tensorflow | Overflow in tf.keras.layers.experimental.preprocessing.Normalization | Bug | System information: Have I written custom code: yes (see code to reproduce below); OS: Windows 10 (10.0.18363.836); Mobile device: n/a; TensorFlow installed from: binary (conda); TensorFlow version: 2.1.0; Python version: 3.6.10; Bazel/GCC/CUDA/cuDNN/GPU: n/a. Current behavior: calling `norm.adapt(dataset)` on a `tf.keras.layers.experimental.preprocessing.Normalization` layer `norm` encounters overflow warnings. Expected behavior: mean and standard deviation are computed correctly. Standalone code to reproduce: `import numpy as np; import tensorflow as tf`; `def gen(): for i in range(2**13): array = np.random.random_sample(1024*1024*4).reshape(1024, 1024, 4).astype(np.float32); yield array * 1024  # exacerbates the issue`; `dataset = tf.data.Dataset.from_generator(gen, tf.float32, tf.TensorShape([1024, 1024, 4])); dataset = dataset.batch(4); norm = tf.keras.layers.experimental.preprocessing.Normalization(); norm.adapt(dataset)  # this ends with RuntimeWarnings`; `print(norm.mean)  # result is all inf`; `print(norm.variance)  # result is 0`. Other info/logs: `.../keras/layers/preprocessing/normalization.py:181: RuntimeWarning: divide by zero encountered in true_divide` (combined count); `normalization.py:190: RuntimeWarning: invalid value encountered` in reducing the variance contribution (accumulator for accumulator in accumulators); `normalization.py:187: RuntimeWarning: overflow encountered in square` and `invalid value encountered in multiply` (accumulator variance, `np.square(accumulator.mean - combined.mean)`). The count overflow could potentially be mitigated by changing the dtype here (L158) to int64. |
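The count overflow the report describes can be sketched with NumPy alone. This is a hypothetical reconstruction of the accumulator arithmetic, not the actual normalization.py code: each adapted sample holds 1024*1024*4 elements, 2**13 samples are yielded, and an int32 running count wraps modulo 2**32 (here exactly to 0, consistent with the divide-by-zero warning):

```python
import numpy as np

# Hypothetical sketch of the element count kept by the adapt() accumulator.
per_sample = 1024 * 1024 * 4        # 4_194_304 elements per sample
num_samples = 2 ** 13               # 8192 samples, as in gen()

# Correct count, kept in int64 (the fix suggested for L158):
count64 = np.array([per_sample], dtype=np.int64) * num_samples

# An int32 accumulator wraps silently: 2**22 * 2**13 = 2**35, which is
# congruent to 0 modulo 2**32.
count32 = np.array([per_sample], dtype=np.int32) * np.int32(num_samples)

print(int(count64[0]))   # 34359738368
print(int(count32[0]))   # 0 -> dividing totals by this count yields inf
```

With the count wrapped to 0, the subsequent mean computation divides by zero, matching the `inf` mean and zero variance in the report.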
tensorflow/tensorflow | Variable names introduced by the profiler in the official documentation are inconsistent | Bug | In the official documentation, the variable names introduced for the profiler are inconsistent; please see the URL (screenshot attached). You can spot the problem with the two variables `tb_callback` and `tensorboard_callback`. |
tensorflow/tensorflow | tf.keras.Model.compile metrics do not respect masking since TF 2.2 | Bug | System information: Have I written custom code: yes; OS: Linux (Debian stable); TensorFlow installed from: binary; TensorFlow version: TF 2.2 and tf-nightly 2.3.0.dev20200529; Python version: 3.7. Current behavior: the metrics passed to `tf.keras.Model.compile` as `metrics` do not respect the model's mask. Expected behavior: up to TF 2.1, these metrics did respect the model's mask. Standalone code to reproduce (masks the input elements): `import numpy as np; import tensorflow as tf; print(tf.__version__); model = tf.keras.Sequential([tf.keras.layers.Masking(1.), tf.keras.layers.Dense(1)]); model.compile(optimizer=tf.optimizers.Adam(), loss=tf.losses.MeanSquaredError(), metrics=[tf.metrics.MeanSquaredError()], weighted_metrics=[tf.metrics.MeanSquaredError()]); print(model.train_on_batch(np.ones((1, 1, 1)), np.ones((1, 1, 1))))`. TensorFlow up to 2.1 masks the metrics in `metrics` as well, while TensorFlow 2.2 and later does not: TF 2.1 (Colab) prints [0.0, 0.0, 0.0]; TF 2.2 (Colab) prints [0.0, 1.0, 0.0]; tf-nightly 2.3.0.dev20200529 (Colab) prints [0.0, 1.0, 0.0]. Other info/logs: the logic applying the mask on master is at L404-L414; the metrics do not get called with `sample_weight`, yet that is the place where the mask is applied, in `_apply_mask`. In TF 2.1, on the other hand (L2000-L2012), the output mask is passed even for the unweighted metrics. |
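The pre-2.2 behaviour can be reproduced by hand by applying the mask to the metric yourself; the following is a plain-NumPy workaround sketch (`masked_mse` is a hypothetical helper, not a TensorFlow function):

```python
import numpy as np

def masked_mse(y_true, y_pred, mask):
    # Positions where mask is False are excluded from the metric,
    # mirroring how TF <= 2.1 applied the model's mask to metrics.
    se = (y_true - y_pred) ** 2
    return float(se[mask].mean()) if mask.any() else 0.0

# One sequence where the last step is masked out (e.g. by Masking(1.)):
y_true = np.array([1.0, 1.0, 1.0, 1.0])
y_pred = np.array([1.0, 1.0, 1.0, 3.0])
mask = np.array([True, True, True, False])

with_mask = masked_mse(y_true, y_pred, mask)           # 0.0, mask respected
without_mask = float(((y_true - y_pred) ** 2).mean())  # 1.0, mask ignored
print(with_mask, without_mask)
```

The unmasked value (1.0) matches the metric TF 2.2 reports in the repro, while the masked value (0.0) matches TF 2.1.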
tensorflow/tensorflow | Wrong output shape with Ellipsis in TFLite from Keras model | Bug | I'm working on converting a model to TFLite, starting from a Keras model, and I noticed that if I use Ellipsis to slice tensors, something weird happens once I load the TFLite model into the interpreter. Before allocating tensors through `interpreter.allocate_tensors()`, `interpreter.get_output_details()` gives the same output shape I get with my Keras model, but after the tensor allocation `interpreter.get_output_details()` gives an output shape different from the Keras model's. This does not happen if I use normal slicing instead of Ellipsis. I created a toy example to replicate this behavior: `import numpy as np; import tensorflow as tf; from tensorflow.keras import layers; from tensorflow import keras; input_layer = keras.Input(shape=(3, 4)); x = layers.Dense(10)(input_layer); x = layers.Dense(1)(x)[..., 0]  # with Ellipsis`; `model = keras.Model(input_layer, x); loss = keras.losses.MeanSquaredError(); optimizer = keras.optimizers.Adam(); model.compile(optimizer, loss=loss); model.fit(np.random.random((10, 3, 4)).astype(np.float32), np.ones((10, 3)).astype(np.float32), epochs=10, batch_size=5)`; `tflite_model_multi = tf.lite.TFLiteConverter.from_keras_model(model).convert(); with open('my_model.tflite', 'wb') as fin: fin.write(tflite_model_multi)`; `interpreter = tf.lite.Interpreter(model_path='my_model.tflite'); print(interpreter.get_output_details()); interpreter.allocate_tensors(); print(interpreter.get_output_details())`. Output: first `{'name': 'Identity', 'index': 14, 'shape': array([1, 3], dtype=int32), 'dtype': ..., 'quantization': (0.0, 0), ...}`, then after allocation the shape becomes `array([1, 1], dtype=int32)`. Instead, if in the same chunk of code I use the slice `x = layers.Dense(1)(x)[:, :, 0]` (without Ellipsis), it outputs `{'name': 'Identity', 'index': 14, 'shape': array([1, 3], dtype=int32), 'shape_signature': array([-1, 3], dtype=int32), ...}` both before and after allocation. I don't know if this behavior is intended, but one can easily spend a couple of hours debugging to find it. |
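For reference, the Ellipsis and explicit-slice forms are supposed to be equivalent here; a quick NumPy check, independent of TFLite, confirms both index expressions produce shape (batch, 3), which is why the post-allocation shape (1, 1) looks like a converter bug:

```python
import numpy as np

x = np.zeros((1, 3, 1))  # shape after Dense(1): (batch, 3, 1)

a = x[..., 0]    # Ellipsis form used in the report
b = x[:, :, 0]   # explicit-slice form that converts correctly

print(a.shape)   # (1, 3)
print(b.shape)   # (1, 3)
```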
tensorflow/tensorflow | tf.Module breaks gradient registration | Bug | System information: Have I written custom code: yes; OS: Colab default. Current behavior: when using operations within a `tf.Module` `__init__`, the gradients are broken. Expected behavior: the gradients should still be registered properly. Standalone code to reproduce the issue: |
tensorflow/tensorflow | tf.math.maximum example is written incorrectly | Bug | URL(s) with the issue: (see below). Description of issue (what needs changing): the example for `tf.math.maximum` is not written correctly. It is currently written in this manner: `x = tf.constant([0., 0., 0., 0.]); y = tf.constant([-2., 0., 2., 5.]); tf.math.maximum(x, y)`. Instead, it should have been written with the result shown: `x = tf.constant([0., 0., 0., 0.]); y = tf.constant([-2., 0., 2., 5.]); tf.math.maximum(x, y)` followed by `tf.Tensor([0. 0. 2. 5.], shape=(4,), dtype=float32)`. Submitting a pull request: no. |
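The corrected example's expected output is easy to sanity-check with NumPy, whose `np.maximum` has the same element-wise semantics as `tf.math.maximum`:

```python
import numpy as np

x = np.array([0.0, 0.0, 0.0, 0.0], dtype=np.float32)
y = np.array([-2.0, 0.0, 2.0, 5.0], dtype=np.float32)

# Element-wise maximum of the two arrays.
result = np.maximum(x, y)
print(result)   # [0. 0. 2. 5.]
```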
tensorflow/tensorflow | Missing positional argument error when deepcopying an LSTMCell | Bug | System information: Have I written custom code: yes; OS: Ubuntu 16.04; TensorFlow installed from: binary; TensorFlow version: 2.2.0; Python version: 3.5/3.6. Current behavior: an exception is raised when calling `copy.deepcopy` on a `tf.keras.layers.LSTMCell`. Expected behavior: Keras layers should support `copy.deepcopy` without error, since the same code works in TensorFlow 2.1. Standalone code to reproduce (Python): `import copy; import tensorflow as tf; cell = tf.keras.layers.LSTMCell(512); cell_copy = copy.deepcopy(cell)`. Other info/logs: Traceback (most recent call last): `File "test_deepcopy.py", line 5: cell_copy = copy.deepcopy(cell)`; the deepcopy recursion then descends repeatedly through `lib/python3.6/copy.py` (`deepcopy` line 180, `_reconstruct` line 280, `_deepcopy_dict` line 240, `_deepcopy_tuple` line 220) and finally fails at `File "lib/python3.6/copy.py", line 161, in deepcopy: y = copier(memo)` → `File "lib/python3.6/weakref.py", line 421, in __deepcopy__: new = self.__class__()` → `TypeError: __init__() missing 1 required positional argument: 'default_factory'`. |
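The contract this report relies on, namely that `copy.deepcopy` recursively duplicates a layer's nested mutable state without raising, can be shown with a stdlib-only sketch (`Cell` here is a stand-in class, not `LSTMCell`):

```python
import copy

class Cell:
    """Stand-in for a Keras cell: holds nested mutable state."""
    def __init__(self, units):
        self.units = units
        self.state = {"kernel": [0.0] * units}

cell = Cell(3)
clone = copy.deepcopy(cell)      # must not raise, and must copy state deeply

clone.state["kernel"][0] = 1.0   # mutating the clone...
print(cell.state["kernel"][0])   # 0.0 -- ...leaves the original untouched
```

The TF 2.2 regression breaks exactly this: the recursion hits a weakref-backed attribute whose class cannot be reconstructed without arguments.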
tensorflow/tensorflow | Simple Keras model predict call fails inside py_function | Bug | Please make sure this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub (tag: bug_template). System information: Have I written custom code: yes (created a simple example for reproducibility); OS: Ubuntu 18.04.3 LTS; TensorFlow installed from: binary (pip); TensorFlow version: v2.2.0-rc4-8-g2b96f3662b 2.2.0; Python version: 3.7.6; Bazel/GCC: n/a; CUDA/cuDNN version: CUDA 10.2, cuDNN 7.6.5; GPU model and memory: GeForce GTX 980 Ti, 6 GB. Current behavior: using the `predict` API of a Keras model inside a `py_function` throws the following error: `LookupError: No gradient defined for operation 'IteratorGetNext' (op type: IteratorGetNext)`. Standalone code to reproduce: `import numpy as np; import tensorflow as tf; inp = tf.keras.Input(shape=(5,)); out = tf.keras.layers.Dense(1)(inp); model = tf.keras.Model(inp, out); def outer_func(arr): def func(x): res = model.predict(x); return res; out = tf.py_function(func, [arr], tf.float32); return out; outer_func(np.random.rand(10, 5))`. |
tensorflow/tensorflow | TFLite converter Python API docs have stale code | Bug | Thank you for submitting a TensorFlow documentation issue; per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source; to get involved, read the documentation contributor guide. URL(s) with the issue: (TFLite converter Python API page). Description of issue (what needs changing): under "Convert a Keras model" the docs use `tf.gfile.GFile`, but that code has moved to `tf.io.gfile.GFile`. Clear description: this change should be made so that the code runs correctly. Correct links (is the link to the source code correct): yes. Parameters defined (all parameters defined and formatted correctly): yes. Returns defined (are return values defined): yes. Raises listed and defined (are the errors defined): no errors defined. Usage example (is there a usage example): no. Request visuals (if applicable): n/a. Submit a pull request (do you plan to also submit a pull request to fix the issue): see the docs contributor guide, docs API guide, and the docs style guide. |
tensorflow/tensorflow | Incorrect LayerNormalization description | Bug | Hi, I saw a description of LayerNormalization. According to the attached figure, LayerNorm should compute mean and variance across the dimensions (C, H, W), so I expected them to be 1-D tensors of length N. But the code of `tf.keras.layers.LayerNormalization` (L1131) computes mean and variance over only the channel dimension C, generating tensors of shape (N, H, W, 1). That seems incorrect; am I missing something? |
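The shape discrepancy the reporter describes can be made concrete with NumPy (assuming NHWC layout): normalizing across (H, W, C) yields one statistic per sample, while normalizing over only the channel axis yields per-position statistics:

```python
import numpy as np

N, H, W, C = 2, 4, 5, 3
x = np.random.rand(N, H, W, C)

# What the cited figure describes: statistics across C, H and W,
# i.e. one mean/variance per sample.
mean_fig = x.mean(axis=(1, 2, 3))
print(mean_fig.shape)                       # (2,)

# What the reporter says the code computes: statistics over the
# channel axis only, one value per spatial position.
mean_code = x.mean(axis=-1, keepdims=True)
print(mean_code.shape)                      # (2, 4, 5, 1)
```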
tensorflow/tensorflow | Missing pre-processing for MobileNet v2 model training | Bug | Thank you for submitting a TensorFlow documentation issue; per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source; to get involved, read the documentation contributor guide. URL(s) with the issue: please see the documentation entry, for example L292. Description of issue (what needs changing): in the above codelabs tutorial we see images being rescaled to [0, 1] by dividing by 255. Since the pre-trained ImageNet weights were trained on images normalized to [-1, 1], ideally the tutorial should add that step to correctly leverage transfer learning. Clear description: so what is happening is that we create a TFLite model trained with [0, 1]-based preprocessing, while on the Android client we do [-1, 1]-based preprocessing before feeding the TFLite model. Can someone please clarify? (Remaining template items: correct links, parameters defined, returns defined, raises listed, usage example, requested visuals, submit a pull request: see the docs contributor guide, docs API guide, and the docs style guide.) |
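The two rescalings at issue are simple affine maps. A NumPy sketch (the helper names are hypothetical) shows the mismatch between the tutorial's [0, 1] preprocessing and the [-1, 1] input the pre-trained MobileNet v2 ImageNet weights expect:

```python
import numpy as np

def rescale_0_1(img):
    # What the codelab tutorial does: divide by 255.
    return img / 255.0

def rescale_pm1(img):
    # What the pre-trained weights expect: map [0, 255] to [-1, 1].
    return img / 127.5 - 1.0

img = np.array([0.0, 127.5, 255.0], dtype=np.float32)
print(rescale_0_1(img))  # [0.  0.5 1. ]
print(rescale_pm1(img))  # [-1.  0.  1.]
```

Feeding [0, 1] inputs to weights trained on [-1, 1] inputs shifts every activation, which is why the training-side and client-side preprocessing must agree.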
tensorflow/tensorflow | nnapi-reference does not output the same results as CPU for the style transfer app on Android (TensorFlow Lite) | Bug | I have altered this app significantly so that `StyleTransferModelExecutor` can configure the interpreter with finer granularity, for example `StyleExecutor(appContext, quant = true, device = Device.NNAPI, nnAcc = "nnapi-reference")`. I am more than willing to share all that code; however, the main point is that running inference with `style_predict_quantized_256.tflite` / `style_transfer_quantized_384.tflite` on CPU gives different results than nnapi-reference. This is how I set the options for nnapi-reference: `val tfliteOptions = Interpreter.Options(); var opt = NnApiDelegate.Options(); opt.setAcceleratorName("nnapi-reference"); tfliteOptions.addDelegate(NnApiDelegate(opt)); Interpreter(loadModelFile(context, modelName), tfliteOptions)`. I have an instrumented test that essentially runs noise through the model and compares the results, and the results are significantly different. I believe this is a bigger issue than just style transfer, and really relates to quantized models on NNAPI not agreeing with CPU; I just used this app as a basis for reproducibility. Feeding in noise content/style, CPU and GPU inference agree (cpuInference vs gpuInference), while nnapi-reference differs (nnInference). |
tensorflow/tensorflow | //tensorflow/python/compiler/xla:jit_test fails on s390x; LLVM support needs to be added | Bug | System information: Have I written custom code: n/a; OS: Linux Ubuntu 20.04; Mobile device: n/a; TensorFlow installed from: source; TensorFlow version: v2.2.0; Python version: 3.8.2; Bazel version: 2.0.0; GCC version: gcc (Ubuntu 9.3.0-10ubuntu2) 9.3.0; CUDA/cuDNN, GPU: n/a. Current behavior: I am building TensorFlow v2.2.0 on s390x (IBM Z architecture). When running the test case //tensorflow/python/compiler/xla:jit_test, I get the following error: `[ RUN ] CompilationEnabledInGradientTest.testCompilationGradientScopeNames_function ... XLA service 0x1fce270 initialized for platform Host (this does not guarantee that XLA will be used); devices: StreamExecutor device (0): Host, Default Version ... DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working ... [ OK ] ... testCompilationInGradient_function: 'z14' is not a recognized processor for this target (ignoring processor)` (repeated four times) `... F tensorflow/compiler/xla/service/llvm_ir/llvm_util.cc:252] Check failed: module->getDataLayout().isLittleEndian() == tensorflow::port::kLittleEndian (1 vs. 0). Fatal Python error: Aborted`. Similar error messages are also observed in other XLA-related test cases; please check below for the code to reproduce the error. Expected behavior: `module->getDataLayout().isLittleEndian()` should return 0 and the test cases should pass. Standalone code to reproduce: `import numpy as np; import tensorflow as tf; from tensorflow.python.framework.ops import disable_eager_execution; disable_eager_execution(); sess = tf.compat.v1.Session(); with sess.as_default(): jit_scope = tf.python.compiler.xla.jit.experimental_jit_scope; x = tf.constant(3); print(x.eval()); with jit_scope(): y = tf.constant(5); print(x.eval()); print(y.eval())`. The first two evaluations return 3, and the third evaluation fails and throws the error on s390x. Other info/logs: I dug into this issue and noticed the bug may be caused by the LLVM configuration. I checked the file third_party/llvm/llvm.autogenerated.BUILD and noticed that SystemZ, a target supported by LLVM, is not listed as a target in this BUILD file; I think this could cause LLVM not to support the s390x architecture correctly. Since this is an auto-generated file, I wonder how I should modify it and add support for s390x properly. |
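The failing CHECK compares the endianness of the LLVM module's data layout against TensorFlow's compile-time constant `kLittleEndian`; s390x is a big-endian host. A stdlib-only way to inspect what the host reports:

```python
import struct
import sys

# 'big' on s390x (IBM Z), 'little' on x86_64/aarch64.
print(sys.byteorder)

# The same fact via struct: does the native ('=') byte layout of the
# integer 1 match the explicit little-endian ('<') layout?
native_is_little = struct.pack("=I", 1) == struct.pack("<I", 1)
print(native_is_little)
```

On s390x `native_is_little` is False while the LLVM data layout in the log claims little-endian (1 vs. 0), which is exactly the mismatch the CHECK aborts on.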
tensorflow/tensorflow | Connecting to invalid output 163 of source node gru_1/while which has 163 outputs | Bug | System information: Have I written custom code: yes; OS: Ubuntu 16.04; TensorFlow installed from: binary; TensorFlow version: 2.2.0 (git version v2.2.0-rc4-8-g2b96f3662b); Python version: 3.7; CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5; GPU model and memory: GTX 1080 8 GB, 16 GB RAM. Current behavior: my code works great on 2.1.1 but does not work on 2.2.0 (error log 1 below). I found empirically that the problem appears if `dropout` or `recurrent_dropout` is used in a GRU layer; changing GRU to LSTM gives the same problem. I tried `tf.compat.v1.experimental.output_all_intermediates(True)` and `(False)`, with no effect on 2.2.0. It works only if I remove the `dropout` and `recurrent_dropout` options from the GRU layer and disable eager execution with `tf.compat.v1.disable_eager_execution()`; but if I remove dropout and eager is enabled, I get another error (error log 2 below). Standalone code to reproduce: test case with this problem (attached). Other info/logs, error log 1: `InvalidArgumentError` (raised from `session.py` `_do_call`, and again during handling of the first exception): `Node 'training/SGD/gradients/gradients/gru_1/while_grad/gru_1/while_grad': Connecting to invalid output 163 of source node gru_1/while which has 163 outputs. Try using tf.compat.v1.experimental.output_all_intermediates(True).` Error log 2: `tf.keras.utils.plot_model(model)` raises `AttributeError: 'dict' object has no attribute 'name'` in `vis_utils.py` `model_to_dot` (line 143, `layer_name = layer.name`; line 142 appends a wrapped layer's label to the node's label if it exists); and `model.fit(parse_alldata_dataset, steps_per_epoch=1000, epochs=100)` raises, in user code, through `training.py:571 train_function` → `distribute_lib.py` (`run`, `call_for_each_replica`) → `training.py:543 train_step` (`self.compiled_metrics.update_state(y, y_pred, sample_weight)`) → `compile_utils.py:391 update_state` / `:322 _build` → `nest.py` (`map_structure_up_to`) → `compile_utils.py:421 _get_metric_objects` → `compile_utils.py:442 _get_metric_object` (`y_t_rank = len(y_t.shape.as_list())`): `AttributeError: 'NoneType' object has no attribute 'shape'`. |
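For context on what the offending options compute (this sketches the dropout arithmetic itself, not the graph-construction bug): dropout inside recurrent layers is the usual inverted dropout, shown here in NumPy with an assumed rate of 0.5:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def inverted_dropout(x, rate):
    # Zero a random subset of units and scale the survivors by
    # 1/(1 - rate) so the expected activation is unchanged.
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones((4, 8))
y = inverted_dropout(x, rate=0.5)
# Every output is either exactly 0 (dropped) or 2.0 (kept and rescaled).
print(sorted(set(np.unique(y))))
```

In graph mode these masks become extra intermediate tensors inside the `while` loop, which is consistent with the bug only appearing when `dropout`/`recurrent_dropout` is set.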
tensorflow/tensorflow | Model cannot be saved because the input shapes have not been set | Bug | Hello, I'm following "Save and serialize subclassed models / whole-model saving and loading". When I run the following code (Python): `class CustomModel(tf.keras.Model): def __init__(self, hidden_units): super(CustomModel, self).__init__(); self.dense_layers = [tf.keras.layers.Dense(u) for u in hidden_units]; def call(self, inputs): x = inputs; for layer in self.dense_layers: x = layer(x); return x`; `model = CustomModel([16, 16, 10])`; `# build the model by calling it`; `input_arr = tf.random.uniform((1, 5)); outputs = model(input_arr); model.save('my_custom_model')`, I get this error: `ValueError: Model <__main__.CustomModel object at 0x7f96797a2c10> cannot be saved because the input shapes have not been set. Usually, input shapes are automatically determined from calling .fit() or .predict(). To manually set the shapes, call model._set_inputs(inputs).` But when I change the code as follows: `model = CustomModel([16, 16, 10]); input_arr = tf.random.uniform((1, 5)); outputs = model(input_arr); model._set_inputs(input_arr)  # add this line`; `model.save('my_custom_model')`, it runs without any error. What could be the possible cause? Do I need to add `model._set_inputs(input_arr)` explicitly? I'm using Ubuntu 20.04 LTS, a conda environment, TensorFlow 2.1.1. Thanks. |
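The error message reflects Keras' deferred-build behaviour: a subclassed model only learns its input shape from being called (or from an explicit hint such as the private `_set_inputs`). A minimal stdlib sketch of that contract (`LazyModel` is a hypothetical stand-in, not the Keras implementation):

```python
class LazyModel:
    """Records its input shape on first call; saving before that fails."""

    def __init__(self):
        self.input_shape = None

    def __call__(self, x):
        if self.input_shape is None:
            self.input_shape = (len(x),)   # shape inferred from first input
        return x

    def save(self):
        if self.input_shape is None:
            raise ValueError(
                "input shapes have not been set; call the model first")
        return {"input_shape": self.input_shape}

model = LazyModel()
model([0.0] * 5)      # the call records the shape...
print(model.save())   # {'input_shape': (5,)} ...so saving now succeeds
```

In the report, calling `model(input_arr)` should already have set the shape, so needing `_set_inputs` on 2.1.1 points at the shape-recording step not firing for this subclassed model.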
tensorflow/tensorflow | Keras mixed-precision policy and Horovod are incompatible | Bug | cc @reedwm @nluehr @tgaddair @cliffwoolley @pkanwar23. It seems that the Keras mixed-precision policy is currently in a broken state when used combined with Horovod. If you use the test repository, you can reproduce the issue; you will need 2 GPUs. The issue comes from the sequence of operations: 1. set visible devices; 2. define the Keras policy. Reproducible test case (bash): `#!/usr/bin/env bash; export HOROVOD_GPU_ALLREDUCE=NCCL; export HOROVOD_GPU_BROADCAST=NCCL; export HOROVOD_NCCL_INCLUDE=/usr/include; export HOROVOD_NCCL_LIB=/usr/lib/x86_64-linux-gnu; export HOROVOD_NCCL_LINK=SHARED; export HOROVOD_WITHOUT_PYTORCH=1; export HOROVOD_WITHOUT_MXNET=1; export HOROVOD_WITH_TENSORFLOW=1; export HOROVOD_WITH_MPI=1; export HOROVOD_BUILD_ARCH_FLAGS="-march=sandybridge -mtune=broadwell"; pip uninstall horovod -y; pip install --no-cache --no-cache-dir horovod==0.19.3; git clone <repo>; cd tf_hvd_stability_test; pip install -r requirements.txt; pytest` gives the following results (platform linux, Python 3.6.9, pytest 5.4.2, plugin typeguard 2.7.1; 18 items collected): all 1-GPU variants pass (tests 00-02 rn50_gradient_tape, 06-08 keras_sequential_ctl_gradient_tape, 12-14 keras_fit_compile_gradient_tape), the 2-GPU variants without AMP pass (03, 09, 15), and every 2-GPU AMP variant FAILS: 04 rn50_gradient_tape_hvd_amp_2gpus, 05 rn50_gradient_tape_hvd_amp_fp16_all_reduce_2gpus, 10 keras_sequential_ctl_gradient_tape_hvd_amp_2gpus, 11 keras_sequential_ctl_gradient_tape_hvd_amp_fp16_all_reduce_2gpus, 16 keras_fit_compile_gradient_tape_hvd_amp_2gpus, 17 keras_fit_compile_gradient_tape_hvd_amp_fp16_all_reduce_2gpus. If we look at the traceback, the error is quite explicit (Python): `File "example_tf2_fitcompile_gradienttape.py", line 49: policy = mixed_precision.Policy('mixed_float16')` → `.../keras/mixed_precision/experimental/policy.py", line 349, in __init__ (skip_local=True)` → `.../device_compatibility_check.py", line 157, in log_device_compatibility_check: device_attr_list = device_lib.list_local_devices()` → `.../client/device_lib.py", line 43, in list_local_devices` → `RuntimeError: TensorFlow device (GPU:0) is being mapped to multiple CUDA devices (0 now, and 1 previously), which is not supported. This may be the result of providing different GPU configurations (ConfigProto.gpu_options, for example different visible_device_list) when creating multiple Sessions in the same process. This is not currently supported.` @reedwm: so far I can see that you found a somewhat linked issue; unfortunately this is not solved and is something really limiting. To hot-fix the issue you can just comment out these lines (L339-L341); however, I suppose you may want to fix the issue more cleanly. @pkanwar23: one more example of why we need the Horovod unit tests. |
tensorflow/tensorflow | Unexpected behaviour of tf.image.convert_image_dtype | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Colab. TensorFlow installed from (source or binary): binary. TensorFlow version: tf-nightly. Python version: 3.6.9.

Describe the current behavior: converting a tf.int32 tensor to tf.float32 seems to underflow, as shown in the code below. I think `c` should be the right result (correct me if I am wrong).

Describe the expected behavior: the tf.int32 tensor should be scaled to [0, 1] as documented.

Standalone code to reproduce the issue:

```python
a = tf.reshape(tf.range(0, 12), shape=(3, 4))
print(a)
print(tf.reduce_mean(a), "\n")

b = tf.image.convert_image_dtype(a, dtype=tf.float32)
print(b)
print(tf.reduce_mean(b), "\n")

c = a / tf.reduce_max(a)
print(c)
print(tf.reduce_mean(c))
```

Output:

```
tf.Tensor([[ 0  1  2  3] [ 4  5  6  7] [ 8  9 10 11]], shape=(3, 4), dtype=int32)
tf.Tensor(5, shape=(), dtype=int32)
tf.Tensor([[0.0000000e+00 4.6566129e-10 9.3132257e-10 1.3969839e-09]
           [1.8626451e-09 2.3283064e-09 2.7939677e-09 3.2596290e-09]
           [3.7252903e-09 4.1909516e-09 4.6566129e-09 5.1222742e-09]], shape=(3, 4), dtype=float32)
tf.Tensor(2.561137e-09, shape=(), dtype=float32)
tf.Tensor([[0.         0.09090909 0.18181818 0.27272727]
           [0.36363636 0.45454545 0.54545455 0.63636364]
           [0.72727273 0.81818182 0.90909091 1.        ]], shape=(3, 4), dtype=float64)
tf.Tensor(0.5, shape=(), dtype=float64)
```
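Editorial note: the reported `b` values are consistent with scaling by the *maximum of the integer dtype* rather than the maximum of the image. A minimal pure-Python sketch of that scaling (the function name and structure are illustrative, not TensorFlow's code) reproduces the `4.6566129e-10` seen above:

```python
# Sketch of integer -> float conversion as tf.image.convert_image_dtype
# documents it: integer images are assumed to span the full range of the
# integer dtype, so each value is divided by the dtype maximum.
INT32_MAX = 2**31 - 1  # scale factor for an int32 image

def convert_int32_image_to_float(values):
    """Scale raw int32 pixel values into [0, 1] using the dtype max."""
    return [v / INT32_MAX for v in values]

converted = convert_int32_image_to_float([0, 1, 2, 11])
# 1 / (2**31 - 1) ≈ 4.6566129e-10, matching the tensor output in the report
print(converted[1])
```

This is why values 0..11 map to tiny floats: they are treated as a fraction of 2^31 − 1, not of the per-image maximum that `c` computes.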
tensorflow/tensorflow | test_fit_with_ModelCheckpoint_with_dir_as_h5_filepath test failure in tensorflow/python/keras/callbacks_test | Bug | Hello. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): n/a. OS platform and distribution: Linux Ubuntu 18.04 x86_64. TensorFlow installed from (source or binary): source. TensorFlow version: v2.2.0. Python version: 3.6.9. Bazel version: 2.0.0. GCC/compiler version: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0. CUDA/cuDNN version: n/a. GPU model and memory: n/a.

Describe the current behavior: when running the test cases on 2.2.0, I encounter an error in tensorflow/python/keras/callbacks_test. The failing test log snippet is:

```
FAIL: test_fit_with_ModelCheckpoint_with_dir_as_h5_filepath (__main__.KerasCallbacksTest)
Traceback (most recent call last):
  File "/home/peterbao/.cache/bazel/_bazel_peterbao/c55b467aea9f5b6a0b36d1bc596dae4f/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/keras/callbacks_test.runfiles/org_tensorflow/tensorflow/python/keras/callbacks_test.py", line 821, in test_fit_with_ModelCheckpoint_with_dir_as_h5_filepath
    model.fit(train_ds, epochs=1, callbacks=[callback])
AssertionError: OSError not raised
```

Describe the expected behavior: test_fit_with_ModelCheckpoint_with_dir_as_h5_filepath and tensorflow/python/keras/callbacks_test should pass.

Standalone code to reproduce the issue: the test case test_fit_with_ModelCheckpoint_with_dir_as_h5_filepath appears to have been introduced earlier and seems unchanged after that, so I think the test case would still fail on master. To reproduce the failure, run tensorflow/python/keras/callbacks_test.

Other info / logs: I spent some time looking into the test case, and this is what I noticed:

```
(Pdb) list
1225              self.model.save(filepath, overwrite=True)
1226
1227            self._maybe_remove_file()
1228          except IOError as e:
1229            # `e.errno` appears to be `None` so checking the content of `e.args[0]`.
1230            if 'is a directory' in six.ensure_str(e.args[0]):
1231              raise IOError('Please specify a non-directory filepath for '
1232                            'ModelCheckpoint. Filepath used is an existing '
1233                            'directory: {}'.format(filepath))
1234
1235      def _get_file_path(self, epoch, logs):
(Pdb) p e.args[0]
"Unable to create file (unable to open file: name = '/tmp/tem49izq6ff/tmplrdc03ul/temp.h5', errno = 21, error message = 'Is a directory', flags = 13, o_flags = 242)"
```

It looks like the code above wants the error message to contain 'is a directory' (lowercase), but the OSError actually has 'Is a directory' (capitalized) as the error message. As a result, the more detailed error message is not output here, and that results in test_fit_with_ModelCheckpoint_with_dir_as_h5_filepath failing. Based on this information, I think the following can fix the problem:

```diff
diff --git a/tensorflow/python/keras/callbacks.py b/tensorflow/python/keras/callbacks.py
index bb9e61d01a..bfad1112d7 100644
--- a/tensorflow/python/keras/callbacks.py
+++ b/tensorflow/python/keras/callbacks.py
@@ -1227,7 +1227,7 @@ class ModelCheckpoint(Callback):
           self._maybe_remove_file()
         except IOError as e:
           # `e.errno` appears to be `None` so checking the content of `e.args[0]`.
-          if 'is a directory' in six.ensure_str(e.args[0]):
+          if 'Is a directory' in six.ensure_str(e.args[0]):
             raise IOError('Please specify a non-directory filepath for '
                           'ModelCheckpoint. Filepath used is an existing '
                           'directory: {}'.format(filepath))
```

Let me know if more information is needed. Thanks, Peter.
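Editorial note: the case mismatch described above is easy to reproduce with plain Python on Linux/macOS, where the OS error message for writing to a directory path is "Is a directory" (capital I), so an exact lowercase substring check fails while a normalized check succeeds. A minimal sketch (helper name is hypothetical, mirroring the idea of a case-insensitive fix):

```python
# Demonstrate the 'Is a directory' vs 'is a directory' pitfall.
import errno
import tempfile

def looks_like_directory_error(message):
    """Case-insensitive version of the substring check."""
    return "is a directory" in str(message).lower()

directory = tempfile.mkdtemp()
try:
    open(directory, "wb")  # writing to a directory path fails on Linux
except OSError as e:
    print(e.errno == errno.EISDIR)         # errno 21 ('Is a directory')
    print("is a directory" in str(e))      # exact lowercase match: False
    print(looks_like_directory_error(e))   # normalized match: True
```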
tensorflow/tensorflow | Mixing XLA and non-XLA autograph triggers retracing | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): true. OS platform and distribution: Linux (Colab; I'm not sure what the OS is). TensorFlow installed from (source or binary): Colab. TensorFlow version: 2.2.0.

Describe the current behavior: mixing XLA (`experimental_compile`) and non-XLA functions results in constant retracing.

Describe the expected behavior: functions inside an XLA function should inherit the option; XLA functions inside non-XLA ones shouldn't retrace. This is particularly important when relying on third-party libraries that make use of this functionality.

Standalone code to reproduce the issue:
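Editorial note: the repro code did not survive extraction. As a loose pure-Python analogue of the symptom (not TensorFlow internals; all names hypothetical), consider a trace cache keyed on an options object that is rebuilt on every call instead of being inherited: no call ever hits the cache, so the body is "traced" again each time.

```python
# Hypothetical analogue of retracing caused by a non-inherited option.
class CompileOptions:
    def __init__(self, xla):
        self.xla = xla
    # No __eq__/__hash__: two equal-looking options are distinct cache keys.

trace_cache = {}

def traced_call(fn, options):
    key = (fn, options)
    if key not in trace_cache:
        trace_cache[key] = fn  # stands in for an expensive trace
    return trace_cache[key]()

def inner():
    return 42

for _ in range(3):
    traced_call(inner, CompileOptions(xla=False))  # new options each call

print(len(trace_cache))  # three "traces" for three identical calls
```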
tensorflow/tensorflow | Loading TF SavedModel throws error in Keras but works in TF | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: (not specified). Mobile device: no. TensorFlow installed from (source or binary): binary. Python version: (not specified). CUDA/cuDNN version: unsure. GPU model and memory: unsure. TF version: v2.1.0-rc2-17-ge5bf8de410 2.1.0.

Describe the current behavior: when loading a model with sub/nested models in Keras, it throws an error saying the model must contain a `call` method. However, when loading the model directly through TensorFlow, it still works.

Describe the expected behavior: the model should successfully load.

Standalone code to reproduce the issue:

```python
# Assumes a model with submodels (saved via model.save) exists
import tensorflow as tf
model = tf.saved_model.load('saved_model')        # works
model = tf.keras.models.load_model('saved_model') # does not work
```

Other info / logs:

```
NotImplementedError                       Traceback (most recent call last)
<ipython-input> in <module>
      3 import tensorflow as tf
      4 model = tf.saved_model.load('saved_model')
----> 5 tf.keras.models.load_model('saved_model')

~/.local/share/virtualenvs/model-tby5gsp1/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py in load_model(filepath, custom_objects, compile)
    149     loader_impl.parse_saved_model(filepath)
--> 150     return saved_model_load.load(filepath, compile)

.../tensorflow_core/python/keras/saving/saved_model/load.py in load(path, compile)
---> 89   model = tf_load.load_internal(path, loader_cls=KerasObjectLoader)

.../tensorflow_core/python/saved_model/load.py in load_internal(export_dir, tags, loader_cls)
--> 551   loader = loader_cls(object_graph_proto, saved_model_proto, export_dir)

... (KerasObjectLoader.__init__ -> _finalize -> reconstruct_from_config ->
     _process_node -> layer.__call__) ...

.../tensorflow_core/python/keras/engine/network.py in call(self, inputs, training, mask)
    711     if not self._is_graph_network:
--> 712       raise NotImplementedError('When subclassing the `Model` class, you should '
    713                                 'implement a `call` method.')

NotImplementedError: When subclassing the `Model` class, you should implement a `call` method.
```
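Editorial note: the failure mode can be illustrated with a plain-Python analogue (hypothetical classes, not Keras internals): the base class only works when the subclass supplies `call`; a loading path that reconstructs the base class without the subclass's `call` raises `NotImplementedError`, exactly as in the traceback above.

```python
# Hypothetical sketch of the base-class/subclass contract behind the error.
class BaseModel:
    def __call__(self, inputs):
        return self.call(inputs)

    def call(self, inputs):
        raise NotImplementedError(
            "When subclassing the Model class, you should implement a call method."
        )

class DoublingModel(BaseModel):
    def call(self, inputs):
        return inputs * 2

print(DoublingModel()(3))   # 6: the subclass call() is found
try:
    BaseModel()(3)          # a reconstruction that lost call() fails
except NotImplementedError as e:
    print(type(e).__name__)
```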
tensorflow/tensorflow | DLL load failed while importing _pywrap_tensorflow_internal: a dynamic link library (DLL) initialization routine failed | Bug | Windows 10, Python 3.8.3, TensorFlow 2.2.0. I have installed it using `pip install tensorflow`. I run this code: `import tensorflow as tf`. I get this error:

```
Traceback (most recent call last):
  File "C:\Users\pratibha\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Users\pratibha\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Users\pratibha\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\Users\pratibha\AppData\Local\Programs\Python\Python38\lib\imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "C:\Users\pratibha\AppData\Local\Programs\Python\Python38\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed while importing _pywrap_tensorflow_internal: A dynamic link library (DLL) initialization routine failed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\pratibha\AppData\Roaming\JetBrains\PyCharmCE2020.1\scratches\textblob.py", line 1, in <module>
    import tensorflow as tf
  File "C:\Users\pratibha\AppData\Roaming\Python\Python38\site-packages\tensorflow\__init__.py", line 41, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "C:\Users\pratibha\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\__init__.py", line 50, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "C:\Users\pratibha\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\pywrap_tensorflow.py", line 69, in <module>
    raise ImportError(msg)
ImportError: (the traceback above, repeated)
ImportError: DLL load failed while importing _pywrap_tensorflow_internal: A dynamic link library (DLL) initialization routine failed.

Failed to load the native TensorFlow runtime. See ... for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.

Process finished with exit code 1
```
tensorflow/tensorflow | Cannot read from Google Storage with tf.io.gfile.GFile under intel-tensorflow 1.14.0 | Bug | Support for the Google Storage (gs://) protocol seems to be missing from intel-tensorflow 1.14.0, which is unexpected. I'm aware that this is no core TensorFlow issue, but I've seen that some engineers from Intel are active in this repository.

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Docker image intelaipg/intel-optimized-tensorflow:1.14.0-mkl-py3, Ubuntu 18.04.2 LTS. TensorFlow installed from (source or binary): binary. TensorFlow version: 1.14.0. Python version: 3.6.8. Bazel / GCC / CUDA / GPU: n/a. Version output: `v1.14.0-1-harden-0-g340d16ee58 1.14.0`.

Describe the current behavior: Google Storage doesn't seem to be supported for some reason.

```python
Python 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental)] on linux
>>> import tensorflow as tf
>>> tf.io.gfile.GFile('gs://some-bucket/test.txt').read()
# DeprecationWarning (redacted)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/lib/io/file_io.py", line 122, in read
    self._preread_check()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/lib/io/file_io.py", line 84, in _preread_check
    compat.as_bytes(self.__name), 1024 * 512)
tensorflow.python.framework.errors_impl.UnimplementedError: File system scheme 'gs' not implemented (file: 'gs://some-bucket/test.txt')
```

Describe the expected behavior: verified to work with 1.13.2:

```python
>>> import tensorflow as tf
>>> tf.io.gfile.GFile('gs://some-bucket/test.txt').read()
# DeprecationWarning (redacted)
'test\n'
```

Standalone code to reproduce the issue: see above. Other info / logs: n/a.
tensorflow/tensorflow | Wrong result when calling a multi-line lambda inside tf.function | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 18.04. TensorFlow installed from (source or binary): binary. TensorFlow version: v2.2.0-rc4-8-g2b96f3662b 2.2.0. Python version: 3.7.6.

Describe the current behavior: when decorating a function using tf.function that uses a lambda that extends to multiple lines, only the first line is considered. When not decorating the function using tf.function, the result is correct.

Describe the expected behavior: tf.function should not alter the function's behaviour. In the example below, it prints 1 instead of 0. Removing the tf.function decorator gives the correct result; moving the lambda onto a single line again gives the correct result.

Standalone code to reproduce the issue (operators reconstructed; the original formatting was lost in extraction):

```python
import tensorflow as tf

a = lambda y: (lambda x: x * y)(y) \
    - 1

@tf.function
def test_lambda():
    tf.print(a(1))

test_lambda()
```

Other info / logs: the issue was noticed after applying Black formatting, which split the lambda across lines and changed the code's output.
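Editorial note: evaluated eagerly in plain Python, the multi-line lambda behaves as expected, which isolates the bug to the tf.function tracing path. A quick sanity check (same caveat: operators reconstructed from the garbled report):

```python
# Eager evaluation of the multi-line lambda: a(1) is 0, whereas the
# report says the tf.function-wrapped call printed 1.
a = lambda y: (lambda x: x * y)(y) \
    - 1

print(a(1))  # 0
print(a(3))  # 8, i.e. 3*3 - 1
```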
tensorflow/tensorflow | TFLite conversion of Conv1D layer with dilation_rate > 1 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 18.04. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.2.0. Python version: 3.7.

Describe the current behavior: after converting a tf Conv1D op with dilation_rate > 1 to TFLite, the interpreter cannot allocate tensors:

```
RuntimeError: tensorflow/lite/kernels/space_to_batch_nd.cc:98 NumDimensions(op_context.input) != kInputDimensionNum (3 != 4). Node number 0 (SPACE_TO_BATCH_ND) failed to prepare.
```

Describe the expected behavior: the TFLite model should be able to be loaded and executed by the interpreter.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import *

def get_model():
    inputs = tf.keras.Input(shape=(10, 40))
    # No error when dilation_rate=1
    layer = Conv1D(32, 3, dilation_rate=2, padding='same', use_bias=False)(inputs)
    layer = GlobalMaxPooling1D()(layer)
    output = Dense(2)(layer)
    model = Model(inputs=inputs, outputs=output)
    return model

model = get_model()
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open('trained_model.tflite', 'wb').write(tflite_model)
interpreter = tf.lite.Interpreter(model_path='trained_model.tflite')
interpreter.allocate_tensors()
```

Other info / logs: the problem does not occur when dilation_rate=1.
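Editorial note: the `SPACE_TO_BATCH_ND` in the error is presumably there because TensorFlow lowers dilated convolutions to a SpaceToBatchND / Conv / BatchToSpaceND sandwich, and the TFLite SPACE_TO_BATCH_ND kernel expects a 4-D input while a Conv1D input is 3-D (hence `3 != 4`). For background, here is what `dilation_rate=2` means for a 1-D convolution, as a self-contained pure-Python sketch ("valid" padding, single channel; illustrative only, not the TFLite kernel):

```python
# Dilated 1-D convolution: a dilation of d samples the input at taps
# i, i+d, i+2d, ... instead of consecutive positions.
def conv1d(signal, kernel, dilation=1):
    span = (len(kernel) - 1) * dilation + 1  # receptive-field width
    out = []
    for start in range(len(signal) - span + 1):
        acc = 0
        for k, w in enumerate(kernel):
            acc += w * signal[start + k * dilation]  # skip dilation-1 samples
        out.append(acc)
    return out

signal = [1, 2, 3, 4, 5, 6]
kernel = [1, 0, -1]
print(conv1d(signal, kernel, dilation=1))  # [-2, -2, -2, -2]
print(conv1d(signal, kernel, dilation=2))  # [-4, -4]
```

The wider receptive field with dilation=2 (taps at i, i+2, i+4) also explains the shorter output.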
tensorflow/tensorflow | tf.debugging.assert_near raises InvalidArgumentError for complex tensors | Bug | System information: Linux (different setups; other OSes not tested). tensorflow-cpu installed from pip. TensorFlow version: 2.2.0. Python version: 3.8.3.

Describe the current behavior: tf.debugging.assert_near raises InvalidArgumentError for complex64 or complex128 inputs, although the documentation says complex inputs are allowed.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
a = tf.constant(1j, dtype=tf.complex64)
b = tf.constant(1j, dtype=tf.complex64)
tf.debugging.assert_near(a, b)
```

Output (condensed traceback):

```
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>
----> 1 tf.debugging.assert_near(a, b)

/usr/lib/python3.8/site-packages/tensorflow/python/ops/check_ops.py in assert_near_v2(x, y, rtol, atol, message, summarize, name)
--> 758   return assert_near(x=x, y=y, rtol=rtol, atol=atol, summarize=summarize,
    759                      message=message, name=name)

/usr/lib/python3.8/site-packages/tensorflow/python/ops/check_ops.py in assert_near(x, y, rtol, atol, data, summarize, message, name)
    833     x = ops.convert_to_tensor(x, name='x') ...
--> 835     tol = atol + rtol * math_ops.abs(y)
    836     diff = math_ops.abs(x - y)
    837     condition = math_ops.reduce_all(math_ops.less(diff, tol))

... (math_ops binary_op_wrapper -> _mul_dispatch -> gen_math_ops.mul ->
     ops.raise_from_not_ok_status) ...

InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a complex64 tensor but is a float tensor [Op:Mul]
```
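Editorial note: the failing line is `tol = atol + rtol * math_ops.abs(y)`. Since `abs()` of a complex tensor is real-valued, the multiplication presumably mixes the complex-cast `rtol` with a real `abs(y)`, producing the dtype mismatch. The documented check itself is well defined for complex inputs, because both sides become real once magnitudes are taken. A pure-Python sketch of the intended semantics (hypothetical helper, not TF's code):

```python
# |x - y| <= atol + rtol * |y|, which is real-valued even for complex x, y.
def assert_near(x, y, rtol=1e-6, atol=1e-6):
    tol = atol + rtol * abs(y)   # abs() of a complex number is a float
    diff = abs(x - y)
    if not diff <= tol:
        raise AssertionError(f"|x - y| = {diff} > tol = {tol}")

assert_near(1j, 1j)                   # equal complex values: passes
assert_near(1 + 1j, 1 + 1.0000001j)   # near: passes
```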
tensorflow/tensorflow | Formula error display in RMSprop | Bug | URL(s) with the issue: (docs page for RMSprop). Description of issue (what needs changing): the subindices in the formula are not displayed correctly. Clear description: as an example, `MeanSquare_t` appears like `MeanS quaredt`, and so on.
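Editorial note: the formula that renders badly is presumably the standard RMSprop update; a correctly typeset sketch (symbols as in common course notes, not necessarily the exact docs wording):

```latex
\mathrm{MeanSquare}_t = \rho \,\mathrm{MeanSquare}_{t-1} + (1-\rho)\, g_t^2,
\qquad
\Delta w_t = -\frac{\eta}{\sqrt{\mathrm{MeanSquare}_t + \epsilon}}\, g_t
```

Here \(g_t\) is the gradient at step \(t\), \(\rho\) the decay rate, \(\eta\) the learning rate, and \(\epsilon\) a small constant for numerical stability.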
tensorflow/tensorflow | Convert to TFLite issue | Bug | System information: OS platform and distribution: elementary OS 5.1.3 Hera (based on Ubuntu 18.04 LTS). TensorFlow installed from (source or binary): binary (pip). TensorFlow version (or github SHA if from source): 1.14.0.

Provide the text output from tflite_convert:

```
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST, CONV_2D, ELU, FULLY_CONNECTED, L2_NORMALIZATION, LESS, MAX_POOL_2D, MUL, RANGE, RESHAPE, SHAPE, STRIDED_SLICE. Here is a list of operators for which you will need custom implementations: Enter, Exit, LoopCond, Merge, Switch, TensorArrayGatherV3, TensorArrayReadV3, TensorArrayScatterV3, TensorArraySizeV3, TensorArrayV3, TensorArrayWriteV3.
```

Standalone code to reproduce the issue (flags reconstructed from the garbled command):

```
toco --graph_def_file=out/saved_model.pb \
     --input_format=TENSORFLOW_GRAPHDEF \
     --output_format=TFLITE \
     --output_file=out/saved_model.tflite \
     --inference_input_type=QUANTIZED_UINT8 \
     --input_arrays=image \
     --output_arrays=features \
     --input_shapes=1,128,64,3 \
     --std_dev_values=127 \
     --mean_values=128 \
     --enable_select_tf_ops
```

Environment capture script results (condensed): Python 2.7.17 (CPython, GCC 7.5.0); OS Linux, kernel 5.3.0-46-generic x86_64, elementary 5.1.3 Hera; not in Docker; compiler gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0; pip packages: numpy 1.16.6, protobuf 3.11.3, tensorflow 1.14.0, tensorflow-estimator 1.14.0; no virtualenv; tf.VERSION = 1.14.0, tf.GIT_VERSION = v1.14.0-rc1-22-gaf24dc91b5, tf.COMPILER_VERSION = 4.8.5; sanity check: array(1, dtype=int32).

The sanity check output is followed by a long ld.so (LD_DEBUG) library-search trace (libc, libpthread, libffi, numpy/OpenBLAS/libgfortran, libtensorflow_framework, etc.); the highly repetitive search-path lines are omitted here, and the trace is truncated in the original.
file lib x86 64 linux gnu libgcc s so 1 9322 9322 9322 call init lib x86 64 linux gnu libgcc s so 1 9322 9322 9322 call init usr lib x86 64 linux gnu libstdc so 6 9322 9322 9322 call init lib x86 64 linux gnu librt so 1 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package tensorflow python libtensorflow framework so 1 9322 9322 find library libhdfs so 0 search 9322 search path home andreiungureanu local lib python2 7 site package tensorflow python runpath from file home andreiungureanu local lib python2 7 site package tensorflow python pywrap tensorflow internal so 9322 try file home andreiungureanu local lib python2 7 site package tensorflow python libhdfs so 9322 search cache etc ld so cache 9322 search path lib x86 64 linux gnu tls haswell x86 64 lib x86 64 linux gnu tls haswell lib x86 64 linux gnu tls x86 64 lib x86 64 linux gnu tls lib x86 64 linux gnu haswell x86 64 lib x86 64 linux gnu haswell lib x86 64 linux gnu x86 64 lib x86 64 linux gnu usr lib x86 64 linux gnu tls haswell x86 64 usr lib x86 64 linux gnu tls haswell usr lib x86 64 linux gnu tls x86 64 usr lib x86 64 linux gnu tls usr lib x86 64 linux gnu haswell x86 64 usr lib x86 64 linux gnu haswell usr lib x86 64 linux gnu x86 64 usr lib x86 64 linux gnu lib tls haswell x86 64 lib tls haswell lib tls x86 64 lib tls lib haswell x86 64 lib haswell lib x86 64 lib usr lib tls haswell x86 64 usr lib tls haswell usr lib tls x86 64 usr lib tls usr lib haswell x86 64 usr lib haswell usr lib x86 64 usr lib system search path 9322 try file lib x86 64 linux gnu tls haswell x86 64 libhdfs so 9322 try file lib x86 64 linux gnu tls haswell libhdfs so 9322 try file lib x86 64 linux gnu tls x86 64 libhdfs so 9322 try file lib x86 64 linux gnu tls libhdfs so 9322 try file lib x86 64 linux gnu haswell x86 64 libhdfs so 9322 try file lib x86 64 linux gnu haswell libhdfs so 9322 try file lib x86 64 linux gnu x86 64 libhdfs so 9322 try file lib x86 64 linux gnu libhdfs so 9322 try file usr lib 
x86 64 linux gnu tls haswell x86 64 libhdfs so 9322 try file usr lib x86 64 linux gnu tls haswell libhdfs so 9322 try file usr lib x86 64 linux gnu tls x86 64 libhdfs so 9322 try file usr lib x86 64 linux gnu tls libhdfs so 9322 try file usr lib x86 64 linux gnu haswell x86 64 libhdfs so 9322 try file usr lib x86 64 linux gnu haswell libhdfs so 9322 try file usr lib x86 64 linux gnu x86 64 libhdfs so 9322 try file usr lib x86 64 linux gnu libhdfs so 9322 try file lib tls haswell x86 64 libhdfs so 9322 try file lib tls haswell libhdfs so 9322 try file lib tls x86 64 libhdfs so 9322 try file lib tls libhdfs so 9322 try file lib haswell x86 64 libhdfs so 9322 try file lib haswell libhdfs so 9322 try file lib x86 64 libhdfs so 9322 try file lib libhdfs so 9322 try file usr lib tls haswell x86 64 libhdfs so 9322 try file usr lib tls haswell libhdfs so 9322 try file usr lib tls x86 64 libhdfs so 9322 try file usr lib tls libhdfs so 9322 try file usr lib haswell x86 64 libhdfs so 9322 try file usr lib haswell libhdfs so 9322 try file usr lib x86 64 libhdfs so 9322 try file usr lib libhdfs so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package tensorflow python pywrap tensorflow internal so 9322 9322 find library libssl so 1 1 0 search 9322 search cache etc ld so cache 9322 try file usr lib x86 64 linux gnu libssl so 1 1 9322 9322 9322 call init usr lib x86 64 linux gnu libssl so 1 1 9322 9322 9322 call init usr lib python2 7 lib dynload ssl x86 64 linux gnu so 9322 9322 9322 9322 9322 9322 9322 call init usr lib python2 7 lib dynload csv x86 64 linux gnu so 9322 9322 9322 call init usr lib python2 7 lib dynload termio x86 64 linux gnu so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package tensorflow python framework fast tensor util so 9322 9322 find library libuuid so 1 0 search 9322 search cache etc ld so cache 9322 try file lib x86 64 linux gnu libuuid so 1 9322 9322 9322 call init lib x86 64 linux gnu libuuid so 1 
9322 9322 9322 call init home andreiungureanu local lib python2 7 site package wrapt wrapper so 9322 9322 9322 call init usr lib python2 7 lib dynload json x86 64 linux gnu so 9322 9322 9322 call init usr lib python2 7 lib dynload multiprocesse x86 64 linux gnu so 9322 9322 find library libhdf5 9028dcc4 so 103 0 0 0 search 9322 search path home andreiungureanu local lib python2 7 site package h5py lib tls haswell x86 64 home andreiungureanu local lib python2 7 site package h5py lib tls haswell home andreiungureanu local lib python2 7 site package h5py lib tls x86 64 home andreiungureanu local lib python2 7 site package h5py lib tls home andreiungureanu local lib python2 7 site package h5py lib haswell x86 64 home andreiungureanu local lib python2 7 site package h5py lib haswell home andreiungureanu local lib python2 7 site package h5py lib x86 64 home andreiungureanu local lib python2 7 site package h5py lib rpath from file home andreiungureanu local lib python2 7 site package h5py error so 9322 try file home andreiungureanu local lib python2 7 site package h5py lib tls haswell x86 64 libhdf5 9028dcc4 so 103 0 0 9322 try file home andreiungureanu local lib python2 7 site package h5py lib tls haswell libhdf5 9028dcc4 so 103 0 0 9322 try file home andreiungureanu local lib python2 7 site package h5py lib tls x86 64 libhdf5 9028dcc4 so 103 0 0 9322 try file home andreiungureanu local lib python2 7 site package h5py lib tls libhdf5 9028dcc4 so 103 0 0 9322 try file home andreiungureanu local lib python2 7 site package h5py lib haswell x86 64 libhdf5 9028dcc4 so 103 0 0 9322 try file home andreiungureanu local lib python2 7 site package h5py lib haswell libhdf5 9028dcc4 so 103 0 0 9322 try file home andreiungureanu local lib python2 7 site package h5py lib x86 64 libhdf5 9028dcc4 so 103 0 0 9322 try file home andreiungureanu local lib python2 7 site package h5py lib libhdf5 9028dcc4 so 103 0 0 9322 9322 find library libhdf5 hl db841637 so 100 1 1 0 search 9322 search 
path home andreiungureanu local lib python2 7 site package h5py lib rpath from file home andreiungureanu local lib python2 7 site package h5py error so 9322 try file home andreiungureanu local lib python2 7 site package h5py lib libhdf5 hl db841637 so 100 1 1 9322 9322 find library libsz 1c7dd0cf so 2 0 1 0 search 9322 search path home andreiungureanu local lib python2 7 site package h5py lib tls haswell x86 64 home andreiungureanu local lib python2 7 site package h5py lib tls haswell home andreiungureanu local lib python2 7 site package h5py lib tls x86 64 home andreiungureanu local lib python2 7 site package h5py lib tls home andreiungureanu local lib python2 7 site package h5py lib haswell x86 64 home andreiungureanu local lib python2 7 site package h5py lib haswell home andreiungureanu local lib python2 7 site package h5py lib x86 64 home andreiungureanu local lib python2 7 site package h5py lib rpath from file home andreiungureanu local lib python2 7 site package h5py lib libhdf5 9028dcc4 so 103 0 0 9322 try file home andreiungureanu local lib python2 7 site package h5py lib tls haswell x86 64 libsz 1c7dd0cf so 2 0 1 9322 try file home andreiungureanu local lib python2 7 site package h5py lib tls haswell libsz 1c7dd0cf so 2 0 1 9322 try file home andreiungureanu local lib python2 7 site package h5py lib tls x86 64 libsz 1c7dd0cf so 2 0 1 9322 try file home andreiungureanu local lib python2 7 site package h5py lib tls libsz 1c7dd0cf so 2 0 1 9322 try file home andreiungureanu local lib python2 7 site package h5py lib haswell x86 64 libsz 1c7dd0cf so 2 0 1 9322 try file home andreiungureanu local lib python2 7 site package h5py lib haswell libsz 1c7dd0cf so 2 0 1 9322 try file home andreiungureanu local lib python2 7 site package h5py lib x86 64 libsz 1c7dd0cf so 2 0 1 9322 try file home andreiungureanu local lib python2 7 site package h5py lib libsz 1c7dd0cf so 2 0 1 9322 9322 find library libaec 2147abcd so 0 0 4 0 search 9322 search path home andreiungureanu 
local lib python2 7 site package h5py lib rpath from file home andreiungureanu local lib python2 7 site package h5py lib libhdf5 9028dcc4 so 103 0 0 9322 try file home andreiungureanu local lib python2 7 site package h5py lib libaec 2147abcd so 0 0 4 9322 9322 find library libz a147dcb0 so 1 2 3 0 search 9322 search path home andreiungureanu local lib python2 7 site package h5py lib rpath from file home andreiungureanu local lib python2 7 site package h5py lib libhdf5 9028dcc4 so 103 0 0 9322 try file home andreiungureanu local lib python2 7 site package h5py lib libz a147dcb0 so 1 2 3 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py lib libz a147dcb0 so 1 2 3 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py lib libaec 2147abcd so 0 0 4 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py lib libsz 1c7dd0cf so 2 0 1 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py lib libhdf5 9028dcc4 so 103 0 0 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py lib libhdf5 hl db841637 so 100 1 1 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py error so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5 so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py def so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py object so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py conv so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5r so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5 t so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py util so 9322 9327 find library libc so 6 0 search 9327 search cache etc ld so cache 9327 try file lib x86 64 
linux gnu libc so 6 9327 9327 9327 call init lib x86 64 linux gnu libc so 6 9327 9327 9327 initialize program sh 9327 9327 9327 transfer control sh 9327 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5z so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5a so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5s so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5p so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5ac so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py proxy so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5d so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5ds so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5f so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5 g so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5i so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5fd so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5pl so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5o so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package h5py h5l so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy lib ccallback c so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy sparse sparsetool so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy sparse csparsetool so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy sparse csgraph short path so 9322 9322 9322 call init home andreiungureanu 
local lib python2 7 site package scipy sparse csgraph tool so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy sparse csgraph traversal so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy sparse csgraph min span tree so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy sparse csgraph reorder so 9322 9322 find library libjpeg 3b10b538 so 9 3 0 0 search 9322 search path home andreiungureanu local lib python2 7 site package pil libs tls haswell x86 64 home andreiungureanu local lib python2 7 site package pil libs tls haswell home andreiungureanu local lib python2 7 site package pil libs tls x86 64 home andreiungureanu local lib python2 7 site package pil libs tls home andreiungureanu local lib python2 7 site package pil lib haswell x86 64 home andreiungureanu local lib python2 7 site package pil libs haswell home andreiungureanu local lib python2 7 site package pil lib x86 64 home andreiungureanu local lib python2 7 site package pil libs rpath from file home andreiungureanu local lib python2 7 site package pil imaging so 9322 try file home andreiungureanu local lib python2 7 site package pil libs tls haswell x86 64 libjpeg 3b10b538 so 9 3 0 9322 try file home andreiungureanu local lib python2 7 site package pil libs tls haswell libjpeg 3b10b538 so 9 3 0 9322 try file home andreiungureanu local lib python2 7 site package pil libs tls x86 64 libjpeg 3b10b538 so 9 3 0 9322 try file home andreiungureanu local lib python2 7 site package pil lib tls libjpeg 3b10b538 so 9 3 0 9322 try file home andreiungureanu local lib python2 7 site package pil lib haswell x86 64 libjpeg 3b10b538 so 9 3 0 9322 try file home andreiungureanu local lib python2 7 site package pil lib haswell libjpeg 3b10b538 so 9 3 0 9322 try file home andreiungureanu local lib python2 7 site package pil lib x86 64 libjpeg 3b10b538 so 9 3 0 9322 try file home andreiungureanu local lib python2 7 site 
package pil libs libjpeg 3b10b538 so 9 3 0 9322 9322 find library libopenjp2 b3d7668a so 2 3 1 0 search 9322 search path home andreiungureanu local lib python2 7 site package pil libs rpath from file home andreiungureanu local lib python2 7 site package pil imaging so 9322 try file home andreiungureanu local lib python2 7 site package pil libs libopenjp2 b3d7668a so 2 3 1 9322 9322 find library libtiff bd1961ca so 5 5 0 0 search 9322 search path home andreiungureanu local lib python2 7 site package pil libs rpath from file home andreiungureanu local lib python2 7 site package pil imaging so 9322 try file home andreiungureanu local lib python2 7 site package pil lib libtiff bd1961ca so 5 5 0 9322 9322 find library liblzma 6cd627ed so 5 2 4 0 search 9322 search path home andreiungureanu local lib python2 7 site package pil libs tls haswell x86 64 home andreiungureanu local lib python2 7 site package pil libs tls haswell home andreiungureanu local lib python2 7 site package pil libs tls x86 64 home andreiungureanu local lib python2 7 site package pil libs tls home andreiungureanu local lib python2 7 site package pil libs haswell x86 64 home andreiungureanu local lib python2 7 site package pil libs haswell home andreiungureanu local lib python2 7 site package pil lib x86 64 home andreiungureanu local lib python2 7 site package pil libs rpath from file home andreiungureanu local lib python2 7 site package pil lib libtiff bd1961ca so 5 5 0 9322 try file home andreiungureanu local lib python2 7 site package pil libs tls haswell x86 64 liblzma 6cd627ed so 5 2 4 9322 try file home andreiungureanu local lib python2 7 site package pil libs tls haswell liblzma 6cd627ed so 5 2 4 9322 try file home andreiungureanu local lib python2 7 site package pil libs tls x86 64 liblzma 6cd627ed so 5 2 4 9322 try file home andreiungureanu local lib python2 7 site package pil libs tls liblzma 6cd627ed so 5 2 4 9322 try file home andreiungureanu local lib python2 7 site package pil libs 
haswell x86 64 liblzma 6cd627ed so 5 2 4 9322 try file home andreiungureanu local lib python2 7 site package pil libs haswell liblzma 6cd627ed so 5 2 4 9322 try file home andreiungureanu local lib python2 7 site package pil lib x86 64 liblzma 6cd627ed so 5 2 4 9322 try file home andreiungureanu local lib python2 7 site package pil lib liblzma 6cd627ed so 5 2 4 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package pil lib liblzma 6cd627ed so 5 2 4 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package pil libs libjpeg 3b10b538 so 9 3 0 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package pil lib libtiff bd1961ca so 5 5 0 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package pil libs libopenjp2 b3d7668a so 2 3 1 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package pil image so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scandir so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy ndimage nd image so 9322 9322 find library libopenblasp r0 2ecf47d5 3 7 dev so 0 search 9322 search path home andreiungureanu local lib python2 7 site package scipy linalg lib tls haswell x86 64 home andreiungureanu local lib python2 7 site package scipy linalg lib tls haswell home andreiungureanu local lib python2 7 site package scipy linalg lib tls x86 64 home andreiungureanu local lib python2 7 site package scipy linalg lib tls home andreiungureanu local lib python2 7 site package scipy linalg lib haswell x86 64 home andreiungureanu local lib python2 7 site package scipy linalg lib haswell home andreiungureanu local lib python2 7 site package scipy linalg lib x86 64 home andreiungureanu local lib python2 7 site package scipy linalg lib rpath from file home andreiungureanu local lib python2 7 site package scipy linalg fbla so 9322 try file home andreiungureanu local lib python2 7 site package 
scipy linalg lib tls haswell x86 64 libopenblasp r0 2ecf47d5 3 7 dev so 9322 try file home andreiungureanu local lib python2 7 site package scipy linalg lib tls haswell libopenblasp r0 2ecf47d5 3 7 dev so 9322 try file home andreiungureanu local lib python2 7 site package scipy linalg lib tls x86 64 libopenblasp r0 2ecf47d5 3 7 dev so 9322 try file home andreiungureanu local lib python2 7 site package scipy linalg lib tls libopenblasp r0 2ecf47d5 3 7 dev so 9322 try file home andreiungureanu local lib python2 7 site package scipy linalg lib haswell x86 64 libopenblasp r0 2ecf47d5 3 7 dev so 9322 try file home andreiungureanu local lib python2 7 site package scipy linalg lib haswell libopenblasp r0 2ecf47d5 3 7 dev so 9322 try file home andreiungureanu local lib python2 7 site package scipy linalg lib x86 64 libopenblasp r0 2ecf47d5 3 7 dev so 9322 try file home andreiungureanu local lib python2 7 site package scipy linalg lib libopenblasp r0 2ecf47d5 3 7 dev so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy linalg lib libopenblasp r0 2ecf47d5 3 7 dev so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy linalg fbla so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy linalg flapack so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy linalg flinalg so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy linalg solve toeplitz so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy linalg decomp update so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy linalg cython blas so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy linalg cython lapack so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy special ufunc so 9322 9322 9322 call init home andreiungureanu 
local lib python2 7 site package scipy special ufunc cxx so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy special specfun so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy special comb so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy special ellip harm 2 so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy interpolate fitpack so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy interpolate dfitpack so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy interpolate bspl so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy interpolate ppoly so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy interpolate interpnd so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy spatial ckdtree so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy spatial qhull so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy lib messagestream so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy spatial voronoi so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy spatial distance wrap so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy spatial hausdorff so 9322 9322 9322 call init home andreiungureanu local lib python2 7 site package scipy ndimage ni label so 9322 9322 9322 call fini usr bin python 0 9322 9322 9322 call fini lib x86 64 linux gnu libutil so 1 0 9322 9322 9322 call fini lib x86 64 linux gnu libz so 1 0 9322 9322 9322 call fini usr lib python2 7 lib dynload ctype x86 64 linux gnu so 0 9322 9322 9322 call fini usr lib x86 64 linux gnu libffi so 6 0 9322 9322 9322 call fini home 
andreiungureanu local lib python2 7 site package numpy core multiarray umath so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package numpy core multiarray test so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package numpy linalg lapack lite so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package numpy linalg umath linalg so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package numpy core lib libopenblasp r0 34a18dc3 3 7 so 0 9322 9322 9322 call fini usr lib python2 7 lib dynload bz2 x86 64 linux gnu so 0 9322 9322 9322 call fini lib x86 64 linux gnu libbz2 so 1 0 0 9322 9322 9322 call fini usr lib python2 7 lib dynload future builtin x86 64 linux gnu so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package numpy fft fftpack lite so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package numpy random mtrand so 0 9322 9322 9322 call fini usr lib python2 7 lib dynload hashlib x86 64 linux gnu so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package tensorflow python pywrap tensorflow internal so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package tensorflow python libtensorflow framework so 1 0 9322 9322 9322 call fini usr lib python2 7 lib dynload ssl x86 64 linux gnu so 0 9322 9322 9322 call fini usr lib x86 64 linux gnu libssl so 1 1 0 9322 9322 9322 call fini usr lib x86 64 linux gnu libcrypto so 1 1 0 9322 9322 9322 9322 9322 9322 9322 call fini usr lib python2 7 lib dynload csv x86 64 linux gnu so 0 9322 9322 9322 call fini usr lib python2 7 lib dynload termio x86 64 linux gnu so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package tensorflow python framework fast tensor util so 0 9322 9322 9322 call fini lib x86 64 linux gnu libuuid so 1 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package 
wrapt wrapper so 0 9322 9322 9322 call fini usr lib python2 7 lib dynload json x86 64 linux gnu so 0 9322 9322 9322 call fini usr lib python2 7 lib dynload multiprocesse x86 64 linux gnu so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py error so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5 so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py def so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py object so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py conv so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5r so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5 t so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py util so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5z so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5a so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5s so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5p so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5ac so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py proxy so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5d so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5ds so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5f so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5 g so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5i so 0 9322 9322 9322 call fini 
home andreiungureanu local lib python2 7 site package h5py h5fd so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5pl so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5o so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py h5l so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py lib libhdf5 hl db841637 so 100 1 1 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py lib libhdf5 9028dcc4 so 103 0 0 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py lib libsz 1c7dd0cf so 2 0 1 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package h5py lib libaec 2147abcd so 0 0 4 0 9322 9322 9322 call fini lib x86 64 linux gnu libdl so 2 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package scipy lib ccallback c so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package scipy sparse sparsetool so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package scipy sparse csparsetool so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package scipy sparse csgraph short path so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package scipy sparse csgraph tool so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package scipy sparse csgraph traversal so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package scipy sparse csgraph min span tree so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package scipy sparse csgraph reorder so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package pil imaging so 0 9322 9322 9322 call fini home andreiungureanu local lib python2 7 site package pil libs libopenjp2 b3d7668a so 2 3 1 0 9322 9322 
(attached trace output, truncated; the trace shows the interpreter unloading shared libraries at exit, e.g.)

9322 --- Called fini: /home/andreiungureanu/.local/lib/python2.7/site-packages/PIL/.libs/libtiff-bd1961ca.so.5.5.0 ---
... (similar "Called fini" lines for libjpeg, libz, liblzma, librt, scandir, the scipy/numpy native modules, libstdc++, libopenblas, libgfortran, libm and libpthread omitted) ...

env: LD_LIBRARY_PATH is unset; DYLD_LIBRARY_PATH is unset
nvidia-smi: tf_env_collect.sh: line 147: nvidia-smi: command not found
CUDA libs: (none found)
TensorFlow installed-from info:
  Name: tensorflow
  Version: 1.14.0
  Summary: TensorFlow is an open source machine learning framework for everyone.
  Home-page / Author-email / License: Apache 2.0
  Location: /home/andreiungureanu/.local/lib/python2.7/site-packages
Python version (major, minor, micro, releaselevel, serial): (2, 7, 17, 'final', 0)
Bazel version: |
tensorflowtensorflow | Model object has no attribute 'total_loss' (2.2, eager) | Bug | The code below works in TF 2.1 and 2.0 (eager and graph) and in 2.2 graph mode, but not in 2.2 eager mode. How, then, are we to get gradients in 2.2 eager?

python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K

ipt = Input((16,))
out = Dense(16)(ipt)
model = Model(ipt, out)
model.compile('adam', 'mse')

x = y = np.random.randn(32, 16)
model.train_on_batch(x, y)

outputs = model.optimizer.get_gradients(model.total_loss, model.trainable_weights)
inputs = [model.inputs[0], model._feed_targets[0]]
grad_fn = K.function(inputs=inputs, outputs=outputs)
grads = grad_fn([x, y])
print([g.shape for g in grads])
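In 2.2 eager mode the usual replacement for the `model.total_loss` / `K.function` pattern is `tf.GradientTape`. As a pure-Python sketch of the quantity being requested above (the gradient of an MSE loss with respect to a scalar weight of a 1-D linear model; all names here are illustrative, not from the issue):

```python
def mse_grad(w, xs, ys):
    # d/dw mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

# at w = 0 with xs = [1, 2], ys = [1, 2]:
# (2*(0-1)*1 + 2*(0-2)*2) / 2 = (-2 - 8) / 2 = -5.0
print(mse_grad(0.0, [1, 2], [1, 2]))  # -5.0
```

At w = 1 the model fits the data exactly, so the gradient is 0; that is the quantity a tape-based `tape.gradient(loss, weights)` call would return per weight.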
tensorflowtensorflow | Broken in v2.2.0: set_visible_devices used with tf.keras mixed precision | Bug | If you use set_visible_devices together with tf.keras mixed precision, you get a crash in v2.2.0. This was fixed in this commit; however, for some reason in v2.2.0 the fix was applied in a way that is very broken and ineffective, while on master it seems this commit was applied correctly. Specifically, on tag v2.2.0:

python
device_attr_list = device_lib.list_local_devices()
if not skip_local:
    log_device_compatibility_check(policy_name, device_attr_list)
    return

On master:

python
if not skip_local:
    device_attr_list = device_lib.list_local_devices()
    log_device_compatibility_check(policy_name, device_attr_list)
    return

The whole point of skip_local is to avoid calling that function, so moving this line renders the fix ineffective. I'm a little confused how this happened (maybe a cherry-pick gone wrong), but I thought I would put this issue here in case anyone hits it. I don't know if v2.2.0 can be fixed, or whether we just have to wait for v2.3.0, since master has the fix.
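The broken versus intended control flow can be illustrated with a small stand-in (hypothetical names; `list_devices` stands in for `device_lib.list_local_devices`, the call that crashes). The intended version never touches the device list when `skip_local` is set, which is exactly what the misplaced line in v2.2.0 defeats:

```python
def check_v220(skip_local, list_devices):
    # v2.2.0 ordering: devices are listed before skip_local is consulted
    devices = list_devices()
    if not skip_local:
        return devices
    return None

def check_master(skip_local, list_devices):
    # intended ordering: skip_local short-circuits the device query
    if skip_local:
        return None
    return list_devices()

calls = []
def fake_list():
    calls.append(1)
    return ["GPU:0"]

check_master(True, fake_list)
print(len(calls))   # 0: the device query was skipped
check_v220(True, fake_list)
print(len(calls))   # 1: the device query ran anyway
```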
tensorflowtensorflow | RPi Zero build instructions not working | Bug | URL(s) with the issue: "Compile natively on Raspberry Pi". Description of issue (what needs changing): The guide on natively compiling says it was tested on a Raspberry Pi Zero, but following its instructions on a Raspberry Pi Zero leads to a build for armv7l, not armv6 as specified. Adding a target arch of armv6 to the command would likely work, as has been done in the cross-compile section to separate newer models from the RPi 1/Zero. However, since following those cross-compile instructions also resulted in an armv7l target (hard-float VFP ABI link errors on the Zero), I directly followed the tips from #30181 to be on the safe side.
tensorflowtensorflow | TFLite Micro: Conv2D layer not running on ESP32 board | Bug | TensorFlow Micro system information: Host OS platform and distribution: compiled for ESP32 (PlatformIO with ESP-IDF); tflite created on Windows 10 with Anaconda and Python 3.7. TensorFlow installed from (source or binary): compiled the C++ tflite-micro from the hello_world example; tflite creation via pip in an Anaconda environment with Python 3.7. TensorFlow version (commit SHA if source): TensorFlow 2.2. Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): ESP32-CAM, ESP-IDF compiler.

Describe the problem: Using a tflite model with a Conv2D layer results in a crash of the C++ code running on an ESP32 system. Simply exchanging that layer for a MaxPool2D layer lets the model run smoothly. This gives me the idea that the problem lies in using the Conv2D layer rather than in the C++ code or the model training. The model structure looks as follows:

model = Sequential()
model.add(InputLayer(input_shape=(32, 20, 3)))
model.add(Conv2D(8, (3, 3)))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(11, activation='softmax'))

The error in the ESP32 monitor is the following:

input loaded
Guru Meditation Error: Core 0 panic'ed (LoadProhibited). Exception was unhandled.
Core 0 register dump:
PC: 0x40089191  PS: 0x00060033  A0: 0x80089913  A1: 0x3ffb2f90
A2: 0x3ffb3094  A3: 0x00000000  A4: 0x00060021  A5: 0x000000fe
A6: 0x00000001  A7: 0x00000000  A8: 0x00000000  A9: 0x3ffb34f8
A10: 0x00000003  A11: 0x00060023  A12: 0x00060021  A13: 0x000000fe
A14: 0x0000002a  A15: 0x3ffb5370  SAR: 0x0000001f  EXCCAUSE: 0x0000001c
EXCVADDR: 0x00000050  LBEG: 0x4008e610  LEND: 0x4008e63e  LCOUNT: 0x00000000
Core 0 was running in ISR context: EPC1: 0x40089191  EPC2: 0x00000000  EPC3: 0x00000000  EPC4: 0x00000000

The model training was done in a Jupyter notebook on a Windows 10 system, and the tflite file was created with the following conversion sequence:

name = 'conv2d'
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open(name + '.tfl', 'wb').write(tflite_model)

The coding for the ESP32 was done in C++ in ESP-IDF; the tflite-micro library was copied from the hello_world example, with the ESP-IDF compiler running under PlatformIO. The simplified C++ code is included:

extern "C" void app_main() {
  static tflite::ErrorReporter* error_reporter = nullptr;
  const tflite::Model* model = nullptr;
  static tflite::MicroInterpreter* interpreter = nullptr;
  TfLiteTensor* output = nullptr;
  // static tflite::ops::micro::AllOpsResolver resolver;
  static tflite::MicroOpResolver<5> micro_op_resolver;
  int kTensorArenaSize = 128 * 1024;
  uint8_t* tensor_arena = new uint8_t[kTensorArenaSize];
  TfLiteStatus allocate_status;

  error_reporter = new tflite::MicroErrorReporter();
  CAccessSDClass AccessSD;
  // model = tflite::GetModel(AccessSD.ReadFileToCharArray("/sdcard/maxpool.tfl"));
  model = tflite::GetModel(AccessSD.ReadFileToCharArray("/sdcard/conv2d.tfl"));
  printf("model loaded\n");

  micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_RESHAPE, tflite::ops::micro::Register_RESHAPE());
  micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_CONV_2D, tflite::ops::micro::Register_CONV_2D());
  micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_FULLY_CONNECTED, tflite::ops::micro::Register_FULLY_CONNECTED());
  micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_SOFTMAX, tflite::ops::micro::Register_SOFTMAX());
  micro_op_resolver.AddBuiltin(tflite::BuiltinOperator_MAX_POOL_2D, tflite::ops::micro::Register_MAX_POOL_2D());

  interpreter = new tflite::MicroInterpreter(model, micro_op_resolver, tensor_arena, kTensorArenaSize, error_reporter);
  allocate_status = interpreter->AllocateTensors();

  CImageBasis cib("/sdcard/zif0.jpg");
  float* input_data_ptr = interpreter->typed_input_tensor<float>(0);
  for (int x = 0; x < cib.width; x++)
    for (int y = 0; y < cib.height; y++)
      for (int ch = 0; ch < cib.channels; ch++) {
        *input_data_ptr = (float) cib.GetPixelColor(x, y, ch);
        input_data_ptr++;
      }
  printf("input loaded\n");

  interpreter->Invoke();
  printf("invoke done\n");

  output = interpreter->output(0);
  for (int i = 0; i < 11; i++)
    printf("result %d: %f\n", i, output->data.f[i]);
}

If the Conv2D layer is exchanged for a MaxPool layer, the model runs technically smoothly on exactly the same C++ code. The only change in the model is commenting out the Conv2D layer and activating the MaxPool2D layer:

model.add(InputLayer(input_shape=(32, 20, 3)))
# model.add(Conv2D(8, (3, 3)))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(11, activation='softmax'))

Then the result is as expected, without any error:

input loaded
invoke done
result 0: 0.000000
result 1: 0.000000
result 2: 0.000000
result 3: 0.000000
result 4: 0.000000
result 5: 0.000000
result 6: 0.000000
result 7: 0.000000
result 8: 0.000000
result 9: 0.000000
result 10: 1.000000

Any idea or hint where the problem might be? Remark: this model does not do anything useful anymore; I have reduced the model and the C++ code to a minimum for reproducing the problem. The final target is image classification on an ESP32 with the built-in camera; for that I need a dedicated CNN with several Conv2D layers.
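A LoadProhibited crash on Invoke often traces back to an allocation problem rather than the op kernel itself, so one quick sanity check is whether the tensor arena can hold the Conv2D intermediates. A small helper (assumptions: stride 1, 'valid' padding, float32 tensors; this is a rough back-of-the-envelope check, not the micro allocator's actual accounting):

```python
def conv2d_valid_elems(h, w, filters, k):
    # output elements of a stride-1 'valid' Conv2D over an h x w input
    return (h - k + 1) * (w - k + 1) * filters

inp = 32 * 20 * 3                        # input tensor elements: 1920
conv = conv2d_valid_elems(32, 20, 8, 3)  # 30 * 18 * 8 = 4320
bytes_needed = 4 * (inp + conv)          # float32 input + conv output
print(conv, bytes_needed)                # 4320 24960
```

For this model the intermediates come to roughly 25 KB, well under the 128 KB arena, which points suspicion back at the op registration or the Conv2D kernel on this port rather than arena sizing.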
tensorflowtensorflow | Problems running visualize.py at "import flatbuffersn" | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No. OS platform and distribution: macOS 10.15.3. Mobile device: N/A. TensorFlow installed from (source or binary): from source. TensorFlow version (use command below): v1.12.1-32248-g8670c85844 2.2.0. Python version: 3.7.4. Bazel version (if compiling from source): 3.0.0. GCC/compiler version (if compiling from source): Apple clang version 11.0.3 (clang-1103.0.32.59). CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Describe the current behavior: The visualize.py script fails with this error:

ImportError: cannot import name 'flatbuffersn' from 'flatbuffers' (/private/var/tmp/_bazel_jeremy/95159cfd4782ce915016562181875cd6/execroot/org_tensorflow/bazel-out/darwin-opt/bin/tensorflow/lite/tools/visualize.runfiles/flatbuffers/python/__init__.py)

I am running it with the command

bazel run tensorflow/lite/tools:visualize dev/tfl-dev/mobilenet_v1_0.25_128_quant/mobilenet_v1_0.25_128_quant.tflite /tmp/mobnet.html

from the top of my TensorFlow source directory. The build phase seems to work fine; the import error then seems to happen on running visualize.py (full output in the attached file vis_dump.txt). The target tflite file can be downloaded here, but I don't think the tflite file ever gets loaded, so I doubt that the specific file is relevant. The directory it is trying to import from is listed here; there is no "flatbuffersn" file or directory, and the __init__.py file is empty:

ls -l /private/var/tmp/_bazel_jeremy/95159cfd4782ce915016562181875cd6/execroot/org_tensorflow/bazel-out/darwin-opt/bin/tensorflow/lite/tools/visualize.runfiles/flatbuffers/python
total 0
-r-xr-xr-x   1 jeremy  wheel    0 May 20 16:15 __init__.py
drwxr-xr-x   3 jeremy  wheel   96 May 21 09:39 __pycache__
drwxr-xr-x  10 jeremy  wheel  320 May 20 16:15 flatbuffers

Describe the expected behavior: I would expect it to dump an HTML file to /tmp/mobnet.html containing a visualization of the MobileNet model in the target tflite file. Standalone code to reproduce the issue: the command line above. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow | MultiWorkerMirroredStrategy assigns a wrong device | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub (tag: bug_template).

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution: CentOS Linux release 7. Mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de 2.1.0. Python version: 3.6.10. Bazel version: N/A. GCC/compiler version: N/A. CUDA/cuDNN version: 10.1. GPU model and memory: N/A.

Describe the current behavior: Using the distribution strategy MultiWorkerMirroredStrategy in graph mode throws an error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation metrics/auc/Identity: node metrics/auc/Identity was explicitly assigned to /replica:0/task:0/device:CPU:0 but available devices are [ /job:worker/replica:0/task:1/device:CPU:0, /job:worker/replica:0/task:1/device:GPU:0, /job:worker/replica:0/task:1/device:XLA_CPU:0, /job:worker/replica:0/task:1/device:XLA_GPU:0 ]. Make sure the device specification refers to a valid device.

That is, on task 1 it assigns a wrong device, /replica:0/task:0/device:CPU:0.

Describe the expected behavior: trains normally.

Standalone code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to Colab/Jupyter/any notebook):

python
# demo.py
import sys
import numpy as np
import tensorflow as tf

strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
graph_context = tf.Graph().as_default()
strategy_context = strategy.scope()
with graph_context:
    with strategy_context:
        ip = tf.keras.layers.Input((2,))
        h = tf.keras.layers.Dense(10, activation='relu', input_dim=2)(ip)
        out = tf.keras.layers.Dense(2, activation='softmax')(h)
        model = tf.keras.models.Model(inputs=ip, outputs=out)
        model.compile(optimizer='adam', loss='categorical_crossentropy',
                      metrics=[tf.keras.metrics.AUC(num_thresholds=100)])
        x = np.random.randn(100, 2)
        y = x[:, 0] + x[:, 1] > 0
        model.fit(x, tf.keras.utils.to_categorical(y), epochs=1)

Run the above script with:

TF_CONFIG='{"cluster": {"worker": ["localhost:2222", "localhost:2223"]}, "task": {"type": "worker", "index": 0}}' CUDA_VISIBLE_DEVICES=0 python demo.py
TF_CONFIG='{"cluster": {"worker": ["localhost:2222", "localhost:2223"]}, "task": {"type": "worker", "index": 1}}' CUDA_VISIBLE_DEVICES=1 python demo.py

Other info / logs (traceback from worker 1, paths abbreviated):

Traceback (most recent call last):
  File ".../tensorflow_core/python/client/session.py", line 1367, in _do_call
    return fn(*args)
  File ".../tensorflow_core/python/client/session.py", line 1350, in _run_fn
    self._extend_graph()
  File ".../tensorflow_core/python/client/session.py", line 1390, in _extend_graph
    tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation metrics/auc/Identity: node metrics/auc/Identity was explicitly assigned to /replica:0/task:0/device:CPU:0 but available devices are [ /job:worker/replica:0/task:1/device:CPU:0, /job:worker/replica:0/task:1/device:GPU:0, /job:worker/replica:0/task:1/device:XLA_CPU:0, /job:worker/replica:0/task:1/device:XLA_GPU:0 ]. Make sure the device specification refers to a valid device. [[metrics/auc/Identity]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "x.py", line 22, in <module>
    model.fit(x, tf.keras.utils.to_categorical(y), epochs=1)
  File ".../tensorflow_core/python/keras/engine/training.py", line 819, in fit
  File ".../tensorflow_core/python/keras/engine/training_distributed.py", line 790, in fit
  File ".../tensorflow_core/python/keras/engine/training_distributed.py", line 777, in wrapper (mode=dc.CoordinatorMode.INDEPENDENT_WORKER)
  File ".../tensorflow_core/python/distribute/distribute_coordinator.py", line 853, in run_distribute_coordinator
  File ".../tensorflow_core/python/distribute/distribute_coordinator.py", line 360, in _run_single_worker
  File ".../tensorflow_core/python/keras/engine/training_distributed.py", line 772, in _worker_fn
  File ".../tensorflow_core/python/keras/engine/training_distributed.py", line 619, in fit
  File ".../tensorflow_core/python/keras/engine/training.py", line 2200, in _distribution_standardize_user_data
  File ".../tensorflow_core/python/keras/backend.py", line 496, in get_session
  File ".../tensorflow_core/python/keras/backend.py", line 911, in _initialize_variables
  File ".../tensorflow_core/python/client/session.py", line 1386, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation metrics/auc/Identity (defined at x.py:18): node metrics/auc/Identity was explicitly assigned to /replica:0/task:0/device:CPU:0 but available devices are [ /job:worker/replica:0/task:1/device:CPU:0, /job:worker/replica:0/task:1/device:GPU:0, /job:worker/replica:0/task:1/device:XLA_CPU:0, /job:worker/replica:0/task:1/device:XLA_GPU:0 ]. Make sure the device specification refers to a valid device.
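The mismatch in the error above is easier to see with the device specs parsed into fields: the op is pinned to task 0 (and carries no job field), while the only devices known locally on worker 1 are task 1. A small parser sketch in plain Python (hypothetical helper, not TensorFlow's actual DeviceSpec class):

```python
def parse_device(spec):
    # "/job:worker/replica:0/task:1/device:GPU:0" -> dict of fields
    fields = {}
    for part in spec.strip("/").split("/"):
        key, _, value = part.partition(":")
        fields[key] = value
    return fields

assigned = parse_device("/replica:0/task:0/device:CPU:0")
available = parse_device("/job:worker/replica:0/task:1/device:CPU:0")
print(assigned["task"], available["task"])   # 0 1: no available device can match
```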
tensorflowtensorflow | Non-deterministic behaviour: tf.math.unsorted_segment_sum uses CUDA atomic operations | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No. OS platform and distribution: Linux Ubuntu 18.04.3. Mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de and v2.2.0-rc4-8-g2b96f3662b. Python version: 3.7.7. Bazel version: N/A. GCC/compiler version: N/A. CUDA/cuDNN version: 10.1.105 and 7.6.5.32. GPU model and memory: RTX 6000, 24 GB.

Describe the current behavior: Currently tf.math.unsorted_segment_sum uses non-deterministic GPU kernels, which leads to significant failings in the tensorflow-determinism venture. Other TensorFlow functions make use of tf.math.unsorted_segment_sum, such as tf.gather on backprop. Some affected functions that I've discovered: tfa.image.dense_image_warp on backprop; tf.gather on backprop.

Describe the expected behavior: When TF_DETERMINISTIC_OPS=1, tf.math.unsorted_segment_sum should use deterministic GPU kernels, leading to reproducibility.

Who will benefit from this bug fix/correction? Determinism is an extremely important part of our venture into deep learning as a community. Without determinism it is hard to reliably tune hyperparameters and conduct other types of investigation, such as ablation studies. While many TensorFlow operations have a deterministic alternative upon setting the OS environment variable TF_DETERMINISTIC_OPS=1, tf.math.unsorted_segment_sum seems to have fallen under the radar, perhaps because other operations took priority, such as tf.reduce_sum. Introducing this level of determinism to TensorFlow would make it a better candidate for deep learning deployment in more sensitive environments such as medicine: it doesn't make sense that a radiologist would look at results during one scan, then conduct the same scan and get a different result. It also affects the public's trust in AI ventures altogether. As far as I'm aware, PyTorch offers fuller deterministic capability, perhaps due to the benefit of hindsight that TensorFlow did not have.

Standalone code to reproduce the issue (edit: please see the code at issuecomment-632590302 instead; I've added seed setting, TF_DETERMINISTIC_OPS etc. and the issue still reproduces):

python
import tensorflow as tf
import numpy as np

num_segments = 4
data = tf.random.normal((30, 256, 256))
data = tf.constant(data)
segments = np.random.randint(low=0, high=num_segments, size=data.shape)
for i in range(5):
    reduced_sum = tf.math.unsorted_segment_sum(data, segments, num_segments)
    print(reduced_sum)

Output:

tf.Tensor([ 273.92117  380.23163 1279.9718   839.6437 ], shape=(4,), dtype=float32)
tf.Tensor([ 273.92395  380.22168 1279.9834   839.62573], shape=(4,), dtype=float32)
tf.Tensor([ 273.91425  380.22177 1279.9773   839.62976], shape=(4,), dtype=float32)
tf.Tensor([ 273.9177   380.2243  1279.9733   839.6427 ], shape=(4,), dtype=float32)
tf.Tensor([ 273.91568  380.2217  1279.9747   839.64166], shape=(4,), dtype=float32)

Note: all printed results are different, but in reality they should be the same. A Colab notebook with this code can be found here. Unit tests: essentially the code above, which produces a different result every time it is executed in the for loop, rather than the same result (coming soon). Other info / logs: more information about the GPU operations can be found here (more information coming soon).
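The root cause is that floating-point addition is not associative, so CUDA atomics that accumulate contributions in arrival order can produce a different sum on every run. Plain Python is enough to show the order sensitivity of summing the very same addends:

```python
# the same three addends (1e20, 1.0, -1e20), summed in two orders
s1 = (1e20 + 1.0) - 1e20   # 1.0 is absorbed into 1e20's rounding: 0.0
s2 = (1e20 - 1e20) + 1.0   # the large terms cancel first, 1.0 survives: 1.0
print(s1, s2)              # 0.0 1.0
```

A deterministic kernel fixes the reduction order (e.g. by sorting segment ids first), which is why it costs some performance relative to free-running atomics.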
tensorflowtensorflow | Documentation of accepted datatypes for validation_data in keras Model.fit is incorrect | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: fit. Description of issue (what needs changing): The description of the keras Model.fit function is either ambiguous or incorrect regarding the accepted datatypes for the parameter validation_data. Specifically, some data types are accepted even when the documentation states that they are not. For example, the datatype keras.utils.Sequence is accepted as a possible datatype and, as far as I can tell, behaves as one would expect. As far as I can tell, this is primarily true when the user passes a generator/Sequence as x: in that case Model.fit dispatches to the deprecated function Model.fit_generator, which does accept a generator or Sequence for the validation_data parameter. The documentation should be corrected to unambiguously state one of the following: exactly the list of datatypes that are accepted (e.g. numpy arrays, lists, pandas DataFrames, etc.; note this may require more work in order to fully test the set of datatypes), or an approximation of the list of datatypes that are accepted, with a caveat that some may be untested or only sometimes valid. If the types accepted depend on the type of x, then this should also be documented.

Submit a pull request? If necessary, I'd be happy to open a PR; however, given that this is clearly user-facing code and the primary interface for most TensorFlow users, I think it would be good to have this fix spearheaded by an internal developer.
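The behaviour described can be pictured as a type dispatch: generator (and Sequence) inputs route fit() through the deprecated fit_generator path, which accepts the same types for validation_data. A plain-Python caricature of that rule (this is an illustration of the dispatch described above, not the actual Keras implementation):

```python
import types

def validation_types_accepted(x):
    # caricature: generator-style x takes the fit_generator path,
    # which widens what validation_data may be
    if isinstance(x, types.GeneratorType):
        return ("tuple", "generator", "Sequence")
    return ("tuple",)  # what the fit() docs advertise

print(validation_types_accepted(i for i in range(3)))  # fit_generator path
print(validation_types_accepted([[1, 2], [3, 4]]))     # documented path
```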
tensorflowtensorflow | TF Lite (nightly): model with fully-connected layer can't be converted with full int8 quantization | Bug | System information: OS platform and distribution: Linux. TensorFlow installed from (source or binary): tf-nightly. TensorFlow version (or GitHub SHA if from source): tf-nightly.

Command used to run the converter or code if you're using the Python API (if possible, please share a link to Colab/Jupyter/any notebook):

python
import numpy as np
import tensorflow as tf

mnist = tf.keras.datasets.mnist
train_data, test_data = mnist.load_data()
pre_process = lambda x: x / 255.0
num_calib = 1000
calib_data = pre_process(train_data[0][:num_calib]).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(28, 28)),
    tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.summary()

train_images = pre_process(train_data[0])
train_labels = train_data[1]
test_images = pre_process(test_data[0])
test_labels = test_data[1]

# Train the digit classification model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=1, validation_data=(test_images, test_labels))

def get_calib_data_func():
    def representative_data_gen():
        for input_value in calib_data:
            input_value = np.expand_dims(input_value, axis=0).astype(np.float32)
            yield [input_value]
    return representative_data_gen

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.representative_dataset = get_calib_data_func()
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model_int8 = converter.convert()

This fails with:

RuntimeError: Max and min for dynamic tensors should be recorded during calibration. Failed for tensor sequential_2/reshape_2/Shape. Empty min/max for tensor sequential_2/reshape_2/Shape.

Also, please include a link to the saved model or GraphDef. Failure details: if the conversion is successful but the generated model is wrong, state what is wrong (produces wrong results and/or decreased accuracy; produces correct results but the model is slower than expected; model generated from old converter). RNN conversion support: if converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title. Any other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached.
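The converter's representative_dataset must be a zero-argument callable yielding lists of input arrays, one list entry per model input. The failure above concerns a shape tensor never seeing calibration data rather than the generator itself, but the generator protocol is easy to get wrong, so here is a minimal pure-Python sketch of it (arrays stubbed as nested lists for illustration):

```python
def make_representative_dataset(samples):
    # usage sketch: converter.representative_dataset = make_representative_dataset(calib_data)
    def representative_data_gen():
        for sample in samples:
            yield [sample]   # one list entry per model input
    return representative_data_gen

gen = make_representative_dataset([[0.1, 0.2], [0.3, 0.4]])
print(list(gen()))  # [[[0.1, 0.2]], [[0.3, 0.4]]]
```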
tensorflowtensorflow | [tfa] ReduceLROnPlateau callback from tf.keras not recognizing Cohen kappa metric direction in auto mode | Bug | Hi all. The "auto" mode in ReduceLROnPlateau and ModelCheckpoint looks for the specific string "acc" in the name of the metric to be monitored. This leads to unlucky scenarios where it does not work properly even for metrics that are defined in TFA, where one would hope TF is aware of the direction. This could be added to the docs, to make developers understand how to name their metrics, or to set min/max mode on their own. Thanks.
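The "auto" heuristic amounts to a substring check on the monitored name. A simplified stand-in for the rule described above (assuming only the "acc" substring matters, which is the behaviour the issue reports; the real callback also special-cases a few other names):

```python
def auto_mode(monitor):
    # simplified stand-in: names containing 'acc' are maximized,
    # everything else is minimized
    return "max" if "acc" in monitor else "min"

print(auto_mode("val_accuracy"))  # max
print(auto_mode("cohen_kappa"))   # min, although kappa should be maximized
```

The workaround, as the issue suggests, is to pass mode='max' explicitly when monitoring a metric like Cohen's kappa.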
tensorflowtensorflow | Does TensorFlow 1.15.0 support int8 tflite conversion? Wrong accuracy | Bug | System information: OS platform and distribution: Linux Ubuntu 16.04. TensorFlow installed from (source or binary): pip install tensorflow-gpu==1.15. TensorFlow version (or GitHub SHA if from source): 1.15.0.

Command used to run the converter or code if you're using the Python API (if possible, please share a link to Colab/Jupyter/any notebook):

python
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    pb_path,
    input_arrays=[input_tensor_name],
    output_arrays=[class_tensor_name],
    input_shapes=input_tensor_shape)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
with open('owntempuint.tflite', 'wb') as f:
    f.write(tflite_model)

Failure details: if the conversion is successful but the generated model is wrong, state what is wrong: The int8 model is produced successfully; however, its accuracy is very low. While the float tflite model converted from the same pb model achieves about 0.47 accuracy (the pb model itself reaches about 0.51), the int8 tflite model only reaches 0.04 with the same input.
tensorflowtensorflow | Non-OK status: tensorflow::Env::Default()->DeleteFile(ptx_path) status: Not found | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub (tag: bug_template).

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No. OS platform and distribution: CentOS 7.7. Mobile device: N/A. TensorFlow installed from (source or binary): pip install, official. TensorFlow version (use command below): 1.15.2 with GPU. Python version: 3.6. Bazel version: N/A. GCC/compiler version: N/A. CUDA/cuDNN version: 10.0 / 7.6. GPU model and memory: V100, 32 GB.

Describe the current behavior:

Couldn't find ptxas binary in ${CUDA_DIR}/bin. Will fall back to the GPU driver for PTX -> sass compilation. This is OK so long as you don't see a warning below about an out-of-date driver version.
2020-04-07 16:28:20.691087: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:70] Searched for CUDA in the following directories:
2020-04-07 16:28:20.691210: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:73]   ./cuda_sdk_lib
2020-04-07 16:28:20.691315: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:73]   /usr/local/cuda
2020-04-07 16:28:20.691454: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:73]   .
2020-04-07 16:28:20.691571: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:75] You can choose the search directory by setting xla_gpu_cuda_data_dir in HloModule's DebugOptions. For most apps, setting the environment variable XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda will work.
2020-04-07 16:28:20.725596: F tensorflow/stream_executor/cuda/ptxas_utils.cc:181] Non-OK-status: tensorflow::Env::Default()->DeleteFile(ptx_path) status: Not found: /tmp/tempfile-72d9c7c8-2841-4447-bd7d-3947098f8e24-6a7fc700-2624-5a2af2a888f80; No such file or directory
Fatal Python error: Aborted

Describe the expected behavior:
1. The following code logs the misleading warning (l405):

bool log_warning = true;
if (maybe_cubin.status().code() == tensorflow::error::Code::NOT_FOUND) {
  // Missing ptxas is expected in some environments where CUDA SDK
  // binaries are not available. We don't want to spam logs with
  // identical warnings in this case.
  // TODO(jlebar): we should implement a LOG_FIRST_N and LOG_EVERY_N
  // for more general usage.
  static std::atomic<bool> warning_done(false);
  log_warning = !warning_done.exchange(true);
}
if (log_warning) {
  PrintCantFindCudaMessage(/* ... */);
}

2. If some exception occurs such that ptx_path does not get created, the fatal DeleteFile error above is raised from this code (l184):

// Write ptx into a temporary file.
string ptx_path;
if (!env->LocalTempFilename(&ptx_path)) {
  return port::InternalError("couldn't get temp PTX file name");
}
auto ptx_cleaner = tensorflow::gtl::MakeCleanup([&ptx_path] {
  TF_CHECK_OK(tensorflow::Env::Default()->DeleteFile(ptx_path));
});
TF_RETURN_IF_ERROR(tensorflow::WriteStringToFile(env, ptx_path, ptx_contents));
VLOG(2) << "ptx written to: " << ptx_path;

Standalone code to reproduce the issue: (not provided). Other info / logs: see above.
tensorflowtensorflow | TensorFlow overfit and underfit | Bug | TensorFlow 2.0 "Overfit and underfit" Google Colab.
tensorflow/tensorflow | Custom loss function requires eager tensor but symbolic tensor is passed | Bug | System information: custom code: yes; OS: Ubuntu 16.04; TF installed from: binary; TF version: tensorflow-gpu 2.2.0 (v2.2.0-rc4-8-g2b96f3662b 2.2.0); Python 3.6; CUDA/cuDNN: 10.1 / 7.6.5; GPU: Quadro RTX 8000, 48 GB.

Describe the current behavior: I need to apply a binary mask to the model output to compute the loss. My current implementation uses a model that takes two inputs (the data and the mask) and uses a function closure to implement the custom loss. However, this raises the error `tensorflow.python.eager.core._SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors`. Apparently the mask input is treated as a symbolic tensor.

Describe the expected behavior: this only happens in eager mode; applying `disable_eager_execution()` eliminates the problem. However, I want to know whether there is any way to make this work in eager mode.

Standalone code to reproduce the issue (this is the gist):

```python
import tensorflow as tf
import numpy as np
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Flatten, Dense
from tensorflow.keras import Model

x_data = np.zeros((32, 28, 28))
x_mask = np.zeros((32, 10))
y = np.zeros((32, 10))

input_data = Input(shape=(28, 28))
input_mask = Input(shape=(10,))
output = Flatten()(input_data)
output = Dense(64, activation='relu')(output)
output = Dense(10)(output)
model = Model(inputs=[input_data, input_mask], outputs=output)

def custom_loss():
    def loss(y_true, y_pred):
        # this line causes the error
        return K.mean(K.square(y_true - y_pred) * model.inputs[1], axis=-1)
        # this line doesn't cause the error
        # return K.mean(K.square(y_true - y_pred), axis=-1)
    return loss

model.compile(optimizer=tf.keras.optimizers.SGD(),
              loss=custom_loss(), metrics=['accuracy'])
for i in range(2):
    print(i)
    model.train_on_batch([x_data, x_mask], y)
```

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. |
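The intent of the custom loss above — a binary mask zeroing out some output positions before averaging — can be written down without Keras at all. A minimal plain-Python sketch (the function name and list-based layout are mine, for illustration only):

```python
def masked_mse(y_true, y_pred, mask):
    """Mean squared error where positions with mask == 0 contribute nothing."""
    assert len(y_true) == len(y_pred) == len(mask)
    errs = [(t - p) ** 2 * m for t, p, m in zip(y_true, y_pred, mask)]
    return sum(errs) / len(errs)

# only the first position counts: (1-0)^2 * 1 = 1, averaged over 2 positions
print(masked_mse([1.0, 2.0], [0.0, 0.0], [1, 0]))  # 0.5
```

In eager TF 2.x, a common workaround for the symbolic-tensor error is to avoid capturing `model.inputs[1]` in the loss closure — for example by folding the mask into `y_true` (concatenating it and splitting inside the loss) or by computing the masked term with `model.add_loss`.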
tensorflow/tensorflow | Weird hang of RNN in TF 2.2 | Bug | System information: this bug exists in TF v2.2.0-rc4-8-g2b96f3662b 2.2.0, on Windows and Linux, both CPU and GPU builds. It does not exist in TF 2.1.

Describe the current behavior — just check this code:

```python
import numpy as np
import tensorflow as tf

rnn = tf.keras.layers.GRU(3)
rnn(tf.keras.Input((None, 2)), tf.keras.Input((3,)))

@tf.function(input_signature=[tf.TensorSpec(shape=[None, None, 2]),
                              tf.TensorSpec(shape=[None, 3])])
def test(x, i):
    with tf.GradientTape(persistent=True) as tape:
        rnn(x, initial_state=i)

test(np.random.randn(3, 2, 2).astype(np.float32),
     np.random.randn(3, 3).astype(np.float32))
```

The code hangs in `test` and memory grows without bound. It also hangs if `GRU` is replaced by `LSTM`, but runs fine with `SimpleRNN`. This bug is a little complicated, because the hang only shows up when all of the following are present: the `rnn(tf.keras.Input(...), ...)` call, `tf.function` with `input_signature`, and `with tf.GradientTape(persistent=True)`. |
tensorflow/tensorflow | Different sha256 for mkl-dnn | Bug | In `tensorflow/workspace.bzl`:

```python
tf_http_archive(
    name = "mkl_dnn",
    build_file = clean_dep("//third_party/mkl_dnn:mkldnn.BUILD"),
    sha256 = "31e78581e59d7e60d4becaba3834fc6a5bf2dccdae3e16b7f70d89ceab38423f",
    strip_prefix = "mkl-dnn-0.21.3",
    urls = [...],
)
```

When mkl-dnn is downloaded from the first URL, the sha256 is `31e78581e59d7e60d4becaba3834fc6a5bf2dccdae3e16b7f70d89ceab38423f`, which is right. But when mkl-dnn is downloaded from the second URL, the sha256 is `a0211aeb5e7dad50b97fa5dffc1a2fe2fe732572d4164e1ee8750a2ede43fbec`, which fails to compile. When I unzip the compressed package downloaded from the second URL, its top-level directory is `onednn-0.21.3`, but the files inside are exactly the same; maybe this is the reason. Hope you can check it and fix it, thanks. |
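The mismatch makes sense once you remember that the checksum covers the raw archive bytes, not the files inside: two archives with identical members but different top-level directory names hash differently. A minimal sketch of such a check using only the standard library (the byte strings are made up for illustration):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Same member content, different top-level directory name in the path:
a = b"mkl-dnn-0.21.3/README\nsame contents\n"
b_ = b"onednn-0.21.3/README\nsame contents\n"

# The digests differ even though the "file contents" are identical,
# because the path bytes are part of the archive stream.
print(sha256_of(a) == sha256_of(b_))  # False
```

This is why Bazel's `tf_http_archive` pins a single `sha256`: any mirror that re-packs the same sources under a renamed prefix produces a different digest and fails verification.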
tensorflow/tensorflow | TF 2.2 API docs: `tf.keras.applications.resnet_v2.preprocess_input` Returns paragraph is misleading | Bug | The documentation says in the Returns paragraph that the images are converted from RGB to BGR, then each color channel is zero-centered with respect to the ImageNet dataset, without scaling. However, according to the function definition (L125-L139), that is not true anymore: the `mode` parameter cannot be set and is always equal to `'tf'`. Therefore this part of the docs must be corrected to: "will scale pixels between -1 and 1, sample-wise". |
tensorflow/tensorflow | "ops executing eagerly outside functions" AssertionError when using `train_on_batch` with MirroredStrategy and disabled eager execution | Bug | System information: custom code: yes; OS: Ubuntu 16.04; TF installed from: binary; TF version: 2.2.0 (v2.2.0-rc4-8-g2b96f3662b 2.2.0); Python 3.6; CUDA: 10.1; GPUs: Quadro RTX.

Describe the current behavior: multi-GPU training (MirroredStrategy) with the Keras `train_on_batch` API and disabled eager execution causes the following AssertionError:

```
Traceback (most recent call last):
  File "mini_test.py", line 27, in <module>
    model.train_on_batch(x, y)
  File ".../tensorflow/python/keras/engine/training_v1.py", line 1050, in train_on_batch
    outputs = training_eager.train_on_batch(...)
  File ".../tensorflow/python/keras/engine/training_eager.py", line 316, in train_on_batch
    ...
  File ".../tensorflow/python/keras/engine/training_eager.py", line 250, in _process_single_batch
    with backend.eager_learning_phase_scope(1 if training else 0):
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File ".../tensorflow/python/keras/backend.py", line 456, in eager_learning_phase_scope
    assert ops.executing_eagerly_outside_functions()
AssertionError
```

Describe the expected behavior: the error only occurs when `train_on_batch`, MirroredStrategy, and `disable_eager_execution()` are all used together. Turning off MirroredStrategy or `disable_eager_execution()` makes the error disappear. Replacing `model.train_on_batch` with `model.fit` also makes the error go away.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np
from tensorflow.python.framework.ops import disable_eager_execution

disable_eager_execution()
strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
x = np.zeros((32, 28, 28))
y = np.zeros((32,))

with strategy.scope():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(),
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
for i in range(2):
    model.train_on_batch(x, y)
print(tf.executing_eagerly())
```

Other info / logs: replacing `model.train_on_batch` with `model.fit` (code below) makes the error go away. Commenting out `disable_eager_execution()` or `with strategy.scope():` also makes the error disappear.

```python
import tensorflow as tf
import numpy as np
from tensorflow.python.framework.ops import disable_eager_execution

disable_eager_execution()
strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
with strategy.scope():
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0
    train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(512)
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(),
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    model.fit(train_dataset, epochs=2)
print(tf.executing_eagerly())
```
|
tensorflow/tensorflow | TF 2.x can't return a Variable as a tf.keras Model output | Bug | System information: reproduced in Colab (currently TensorFlow 2.2); reproduced on Debian Buster, TensorFlow 2.1 CPU built from source.

Describe the current behavior:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=())  # shape elided in the original report
var = tf.Variable(3.0)

tf.keras.Model(inputs=inputs, outputs=inputs)        # OK
tf.keras.Model(inputs=inputs, outputs=inputs + var)  # AttributeError exception
tf.keras.Model(inputs=inputs, outputs=var)           # AttributeError exception
```

The same happens with variants like `tf.identity(var)`.

Describe the expected behavior: no exception raised. Other info / logs: same as [issue reference elided], but that was filed on the wrong repo, I think. |
tensorflow/tensorflow | saved_model README.md uses deprecated code | Bug | URL(s) with the issue: [link elided]. Description of issue (what needs changing): that documentation uses deprecated code such as `tf.Session`. Submitting a pull request: no, because I don't really know how it should be used now. |
tensorflow/tensorflow | Bug in tf.keras Bidirectional LSTM when time_major is True | Bug | System information: custom code: yes; OS: macOS Mojave 10.14.6; TF installed from: binary; TF version: v2.2.0-rc4-8-g2b96f3662b 2.2.0; Python 3.6.3.

Bug description: when using a `Bidirectional` layer wrapping forward/backward LSTMs with `time_major=True` and `merge_mode='concat'` (the same issue exists in other modes too), it produces incorrect results due to the line at L658 (`y_rev = K.reverse(y_rev, 1)`). When `time_major=True`, the input shape of the bidirectional LSTM is `(seq_len, batch_size, hidden_size)`: axis 1 represents the batch dimension, so we end up reversing `y_rev` along the batch dimension before concatenation, while it should have been reversed along the dimension representing `seq_len` (axis 0). This works fine when the LSTM is `time_major=False`, as in that case axis 1 represents `seq_len`. Ideally, the code should check which axis (0 or 1) represents the time dimension and reverse along that axis, instead of generically reversing along axis 1. Could you please fix this bug, as the time-major version of the LSTMs is more efficient?

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

seq_len = 2
batch_size = 1
feature_dim = 1

inputs = tf.keras.Input(shape=(seq_len, feature_dim))
# transpose input to be time-major
input_transposed = tf.transpose(inputs, perm=[1, 0, 2])
output = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(1, return_sequences=True, time_major=True),
    name='bi')(input_transposed)
model = tf.keras.Model(inputs=inputs, outputs=output)

# set all the weights to one for simplicity
rnn_layer = model.get_layer('bi')
new_w = [np.ones(x, dtype=np.float32)
         for x in [(feature_dim, 4), (1, 4), (4,)] * 2]
rnn_layer.set_weights(new_w)
model.save('test.h5')

x = np.ones((batch_size, seq_len, feature_dim), dtype=np.float32)
expected = model.predict(x)
print(expected)
```

The expected result is `[[0.6082834, 0.87263733], [0.87263733, 0.6082834]]`, which is `[forward seq1, backward seq1], [forward seq2, backward seq2]`. But what we get is `[[0.6082834, 0.6082834], [0.87263733, 0.87263733]]`, i.e. `[forward seq1, backward seq2], [forward seq2, backward seq1]`. |
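The axis mix-up described above can be illustrated without TensorFlow. A minimal sketch with nested lists, where a sequence is laid out either batch-major `[batch][time]` or time-major `[time][batch]` (the function name and layouts are mine, not from the report):

```python
def reverse_time(seq, time_major):
    """Reverse a 2-D nested list along its time axis.

    batch-major layout seq[batch][time] -> reverse the inner lists (axis 1)
    time-major  layout seq[time][batch] -> reverse the outer list  (axis 0)
    """
    if time_major:
        return list(reversed(seq))                    # flip axis 0 (time)
    return [list(reversed(row)) for row in seq]       # flip axis 1 (time)

# one batch element, two time steps: t1 = 1, t2 = 2
bm = [[1, 2]]        # [batch][time]
tm = [[1], [2]]      # [time][batch]

print(reverse_time(bm, time_major=False))  # [[2, 1]]
print(reverse_time(tm, time_major=True))   # [[2], [1]]
# The bug: always reversing axis 1 on time-major data flips batches,
# not time — with batch_size 1 the data comes back unchanged:
print(reverse_time(tm, time_major=False))  # [[1], [2]]
```

The last call reproduces the symptom in the report: the backward LSTM's output is never actually time-reversed, so forward and backward steps end up misaligned.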
tensorflow/tensorflow | Error with my customized layer: bound method could not be transformed, "Bad argument number for Name: 3, expecting 4" | Bug | System information: OS: Ubuntu 16.04; TF version: 1.14; Python 3.7.

Standalone code to reproduce the issue: I would like to customize a Dropout layer in which I can cache and reset the dropout mask manually. The code is as below:

```python
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops, math_ops, random_ops
from tensorflow.python.keras.utils import tf_utils
from tensorflow.keras.layers import Layer

class DropoutControl(Layer):
    def __init__(self, rate, seed=None, **kwargs):
        super(DropoutControl, self).__init__(**kwargs)
        self.rate = rate
        self.seed = seed
        self.cached_dropout_mask = None

    def reset_dropout(self):
        rate = ops.convert_to_tensor(self.rate, dtype=self.input_dtype, name='rate')
        random_tensor = random_ops.random_uniform(shape=self.shape,
                                                  seed=self.seed,
                                                  dtype=self.input_dtype)
        keep_prob = 1 - rate
        scale = 1 / keep_prob
        keep_mask = random_tensor >= rate
        self.cached_dropout_mask = scale * math_ops.cast(keep_mask, self.input_dtype)

    def get_dropout_mask(self):
        return self.cached_dropout_mask

    def call(self, inputs, training=None):
        if self.cached_dropout_mask is None:
            self.shape = array_ops.shape(inputs)
            self.input_dtype = inputs.dtype
            self.reset_dropout()

        def dropped_inputs():
            return inputs * self.cached_dropout_mask

        output = tf_utils.smart_cond(training, dropped_inputs,
                                     lambda: array_ops.identity(inputs))
        return output

    def compute_output_shape(self, input_shape):
        return input_shape

    def get_config(self):
        config = {'rate': self.rate, 'seed': self.seed}
        base_config = super(DropoutControl, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
```

Describe the current behavior: when I use this layer, the system gives me the following warning. What is this problem — is it an issue that I have to fix?

```
WARNING:tensorflow: Entity <...> could not be transformed and will be executed as-is.
Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10
(on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: converting <...>: AssertionError: Bad argument number for Name: 3, expecting 4
```

Thanks ahead. |
tensorflow/tensorflow | lecun_normal and he_normal crash when type casting: TypeError: he_normal() got an unexpected keyword argument 'dtype' | Bug | System information: custom code: yes; OS: Ubuntu 18.04; TF installed from: unknown; TF version: 2.1.0; Python 3.6.10; no GPU.

Describe the expected behavior: successful type casting in a `tf.data.Dataset.map` function.

Standalone code to reproduce the issue:

```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense
import tensorflow_datasets as tfds
from tensorflow.keras.initializers import glorot_normal, he_normal, lecun_normal

dataset, info = tfds.load('binary_alpha_digits', with_info=True, split='train')
data = dataset.map(lambda x: (tf.cast(x['image'], tf.float32), x['label'])).batch(8)

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.layer1 = Dense(16, kernel_initializer=he_normal())
        self.layer2 = Dense(units=info.features['label'].num_classes)

    def call(self, inputs, training=None, **kwargs):
        x = self.layer1(inputs)
        x = self.layer2(x)
        return x

model = MyModel()
model(next(iter(data))[0])
# glorot_normal works; he_normal doesn't work; lecun_normal doesn't work
```
|
tensorflow/tensorflow | tf.keras.Model docs missing get_weights method and metrics property | Bug | URL(s) with the issue: [link elided]. Description of issue (what needs changing): the docs are missing the `get_weights`/`set_weights` methods and the `metrics` property. The `get_weights` method and the `metrics` property are defined in the source (L190-L197, L361-L410) but not in the generated docs; the `set_weights` method is mentioned in the Keras docs ("set_weights method"). |
tensorflow/tensorflow | Equation not being rendered in docs page | Bug | URL(s) with the issue: [link elided]. Description of issue (what needs changing): the equation after "the optimal Q-function obeys the following Bellman optimality equation" is not rendered correctly. This is what I see on my screen (raw LaTeX instead of a rendered equation):

```
\begin{equation}
  Q^{*}(s, a) = \mathbb{E}\left[ r + \gamma \max_{a'} Q^{*}(s', a') \right]
\end{equation}
```
|
tensorflow/tensorflow | Simple quantization-aware training TF tutorial throws a warning | Bug | System information: OS: Colab; TF version: 2.2.

The output from the converter invocation:

```
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py:465:
set_learning_phase (from tensorflow.python.keras.backend) is deprecated and will be removed after 2020-10-11.
Instructions for updating:
Simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/tracking.py:105:
Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
INFO:tensorflow:Assets written to: /tmp/tmpjvd7obgu/assets
```

Failure details: there is no failure as such, but there is a warning that is not clear to a common user. It would be helpful if the warning guided the user to the root cause of the issue, so that the user would update the `training` argument. |
tensorflow/tensorflow | TF_SessionRun: "Input to reshape" error, requested shape is always tensor size squared | Bug | System information: custom code: yes; OS: Ubuntu 16.04 / Ubuntu 18.04; TF installed from: binary (pip, C API downloaded from tensorflow.org); TF version: v2.1.0-rc2-17-ge5bf8de 2.1.0; C API: "Hello from TensorFlow C library version 1.15.0"; Python 2.7.12.

Describe the current behavior: loading a SavedModel with the C API fails to run the graph; all input placeholders fail in the Reshape operation. For example:

```
TF_SessionRun status 3: Input to reshape is a tensor with 3715 values, but the requested shape has 13801225
TF_SessionRun status 3: Input to reshape is a tensor with 2 values, but the requested shape has 4
TF_SessionRun status 3: Input to reshape is a tensor with 1422 values, but the requested shape has 2022084
```

The number of values in the requested shape is always the expected number of values squared. I suspect the following line of code plays a role in this (L139). `TF_LoadSessionFromSavedModel` loads the model successfully, and it correctly fails to run if I don't provide all the inputs. I can see from stderr that it is calling the init op:

```
2020-05-15 19:18:37.508528: I tensorflow/cc/saved_model/loader.cc:202] Restoring SavedModel bundle.
2020-05-15 19:18:37.529034: I tensorflow/cc/saved_model/loader.cc:151] Running initialization op on SavedModel bundle at path: /home/james/models/bq-1
2020-05-15 19:18:38.585963: I tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: success. Took 1097826 microseconds.
```

More on the StackOverflow question. Signature def name: `serving_default`; method name: `tensorflow/serving/predict`.

Describe the expected behavior: `TF_LoadSessionFromSavedModel` outputs a runnable graph, or, if extra steps are required to run a SavedModel with the C API, they should be documented. Using the JNI code as an example (L132), I cannot find extra steps there. I cannot use the C++ shared lib because of the "No session factory registered for the given session options" error (target/config; "Registered factories are {}") that I cannot get around.

Standalone code to reproduce the issue: I don't think this is possible. The operations that fail are all populated by files in the assets directory. The values in "tensor with N values" are correct; the requested shapes are not. The code is pretty much just a standard C API block, influenced by the JNI code above:

```
session = TF_LoadSessionFromSavedModel(...);
/* for each input  */ inputs.push_back(TF_GraphOperationByName(...));
/* for each output */ outputs.push_back(TF_GraphOperationByName(...));
TF_SessionRun(session, ..., inputs, ..., outputs, ...);
```
|
tensorflow/tensorflow | tf.convert_to_tensor throws ValueError for a tf.float64 tensor with dtype=tf.float32 instead of silently casting | Bug | I think this is a bug; at least it is inconsistent behavior. System information: custom code: no, just the minimal example to reproduce the error message; OS: Ubuntu 18.04 (PC); TF installed from: binary; TF version: v2.1.0-rc2-17-ge5bf8de 2.1.0; Python 3.7.6 (default, Jan 8 2020, 19:59:22).

Describe the current behavior: `tf.convert_to_tensor` accepts NumPy `np.float64` arrays when `dtype=tf.float32` and returns a tf.float32 tensor. If the argument is instead a tensor of type tf.float64, an error is thrown rather than a tf.float32 tensor being returned.

Describe the expected behavior: I would expect `tf.convert_to_tensor` to treat the float64 tensor and the float64 ndarray the same. In my understanding, it is used generally by all TensorFlow operators and should accept a broad range of inputs. I see no reason to reject tensors but accept ndarrays.

Standalone code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

v_np = np.ones(shape=(3,), dtype=np.float64)
print('a:', v_np, type(v_np[0]))
v_tf = tf.constant([1.0, 1.0, 1.0], dtype=tf.float64)
print('b:', v_tf, type(v_tf[0].numpy()))
print('c:', tf.convert_to_tensor(v_np, dtype=tf.float32))
print('d:', tf.convert_to_tensor(v_tf, dtype=tf.float32))
```

Output:

```
a: [1. 1. 1.] <class 'numpy.float64'>
b: tf.Tensor([1. 1. 1.], shape=(3,), dtype=float64) ...
c: tf.Tensor([1. 1. 1.], shape=(3,), dtype=float32)
Traceback (most recent call last):
  File ".../tf_bug.py", line 9, in <module>
    print('d:', tf.convert_to_tensor(v_tf, dtype=tf.float32))
  File ".../tensorflow_core/python/framework/ops.py", line 1256, in convert_to_tensor_v2
    as_ref=False
  File ".../tensorflow_core/python/framework/ops.py", line 1290, in convert_to_tensor
    (dtype.name, value.dtype.name, value))
ValueError: Tensor conversion requested dtype float32 for Tensor with dtype float64
```
|
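The inconsistency the report describes boils down to two conversion policies applied to two input kinds: one path silently casts, the other rejects on dtype mismatch. A pure-Python sketch of the two policies (the function name and parameters are mine; this only illustrates the behavioral difference, not TensorFlow's internals):

```python
def convert(value, target, strict):
    """Convert `value` to `target` type.

    strict=False mimics the ndarray path: silently cast on mismatch.
    strict=True  mimics the Tensor path: raise ValueError on mismatch.
    """
    if type(value) is target:
        return value
    if strict:
        raise ValueError(
            f"conversion requested {target.__name__} "
            f"for value with type {type(value).__name__}")
    return target(value)

print(convert(1.0, int, strict=False))  # 1 (silent cast)
try:
    convert(1.0, int, strict=True)
except ValueError as e:
    print(e)  # conversion requested int for value with type float
```

Framed this way, the report's complaint is that the same call site gets `strict=False` semantics for ndarrays and `strict=True` semantics for tensors; an explicit `tf.cast(v_tf, tf.float32)` before the call is the usual way to sidestep the strict path.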
tensorflow/tensorflow | Keras `training` parameter value incorrect | Bug | TensorFlow version: 2.1.0.

```python
import tensorflow as tf
import numpy as np

class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
        self.dropout = tf.keras.layers.Dropout(0.5)

    def call(self, inputs, training=None):
        x = self.dense1(inputs)
        if training is True:
            print('in training')
            x = self.dropout(x, training=training)
        elif training is None:
            print('training None')
        else:
            print('not in training')
        return self.dense2(x)

model = MyModel()
optimizer = tf.keras.optimizers.Adam(1e-4)
loss = tf.keras.losses.CategoricalCrossentropy()
model.compile(optimizer, loss)
x = tf.random.normal((5, 5))  # shapes partially elided in the original report
y = tf.ones((5, 5))
model.fit(x, y, epochs=1)
```

The system reports "not in training" first, then "in training". |
tensorflow/tensorflow | Need fast help: TensorFlow training error | Bug | Hi, I am trying to code an AI with a linear regression algorithm in TensorFlow, but my model gives an error message when I try to train it. This is my code:

```python
linear_est.train(train_input_fn)             # train
result = linear_est.evaluate(eval_input_fn)  # get model metrics/stats by testing on test data
clear_output()                               # clears console output
print(result)  # the result variable is simply a dict of stats about our model
```

and it says:

```
InvalidArgumentError: assertion failed: [Labels must be <= n_classes - 1]
[Condition x <= y did not hold element-wise:]
[x (head/losses/Cast:0) = ] [8 10 2 ...] [y (head/losses/check_label_range/Const:0) = ] [1]
	 [[node ...Assert...]]
```

Take a look at the rest if you want to [link elided]. Big thanks and good regards, Justin |
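The assertion means the estimator head was built for a small number of classes (here `y = 1`, i.e. `n_classes = 2`) while the label column contains values like 8 and 10. A small pre-flight check along these lines (plain Python, not part of the report) catches this before training starts:

```python
def check_label_range(labels, n_classes):
    """Raise if any label falls outside [0, n_classes - 1]."""
    bad = [l for l in labels if not (0 <= l <= n_classes - 1)]
    if bad:
        raise ValueError(
            f"Labels must be <= n_classes - 1 ({n_classes - 1}); "
            f"offending values: {sorted(set(bad))}")
    return True

print(check_label_range([0, 1, 1, 0], n_classes=2))  # True
try:
    check_label_range([8, 10, 2], n_classes=2)       # reproduces the failure mode
except ValueError as e:
    print(e)
```

The usual fixes are either to build the head with the true number of classes, or to re-encode the raw label values into the contiguous range `0..n_classes-1` before feeding them in.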
tensorflow/tensorflow | TFLite fails to create Hexagon delegate on OnePlus 5 | Bug | System information: custom code: yes; OS: Linux (Pop!_OS 18.04); mobile device: OnePlus 5; TF installed from: binary; TF version: 2.2; Python 3.7.

Describe the current behavior. Device information:

```
adb shell getprop ro.product.device   # OnePlus5
adb shell getprop ro.board.platform   # msm8998
```

App layout (`app/src/main/jniLibs/`):

```
arm64-v8a/    libhexagon_nn_skel.so  libhexagon_nn_skel_v65.so  libhexagon_nn_skel_v66.so
armeabi-v7a/  libhexagon_nn_skel.so  libhexagon_nn_skel_v65.so  libhexagon_nn_skel_v66.so
```

`app/build.gradle`:

```gradle
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'

android {
    compileSdkVersion 29
    buildToolsVersion "29.0.3"
    defaultConfig {
        applicationId "com.example.sr_tflite"
        minSdkVersion 25
        targetSdkVersion 29
        versionCode 1
        versionName "1.0"
        ndk { abiFilters 'armeabi-v7a', 'arm64-v8a' }
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
    aaptOptions { noCompress "tflite" }
    // To inline the bytecode built with JVM target 1.8 into bytecode
    // being built with JVM target 1.6 (e.g. navArgs)
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
    kotlinOptions { jvmTarget = "1.8" }
}

dependencies {
    def tfl_version = "0.0.0-nightly"
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'androidx.core:core-ktx:1.2.0'
    implementation 'com.google.android.material:material:1.1.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    implementation 'androidx.navigation:navigation-fragment-ktx:2.0.0'
    implementation 'androidx.navigation:navigation-ui-ktx:2.0.0'
    implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'
    implementation 'org.tensorflow:tensorflow-lite-hexagon:0.0.0-nightly'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
    implementation("org.tensorflow:tensorflow-lite:$tfl_version") { changing = true }
    implementation("org.tensorflow:tensorflow-lite-gpu:$tfl_version") { changing = true }
    implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly'
}
```

I followed the TensorFlow Lite Hexagon delegate guide on the OnePlus 5, but TensorFlow Lite fails to create the Hexagon delegate. Logs:

```
2020-05-14 16:26:19.906 I/System.out: /data/app/com.example.sr_tflite-.../lib/arm64
2020-05-14 16:26:19.947 V/adsprpc fastrpc_apps_user.c:1859: Successfully created user PD on domain 0 (attrs 0x0)
2020-05-14 16:26:19.960 V/adsprpc fastrpc_apps_user.c:270: RPC latency thread started
2020-05-14 16:26:19.961 E/adsprpc apps_std_imp.c:729: Error 45: fopen failed for oemconfig.so (No such file or directory)
2020-05-14 16:26:19.961 E/adsprpc apps_std_imp.c:729: Error 45: fopen failed for libhexagon_nn_skel.so (No such file or directory)
2020-05-14 16:26:19.961 D/adsprpc fastrpc_apps_user.c:983: Error 0xfffffffb: remote_handle_open domain failed; domain 0,
    name file:///libhexagon_nn_skel.so?hexagon_nn_domains_skel_handle_invoke&_modver=1.0&_dom=adsp,
    dlerror: cannot open oemconfig.so
2020-05-14 16:26:19.961 D/adsprpc fastrpc_apps_user.c:920: Error 0xffffffff: remote handle invoke failed; domain 0, handle 0, sc 0x1010200
2020-05-14 16:26:19.961 D/adsprpc fastrpc_apps_user.c:1034: Error 0xffffffff: remote handle close failed
2020-05-14 16:26:19.961 D/adsprpc fastrpc_apps_user.c:1020: Error 0xfffffffb: remote_handle64_open failed;
    name file:///libhexagon_nn_skel.so?hexagon_nn_domains_skel_handle_invoke&_modver=1.0&_dom=adsp
2020-05-14 16:26:19.961 W/tflite: Failed to fetch hexagon-nn version. This might be because you're using
    incompatible versions of libhexagon_interface and libhexagon_nn_skel. You must use compatible versions.
    Refer to the TensorFlow Lite Hexagon delegate guide.
2020-05-14 16:26:19.961 I/tflite: Hexagon delegate is not supported.
2020-05-14 16:26:19.962 D/AndroidRuntime: Shutting down VM
2020-05-14 16:26:19.966 E/AndroidRuntime: FATAL EXCEPTION: main
    Process: com.example.sr_tflite, PID: 27848
    java.lang.UnsupportedOperationException: This device doesn't support hexagon DSP execution.
        at org.tensorflow.lite.experimental.HexagonDelegate.<init>(HexagonDelegate.java:40)
        at com.example.sr_tflite.MainActivity$onCreate$1.onClick(MainActivity.kt:97)
        at android.view.View.performClick(View.java:6669)
        ...
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:858)
2020-05-14 16:26:19.977 D/OSTracker: OS Event: crash
2020-05-14 16:26:19.995 I/Process: Sending signal. PID: 27848 SIG: 9
```

I downloaded the latest hexagon_nn_skel (run v1.17) from the download page, but it still says:

```
Failed to fetch hexagon-nn version. This might be because you're using incompatible versions
of libhexagon_interface and libhexagon_nn_skel. You must use compatible versions.
Refer to the TensorFlow Lite Hexagon delegate guide.
Hexagon delegate is not supported.
```

Do I need to pull some other .so files from somewhere else, or is my device not supported? Thanks.

Describe the expected behavior: the DSP delegate should be created, as my SoC is in the list of supported hardware. |
tensorflow/tensorflow | Failing to pass tf.data.Dataset objects to a multi-input tf.keras model | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 18.04
- Mobile device: nil
- TensorFlow installed from source or binary: binary
- TensorFlow version: 2.1.0
- Python version: 3.6
- CUDA/cuDNN version: 10.1 / 7.6.4
- GPU model and memory: RTX 2070 Super, 8 GB RAM

Describe the current behavior:
ValueError: Failed to find data adapter that can handle input containing keys and values. The error is encountered when trying to pass multiple tf.data.Dataset objects to my multi-input Keras model. This is done by passing a dictionary whose keys correspond to the names of each of my input layers and whose values correspond to each tf.data.Dataset object. I have no issue passing the same dataset object to a model taking a single input, but when I try to combine multiple model instances into a single model for an ensemble CNN, this error is encountered.

My network and ensemble code:

    def net(inputs):
        base_model = DenseNet121(include_top=False, weights='imagenet', input_tensor=inputs)
        outputs = base_model.get_layer('pool3_conv').output
        x = Conv2D(128, 3, activation='relu', padding='same')(outputs)
        x = BatchNormalization()(x)
        x = Conv2D(64, 3, activation='relu', padding='same')(x)
        x = BatchNormalization()(x)
        x = Flatten()(x)
        x = Dense(2, activation='softmax', name='clf_output')(x)
        model = tf.keras.models.Model(inputs=base_model.input, outputs=x)
        return model

    def create_ensemble(models):
        for i in range(len(models)):  # each model is a net() object
            model = models[i]
            for layer in model.layers[1:]:
                layer.trainable = False
                layer._name = 'ensemble_' + str(i + 1) + '_' + layer.name
        stack_inputs = [model.input for model in models]
        stack_outputs = [model.output for model in models]
        merge = concatenate(stack_outputs)
        x = Dense(16, activation='relu')(merge)
        x = Dense(2, activation='softmax')(x)
        model = tf.keras.models.Model(inputs=stack_inputs, outputs=x, name='ensemble')
        return model

My training process is summarized as follows:
1. Train 5 instances of the DenseNet121 model in sequence.
2. Create a new ensemble combining the outputs of the previously trained DenseNet121 models.
3. Train the ensemble model. The error occurs on model.fit.

My training code:

    image_dir = os.path.join(data_dir, 'images')
    depth_dir = os.path.join(data_dir, 'labels')
    val_split = int(np.floor(0.1 * len(list(paths.list_images(image_dir)))))
    print(f'Validation split is {val_split} images')
    imagepaths = sorted(list(paths.list_images(image_dir)))
    depthpaths = sorted(list(paths.list_images(depth_dir)))
    labels = generate_labels(depthpaths)
    image_ds = tf.data.Dataset.from_tensor_slices(imagepaths)
    image_ds = create_image_dataset(image_ds, batch_size=bs, seed=42, training=True)
    label_ds = tf.data.Dataset.from_tensor_slices(labels)
    label_ds = create_label_dataset(label_ds, batch_size=bs, seed=42)
    # splitting into training and validation datasets
    image_train, label_train = image_ds.skip(val_split), label_ds.skip(val_split)
    image_val, label_val = image_ds.take(val_split), label_ds.take(val_split)
    train_ds = tf.data.Dataset.zip((image_train, label_train))
    val_ds = tf.data.Dataset.zip((image_val, label_val))
    # build the models
    models = []
    for i in range(1, n_models + 1):
        inputs = tf.keras.layers.Input(shape=(224, 224, 3), name=f'input_{i}')
        model = net(inputs)
        models.append(model)
    # check model architecture
    print(models[0].summary())
    opt = Lookahead(RAdam(lr=init_lr))
    opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
    # create callbacks
    earlystop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3,
                                                 verbose=1, mode='auto',
                                                 restore_best_weights=True)
    reducelr = tf.keras.callbacks.LearningRateScheduler(reduce_lr)
    # train the models
    print('Training...')
    if args.pretrained == 0:
        for i in range(n_models):
            model = models[i]
            model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
            checkpoint = tf.keras.callbacks.ModelCheckpoint(f'net{i}_backup_weights.h5',
                                                            monitor='val_loss', verbose=1,
                                                            save_weights_only=True,
                                                            save_best_only=True)
            history = model.fit(train_ds, epochs=epochs, verbose=1,
                                steps_per_epoch=(len(imagepaths) - val_split) // bs,
                                validation_data=val_ds,
                                validation_steps=val_split // bs,
                                callbacks=[checkpoint, reducelr])
            # save the model
            print(f'Saving model {i}')
            model.save(f'net{i}.h5', include_optimizer=False)
            model.save_weights(f'net{i}_weights.h5')
    else:
        print('Loading pretrained weights...')
        for i in range(args.pretrained):
            models[i].load_weights(f'net{i}_weights.h5')
    # create ensemble
    print('Creating ensemble...')
    ensemble = create_ensemble(models)
    print('Ensemble architecture:')
    print(ensemble.summary())
    keras.utils.plot_model(ensemble, 'model_ensemble.png', show_shapes=True)
    ensemble.save('ensemble.h5')
    ensemble_train, ensemble_val = {}, {}
    for i in range(1, args.pretrained + 1):
        ensemble_train[f'input_{i}'] = train_ds
        ensemble_val[f'input_{i}'] = val_ds
    assert len(ensemble_train) == int(args.pretrained)
    assert len(ensemble_val) == int(args.pretrained)
    ensemble.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
    ensemble_checkpoint = tf.keras.callbacks.ModelCheckpoint('ensemble_backup_weights.h5',
                                                             monitor='val_loss', verbose=1,
                                                             save_weights_only=True,
                                                             save_best_only=True)
    history = ensemble.fit(ensemble_train, epochs=epochs, verbose=1,
                           steps_per_epoch=(len(imagepaths) - val_split) // bs,
                           validation_data=ensemble_val,
                           validation_steps=val_split // bs,
                           callbacks=[ensemble_checkpoint, reducelr])
    print('Saving ensemble...')
    ensemble.save('ensemble.h5', include_optimizer=False)
    ensemble.save_weights('ensemble_weights.h5')

    if __name__ == '__main__':
        ap = argparse.ArgumentParser()
        ap.add_argument('--pretrained', default=0, type=int,
                        help='number of pretrained models to use')
        main(ap.parse_args())
        sess.close()

Output of print: train_ds yields batches of size 16; number of classes: 2.

Describe the expected behavior:
The tf.data.Dataset objects should not require a y value to be passed to model.fit explicitly, since I have combined the images and labels through tf.data.Dataset.zip.
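A pattern that avoids the "failed to find data adapter" error in this situation is to zip a dict of feature datasets together with the label dataset into one tf.data.Dataset, rather than passing a dict of separate Dataset objects to fit(). A minimal sketch with tiny dummy tensors; the names `input_1`/`input_2` are assumptions mirroring the naming scheme above, not the reporter's actual code:

```python
import tensorflow as tf

# Two feature streams and one label stream as separate datasets,
# mirroring the multi-input setup above (tiny dummy tensors).
img1 = tf.data.Dataset.from_tensor_slices(tf.zeros([8, 4]))
img2 = tf.data.Dataset.from_tensor_slices(tf.ones([8, 4]))
labels = tf.data.Dataset.from_tensor_slices(tf.zeros([8, 2]))

# Zip the features into a dict keyed by input-layer name, together with
# the labels, so fit() receives ONE dataset yielding (features, y) pairs.
train_ds = tf.data.Dataset.zip(({'input_1': img1, 'input_2': img2}, labels)).batch(4)

features, y = next(iter(train_ds))
```

With input layers named `input_1` and `input_2`, `model.fit(train_ds)` can then consume this single dataset directly.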
tensorflow/tensorflow | tf.signal.rfft documentation refers to Tcomplex as an argument | Bug | URL(s) with the issue: (tf.signal.rfft documentation page). Description of issue (what needs changing): the table of args in the documentation includes "Tcomplex: an optional tf.DType from: tf.complex64, tf.complex128. Defaults to tf.complex64." But the function does not accept this argument; calling tf.signal.rfft(..., Tcomplex=...) results in the error "TypeError: rfft() got an unexpected keyword argument 'Tcomplex'". This makes sense given the signature in the documentation, tf.signal.rfft(input_tensor, fft_length=None, name=None), and the source (L114-L140). Submitting a pull request? No, I could not find where this table is generated in the code.
tensorflow/tensorflow | Invalid results when running TFLite ruy computations within a Node.js v11 addon on ARMv7 | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes (libdeepspeech.so)
- OS platform and distribution: Raspbian Buster, Armbian Buster
- Mobile device: n/a
- TensorFlow installed from source or binary: from source, r2.2 / master
- TensorFlow version: r2.2 / master
- Python version: n/a
- Bazel version: 2.0.0
- GCC compiler version: gcc 6.5.0 (RPi toolchain integrated in TensorFlow), gcc 7.2.1 (Linaro toolchain custom-added to TensorFlow)
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

Describe the current behavior: model computations differ when running the library inside a Node.js process (v11.0.0) on ARMv7 hardware.

Describe the expected behavior: model computations should be the same.

Standalone code to reproduce the issue: the reproduction environment is complicated for now (one needs to build libdeepspeech, build the Node.js addon, install and run it, and compare to a non-Node.js run); we are working on a much smaller one. As of now, how much simpler would this need to be? Our setup is a bit complicated: our model uses floats as input, so we need EvalHybrid to use the thread-enabled fast path enabled by -DTFLITE_WITH_RUY_GEMV.

Building for Android:

    PYTHON_BIN_PATH=/usr/bin/python PYTHON_LIB_PATH=/usr/local/lib/python2.7/dist-packages TF_ENABLE_XLA=0 TF_NEED_OPENCL_SYCL=0 TF_NEED_CUDA=0 TF_NEED_ROCM=0 TF_NEED_MPI=0 TF_DOWNLOAD_CLANG=0 CC_OPT_FLAGS="-march=native -Wno-sign-compare" TF_SET_ANDROID_WORKSPACE=1 ANDROID_NDK_HOME=$HOME/Documents/codaz/Mozilla/DeepSpeech/android/android-ndk-r18b ANDROID_NDK_API_LEVEL=21 ANDROID_SDK_HOME=$HOME/Documents/codaz/Mozilla/DeepSpeech/android/SDK ANDROID_API_LEVEL=27 ANDROID_BUILD_TOOLS_VERSION=28.0.3 ./configure
    bazel clean
    bazel build -s --verbose_failures --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++11 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so

Run on Android (Nokia 1.3, QM215 Cortex-A53 SoC):

    drx:/data/local/tmp $ LD_LIBRARY_PATH=$(pwd) ./deepspeech --model model_ldc93s1_16_2000.tflite --audio LDC93S1_pcms16le_1_16000.wav
    TensorFlow: v2.2.0-rc3-31-ga6cee0345c
    DeepSpeech: v0.7.0-30-gbb716efe
    INFO: Initialized TensorFlow Lite runtime.
    audio_format=1 num_channels=1 sample_rate=16000 (desired=16000) bits_per_sample=16 res.buffer_size=93594
    she had your dark suit in greasy wash water all year

Building for RPi3:

    PYTHON_BIN_PATH=/usr/bin/python PYTHON_LIB_PATH=/usr/local/lib/python2.7/dist-packages TF_ENABLE_XLA=0 TF_NEED_OPENCL_SYCL=0 TF_NEED_CUDA=0 TF_NEED_ROCM=0 TF_NEED_MPI=0 TF_DOWNLOAD_CLANG=0 CC_OPT_FLAGS="-march=native -Wno-sign-compare" TF_SET_ANDROID_WORKSPACE=0 ./configure
    bazel clean
    bazel build -s --verbose_failures --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --crosstool_top=@local_config_arm_compiler//:toolchain --cpu=armeabi --define=raspberry_pi_with_neon=true --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --copt=-march=armv7-a --copt=-mfloat-abi=hard --copt=-mfpu=neon-fp-armv8 --copt=-DRASPBERRY_PI --copt=-D_GLIBCXX_USE_CXX11_ABI=0 --copt=-mno-unaligned-access --define=tensorflow_mkldnn_contraction_kernel=0 --define=runtime=tflite --copt=-funsafe-math-optimizations --copt=-ftree-vectorize --copt=-pipe --copt=-DTFLITE_WITH_RUY_GEMV --define=tflite_with_ruy=true -c opt --copt=-pthread --linkopt=-lpthread //native_client:libdeepspeech.so

Run C++ binary on RPi3:

    ./deepspeech --model model_ldc93s1_16_2000.tflite --audio LDC93S1_pcms16le_1_16000.wav
    TensorFlow: v2.2.0-rc3-31-ga6cee0345c
    DeepSpeech: v0.7.0-30-gbb716efe
    she had your dark suit in greasy wash water all year

Run Node.js bindings on RPi3:

    node node_modules/.bin/deepspeech --model model_ldc93s1_16_2000.tflite --audio LDC93S1_pcms16le_1_16000.wav
    Loading model from file model_ldc93s1_16_2000.tflite
    TensorFlow: v2.2.0-rc3-31-ga6cee0345c
    DeepSpeech: v0.7.0-30-gbb716efe
    static napi_value DeepSpeechNAPI::CreateModel(napi_env, napi_callback_info): modelState=0x3287d98
    static napi_value DeepSpeechNAPI::CreateModel(napi_env, napi_callback_info): modelState int64_t=52985240
    Loaded model in 0.004686s.
    Running inference.
    static napi_value DeepSpeechNAPI::SpeechToText(napi_env, napi_callback_info): modelState int64_t=52985240
    static napi_value DeepSpeechNAPI::SpeechToText(napi_env, napi_callback_info): modelState=0x3287d98
    she h yyour drk suit in greasy wash waer all year
    Inference took 2.038s for 2.925s audio file.

Other info / logs: I have tested many hypotheses:
- changed the toolchain to gcc 6.5.0 bundled by TensorFlow (we use a different one by default);
- re-wrote the Node.js SWIG-generated wrapper with N-API in a very basic form;
- repro on master (commit 5be613ef4f3ec2608deed653ab4815bbbcfbe7f8);
- repro on master with new ruy (commit 808ff748e0c7dc746a413fe45fa022d63e6253e8);
- bisected TensorFlow: the first repro is when TFLite ruy gets the ability to run threads (commit be369f57e9e46d03ccd62f1031f9dc484c1016de);
- bisected Node.js: the issue first arises in (link), which is obviously hard to action;
- repro with different model sizes; if the input size is not a multiple of 4 it works (somehow we do not use threads, because of (ruy source, L1210));
- same code and same Node.js version runs fine on ARM64 (Armbian on S905X), which also excludes the SoC itself and the distro;
- repro under Armbian on S905X when running multilib ARMv7;
- repro on RPi3 and RPi4;
- unable to reproduce, or to get an indication of anything weird happening, when running under Valgrind on other platforms; Valgrind on ARMv7 Raspbian seems broken, and Valgrind on ARMv7 Armbian dies because of an unsupported instruction produced by vfmaq_f32 in Eigen;
- disabling the kNeon path in ruy but keeping threads: the computation works;
- disabling threads with kNeon enabled works, obviously;
- verified that the inputs of the model are correct: dumped MFCC vectors, input state and output logits, and verified they differ only under the Node.js runtime (inputs here (L293-L299), outputs here (L308-L316));
- dumped and verified the vector values, and verified the copy functions as well; we run several passes for the audio file, one per small timestep of 320 ms, and the very first output is already broken;
- no problem with the Python bindings, Java (Android), even running concurrent threads, or C++, obviously;
- tried debug builds with no optimization at all;
- model trained on r1.15 and used on r2.2: we produced an r2.2-trained one and the issue is the same.

Current question I am unable to answer: is running under Node.js exposing a bug that we have everywhere but that does not otherwise manifest? V8, used by Node.js, uses both threads and NEON instructions, while ruy's ARM code also uses threads and NEON in hand-written asm.
tensorflow/tensorflow | TextVectorization layer errors with 3D input data | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: Colab
- Mobile device: no
- TensorFlow installed from source or binary: Colab 2.x
- TensorFlow version: v2.2.0-0-g2b96f3662b 2.2.0
- Python version: Colab default
- Bazel / GCC / CUDA/cuDNN / GPU: n/a

Describe the current behavior: the TextVectorization layer limits the input shape to 2D tensors.

Describe the expected behavior: the TextVectorization layer should work with tensors of other ranks as well, e.g.:
- 1D: skip-gram word2vec input (batch of words);
- 2D: default (batch, words) input for many NLP tasks (currently works);
- 3D: fasttext skip-gram input (batch, words, character n-grams).

Other info / logs (traceback):

    ValueError                                Traceback (most recent call last)
    <ipython-input> in <module>
          1 # fails with 1D data
    ----> 2 vectorization(tf.ragged.constant(data1))
    (2 frames)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in set_shape(self, shape)
       1105         raise ValueError(
       1106             "Tensor's shape %s is not compatible with supplied shape %s" ...)
       1107
       1108
       1109   # Methods not supported / implemented for Eager Tensors.
    ValueError: Tensor's shape (11,) is not compatible with supplied shape [None, None]
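For the 3D fasttext-style case, one workaround until the layer supports higher ranks is to merge the inner two ragged dimensions down to the 2D (batch, tokens) shape the layer currently expects. A sketch using RaggedTensor.merge_dims (the tiny n-gram strings are made up for illustration):

```python
import tensorflow as tf

# A 3D ragged batch: batch x words x character-ngrams, the fasttext-style
# input shape the layer rejects.
rt = tf.ragged.constant([[["ab", "bc"], ["cd"]],
                         [["ef"]]])

# Merge dimensions 1..2 so each example becomes a flat token list,
# giving the rank-2 input TextVectorization accepts.
flat = rt.merge_dims(1, 2)
```

The flattened tensor keeps the batch dimension and concatenates every word's n-grams per example.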
tensorflow/tensorflow | Bug caused by creating a custom layer | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: macOS (but it should reproduce on Ubuntu as well)
- TensorFlow installed from source or binary: binary
- TensorFlow version: 1.15
- Python version: 3.6
- GPU model and memory: CPU

I found a bug related to creating custom layers in TF. Here is a short snippet to replicate it:

    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function
    import tensorflow as tf

    class CustomModel(tf.keras.layers.Layer):
        """This class is for the source sequence."""
        def __init__(self):
            super(CustomModel, self).__init__()

        def build(self, input_shape):
            self.kernel = self.add_weight(
                shape=(32, 512, 512),
                initializer=tf.keras.initializers.glorot_uniform(seed=1),
                trainable=True)  # dummy

        def call(self, inputs):
            return inputs[0]

    def main():
        def create_model(source_vocab, target_vocab, relationship_vocab):
            source = tf.keras.layers.Input(dtype='int32', shape=(64,), name='source')
            target = tf.keras.layers.Input(dtype='int32', shape=(64,), name='target')
            relationship = tf.keras.layers.Input(dtype='int32', shape=(1,), name='relationship')
            embedded_source = tf.keras.layers.Embedding(512, 512, input_length=64)(source)
            embedded_target = tf.keras.layers.Embedding(512, 512, input_length=64)(target)
            final_layer = CustomModel()([relationship, embedded_source, embedded_target])
            model = tf.keras.models.Model(
                inputs=[source, target, relationship], outputs=final_layer)
            return model

        model = create_model(1000, 1000, 500)
        print(model.summary())

    if __name__ == '__main__':
        main()

I believe this is a bug, so I am posting it here so that you can find and fix it. Meanwhile, one workaround I found is to change the order of the parameters: the snippet is otherwise identical, but the custom layer is called as `CustomModel()([embedded_source, embedded_target, relationship])`, so that the raw Input tensor no longer comes first, and the model then builds fine.
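A related pattern that sidesteps ordering problems like the one above is to always pass multiple tensors to a custom layer as one list argument to call(). A toy sketch (layer and tensor names here are illustrative, not the reporter's code):

```python
import numpy as np
import tensorflow as tf

class FirstOf(tf.keras.layers.Layer):
    """Toy layer taking several tensors as ONE list argument to call()."""
    def call(self, inputs):          # inputs is a list of tensors
        return inputs[0]

a = tf.keras.layers.Input(shape=(3,), name='a')
b = tf.keras.layers.Input(shape=(3,), name='b')
out = FirstOf()([a, b])              # a single list, wired through the layer
model = tf.keras.Model(inputs=[a, b], outputs=out)

# The layer returns its first input, so predictions equal the zeros batch.
pred = model.predict([np.zeros((2, 3)), np.ones((2, 3))])
```

Keeping the list as the single positional argument makes the functional-API wiring explicit regardless of which element is a raw Input.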
tensorflow/tensorflow | Add support of uint8 and int8 for MEAN op | Bug | TensorFlow Micro: the MEAN kernel source (L113) contains "TODO(b/144955155): support uint8, and (b/144955018) int8". Could you implement this so we can use the MEAN op in int8 models?
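For anyone prototyping while the integer kernel is missing, the reference semantics an int8 MEAN kernel must match can be sketched in NumPy: dequantize with the input scale/zero point, average in float, requantize with the output parameters. The parameter names below are illustrative, not the TFLite Micro API:

```python
import numpy as np

def quantized_mean(int8_vals, in_scale, in_zero_point, out_scale, out_zero_point):
    # Dequantize to real values, take the mean in float,
    # then requantize and clamp back to the int8 range.
    real = in_scale * (int8_vals.astype(np.int32) - in_zero_point)
    q = int(round(real.mean() / out_scale)) + out_zero_point
    return np.int8(np.clip(q, -128, 127))

vals = np.array([10, 20, 30], dtype=np.int8)
m = quantized_mean(vals, in_scale=0.5, in_zero_point=0,
                   out_scale=0.5, out_zero_point=0)
# real values [5.0, 10.0, 15.0], mean 10.0, requantized at scale 0.5 -> 20
```

This dequantize-average-requantize round trip is what a fused integer kernel computes with fixed-point arithmetic.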
tensorflow/tensorflow | tf.divide does not return a Tensor | Bug | System information:
- TensorFlow installed from source or binary: binary
- TensorFlow version: 2.2.0
- Python version: 3.7

Describe the current behavior: tf.divide returns a plain number instead of a Tensor.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import math as m
    print(tf.add(5, 2))
    print(tf.multiply(5, 2))
    print(tf.divide(5, 2))
    print(tf.multiply(tf.add(3, 2), tf.add(14, 32)))
    print(tf.multiply(2.54, tf.divide(8, 2.6)))
    print(tf.subtract(6.3, 2.1045))
    print(tf.pow(3.6, 2))
    print(tf.add(1, tf.pow(2, 2)))
    print(tf.sqrt(5.0))
    print(tf.cos(m.pi))

Other info / logs; output:

    tf.Tensor(7, shape=(), dtype=int32)
    tf.Tensor(10, shape=(), dtype=int32)
    2.5
    tf.Tensor(230, shape=(), dtype=int32)
    tf.Tensor(7.815385, shape=(), dtype=float32)
    tf.Tensor(4.1955004, shape=(), dtype=float32)
    tf.Tensor(12.959999, shape=(), dtype=float32)
    tf.Tensor(5, shape=(), dtype=int32)
    tf.Tensor(2.236068, shape=(), dtype=float32)
    tf.Tensor(-1.0, shape=(), dtype=float32)

Note the third line: tf.divide(5, 2) prints 2.5, a plain Python float, while every other op returns a tf.Tensor.
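The plain `2.5` appears because with two Python scalars no tensor conversion takes place before the `/` operator runs; wrapping at least one operand in a Tensor gives a Tensor back. A workaround sketch (not a claim about what tf.divide should do, and newer releases may already convert):

```python
import tensorflow as tf

# In the reported version this returns a plain Python float (2.5);
# behavior may differ on newer releases.
plain = tf.divide(5, 2)

# Wrapping the operands forces tensor dispatch, so a Tensor comes back.
wrapped = tf.divide(tf.constant(5.0), tf.constant(2.0))
```

Passing `tf.constant` operands (or calling `tf.math.divide` on existing tensors) keeps the result inside the Tensor type system.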
tensorflow/tensorflow | Loaded model uses compat.v1 BatchNorm | Bug | System information:
- Have I written custom code: yes
- OS platform and distribution: Ubuntu
- TensorFlow installed from source or binary: source
- TensorFlow version: 2.2.0-dev20200304
- Python version: 3.6.9

Describe the current behavior: tf.keras.models.load_model loads the model with tf.compat.v1.keras.layers.BatchNormalization layers.

Describe the expected behavior: it should load the model with tf.keras.layers.BatchNormalization.

Standalone code to reproduce the issue: see (link).
tensorflow/tensorflow | Keras Conv3DTranspose: inconsistency between padding, output shape and output_padding | Bug | URL(s) with the issue: (Conv3DTranspose documentation page). Description of issue (what needs changing): the output shape computation is shown as below in the documentation:

    new_depth = (depth - 1) * strides[0] + kernel_size[0] - 2 * padding[0] + output_padding[0]
    new_rows = (rows - 1) * strides[1] + kernel_size[1] - 2 * padding[1] + output_padding[1]
    new_cols = (cols - 1) * strides[2] + kernel_size[2] - 2 * padding[2] + output_padding[2]

But the padding argument is either 'valid' or 'same', so is the pad term computed based on the traditional convolution computation (ref) and then used here? It is unclear from the current documentation how the 'same'/'valid' modes are being used. Clear description: clarification is needed on how these modes are reflected in computing the actual padding that is then used in the output-shape formulas above. deconv_output_length from keras/utils/conv_utils.py is what is used for computing the output shape considering output_padding, and this should be reflected concisely in the documentation (ref: conv_utils.py, L168).
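The interaction the report asks about can be made concrete by mirroring the deconv_output_length logic in plain Python (a sketch of the logic as I read it, not the canonical Keras source): with output_padding=None, 'same' and 'valid' use closed forms, and once output_padding is given, the mode only selects the implicit pad term in the formula quoted above.

```python
def deconv_output_length(input_length, kernel_size, padding, stride,
                         output_padding=None):
    # Sketch of the keras conv_utils logic for the 'same'/'valid' modes.
    if output_padding is None:
        if padding == 'valid':
            return input_length * stride + max(kernel_size - stride, 0)
        if padding == 'same':
            return input_length * stride
        raise ValueError(f"unsupported padding: {padding}")
    # With output_padding given, the mode only fixes the implicit pad term
    # in: (n - 1) * stride + kernel - 2 * pad + output_padding
    pad = kernel_size // 2 if padding == 'same' else 0
    return (input_length - 1) * stride + kernel_size - 2 * pad + output_padding
```

For example, with input length 4, kernel 3, stride 2: 'same' gives 8 and 'valid' gives 9 when output_padding is None, while output_padding=1 gives 8 ('same', pad=1) and 10 ('valid', pad=0).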
tensorflow/tensorflow | uint16/uint32 comparison throws error | Bug | Some datatypes such as uint16 and uint32 can't be compared.

System information: run on Google Colab.

Describe the current behavior: tensors with dtype uint16/uint32 can't be compared with any other datatype.

Describe the expected behavior: comparison for these datatypes should be supported.

Standalone code to reproduce the issue:

    import tensorflow as tf
    ones = tf.ones((2, 3), dtype=tf.uint32)
    zeros = tf.zeros((2, 3), dtype=tf.uint32)
    tf.math.equal(ones, zeros)

Error log:

    NotFoundError                             Traceback (most recent call last)
    <ipython-input> in <module>
          3 ones = tf.ones((2, 3), dtype=tf.uint32)
          4 zeros = tf.zeros((2, 3), dtype=tf.uint32)
    ----> 5 tf.math.equal(ones, zeros)
    (4 frames)
    /usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)
    NotFoundError: Could not find valid device for node. Node:{{node Equal}}. All kernels registered for op Equal:
      device='XLA_GPU'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_COMPLEX128, DT_HALF, ...]
      device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_COMPLEX128, DT_HALF, ...]
      device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_COMPLEX128, DT_HALF, ...]
      device='GPU'; T in [DT_BOOL, DT_COMPLEX128, DT_COMPLEX64, DT_INT64, DT_INT16, DT_INT8, DT_INT32, DT_UINT8, DT_DOUBLE, DT_HALF, DT_FLOAT]
      device='CPU'; T in [DT_BOOL, DT_STRING, DT_COMPLEX128, DT_COMPLEX64, DT_INT64, DT_INT32, DT_BFLOAT16, DT_INT16, DT_INT8, DT_UINT8, DT_DOUBLE, DT_HALF, DT_FLOAT]
      device='XLA_CPU'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_QUINT8, DT_QINT32, DT_BFLOAT16, DT_COMPLEX128, DT_HALF, ...]
    [Op:Equal]

Related Stack Overflow question: 61752565.
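Until kernels are registered for the unsigned types, the usual workaround is to cast both operands to a dtype Equal supports before comparing; in TF that would be tf.math.equal(tf.cast(a, tf.int64), tf.cast(b, tf.int64)). The same cast-then-compare pattern is sketched here with NumPy standing in, so the snippet stays self-contained:

```python
import numpy as np

def equal_via_cast(a, b):
    # Cast unsigned inputs up to int64 (a dtype the comparison supports
    # everywhere, and wide enough to hold all uint32 values), then compare.
    return np.equal(np.asarray(a).astype(np.int64),
                    np.asarray(b).astype(np.int64))

ones = np.ones((2, 3), dtype=np.uint32)
zeros = np.zeros((2, 3), dtype=np.uint32)
mask = equal_via_cast(ones, zeros)
```

int64 is wide enough that the cast is lossless for uint16 and uint32, so the comparison result is unchanged.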
tensorflow/tensorflow | Keras model.evaluate progress bar doesn't work in graph mode in TF 2.2 | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Windows 10
- Mobile device: n/a
- TensorFlow installed from source or binary: binary
- TensorFlow version: 2.2.0
- Python version: 3.7
- Bazel / GCC / CUDA/cuDNN / GPU: n/a

Describe the current behavior: calling model.evaluate in graph mode doesn't produce any stdout.

Describe the expected behavior: a progress bar should be displayed along with the loss value, as it was before TF 2.2.
tensorflow/tensorflow | Documentation issue: tf.split num_or_size_splits parameter | Bug | URL(s) with the issue: (tf.split documentation page). Description of issue (what needs changing): the docs say "If num_or_size_splits is an integer, then value is split along the dimension axis into num_split smaller tensors. This requires that value.shape[axis] is divisible by num_split." What is num_split here? I think this should be: "If num_or_size_splits is an integer, then (call it num_split) value is split along the dimension axis into num_split smaller tensors. This requires that value.shape[axis] is divisible by num_split."
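The two accepted forms of num_or_size_splits can be made unambiguous with a small size computation (an illustrative helper, not TF source):

```python
def split_sizes(dim, num_or_size_splits):
    # Integer form ("num_split"): split evenly into that many pieces;
    # the axis length must divide evenly.
    if isinstance(num_or_size_splits, int):
        if dim % num_or_size_splits != 0:
            raise ValueError(f"{dim} is not divisible by {num_or_size_splits}")
        return [dim // num_or_size_splits] * num_or_size_splits
    # List form ("size_splits"): explicit sizes that must sum to the axis length.
    if sum(num_or_size_splits) != dim:
        raise ValueError("sizes must sum to the axis length")
    return list(num_or_size_splits)
```

For an axis of length 12, `split_sizes(12, 3)` yields three pieces of 4, matching what `tf.split(value, 3, axis)` produces along that axis.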
tensorflow/tensorflow | Custom metric update_state raises error | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 18.04
- TensorFlow installed from source or binary: binary
- TensorFlow version: 2.1.0
- Python version: 3.7.1
- CUDA/cuDNN version: 10.1 / 7.6.5

Describe the current behavior: I am trying to build a custom multiclass precision and recall for an image classification task. If I run the code below without Keras and without the @tf.function annotation, it works fine. If I use the @tf.function annotation, or use the Keras model.fit function, the error below is thrown. To make sure that the error is not caused by my own implementation, I copied the implementation of BinaryTruePositives from the TF website into my own file and tried to run it, but it produces the same error. I know this metric does not make much sense here; it is only for testing purposes. I tried the code below on Google Colab and there it works fine. I am aware of this (linked) issue, but removing the return statement has no effect. The code to reproduce this is shown below.

Describe the expected behavior: the custom metric should work the same as it does without the @tf.function annotation.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import keras

    class BinaryTruePositives(tf.keras.metrics.Metric):
        def __init__(self, name='binary_true_positives', **kwargs):
            super(BinaryTruePositives, self).__init__(name=name, **kwargs)
            self.true_positives = self.add_weight(name='tp', initializer='zeros')

        def update_state(self, y_true, y_pred, sample_weight=None):
            y_true = tf.cast(y_true, tf.bool)
            y_pred = tf.cast(y_pred, tf.bool)
            values = tf.logical_and(tf.equal(y_true, True), tf.equal(y_pred, True))
            values = tf.cast(values, self.dtype)
            if sample_weight is not None:
                sample_weight = tf.cast(sample_weight, self.dtype)
                sample_weight = tf.broadcast_weights(sample_weight, values)
                values = tf.multiply(values, sample_weight)
            self.true_positives.assign_add(tf.reduce_sum(values))

        def result(self):
            return self.true_positives

    class FashionModel(tf.keras.Model):
        def __init__(self):
            super(FashionModel, self).__init__()
            self.optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
            self.flatten = tf.keras.layers.Flatten(data_format='channels_last')
            self.dense1 = tf.keras.layers.Dense(units=128, input_shape=(28, 28),
                                                activation='relu')
            self.out_layer = tf.keras.layers.Dense(units=10)
            self.loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
                from_logits=True,
                reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE)
            self.train_loss = tf.keras.metrics.Mean('train_loss')
            self.train_btp = BinaryTruePositives()
            self.train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()

        def call(self, inputs, training=None, mask=None):
            x = self.flatten(inputs)
            x = self.dense1(x)
            x = self.out_layer(x)
            return x

    @tf.function
    def train_step(model, sample):
        images, labels = sample
        with tf.GradientTape() as tape:
            logits = model(images, training=True)
            loss = model.loss_object(y_pred=logits, y_true=labels)
        gradients = tape.gradient(loss, model.trainable_variables)
        model.optimizer.apply_gradients(grads_and_vars=zip(gradients, model.trainable_variables))
        model.train_loss(loss)
        model.train_accuracy.update_state(y_pred=logits, y_true=labels)
        model.train_btp.update_state(y_pred=tf.argmax(logits, axis=1), y_true=labels)

    def tf_run():
        fashion_mnist = keras.datasets.fashion_mnist
        (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
        class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
                       'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
        train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
        train_dataset.shuffle(5000)
        train_dataset = train_dataset.batch(32, drop_remainder=True)
        model = FashionModel()
        for epoch in range(1, 10):
            for sample in train_dataset:
                train_step(model, sample)
            template = 'Epoch {}, loss {}, acc {}, binary TP {}'
            print(template.format(epoch, model.train_loss.result(),
                                  model.train_accuracy.result(),
                                  model.train_btp.result()))
            model.train_loss.reset_states()
            model.train_btp.reset_states()
            model.train_accuracy.reset_states()

    def keras_run():
        fashion_mnist = keras.datasets.fashion_mnist
        (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
        model = keras.Sequential([
            keras.layers.Flatten(input_shape=(28, 28)),
            keras.layers.Dense(128, activation='relu'),
            keras.layers.Dense(10)
        ])
        model.compile(optimizer='adam',
                      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                      metrics=['accuracy', BinaryTruePositives()])
        model.fit(train_images, train_labels, epochs=10)

    if __name__ == '__main__':
        tf_run()

Other info / logs (full traceback; the long local paths are shortened to "..."):

    Traceback (most recent call last):
      File ".../tf_metrics_issue.py", line 192, in <module>
      File ".../tf_metrics_issue.py", line 188, in main: tf_run()
      File ".../tf_metrics_issue.py", line 150, in tf_run: train_step(model, sample)
      File ".../tensorflow_core/python/eager/def_function.py", line 568, in __call__: result = self._call(*args, **kwds)
      File ".../tensorflow_core/python/eager/def_function.py", line 615, in _call: self._initialize(args, kwds, add_initializers_to=initializers)
      File ".../tensorflow_core/python/eager/def_function.py", line 497, in _initialize: (args, kwds)
      File ".../tensorflow_core/python/eager/function.py", line 2385, in _get_concrete_function_internal_garbage_collected: graph_function = self._maybe_define_function(args, kwargs)
      File ".../tensorflow_core/python/eager/function.py", line 2699, in _maybe_define_function: graph_function = self._create_graph_function(args, kwargs)
      File ".../tensorflow_core/python/eager/function.py", line 2589, in _create_graph_function: capture_by_value=self._capture_by_value
      File ".../tensorflow_core/python/framework/func_graph.py", line 978, in func_graph_from_py_func: func_outputs = python_func(*func_args, **func_kwargs)
      File ".../tensorflow_core/python/eager/def_function.py", line 439, in wrapped_fn: return weak_wrapped_fn().__wrapped__(*args, **kwds)
      File ".../tensorflow_core/python/eager/function.py", line 3207, in bound_method_wrapper: return wrapped_fn(*args, **kwargs)
      File ".../tensorflow_core/python/framework/func_graph.py", line 968, in wrapper: raise e.ag_error_metadata.to_exception(e)
    TypeError: in converted code:

        .../tf_metrics_issue.py:130 train_step
            self.train_btp.update_state(y_pred=tf.argmax(logits, axis=1), y_true=labels)
        .../tf_metrics_issue.py:81 decorated
            update_op = update_state_fn(*args, **kwargs)
        .../tensorflow_core/python/eager/def_function.py:568 __call__
            result = self._call(*args, **kwds)
        .../tensorflow_core/python/eager/def_function.py:638 _call
            return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
        .../tensorflow_core/python/eager/function.py:1609 _filtered_call
            self.captured_inputs)
        .../tensorflow_core/python/eager/function.py:1711 _call_flat
            flat_outputs = forward_function.call(ctx, ...)

        TypeError: call() missing 1 required positional argument: 'args'
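A minimal version of the docs metric called with positional arguments inside tf.function is sketched below; on recent TF releases this runs cleanly, and the report's keyword-style call (y_pred=..., y_true=...) is the path that hit the wrapper error on 2.1, so trying positional calls (or upgrading) is worth checking. This is a sketch, not a confirmed fix for 2.1:

```python
import tensorflow as tf

class BinaryTruePositives(tf.keras.metrics.Metric):
    def __init__(self, name='binary_true_positives', **kwargs):
        super().__init__(name=name, **kwargs)
        self.true_positives = self.add_weight(name='tp', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        values = tf.logical_and(tf.cast(y_true, tf.bool), tf.cast(y_pred, tf.bool))
        self.true_positives.assign_add(tf.reduce_sum(tf.cast(values, self.dtype)))

    def result(self):
        return self.true_positives

metric = BinaryTruePositives()

@tf.function
def train_metric_step(y_true, y_pred):
    metric.update_state(y_true, y_pred)   # positional, not y_true=/y_pred=

# Two of the three predictions are true positives.
train_metric_step(tf.constant([1, 0, 1]), tf.constant([1, 1, 1]))
```

After the step, `metric.result()` holds the accumulated true-positive count.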
tensorflow/tensorflow | Segfault on tf.linalg.svd | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 18.04, macOS 10.14.6
- Mobile device: n/a
- TensorFlow installed from source or binary: binary
- TensorFlow version: v2.1.0-rc2-17-ge5bf8de 2.1.0
- Python version: 3.7.6
- Bazel / GCC / CUDA/cuDNN / GPU: n/a

Describe the current behavior: tf.linalg.svd segfaults when at least one element of the input shape is 0. Also tested on the latest TF 2.2.0 in Colab, and the segfault still exists.

Describe the expected behavior: it should not segfault.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import numpy as np
    tf.linalg.svd(np.random.rand(2, 0))  # segfault
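Until the kernel rejects such shapes itself, a caller-side guard avoids the crash. NumPy's svd actually handles empty matrices, so the sketch below only illustrates the guard pattern (raise a Python error instead of segfaulting) that one would put in front of tf.linalg.svd:

```python
import numpy as np

def safe_svd(a):
    # Reject zero-sized dimensions up front; tf.linalg.svd segfaults
    # on them, so fail loudly in Python instead.
    a = np.asarray(a)
    if 0 in a.shape:
        raise ValueError(f"svd input has a zero-sized dimension: {a.shape}")
    return np.linalg.svd(a)
```

The same shape check works for a TF tensor via `tensor.shape` before calling tf.linalg.svd.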
tensorflowtensorflow | Error in multiprocessing | Bug |

Hi, dear. I have a problem with multiprocessing.

Code:

```python
from multiprocessing.pool import ThreadPool

import numpy as np
import tensorflow as tf

modelv3 = tf.keras.applications.InceptionV3(include_top=False, pooling='avg')

def process(inputs):
    x_pre = modelv3.predict(inputs)
    return x_pre

x = np.random.randn(1, 299, 299, 3)
y = np.random.randn(1, 299, 299, 3)
z = np.random.randn(1, 299, 299, 3)

pool = ThreadPool(2)
pool.map(process, [x, y, z])
```

Error:

```
Traceback (most recent call last):
  File "d:\python36\new_xception_load.py", line 26, in <module>
    pool.map(process, [x, y, z])
  File "d:\python36\lib\multiprocessing\pool.py", line 266, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "d:\python36\lib\multiprocessing\pool.py", line 644, in get
    raise self._value
  File "d:\python36\lib\multiprocessing\pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "d:\python36\lib\multiprocessing\pool.py", line 44, in mapstar
    return list(map(*args))
  File "d:\python36\new_xception_load.py", line 18, in process
    x_pre = modelv3.predict(inputs)
  File "d:\python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 908, in predict
    use_multiprocessing=use_multiprocessing)
  File "d:\python36\lib\site-packages\tensorflow_core\python\keras\engine\training_arrays.py", line 723, in predict
    callbacks=callbacks)
  File "d:\python36\lib\site-packages\tensorflow_core\python\keras\engine\training_arrays.py", line 189, in model_iteration
    f = _make_execution_function(model, mode)
  File "d:\python36\lib\site-packages\tensorflow_core\python\keras\engine\training_arrays.py", line 566, in _make_execution_function
    return model._make_execution_function(mode)
  File "d:\python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2189, in _make_execution_function
    self._make_predict_function()
  File "d:\python36\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2179, in _make_predict_function
    **kwargs)
  File "d:\python36\lib\site-packages\tensorflow_core\python\keras\backend.py", line 3678, in function
    return GraphExecutionFunction(inputs, outputs, updates=updates, **kwargs)
  File "d:\python36\lib\site-packages\tensorflow_core\python\keras\backend.py", line 3330, in __init__
    with ops.control_dependencies([self.outputs[0]]):
  File "d:\python36\lib\site-packages\tensorflow_core\python\framework\ops.py", line 5254, in control_dependencies
    return get_default_graph().control_dependencies(control_inputs)
  File "d:\python36\lib\site-packages\tensorflow_core\python\framework\ops.py", line 4688, in control_dependencies
    c = self.as_graph_element(c)
  File "d:\python36\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3607, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "d:\python36\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3686, in _as_graph_element_locked
    raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("global_average_pooling2d_1/Mean:0", shape=(?, 2048), dtype=float32) is not an element of this graph.
```

System information:
- Have I written custom code: yes
- OS Platform and Distribution: Windows 10, 64-bit
- TensorFlow installed from: pip
- TensorFlow version: 1.14
- Python version: 3.6.8
- CUDA/cuDNN version, GPU model and memory: no GPU

Could you please help me? Thanks.
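A workaround sketch for the graph mismatch: in TF 1.x graph mode, build the predict function once on the main thread and re-enter that graph in every worker thread (`_make_predict_function` is a private Keras API commonly used for this; the small Dense model below stands in for InceptionV3 so the sketch stays self-contained). In TF 2.x eager mode no graph juggling is needed.

```python
from multiprocessing.pool import ThreadPool

import numpy as np
import tensorflow as tf

# Small stand-in model (the original issue used InceptionV3, which
# would download pretrained weights).
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

if not tf.executing_eagerly():
    # TF1 graph-mode workaround: build the predict graph up front on the
    # main thread, then reuse the same graph inside each worker thread.
    model._make_predict_function()          # private API
    graph = tf.compat.v1.get_default_graph()

    def process(inputs):
        with graph.as_default():
            return model.predict(inputs)
else:
    # TF2 eager mode: predict can be called from worker threads as-is.
    def process(inputs):
        return model.predict(inputs)

pool = ThreadPool(2)
batches = [np.random.randn(1, 8).astype(np.float32) for _ in range(2)]
results = pool.map(process, batches)
```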
tensorflowtensorflow | experimental_run_v2 throws AttributeError with MultiWorkerMirroredStrategy | Bug |

System information:
- Have I written custom code: yes
- TensorFlow installed from: binary
- TensorFlow version: 2.1 and 2.2 affected

Describe the current behavior: Using strategy.experimental_run_v2 (or strategy.run for TF 2.2) with MultiWorkerMirroredStrategy throws `AttributeError: 'CollectiveAllReduceExtended' object has no attribute '_cfer_fn_cache'` when passing it a tf.function. This is caused by the access at L743, due to CollectiveAllReduceExtended not calling the super `__init__` function, which creates that dictionary at L472. I noted that the relevant code has been removed in current master, but I want to make sure the fix is included in the next release or in a patch release. Also, MultiWorkerMirroredStrategy is not mentioned in the commit, so it might be a good idea to include something like this as a test case to avoid regressions. Looking at the commit, I guess this is fixed there too.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

with strategy.scope():
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0
    train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10),
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(),
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])

@tf.function
def train_step(model, data, target):
    with tf.GradientTape() as tape:
        prediction = model(data, training=True)
        loss = model.loss(target, prediction)
    gradients = tape.gradient(loss, model.trainable_variables)
    model.optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

def distributed_train_step(strategy, model, x, y):
    strategy.experimental_run_v2(train_step, args=(model, x, y))

for x, y in train_dataset:
    distributed_train_step(strategy, model, x, y)
```
tensorflowtensorflow | Categorical crossentropy returns wrong value | Bug |

tf.losses.CategoricalCrossentropy is returning values different from those of numpy:

```python
import numpy as np
import tensorflow as tf

y_pred = np.array([7.3216137e-07, 3.3240074e-11, 4.4985552e-12, 3.9974657e-05,
                   7.3216137e-07, 4.4985552e-12, 4.4985552e-12, 2.9537498e-04,
                   8.8050038e-01, 1.1916277e-01])
y_true = np.zeros(10)
y_true[5] = 1
cce = tf.losses.CategoricalCrossentropy()
print(f"numpy: {-1 * np.sum(y_true * np.log(y_pred))}")
print(f"tf: {cce(y_true, y_pred)}")
```

System information:
- Have I written custom code: no
- OS Platform and Distribution: Colab
- TensorFlow installed from: default of Colab
- TensorFlow version: 2.2.0-rc4
- Python version: 3.6.9

Describe the current behavior:
numpy: 26.12726483737188
tf: 16.11809539794922

Describe the expected behavior:
numpy: 26.12726483737188
tf: 26.12726483737188

Standalone code to reproduce the issue: Colab link.
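Both numbers can be reproduced in plain numpy once Keras's clipping is taken into account: the Keras crossentropy clips probabilities to the backend epsilon (1e-7 by default) before taking the log, so any probability below 1e-7 contributes -log(1e-7) ≈ 16.118 instead of its true log-loss.

```python
import numpy as np

EPSILON = 1e-7  # default value of tf.keras.backend.epsilon()

y_pred = np.array([7.3216137e-07, 3.3240074e-11, 4.4985552e-12, 3.9974657e-05,
                   7.3216137e-07, 4.4985552e-12, 4.4985552e-12, 2.9537498e-04,
                   8.8050038e-01, 1.1916277e-01])
y_true = np.zeros(10)
y_true[5] = 1.0

# Unclipped cross-entropy: -log(4.4985552e-12)
raw = -np.sum(y_true * np.log(y_pred))

# With Keras-style clipping to [epsilon, 1 - epsilon]: -log(1e-7)
clipped = -np.sum(y_true * np.log(np.clip(y_pred, EPSILON, 1.0 - EPSILON)))

print(raw)      # ≈ 26.127
print(clipped)  # ≈ 16.118
```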
tensorflowtensorflow | Upgrading from 2.1 to 2.2 raises "Unexpectedly high number of iterations in HLO passes, exiting fixed point loop" | Bug |

System information:
- OS Platform and Distribution: Gentoo
- TensorFlow installed from: source
- TensorFlow version: 2.2.0-rc2
- Python version: 3.7
- Bazel version: 2.0
- GCC/compiler version: 8.4
- CUDA/cuDNN version: 10.2 / 7.6

Describe the current behavior: I'm using Gentoo Linux, and TF is installed by Gentoo's package management system, Portage (emerge). After upgrading TF from 2.1.0 to 2.2.0-rc2, a new warning is raised when training the same model:

```
W tensorflow/compiler/xla/service/hlo_pass_fix.h:49] Unexpectedly high number of iterations in HLO passes, exiting fixed point loop.
```

This warning might not affect training speed, but it takes more GPU memory: the model with the same settings worked well in TF 2.1, but throws OOM in TF 2.2. It's unclear to me what this warning means, and it is not helpful for finding the problem.

Describe the expected behavior: should not occupy more memory.
tensorflowtensorflow | tf.keras.Model does not support `*`/`/` in call method signature | Bug |

System information:
- Have I written custom code: yes
- TensorFlow version: latest
- Python version: 3.7

Describe the current behavior: A tf.keras.Model will fail if the call method signature includes the named-arguments-only identifier (`*`, Python 3.7); and if Python 3.8 is supported by TF, then it should also support the positional-only identifier (`/`) in function definitions.

Describe the expected behavior: should be robust to the use of these identifiers.

Standalone code to reproduce the issue / Other info / logs: n/a
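A sketch of the contrast: a plain call signature works, while the signature-marker variants are shown only as comments, since they parse (and fail) only on the affected Python/TF combinations.

```python
import tensorflow as tf

class PlainModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, inputs):          # plain signature: works
        return self.dense(inputs)

# Hypothetical failing variants (signatures only, bodies elided):
#   def call(self, inputs, *, training=False):  # '*' marks keyword-only args
#   def call(self, inputs, /):                  # '/' marks positional-only (3.8+)
# tf.keras inspects the call signature when wrapping the model, and these
# markers can break that inspection on affected versions.

out = PlainModel()(tf.zeros((2, 3)))
print(out.shape)  # (2, 1)
```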
tensorflowtensorflow | Does TensorFlow 2.x provide a function similar to tf.contrib.lookup.index_table_from_tensor in TensorFlow 1.x? | Bug |

In TensorFlow 1.x, tf.contrib.lookup.index_table_from_tensor was a very useful function, but I haven't found a similar function yet in TensorFlow 2.x. So, does TensorFlow 2.x provide a function similar to tf.contrib.lookup.index_table_from_tensor in TensorFlow 1.x? Thank you very much.
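One possible TF2 replacement is a sketch built on tf.lookup.StaticHashTable (tf.contrib is gone in 2.x): map each vocabulary entry to its index, with a default index for out-of-vocabulary tokens.

```python
import tensorflow as tf

# Vocabulary-to-index lookup, roughly what index_table_from_tensor provided.
vocab = tf.constant(["a", "b", "c"])
table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(
        keys=vocab,
        values=tf.range(tf.size(vocab), dtype=tf.int64)),
    default_value=-1)  # index returned for out-of-vocabulary tokens

ids = table.lookup(tf.constant(["b", "c", "z"]))
print(ids.numpy().tolist())  # [1, 2, -1]
```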
tensorflowtensorflow | tf.strings.join not allowed in graph | Bug |

System information:
- Have I written custom code: yes
- OS Platform and Distribution: Ubuntu 19.10
- TensorFlow installed from: source
- TensorFlow version: 2.2
- Python version: 3.7.5

Describe the current behavior: When writing a function that is mapped onto a tf.data.Dataset, using tf.strings.join yields the following error, even when using @tf.function:

```
OperatorNotAllowedInGraphError: iterating over tf.Tensor is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function.
```

Describe the expected behavior: should work when calling the @tf.function decorator.

Standalone code to reproduce the issue:

```python
@tf.function
def process_path(path):
    path = tf.strings.split(path, '/')[:3]
    path = tf.strings.join(path, '/')
    return path, path

ds = tf.data.Dataset.list_files(paths)
ds = ds.map(process_path)
```
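A workaround sketch: tf.strings.join expects a Python list of tensors, so handing it a single tensor makes graph mode try to iterate that tensor. tf.strings.reduce_join joins along a tensor axis instead and works inside a tf.function (the path and slice below are illustrative).

```python
import tensorflow as tf

@tf.function
def process_path(path):
    parts = tf.strings.split(path, '/')              # tensor of path components
    # Joining a tensor of strings: use reduce_join, not tf.strings.join,
    # so nothing has to be iterated in graph mode.
    return tf.strings.reduce_join(parts[:3], separator='/')

print(process_path(tf.constant("a/b/c/d.txt")).numpy())  # b'a/b/c'
```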
tensorflowtensorflow | Gradient of output with respect to input inside a custom loss function | Bug |

I want to write a custom loss function for a multilayer perceptron network in Keras. The loss has two components: the first is the regular MSE, and the second is built from element-wise gradients of the output with respect to the input features. Let x be the input with 2 features (size: number_of_samples x 2) and y the output with a single value (size: number_of_samples x 1). Denoting the derivative of each output sample with respect to the first feature of that sample as dy/dx_0, I want to compute the following expression inside the loss function:

r = y * (dy/dx_0) + x_1 * (d²y/dx_0²)

and take the mean square of the r vector. The total loss is the sum of the regular MSE and the mean square of the r vector. This is a minimal reproducible example of the code I tried:

```python
def custom_loss_envelope(model_input, model_output):
    def custom_loss(y_true, y_pred):
        mse_loss = keras.losses.mean_squared_error(y_true, y_pred)
        print(model_input)
        print(model_output)
        dy_dx = keras.backend.gradients(model_output, tf.gather(model_input, [0], axis=1))
        print(dy_dx)
        d2y_dx2 = keras.backend.gradients(dy_dx, tf.gather(model_input, [0], axis=1))
        print(d2y_dx2)
        # r = y * dy/dx_0 + x_1 * d2y/dx_0^2
        r = tf.multiply(model_output, tf.gather(dy_dx[0], [0], axis=1)) \
            + tf.multiply(tf.gather(model_input, [1], axis=1),
                          tf.gather(d2y_dx2[0], [0], axis=1))
        r = keras.backend.mean(keras.backend.square(r))
        loss = mse_loss + r
        return loss
    return custom_loss

nx = 100
inputs_train = np.random.uniform(0, 1, (nx, 2))
outputs_train = np.random.uniform(0, 1, (nx, 1))
inputs_val = np.random.uniform(0, 1, (int(nx / 2), 2))
outputs_val = np.random.uniform(0, 1, (int(nx / 2), 1))

n_hidden_units = 50
l2_reg_lambda = 0
learning_rate = 0.001
dropout_factor = 0.0
epochs = 3

model = keras.Sequential()
model.add(keras.layers.Dense(n_hidden_units, activation='relu',
                             input_shape=(inputs_train.shape[1],),
                             kernel_regularizer=keras.regularizers.l2(l2_reg_lambda)))  # first hidden layer
model.add(keras.layers.Dropout(dropout_factor))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(n_hidden_units, activation='relu',
                             kernel_regularizer=keras.regularizers.l2(l2_reg_lambda)))
model.add(keras.layers.Dropout(dropout_factor))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(n_hidden_units, activation='relu',
                             kernel_regularizer=keras.regularizers.l2(l2_reg_lambda)))
model.add(keras.layers.Dropout(dropout_factor))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(outputs_train.shape[1], activation='linear'))

optimizer1 = keras.optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999,
                                   epsilon=None, decay=0.0, amsgrad=True)
model.compile(loss=custom_loss_envelope(model.input, model.output),
              optimizer=optimizer1, metrics=['mse'])
model.fit(inputs_train, outputs_train, batch_size=100, epochs=epochs, shuffle=True,
          validation_data=(inputs_val, outputs_val), verbose=1)
```

Here I have generated the training and validation samples randomly. I am getting the tensor shapes as follows: model_input and model_output are as expected, but dy_dx is [None]. The derivative should be of shape (None, 1), which it is not; hence I get `AttributeError: 'NoneType' object has no attribute 'op'` in the line `d2y_dx2 = keras.backend.gradients(dy_dx, tf.gather(model_input, [0], axis=1))`. Any help is appreciated, either to fix this issue or with an alternate solution.
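In TF 2.x eager mode, the residual above can be computed with nested tf.GradientTapes, which give per-sample first and second derivatives directly (the model size and data below are illustrative, not the original setup); the gradient is taken with respect to the full input x and sliced afterwards, rather than with respect to a freshly gathered tensor, which is what produced the [None] gradient in the original code.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='tanh', input_shape=(2,)),
    tf.keras.layers.Dense(1)])

x = tf.constant(np.random.uniform(0, 1, (8, 2)), dtype=tf.float32)

with tf.GradientTape() as t2:
    t2.watch(x)
    with tf.GradientTape() as t1:
        t1.watch(x)
        y = model(x)
    dy_dx = t1.gradient(y, x)       # shape (8, 2): per-sample dy/dx
    dy_dx0 = dy_dx[:, 0:1]          # dy/dx_0, shape (8, 1)
d2y_dx2 = t2.gradient(dy_dx0, x)    # gradient of dy/dx_0 w.r.t. x
d2y_dx0_2 = d2y_dx2[:, 0:1]         # d2y/dx_0^2, shape (8, 1)

# r = y * dy/dx_0 + x_1 * d2y/dx_0^2, then its mean square
r = y * dy_dx0 + x[:, 1:2] * d2y_dx0_2
residual = tf.reduce_mean(tf.square(r))
```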
tensorflowtensorflow | Distributed training on multiple nodes: processes get stuck after initializing gRPC channels | Bug |

System information:
- TF version: 2.1
- Linux-based HPC, Snakemake workflow manager, Slurm as scheduler
- TensorFlow installed from: binary, in a virtual conda environment
- Python version: 3.7.6
- CUDA/cuDNN version: 10.1
- GPU model and memory: GeForce GTX 980 (computeCapability 5.2, coreClock 1.2405 GHz, coreCount 16, deviceMemorySize 3.95 GiB, deviceMemoryBandwidth 208.91 GiB/s)

Describe the current behavior: Hi there, my aim is to train a neural net on multiple nodes and GPUs on an HPC. Therefore I am using TF's MultiWorkerMirroredStrategy and the SlurmClusterResolver to get the configuration of my nodes and to set the TF_CONFIG variable. However, when trying to connect to the cluster using

```python
tf.config.experimental_connect_to_cluster(
    resolver, job_name='worker', task_index=cfg['task']['index'], protocol='grpc')
```

the processes get stuck (please see the logs below). When I ssh into the nodes, I see my processes there, but they are all sleeping; the GPUs aren't used either. Does anyone have experience with this kind of setup and can provide help? If I don't use experimental_connect_to_cluster, every node executes the job independently instead of working together. I know that TF_CONFIG should be set before calling the MultiWorkerMirroredStrategy, but this causes a RuntimeError. I also played around with the position of the GPU initialization in the code; this doesn't seem to have an effect. Note that I am using the most recent version of the SlurmClusterResolver; I basically copied the script from GitHub and call it using `slurm_cluster_resolver.SlurmClusterResolver(...)`.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

tf.keras.backend.clear_session()
tf.debugging.set_log_device_placement(True)  # log all tensor allocations
tf.random.set_seed(42)

# Instantiate the strategy at program startup to prevent a RuntimeError.
# Only ring communication; uses the gRPC protocol.
print('define distributed training strategy')
multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.NCCL)

import os
import sys
import json
import gc
import numpy as np
import pandas as pd

print('cluster configuration')

def set_tf_config(resolver, environment=None):
    """Set the TF_CONFIG env variable from the given cluster resolver."""
    cfg = {
        'cluster': resolver.cluster_spec().as_dict(),
        'task': {
            'type': resolver.get_task_info()[0],
            'index': resolver.get_task_info()[1],
        },
        'rpc_layer': resolver.rpc_layer,
    }
    if environment:
        cfg['environment'] = environment
    os.environ['TF_CONFIG'] = json.dumps(cfg)
    return cfg

# There must be one GPU for every task.
resolver = slurm_cluster_resolver.SlurmClusterResolver(
    port_base=11214, gpus_per_task=1, tasks_per_node=2)
cfg = set_tf_config(resolver)
print(cfg)

print('connect to cluster')
tf.config.experimental_connect_to_cluster(
    resolver, job_name='worker', task_index=cfg['task']['index'], protocol='grpc')

# Doesn't matter if at the beginning or not.
print('GPU configuration: allow memory growth')
gpus = tf.config.experimental.list_physical_devices('GPU')
print('initialize GPUs')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
if gpus:
    tf.config.experimental.set_visible_devices(gpus, 'GPU')
print(gpus)
```

Other info / logs:

```
2020-05-09 08:40:42.220277: I tensorflow/core/common_runtime/eager/execute.cc:573] Executing op StringFormat in device /job:localhost/replica:0/task:0/device:CPU:0
2020-05-09 08:40:42.220607: I tensorflow/core/common_runtime/eager/execute.cc:573] Executing op PrintV2 in device /job:localhost/replica:0/task:0/device:CPU:0
{'cluster': {'worker': ['dge10:11214', 'dge10:11215', 'dge12:11214', 'dge12:11215', 'dge13:11214', 'dge13:11215', 'dge14:11214', 'dge14:11215', 'dge15:11214', 'dge15:11215', 'dge9:11214', 'dge9:11215']}, 'rpc_layer': 'grpc', 'task': {'index': 5, 'type': 'worker'}}
{'cluster': {'worker': ['dge10:11214', 'dge10:11215', 'dge12:11214', 'dge12:11215', 'dge13:11214', 'dge13:11215', 'dge14:11214', 'dge14:11215', 'dge15:11214', 'dge15:11215', 'dge9:11214', 'dge9:11215']}, 'rpc_layer': 'grpc', 'task': {'index': 4, 'type': 'worker'}}
2020-05-09 08:40:42.223059: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-09 08:40:42.223113: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]
2020-05-09 08:40:42.226758: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job worker -> {0 -> dge10:11214, 1 -> dge10:11215, 2 -> dge12:11214, 3 -> dge12:11215, 4 -> localhost:11214, 5 -> dge13:11215, 6 -> dge14:11214, 7 -> dge14:11215, 8 -> dge15:11214, 9 -> dge15:11215, 10 -> dge9:11214, 11 -> dge9:11215}
2020-05-09 08:40:42.226828: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job worker -> {0 -> dge10:11214, 1 -> dge10:11215, 2 -> dge12:11214, 3 -> dge12:11215, 4 -> dge13:11214, 5 -> localhost:11215, 6 -> dge14:11214, 7 -> dge14:11215, 8 -> dge15:11214, 9 -> dge15:11215, 10 -> dge9:11214, 11 -> dge9:11215}
```
tensorflowtensorflow | ValueError raised when calling export_saved_model with estimator model | Bug |

System information:
- Have I written custom code: yes
- OS Platform and Distribution: Linux CentOS
- TensorFlow version: 2.0 CPU (v2.0.0-rc2-26-g64c3d38 2.0.0)
- Python version: 3.6
- CUDA/cuDNN version, GPU model and memory: none

I was trying to export my trained estimator model for serving. The estimator model was converted from a Keras model with tf.keras.estimator.model_to_estimator, and I had successfully trained and evaluated it. However, when trying to save the model to local disk for serving, a bug bothered me. The code and the logs are listed below.

Code:

```python
model_estimator = tf.keras.estimator.model_to_estimator(keras_model=model)
model_estimator.train(input_fn=dataset_func, steps=steps_per_epoch)
model_estimator.evaluate(input_fn=dataset_func, steps=steps_per_epoch)
model_estimator.export_saved_model(model_path, serving_input_fn)  # bug comes from here
```

Logs for the error info:

```
INFO:tensorflow:Calling model_fn. (estimator.py:1147)

ValueError: Tensor conversion requested dtype int64 for Tensor with dtype float32

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<ipython-input>", line 1, in <module>
    model_estimator.export_saved_model(model_path, serving_input_fn)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 733, in export_saved_model
    strip_default_attrs=True)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 856, in _export_all_saved_models
    strip_default_attrs=strip_default_attrs)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 929, in _add_meta_graph_for_mode
    config=self.config)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1147, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/keras.py", line 286, in model_fn
    optimizer_config=optimizer_config)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/keras.py", line 225, in _clone_and_build_model
    optimizer_config=optimizer_config)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/models.py", line 634, in clone_and_build_model
    clone = clone_model(model, input_tensors=input_tensors)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/models.py", line 422, in clone_model
    model, input_tensors=input_tensors, layer_fn=clone_function)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/models.py", line 194, in _clone_functional_model
    model_config, created_layers=created_layers)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 1852, in reconstruct_from_config
    process_node(layer, node_data)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 1799, in process_node
    output_tensors = layer(input_tensors, **kwargs)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 847, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/merge.py", line 182, in call
    return self._merge_function(inputs)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/merge.py", line 394, in _merge_function
    return K.concatenate(inputs, axis=self.axis)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 2708, in concatenate
    return array_ops.concat([to_dense(x) for x in tensors], axis)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/ops/array_ops.py", line 1431, in concat
    return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_array_ops.py", line 1257, in concat_v2
    "ConcatV2", values=values, axis=axis, name=name)
  File "/usr/local/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 499, in _apply_op_helper
    raise TypeError("%s that don't all match." % prefix)
TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types [int64, int64, int64, ..., float32, float32, float32, ...] that don't all match.
```
tensorflowtensorflow | Bug report: All feature_columns must be _FeatureColumn instances. Given: SequenceNumericColumn(key='volume', shape=(1,), default_value=0.0, dtype=tf.float32, normalizer_fn=None) | Bug |

System information:
- Have I written custom code: yes
- OS Platform and Distribution: Ubuntu 18.04
- TensorFlow installed from: pip install
- TensorFlow version: tensorflow-gpu 2.2.0 and 2.1.0
- Python version: 3.6
- CUDA/cuDNN version: 10.1
- GPU model and memory: 8 GB

My code is:

```python
# coding: utf-8
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow import feature_column
from tensorflow.keras.experimental import SequenceFeatures

volume = feature_column.sequence_numeric_column('volume')
lstm_columns = [volume]
features = tf.io.parse_example(
    features, features=feature_column.make_parse_example_spec(lstm_columns))
sequence_feature_layer = SequenceFeatures(lstm_columns)
tensor_sequence_input, sequence_length = sequence_feature_layer(features)
```

Describe the current behavior:

```
Traceback (most recent call last):
  File "/home/zy/PycharmProjects/keras-tcn-master/brain_ld.py", line 104, in <module>
    features = tf.io.parse_example(features, features=feature_column.make_parse_example_spec(lstm_columns))
  File "/home/zy/anaconda3/envs/tf1/lib/python3.6/site-packages/tensorflow_core/python/feature_column/feature_column.py", line 806, in make_parse_example_spec
    'Given: {}'.format(column))
ValueError: All feature_columns must be _FeatureColumn instances. Given: SequenceNumericColumn(key='volume', shape=(1,), default_value=0.0, dtype=tf.float32, normalizer_fn=None)
```

It looks like a bug; can you fix it?
tensorflowtensorflow | regression: batch begin/end callbacks no longer get batch number and size | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.2.0. Describe the current behavior: the callback is documented at on_train_batch_begin as receiving a logs dict with keys 'batch' and 'size', representing the current batch number and the size of the batch. The same used to be the case for on_train_batch_end in 2.1, but this seems to have been dropped for unknown reasons. In 2.1, on_train_batch_begin had logs['size'] == 1, which is plainly wrong, but the values in on_train_batch_end were correct, so those could be used. However, in 2.2 the 'size' entry has been removed from on_train_batch_end's logs, causing existing code to break. Furthermore, contrary to the documentation, the size is included for neither callback. Describe the expected behavior: the size should be correctly reported to both callbacks for them to use. Standalone code to reproduce the issue: instead of coming up with an MWE, I quote the code directly: L847 does not even pass a logs parameter, so that is clearly wrong; L855 uses the result of the train function obtained at L848, which does not contain the batch number or size either. |
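A defensive workaround sketch for the report above (plain Python, with a hypothetical handler name — not TensorFlow's own API): read 'size' from the logs dict with a fallback so the code keeps working whether or not the key is present.

```python
# Sketch of a defensive batch-end handler: read "size" from the logs
# dict with a fallback instead of assuming the key exists.
# The default of 1 mirrors the (incorrect) value seen in on_train_batch_begin.
def handle_train_batch_end(batch, logs=None):
    logs = logs or {}
    size = logs.get("size", 1)  # key missing from on_train_batch_end in TF 2.2
    return batch, size

print(handle_train_batch_end(3, {"size": 32}))  # (3, 32)
print(handle_train_batch_end(3, {}))            # (3, 1)
```

The same `.get` pattern can be used inside a real `tf.keras.callbacks.Callback` subclass.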
tensorflowtensorflow | "use eager execution or decorate this function with @tf.function" | Bug | Hi dear, when I define a class and then use map, I get the error above:

class Func(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(Func, self).__init__(**kwargs)

    def build(self, input_shape):
        super(Func, self).build(input_shape)

    def call(self, x):
        return x

x = tf.constant([1, 2, 3])
with tf.Session() as sess:
    inputs = list(map(Func(), x))
    print(sess.run(inputs))

I have tried tf.compat.v1.disable_eager_execution(), but no use. Could you please help me? Thanks. System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Win10 64-bit. TensorFlow installed from: pip. TensorFlow version: 1.15. Python version: 3.6.8. CUDA/cuDNN version: no. GPU model and memory: no. |
tensorflowtensorflow | custom optimizer that behaves differently depending on the shape of weights | Bug | I am attempting to create a custom TensorFlow optimizer (tf.keras.optimizers.Optimizer) which treats weights of different shapes differently. For example, please consider a simple convolutional neural network with the following shapes of weights and biases: (3, 3, 3, 16), (16,), (3, 3, 16, 16), (16,), (2704, 64), (64,), (64, 10), (10,). At the beginning of the method _resource_apply_dense(self, grad, var), I would like to transform vars of different shapes all into 2-dimensional ones and then perform some other operations. The following is the simplified logic of the desired transformation behavior:

def custom_train_step(var):
    if tf.rank(var) == 1:
        # case 1
        return tf.expand_dims(input=var, axis=0)
    elif tf.rank(var) == 2:
        # case 2
        return tf.transpose(a=var)
    elif tf.rank(var) == 4:
        # case 3
        var = tf.transpose(a=var, perm=[3, 0, 1, 2])
        return tf.reshape(var, shape=(var.shape[0], -1))
    else:
        # case 4, omitted
        pass

However, this will not work when ndim(var) != 4, because it seems that when TensorFlow constructs its computation graph, all 4 branches are traced, including case 3. Consequently, 1-D and 2-D vars will during tracing also be passed to tf.transpose(a=var, perm=[3, 0, 1, 2]), which results in an error: ValueError: Dimension must be 1 but is 4 for node transpose {{op: 'Transpose'}} (T=DT_FLOAT, Tperm=DT_INT32, inputs: transpose/ReadVariableOp, Const) with input shapes: [16], [4]. The error occurs when var is a bias tensor of shape (16,). I've tried directly writing the conditional statements using tf.cond, tf.case and tf.switch_case, but the error stays. I understand this is perhaps because custom_train_step(var) is polymorphic, which makes retracing necessary, but I can't think of a way to avoid such behavior by improving the code. Please note again that I probably cannot write the 4 branches in separate methods and decorate each of them with @tf.function, because this is supposed to be called inside a tf.keras training loop. Please correct me if I am wrong. I would like to know if there is a workaround to achieve what I described above, or is this type of behavior not yet supported by TensorFlow? Any help and suggestions would be appreciated. Thanks. |
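One possible workaround for the report above (my sketch, not from the thread): a variable's rank is static, so branching on the Python-level rank (`len(var.shape)`) instead of the tensor-valued `tf.rank(var)` means each trace only ever contains the one branch that matches that variable's shape. The shape arithmetic is illustrated below with plain Python tuples rather than TF ops, so it can run standalone:

```python
# Sketch: dispatch on the static rank so only one branch applies per variable.
# Plain-tuple stand-in for the TF shape manipulation described in the report:
# rank 1 -> prepend an axis; rank 2 -> transpose; rank 4 -> move the last
# axis to the front, then flatten the remaining three axes.
def target_2d_shape(shape):
    rank = len(shape)                 # static rank, known at trace time
    if rank == 1:                     # case 1: (n,) -> (1, n)
        return (1,) + shape
    elif rank == 2:                   # case 2: transpose
        return (shape[1], shape[0])
    elif rank == 4:                   # case 3: (a, b, c, d) -> (d, a*b*c)
        return (shape[3], shape[0] * shape[1] * shape[2])
    raise ValueError("unsupported rank %d" % rank)

print(target_2d_shape((16,)))          # (1, 16)
print(target_2d_shape((2704, 64)))     # (64, 2704)
print(target_2d_shape((3, 3, 3, 16)))  # (16, 27)
```

In the optimizer itself the same `if len(var.shape) == ...` structure would wrap the `tf.expand_dims`/`tf.transpose`/`tf.reshape` calls.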
tensorflowtensorflow | TensorFlow update issue | Bug | Sir, I want to implement this code, but the instructions say to use Python 2.7 and TensorFlow between 1.4 and 1.11. Doubt: can I execute it in Python 3.7 and TensorFlow 2.1? |
tensorflowtensorflow | using tf.nn.softmax with tensors of dtype int32 throws an exception; works fine with float32 | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Arch Linux, but I'm actually using Colab. TensorFlow installed from (source or binary): using Google Colab. TensorFlow version: 2.x (setup: %tensorflow_version 2.x in the notebook). Python version: 3.6.9. Describe the current behavior: an error is thrown when a tensor of dtype int32 is used with tf.nn.softmax, as seen below (screenshot). However, when the int32 tensor is converted to float32, as below, the code works (screenshot). Describe the expected behavior: I should get back the array of values, as I do with float32. Standalone code to reproduce the issue:

%tensorflow_version 2.x
import tensorflow as tf
tensor = tf.constant([10, 2, 1, 1])
print(tf.nn.softmax(tensor))

Other info / logs:

NotFoundError (traceback via six.raise_from): Could not find valid device for node. Node: {{node Softmax}}. All kernels registered for op Softmax: device='XLA_GPU', T in [DT_FLOAT, DT_DOUBLE, DT_BFLOAT16, DT_HALF]; device='XLA_CPU', T in [DT_FLOAT, DT_DOUBLE, DT_BFLOAT16, DT_HALF]; device='XLA_CPU_JIT', T in [DT_FLOAT, DT_DOUBLE, DT_BFLOAT16, DT_HALF]; device='CPU', T in [DT_DOUBLE]; device='CPU', T in [DT_FLOAT]; device='CPU', T in [DT_HALF]; device='GPU', T in [DT_DOUBLE]; device='GPU', T in [DT_FLOAT]; device='GPU', T in [DT_HALF]; device='XLA_GPU_JIT', T in [DT_FLOAT, DT_DOUBLE, DT_BFLOAT16, DT_HALF] [Op:Softmax]

Other info: another similar issue existed here. Possible solution: tf.nn.softmax should automatically promote intX to floatX in the function itself, or generate an error saying that intX can't be used, where X is either 64 or 32 bit. Thank you. |
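For reference on the report above: softmax is only well defined on floating-point inputs, so the usual fix is to cast first (e.g. `tf.cast(tensor, tf.float32)`). A minimal pure-Python sketch of the numerically stable computation, showing why the cast matters:

```python
import math

def softmax(values):
    """Numerically stable softmax over a list of numbers."""
    xs = [float(v) for v in values]        # cast ints to float first
    m = max(xs)                            # subtract the max for stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([10, 2, 1, 1])  # the int inputs from the report
print(probs)  # largest logit dominates; values sum to 1.0
```

With integer arithmetic the divisions above would truncate to zero, which is one reason the op restricts its kernels to float dtypes.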
tensorflowtensorflow | image augmentation using tf.keras.preprocessing.image.ImageDataGenerator and tf.data.Dataset: model.fit runs infinitely | Bug | What I need help with / what I am wondering: I am facing an issue while running the fit function in TensorFlow v2.2.0-rc4 with augmented images (an ImageDataGenerator passed as a dataset): the fit function runs infinitely without stopping. What I've tried so far: I tried it with the default code shared in the TensorFlow documentation. Please find the code snippet below:

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Input, Dense, Dropout, Flatten, Conv2D, MaxPooling2D

flowers = tf.keras.utils.get_file('flower_photos', untar=True)
img_gen = ImageDataGenerator(rescale=1./255, rotation_range=20)
images, labels = next(img_gen.flow_from_directory(flowers))
print(images.dtype, images.shape)
print(labels.dtype, labels.shape)

train_data_gen = img_gen.flow_from_directory(
    batch_size=32, directory=flowers, shuffle=True,
    target_size=(256, 256), class_mode='categorical')

ds = tf.data.Dataset.from_generator(
    lambda: train_data_gen,
    output_types=(tf.float32, tf.float32),
    output_shapes=([32, 256, 256, 3], [32, 5]))
ds = ds.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

it = iter(ds)
batch = next(it)
print(batch)

def create_model():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', input_shape=images[0].shape))
    model.add(Conv2D(32, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(5, activation='softmax'))
    return model

model = create_model()
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(ds, verbose=1, batch_size=32, epochs=1)

This last line of code (fit) runs infinitely without stopping. I have also tried passing steps_per_epoch = (total no. of training records) / batch_size. I would like you to confirm whether this is a bug in the tf.data package, and in which release it will be fixed. Environment information: system: Google Colaboratory; Python version: v3.6.9; TensorFlow version: v2.2.0-rc4. |
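A likely explanation for the report above (my reading, not confirmed in the thread): `flow_from_directory` yields batches forever, so `Dataset.from_generator` never signals the end of an epoch; bounding each epoch with `steps_per_epoch = ceil(num_samples / batch_size)` in `fit` is the usual remedy. A small sketch of that calculation:

```python
import math

def steps_per_epoch(num_samples, batch_size):
    """Number of batches needed to cover the dataset once; the final
    partial batch still counts as one step, hence the ceiling."""
    return math.ceil(num_samples / batch_size)

# Example with hypothetical counts (the flower_photos set is ~3670 images):
print(steps_per_epoch(3670, 32))  # 115
```

The resulting value would be passed as `model.fit(ds, steps_per_epoch=..., epochs=1)`.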
tensorflowtensorflow | FailedPreconditionError on tf.estimator.BaselineClassifier | Bug | TF 2.2.0-rc4. The same code works fine with tf.compat.v1.estimator.BaselineClassifier, but raises "FailedPreconditionError: GetNext() failed because the iterator has not been initialized. Ensure that you have run the initializer operation for this iterator before getting the next element" with tf.estimator.BaselineClassifier. The input dataset is a tensorflow.python.data.ops.dataset_ops.BatchDataset.

baseline_estimator = tf.estimator.BaselineClassifier(n_classes=2)
baseline_estimator.train(input_fn=lambda: make_dataset(train_df, y_train))

Colab link. |
tensorflowtensorflow | embedding projector mouse enter/out tooltip incorrect | Bug | See the attached animated GIF for an example: hover. |
tensorflowtensorflow | train_on_batch fails with MirroredStrategy | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: RHEL 7.5. TensorFlow installed from (source or binary): source. TensorFlow version: 2.1.0. Python version: 3.7.4. Bazel version: 0.29.1. GCC/compiler version: 8.3.0. CUDA/cuDNN version: 10.1. GPU model and memory: GTX 1080 Ti. Describe the current behavior: when running train_on_batch on a model created under a distribution strategy such as MirroredStrategy, the error "ValueError: `handle` is not available outside the replica context or a `tf.distribute.Strategy.update()` call" is thrown. Putting the train_on_batch call into the strategy scope does not change that error. However, when doing so with a model containing a batch-normalization layer (e.g. ResNet50), it throws a different error: "RuntimeError: `add_update` was called in a cross-replica context. This is not expected. If you require this feature, please file an issue." (The other error location is not reached in that case, as it bails out with the above.) Describe the expected behavior: train_on_batch should work. Standalone code to reproduce the issue:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0
    train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10)
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(),
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    for x, y in train_dataset:
        model.train_on_batch(x, y)

Other info / logs: full traceback: Traceback (most recent call last): File "tf_issue_train_on_batch.py", line 24, in <module>: model.train_on_batch(
x y file sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python keras engine training py line 1078 in train on batch standalone true file sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python keras engine training v2 util py line 433 in train on batch output loss metric model output loss metric file sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python eager def function py line 568 in call result self call args kwd file sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python eager def function py line 615 in call self initialize args kwd add initializer to initializer file sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python eager def function py line 497 in initialize args kwd file sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python eager function py line 2389 in get concrete function internal garbage collect graph function self maybe define function args kwargs file sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python eager function py line 2703 in maybe define function graph function self create graph function args kwargs file sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python eager function py line 2593 in create graph function capture by value self capture by value file sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python framework func graph py line 978 in func graph from py func func output python func func args func kwargs file sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python eager def function py line 439 in wrap fn return weak wrap fn wrap args 
kwd file sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python framework func graph py line 968 in wrapper raise e ag error metadata to exception e valueerror in convert code sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python keras engine training eager py 305 train on batch out total loss output loss mask sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python keras engine training eager py 273 process single batch model optimizer apply gradient zip grad trainable weight sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python keras optimizer v2 optimizer v2 py 444 apply gradient kwargs name name sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python distribute distribute lib py 1949 merge call return self merge call merge fn args kwargs sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python distribute distribute lib py 1956 merge call return merge fn self strategy args kwargs sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python keras optimizer v2 optimizer v2 py 485 distribute apply scope name distribution extend colocate var with var sw instal python 3 7 4 gcccore 8 3 0 lib python3 7 contextlib py 112 enter return next self gen sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python framework op py 4112 colocate with for gradient with self colocate with op ignore exist sw instal python 3 7 4 gcccore 8 3 0 lib python3 7 contextlib py 112 enter return next self gen sw instal tensorflow 2 1 0 fosscuda 2019b python 3 7 4 lib python3 7 site package tensorflow core python framework op py 4161 colocate with op op to colocate with op self sw instal tensorflow 2 1 0 fosscuda 
2019b/python-3.7.4/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 6548, in _op_to_colocate_with: if hasattr(v, "handle") and isinstance(v.handle, ops.Tensor); ".../site-packages/tensorflow_core/python/distribute/values.py", line 720, in handle: raise ValueError("`handle` is not available outside the replica context or a `tf.distribute.Strategy.update()` call."). ValueError: `handle` is not available outside the replica context or a `tf.distribute.Strategy.update()` call. |
tensorflowtensorflow | TensorFlow release 2.2.0 not found via pip install | Bug | TL;DR: pip does not find the new release 2.2.0, published yesterday. System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): not applicable. OS platform and distribution: Ubuntu Server 20.04. TensorFlow installed from (source or binary): binary. TensorFlow version: not applicable. Python version: 3.8.2. pip version: 20.1. CUDA/cuDNN version: not applicable (10.2). GPU model and memory: not applicable (RTX 2080 Ti). Describe the current behavior: from the changelog and the GitHub releases, I noticed that TensorFlow 2.2.0 was released and that I could upgrade the TensorFlow version currently used in my project (2.2.0rc4). Sadly, pip install fails to install the update, as it is unable to find release 2.2.0. Clearing pip's cache didn't help. The PyPI website also says the latest version is rc4, not the 2.2.0 release (release history). Describe the expected behavior: well, I would like to be able to install TensorFlow 2.2.0 via pip. Standalone code to reproduce the issue: pip install tensorflow==2.2.0. Other info / logs: ERROR: Could not find a version that satisfies the requirement tensorflow==2.2.0 (from versions: 2.2.0rc1, 2.2.0rc2, 2.2.0rc3, 2.2.0rc4). ERROR: No matching distribution found for tensorflow==2.2.0 (from -r projects/hohl/thesis/requirements.txt (line 3)). |
tensorflowtensorflow | OverflowError: cannot serialize a bytes object larger than 4 GiB when using model.fit with use_multiprocessing=True on a custom generator | Bug | Hi, I'm trying to train a model consisting of some GRU layers on data stored in a large numpy array (18 GB) on 2 GPUs using MirroredStrategy. My system: AMD Ryzen Threadripper 3960X 24-core processor, 64 GB RAM, two NVIDIA GeForce RTX 2070 SUPER with 8192 MiB each, Windows 10 (unfortunately the ASRock Creator TRX40 motherboard we bought is currently incompatible with Linux, wtf). TF 2.1.0 installed from binary (Anaconda), Python 3.7.7, CUDA version 10.2.89. This is my code:

import math
import numpy as np
import tensorflow as tf
from tensorflow import keras

train_tokens_x = np.zeros((1066673, 61, 69), dtype=np.float32)
train_tokens_x[:] = np.eye(69)[:61]
train_targets = np.zeros((1066673, 3943), dtype=np.float32)
train_targets[:, 2] = 1
valid_tokens_x = np.zeros((133366, 61, 69), dtype=np.float32)
valid_tokens_x[:] = np.eye(69)[:61]
valid_targets = np.zeros((133366, 3943), dtype=np.float32)
valid_targets[:, 2] = 1

output_size = 3943
max_id = 68
batch_size = 256

class Sequencer(tf.keras.utils.Sequence):
    def __init__(self, x_set, y_set, batch_size):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        return np.array(batch_x), np.array(batch_y)

train_generator = Sequencer(train_tokens_x, train_targets, batch_size)
valid_generator = Sequencer(valid_tokens_x, valid_targets, batch_size)

mirrored_strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
with mirrored_strategy.scope():
    model = keras.models.Sequential([
        keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id + 1], use_bias=False),
        keras.layers.GRU(128, return_sequences=True, use_bias=False),
        keras.layers.GRU(128, use_bias=False),
        keras.layers.Flatten(),
        keras.layers.Dense(output_size, activation="softmax"),
    ])
    model.compile(loss=focal_loss,  # umbertogriffo's categorical focal loss (alpha=.25, gamma=2.)
                  optimizer="adam", metrics=["accuracy"])

history = model.fit(train_generator, validation_data=valid_generator, epochs=25,
                    callbacks=callbacks, max_queue_size=10, workers=2,
                    use_multiprocessing=True)

And there it crashes with:

Exception in thread Thread-12:
Traceback (most recent call last):
  File "C:\Users\ki\anaconda3\envs\tensorflow-test\lib\threading.py", line 926, in _bootstrap_inner
    self.run()
  File "C:\Users\ki\anaconda3\envs\tensorflow-test\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "...\site-packages\tensorflow_core\python\keras\utils\data_utils.py", line 844, in _run
    with closing(self.executor_fn(_SHARED_SEQUENCES)) as executor:
  File "...\site-packages\tensorflow_core\python\keras\utils\data_utils.py", line 823, in pool_fn
    initargs=(seqs, None, get_worker_id_queue()))
  File "C:\Users\ki\anaconda3\envs\tensorflow-test\lib\multiprocessing\context.py", line 119, in Pool
    context=self.get_context())
  File "C:\Users\ki\anaconda3\envs\tensorflow-test\lib\multiprocessing\pool.py", line 176, in __init__
    self._repopulate_pool()
  File "C:\Users\ki\anaconda3\envs\tensorflow-test\lib\multiprocessing\pool.py", line 241, in _repopulate_pool
    w.start()
  File "C:\Users\ki\anaconda3\envs\tensorflow-test\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\ki\anaconda3\envs\tensorflow-test\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\ki\anaconda3\envs\tensorflow-test\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\ki\anaconda3\envs\tensorflow-test\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
OverflowError: cannot serialize a bytes object larger than 4 GiB

If training without the use_multiprocessing flag, training works seamlessly but is very slow, due to the generator overhead I guess (with only 10% load on each GPU and also 10% on the CPU). If training without using the generator at all (putting the data directly into model.fit), model.fit crashes due to memory issues: "2020-05-07 12:46:43.785479: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 16823566556 exceeds 10% of system memory". So I have to use a generator for memory efficiency. I also tried to use a tf.data pipeline, but the same "exceeds 10% of system memory" warning arises, unfortunately. What can I do here? |
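A common workaround sketch for the report above (my suggestion, not from the thread): on Windows, worker processes are spawned, so the whole Sequence — including the 18 GB arrays it holds — gets pickled, which hits pickle's 4 GiB per-object limit. Keeping the big array out of the pickled state, e.g. storing only a file path and loading lazily (np.load/np.memmap) inside each worker, sidesteps that. Illustrated with a minimal pure-Python class (the path name is hypothetical):

```python
import pickle

class LazyArraySource:
    """Hold only a path; the heavy array would be loaded lazily per process
    (e.g. np.load(path, mmap_mode="r")), so pickling this object for worker
    processes stays tiny regardless of the array's size."""
    def __init__(self, path):
        self.path = path
        self._data = None  # filled in lazily inside each worker

    def __getstate__(self):
        # Never pickle the loaded array, only the path.
        return {"path": self.path, "_data": None}

blob = pickle.dumps(LazyArraySource("train_tokens.npy"))
print(len(blob))  # a few hundred bytes, independent of the array's size
```

A Sequence built on top of such a source would index into the memmapped array in `__getitem__` rather than carrying the data itself.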
tensorflowtensorflow | TF Hub BERT model output names mismatch with the signature | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: macOS 10.15. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.1.0. Python version: 2.7. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: the model is from saved_model_dir = 'path/to/bert_en_cased_L-12_H-768_A-12/1'.

loaded_model = tf.saved_model.load(saved_model_dir)
concrete_func = loaded_model.signatures['serving_default']
predictions = concrete_func(inputs)

The expected output names come from concrete_func.outputs (Out[4]); however, the actual prediction returns a dict with different output names: predictions.keys() (Out[5]) gives [u'bert_model_1', u'bert_model']. Describe the expected behavior: expected the prediction outputs to have the same names as concrete_func.outputs. |
tensorflowtensorflow | map_fn doesn't work with an empty list | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. TensorFlow installed from (source or binary): pip. TensorFlow version: v2.2.0-rc4-0-g70087ab4f4. Python version: 3.6.9. Describe the current behavior: map_fn doesn't support an empty list. Describe the expected behavior: it should return an empty list. Standalone code to reproduce the issue:

import numpy as np
import tensorflow as tf

fn = lambda x: x
tf.map_fn(fn, [])

Additionally, this works: tf.map_fn(fn, np.array([1])), but not this, even though [1] is not a scalar: tf.map_fn(fn, [1]). |
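For comparison with the report above, Python's built-in `map` returns an empty result for empty input, which is the behavior the report expects from tf.map_fn:

```python
# Built-in map handles an empty sequence by producing an empty result;
# the report asks tf.map_fn to behave the same way.
fn = lambda x: x * x
print(list(map(fn, [])))       # []
print(list(map(fn, [1, 2])))   # [1, 4]
```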
tensorflowtensorflow | FailedPreconditionError: Error while reading resource variable _AnonymousVar555 from Container: localhost | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag: bug_template. System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Google Colab. TensorFlow version: 2.2.0-rc4. Describe the current behavior: FailedPreconditionError: Error while reading resource variable _AnonymousVar555 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar555/N10tensorflow3VarE does not exist. [[node MatMul_388/ReadVariableOp (defined at <ipython-input-30>)]] [Op:__inference_keras_scratch_graph_21594]. Function call stack: keras_scratch_graph. This happens at building the model. Standalone code to reproduce the issue. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached. |
tensorflowtensorflow | typo in source code doc | Bug | Here (L532) is a minor typo in the source code: "rturn" → "return". |
tensorflowtensorflow | tfdbg source_utils_test fails in Python 3.8 | Bug | TF version: hash 47ee0d08b22e79b84eb442ccb75ef8e4ccd5ec07. See the following test log (test output for //tensorflow/python/debug:source_utils_test, run under bazel-buildfarm operation cf843b16-8c97-4552-8d55-fd16fe09d4f4, bazel-out/k8-opt/bin):

.../org_tensorflow/tensorflow/python/ops/random_ops.py:287: SyntaxWarning: "is" with a literal. Did you mean "=="? (minval is 0)  # pylint: disable=literal-comparison
.../org_tensorflow/tensorflow/python/ops/random_ops.py:288: SyntaxWarning: "is" with a literal. Did you mean "=="? (maxval is 1)  # pylint: disable=literal-comparison
.../org_tensorflow/tensorflow/python/ops/ragged/ragged_batch_gather_with_default_op.py:84: SyntaxWarning: "is not" with a literal. Did you mean "!="? (if default_value_shape.ndims is not 0)
.../org_tensorflow/tensorflow/python/ops/ragged/ragged_batch_gather_with_default_op.py:85: SyntaxWarning: "is not" with a literal. Did you mean "!="? (and default_value_shape.ndims is not 1)

Running tests under Python 3.8.2: /usr/bin/python3.8
[ RUN ] GuessIsTensorFlowLibraryTest.testDebuggerExampleFilePathReturnsFalse
[ OK ] GuessIsTensorFlowLibraryTest.testDebuggerExampleFilePathReturnsFalse
[ RUN ] GuessIsTensorFlowLibraryTest.testFileInPythonKernelsPathReturnsTrue
[ OK ] GuessIsTensorFlowLibraryTest.testFileInPythonKernelsPathReturnsTrue
[ RUN ] GuessIsTensorFlowLibraryTest.testGuessedBaseDirIsProbablyCorrect
[ OK ] GuessIsTensorFlowLibraryTest.testGuessedBaseDirIsProbablyCorrect
[ RUN ] GuessIsTensorFlowLibraryTest.testNonPythonFileRaisesException
[ OK ] GuessIsTensorFlowLibraryTest.testNonPythonFileRaisesException
run guessistensorflowlibraryt testsourceutilmodulereturnstrue ok guessistensorflowlibraryt testsourceutilmodulereturnstrue run guessistensorflowlibraryt testunittestfilereturnsfalse ok guessistensorflowlibraryt testunittestfilereturnsfalse run guessistensorflowlibraryt test session warn tensorflow from usr lib python3 8 contextlib py 83 tensorflowtestcase test session from tensorflow python framework test util be deprecate and will be remove in a future version instruction for update use self session or self cache session instead w0506 00 11 11 384973 140211243161344 deprecation py 317 from usr lib python3 8 contextlib py 83 tensorflowtestcase test session from tensorflow python framework test util be deprecate and will be remove in a future version instruction for update use self session or self cache session instead ok guessistensorflowlibraryt test session run listsourceagainstdumptest testgeneratesourcelist skip listsourceagainstdumptest testgeneratesourcelist run listsourceagainstdumptest testgeneratesourcelistwithnodenamefilter skip listsourceagainstdumpt testgeneratesourcelistwithnodenamefilter run listsourceagainstdumptest testgeneratesourcelistwithpathregexfilter skip listsourceagainstdumptest testgeneratesourcelistwithpathregexfilter run listsourceagainstdumpt test session skip listsourceagainstdumpt test session run sourcehelpertest testannotatedumpedtensorsgivescorrectresult 2020 05 06 00 11 11 473613 I tensorflow core platform cpu feature guard cc 142 this tensorflow binary be optimize with intel r mkl dnn to use the follow cpu instruction in performance critical operation avx512f fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2020 05 06 00 11 11 489470 I tensorflow core platform profile util cpu util cc 102 cpu frequency 2500005000 hz 2020 05 06 00 11 11 497351 I tensorflow compiler xla service service cc 168 xla service 0x28121b0 initialize for platform host this do not guarantee that xla will be use 
2020-05-06 00:11:11.497393: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
bazel-buildfarm/default/operations/cf843b16-8c97-4552-8d55-fd16fe09d4f4/bazel-out/k8-opt/bin/tensorflow/python/debug/source_utils_test.runfiles/org_tensorflow/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
[       OK ] SourceHelperTest.testAnnotateDumpedTensorsGivesCorrectResult
[ RUN      ] SourceHelperTest.testAnnotateSubsetOfLinesGivesCorrectResult
[       OK ] SourceHelperTest.testAnnotateSubsetOfLinesGivesCorrectResult
[ RUN      ] SourceHelperTest.testAnnotateWholeValidSourceFileGivesCorrectResult
[  FAILED  ] SourceHelperTest.testAnnotateWholeValidSourceFileGivesCorrectResult
[ RUN      ] SourceHelperTest.testAnnotateWithStackTopGivesCorrectResult
[  FAILED  ] SourceHelperTest.testAnnotateWithStackTopGivesCorrectResult
[ RUN      ] SourceHelperTest.testCallingAnnotateSourceOnUnrelatedSourceFileDoesNotError
[       OK ] SourceHelperTest.testCallingAnnotateSourceOnUnrelatedSourceFileDoesNotError
[ RUN      ] SourceHelperTest.testCallingAnnotateSourceWithoutPythonGraphRaisesException
[       OK ] SourceHelperTest.testCallingAnnotateSourceWithoutPythonGraphRaisesException
[ RUN      ] SourceHelperTest.testLoadNonexistentNonParPathFailsWithIOError
[       OK ] SourceHelperTest.testLoadNonexistentNonParPathFailsWithIOError
[ RUN      ] SourceHelperTest.testLoadingPythonSourceFileInParFileFailsRaisingIOError
[       OK ] SourceHelperTest.testLoadingPythonSourceFileInParFileFailsRaisingIOError
[ RUN      ] SourceHelperTest.testLoadingPythonSourceFileInParFileSucceeds
[       OK ] SourceHelperTest.testLoadingPythonSourceFileInParFileSucceeds
[ RUN      ] SourceHelperTest.testLoadingPythonSourceFileWithNonAsciiChars
[       OK ] SourceHelperTest.testLoadingPythonSourceFileWithNonAsciiChars
[ RUN      ] SourceHelperTest.test_session
[       OK ] SourceHelperTest.test_session
ERROR: testAnnotateWholeValidSourceFileGivesCorrectResult (__main__.SourceHelperTest)
testAnnotateWholeValidSourceFileGivesCorrectResult (__main__.SourceHelperTest)
Traceback (most recent call last):
  File "bazel-buildfarm/default/operations/cf843b16-8c97-4552-8d55-fd16fe09d4f4/bazel-out/k8-opt/bin/tensorflow/python/debug/source_utils_test.runfiles/org_tensorflow/tensorflow/python/debug/lib/source_utils_test.py", line 159, in testAnnotateWholeValidSourceFileGivesCorrectResult
    source_annotation[self._u_init_line_number])
KeyError: 116

ERROR: testAnnotateWithStackTopGivesCorrectResult (__main__.SourceHelperTest)
testAnnotateWithStackTopGivesCorrectResult (__main__.SourceHelperTest)
Traceback (most recent call last):
  File "bazel-buildfarm/default/operations/cf843b16-8c97-4552-8d55-fd16fe09d4f4/bazel-out/k8-opt/bin/tensorflow/python/debug/source_utils_test.runfiles/org_tensorflow/tensorflow/python/debug/lib/source_utils_test.py", line 181, in testAnnotateWithStackTopGivesCorrectResult
    source_annotation[self._u_init_line_number])
KeyError: 116

Ran 22 tests in 1.288s

FAILED (errors=2, skipped=4)

/CC @caisq
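Both tracebacks fail the same way: the test builds a mapping from source line numbers to the ops created on those lines, then indexes it with a pre-computed line number (116) that is not a key when running under Python 3.8, most likely because 3.8 changed which line a multi-line statement is attributed to. A hypothetical sketch of that lookup pattern (the names and dict contents are illustrative, not the actual internals of `source_utils_test.py`):

```python
# Hypothetical source annotation: ops recorded per source line. Under
# Python 3.8 a multi-line statement may be attributed to a different line
# than under 3.7, so the op lands on line 115 instead of the expected 116.
source_annotation = {115: ["u/initial_value"]}
u_init_line_number = 116  # line number the test hard-codes as the key

try:
    ops_at_line = source_annotation[u_init_line_number]
except KeyError as err:
    # Mirrors the "KeyError: 116" seen in both tracebacks above.
    print("KeyError:", err)
```

A fix along these lines would make the test compute the expected line number in a version-aware way rather than assuming the pre-3.8 attribution.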