repository | issue title | labels | body |
|---|---|---|---|
tensorflow/tensorflow | tf.keras callback ModelCheckpoint fails in distributed ParameterServerStrategy | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux RHEL 7
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below): 2.1

**Describe the current behavior**
The `tf.keras.callbacks.ModelCheckpoint` callback fails to save the model if the parameter server does not have access to the checkpoint location of the other workers, and the checkpoint location for workers other than the chief cannot be configured to a remote filesystem (e.g. HDFS).

**Describe the expected behavior**
The callback should be able to save the model when run on different machines using shared storage. I am not sure shared storage should be a requirement either.

**Standalone code to reproduce the issue**
1. Create `model.py` (provided below).
2. Run `model.py` in two terminals:
   - Terminal 1: simply execute `model.py` with the shown args.
   - Terminal 2: run the `unshare` command first to separate the mount namespace.
3. (Optional) Instead of running on the same machine, you can also run the given script on two separate machines; all you would have to change is `TF_CONFIG` in `model.py`, and then you would not need to separate the mount namespace.

model.py:

```python
import os
import sys
import json
import tensorflow as tf
import tensorflow_datasets as tfds

tf.compat.v1.disable_eager_execution()

node_attr = sys.argv[1]            # e.g. "worker0" or "ps0"
name, index = node_attr[:-1], node_attr[-1]
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["localhost:5773"], "ps": ["localhost:5711"]},
    "task": {"type": name, "index": int(index)}
})
strategy = tf.distribute.experimental.ParameterServerStrategy()
# Uncomment below for MultiWorkerMirroredStrategy:
# os.environ["TF_CONFIG"] = json.dumps({
#     "cluster": {"worker": ["localhost:5773", "localhost:6778"]},
#     "task": {"type": name, "index": int(index)}
# })
# strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

def input_fn(mode):
    datasets, info = tfds.load(name="mnist", with_info=True, as_supervised=True)
    mnist_dataset = datasets["train"] if mode == "train" else datasets["test"]

    def scale(image, label):
        image = tf.cast(image, tf.float32) / 255
        return image, label

    return mnist_dataset.map(scale).cache().repeat(2).shuffle(10000).batch(4)

def build_and_compile_cnn_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10)
    ])
    model.compile(
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        metrics=["accuracy"])
    return model

def main():
    tf_config = json.loads(os.environ["TF_CONFIG"])
    job_name = tf_config["task"]["type"]
    job_index = tf_config["task"]["index"]
    if job_name == "ps":
        server = tf.distribute.Server(tf_config["cluster"],
                                      job_name=job_name, task_index=job_index)
        server.join()
    else:
        train_dataset = input_fn("train")
        ckpt_full_path = os.path.join(sys.argv[2], "model.ckpt-{epoch:04d}")
        callbacks = [tf.keras.callbacks.ModelCheckpoint(
            ckpt_full_path, save_weights_only=True, verbose=1, save_freq=1)]
        options = tf.data.Options()
        options.experimental_distribute.auto_shard_policy = \
            tf.data.experimental.AutoShardPolicy.OFF
        train_dataset_no_auto_shard = train_dataset.with_options(options)
        with strategy.scope():
            model = build_and_compile_cnn_model()
        model.fit(x=train_dataset_no_auto_shard, epochs=3,
                  steps_per_epoch=3, callbacks=callbacks)

if __name__ == "__main__":
    main()
```

**Steps to reproduce**

1st terminal:

```
python model.py worker0 /tmp
WARNING:tensorflow: `eval_fn` is not passed in. The `worker_fn` will be used if an "evaluator" task exists in the cluster.
WARNING:tensorflow: `eval_fn` is not passed in. The `worker_fn` will be used if an "evaluator" task exists in the cluster.
WARNING:tensorflow: `eval_strategy` is not passed in. No distribution strategy will be used for evaluation.
WARNING:tensorflow: `eval_strategy` is not passed in. No distribution strategy will be used for evaluation.
2020-04-13 18:11:00.844880: I tensorflow/core/distributed_runtime/worker.cc:204] Cancellation requested for RunGraph.
Train on 3 steps
Epoch 1/3
Epoch 00001: saving model to /tmp/checkpoint/model.ckpt-0001
Traceback (most recent call last):
  File ".../tensorflow_core/python/client/session.py", line 1367, in _do_call
    return fn(*args)
  File ".../tensorflow_core/python/client/session.py", line 1352, in _run_fn
    target_list, run_metadata)
  File ".../tensorflow_core/python/client/session.py", line 1445, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: From /job:ps/replica:0/task:0:
/tmp/checkpoint/model.ckpt-0001_temp_8b7417c3f79f449f87c2218bde68999d; No such file or directory
	 [[node SaveV2]]
```

During handling of the above exception, another exception occurred. The full traceback runs through `model.py` line 83 (`model.fit(...)`) into the Keras distribute coordinator (`run_distribute_coordinator` → `_run_single_worker` → `worker_fn`), then through the callback machinery (`callbacks.py` `_call_batch_hook` → `on_train_batch_end` → `on_batch_end` → `_save_model` at line 1038, which calls `self.model.save_weights(filepath, overwrite=True)`), then through `network.py` `save_weights`, `tracking/util.py` `save`, and `saving/functional_saver.py` down to `io_ops.save_v2`, and ends in the same `NotFoundError` on the `SaveV2` node defined at model.py:83.

2nd terminal (isolate the mount namespace):

```
unshare -m
mkdir /newtmp
mount --bind /newtmp /tmp
python model.py ps0 /tmp
2020-04-13 18:04:37.871417: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job ps -> {0 -> localhost:5711}
2020-04-13 18:04:37.871485: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:300] Initialize GrpcChannelCache for job worker -> {0 -> localhost:5773}
2020-04-13 18:04:37.872330: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:390] Started server with target: grpc://localhost:5711
2020-04-13 18:09:31.762295: W tensorflow/core/framework/op_kernel.cc:1655] OP_REQUIRES failed at save_restore_v2_ops.cc:109 : Not found: /tmp/checkpoint/model.ckpt-0001_temp_64dd8f8a454a4a28b378ce888c66219e; No such file or directory
```

**Other info / logs**
It works fine with MultiWorkerMirroredStrategy. Even when we try to give shared access through HDFS it does not work: workers other than the chief still try to write to a local /tmp location. It does work if all the workers and the PS are running on the same machine with local filesystem access. |
tensorflow/tensorflow | Keras callback `params` API change | Bug | In the current nightly there are changes to the `params` attribute of callbacks that affect custom callback code.

**System information**: yes (custom code); Colab.

**Describe the current behavior**
The minimal example is:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(loss="mse", optimizer="sgd")
callback = tf.keras.callbacks.Callback()
model.fit([1, 2, 3], [5, 6, 7], callbacks=[callback])
callback.params
```

With the current 2.2.0rc3 we see:

```
{'epochs': 1, 'steps': 1, 'verbose': 1}
```

**Describe the expected behavior**
The expected behavior is a dict with the following parameters:

```
{'batch_size': 32, 'do_validation': False, 'epochs': 1, 'metrics': ['loss'], 'samples': 3, 'steps': 1, 'verbose': 1}
```

Here are Colab links for 2.1 / 2.2-rc2. |
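Custom callbacks caught by a `params` change like the one above can avoid hard failures by reading the dict defensively rather than assuming every key exists. A minimal sketch follows; it is plain Python so it stands alone here (a real implementation would subclass `tf.keras.callbacks.Callback`, where `set_params` has the same signature), and the key names are the ones shown in the issue.

```python
class RobustCallback:
    """Sketch: read Keras callback `params` defensively, since the set of
    keys changed between TF releases (e.g. `samples` and `batch_size` are
    no longer populated in 2.2.0rc3). A real version would subclass
    tf.keras.callbacks.Callback."""

    def __init__(self):
        self.params = {}

    def set_params(self, params):
        # Keras calls this before training; `params` may hold different keys
        # depending on the TF version, so never index it directly.
        self.params = params or {}

    def total_steps(self):
        # Prefer the `steps` key; fall back to deriving it from the legacy
        # `samples` / `batch_size` keys when present (older TF versions).
        steps = self.params.get("steps")
        if steps is not None:
            return steps
        samples = self.params.get("samples")
        batch_size = self.params.get("batch_size")
        if samples is not None and batch_size:
            return -(-samples // batch_size)  # ceiling division
        return None
```

This keeps one callback implementation working on both sides of the API change instead of pinning the Addons package to a single TF release.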
tensorflow/tensorflow | RuntimeError while trying to run with ParameterServerStrategy in eager mode | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): RHEL 7
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below): 2.0 / 2.1

**Describe the current behavior**
The model fails to instantiate with the following error:

```
  File "model.py", line 10, in <module>
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1))
  ...
  File ".../tensorflow_core/python/ops/variables.py", line 65, in getter
    return captured_getter(captured_previous, **kwargs)
  File ".../tensorflow_core/python/distribute/distribute_lib.py", line 1330, in creator_with_resource_vars
    return self._create_variable(*args, **kwargs)
  File ".../tensorflow_core/python/distribute/parameter_server_strategy.py", line 446, in _create_variable
    with ops.device(self._variable_device):
  File ".../tensorflow_core/python/framework/ops.py", line 5032, in device
    "tf.device does not support functions when eager execution "
RuntimeError: tf.device does not support functions when eager execution is enabled.
```

**Describe the expected behavior**
The model should instantiate.

**Standalone code to reproduce the issue**

```python
# model.py
import os
import json
import tensorflow as tf

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["localhost:5773"], "ps": ["localhost:5711"]},
    "task": {"type": "worker", "index": 0}
})

strategy = tf.distribute.experimental.ParameterServerStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1))
    ])
```

**Other info / logs**
Large logs and files should be attached. |
tensorflow/tensorflow | Keras AssertionError for TPU strategy | Bug | I get the following assertion error at `fit` when trying to use the TPU distribution strategy.

Model summary (abridged): `MyModel` with sublayers (`mymodel`, `dense_2`, `layer_normalization_9`, `dense_3`, `output_2`, `output_1` EmbeddingSimilarity); total params 2,137,457; trainable params 2,137,457; non-trainable params 0.

```
Traceback (most recent call last):
  File "copy_train_lm.py", line 91, in <module>
    model.fit(x=gen(all_files, 128), epochs=100, steps_per_epoch=100, ...)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/training.py", line 819, in fit
    use_multiprocessing=use_multiprocessing)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/training_distributed.py", line 619, in fit
    epochs=epochs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/training.py", line 2242, in _distribution_standardize_user_data
    assert isinstance(x, dataset_ops.DatasetV2)
AssertionError
```

This is my code to reproduce the result:

```python
import numpy as np
import tensorflow as tf
from keras_model import MyModelFinal
from tensorflow import keras

tf.compat.v1.disable_eager_execution()

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="rakshanda-agarwal")
tf.config.experimental_connect_to_host(resolver.master())
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

def gen(all_files, batch_size):
    while True:
        for file in all_files:
            with np.load(file) as data:
                for i in range(0, len(data["input_ids"]), batch_size):
                    input_ids = data["input_ids"][i:i + batch_size]
                    input_mask = data["input_mask"][i:i + batch_size]
                    segment_ids = data["segment_ids"][i:i + batch_size]
                    masked_lm_positions = data["masked_lm_positions"][i:i + batch_size]
                    masked_lm_ids = data["masked_lm_ids"][i:i + batch_size]
                    masked_lm_weights = data["masked_lm_weights"][i:i + batch_size]
                    next_sentence_labels = data["next_sentence_labels"][i:i + batch_size]
                    masked_lm_weights = tf.reshape(masked_lm_weights, (128, 20))
                    masked_lm_ids = tf.reshape(masked_lm_ids, (128, 20, 1))
                    yield ((input_ids, segment_ids, input_mask, masked_lm_positions),
                           {"output_1": masked_lm_ids, "output_2": next_sentence_labels},
                           {"output_1": masked_lm_weights, "output_2": np.ones(batch_size)})

all_files = ["data1/train_1.npz"]
val_files = ["lm1/train_1.npz"]
batch_size = 128

def loss1(logits, y, vocab_size=32000):
    masked_lm_ids = y[0]
    masked_lm_weights = y[1]
    logits = tf.reshape(logits, (2560, 32000))
    log_probs = tf.nn.log_softmax(logits, axis=-1)
    masked_lm_ids = tf.reshape(masked_lm_ids, [-1])
    masked_lm_weights = tf.reshape(masked_lm_weights, [-1])
    one_hot_labels = tf.one_hot(masked_lm_ids, depth=vocab_size, dtype=tf.float32)
    per_example_loss = -tf.reduce_sum(log_probs * one_hot_labels, axis=-1)
    numerator = tf.reduce_sum(masked_lm_weights * per_example_loss)
    denominator = tf.reduce_sum(masked_lm_weights) + 1e-5
    loss = numerator / denominator
    return loss

with strategy.scope():
    model = MyModelFinal(out_filters=64, is_training=True, emb_size=48,
                         vocab_size=32000, max_seq_length=128, num_layers=4)
    # Build the model by calling it once on a real batch.
    with np.load(all_files[0]) as data:
        input_ids = data["input_ids"][:batch_size]
        input_mask = data["input_mask"][:batch_size]
        segment_ids = data["segment_ids"][:batch_size]
        masked_lm_positions = data["masked_lm_positions"][:batch_size]
        model(input_ids, segment_ids, input_mask, masked_lm_positions)

    print(model.summary())
    optimizer = keras.optimizers.Adam(lr=0.0002)
    losses = {"output_1": loss1, "output_2": "binary_crossentropy"}
    loss_weights = {"output_1": 1.0, "output_2": 1.0}
    model.compile(optimizer=optimizer, loss=losses, loss_weights=loss_weights,
                  sample_weight_mode={"output_1": "temporal", "output_2": None})

model.fit(x=gen(all_files, 128), epochs=100, steps_per_epoch=100,
          validation_data=gen(val_files, 128), validation_steps=100, validation_freq=1)
```
 |
tensorflow/tensorflow | TF-TRT converted deep model INT8 inference crash | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag: bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: TF-GPU 1.15 (M45) DLVM based on Debian GNU/Linux 9.12 (stretch), GNU/Linux 4.9.0-12-amd64 x86_64
- Mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v1.15.2-1-g61ff2cb, 1.15.2
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: CUDA compilation tools release 10.0, V10.0.130; CUDNN_MAJOR 7, CUDNN_MINOR 6, CUDNN_PATCHLEVEL 5

**Describe the current behavior**
I am using TF-TRT to convert some models to different precision modes. It works fine for FP32 and FP16 (they get converted and inference runs fine), but INT8 only gets converted; inference gives the following error:

```
2020-04-12 10:20:31.123734: I tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:812] Starting calibration thread on device 0, Calibration Resource @ 0x7fd990004ea0
2020-04-12 10:20:31.123883: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.5
2020-04-12 10:20:31.124498: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.5
2020-04-12 10:20:58.770669: I tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:812] Starting calibration thread on device 0, Calibration Resource @ 0x7fd96c004e80
2020-04-12 10:21:20.543550: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-04-12 10:29:21.108202: F tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:349] Check failed: t.TotalBytes() == device_tensor->TotalBytes() (1832000 vs. 21467376)
Aborted
```

**Describe the expected behavior**
The model should run fast with INT8.

**Standalone code to reproduce the issue**: N/A

**Other info / logs**: already attached. |
tensorflow/tensorflow | BatchNormalization with renorm=True doesn't work with TPUs | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution / TensorFlow version / Python version: Colab (TPU runtime)

**Describe the problem**
`BatchNormalization` with the argument `renorm=True` using TPUs in Colab produces an error. It seems to be a bug, since the code below works with CPUs and GPUs.

**Source code / logs**
Full code to reproduce the error:

```python
import tensorflow as tf

# TPU setup
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)

# Build the network
def build_model():
    from tensorflow.keras import Model
    from tensorflow.keras.layers import Dense, Input, BatchNormalization
    inputs = Input((1,))
    x = inputs
    x = Dense(1)(x)
    x = BatchNormalization(renorm=True)(x)
    x = Dense(1, activation="relu")(x)
    return Model(inputs, x)

with strategy.scope():
    model = build_model()
    model.compile(loss="mse", optimizer="adam")
```

Error log (abridged; 12 frames):

```
TypeError                                 Traceback (most recent call last)
<ipython-input> in build_model()
     16   x = BatchNormalization(renorm=True)(x)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
    922       outputs = call_fn(cast_inputs, *args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/normalization.py in call(self, inputs, training)
    818       r, d, new_mean, new_variance = self._renorm_correction_and_moments(
    819           new_mean, new_variance, training, inputs_size)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/normalization.py in _renorm_correction_and_moments(...)
    677     update_new_mean = _update_renorm_variable(self.renorm_mean, mean, inputs_size)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/normalization.py in _update_renorm_variable(var, value, inputs_size)
    674       return tf_utils.smart_cond(training, _do_update, _fake_update)
...
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/cond_v2.py in _check_same_outputs(op_type, graphs)
    801         error("%s and %s have different types" % (b0_out, bn_out))
TypeError: true_fn and false_fn arguments to tf.cond must have the same number, type, and overall structure of return values.
true_fn output: Tensor("Identity_1:0", dtype=bool)
false_fn output: Tensor("Identity_1:0", shape=(1,), dtype=float32)
Error details: Tensor("Identity_1:0", dtype=bool) and Tensor("Identity_1:0", shape=(1,), dtype=float32) have different types
```
 |
tensorflow/tensorflow | Currently `logs` param is None for `on_train_end` and `on_test_end` | Bug | You can see in the current implementation of `fit` (around L950) and `evaluate` (around L1180) that the `logs` passed to the `on_train_end` and `on_test_end` methods is `None`, which, as per the documentation, "can be changed in future". In TensorFlow Addons there is a `TQDMProgressBar` callback, and recently I raised PR #1649 to add code making the progress bar work for `evaluate` too. Here we come across the problem that `logs` is passed to the `on_test_batch_end` method to update the progress bar, but after the epoch is complete, when the `on_test_end` method is called, no `logs` are passed to it. Because of this, no metric results are passed to the method. In my opinion, and also from Shun Lin's comment on #1649, it would be good to pass the `logs` output from the last call to `on_test_batch_end`. Currently in the TQDM callback we are storing the `on_test_batch_end` logs in a class variable and using them in `on_test_end`, which we think is a temporary fix. cc @shun-lin @gabrieldemarmiesse |
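The temporary fix described above — caching the last per-batch logs so `on_test_end` still has metrics when Keras passes `None` — can be sketched framework-free as follows. The method names and signatures match the Keras callback API; everything else (class name, attribute name) is illustrative, and a real version would subclass `tf.keras.callbacks.Callback`.

```python
class EvalProgressSketch:
    """Sketch of the workaround used in the TQDMProgressBar discussion:
    cache the logs from the most recent on_test_batch_end call, because
    Keras currently calls on_test_end(logs=None)."""

    def __init__(self):
        self._last_test_logs = None

    def on_test_batch_end(self, batch, logs=None):
        # Keras passes per-batch metric results here; keep a copy.
        if logs is not None:
            self._last_test_logs = dict(logs)

    def on_test_end(self, logs=None):
        # Fall back to the cached per-batch logs when Keras passes None,
        # so the final progress-bar update still shows metric values.
        logs = logs if logs is not None else (self._last_test_logs or {})
        return logs
```

If Keras later starts passing real `logs` to `on_test_end`, this implementation automatically prefers them over the cached copy, so the workaround degrades gracefully.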
tensorflowtensorflow | Core dump in TensorFlow 1.12.0 | Bug | Core file information is as follows:
#0  0x00007fb46bf905f7 in raise () from /lib64/libc.so.6
#1  0x00007fb46bf91ce8 in abort () from /lib64/libc.so.6
#2  0x00007fb46bfd0317 in __libc_message () from /lib64/libc.so.6
#3  0x00007fb46bfd8023 in _int_free () from /lib64/libc.so.6
#4  0x00007fb3dd81616b in std::_Function_base::_Base_manager<tensorflow::(anonymous namespace)::ExecutorState::...(tensorflow::(anonymous namespace)::ExecutorState::TaggedNode, long long)...>::_M_manager(std::_Any_data&, std::_Any_data const&, std::_Manager_operation) from /opt/anaconda2/lib/python2.7/site-packages/tensorflow/python/libtensorflow_framework.so
#5  0x00007fb3dd89254d in Eigen::NonBlockingThreadPoolTempl<...>::WorkerLoop(int) from /opt/anaconda2/lib/python2.7/site-packages/tensorflow/python/libtensorflow_framework.so
#6  0x00007fb3dd891582 in std::_Function_handler<...{lambda()#1}...>::_M_invoke(std::_Any_data const&) from /opt/anaconda2/lib/python2.7/site-packages/tensorflow/python/libtensorflow_framework.so
#7  0x00007fb3dcdaf220 in ?? () from /lib64/libstdc++.so.6
#8  0x00007fb46ca2cdc5 in start_thread () from /lib64/libpthread.so.0
#9  0x00007fb46c05129d in clone () from /lib64/libc.so.6
System information: Python 2.7, CentOS, TensorFlow 1.12.0 installed from tensorflow-1.12.0-cp27-cp27mu-manylinux1_x86_64.whl, gcc 4.8.5. Has anyone ever faced the same issue?
tensorflowtensorflow | How to store a tf.data.Dataset object to file | Bug | URL(s) with the issue: Description of issue (what needs changing): how to store a tf.data.Dataset object to a file. For instance:

    dataset1 = tf.data.Dataset.from_tensor_slices(
        tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))
    dataset1

How do I store dataset1 to a file? Clear description: for me, a saved copy of the tokenized dataset saves a lot of training time.

    from transformers import AlbertTokenizer
    import tensorflow as tf
    import datareader
    import tokenizer

    def encode(type, datapath="qgdata/nq_train_sample.json"):
        entries = datareader.read(datapath)
        encodings = []
        for entry in entries:
            if type == "context":
                context = tokenizer.encode(entry["passage"], entry["answer"], entry["question"], True)
                encodings.append(context)
            else:
                question = tokenizer.encode(entry["passage"], entry["answer"], entry["question"], False)
                encodings.append(question)
        data = tf.data.Dataset.from_generator(lambda: encodings, tf.int64, output_shapes=[512])
        return data

    def make_dataset(datapath="qgdata/nq_train_sample.json", batch_size=1):
        contextdata = encode("context", datapath)
        questiondata = encode("question", datapath)
        dataset = tf.data.Dataset.zip((contextdata, questiondata))
        return dataset.batch(batch_size)

Instead of running this batching script before each training run, it would be very efficient to store the tokenized dataset object to a file and avoid re-tokenizing. Usage example, maybe like:

    dataset1 = tf.data.Dataset.from_tensor_slices(
        tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32))
    dataset1.save("dataset_path_to_store")
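A common interim workaround for the re-tokenization cost described above is to serialize the encoded examples once and rebuild the dataset from the cached file on later runs. The sketch below uses only the standard library (pickle); `fake_tokenize` is a hypothetical stand-in for the real `tokenizer.encode` call, and the reloaded list could then feed `tf.data.Dataset.from_generator` as in the issue. (Later TensorFlow releases also added `tf.data.experimental.save`/`load` for persisting datasets directly.)

```python
# Sketch of a tokenize-once-then-cache workaround, stdlib only.
# `fake_tokenize` is a placeholder for the real tokenizer (hypothetical).
import os
import pickle
import tempfile

def fake_tokenize(text):
    # Placeholder for tokenizer.encode(...): one integer id per word.
    return [len(word) for word in text.split()]

def load_or_build(cache_path, texts):
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)          # fast path: reuse cached encodings
    encoded = [fake_tokenize(t) for t in texts]
    with open(cache_path, "wb") as f:
        pickle.dump(encoded, f)            # slow path: tokenize once, cache
    return encoded

texts = ["store the dataset", "avoid retokenizing"]
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "encodings.pkl")
    first = load_or_build(path, texts)     # builds and writes the cache
    second = load_or_build(path, texts)    # served from the cache file
print(first)  # [[5, 3, 7], [5, 12]]
```

The second call never touches the tokenizer, which is the point: the expensive encoding step runs once per corpus, not once per training run.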
tensorflowtensorflow | Embed a preprocessing function inside a tf.keras model for serving | Bug | System information: Have I written custom code (as opposed to a stock example script provided in TensorFlow): yes; OS platform and distribution: Colab; the other template fields were left blank.

Describe the current behavior: I am trying to embed a simple image preprocessing function inside an already-trained tf.keras model. This is a useful feature to have because it can help us reduce a lot of the boilerplate code needed while using any model for serving purposes. With this capability you get a lot more flexibility and modularity in your model. So, after training my model, I first define a preprocessing function like so:

    def preprocess_image_cv2(image_path):
        img = cv2.imread(image_path)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        img = cv2.resize(img, (28, 28)).astype("float32")
        img = img / 255.
        img = np.expand_dims(img, 0)
        img = tf.convert_to_tensor(img)
        return img

I then use it to create another model class along with the trained model:

    # Define the model for prediction purposes
    class ExportModel(tf.keras.Model):
        def __init__(self, preproc_func, model):
            super().__init__()
            self.preproc_func = preproc_func
            self.model = model

        @tf.function
        def my_serve(self, image_path):
            print("inside")
            preprocessed_image = self.preproc_func(image_path)            # preprocess
            probabilities = self.model(preprocessed_image, training=False)  # model prediction
            class_id = tf.argmax(probabilities[0], axis=1)                # post-process
            return class_id

I am then able to run inference on a sample image with this setup:

    # Now initialize a dummy model and fill its parameters with those of the model we trained
    restored_model = get_training_model()
    restored_model.set_weights(apparel_model.get_weights())

    # Now use this model, the preprocessing function, and the same image to check if everything is working
    serving_model = ExportModel(preprocess_image_cv2, restored_model)
    class_index = serving_model.my_serve("sample_image.png")
    CLASSES[class_index.numpy()]  # prints 'dress'

But I am unable to export this model for serving. I am doing the following for export:

    # Make sure we are not letting the model train
    tf.keras.backend.set_learning_phase(0)

    # Serialize the model
    export_path = "model_preprocess_func/"
    tf.saved_model.save(serving_model, export_path,
                        signatures={"serving_default": serving_model.my_serve})

This yields a ValueError:

    ValueError Traceback (most recent call last)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/save.py in save(obj, export_dir, signatures, options)
        949
        950   export_graph, object_saver, asset_info = _build_meta_graph(
        951       obj, export_dir, signatures, options, meta_graph_def)
        952   saved_model.saved_model_schema_version = constants.SAVED_MODEL_SCHEMA_VERSION
        953
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, export_dir, signatures, options, meta_graph_def)
       1009
       1010   signatures = (
       1011       signature_serialization.canonicalize_signatures(signatures))
       1012   signature_serialization.validate_saveable_view(checkpoint_graph_view)
       1013   signature_map = signature_serialization.create_signature_map(signatures)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/signature_serialization.py in canonicalize_signatures(signatures)
        110         "Expected a TensorFlow function to generate a signature for, but "
        111         "got {}. Only `tf.functions` with an input signature or "
        112         "concrete functions can be used as a signature".format(function))
        113
        114     wrapped_functions[original_function] = signature_function
    ValueError: Expected a TensorFlow function to generate a signature for, but got <...>. Only `tf.functions` with an input signature or concrete functions can be used as a signature

I am able to interpret the last part of the error, but I am unable to figure out what steps I should take to resolve it.

Describe the expected behavior:
Standalone code to reproduce the issue: one can reproduce the issue with this Colab notebook [1]. Help is appreciated. [1]
Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback; large logs and files should be attached.
tensorflowtensorflow | tf.data.Dataset.map uses only 1 CPU | Bug | System information: Have I written custom code: no; OS platform and distribution: Linux Ubuntu 16.04; TensorFlow installed from: binary; TensorFlow version: v2.2.0-rc1-34-ge6e5d6df2a 2.2.0-rc2; Python version: 3.7.5; CUDA/cuDNN version: 10.1 / 7.6.5; GPU model and memory: GTX 1080 Ti.

Describe the current behavior: when checking with the top command during training, only 1 CPU is used.
Describe the expected behavior: multiple CPUs should be used.

Standalone code to reproduce the issue: you can take the code from the official TensorFlow tutorial on image segmentation. For a more convincing experiment, replace the two lines

    train_dataset = train.map(load_image_train, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    test_dataset = test.map(load_image_test)

with e.g.

    train_dataset = train.map(load_image_train, num_parallel_calls=4)
    test_dataset = test.map(load_image_test, num_parallel_calls=4)
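For readers unfamiliar with what `num_parallel_calls` requests, here is a conceptual stand-in in plain Python (no TensorFlow): the per-element transform is applied on several worker threads instead of one. This only illustrates the parallelism model the issue expects; it is not tf.data's actual implementation, and `load_image` is a hypothetical stand-in for the tutorial's preprocessing step.

```python
# Conceptual stand-in for Dataset.map(fn, num_parallel_calls=4):
# apply the per-element transform on several worker threads.
from concurrent.futures import ThreadPoolExecutor

def load_image(x):
    # Stand-in for the tutorial's load_image_train preprocessing step.
    return x * 2

def parallel_map(fn, elements, num_parallel_calls=4):
    with ThreadPoolExecutor(max_workers=num_parallel_calls) as pool:
        # Executor.map preserves input order, like Dataset.map does.
        return list(pool.map(fn, elements))

result = parallel_map(load_image, range(8), num_parallel_calls=4)
print(result)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

The bug report is that, despite requesting this kind of parallelism, top shows only one core busy during training.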
tensorflowtensorflow | Cannot set tf.Variable as model input | Bug | TensorFlow 2.1.0. I intend to utilize a pre-trained model and feed it a trainable input noise, but this is not allowed in TensorFlow 2.0. I want to find out whether it is a bug or by design.

    init_value = tf.random.normal((1, 512, 512, 3))
    noise_input = tf.Variable(init_value, trainable=True)
    pretrained_vgg19 = tf.keras.applications.VGG19(include_top=False, input_shape=(512, 512, 3))
    image = pretrained_vgg19.predict(noise_input)
tensorflowtensorflow | More clarity on TFLite GPU | Bug | URL(s) with the issue: Description of issue (what needs changing): after a couple of days digging through documentation and source code, I'm still very confused about the current state of GPU support in TensorFlow Lite.

1. android_cc talks about C/C++, which gives the illusion that one might use the Lite C API, but as far as I can see the ModifyGraphWithDelegate function is not present in lite/c. Why? It would be very helpful, even though lite/c has the concept of delegates.
2. android_cc suggests a build command that generates a 60 MB shared library. I don't see any benefit in giving such a suggestion, since other commands listed on other pages will generate properly optimized binaries.
3. android_cc (building on 2): I'm also under the impression that building the delegate as a separate shared lib would not be the best option for minimizing the overall size. In this case a target for building the delegate and libtensorflowlite together would be highly appreciated, at least as a documentation snippet, not to mention prebuilt binaries, which have been referred to by the team as "coming soon" in several not-so-recent issue comments.
4. input/output buffers suggests the use of GpuDelegate, which as far as I understand comes from lite/delegates/gpu/gl_delegate.h and as such is deprecated: a big notice in the source code warns to migrate to the new implementation before the end of 2019, so it probably shouldn't be in the documentation.
5. input/output buffers: while a replacement exists (lite/delegates/gpu/delegate.h), it does not have any BindGlBufferToTensor function, and it is not clear how to achieve the same thing with the new delegate. There are several unanswered questions on Stack Overflow about this.
6. input/output buffers: the example uses an SSBO; however, the delegate seems to support textures as well (L53, L57). If this can be a different way to send the initial input, it would be nice to have it documented.

It is hard for us to plan the adoption of TFLite without a clear view of what you have, or at least where you're heading. For example, I'm especially interested in using GL buffers as input (sounds like a game changer), but I have no clue about the state of this in TFLite; same with using delegates in lite/c: the abstraction is there, but ModifyGraphWithDelegate is not. So, doc fixes apart, could we have a very brief description of where the TFLite GPU delegate is headed and what's coming in the next couple of months, so that people can decide if it meets their needs and plan accordingly? I understand that some of these APIs are marked as experimental, and I really appreciate your work. Thanks!
tensorflowtensorflow | 1.14/1.15: TensorRT ConvertGraphDefToEngine SIGSEGV | Bug | System information: Have I written custom code (as opposed to a stock example script provided in TensorFlow): yes; OS platform and distribution: NixOS; mobile device: n/a; TensorFlow installed from: source; TensorFlow version: 1.14; Python version: 3.6.10; Bazel version (if compiled from source): v0.26.0; compiler version (if compiled from source): 0.9.2; CUDA/cuDNN version: 10.0.130 / 7.6.5.32; GPU model and memory: no GPU in the system (sandboxed environment).

Describe the current behavior: when making TensorFlow invoke the TensorRT graph optimizer on a system without a GPU, inside a sandbox, TensorFlow segfaults with a null dereference. Logs:

2020-04-11 19:39:35.067438: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2020-04-11 19:39:35.069624: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2020-04-11 19:39:35.069654: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (shirobox): /proc/driver/nvidia/version does not exist
2020-04-11 19:39:35.088189: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2999610000 Hz
2020-04-11 19:39:35.089136: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x40682b0 executing computations on platform Host. Devices:
2020-04-11 19:39:35.089190: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0):
2020-04-11 19:39:35.829560: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
2020-04-11 19:39:35.858 WARNING tensorflow: From test_tensorrt_enable.py:26: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.convert_variables_to_constants
2020-04-11 19:39:35.858 WARNING tensorflow: From /nix/store/w7dqhlrb6qv0im8hnwl309f13q86pnb4-python3-3.6.10-env/lib/python3.6/site-packages/tensorflow/python/framework/graph_util_impl.py:270: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.extract_sub_graph
2020-04-11 19:39:35.876 INFO tensorflow (graph_util_impl, convert_variables_to_constants): Froze 11 variables.
2020-04-11 19:39:35.890 INFO tensorflow (graph_util_impl, convert_variables_to_constants): Converted 11 variables to const ops.
2020-04-11 19:39:35.891 INFO tensorflow (trt_convert, check_trt_version_compatibility): Linked TensorRT version: 7.0.0
2020-04-11 19:39:35.891 INFO tensorflow (trt_convert, check_trt_version_compatibility): Loaded TensorRT version: 7.0.0
2020-04-11 19:39:35.891 INFO tensorflow (trt_convert, check_trt_version_compatibility): Running against TensorRT version 7.0.0
2020-04-11 19:39:35.948446: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count: 8, compute capability: 0.0): 0
2020-04-11 19:39:35.948611: I
tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
2020-04-11 19:39:35.997673: I tensorflow/compiler/tf2tensorrt/segment/segment.cc:460] There are 5 ops of 4 different types in the graph that are not converted to TensorRT: Identity, FusedBatchNorm, NoOp, Placeholder (for more information see Supported Ops).
2020-04-11 19:39:35.997810: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:733] Number of TensorRT candidate segments: 1
2020-04-11 19:39:36.003553: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2020-04-11 19:39:36.006215: E tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:797] Couldn't get current device: no CUDA-capable device is detected
2020-04-11 19:39:36.006242: E tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:659] TF GPU with id 0 does not exist: Not found: TensorFlow device GPU:0 was not registered
[... the identical convert_graph.cc:659 "TF GPU with id N does not exist: Not found: TensorFlow device GPU:N was not registered" error repeats for ids 1 through 99 ...]
2020-04-11 19:39:36.007348: W tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:824] Can't identify the cuda device. Running on device 0
2020-04-11 19:39:36.007394: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:41] DefaultLogger CUDA initialization failure with error 38. Please check your CUDA installation.
Fatal Python error: Segmentation fault

Current thread 0x00007f02020e7f40 (most recent call first):
  File "/nix/store/w7dqhlrb6qv0im8hnwl309f13q86pnb4-python3-3.6.10-env/lib/python3.6/site-packages/tensorflow/python/grappler/tf_optimizer.py", line 41 in OptimizeGraph
  File ".../site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 204 in _run_conversion
  File ".../site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 226 in _convert_graph_def
  File ".../site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 298 in convert
  File ".../site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1146 in create_inference_graph
  File ".../site-packages/tensorflow/contrib/tensorrt/python/trt_convert.py", line 51 in create_inference_graph
  File "test_tensorrt_enable.py", line 38 in test_tensorrt_enable
  [... the remaining frames are pytest/pluggy internals (_pytest/python.py, pluggy/callers.py, pluggy/manager.py, pluggy/hooks.py, _pytest/runner.py); the log is truncated here ...]
3 6 10 env lib python3 6 site package pluggy manager py line 93 in hookexec file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pluggy hook py line 286 in call file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pyt main py line 271 in pyt runtestloop file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pluggy caller py line 187 in multicall file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pluggy manager py line 87 in file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pluggy manager py line 93 in hookexec file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pluggy hook py line 286 in call file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pyt main py line 247 in main file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pyt main py line 197 in wrap session file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pyt main py line 240 in pyt cmdline main file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pluggy caller py line 187 in multicall file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pluggy manager py line 87 in file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pluggy manager py line 93 in hookexec file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pluggy hook py line 286 in call file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pyt config init py line 93 in main file nix store w7dqhlrb6qv0im8hnwl309f13q86pnb4 python3 3 6 10 env lib python3 6 site package pyt main py line 7 in 
```
  File "/nix/store/2ard4hsnrajrxdwvp20kgql2r5j2fl82-python3-3.6.10/lib/python3.6/runpy.py", line 85 in _run_code
  File ".../lib/python3.6/runpy.py", line 193 in _run_module_as_main
```

gdb backtrace:

```
#0  0x00007fff359762f7 in tensorflow::tensorrt::convert::ConvertGraphDefToEngine(tensorflow::GraphDef const&, tensorflow::tensorrt::TrtPrecisionMode, int, unsigned long, std::vector<...> const&, tensorflow::tensorrt::Logger*, nvinfer1::IGpuAllocator*, tensorflow::tensorrt::TRTInt8Calibrator*, std::unique_ptr<...>*, bool, bool*) from .../site-packages/tensorflow/compiler/tf2tensorrt/python/ops/libtftrt.so
#1  0x00007fff3593f8e0 in tensorflow::tensorrt::convert::CreateTRTNode(tensorflow::tensorrt::convert::ConversionParams const&, std::vector<...> const&, int, int, tensorflow::Graph*, nvinfer1::IGpuAllocator*, std::vector<...>*) from .../libtftrt.so
#2  0x00007fff35947551 in tensorflow::tensorrt::convert::ConvertAfterShapes(tensorflow::tensorrt::convert::ConversionParams const&) from .../libtftrt.so
#3  0x00007fff3597ff50 in tensorflow::tensorrt::convert::TRTOptimizationPass::Optimize(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem const&, tensorflow::GraphDef*) from .../libtftrt.so
#4  0x00007fff9c7d2105 in tensorflow::grappler::MetaOptimizer::RunOptimizer(tensorflow::grappler::GraphOptimizer*, tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem*, tensorflow::GraphDef*, tensorflow::grappler::MetaOptimizer::GraphOptimizationResult*) from .../site-packages/tensorflow/python/pywrap_tensorflow_internal.so
#5  0x00007fff9c7d3631 in tensorflow::grappler::MetaOptimizer::OptimizeGraph(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem const&, tensorflow::GraphDef*) from .../pywrap_tensorflow_internal.so
#6  0x00007fff9c7d51fe in tensorflow::grappler::MetaOptimizer::Optimize(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem const&, tensorflow::GraphDef*) from .../pywrap_tensorflow_internal.so
#7  0x00007fff94cfaa43 in TF_OptimizeGraph(GCluster, tensorflow::ConfigProto const&, tensorflow::MetaGraphDef const&, bool, std::__cxx11::basic_string<...> const&, TF_Status*) from .../pywrap_tensorflow_internal.so
#8  0x00007fff94d024fa in _wrap_TF_OptimizeGraph from .../pywrap_tensorflow_internal.so
#9–#45  CPython interpreter frames (PyCFunction_FastCallDict, call_function, _PyEval_EvalFrameDefault, _PyEval_EvalCodeWithName, _PyFunction_FastCall, PyObject_Call, _PyEval_EvalCodeEx, function_call) from libpython3.6m.so.1.0
```
```
#46–#51  further CPython frames (call_function, _PyEval_EvalFrameDefault, _PyEval_EvalCodeWithName, _PyFunction_FastCallDict, _PyObject_FastCallDict) from libpython3.6m.so.1.0
```

Disassembly around the faulting instruction in `ConvertGraphDefToEngine` (excerpt):

```
=> 0x00007fff359762f7 <+183>:  mov  (%rax),%rax
```

`(gdb) info registers` shows `rax 0x0 0`, i.e. a null pointer is being dereferenced.

I suspect that what's happening is that `nvinfer1::createInferBuilder` (here, L4917) returns a nullptr because of the missing GPU, and TensorFlow then dereferences it without checking the return value. From what I can see, this issue is still present in master (L1204): the TRT builder is never checked for nullptr before it is dereferenced a couple of lines below.

Describe
the expected behavior: TensorFlow should return an error / raise an exception instead of trying to dereference a nullptr.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import tensorflow.contrib.slim as slim
import tensorflow.contrib.tensorrt as trt

with tf.Session() as sess:
    input0 = tf.placeholder(tf.float32, [10, 3, 224, 224])
    out = input0
    out = slim.batch_norm(out, data_format='NHWC', scope='bn2')
    with slim.arg_scope([slim.conv2d, slim.batch_norm], data_format='NCHW'):
        with slim.arg_scope([slim.batch_norm]):  # (some arg_scope kwargs appear lost in the report)
            out = slim.conv2d(out, 64, [3, 3], normalizer_fn=slim.batch_norm,
                              activation_fn=tf.nn.relu)
            out = slim.conv2d(out, 64, [3, 3], normalizer_fn=slim.batch_norm)
    out = tf.identity(out)
    out = tf.identity(out)
    out_name = out.name[:-2]

    init = tf.global_variables_initializer()
    sess.run(init)

    frozen_graph = tf.graph_util.convert_variables_to_constants(
        sess, tf.get_default_graph().as_graph_def(),
        output_node_names=[out_name])

    trt_graph = trt.create_inference_graph(
        frozen_graph, [out_name], max_batch_size=10, is_dynamic_op=False)
```
tensorflow/tensorflow | Both mean and variance must be None when is_training is True and exponential_avg_factor == 1.0 | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 18.04
- Mobile device: no
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.2.0-dev20200411
- Python version: 3.6.3
- CUDA/cuDNN version: 10.1

Describe the current behavior: when instantiating a batch-norm layer like this:

`tf.keras.layers.BatchNormalization(momentum=0.0, center=True, scale=False, name='bn1')`

I get the error "Both mean and variance must be None when is_training is true and exponential_avg_factor == 1.0".

Describe the expected behavior: this is not always the expected behavior. Consider meta-learning, for example: we are going to see just one batch of training data and we want to adapt all means and variances to this batch, which means the momentum should be zero. Then, after applying a few training iterations, we evaluate on the same batch-norm layer with training=False, and that should also work fine.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

inp = tf.keras.layers.Input(shape=(84, 84, 3))
dense = tf.keras.layers.Conv2D(10, 3, activation=None)(inp)
bn = tf.keras.layers.BatchNormalization(momentum=0.0, center=True,
                                        scale=False, name='bn1')(dense)
rel = tf.keras.layers.ReLU()(bn)
flat = tf.keras.layers.Flatten()(rel)
out = tf.keras.layers.Dense(1)(flat)
model = tf.keras.models.Model(inputs=inp, outputs=out)
model.compile(loss=tf.keras.losses.MeanSquaredError(),
              optimizer=tf.keras.optimizers.Adam())
model.fit(x=np.random.uniform(size=(4, 84, 84, 3)),
          y=np.random.uniform(size=(4, 1)), epochs=1)
model.evaluate(x=np.random.uniform(size=(3, 84, 84, 3)),
               y=np.random.uniform(size=(3, 1)))
model.predict(x=np.random.uniform(size=(1, 84, 84, 3)))
```

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback; large logs and files should be attached.
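The report hinges on what `momentum=0.0` means for the moving statistics. A minimal NumPy sketch of the Keras-style moving-average update (my own illustration, not the Keras internals) shows why momentum 0 makes the running stats track the current batch exactly, which is what the meta-learning use case wants. Presumably Keras maps `momentum` to the fused kernel's `exponential_avg_factor` as `1 - momentum`, so `momentum=0.0` hits the rejected factor of 1.0 (an assumption on my part, not verified against the Keras source):

```python
import numpy as np

def update_moving_stats(moving_mean, moving_var, batch, momentum):
    # Keras-style update: moving <- moving * momentum + batch_stat * (1 - momentum)
    batch_mean = batch.mean(axis=0)
    batch_var = batch.var(axis=0)
    new_mean = moving_mean * momentum + batch_mean * (1 - momentum)
    new_var = moving_var * momentum + batch_var * (1 - momentum)
    return new_mean, new_var

batch = np.array([[1.0, 2.0], [3.0, 6.0]])
m, v = update_moving_stats(np.zeros(2), np.ones(2), batch, momentum=0.0)
# with momentum = 0.0 the running stats equal the batch stats exactly
```

With `momentum=0.0` the previous running stats are discarded entirely, so inference with `training=False` normalizes with exactly the last batch's statistics.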
tensorflow/tensorflow | Random NaN loss when using float16 dtype and batch size of 1 | Bug | System information: platform Ubuntu Linux (kernel 5.3) with Python 3.6.9, or Google Colab; tested on TensorFlow v2.1.0 and v2.2.0rc2.

Background: I came across the following bug in one of my TensorFlow projects and was able to reproduce it with minimal code in a Google Colab (please see the link to this Colab below; its execution also shows the bug occurring in iteration 248).

Reproduction code:

```python
import math
import numpy as np
import tensorflow as tf

if __name__ == "__main__":
    x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([0., 1., 1., 0.])
    loss_function = tf.keras.losses.BinaryCrossentropy()
    for i in range(2000):
        if i % 100 == 0:
            print("iteration {}".format(i))
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(units=1, activation="tanh", dtype=tf.float16)
        ])
        model.compile(optimizer="sgd", loss=loss_function)
        model.fit(x=x, y=y, epochs=1, batch_size=1)
        loss_result = loss_function(y, model(x))
        if math.isnan(loss_result):
            raise RuntimeError("NaN error in iteration {}".format(i))
```

Behaviour description: a seemingly non-deterministic occurrence of a NaN result when calculating the loss of a very simple dense model. The NaN loss seems to happen randomly and can occur on the 60th or 600th iteration; in the supplied Google Colab code it happens in the 248th iteration. The bug only seems to occur using a dtype of float16 and a batch size of 1. Debugging the error led me to see that the model producing the NaN loss seems to have been initialized with a NaN bias and kernel, though I couldn't get to the bottom of why.
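Float16 headroom is tiny (max ≈ 65504), so a single oversized intermediate in the forward or backward pass can overflow to inf and then propagate as NaN. A quick NumPy sketch (an illustration of the arithmetic only, not the Keras code path in the report) shows how little it takes:

```python
import numpy as np

# float16 overflows at ~65504; one oversized intermediate poisons everything
x = np.float16(300.0)
squared = x * x              # 300 * 300 = 90000 > 65504 -> inf in float16
diff = squared - squared     # inf - inf -> nan
```

This is why mixed-precision setups usually keep the loss and the optimizer state in float32 and use loss scaling, rather than running the whole model in pure float16.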
tensorflow/tensorflow | ValueError: Error when checking: expected input_1 to have shape (608, 608, 3) but got array with shape (416, 416, 3) | Bug | Please give me a suggestion how to solve this error.

```python
if __name__ == '__main__':
    yolo = YOLO(0.6, 0.5)
    file = 'data/coco_classes.txt'
    all_classes = get_classes(file)

    # detect images in the test folder
    for (root, dirs, files) in os.walk('images/test'):
        if files:
            for f in files:
                print(f)
                path = os.path.join(root, f)
                image = cv2.imread(path)
                image = np.array(image)
                image = detect_image(image, yolo, all_classes)
                cv2.imwrite('images/res/' + f, image)
```

```python
def process_image(img):
    """Resize, reduce and expand image.

    # Argument:
        img: original image.

    # Returns
        image: ndarray(64, 64, 3), processed image.
    """
    image = cv2.resize(img, (416, 416), interpolation=cv2.INTER_CUBIC)
    image = np.array(image, dtype='float32')
    image /= 255.
    image = np.expand_dims(image, axis=0)
    return image
```

ValueError: Error when checking: expected input_1 to have shape (608, 608, 3) but got array with shape (416, 416, 3)
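The traceback says the loaded weights expect 608×608 inputs while `process_image` hard-codes a resize to 416×416, so the fix is to resize to the model's declared input shape. A small sketch of a hypothetical helper (not part of the original repo; nearest-neighbour indexing stands in for `cv2.resize` so the sketch is dependency-free) derives the target size from the input shape instead of hard-coding it:

```python
import numpy as np

def preprocess_for(input_shape, img):
    """Resize img (H, W, 3) to the model's expected (h, w) and add a batch axis.

    input_shape is the model's input shape without the batch dim, e.g. (608, 608, 3).
    A real pipeline would use cv2.resize(img, (w, h), ...) here.
    """
    h, w = input_shape[0], input_shape[1]
    ys = np.arange(h) * img.shape[0] // h   # nearest-neighbour row indices
    xs = np.arange(w) * img.shape[1] // w   # nearest-neighbour column indices
    resized = img[ys][:, xs].astype('float32') / 255.0
    return np.expand_dims(resized, axis=0)

img = np.zeros((416, 416, 3), dtype=np.uint8)
batch = preprocess_for((608, 608, 3), img)
```

In the reporter's code the equivalent one-line fix is `cv2.resize(img, (608, 608), interpolation=cv2.INTER_CUBIC)`, ideally reading the size from `model.input_shape` rather than a literal.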
tensorflow/tensorflow | `ko` notebooks out of sync | Bug | Hello, please sync the `ko` notebooks to the source-of-truth notebooks using the nb-code-sync tool here. Currently, many of them are failing.
tensorflow/tensorflow | tf.ragged: tf.tile-like operation for each dimension | Bug | System information:
- OS platform and distribution: macOS Catalina 10.15.3
- TensorFlow installed from: binary
- TensorFlow version: 1.15.2
- Python version: 3.7.3

I have ragged tensors a1 and b1:

```python
a1 = tf.ragged.constant([[[b'a1', b'a2', b'a3'], [b'b1', b'b2', b'b3'], [b'c1', b'c2', b'c3']],
                         [[b'd1'], [b'e1', b'e2']]])
b1 = tf.ragged.constant([[[b't1', b't2', b't3'], [b'u1', b'u2'], [b'v1', b'v2']],
                         [[b'w1'], [b'x1', b'x2', b'x3']]])
```

Given a1 and b1 as above, I want c1 as below:

```python
c1 = tf.ragged.constant([[[b'a1', b'a2', b'a3'], [b'a1', b'a2', b'a3'], [b'a1', b'a2', b'a3'],
                          [b'b1', b'b2', b'b3'], [b'b1', b'b2', b'b3'],
                          [b'c1', b'c2', b'c3'], [b'c1', b'c2', b'c3']],
                         [[b'd1'],
                          [b'e1', b'e2'], [b'e1', b'e2'], [b'e1', b'e2']]])
```

That is, [b'a1', b'a2', b'a3'] is repeated a number of times equal to the number of elements in [b't1', b't2', b't3'], i.e. 3, so in c1 we have it three times. Similarly, [b'b1', b'b2', b'b3'] is repeated twice (the length of [b'u1', b'u2']) and [b'c1', b'c2', b'c3'] is repeated twice (the length of [b'v1', b'v2']); all of these belong to the first row. For the second ragged row the same process applies: [b'd1'] is repeated once (the length of [b'w1']) and [b'e1', b'e2'] three times (the length of [b'x1', b'x2', b'x3']). I am using this to convert raw signals to features as part of my SavedModel.
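The requested operation ("tile each innermost row of a1 as many times as the matching row of b1 has elements") can be prototyped in plain Python before reaching for a ragged TF op; a sketch, assuming a and b have matching outer structure:

```python
def tile_like(a, b):
    """Repeat each innermost list of `a` len(matching list of `b`) times."""
    return [
        [inner for inner, ref in zip(row_a, row_b) for _ in range(len(ref))]
        for row_a, row_b in zip(a, b)
    ]

a = [[['a1', 'a2', 'a3'], ['b1', 'b2', 'b3']], [['d1']]]
b = [[['t1', 't2', 't3'], ['u1', 'u2']], [['w1']]]
c = tile_like(a, b)
# first row: the a-list 3 times, the b-list twice; second row: ['d1'] once
```

In TF itself, something like `tf.repeat(a1, b1.row_lengths(axis=2), axis=1)` may express the same thing on recent versions, but I have not verified that against TF 1.15.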
tensorflow/tensorflow | `ru` failing notebooks | Bug |

nbconvert.preprocessors.execute.CellExecutionError: an error occurred while executing the following cell:

```python
import tensorflow.compat.v2 as tf
except Exception:
  pass
tf.enable_v2_behavior()
import tensorflow_datasets as tfds
```
```
  File "<string>", line 2
    except Exception:
SyntaxError: invalid syntax
```

(the `try:` line appears to be missing from the cell)

nbconvert.preprocessors.execute.CellExecutionError: an error occurred while executing the following cell:

```python
import time
saved_model_path = "saved_models/{}".format(int(time.time()))
tf.keras.experimental.export_saved_model(model, saved_model_path)
saved_model_path
```
```
AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.experimental' has no attribute 'export_saved_model'
```

nbconvert.preprocessors.execute.CellExecutionError: an error occurred while executing the following cell:

```python
# 60% / 40% split: 15,000 / 10,000 training/validation examples, 25,000 test
train_validation_split = tfds.Split.TRAIN.subsplit([6, 4])

(train_data, validation_data), test_data = tfds.load(
    name="imdb_reviews",
    split=(train_validation_split, tfds.Split.TEST),
    as_supervised=True)
```
```
  File ".../site-packages/tensorflow_datasets/core/tfrecords_reader.py", line 356, in _str_to_relative_instruction
    raise AssertionError('Unrecognized instruction format: %s' % spec)
AssertionError: Unrecognized instruction format: NamedSplit('train')(tfds.percent[0:60])
```

nbconvert.preprocessors.execute.CellExecutionError: an error occurred while executing the following cell:

```python
train_data = all_encoded_data.skip(TAKE_SIZE).shuffle(BUFFER_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE)

test_data = all_encoded_data.take(TAKE_SIZE)
test_data = test_data.padded_batch(BATCH_SIZE)
```
```
TypeError: padded_batch() missing 1 required positional argument: 'padded_shapes'
```
tensorflow/tensorflow | batch_jacobian incorrect | Bug | System information:
- Have I written custom code: yes
- OS platform and distribution: Windows 10, Anaconda Python 3.7.6, TensorFlow 2.1.0; Debian GNU/Linux 8.11 (jessie), Anaconda Python 3.7.3, TensorFlow 2.0.0

Standalone code to reproduce the issue:

```python
import numpy, tensorflow

x = tensorflow.Variable([1., 1.], dtype='float64')
with tensorflow.GradientTape(persistent=True) as t:
    with tensorflow.GradientTape() as tt:
        obj = x[0] ** 2 + x[1] ** 2 + x[0] * x[1]
    dx = tt.gradient(obj, x)
print(t.batch_jacobian(dx, x).numpy())
```

The output is `[3. 3.]`, which is not the Jacobian `[dy[i]/dx[i] for i in range(x.shape[0])]` as stated here (batch_jacobian); it is more like `gradient(sum(y), x)`, i.e. the same as the gradient of the gradient:

```python
import numpy, tensorflow

x = tensorflow.Variable([1., 1.], dtype='float64')
with tensorflow.GradientTape(persistent=True) as t:
    with tensorflow.GradientTape() as tt:
        obj = x[0] ** 2 + x[1] ** 2 + x[0] * x[1]
    dx = tt.gradient(obj, x)
print(t.jacobian(dx, x).numpy())
print(t.gradient(dx, x).numpy())
```

This outputs `[[2. 1.] [1. 2.]]` and `[3. 3.]`, which is perfectly fine.

Other info: f(x, y) = x^2 + y^2 + x*y, so df/dx = 2x + y, df/dy = 2y + x, and d2f/dx2 = 2, d2f/dxdy = 1, d2f/dy2 = 2. What I want is the diagonal of the Hessian without calculating the full Hessian.
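The Hessian diagonal the report asks for can be sanity-checked numerically. A finite-difference sketch (an illustration, not a TF API) for the same f(x) = x0^2 + x1^2 + x0*x1, using the second central difference per coordinate:

```python
import numpy as np

def f(x):
    return x[0] ** 2 + x[1] ** 2 + x[0] * x[1]

def hessian_diag(f, x, eps=1e-4):
    """d2f/dx_i^2 via the second central difference, one coordinate at a time."""
    diag = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        diag[i] = (f(x + e) - 2 * f(x) + f(x - e)) / eps ** 2
    return diag

d = hessian_diag(f, np.array([1.0, 1.0]))
# analytic Hessian is [[2, 1], [1, 2]], so the diagonal is [2, 2]
```

This agrees with `t.jacobian(dx, x)`'s diagonal and confirms that `[3. 3.]` (the row sums) is not the per-coordinate second derivative.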
tensorflow/tensorflow | TF2.x eager mode cannot support ParameterServerStrategy now | Bug | TF version: latest master (b083ceafd48b3c8e4d9dfcc40a6b743bed7b371a). Below is a simple example using TF2.0 eager mode; it runs successfully with MirroredStrategy but errors with ParameterServerStrategy.

```python
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import json
import tensorflow as tf
import tensorflow_datasets as tfds

datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {'worker': ['localhost:12345'], 'ps': ['localhost:12346']},
    'task': {'type': 'worker', 'index': 0}
})
strategy = tf.distribute.experimental.ParameterServerStrategy()
# strategy = tf.distribute.MirroredStrategy()  # this one works

print('Number of devices: {}'.format(strategy.num_replicas_in_sync))

num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples

BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync

def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label

train_dataset = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=['accuracy'])

checkpoint_dir = './training_checkpoints'
# name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, 'ckpt_{epoch}')

# function for decaying the learning rate; you can define any decay function you need
def decay(epoch):
    if epoch < 3:
        return 1e-3
    elif epoch >= 3 and epoch < 7:
        return 1e-4
    else:
        return 1e-5

# callback for printing the LR at the end of each epoch
class PrintLR(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        print('\nLearning rate for epoch {} is {}'.format(
            epoch + 1, model.optimizer.lr.numpy()))

callbacks = [
    tf.keras.callbacks.TensorBoard(log_dir='./logs'),
    tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
                                       save_weights_only=True),
    tf.keras.callbacks.LearningRateScheduler(decay),
    PrintLR()
]

model.fit(train_dataset, epochs=12, callbacks=callbacks)

model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
eval_loss, eval_acc = model.evaluate(eval_dataset)
print('Eval loss: {}, Eval accuracy: {}'.format(eval_loss, eval_acc))
```

Error message:

```
  File ".../tensorflow/python/training/tracking/base.py", line 456, in _method_wrapper
    result = method(self, *args, **kwargs)
  File ".../tensorflow/python/keras/engine/sequential.py", line 116, in __init__
    super(Sequential, self).__init__(name=name, autocast=False)
  File ".../tensorflow/python/keras/engine/training.py", line 199, in __init__
    self._init_batch_counters()
  File ".../tensorflow/python/keras/engine/training.py", line 206, in _init_batch_counters
    self._train_counter = variables.Variable(0, dtype='int64', aggregation=agg)
  File ".../tensorflow/python/ops/variables.py", line 261, in __call__
    return cls._variable_v2_call(*args, **kwargs)
  File ".../tensorflow/python/ops/variables.py", line 255, in _variable_v2_call
    shape=shape)
  File ".../tensorflow/python/ops/variables.py", line 66, in getter
    return captured_getter(captured_previous, **kwargs)
  File ".../tensorflow/python/distribute/distribute_lib.py", line 1769, in creator_with_resource_vars
    return self._create_variable(next_creator, **kwargs)
  File ".../tensorflow/python/distribute/parameter_server_strategy.py", line 455, in _create_variable
    with ops.device(self._variable_device):
  File ".../tensorflow/python/framework/ops.py", line 5183, in device
    "tf.device does not support functions when eager execution "
RuntimeError: tf.device does not support functions when eager execution is enabled.
```
tensorflowtensorflow | Prediction results from a SavedModel are not the same in the Python API and the Java API | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- OS Platform and Distribution: Windows 7
- TensorFlow installed from: binary
- TensorFlow version: CPU 1.15.0
- Python version: 3.6.3

**Describe the current behavior**
I saved the model with `estimator.export_saved_model`.

I load the model with Python:

```python
from tensorflow.contrib import predictor

self.model = predictor.from_saved_model(self.model_dir)

# predict
a_input_ids, a_input_mask, a_segment_ids = self.build_input(a_input)
output = self.model({"a_input_ids": a_input_ids,
                     "a_input_mask": a_input_mask,
                     "a_segment_ids": a_segment_ids})
a_output = output["a_output_layer"]
```

`a_output[0]`:

```
[0.05960074, 0.03045687, 0.20487925, 0.36802548, 0.07898629, 0.35250664, 0.21251363,
 0.23284832, 0.30972436, 0.20010747, 0.00487598, 0.48967522, 0.1831991, 0.28579575,
 0.15075627, 0.2821794, 0.02628851, 0.05371238, 0.06514908, 0.38573033, 0.34205046,
 0.3108538, 0.01758813, 0.59596956, 0.5169708, 0.46524945, 0.6804516, 0.32393196,
 0.36948654, 0.46160206, 0.15634336, 0.44929808, 0.39321676, 0.18401513, 0.3726705,
 0.19476992, 0.33169916, 0.11876976, 0.36055735, 0.19275247, 0.12676252, 0.10232886,
 0.63154477, 0.07467962, 0.17044203, 0.47212833, 0.26961723, 0.33468968, 0.22710937,
 0.05272907, 0.6149754, 0.02799183, 0.10492884, 0.23291017, 0.20572647, 0.13610545,
 0.05362191, 0.44776174, 0.4095006, 0.43816873, 0.22285426, 0.33557323, 0.31537503,
 0.07024186, 0.38216737, 0.12280162, 0.27534372, 0.41657594, 0.05565406, 0.33100575,
 0.29913923, 0.00283101, 0.10702493, 0.31459734, 0.2403451, 0.42180565, 0.03365724,
 0.3264306, 0.5190079, 0.21016245, 0.3
```

I load the model with Java:

```java
SavedModelBundle bundle = SavedModelBundle.load(exportDir, "serve");

// predict
ThreeTuple input = buildInput(sentenceList);
Tensor aInputIdsTensor = Tensor.create(input.param1);
Tensor aInputMaskTensor = Tensor.create(input.param2);
Tensor aSegmentIdsTensor = Tensor.create(input.param3);
List<Tensor<?>> result = this.bundle.session().runner()
        .feed("a_input_ids", aInputIdsTensor)
        .feed("a_input_mask", aInputMaskTensor)
        .feed("a_segment_ids", aSegmentIdsTensor)
        .fetch("a_output")
        .run();
Tensor resultTensor = result.get(0);
float[][] outResult = new float[1][dim];
resultTensor.copyTo(outResult);
float[] a_output = outResult[0];
```

`a_output`:

```
[0.018384377, 0.06955972, 0.051764473, 0.015454082, 0.06722302, 0.07839473,
 0.02793679, 0.072589695, 0.051289488, 0.039070662, 0.049129114, 0.11508791,
 0.0076964055, 0.042025223, 0.05569571, 0.05739251, 0.04230939, 0.05749864,
 0.10970076, 0.15183078, 0.08809995, 0.07375819, 0.08044808, 0.12184837,
 0.043990605, 0.12923256, 0.056834757, 0.056434825, 0.033050016, 0.022836037,
 0.09641873, 0.029169578, 0.0059487675, 0.084053494, 0.095500425, 0.009507669,
 0.032067284, 0.026453126, 0.070464775, 0.058229186, 0.016397119, 0.0129444385,
 0.07648615, 0.014742567, 0.01920672, 0.10167458, 0.040589973, 0.037671003,
 0.02273454
```

I use the same version of TensorFlow, the same SavedModel, the same vocab — the same everything except the code language. Apparently, the two results above are not the same.

**Describe the expected behavior**
Expect the same prediction results.

**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
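A mismatch like this usually comes from the two feature pipelines (tokenization and vocab handling in `build_input` vs. `buildInput`) rather than from SavedModel execution itself. A hedged debugging sketch — the helper name and the example ids below are made up — is to hash the exact integer features each runtime feeds the model and compare the digests across languages:

```python
# Hypothetical helper: fingerprint the features fed to each runtime so a
# Python-vs-Java mismatch can be attributed to preprocessing or to the model.
import hashlib
import struct

def fingerprint_int_features(name, values):
    """Deterministically hash a named list of integer feature values."""
    h = hashlib.sha256()
    h.update(name.encode("utf-8"))
    for v in values:
        # Little-endian 64-bit layout; a Java side can mirror this exactly
        # with ByteBuffer.order(ByteOrder.LITTLE_ENDIAN).putLong(v).
        h.update(struct.pack("<q", v))
    return h.hexdigest()

# If the Python and Java pipelines tokenize identically, the digests match;
# if they differ, the bug is in preprocessing, not in SavedModel execution.
ids_python = [101, 2023, 2003, 1037, 3231, 102]  # made-up example ids
ids_java = [101, 2023, 2003, 1037, 3231, 102]
print(fingerprint_int_features("a_input_ids", ids_python) ==
      fingerprint_int_features("a_input_ids", ids_java))
```

Logging one digest per input tensor (`a_input_ids`, `a_input_mask`, `a_segment_ids`) on both sides narrows the search to either the feature code or the runtime in a single run.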
tensorflowtensorflow | tf.keras model parameters suddenly update to NaN during backpropagation when training | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: Windows 10
- TensorFlow installed from: binary
- TensorFlow version: 2.1.0
- Python version: 3.7.4
- CUDA/cuDNN version: 10.1 / 7.6.5
- GPU model and memory: NVIDIA 1080 Ti

**Describe the current behavior**
I am trying to train a small net similar to the PNet of MTCNN, and I wrote a custom loss to test whether it works. During training, after several epochs, the model weights and the loss become NaN. I tested the training procedure one epoch at a time and found that the last epoch that gives a normal loss (0.0814) also outputs a model with all parameters NaN. Thus I think that, while reporting a normal loss, the backward propagation goes wrong somewhere and gives the model a NaN update.

What I have done to rule out some other possibilities:
1. Checked and cleaned the data. My data set is: x — images of shape (12, 12, 3); y — label, box-regression coords and landmark-regression coords concatenated together, of shape (17,). The label can be 1, -1, 0 or -2, where only labels 1 and 0 participate in the custom loss I wrote myself. The ROI and landmark coords all lie in [-1, 1]. The image data is processed as (x - 127.5) / 128 before being sent into the training stream. I tried both a TFRecords dataflow and a NumPy array as the input for training.
2. Added BatchNormalization layers, added L2 norm to the weights, used Xavier initialization, and picked a small learning rate (from 0.001 down to 0.0001) to avoid problems like gradient explosion.
3. Replaced the custom loss I wrote myself with MSE.

None of these three changes fixed the NaN loss.

**Describe the expected behavior**
The training procedure should work well.
**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

```python
def pnet_train1(train_with_landmark=False):
    X = Input(shape=(12, 12, 3), name='pnet_input')
    M = Conv2D(10, 3, strides=1, padding='valid',
               kernel_initializer=glorot_normal, kernel_regularizer=l2(0.00001),
               name='pnet_conv1')(X)
    M = PReLU(shared_axes=[1, 2], name='pnet_prelu1')(M)
    M = MaxPooling2D(pool_size=2, name='pnet_maxpool1')(M)  # default pool size is 2
    M = Conv2D(16, 3, strides=1, padding='valid',
               kernel_initializer=glorot_normal, kernel_regularizer=l2(0.00001),
               name='pnet_conv2')(M)
    M = PReLU(shared_axes=[1, 2], name='pnet_prelu2')(M)
    M = Conv2D(32, 3, strides=1, padding='valid',
               kernel_initializer=glorot_normal, kernel_regularizer=l2(0.00001),
               name='pnet_conv3')(M)
    M = PReLU(shared_axes=[1, 2], name='pnet_prelu3')(M)

    classifier_conv = Conv2D(1, 1, activation='sigmoid',
                             name='pnet_classifier_conv',
                             kernel_initializer=glorot_normal)(M)
    bbox_regressor_conv = Conv2D(4, 1, name='pnet_bbox_regressor_conv',
                                 kernel_initializer=glorot_normal)(M)
    landmark_regressor_conv = Conv2D(12, 1, name='pnet_landmark_regressor_conv',
                                     kernel_initializer=glorot_normal)(M)
    classifier = Reshape((1,), name='pnet_classifier')(classifier_conv)
    bbox_regressor = Reshape((4,), name='pnet_bbox_regressor')(bbox_regressor_conv)

    if train_with_landmark:
        landmark_regressor = Reshape((12,), name='pnet_landmark_regressor')(landmark_regressor_conv)
        pnet_output = Concatenate()([classifier, bbox_regressor, landmark_regressor])
        model = Model(X, pnet_output)
    else:
        pnet_output = Concatenate()([classifier, bbox_regressor])
        model = Model(X, pnet_output)
    return model


def pnet_train2(train_with_landmark=False):
    X = Input(shape=(12, 12, 3), name='pnet_input')
    M = Conv2D(10, 3, strides=1, padding='valid', use_bias=False,
               kernel_initializer=glorot_normal, kernel_regularizer=l2(0.00001),
               name='pnet_conv1')(X)
    M = BatchNormalization(axis=-1, name='pnet_bn1')(M)
    M = PReLU(shared_axes=[1, 2], name='pnet_prelu1')(M)
    M = MaxPooling2D(pool_size=2, name='pnet_maxpool1')(M)  # default pool size is 2
    M = Conv2D(16, 3, strides=1, padding='valid', use_bias=False,
               kernel_initializer=glorot_normal, kernel_regularizer=l2(0.00001),
               name='pnet_conv2')(M)
    M = BatchNormalization(axis=-1, name='pnet_bn2')(M)
    M = PReLU(shared_axes=[1, 2], name='pnet_prelu2')(M)
    M = Conv2D(32, 3, strides=1, padding='valid', use_bias=False,
               kernel_initializer=glorot_normal, kernel_regularizer=l2(0.00001),
               name='pnet_conv3')(M)
    M = BatchNormalization(axis=-1, name='pnet_bn3')(M)
    M = PReLU(shared_axes=[1, 2], name='pnet_prelu3')(M)

    classifier_conv = Conv2D(1, 1, activation='sigmoid',
                             name='pnet_classifier_conv',
                             kernel_initializer=glorot_normal)(M)
    bbox_regressor_conv = Conv2D(4, 1, name='pnet_bbox_regressor_conv',
                                 kernel_initializer=glorot_normal)(M)
    landmark_regressor_conv = Conv2D(12, 1, name='pnet_landmark_regressor_conv',
                                     kernel_initializer=glorot_normal)(M)
    classifier = Reshape((1,), name='pnet_classifier')(classifier_conv)
    bbox_regressor = Reshape((4,), name='pnet_bbox_regressor')(bbox_regressor_conv)

    if train_with_landmark:
        landmark_regressor = Reshape((12,), name='pnet_landmark_regressor')(landmark_regressor_conv)
        pnet_output = Concatenate()([classifier, bbox_regressor, landmark_regressor])
        model = Model(X, pnet_output)
    else:
        pnet_output = Concatenate()([classifier, bbox_regressor])
        model = Model(X, pnet_output)
    return model


# Here we just check the first (classification) loss.
def custom_loss(y_true, y_pred):
    zeros_index = K.zeros_like(y_true[:, 0])
    ones_index = K.ones_like(y_true[:, 0])

    labels = y_true[:, 0]
    class_preds = y_pred[:, 0]
    bi_crossentropy_loss = -labels * K.log(class_preds) - (1 - labels) * K.log(1 - class_preds)

    classify_valid_index = tf.where(K.less(y_true[:, 0], 0), zeros_index, ones_index)
    classify_keep_num = K.cast(
        tf.cast(tf.reduce_sum(classify_valid_index), tf.float32) * 0.7,
        dtype=tf.int32)
    classify_loss_sum = bi_crossentropy_loss * tf.cast(classify_valid_index,
                                                       bi_crossentropy_loss.dtype)
    classify_loss_sum_filtered, _ = tf.nn.top_k(classify_loss_sum, k=classify_keep_num)
    classify_loss = tf.where(K.equal(classify_keep_num, 0),
                             tf.constant(0, dtype=tf.float32),
                             K.mean(classify_loss_sum_filtered))
    loss = classify_loss
    return loss
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
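One common source of exactly this failure mode — a normal loss value followed by an all-NaN update — is the `K.log(class_preds)` term in the custom loss above: once the sigmoid saturates to exactly 0 or 1, the log is infinite and the gradient poisons every weight, even though the batch-averaged loss on the previous step still printed as finite. A NumPy sketch of the problem and of the standard epsilon-clipping guard (this is an illustration, not the author's code):

```python
import numpy as np

def binary_crossentropy(labels, preds, eps=0.0):
    """Elementwise binary cross-entropy; eps > 0 clips preds away from {0, 1}."""
    p = np.clip(preds, eps, 1.0 - eps) if eps > 0 else preds
    return -labels * np.log(p) - (1.0 - labels) * np.log(1.0 - p)

labels = np.array([1.0, 0.0])
saturated = np.array([0.0, 1.0])  # a fully saturated sigmoid output

with np.errstate(divide="ignore", invalid="ignore"):
    raw = binary_crossentropy(labels, saturated)            # infinite loss terms
clipped = binary_crossentropy(labels, saturated, eps=1e-7)  # large but finite

print(np.isinf(raw).any(), np.isfinite(clipped).all())
```

In TF the same guard is `K.clip(class_preds, K.epsilon(), 1 - K.epsilon())` before the logs; wrapping tensors in `tf.debugging.check_numerics` is a quick way to find which op first produces the NaN.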
tensorflowtensorflow | Smart Reply AAR example for TensorFlow Lite won't build | Bug | I have not changed any code and followed the documented procedure. I tried to build on Windows and on Linux (16.04), and I get the following errors from Bazel.

Linux:

```
amsha@amsha-linux:/local/mnt/workspace/workspace/tensorflowliteexamples/smartreply/examples/lite/examples/smart_reply/android/app$ bazel build @lib//cc:smartreply_runtime_aar
WARNING: Output base '/usr2/amsha/.cache/bazel/_bazel_amsha/09e97388c3884f6fff92e89b26f572b5' is on NFS. This may lead to surprising failures and undetermined behavior.
WARNING: Download failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
ERROR: /local/mnt/workspace/workspace/tensorflowliteexamples/smartreply/examples/lite/examples/smart_reply/android/app/lib/cc/BUILD:163:1: //lib/cc:smartreply_runtime_aar_dummy_app_for_so: no such attribute 'aapt_version' in 'android_binary' rule
ERROR: error loading package 'lib/cc': Package 'lib/cc' contains errors
INFO: Elapsed time: 0.401s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
```

Windows:

```
WARNING: Download failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
ERROR: C:/workspace/bazelbuilds/smart_reply/android/app/lib/cc/BUILD:163:1: //lib/cc:smartreply_runtime_aar_dummy_app_for_so: no such attribute 'aapt_version' in 'android_binary' rule
ERROR: error loading package 'lib/cc': Package 'lib/cc' contains errors
INFO: Elapsed time: 88.009s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (1 package loaded)
``` |
tensorflowtensorflow | Distributed training causes CUDA OOM | Bug | **System information**
- Have I written custom code: yes
- OS Platform and Distribution: CentOS
- TensorFlow installed from: `pip3 install tensorflow-gpu==2.0.0`
- Python version: 3.6.8
- CUDA/cuDNN version: 10.0 / 7.6

**Describe the current behavior**
When starting the workers using v1 distributed training, we observe the following errors:

WARNING:tensorflow:From /usr/local/lib64/python3.6/site-packages/tensorflow_core/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version. Instructions for updating: Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts. W0409 18:55:45.554535 140176829970240 deprecation.py:323] From /usr/local/lib64/python3.6/site-packages/tensorflow_core/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version. Instructions for updating: Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts. 2020-04-09 18:55:45.563568: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Tesla P40 major: 6 minor: 1 memoryClockRate(GHz): 1.531 pciBusID: 0000:0d:00.0 2020-04-09 18:55:45.563627: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0 2020-04-09 18:55:45.563644: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0 2020-04-09 18:55:45.563658: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0 2020-04-09 18:55:45.563671: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0 2020-04-09 18:55:45.563687: I tensorflow/stream_executor/platform/default/dso_loader.cc:44]
successfully open dynamic library libcusolver so 10 0 2020 04 09 18 55 45 563702 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 0 2020 04 09 18 55 45 563716 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 04 09 18 55 45 568025 I tensorflow core common runtime gpu gpu device cc 1746 add visible gpu device 0 info tensorflow graph be finalize i0409 18 55 45 699084 140176829970240 monitored session py 240 graph be finalize 2020 04 09 18 55 47 161048 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 22 40 g 24056830208 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 162735 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 20 16 g 21651146752 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 164113 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 18 15 g 19486031872 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 165473 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 16 33 g 17537427456 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 166829 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 14 70 g 15783684096 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 168178 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 13 23 g 14205315072 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 169530 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 11 91 g 12784783360 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 170874 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 10 72 g 11506305024 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 172225 I tensorflow stream executor 
cuda cuda driver cc 830 fail to allocate 9 64 g 10355674112 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 173575 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 8 68 g 9320105984 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 174945 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 7 81 g 8388094976 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 176291 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 7 03 g 7549285376 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 177639 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 6 33 g 6794356736 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 178980 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 5 69 g 6114920960 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 180325 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 5 12 g 5503428608 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 181714 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 4 61 g 4953085440 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 183053 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 4 15 g 4457776640 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 184402 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 3 74 g 4011998976 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 185768 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 3 36 g 3610799104 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 187108 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 3 03 g 3249719040 byte from device cuda error out of memory out of memory 
2020 04 09 18 55 47 188455 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 2 72 g 2924747008 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 189805 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 2 45 g 2632272128 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 191145 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 2 21 g 2369044736 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 192492 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 1 99 g 2132140288 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 193864 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 1 79 g 1918926336 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 195203 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 1 61 g 1727033600 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 196724 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 1 45 g 1554330368 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 198186 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 1 30 g 1398897408 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 199649 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 1 17 g 1259007744 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 201138 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 1 05 g 1133106944 byte from device cuda error out of memory out of memory 2020 04 09 18 55 47 202527 I tensorflow stream executor cuda cuda driver cc 830 fail to allocate 972 55 m 1019796224 byte from device cuda error out of memory out of memory info tensorflow run local init op i0409 18 55 47 254083 140176829970240 session manager py 500 run local init op 
info tensorflow do run local init op i0409 18 55 47 262292 140176829970240 session manager py 502 do run local init op 2020 04 09 18 55 51 072444 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 0 2020 04 09 18 55 51 096914 e tensorflow stream executor cuda cuda blas cc 238 fail to create cubla handle cubla status not initialize 2020 04 09 18 55 51 110052 e tensorflow stream executor cuda cuda blas cc 238 fail to create cubla handle cubla status not initialize 2020 04 09 18 55 51 122872 e tensorflow stream executor cuda cuda blas cc 238 fail to create cubla handle cubla status not initialize 2020 04 09 18 55 51 135759 e tensorflow stream executor cuda cuda blas cc 238 fail to create cubla handle cubla status not initialize 2020 04 09 18 55 51 148557 e tensorflow stream executor cuda cuda blas cc 238 fail to create cubla handle cubla status not initialize 2020 04 09 18 55 51 161419 e tensorflow stream executor cuda cuda blas cc 238 fail to create cubla handle cubla status not initialize 2020 04 09 18 55 51 174027 e tensorflow stream executor cuda cuda blas cc 238 fail to create cubla handle cubla status not initialize 2020 04 09 18 55 51 187117 e tensorflow stream executor cuda cuda blas cc 238 fail to create cubla handle cubla status not initialize 2020 04 09 18 55 51 262033 e tensorflow stream executor cuda cuda blas cc 238 fail to create cubla handle cubla status not initialize 2020 04 09 18 55 51 262068 w tensorflow stream executor stream cc 1919 attempt to perform bla operation use streamexecutor without blas support traceback most recent call last file usr local lib64 python3 6 site package tensorflow core python client session py line 1365 in do call return fn args file usr local lib64 python3 6 site package tensorflow core python client session py line 1350 in run fn target list run metadata file usr local lib64 python3 6 site package tensorflow core python client session py line 1443 in call tf 
sessionrun run metadata tensorflow python framework error impl internalerror from job worker replica 0 task 0 bla gemm launch fail a shape 32 784 b shape 784 500 m 32 n 500 k 784 node dense matmul during handling of the above exception another exception occur traceback most recent call last file distribute mnist py line 123 in tf app run file usr local lib64 python3 6 site package tensorflow core python platform app py line 40 in run run main main argv argv flag parser parse flag tolerate undef file usr local lib python3 6 site package absl app py line 299 in run run main main args file usr local lib python3 6 site package absl app py line 250 in run main sys exit main argv file distribute mnist py line 116 in main ls step mon sess run train op loss global step file usr local lib64 python3 6 site package tensorflow core python training monitor session py line 756 in run run metadata run metadata file usr local lib64 python3 6 site package tensorflow core python training monitor session py line 1261 in run run metadata run metadata file usr local lib64 python3 6 site package tensorflow core python training monitor session py line 1362 in run raise six reraise original exc info file usr local lib python3 6 site package six py line 703 in reraise raise value file usr local lib64 python3 6 site package tensorflow core python training monitor session py line 1347 in run return self sess run args kwargs file usr local lib64 python3 6 site package tensorflow core python training monitor session py line 1420 in run run metadata run metadata file usr local lib64 python3 6 site package tensorflow core python training monitor session py line 1178 in run return self sess run args kwargs file usr local lib64 python3 6 site package tensorflow core python client session py line 956 in run run metadata ptr file usr local lib64 python3 6 site package tensorflow core python client session py line 1180 in run feed dict tensor option run metadata file usr local lib64 python3 6 site 
package tensorflow core python client session py line 1359 in do run run metadata file usr local lib64 python3 6 site package tensorflow core python client session py line 1384 in do call raise type e node def op message tensorflow python framework error impl internalerror from job worker replica 0 task 0 bla gemm launch fail a shape 32 784 b shape 784 500 m 32 n 500 k 784 node dense matmul define at usr local lib64 python3 6 site package tensorflow core python framework op py 1751 original stack trace for dense matmul file distribute mnist py line 123 in tf app run file usr local lib64 python3 6 site package tensorflow core python platform app py line 40 in run run main main argv argv flag parser parse flag tolerate undef file usr local lib python3 6 site package absl app py line 299 in run run main main args file usr local lib python3 6 site package absl app py line 250 in run main sys exit main argv file distribute mnist py line 82 in main logit model image file distribute mnist py line 28 in model net tf layer dense image 500 activation tf nn relu file usr local lib64 python3 6 site package tensorflow core python util deprecation py line 324 in new func return func args kwargs file usr local lib64 python3 6 site package tensorflow core python layers core py line 187 in dense return layer apply input file usr local lib64 python3 6 site package tensorflow core python util deprecation py line 324 in new func return func args kwargs file usr local lib64 python3 6 site package tensorflow core python keras engine base layer py line 1695 in apply return self call input args kwargs file usr local lib64 python3 6 site package tensorflow core python layers base py line 548 in call output super layer self call input args kwargs file usr local lib64 python3 6 site package tensorflow core python keras engine base layer py line 847 in call output call fn cast input args kwargs file usr local lib64 python3 6 site package tensorflow core python autograph impl api py line 234 
in wrapper
    return converted_call(f, options, args, kwargs)
  File "/usr/local/lib64/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py", line 439, in converted_call
    return _call_unconverted(f, args, kwargs, options)
  File "/usr/local/lib64/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py", line 330, in _call_unconverted
    return f(*args, **kwargs)
  File "/usr/local/lib64/python3.6/site-packages/tensorflow_core/python/keras/layers/core.py", line 1056, in call
    outputs = gen_math_ops.mat_mul(inputs, self.kernel)
  File "/usr/local/lib64/python3.6/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 6136, in mat_mul
    name=name)
  File "/usr/local/lib64/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 793, in _apply_op_helper
    op_def=op_def)
  File "/usr/local/lib64/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib64/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3360, in create_op
    attrs, op_def, compute_device)
  File "/usr/local/lib64/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3429, in _create_op_internal
    op_def=op_def)
  File "/usr/local/lib64/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1751, in __init__
    self._traceback = tf_stack.extract_stack()

We've tried with different task numbers but observed the same problem.

**Standalone code to reproduce the issue**

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
mnist = tf.keras.datasets.mnist

tf.app.flags.DEFINE_string("ps_hosts", "localhost:2222", "ps hosts")
tf.app.flags.DEFINE_string("worker_hosts", "localhost:2223,localhost:2224", "worker hosts")
tf.app.flags.DEFINE_string("job_name", "worker", "'ps' or 'worker'")
tf.app.flags.DEFINE_integer("task_index", 0, "Index of the task within the job")
tf.app.flags.DEFINE_integer("num_workers", 2, "Number of workers")
tf.app.flags.DEFINE_boolean("is_sync", False, "Use synchronous training or not")
FLAGS = tf.app.flags.FLAGS


def model(images):
    """Define a simple MNIST classifier."""
    net = tf.layers.dense(images, 500, activation=tf.nn.relu)
    net = tf.layers.dense(net, 500, activation=tf.nn.relu)
    net = tf.layers.dense(net, 10, activation=None)
    return net


def main(_):
    ps_hosts = FLAGS.ps_hosts.split(",")
    worker_hosts = FLAGS.worker_hosts.split(",")
    # Create the cluster configured by ps_hosts and worker_hosts.
    cluster = tf.train.ClusterSpec({"ps": ps_hosts, "worker": worker_hosts})
    # Create a server for the local task.
    server = tf.train.Server(cluster, job_name=FLAGS.job_name,
                             task_index=FLAGS.task_index)

    if FLAGS.job_name == "ps":
        server.join()  # ps hosts only join
    elif FLAGS.job_name == "worker":
        # Workers perform the operations.
        # ps_strategy = tf.contrib.training.GreedyLoadBalancingStrategy(FLAGS.num_ps)
        # Note: tf.train.replica_device_setter automatically places the parameter
        # variables on the ps hosts (default placement strategy: round-robin over
        # all ps hosts) and also places multiple copies of operations on each
        # worker host.
        with tf.device(tf.train.replica_device_setter(
                worker_device="/job:worker/task:%d" % FLAGS.task_index,
                cluster=cluster)):
            # Load the MNIST dataset.
            (x_train, y_train), (x_test, y_test) = mnist.load_data()
            x_train, x_test = x_train / 255.0, x_test / 255.0
            x_train = x_train.reshape(x_train.shape[0], -1)
            x_test = x_test.reshape(x_test.shape[0], -1)
            train_ds = tf.data.Dataset.from_tensor_slices(
                (x_train, y_train)).shuffle(10000).batch(32)
            train_iter = train_ds.make_initializable_iterator()
            train_init = train_iter.make_initializer(train_ds)
            test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
            print(x_train.shape)
            print(y_train.shape)

            # The model.
            images = tf.placeholder(tf.float32, [None, 784])
            labels = tf.placeholder(tf.int32, [None, 10])
            xs, ys = train_iter.get_next()
            xs = tf.cast(xs, dtype=tf.float32)
            ys = tf.cast(ys, dtype=tf.int32)
            images = xs
            labels = tf.one_hot(ys, 10)
            logits = model(images)
            loss = tf.reduce_mean(
                tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

            # The StopAtStepHook handles stopping after running given steps.
            hooks = [tf.train.StopAtStepHook(last_step=2000)]
            global_step = tf.train.get_or_create_global_step()
            optimizer = tf.train.AdamOptimizer(learning_rate=1e-04)
            if FLAGS.is_sync:
                # Synchronous training: use tf.train.SyncReplicasOptimizer to
                # wrap the optimizer.
                optimizer = tf.train.SyncReplicasOptimizer(
                    optimizer,
                    replicas_to_aggregate=FLAGS.num_workers,
                    total_num_replicas=FLAGS.num_workers)
                # Create the hook which handles initialization and queues.
                hooks.append(optimizer.make_session_run_hook(FLAGS.task_index == 0))
            train_op = optimizer.minimize(
                loss, global_step=global_step,
                aggregation_method=tf.AggregationMethod.ADD_N)

        # The MonitoredTrainingSession takes care of session initialization,
        # restoring from a checkpoint, saving to a checkpoint, and closing when
        # done or when an error occurs.
        with tf.train.MonitoredTrainingSession(master=server.target,
                                               is_chief=(FLAGS.task_index == 0),
                                               checkpoint_dir="checkpoint_dir",
                                               hooks=hooks) as mon_sess:
            mon_sess.run(train_init)
            while not mon_sess.should_stop():
                # mon_sess.run handles AbortedError in case of a preempted ps.
                _, ls, step = mon_sess.run([train_op, loss, global_step])
                if step % 100 == 0:
                    print("Train step %d, loss: %f" % (step, ls))


if __name__ == "__main__":
    tf.app.run()
```

The bash used to run this code (this is a one-worker, one-ps example; we've varied the number of tasks and observed similar behavior):

```bash
python3 distribute_mnist.py --ps_hosts=localhost:2222 --worker_hosts=localhost:2224 --job_name=ps --task_index=0 --num_workers=1
python3 distribute_mnist.py --ps_hosts=localhost:2222 --worker_hosts=localhost:2224 --job_name=worker --task_index=0 --num_workers=1
``` |
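A quick arithmetic check on the allocator log above (pure Python, sizes copied from the log): each failed request is retried at roughly 90% of the previous size, from 22.40G all the way down to ~972M. When even the smallest retry fails, the GPU was already fully mapped by another process — e.g. a ps and a worker sharing one card — rather than fragmented by the model itself.

```python
# First four "failed to allocate" sizes (bytes) from the log above.
sizes = [24_056_830_208, 21_651_146_752, 19_486_031_872, 17_537_427_456]

# Each retry should be ~0.9x the previous request.
ratios = [b / a for a, b in zip(sizes, sizes[1:])]
print(all(abs(r - 0.9) < 1e-3 for r in ratios))
```

Since TF1 sessions map nearly all GPU memory per process by default, limiting per-process memory (for example, a session config with `gpu_options.allow_growth = True` passed via the `config` argument of `MonitoredTrainingSession`, or running ps tasks with `CUDA_VISIBLE_DEVICES=""`) is the usual remedy worth trying here.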
tensorflowtensorflow | Windows 10 shuts down when using GPU (TF 2.1.0 with tf.keras, and TF 1.14 with Keras) | Bug | **System information**
- Have I written custom code: yes, custom code
- OS Platform and Distribution: Windows 10, 64-bit
- TensorFlow installed from: `pip install tensorflow`
- TensorFlow version: 2.1.0, but occurs on 1.14, 1.15 and 2.0 as well
- Python version: occurs on Python 3.5, 3.6 and 3.7
- CUDA/cuDNN version: I have CUDA 9.2, 10.0, 10.1 and 10.2 installed, with the appropriate cuDNN versions
- GPU model and memory: occurs on a new RTX 2080 Ti (11 GB) and my old GTX 1070 (8 GB)

**Describe the current behavior**
When using either tensorflow or tensorflow-gpu with two of my graphics cards (1070, 2080 Ti), my system shuts off completely — no warning, as if I unplugged the power — if I train a model with a high batch size (32-42), specifically using tf.keras's Conv2D and CuDNNLSTM/CuDNNGRU. It does not seem to have any problem training a fully connected model with batch sizes up to 512. Everything is also fine if I train my Conv2D model with a batch size of 16 or lower. If I use nvidia-smi to limit my GPU's power consumption from the stock 250 W to 150 W, it works fine but is very slow; however, when training with batch size 16 it goes above 200 W regularly and has no problem.

**Describe the expected behavior**
The system does not shut off, and should report an out-of-memory error if it runs out of memory.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf
import numpy as np
import random
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten
from tensorflow.keras import Sequential

num_samples = 500
h = 665
w = 814
c = 3
x = np.random.rand(num_samples, h, w, c)
y = [[0] * 4 for _ in range(num_samples)]
for i in range(len(y)):
    y[i][random.randint(0, 3)] = 1
y = np.array(y)

kernel_size = (3, 3)
model = Sequential()
model.add(Conv2D(12, kernel_size, strides=1, activation='relu', input_shape=x.shape[1:]))
model.add(Conv2D(24, kernel_size, strides=1, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(48, kernel_size, strides=1, activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Conv2D(64, kernel_size, strides=1, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(12, kernel_size, strides=1, activation='relu'))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(4, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x, y, batch_size=64, epochs=500)
```

**Other info / logs**
No errors are generated when it shuts down; it just cuts power immediately. Extra info: I've fully replaced my CPU, power supply, graphics card, motherboard and RAM, and this issue still occurs. The PSU was purchased about a month ago, brand new. The only things that are the same in the system since it last occurred are my PC case, storage (SSDs) and the Windows 10 install. It also never occurs in GPU and CPU stress tests, and neither the GPU nor the CPU overheats. When it shuts down, it's always just after the first epoch starts training, or just before. |
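A rough activation-memory estimate explains why the conv model above stresses the card at batch sizes around 32-64 while the fully connected models do not (this ignores cuDNN workspace and gradient buffers, which roughly double the figure): with 'valid' convolutions at stride 1 each side shrinks by k-1, and pooling divides by the pool size, so the forward activations alone for a 665x814x3 input come to over 100 MB per sample in float32.

```python
# Forward-activation size for the Sequential model above (float32 only).
def conv_valid(hw, k):
    return (hw[0] - k + 1, hw[1] - k + 1)

def pool(hw, p):
    return (hw[0] // p, hw[1] // p)

shape, floats = (665, 814), 0
for op, arg, ch in [("conv", 3, 12), ("conv", 3, 24), ("pool", 2, 24),
                    ("conv", 3, 48), ("pool", 3, 48), ("conv", 3, 64),
                    ("pool", 2, 64), ("conv", 3, 12)]:
    shape = conv_valid(shape, arg) if op == "conv" else pool(shape, arg)
    floats += shape[0] * shape[1] * ch

per_sample_mb = floats * 4 / 2**20
print(round(per_sample_mb), round(per_sample_mb * 64 / 1024, 1))  # MB/sample, GB at batch 64
```

At batch size 64 the activations alone approach the 8 GB of a GTX 1070, which is consistent with batch 16 training fine while 32+ drives the allocator — and the card's power draw — to its limits.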
tensorflowtensorflow | AttributeError: 'Tensor' object has no attribute '_in_graph_mode' | Bug | I am having an error: 'Tensor' object has no attribute '_in_graph_mode'. I've debugged the code and I think it's in this GradientTape function, but I don't know why. If anyone knows, please help.

**System information**
- TensorFlow version: 2.0 (2.2.0.dev20200407)
- OS Platform and Distribution: Linux Mint
- Python version: 3.7.4

**Describe the current behavior**
I am trying to minimize a function using `opt = tf.keras.optimizers.Adam`, and I am getting a TypeError when I call `opt.apply_gradients`.

**Standalone code to reproduce the issue**

```python
def explain(self, validation_data, model, class_index, layer_name=None,
            colormap=cv2.COLORMAP_VIRIDIS, image_weight=0.7, grid=True):
    # Returns a numpy.ndarray: grid of all the inverted images, or a 4D array
    # (batch_size, height, width, channels).
    images = validation_data
    if layer_name is None:
        layer_name = self.infer_target_layer(model)
    inverted_images = InvertedImage.get_optimized_image(images, model,
                                                        class_index, layer_name)
    if grid:
        return grid_display(inverted_images)
    else:
        return inverted_images

@staticmethod
def infer_target_layer(model):
    # Returns str: name of the target layer.
    for layer in reversed(model.layers):
        # Select the closest 4D layer to the end of the network.
        if len(layer.output_shape) == 4 and layer.name.count('conv') > 0:
            return layer.name
    raise ValueError("Model does not seem to contain a 4D layer. "
                     "Invert image cannot be applied.")

@tf.function
def get_optimized_image(images, model, class_index, layer_name):
    grad_model = tf.keras.Model(model.inputs, model.get_layer(layer_name).output)
    opt = tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9)
    dtype = model.get_layer(layer_name).output.dtype
    tensor_images = tf.convert_to_tensor(images)
    opt_img = tf.Variable(1e-1 * tf.random.normal((tensor_images.shape[0],
                                                   tensor_images.shape[1],
                                                   tensor_images.shape[2],
                                                   tensor_images.shape[3])),
                          trainable=True)
    steps = 50
    for i in range(steps):
        with tf.GradientTape() as tape:
            inverted_features = tf.cast(opt_img, dtype)
            content_features = tf.cast(images, dtype)
            conv_inverted_outputs = grad_model(inverted_features)
            conv_content_outputs = grad_model(content_features)
            loss = InvertedImage.get_loss(conv_content_outputs, conv_inverted_outputs,
                                          content_features, inverted_features)
            print("Initial loss: {:.3f}".format(loss))
        grads = tape.gradient(loss, [conv_inverted_outputs, conv_content_outputs])
        print(grads)
        processed_grads = [g for g in grads]
        opt.apply_gradients(zip(processed_grads,
                                [conv_inverted_outputs, conv_content_outputs]))
    return opt_img

# Loss function
def get_loss(conv_content_outputs, conv_inverted_outputs,
             content_features, inverted_features):
    euclidian = tf.norm(conv_content_outputs - conv_inverted_outputs,
                        ord='euclidean') / tf.norm(conv_content_outputs,
                                                   ord='euclidean')
    reg_alpha = 1e-7 * tf.math.reduce_sum(tf.norm(inverted_features, ord=6))
    total_variation = 1e-8 * tf.math.reduce_sum(
        tf.image.total_variation(content_features - inverted_features))
    return euclidian + reg_alpha + total_variation
```

**Traceback**

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/helena/.vscode/extensions/ms-python.python-2020.2.64397/pythonFiles/lib/python/new_ptvsd/wheels/ptvsd/__main__.py", line 45, in <module>
    cli.main()
  File "/home/helena/.vscode/extensions/ms-python.python-2020.2.64397/pythonFiles/lib/python/new_ptvsd/wheels/ptvsd/server/cli.py", line 361, in main
    run()
  File "/home/helena/.vscode/extensions/ms-python.python-2020.2.64397/pythonFiles/lib/python/new_ptvsd/wheels/ptvsd/server/cli.py", line 203, in run_file
    runpy.run_path(options.target, run_name="__main__")
  File "/usr/local/lib/python3.7/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/usr/local/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/helena/Documents/lar_celesc/lar-computer_vision/objdet_api/test_invert_image.py", line 20, in <module>
    data, model, class_index=tabby_cat_class_index, layer_name="block5_conv3")
  File "/home/helena/Documents/lar_celesc/lar-computer_vision/objdet_api/tf_explain/core/invert_image.py
```
line 54 in explain image model class index layer name file home helena document lar celesc larenv lib python3 7 site package tensorflow core python eager def function py line 568 in call result self call args kwd file home helena document lar celesc larenv lib python3 7 site package tensorflow core python eager def function py line 615 in call self initialize args kwd add initializer to initializer file home helena document lar celesc larenv lib python3 7 site package tensorflow core python eager def function py line 497 in initialize args kwd file home helena document lar celesc larenv lib python3 7 site package tensorflow core python eager function py line 2389 in get concrete function internal garbage collect graph function self maybe define function args kwargs file home helena document lar celesc larenv lib python3 7 site package tensorflow core python eager function py line 2703 in maybe define function graph function self create graph function args kwargs file home helena document lar celesc larenv lib python3 7 site package tensorflow core python eager function py line 2593 in create graph function capture by value self capture by value file home helena document lar celesc larenv lib python3 7 site package tensorflow core python framework func graph py line 978 in func graph from py func func output python func func args func kwargs file home helena document lar celesc larenv lib python3 7 site package tensorflow core python eager def function py line 439 in wrap fn return weak wrap fn wrap args kwd file home helena document lar celesc larenv lib python3 7 site package tensorflow core python framework func graph py line 968 in wrapper raise e ag error metadata to exception e attributeerror in convert code home helena document lar celesc lar computer vision objdet api tf explain core invert image py 125 get optimize image opt apply gradient grad and var home helena document lar celesc larenv lib python3 7 site package tensorflow core python keras optimizer 
v2 optimizer v2 py 434 apply gradient self create slot var list home helena document lar celesc larenv lib python3 7 site package tensorflow core python keras optimizer v2 gradient descent py 100 create slot self add slot var momentum home helena document lar celesc larenv lib python3 7 site package tensorflow core python keras optimizer v2 optimizer v2 py 574 add slot var key var key var home helena document lar celesc larenv lib python3 7 site package tensorflow core python keras optimizer v2 optimizer v2 py 1065 var key if var in graph mode attributeerror tensor object have no attribute in graph mode |
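The traceback ends in `_var_key`, which hints at the usual cause of this error: `apply_gradients` can only update `tf.Variable` objects, but the reproduction passes intermediate Tensors (the conv outputs) as the "variables". A minimal sketch of the fix, with a stand-in loss (names here are illustrative, not the reporter's code):

```python
import tensorflow as tf

# Optimizers update tf.Variable objects only; differentiate the loss with
# respect to the Variable being optimized and pair each gradient with it.
opt = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)
opt_img = tf.Variable(tf.random.normal([1, 8, 8, 3]), trainable=True)

with tf.GradientTape() as tape:
    loss = tf.reduce_sum(tf.square(opt_img))  # stand-in for get_loss(...)

grads = tape.gradient(loss, [opt_img])
# Passing plain Tensors here raises:
# AttributeError: 'Tensor' object has no attribute '_in_graph_mode'
opt.apply_gradients(zip(grads, [opt_img]))
```

In the reported code this would mean `grads = tape.gradient(loss, [opt_img])` followed by `opt.apply_gradients(zip(grads, [opt_img]))`, rather than applying gradients to the conv output Tensors.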
tensorflowtensorflow | TFLite number of detections | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.

**System information**
- OS Platform and Distribution: Linux Ubuntu 16.04
- TensorFlow installed from: binary
- TensorFlow version: 1.14
- Python version: 3.6
- CUDA/cuDNN version: 10.0

**Describe the current behavior**
Hi, I've trained a model to produce only one detection and have converted it to TFLite. I used this command to freeze the model:

```shell
python export_tflite_ssd_graph.py \
  --pipeline_config_path=pipeline.config \
  --trained_checkpoint_prefix=plate_detector_training/model.ckpt-33539 \
  --output_directory=tflite \
  --add_postprocessing_op=true \
  --max_detections=1
```

I have also changed `NUM_DETECTIONS` to 1 in `TFLiteObjectDetectionAPIModel`, but I still get this error:

```
Cannot copy between a TensorFlowLite tensor with shape [1, 10, 4] and a Java object with shape [1, 1, 4].
```

Any help?
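The shape in the error message suggests the converted model still emits 10 detections regardless of the requested maximum, so the Java-side buffers sized for one detection no longer match. A hedged sketch of how to check this from Python and size the host buffers accordingly (a tiny stand-in model is converted here, since the reporter's model isn't available):

```python
import tensorflow as tf

# Convert a small stand-in Keras model just to demonstrate the inspection.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(10,))])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Ask the interpreter for the model's *actual* output shapes; the Java (or
# any host) buffers must be allocated with exactly these shapes.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
output_shapes = [d["shape"].tolist() for d in interpreter.get_output_details()]
print(output_shapes)
```

For the reported model this inspection would presumably show `[1, 10, 4]` for the boxes output, i.e. the postprocessing op's detection count, which is what the Java array shapes need to match.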
tensorflowtensorflow | MixedPrecision: DynamicLossScale should accept loss scales smaller than one | Bug | At line 391:

```python
new_loss_scale = math_ops.maximum(
    self._current_loss_scale / self._multiplier, 1)
```

The clamp to `self._current_loss_scale >= 1` is unnecessary and wrong in some use cases. The loss can overflow to inf in float16, so a loss scale smaller than 1 is necessary for mixed precision training.
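The update rule being criticized can be sketched in a few lines of plain Python (this is a simplified model of the behavior, not TensorFlow's actual implementation): on overflow the scale is divided by the multiplier, and the `maximum(..., 1)` clamp keeps it from ever going below one, which is exactly what this report argues against.

```python
def update_scale(scale, overflow, multiplier=2.0, clamp_at_one=True):
    """Simplified dynamic loss-scale update.

    On overflow, shrink the scale; otherwise grow it. With clamp_at_one=True
    (the behavior under discussion) the scale can never drop below 1, so a
    loss that overflows float16 even unscaled cannot be scaled down.
    """
    if overflow:
        new_scale = scale / multiplier
        return max(new_scale, 1.0) if clamp_at_one else new_scale
    return scale * multiplier

print(update_scale(1.0, overflow=True))                      # clamped: 1.0
print(update_scale(1.0, overflow=True, clamp_at_one=False))  # 0.5
```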
tensorflowtensorflow | RuntimeError: Can't copy Tensor with type string to device | Bug | **System information**
- OS Platform and Distribution: Ubuntu 18.04.2 LTS
- TensorFlow installed from: binary
- TensorFlow version: 2.1.0
- Python version: 3.7.7
- CUDA/cuDNN version: CUDA 10.2
- GPU model and memory: Tesla V100-SXM2, 32 GB

**Describe the current behavior**
`RuntimeError: Can't copy Tensor with type string to device /job:localhost/replica:0/task:0/device:GPU:0.`

**Describe the expected behavior**
Runs perfectly on CPU.

**Standalone code to reproduce the issue** (on a GPU):

```python
import tensorflow_hub as hub

embed = hub.load(...)
with tf.device("GPU:0"):
    embeddings = embed(["The quick brown fox jumps over the lazy dog"])
    print(embeddings)
```

**Other info / logs**

```
Traceback (most recent call last):
  File "create_sentence_embeddings.py", line 25, in <module>
    temp = embed(elem)
  File "/home/kasaxen/anaconda3/envs/qaenv/lib/python3.7/site-packages/tensorflow_core/python/saved_model/load.py", line 438, in _call_attribute
    return instance.__call__(*args, **kwargs)
  File "/home/kasaxen/anaconda3/envs/qaenv/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 568, in __call__
    result = self._call(*args, **kwds)
  File "/home/kasaxen/anaconda3/envs/qaenv/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 636, in _call
    *args, **kwds)
  File "/home/kasaxen/anaconda3/envs/qaenv/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2185, in _canonicalize_function_inputs
    self._flat_input_signature)
  File "/home/kasaxen/anaconda3/envs/qaenv/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 2240, in _convert_inputs_to_signature
    ops.convert_to_tensor(value, dtype_hint=spec.dtype)
  File "/home/kasaxen/anaconda3/envs/qaenv/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 1302, in convert_to_tensor
    value, dtype, preferred_dtype, name=name, as_ref=as_ref)
  File "/home/kasaxen/anaconda3/envs/qaenv/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 317, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/kasaxen/anaconda3/envs/qaenv/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 258, in constant
    allow_broadcast=True)
  File "/home/kasaxen/anaconda3/envs/qaenv/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 266, in _constant_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/home/kasaxen/anaconda3/envs/qaenv/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
RuntimeError: Can't copy Tensor with type string to device /job:localhost/replica:0/task:0/device:GPU:0.
```
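The underlying constraint is that `tf.string` tensors have no GPU kernel, so the string-consuming call has to stay on the CPU while the rest of the computation can run on the GPU. A hedged sketch of the pattern, using a string lookup table as a stand-in for the hub model (the `lookup` helper and the table contents are illustrative, not the reporter's code):

```python
import tensorflow as tf

# Stand-in for a string-consuming model such as the hub embedding.
table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(["fox", "dog"], [0, 1]),
    default_value=-1)

def lookup(words):
    # Pin the string op to the CPU; downstream float math can still be
    # placed on the GPU by the runtime.
    with tf.device("/CPU:0"):
        return table.lookup(tf.constant(words))

print(lookup(["fox", "cat"]).numpy())
```

Under this reading, wrapping only the `embed([...])` call in `tf.device("/CPU:0")` (instead of `GPU:0`) would presumably avoid the string-to-GPU copy.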
tensorflowtensorflow | New project inquiry | Bug | Hello, I am totally new at programming and want some advice. I want to create a program using the Google Maps API. Is Python the adequate programming language to use? Thanks in advance for your answer.
tensorflowtensorflow | Loss not changing for an adversarial example | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution: Colab

**Describe the current behavior**
I am trying to port some of the examples from this NIPS tutorial to TensorFlow 2.x. I have been able to port some of them from chapter 1; however, when creating the perturbation vector, the loss wouldn't change, and I am unable to figure out why.

**Standalone code to reproduce the issue**
Colab notebook.
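A common reason the loss never changes when crafting perturbations in TF 2.x is that `GradientTape` only records operations on watched tensors: a plain input Tensor is not watched, so its gradient comes back `None` and the perturbation step is a no-op. A hedged sketch of the usual fix with a tiny stand-in model (the model, shapes, and step size are illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2)])  # stand-in classifier
x = tf.random.normal([1, 4])
y = tf.constant([0])

with tf.GradientTape() as tape:
    tape.watch(x)  # without this (or a tf.Variable), gradient(loss, x) is None
    logits = model(x)
    loss = tf.keras.losses.sparse_categorical_crossentropy(
        y, logits, from_logits=True)

grad = tape.gradient(loss, x)
x_adv = x + 0.1 * tf.sign(grad)  # one FGSM-style perturbation step
```

The alternative is to make the perturbation itself a `tf.Variable`, which the tape watches automatically.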
tensorflowtensorflow | Keras mismatches outputs and targets | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution: macOS
- TensorFlow version: v2.0.0-rc2-26-g64c3d382ca 2.0.0
- Python version: 3.6.9

**Describe the current behavior**
We build a Keras model with outputs like `[task_a_logits, task_a_probs, task_b_logits, task_b_probs]`. We compile our model by passing a loss dict like `{"task_a_logits": ..., "task_b_logits": ...}`. Note that no loss is applied on the probabilities; all losses are applied on the logits. We build a dataset that yields tuples like `(x, {"task_a_logits": ..., "task_b_logits": ...})`, i.e. to Keras `fit` we pass a dataset that yields targets for each task, also in a dict.

This worked without eager execution, but using TF 2.0's defaults Keras gets confused because there are fewer targets and losses than outputs. This patch fixes the issue. With the original code and the example above, `loss_fns` and `targets` have two elements while `outs` has four; they get zipped together and then shapes mismatch: `logits and labels must be broadcastable: logits_size=[18,4] labels_size=[18,3]`.

**Describe the expected behavior**
Keras matches outputs and targets correctly.

**Standalone code to reproduce the issue**
I provide a patch that points to the bug in the Keras source code.
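The setup being described can be sketched as follows, a minimal two-output model where a loss is attached only to the logits output via a dict keyed by output name (names, sizes, and data are illustrative; this is the pattern the report says breaks under TF 2.0's eager defaults, and it is expected to work in later releases):

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(4,))
logits = tf.keras.layers.Dense(3, name="task_a_logits")(inp)
probs = tf.keras.layers.Softmax(name="task_a_probs")(logits)
model = tf.keras.Model(inp, [logits, probs])

# Loss dict keyed by output name; no loss on the probability output.
model.compile(
    optimizer="sgd",
    loss={"task_a_logits": tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True)})

x = tf.random.normal([8, 4])
# Targets are likewise a dict containing only the outputs that have a loss.
y = {"task_a_logits": tf.random.uniform([8], maxval=3, dtype=tf.int32)}
hist = model.fit(x, y, epochs=1, verbose=0)
```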
tensorflowtensorflow | Cannot convert LSTM with stateful=True to TFLite | Bug | **System information**
- OS Platform and Distribution: Windows 7
- TensorFlow installed from: binary
- TensorFlow version (or GitHub SHA if from source): 2.2.0-rc0

**Command used to run the converter or code if you're using the Python API:**

```python
import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.Input(shape=(None, 32), batch_size=1, name="input"))
model.add(tf.keras.layers.LSTM(256, return_sequences=True, stateful=True))
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="binary_crossentropy", metrics=["acc"])
print(model.inputs)
print(model.summary())
print(tf.__version__)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
converter.experimental_new_converter = True
tflite_model = converter.convert()
```

**Output:**

```
Tensor("input_12:0", shape=(1, None, 32), dtype=float32)
Model: "sequential_12"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_15 (LSTM)               (1, None, 256)            295936
=================================================================
Total params: 295,936
Trainable params: 295,936
Non-trainable params: 0
_________________________________________________________________
None
2.2.0-rc0
```

```
InvalidArgumentError                      Traceback (most recent call last)
C:\Program Files\Python37\lib\site-packages\tensorflow\python\framework\importer.py in _import_graph_def_internal(graph_def, input_map, return_elements, validate_colocation_constraints, name, producer_op_list)
    496         results = c_api.TF_GraphImportGraphDefWithResults(
--> 497             graph._c_graph, serialized, options)  # pylint: disable=protected-access
    498         results = c_api_util.ScopedTFImportGraphDefResults(results)

InvalidArgumentError: Input 0 of node sequential_13/lstm_16/AssignVariableOp was passed float from sequential_13/lstm_16/23648:0 incompatible with expected resource.

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
     10 converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
     11 converter.experimental_new_converter = True
---> 12 tflite_model = converter.convert()

  File "C:\Program Files\Python37\lib\site-packages\tensorflow\lite\python\lite.py", line 464, in convert
    self._funcs[0], lower_control_flow=False)
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\framework\convert_to_constants.py", line 707, in convert_variables_to_constants_v2_as_graph
    frozen_func = _construct_concrete_function(func, graph_def, converted_inputs)
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\framework\convert_to_constants.py", line 405, in _construct_concrete_function
    new_func = wrap_function.function_from_graph_def(output_graph_def, new_input_names, new_output_names)
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\eager\wrap_function.py", line 633, in function_from_graph_def
    wrapped_import = wrap_function(_imports_graph_def, [])
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\eager\wrap_function.py", line 611, in wrap_function
    collections={}),
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\framework\func_graph.py", line 981, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\eager\wrap_function.py", line 86, in __call__
    return self.call_with_variable_creator_scope(self._fn)(*args, **kwargs)
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\eager\wrap_function.py", line 92, in wrapped
    return fn(*args, **kwargs)
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\eager\wrap_function.py", line 631, in _imports_graph_def
    importer.import_graph_def(graph_def, name="")
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\framework\importer.py", line 405, in import_graph_def
    producer_op_list=producer_op_list)
  File "C:\Program Files\Python37\lib\site-packages\tensorflow\python\framework\importer.py", line 501, in _import_graph_def_internal
    raise ValueError(str(e))  # Convert to ValueError for backwards compatibility.

ValueError: Input 0 of node sequential_13/lstm_16/AssignVariableOp was passed float from sequential_13/lstm_16/23648:0 incompatible with expected resource.
```

**Failure details:** When I try to convert an LSTM model with `stateful=False`, it converts successfully, but when I change `stateful` to `True` the conversion fails. I need an LSTM with `stateful=True` — how can I do this?
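One possible workaround (a sketch, not an officially documented recipe): instead of `stateful=True`, build a stateless model that takes the LSTM state as explicit inputs and returns it as explicit outputs, then carry the state across invocations yourself. The resulting model has no internal state variables, which is what the converter is tripping over.

```python
import tensorflow as tf

# Stateless equivalent of the stateful LSTM: state in, state out.
x = tf.keras.Input(shape=(None, 32), batch_size=1)
h_in = tf.keras.Input(shape=(256,), batch_size=1)
c_in = tf.keras.Input(shape=(256,), batch_size=1)
y, h_out, c_out = tf.keras.layers.LSTM(
    256, return_sequences=True, return_state=True)(
        x, initial_state=[h_in, c_in])
model = tf.keras.Model([x, h_in, c_in], [y, h_out, c_out])

# One step; feed h and c back in on the next call to emulate statefulness.
out, h, c = model([tf.ones([1, 4, 32]),
                   tf.zeros([1, 256]),
                   tf.zeros([1, 256])])
```

This model can then be passed to `tf.lite.TFLiteConverter.from_keras_model` as usual, with the caller responsible for threading `h` and `c` between invocations of the TFLite interpreter.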
tensorflowtensorflow | tf.keras custom layers do not use compute_output_shape | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution: Ubuntu 18.04
- Mobile device: N/A
- TensorFlow installed from: pip
- TensorFlow version: v2.2.0-rc1-34-ge6e5d6df2a 2.2.0-rc2
- Bazel version: N/A
- GCC/Compiler version: N/A
- CUDA/cuDNN version: driver version 440.33.01, CUDA version 10.2
- GPU model and memory: GTX 1080 Ti, 11162 MiB RAM

**Describe the current behavior**
When implementing a custom layer where the output shape is not computable directly from the code within the `call` method, the `compute_output_shape` function is not used to determine the output shape. This causes issues when passing to a layer which must know some of the shape, e.g. a `Conv*D` layer, which needs to know the number of channels ahead of time.

**Describe the expected behavior**
The `compute_output_shape` function is used to determine the output shape.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf

class Spectrogram(tf.keras.layers.Layer):
    def __init__(self, num_freqs, max_freq, **kwargs):
        super(Spectrogram, self).__init__(**kwargs)
        self.num_freqs = num_freqs
        self.max_freq = max_freq
        self.input_spec = [tf.keras.layers.InputSpec(ndim=2),
                           tf.keras.layers.InputSpec(ndim=2)]

    def call(self, x_fs):
        x, fs = x_fs
        nfft = tf.cast(fs[0][0] * (self.num_freqs - 1) / self.max_freq,
                       tf.int32)
        y = tf.signal.stft(x, nfft, 256, nfft, pad_end=True)
        y = tf.sqrt(tf.abs(y))[..., :self.num_freqs]
        return y

    def compute_output_shape(self, input_shape):
        return (input_shape[0], None, self.num_freqs)

signal = tf.keras.layers.Input(shape=(None,))
fs = tf.keras.layers.Input(shape=(1,))
x = Spectrogram(257, 10000)([signal, fs])
y = tf.keras.layers.Conv1D(16, 3)(x)
model = tf.keras.Model([signal, fs], y)
model.summary()
```

**Logs**

```
Traceback (most recent call last):
  File "chorus_repro.py", line 32, in <module>
    y = tf.keras.layers.Conv1D(16, 3)(x)
  File "/home/kevin/.pyenv/versions/chorus/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 897, in __call__
    self._maybe_build(inputs)
  File "/home/kevin/.pyenv/versions/chorus/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 2416, in _maybe_build
    self.build(input_shapes)  # pylint:disable=not-callable
  File "/home/kevin/.pyenv/versions/chorus/lib/python3.7/site-packages/tensorflow/python/keras/layers/convolutional.py", line 153, in build
    input_channel = self._get_input_channel(input_shape)
  File "/home/kevin/.pyenv/versions/chorus/lib/python3.7/site-packages/tensorflow/python/keras/layers/convolutional.py", line 293, in _get_input_channel
    raise ValueError('The channel dimension of the inputs '
ValueError: The channel dimension of the inputs should be defined. Found `None`.
```

**Other info:** This was originally opened in #19961 but was unfortunately closed as "not a bug"; I'm pretty sure this is not expected behavior according to the Keras docs.
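One workaround worth sketching (an assumption on my part, not a fix from this thread): since Keras infers shapes by tracing `call` rather than consulting `compute_output_shape`, the layer can pin the statically known channel dimension onto its output tensor with `set_shape` inside `call`. A simplified single-input variant of the reporter's layer, with a fixed `frame_length` so the example stays self-contained:

```python
import tensorflow as tf

class Spectrogram(tf.keras.layers.Layer):
    """Simplified sketch: fixed frame length, one input, known channel dim."""

    def __init__(self, num_freqs, **kwargs):
        super().__init__(**kwargs)
        self.num_freqs = num_freqs

    def call(self, x):
        # frame_length = 2 * (num_freqs - 1) yields num_freqs STFT bins.
        y = tf.abs(tf.signal.stft(x,
                                  frame_length=2 * (self.num_freqs - 1),
                                  frame_step=256, pad_end=True))
        # Pin the channel dimension so downstream layers can build.
        y.set_shape([None, None, self.num_freqs])
        return y

sig = tf.keras.Input(shape=(None,))
spec = Spectrogram(257)(sig)
out = tf.keras.layers.Conv1D(16, 3)(spec)  # now sees 257 input channels
model = tf.keras.Model(sig, out)
```

In the reported two-input case, where `nfft` depends on a runtime tensor, the same `set_shape([None, None, self.num_freqs])` call at the end of `call` would presumably supply the channel count that `Conv1D` needs.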
tensorflowtensorflow | TensorFlow 1.13: adding an Op which uses MKL-DNN cannot build successfully | Bug | - Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes, in `tensorflow/core/user_ops` (AddOp)
- OS Platform and Distribution: Windows 10
- TensorFlow installed from: source
- TensorFlow version: 1.13
- Python version: 3.6
- Bazel version: 0.20.0
- CUDA/cuDNN version: none
- GPU model and memory: none

I just added a matmul Op which uses a function in MKL-DNN, and I can build successfully with:

```shell
bazel build -c opt --copt=-msse4.1 --copt=-msse4.2 //tensorflow:libtensorflow.so
```

But when I build the target with:

```shell
bazel build -c opt --copt=-msse4.1 --copt=-msse4.2 //tensorflow/tools/pip_package:build_pip_package
```

I get an error:

```
ERROR: D:/tf_install/tensorflow_addop/tensorflow/python/BUILD:4057:1: in cmd attribute of genrule rule
//tensorflow/python:gen_pywrap_tensorflow_internal_pyd: variable: more than one input file.
Since this rule was created by the macro 'tf_py_wrap_cc', the error might have been caused by the macro
implementation in D:/tf_install/tensorflow_addop/tensorflow/tensorflow.bzl:1704:15
ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted:
Analysis of target '//tensorflow/python:gen_pywrap_tensorflow_internal_pyd' failed; build aborted
```

I just added some code in `tensorflow/python/user_ops/user_ops.py`:

```python
@tf_export(v1=["user_ops.own_dnnl_mul"])
def own_dnnl_mul(arg1, arg2):
  """Example of overriding the generated code for an Op."""
  return gen_user_ops.own_dnnl_mul(arg1, arg2)
```
tensorflowtensorflow | Unsupported full-integer TensorFlow Lite models in TF 2 | Bug | **Describe the issue**
In TF 2, full-integer quantized models produced by the TFLite converter can only have float input and output types. This is a blocker for users who require int8 or uint8 input and/or output types.

**Update:** we now support this workflow end to end (see the integer-only tutorial).

**TFLite conversion only** (convert a TF model to a TFLite full-integer model). You can refer to the integer-only code, also given below:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset_gen():
  for _ in range(num_calibration_steps):
    # Get sample input data as a numpy array in a method of your choosing.
    yield [input]

converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8
tflite_model = converter.convert()
```

**TFLite inference only** (run inference on the TFLite model). Note the one caveat with integer-only models: you need to manually map (i.e. quantize) the float input to an integer input during inference. To understand how this can be done, refer to the equations provided in the TensorFlow Lite 8-bit quantization specification document and their equivalent code in Python below:

```python
import numpy as np
import tensorflow as tf

# Input to the TF model is float values in the range [0, 10), of size (1, 100).
np.random.seed(0)
tf_input = np.random.uniform(low=0, high=10, size=(1, 100)).astype(np.float32)

# Output of the TF model.
tf_output = keras_model.predict(tf_input)

# Output of the TFLite model.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]

# Manually quantize the input from float to integer.
scale, zero_point = input_details["quantization"]
tflite_integer_input = tf_input / scale + zero_point
tflite_integer_input = tflite_integer_input.astype(input_details["dtype"])
interpreter.set_tensor(input_details["index"], tflite_integer_input)
interpreter.invoke()
output_details = interpreter.get_output_details()[0]
tflite_integer_output = interpreter.get_tensor(output_details["index"])

# Manually dequantize the output from integer to float.
scale, zero_point = output_details["quantization"]
tflite_output = tflite_integer_output.astype(np.float32)
tflite_output = (tflite_output - zero_point) * scale

# Verify that the TFLite model's output is approximately the same as the TF
# model's output (some loss in accuracy due to quantization).
assert np.allclose(tflite_output, tf_output, atol=1e-04) == True
```
tensorflowtensorflow | MutableGraphView::SortTopologically error with tf.nn.conv2d in custom RNN cell | Bug | **System information**
- Have I written custom code: Yes
- OS Platform and Distribution: Ubuntu 18.04 LTS
- TensorFlow installed from: binary (`conda install tensorflow-gpu=2.1`)
- TensorFlow version: 2.1.0
- Python version: 3.7.7
- CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6
- GPU model and memory: GeForce GTX 1080, 8 GB

**Describe the current behavior**
Calling `predict` on a model containing a custom RNN layer whose cell calls `tf.nn.conv2d` results in the following error being printed to the console:

```
2020-04-06 12:40:50.843591: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:561] layout failed: Invalid argument: MutableGraphView::SortTopologically error: detected edge(s) creating cycle(s): 'func/sequential/rnn/while/body/_1/input/_55' -> 'sequential/rnn/while/body/_1/add'.
```

The prediction proceeds and appears to be correct, but has poor performance. This does not occur if the `tf.nn.conv2d` is replaced with other similar operations, `tf.nn.conv1d` for example.

**Describe the expected behavior**
The expected behavior is for no error to be produced.

**Standalone code to reproduce the issue**
The following code produces the error:

```python
import tensorflow as tf

class CustomCell(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        self.state_size = tf.TensorShape((16, 16, 1))
        super(CustomCell, self).__init__(**kwargs)

    def call(self, inputs, states, **kwargs):
        outputs = states[0] + tf.nn.conv2d(inputs, tf.ones((3, 3, 1, 1)),
                                           (1, 1), "SAME")
        new_states = (outputs,)
        return outputs, new_states

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.RNN(CustomCell(),
                              batch_input_shape=(1, 1, 16, 16, 1)))

# Error here
model.predict(tf.ones(model.input_shape))
```

If the `conv2d` is replaced with `conv1d`, however, no error occurs:

```python
import tensorflow as tf

class CustomCell(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        self.state_size = tf.TensorShape((16, 1))
        super(CustomCell, self).__init__(**kwargs)

    def call(self, inputs, states, **kwargs):
        outputs = states[0] + tf.nn.conv1d(inputs, tf.ones((3, 1, 1)),
                                           1, "SAME")
        new_states = (outputs,)
        return outputs, new_states

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.RNN(CustomCell(),
                              batch_input_shape=(1, 1, 16, 1)))

# No error
model.predict(tf.ones(model.input_shape))
```
tensorflowtensorflow | TF 2.2.0-rc2 tf.keras API: model.loss_functions no longer exists | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution: Ubuntu 18.04
- TensorFlow installed from: binary, v2.2.0-rc1-34-ge6e5d6df2a 2.2.0-rc2
- CUDA/cuDNN version: CUDA 10.1

**Describe the current behavior**
In the tf.keras API in TensorFlow 2.2.0, `model.loss_functions` no longer exists. Instead, the loss functions are buried and seem to be only accessible with code like `model.compiled_loss._get_loss_object(model.compiled_loss._losses)`. Being able to easily access the loss functions associated with a model is important for many applications that are built on TensorFlow. Moreover, this is an undocumented breaking change.

**Describe the expected behavior**
The loss functions associated with a model should be more directly and easily accessible, like `model.loss_functions`, which is the way loss functions were accessed in TensorFlow 2.1.0.

**Standalone code to reproduce the issue**

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(10, input_dim=10, activation="softmax"))
model.compile(optimizer="sgd", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Works in TF 2.2.0-rc2, but does not work in previous versions like 2.1.0:
model.compiled_loss._get_loss_object(model.compiled_loss._losses)

# Does not work in TF 2.2.0-rc2, but works in TF 2.1.0:
model.loss_functions[0].fn
```
tensorflowtensorflow | Train a simple audio recognition model | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

**Describe the current behavior**

**Describe the expected behavior**

**Standalone code to reproduce the issue**

**Other info / logs**
tensorflowtensorflow | Import error in tensorflow_datasets | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution: macOS Catalina
- Mobile device: N/A
- TensorFlow installed from: binary (`pip install tensorflow`)

**Describe the current behavior**
`ModuleNotFoundError: No module named 'tensorflow.compat'` — I encountered this error after trying to import tensorflow_datasets.

**Describe the expected behavior**
No import error.

**Standalone code to reproduce the issue**

```python
import tensorflow_datasets as tfds
```
tensorflowtensorflow | Problem compiling with TensorFlow | Bug | I have a problem importing TensorFlow in a Jupyter notebook. This is the error message:

```
Traceback (most recent call last):
  File "C:\Users\home\miniconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3331, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-1>", line 1, in <module>
    import tensorflow as tf
  File "C:\Users\home\miniconda3\lib\site-packages\tensorflow\__init__.py", line 22, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "C:\Users\home\miniconda3\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "C:\Users\home\miniconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Users\home\miniconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 114
    def TFE_ContextOptionsSetAsync(arg1, async):
SyntaxError: invalid syntax
```

Help please! <3
tensorflowtensorflow | in nmt with attention, the GRU in the decoder is not connected: the last step's state is not passed to this step | Bug | The decoder is trained step by step, and it does not pass the last step's state to this step; it passes the concatenated vector to the GRU: `output, state = self.gru(x)`. Is this a feature or a bug? I checked a lot of NMT-with-attention papers; unlike the documentation, those decoders are connected. Thanks in advance.
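For contrast, a decoder that is "connected" threads the state returned at step t into step t+1. A minimal pure-Python sketch of that chaining (the `gru_step` stub and `decode` helper are ours, standing in for a real GRU cell and decoding loop):

```python
def decode(gru_step, tokens, init_state):
    """Run a step function over tokens, passing each step's state forward."""
    state = init_state
    outputs = []
    for tok in tokens:
        out, state = gru_step(tok, state)  # the last step's state feeds this step
        outputs.append(out)
    return outputs, state

# toy step function: output = token + state, next state = state + 1
toy_step = lambda tok, state: (tok + state, state + 1)

outs, final = decode(toy_step, [1, 2, 3], 0)
print(outs, final)  # [1, 3, 5] 3
```

If `decode` instead called `gru_step(tok, init_state)` on every iteration, the steps would be independent, which is the disconnected behavior the report describes.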
tensorflowtensorflow | tensorflow lite image segmentation example for iOS doesn't build | Bug | URL(s) with the issue: (the iOS image segmentation example). Description of issue (what needs changing): the README must be provided with an explanation of how to change the settings of the project. At the moment I have problems adjusting the development team signing. Error message: No profiles for team "Rob De Putter (Personal Team)" matching wildcard found; Xcode couldn't find any provisioning profiles matching "GPC87JXMXD/*". Install the profile (by dragging and dropping it onto Xcode's dock item) or select a different one in the Signing & Capabilities tab of the target editor.
tensorflowtensorflow | ValueError: No gradients provided for any variable: ['conv2d/kernel:0', 'conv2d/bias:0', ...] | Bug | System information: Colab, TensorFlow 2.2.0. Describe the current behavior: I face this error when I try to solve my own data issue, which is multi-label semantic segmentation. When I run in a Jupyter notebook on my local MacBook with a Keras installation (not tf.keras, Keras only), the model trains normally as expected, as in the image below. However, I stopped training the whole model on the local MacBook due to its limited memory and capability, and when I switched to Colab (TensorFlow version 2.2.0-rc2) I faced this error:

    ValueError: No gradients provided for any variable: ['conv2d/kernel:0', 'conv2d/bias:0', 'conv2d_1/kernel:0', 'conv2d_1/bias:0', 'conv2d_2/kernel:0', 'conv2d_2/bias:0', 'conv2d_3/kernel:0', 'conv2d_3/bias:0', 'conv2d_4/kernel:0', 'conv2d_4/bias:0', 'conv2d_5/kernel:0', 'conv2d_5/bias:0', 'conv2d_6/kernel:0', 'conv2d_6/bias:0', 'conv2d_7/kernel:0', 'conv2d_7/bias:0', 'conv2d_8/kernel:0', 'conv2d_8/bias:0', 'conv2d_9/kernel:0', 'conv2d_9/bias:0', 'conv2d_transpose/kernel:0', 'conv2d_transpose/bias:0', 'conv2d_10/kernel:0', 'conv2d_10/bias:0', 'conv2d_11/kernel:0', 'conv2d_11/bias:0', 'conv2d_transpose_1/kernel:0', 'conv2d_transpose_1/bias:0', 'conv2d_12/kernel:0', 'conv2d_12/bias:0', 'conv2d_13/kernel:0', 'conv2d_13/bias:0', 'conv2d_transpose_2/kernel:0', 'conv2d_transpose_2/bias:0', 'conv2d_14/kernel:0', 'conv2d_14/bias:0', 'conv2d_15/kernel:0', 'conv2d_15/bias:0', 'conv2d_transpose_3/kernel:0', 'conv2d_transpose_3/bias:0', 'conv2d_16/kernel:0', 'conv2d_16/bias:0', 'conv2d_17/kernel:0', 'conv2d_17/bias:0', 'conv2d_18/kernel:0', 'conv2d_18/bias:0'].

Full error:

    Epoch 1/40
    ValueError                                Traceback (most recent call last)
    <ipython input> in <module>
          5     callbacks=callbacks,
          6     validation_data=valid_dataloader,
          7     validation_steps=no_of_validation_images - 1, verbose=1)
          8

    10 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
         64 def _method_wrapper(self, *args, **kwargs):
         65   if not self._in_multi_worker_mode():  # pylint: disable=protected-access
         66     return method(self, *args, **kwargs)
         67
         68   # Running inside `run_distribute_coordinator` already.

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
        783                 batch_size=batch_size):
        784               callbacks.on_train_batch_begin(step)
        785               tmp_logs = train_function(iterator)
        786               # Catch OutOfRangeError for Datasets of unknown size.
        787               # This blocks until the batch has finished executing.

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
        578         xla_context.Exit()
        579     else:
        580       result = self._call(*args, **kwds)
        581
        582     if tracing_count == self._get_tracing_count():

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
        625       # This is the first call of __call__, so we have to initialize.
        626       initializers = []
        627       self._initialize(args, kwds, add_initializers_to=initializers)
        628     finally:
        629       # At this point we know that the initialization is complete (or less

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
        504     self._concrete_stateful_fn = (
        505         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
        506             *args, **kwds))
        507
        508     def invalid_creator_scope(*unused_args, **unused_kwds):

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
       2444       args, kwargs = None, None
       2445     with self._lock:
       2446       graph_function, _, _ = self._maybe_define_function(args, kwargs)
       2447     return graph_function
       2448

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
       2775
       2776       self._function_cache.missed.add(call_context_key)
       2777       graph_function = self._create_graph_function(args, kwargs)
       2778       self._function_cache.primary[cache_key] = graph_function
       2779       return graph_function, args, kwargs

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
       2665             arg_names=arg_names,
       2666             override_flat_arg_shapes=override_flat_arg_shapes,
       2667             capture_by_value=self._capture_by_value),
       2668         self._function_attributes,
       2669         # Tell the ConcreteFunction to clean up its graph once it goes out of

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
        979         _, original_func = tf_decorator.unwrap(python_func)
        980
        981       func_outputs = python_func(*func_args, **func_kwargs)
        982
        983       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
        439         # __wrapped__ allows AutoGraph to swap in a converted function. We give
        440         # the function a weak reference to itself to avoid a reference cycle.
        441         return weak_wrapped_fn().__wrapped__(*args, **kwds)
        442     weak_wrapped_fn = weakref.ref(wrapped_fn)
        443

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
        966           except Exception as e:  # pylint: disable=broad-except
        967             if hasattr(e, "ag_error_metadata"):
        968               raise e.ag_error_metadata.to_exception(e)
        969             else:
        970               raise

    ValueError: in user code:

        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:505 train_function  *
            outputs = self.distribute_strategy.run(
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
            return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
            return self._call_for_each_replica(fn, args, kwargs)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
            return fn(*args, **kwargs)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:475 train_step
            self.trainable_variables)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1741 _minimize
            trainable_variables))
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:525 _aggregate_gradients
            filtered_grads_and_vars = _filter_grads(grads_and_vars)
        /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1203 _filter_grads
            ([v.name for v in grads_and_vars],))

        ValueError: No gradients provided for any variable: ['conv2d/kernel:0', 'conv2d/bias:0', 'conv2d_1/kernel:0', 'conv2d_1/bias:0', 'conv2d_2/kernel:0', 'conv2d_2/bias:0', 'conv2d_3/kernel:0', 'conv2d_3/bias:0', 'conv2d_4/kernel:0', 'conv2d_4/bias:0', 'conv2d_5/kernel:0', 'conv2d_5/bias:0', 'conv2d_6/kernel:0', 'conv2d_6/bias:0', 'conv2d_7/kernel:0', 'conv2d_7/bias:0', 'conv2d_8/kernel:0', 'conv2d_8/bias:0', 'conv2d_9/kernel:0', 'conv2d_9/bias:0', 'conv2d_transpose/kernel:0', 'conv2d_transpose/bias:0', 'conv2d_10/kernel:0', 'conv2d_10/bias:0', 'conv2d_11/kernel:0', 'conv2d_11/bias:0', 'conv2d_transpose_1/kernel:0', 'conv2d_transpose_1/bias:0', 'conv2d_12/kernel:0', 'conv2d_12/bias:0', 'conv2d_13/kernel:0', 'conv2d_13/bias:0', 'conv2d_transpose_2/kernel:0', 'conv2d_transpose_2/bias:0', 'conv2d_14/kernel:0', 'conv2d_14/bias:0', 'conv2d_15/kernel:0', 'conv2d_15/bias:0', 'conv2d_transpose_3/kernel:0', 'conv2d_transpose_3/bias:0', 'conv2d_16/kernel:0', 'conv2d_16/bias:0', 'conv2d_17/kernel:0', 'conv2d_17/bias:0', 'conv2d_18/kernel:0', 'conv2d_18/bias:0'].

Describe the expected behavior: the model should train. Standalone code to reproduce the issue: (not provided). Thank you for looking into this.
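A common trigger for "No gradients provided for any variable" (our assumption about this report, not a confirmed diagnosis) is a loss function that does not actually depend on the model's trainable variables, e.g. a custom loss computed on detached or NumPy values. A finite-difference sketch of why such a loss gives the optimizer nothing to apply:

```python
def grad(f, w, eps=1e-6):
    # central finite difference, a stand-in for autodiff
    return (f(w + eps) - f(w - eps)) / (2 * eps)

def connected_loss(w):
    return (w - 3.0) ** 2   # depends on the variable: gradient exists

def disconnected_loss(w):
    return 9.0              # constant w.r.t. the variable: zero/no gradient

g1 = grad(connected_loss, 0.0)     # roughly -6.0: something to update
g2 = grad(disconnected_loss, 0.0)  # 0.0: nothing to update
```

In TF terms, autodiff returns `None` for variables the loss never touches, and the optimizer then raises the error above after filtering all gradients out.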
tensorflowtensorflow | no example showing how to generate arduino files from source for the arduino ide | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source; to get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example. Description of issue (what needs changing): there is no example of generating Arduino-IDE-specific files from source (.cc) files. Clear description: there is documentation and code (generate_microlite_project functions, transform_arduino_source.py, etc.) suggesting that the Make build system allows for easily creating files and a directory from source files to be used in the Arduino IDE, but there is no Arduino example that shows how this is, or should be, done. It would be useful to show how the hello_world project example is built for the Arduino IDE from the repo's sources using Make. Usage example: is there a usage example? Not for Arduino.
tensorflowtensorflow | train_speech_model.ipynb tutorial with colab not working | Bug | TensorFlow Micro system information: host OS platform and distribution (e.g., Linux Ubuntu 16.04); TensorFlow installed from (source or binary); TensorFlow version (commit SHA if source); target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.). Describe the problem: I am trying to use this to train a set. The base tutorial will not compile the train.py section. I am copying and pasting the commands into Colab to test, and I get a module-not-found error. I have used this in the past (about a month or so ago) and it worked fine; something must have been changed recently to cause this issue. Here is the output I get when trying to run the "begin training" section:

    2020-04-04 20:08:28.890032: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
    Traceback (most recent call last):
      File "tensorflow/tensorflow/examples/speech_commands/train.py", line 81, in <module>
        import input_data
      File "/content/tensorflow/tensorflow/examples/speech_commands/input_data.py", line 35, in <module>
        from tensorflow.contrib.framework.python.ops import audio_ops as contrib_audio
    ModuleNotFoundError: No module named 'tensorflow.contrib'

Please provide the exact sequence of commands/steps when you ran into the problem.
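The failing import is `tensorflow.contrib`, which was removed in TensorFlow 2.x, so a Colab runtime defaulting to TF 2 can no longer run this TF 1.x script. A hedged guard sketch for detecting that situation up front (the helper name is ours, not part of the tutorial):

```python
import importlib.util

def module_available(name):
    """Return True if `name` resolves to an importable module."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # raised when a parent package (e.g. `tensorflow`) is itself missing
        return False

print(module_available("json"))                # True: stdlib module
print(module_available("tensorflow.contrib"))  # False on TF 2.x runtimes
```

Such a check lets a notebook fail early with a clear message ("this tutorial requires TensorFlow 1.x") instead of a mid-run traceback.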
tensorflowtensorflow | MutableGraphView::MutableGraphView error: node has self-cycle fanin, but graph is not cyclic | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template. System information: Python 3.6.6 on Fedora; `python3 -c "import tensorflow as tf; print(tf.version.VERSION)"` gives 2.1.0 (tf.version.GIT_VERSION: v2.1.0-rc2-17-ge5bf8de); TensorFlow installed using `pip3 install --upgrade tensorflow`; running on an AMD CPU, no GPU acceleration. Describe the current behavior: for graphs that are not cyclic, I sometimes receive messages like this:

    2020-04-04 11:20:02.351817: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:561] remapper failed: Invalid argument: MutableGraphView::MutableGraphView error: node model_4/up_sampling1d_2/concat has self-cycle fanin model_4/up_sampling1d_2/concat.
    2020-04-04 11:20:02.355486: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:717] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:02.357350: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:717] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:02.371116: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:561] arithmetic_optimizer failed: Invalid argument: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:02.372145: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:561] remapper failed: Invalid argument: MutableGraphView::MutableGraphView error: node model_4/up_sampling1d_2/concat has self-cycle fanin model_4/up_sampling1d_2/concat.
    2020-04-04 11:20:02.373345: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:717] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:02.374334: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:717] Iteration = 1, topological sort failed with
    message: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:02.377600: W tensorflow/core/common_runtime/process_function_library_runtime.cc:697] Ignoring multi-device function optimization failure: Invalid argument: The graph couldn't be sorted in topological order.

Describe the expected behavior: I expect that TensorFlow should not print these messages and, presumably, should be able to topologically order the non-cyclic graph. Standalone code to reproduce the issue:

    import tensorflow as tf
    import numpy as np
    import random

    random.seed(1)
    np.random.seed(1)
    input_data = np.random.random(20000).reshape(10000, 2)
    output_data = np.sin(input_data)

    json = """{"class_name": "Model", "config": {"name": "model_4", "layers": [
      {"class_name": "InputLayer", "config": {"batch_input_shape": [null, 2], "dtype": "float32", "sparse": false, "ragged": false, "name": "input_5"}, "name": "input_5", "inbound_nodes": []},
      {"class_name": "Reshape", "config": {"name": "reshape_2", "trainable": true, "dtype": "float32", "target_shape": [1, 2]}, "name": "reshape_2", "inbound_nodes": [[["input_5", 0, 0, {}]]]},
      {"class_name": "Reshape", "config": {"name": "reshape_1", "trainable": true, "dtype": "float32", "target_shape": [1, 2]}, "name": "reshape_1", "inbound_nodes": [[["input_5", 0, 0, {}]]]},
      {"class_name": "Reshape", "config": {"name": "reshape", "trainable": true, "dtype": "float32", "target_shape": [2, 1]}, "name": "reshape", "inbound_nodes": [[["input_5", 0, 0, {}]]]},
      {"class_name": "UpSampling1D", "config": {"name": "up_sampling1d_2", "trainable": true, "dtype": "float32", "size": 3}, "name": "up_sampling1d_2", "inbound_nodes": [[["reshape_2", 0, 0, {}]]]},
      {"class_name": "UpSampling1D", "config": {"name": "up_sampling1d_1", "trainable": true, "dtype": "float32", "size": 3}, "name": "up_sampling1d_1", "inbound_nodes": [[["reshape_1", 0, 0, {}]]]},
      {"class_name": "UpSampling1D", "config": {"name": "up_sampling1d", "trainable": true, "dtype": "float32", "size": 2}, "name": "up_sampling1d", "inbound_nodes": [[["reshape", 0, 0, {}]]]},
      {"class_name": "Reshape", "config": {"name": "reshape_5", "trainable": true, "dtype": "float32", "target_shape": [2, 1]}, "name": "reshape_5", "inbound_nodes": [[["input_5", 0, 0, {}]]]},
      {"class_name": "ZeroPadding1D", "config": {"name": "zero_padding1d_1", "trainable": true, "dtype": "float32", "padding": [0, 1]}, "name": "zero_padding1d_1", "inbound_nodes": [[["up_sampling1d_2", 0, 0, {}]]]},
      {"class_name": "ZeroPadding1D", "config": {"name": "zero_padding1d", "trainable": true, "dtype": "float32", "padding": [0, 1]}, "name": "zero_padding1d", "inbound_nodes": [[["up_sampling1d_1", 0, 0, {}]]]},
      {"class_name": "Conv1D", "config": {"name": "conv1d", "trainable": true, "dtype": "float32", "filters": 2, "kernel_size": [1], "strides": [1], "padding": "valid", "data_format": "channels_last", "dilation_rate": [1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "name": "conv1d", "inbound_nodes": [[["up_sampling1d", 0, 0, {}]]]},
      {"class_name": "MaxPooling1D", "config": {"name": "max_pooling1d", "trainable": true, "dtype": "float32", "strides": [2], "pool_size": [2], "padding": "valid", "data_format": "channels_last"}, "name": "max_pooling1d", "inbound_nodes": [[["reshape_5", 0, 0, {}]]]},
      {"class_name": "Reshape", "config": {"name": "reshape_3", "trainable": true, "dtype": "float32", "target_shape": [2, 1]}, "name": "reshape_3", "inbound_nodes": [[["input_5", 0, 0, {}]]]},
      {"class_name": "Conv1D", "config": {"name": "conv1d_1", "trainable": true, "dtype": "float32", "filters": 4, "kernel_size": [1], "strides": [1], "padding": "valid", "data_format": "channels_last", "dilation_rate": [1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "name": "conv1d_1", "inbound_nodes": [[["zero_padding1d_1", 0, 0, {}]]]},
      {"class_name": "Dot", "config": {"name": "dot", "trainable": true, "dtype": "float32", "axes": 1, "normalize": false}, "name": "dot", "inbound_nodes": [[["zero_padding1d", 0, 0, {}], ["conv1d", 0, 0, {}]]]},
      {"class_name": "SpatialDropout1D", "config": {"name": "spatial_dropout1d", "trainable": true, "dtype": "float32", "rate": 0.4, "noise_shape": null, "seed": null}, "name": "spatial_dropout1d", "inbound_nodes": [[["max_pooling1d", 0, 0, {}]]]},
      {"class_name": "GlobalAveragePooling1D", "config": {"name": "global_average_pooling1d", "trainable": true, "dtype": "float32", "data_format": "channels_last"}, "name": "global_average_pooling1d", "inbound_nodes": [[["reshape_3", 0, 0, {}]]]},
      {"class_name": "Dot", "config": {"name": "dot_1", "trainable": true, "dtype": "float32", "axes": 1, "normalize": false}, "name": "dot_1", "inbound_nodes": [[["conv1d_1", 0, 0, {}], ["dot", 0, 0, {}]]]},
      {"class_name": "AlphaDropout", "config": {"name": "alpha_dropout", "trainable": true, "dtype": "float32", "rate": 0.24308844217362846}, "name": "alpha_dropout", "inbound_nodes": [[["spatial_dropout1d", 0, 0, {}]]]},
      {"class_name": "Reshape", "config": {"name": "reshape_4", "trainable": true, "dtype": "float32", "target_shape": [1, 1]}, "name": "reshape_4", "inbound_nodes": [[["global_average_pooling1d", 0, 0, {}]]]},
      {"class_name": "Dropout", "config": {"name": "dropout", "trainable": true, "dtype": "float32", "rate": 0.4, "noise_shape": null, "seed": null}, "name": "dropout", "inbound_nodes": [[["dot_1", 0, 0, {}]]]},
      {"class_name": "Flatten", "config": {"name": "flatten", "trainable": true, "dtype": "float32", "data_format": "channels_last"}, "name": "flatten", "inbound_nodes": [[["alpha_dropout", 0, 0, {}]]]},
      {"class_name": "GlobalAveragePooling1D", "config": {"name": "global_average_pooling1d_1", "trainable": true, "dtype": "float32", "data_format": "channels_last"}, "name": "global_average_pooling1d_1", "inbound_nodes": [[["reshape_4", 0, 0, {}]]]},
      {"class_name": "Flatten", "config": {"name": "flatten_1", "trainable": true, "dtype": "float32", "data_format": "channels_last"}, "name": "flatten_1", "inbound_nodes": [[["dropout", 0, 0, {}]]]},
      {"class_name": "Concatenate", "config": {"name": "concatenate", "trainable": true, "dtype": "float32", "axis": 1}, "name": "concatenate", "inbound_nodes": [[["input_5", 0, 0, {}], ["flatten", 0, 0, {}], ["global_average_pooling1d_1", 0, 0, {}], ["flatten_1", 0, 0, {}]]]},
      {"class_name": "Dense", "config": {"name": "dense_4", "trainable": true, "dtype": "float32", "units": 2, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "GlorotUniform", "config": {"seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "name": "dense_4", "inbound_nodes": [[["concatenate", 0, 0, {}]]]}
    ], "input_layers": [["input_5", 0, 0]], "output_layers": [["dense_4", 0, 0]]}, "keras_version": "2.2.4-tf", "backend": "tensorflow"}"""

    model = tf.keras.models.model_from_json(json)
    model.compile(loss="mse", optimizer="adam")
    model.summary()
    model.fit(x=input_data.reshape(-1, 2), y=output_data.reshape(-1, 2), epochs=3, validation_split=0.2)

This runs and produces the following output:

    2020-04-04 11:19:57.409765: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory
    2020-04-04 11:19:57.409910: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory
    2020-04-04 11:19:57.409938: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
    2020-04-04 11:20:00.326542: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
    2020-04-04 11:20:00.326615: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: UNKNOWN ERROR (303)
    2020-04-04 11:20:00.326705: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (sapling6): /proc/driver/nvidia/version does not exist
    2020-04-04 11:20:00.357649: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2195855000 Hz
    2020-04-04 11:20:00.358616: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55b2db3a0250 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2020-04-04 11:20:00.358698: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version

    Model: "model_4"
    Layer (type)                                         Output Shape   Param #  Connected to
    input_5 (InputLayer)                                 [(None, 2)]    0
    reshape_2 (Reshape)                                  (None, 1, 2)   0        input_5[0][0]
    reshape_1 (Reshape)                                  (None, 1, 2)   0        input_5[0][0]
    reshape (Reshape)                                    (None, 2, 1)   0        input_5[0][0]
    up_sampling1d_2 (UpSampling1D)                       (None, 3, 2)   0        reshape_2[0][0]
    up_sampling1d_1 (UpSampling1D)                       (None, 3, 2)   0        reshape_1[0][0]
    up_sampling1d (UpSampling1D)                         (None, 4, 1)   0        reshape[0][0]
    reshape_5 (Reshape)                                  (None, 2, 1)   0        input_5[0][0]
    zero_padding1d_1 (ZeroPadding1D)                     (None, 4, 2)   0        up_sampling1d_2[0][0]
    zero_padding1d (ZeroPadding1D)                       (None, 4, 2)   0        up_sampling1d_1[0][0]
    conv1d (Conv1D)                                      (None, 4, 2)   4        up_sampling1d[0][0]
    max_pooling1d (MaxPooling1D)                         (None, 1, 1)   0        reshape_5[0][0]
    reshape_3 (Reshape)                                  (None, 2, 1)   0        input_5[0][0]
    conv1d_1 (Conv1D)                                    (None, 4, 4)   12       zero_padding1d_1[0][0]
    dot (Dot)                                            (None, 4, 4)   0        zero_padding1d[0][0], conv1d[0][0]
    spatial_dropout1d (SpatialDropout1D)                 (None, 1, 1)   0        max_pooling1d[0][0]
    global_average_pooling1d (GlobalAveragePooling1D)    (None, 1)      0        reshape_3[0][0]
    dot_1 (Dot)                                          (None, 4, 4)   0        conv1d_1[0][0], dot[0][0]
    alpha_dropout (AlphaDropout)                         (None, 1, 1)   0        spatial_dropout1d[0][0]
    reshape_4 (Reshape)                                  (None, 1, 1)   0        global_average_pooling1d[0][0]
    dropout (Dropout)                                    (None, 4, 4)   0        dot_1[0][0]
    flatten (Flatten)                                    (None, 1)      0        alpha_dropout[0][0]
    global_average_pooling1d_1 (GlobalAveragePooling1D)  (None, 1)      0        reshape_4[0][0]
    flatten_1 (Flatten)                                  (None, 16)     0        dropout[0][0]
    concatenate (Concatenate)                            (None, 20)     0        input_5[0][0], flatten[0][0], global_average_pooling1d_1[0][0], flatten_1[0][0]
    dense_4 (Dense)                                      (None, 2)      42       concatenate[0][0]
    Total params: 58; Trainable params: 58; Non-trainable params: 0

    Train on 8000 samples, validate on 2000 samples
    Epoch 1/3
    2020-04-04 11:20:02.351817: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:561] remapper failed: Invalid argument: MutableGraphView::MutableGraphView error: node model_4/up_sampling1d_2/concat has self-cycle fanin model_4/up_sampling1d_2/concat.
    2020-04-04 11:20:02.355486: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:717] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:02.357350: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:717] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:02.371116: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:561] arithmetic_optimizer failed: Invalid argument: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:02.372145: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:561] remapper failed: Invalid argument: MutableGraphView::MutableGraphView error: node model_4/up_sampling1d_2/concat has self-cycle fanin model_4/up_sampling1d_2/concat.
    2020-04-04 11:20:02.373345: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:717] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:02.374334: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:717] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:02.377600: W tensorflow/core/common_runtime/process_function_library_runtime.cc:697] Ignoring multi-device function optimization failure: Invalid argument: The graph couldn't be sorted in topological order.
    7904/8000 [...] - ETA: 0s - loss: 0.1684
    2020-04-04 11:20:04.124781: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:561] remapper failed: Invalid argument: MutableGraphView::MutableGraphView error: node model_4/up_sampling1d_2/concat has self-cycle fanin model_4/up_sampling1d_2/concat.
    2020-04-04 11:20:04.126366: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:717] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:04.127223: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:717] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:04.132842: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:561] arithmetic_optimizer failed: Invalid argument: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:04.133249: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:561] remapper failed: Invalid argument: MutableGraphView::MutableGraphView error: node model_4/up_sampling1d_2/concat has self-cycle fanin model_4/up_sampling1d_2/concat.
    2020-04-04 11:20:04.133728: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:717] Iteration = 0, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:04.134178: E tensorflow/core/grappler/optimizers/dependency_optimizer.cc:717] Iteration = 1, topological sort failed with message: The graph couldn't be sorted in topological order.
    2020-04-04 11:20:04.135585: W tensorflow/core/common_runtime/process_function_library_runtime.cc:697] Ignoring multi-device function optimization failure: Invalid argument: The graph couldn't be sorted in topological order.
    8000/8000 [...] - 3s 429us/sample - loss: 0.1671 - val_loss: 0.0197
    Epoch 2/3
    8000/8000 [...] - 2s 204us/sample - loss: 0.0260 - val_loss: 0.0089
    Epoch 3/3
    8000/8000 [...] - 2s 204us/sample - loss: 0.0112 - val_loss: 0.0054
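The two Grappler messages in this record are two views of the same condition: a node listed in its own fanin is a self-cycle, and any cycle makes a topological sort impossible. A small sketch of that relationship using Kahn's algorithm (the graph encoding and function name are ours):

```python
from collections import deque

def topo_sort(nodes, edges):
    """Kahn's algorithm: return a topological order, or None if a cycle exists."""
    indegree = {n: 0 for n in nodes}
    for _src, dst in edges:
        indegree[dst] += 1
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for src, dst in edges:
            if src == n:
                indegree[dst] -= 1
                if indegree[dst] == 0:
                    ready.append(dst)
    # nodes left unordered are exactly those trapped in cycles
    return order if len(order) == len(nodes) else None

print(topo_sort(["a", "b"], [("a", "b")]))            # ['a', 'b']
print(topo_sort(["concat"], [("concat", "concat")]))  # None: self-cycle fanin
```

The report's point is that the original Keras graph has no such edge, so the self-cycle presumably appears only after a rewrite inside the optimizer.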
tensorflowtensorflow | load_model causes memory leak | Bug | `model = load_model('model/lstm_radam_batchsize50_data5040/algorithm-lstm_radam_batchsize50_data5040-0.h5')` — if I use TensorFlow 2.0 I get this bug; TensorFlow 2.1.0 does not have this bug. Memory-profiler trace (file:line, timestamp, traced memory):

    save.py:146           2020-04-04 20:14:39.885989
    hdf5_format.py:153    2020-04-04 20:14:39.886990
    hdf5_format.py:159    2020-04-04 20:14:39.886990
    model_config.py:54    2020-04-04 20:14:39.890988
    hdf5_format.py:169    2020-04-04 20:14:45.900547
    hdf5_format.py:172    2020-04-04 20:14:46.118488
    optimizer_v2.py:253   2020-04-04 20:14:46.118488
    hdf5_format.py:185    2020-04-04 20:14:48.779033
    hdf5_format.py:193    2020-04-04 20:14:48.779033
    trae.py:2094          2020-04-04 20:14:48.779033
    trae.py:2112          2020-04-04 20:14:48.822994
    trae.py:2116          2020-04-04 20:14:48.823996
    optimizer_v2.py:501   2020-04-04 20:14:48.823996
    optimizer_v2.py:390   2020-04-04 20:14:48.823996
    optimizer_v2.py:393   2020-04-04 20:14:48.824993
    gradients_impl.py:154 2020-04-04 20:14:48.824993
    gradients_impl.py:156 2020-04-04 20:14:48.824993
    gradients_util.py:504 2020-04-04 20:14:48.824993  513.27 MB (0)
    gradients_util.py:680 _MaybeCompile:340  513.27 MB; _MaybeCompile:350  513.27 MB; _MaybeCompile:358 (at 0x0000024aa779c730)  513.27 MB
    gradients_util.py:682 / gradients_util.py:711 / gradients_util.py:711
    gradients_util.py:680 _MaybeCompile:340  544.09 MB; _MaybeCompile:350  544.09 MB; _MaybeCompile:358 (at 0x0000024aac1750d0)  544.09 MB
    gradients_util.py:504 2020-04-04 20:14:52.229995  571.5 MB (1)
    gradients_util.py:680 _MaybeCompile:340  571.5 MB
    gradients_util.py:680 _MaybeCompile:340  2058.01 MB; _MaybeCompile:350  2058.01 MB; _MaybeCompile:358 (at 0x0000024b0a7b08c8)  2058.01 MB
    gradients_util.py:504 2020-04-04 20:15:10.104159  2101.93 MB (84) gradients: _GradientsHelper / _MaybeCompile / grad_fn / _GradientsHelper
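One way to quantify a leak report like this one is to sample traced memory across repeated calls: steadily growing numbers indicate retained allocations. A minimal sketch with the stdlib `tracemalloc` module (the leaky function here is a synthetic stand-in, not `load_model` itself):

```python
import tracemalloc

def traced_growth(fn, repeats=3):
    """Call `fn` repeatedly and record traced memory after each call."""
    tracemalloc.start()
    samples = []
    for _ in range(repeats):
        fn()
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)
    tracemalloc.stop()
    return samples

retained = []                          # simulates state kept alive across calls
def leaky():
    retained.append(bytearray(100_000))

sizes = traced_growth(leaky)
print(sizes[0] < sizes[1] < sizes[2])  # True: memory grows on every call
```

Applying the same harness to repeated `load_model` calls on TF 2.0 vs 2.1.0 would give numbers directly comparable to the MB figures quoted above.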
tensorflowtensorflow | saved_model_cli: cannot import name 'saved_model_aot_compile' from 'tensorflow.python.tools' (2.2.0-rc2) | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution (e.g., Linux Ubuntu 16.04): yes; mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no; TensorFlow installed from (source or binary): no; TensorFlow version (use command below): v2.2.0-rc1-34-ge6e5d6df2a, 2.2.0-rc2; Python version: 3.7.6; Bazel version (if compiling from source): n/a; GCC/compiler version (if compiling from source): n/a; CUDA/cuDNN version, GPU model and memory: n/a. Describe the current behavior: running `saved_model_cli`:

    Traceback (most recent call last):
      File "/Users/tarrade/anaconda/release/conda_env/env_test/bin/saved_model_cli", line 5, in <module>
        from tensorflow.python.tools.saved_model_cli import main
      File "/Users/tarrade/anaconda/release/conda_env/env_test/lib/python3.7/site-packages/tensorflow/python/tools/saved_model_cli.py", line 51, in <module>
        from tensorflow.python.tools import saved_model_aot_compile
    ImportError: cannot import name 'saved_model_aot_compile' from 'tensorflow.python.tools' (/Users/tarrade/anaconda/release/conda_env/env_test/lib/python3.7/site-packages/tensorflow/python/tools/__init__.py)

Describe the expected behavior: with TF 2.1, `saved_model_cli` prints `usage: saved_model_cli [-h] [-v] {show,run,scan,convert} ...` and `saved_model_cli: error: too few arguments`, and if we pass all needed arguments then it works as expected. Standalone code to reproduce the issue: just use the command line `saved_model_cli`.
tensorflowtensorflow | missing trainable_variables and variables | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 19.04; TensorFlow installed from (source or binary): TF 2.1 installed from pip; Python version: Python 3.7.5.

    import os
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
    import tensorflow as tf

    class FooLayer(tf.keras.layers.Layer):
        def __init__(self, siz):
            super(FooLayer, self).__init__()
            self.siz = siz
            self.buildfoo(siz)

        def call(self, in_data):
            foo0 = tf.multiply(in_data, self.footns0)
            foolist = []
            foolist.append(foo0)
            for it in range(1, self.siz + 1):
                tmp = tf.multiply(foolist[it - 1], self.footns[it - 1])
                foolist.append(tmp)
            return foolist[self.siz]

        def buildfoo(self, siz):
            self.footns0 = tf.Variable(1., name='tns0')
            self.footns = []
            for it in range(0, siz):
                self.footns.append(tf.Variable(float(it), name='tns' + str(it + 1)))

    class FooModel(tf.keras.Model):
        def __init__(self, siz):
            super(FooModel, self).__init__()
            self.flayer = FooLayer(siz)

        def call(self, in_data):
            return self.flayer(in_data)

    model = FooModel(5)
    for v in model.trainable_variables:
        print(v.name)
    for v in model.variables:
        print(v.name)

The output currently is only `tns0:0` / `tns0:0`, while the expected output is a list of all six tensors (`self.footns0` and `self.footns`).
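One plausible explanation (our assumption; the report itself does not diagnose it) is that only variables reachable through tracked attributes get reported, and a plain Python list populated after assignment can escape a tracking-at-assignment scheme. A toy sketch of that failure mode (the `Var`/`Tracker` classes are hypothetical, not Keras internals, which wrap lists more thoroughly):

```python
class Var:
    """Stand-in for tf.Variable."""

class Tracker:
    """Records Vars at attribute-assignment time, like a naive layer tracker."""
    def __init__(self):
        object.__setattr__(self, "tracked", [])

    def __setattr__(self, name, value):
        if isinstance(value, Var):
            self.tracked.append(value)
        elif isinstance(value, list):
            self.tracked.extend(v for v in value if isinstance(v, Var))
        object.__setattr__(self, name, value)

layer = Tracker()
layer.w0 = Var()        # tracked: assigned directly
layer.ws = []           # tracked as an (empty) list...
layer.ws.append(Var())  # ...but this append bypasses __setattr__
print(len(layer.tracked))  # 1 — the appended Var was never seen
```

This mirrors the bug report's shape: `footns0` is assigned directly, while `footns` is an empty list mutated afterwards.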
tensorflowtensorflow | tf.keras.losses.categorical_hinge mentions -1/1 values while it works with one-hot encoded tensors | Bug | Hi, please see L866-L882. Both mention for `y_true`: "The ground truth values. `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are provided they will be converted to -1 or 1." — while the code is, as expected, a transcription of the Keras one:

    y_pred = ops.convert_to_tensor(y_pred)
    y_true = math_ops.cast(y_true, y_pred.dtype)
    pos = math_ops.reduce_sum(y_true * y_pred, axis=-1)
    neg = math_ops.reduce_max((1. - y_true) * y_pred, axis=-1)
    return math_ops.maximum(0., neg - pos + 1.)

and this code is meant to work with one-hot encoded tensors (see the original discussion here).
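The code path supports the report: with a one-hot `y_true`, `pos` picks out the predicted score of the true class and `neg` the best score among the others, so -1/1 labels are never required. A pure-Python worked example of the same formula (the helper name is ours):

```python
def categorical_hinge(y_true, y_pred):
    # pos: score assigned to the true (one-hot) class
    pos = sum(t * p for t, p in zip(y_true, y_pred))
    # neg: best score among the non-true classes
    neg = max((1.0 - t) * p for t, p in zip(y_true, y_pred))
    return max(0.0, neg - pos + 1.0)

loss = categorical_hinge([0.0, 1.0, 0.0], [0.2, 0.7, 0.1])
# neg - pos + 1 = 0.2 - 0.7 + 1.0 = 0.5 (up to float rounding)
```

With -1/1 labels the `(1 - t) * p` term would scale the "wrong class" scores by 2 rather than selecting them, which is why the docstring's -1/1 wording conflicts with the implementation.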
tensorflowtensorflow | tf 2.1: inserting into MutableHashTable results in an error | Bug | System information: Have written custom code; OS platform and distribution: Linux Ubuntu 18.04; TensorFlow installed from binary (`pip install tensorflow-gpu==2.1.0`); TensorFlow version: 2.1.0; Python version: 3.6; CUDA/cuDNN version: 10 / 7.0; GPU model and memory: GTX 1070, 6 GB. Describe the current behavior: I am trying to insert some key/value pairs into a MutableHashTable. Describe the expected behavior: using the contrib equivalent of MutableHashTable does not produce this error. Standalone code to reproduce the issue:

    import tensorflow as tf
    tf.compat.v1.disable_eager_execution()

    charmap = list('0123456789abcdefghijklmnopqrstuvwxyz')
    with tf.device('/gpu:0'):
        table = tf.raw_ops.MutableHashTable(key_dtype=tf.int64, value_dtype=tf.string)
        insert = table.insert(tf.constant(list(range(len(charmap))), dtype=tf.int64),
                              tf.constant(charmap))

Other info/logs: here is the output:

    Traceback (most recent call last):
      File "test.py", line 12, in <module>
        insert = table.insert(tf.constant(list(range(len(charmap))), dtype=tf.int64), ...
    AttributeError: 'Tensor' object has no attribute 'insert'
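The `AttributeError` arises because `tf.raw_ops.MutableHashTable` returns a plain resource tensor, which has no `insert` method; at the raw-ops level, inserts are separate ops (e.g. `tf.raw_ops.LookupTableInsertV2`, to our best knowledge of the raw-ops naming) that take the table handle as an argument. A pure-Python stand-in for that handle-plus-ops pattern (all names below are illustrative, not TF API):

```python
# A dict of tables keyed by handle, mimicking resource handles.
_tables = {}

def mutable_hash_table():
    """Create a table and return its handle (not an object with methods)."""
    handle = len(_tables)
    _tables[handle] = {}
    return handle

def lookup_table_insert(handle, keys, values):
    _tables[handle].update(zip(keys, values))

def lookup_table_find(handle, keys, default):
    return [_tables[handle].get(k, default) for k in keys]

charmap = list("0123456789abcdefghijklmnopqrstuvwxyz")
h = mutable_hash_table()
lookup_table_insert(h, range(len(charmap)), charmap)
print(lookup_table_find(h, [0, 10, 35], "?"))  # ['0', 'a', 'z']
```

In TF 2.x the higher-level `tf.lookup` classes wrap this pattern in objects with real `insert`/`lookup` methods, which is closer to what the contrib API offered.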
tensorflowtensorflow | model not deterministic even though os.environ['TF_DETERMINISTIC_OPS'] = '1' is set | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): pretty much the MirroredStrategy FMNIST example; OS platform and distribution: tensorflow/tensorflow:2.2.0rc2-gpu-py3; TensorFlow installed from (source or binary): tensorflow/tensorflow:2.2.0rc2-gpu-py3; TensorFlow version (use command below): tensorflow/tensorflow:2.2.0rc2-gpu-py3; Python version: tensorflow/tensorflow:2.2.0rc2-gpu-py3; CUDA/cuDNN version: tensorflow/tensorflow:2.2.0rc2-gpu-py3; GPU model and memory: 1050M. Describe the current behavior: the model is not deterministic/reproducible. Two runs:

    Downloading data from ... 11493376/11490434 - 2s 0us/step
    Epoch 1:  loss 0.17844311892986298,  accuracy 0.9466999769210815,  test loss 0.057941436767578125,  test accuracy 0.9815000295639038
    Epoch 2:  loss 0.05286668613553047,  accuracy 0.9836500287055969,  test loss 0.044471099972724915,  test accuracy 0.9853000044822693
    Epoch 3:  loss 0.03694676235318184,  accuracy 0.9883000254631042,  test loss 0.034947194159030914,  test accuracy 0.9897000193595886
    Epoch 4:  loss 0.028592929244041443, accuracy 0.9910500049591064,  test loss 0.027234185487031937,  test accuracy 0.9907000064849854
    Epoch 5:  loss 0.022629836574196815, accuracy 0.9927666783332825,  test loss 0.029115190729498863,  test accuracy 0.9904000163078308
    Epoch 6:  loss 0.0172086451202631,   accuracy 0.9944999814033508,  test loss 0.027797872200608253,  test accuracy 0.9902999997138977
    Epoch 7:  loss 0.013981950469315052, accuracy 0.9956499934196472,  test loss 0.02764272689819336,   test accuracy 0.9909999966621399
    Epoch 8:  loss 0.01210874691605568,  accuracy 0.9961333274841309,  test loss 0.035009630024433136,  test accuracy 0.9896000027656555
    Epoch 9:  loss 0.008961305022239685, accuracy 0.9971666932106018,  test loss 0.034057389944791794,  test accuracy 0.9905999898910522
    Epoch 10: loss 0.00800476036965847,  accuracy 0.9972166419029236,  test loss 0.033878158777952194,  test accuracy 0.9900000095367432
    GPU run time: 70.80781483650208 seconds

    Downloading data from ... 11493376/11490434 - 2s 0us/step
    Epoch 1:  loss 0.1761329025030136,   accuracy 0.9478499889373779,  test loss 0.05224931612610817,   test accuracy 0.9835000038146973
    Epoch 2:  loss 0.05251472815871239,  accuracy 0.9836666584014893,  test loss 0.04059470072388649,   test accuracy 0.9860000014305115
    Epoch 3:  loss 0.03771379590034485,  accuracy 0.98785001039505,    test loss 0.03189479187130928,   test accuracy 0.9894000291824341
    Epoch 4:  loss 0.027971116825938225, accuracy 0.9912333488464355,  test loss 0.03176414594054222,   test accuracy 0.9890000224113464
    Epoch 5:  loss 0.022653400897979736, accuracy 0.9925000071525574,  test loss 0.03643624112010002,   test accuracy 0.9876999855041504
    Epoch 6:  loss 0.01727919466793537,  accuracy 0.9942166805267334,  test loss 0.02887595444917679,   test accuracy 0.9901000261306763
    Epoch 7:  loss 0.01397143118083477,  accuracy 0.9957500100135803,  test loss 0.03118096850812435,   test accuracy 0.9905999898910522
    Epoch 8:  loss 0.01202292088419199,  accuracy 0.9961333274841309,  test loss 0.03164077177643776,   test accuracy 0.9909999966621399
    Epoch 9:  loss 0.008715414442121983, accuracy 0.9971333146095276,  test loss 0.04146642982959747,   test accuracy 0.9896000027656555
    Epoch 10: loss 0.008586470037698746, accuracy 0.9969000220298767,  test loss 0.033046264201402664,  test accuracy 0.9902999997138977
    GPU run time: 72.08828902244568 seconds

Describe the expected behavior: I expect the model to be reproducible, with the same loss, accuracy, etc. Standalone code to reproduce the issue:

    #!/usr/bin/env python
    import tensorflow as tf
    import numpy as np
    import argparse
    import time
    import random
    import os
    from tensorflow.keras.layers import Dense, Flatten, Conv2D
    from tensorflow.keras import Model

    def random_seed(seed):
        os.environ['PYTHONHASHSEED'] = str(seed)  # Python general
        np.random.seed(seed)
        random.seed(seed)  # Python random
        tf.random.set_seed(seed)
        os.environ['TF_DETERMINISTIC_OPS'] = '1'

    # not yet using click due to docker issues
    parser = argparse.ArgumentParser(description=
tensorflow entry point parser add argument epoch type int default 10 parser add argument seed type int default 0 args parser parse args detect gpu print f num gpu available len tf config experimental list physical device gpu load mnist mnist tf keras datasets mnist train image train label test image test label mnist load datum add a dimension to the array new shape 28 28 1 since the first layer in our model be a convolutional layer and it require a 4d input batch size height width channel batch size dimension will be add later on train image train image none test image test image none normalize the image to 0 1 range train image train image np float32 255 test image test image np float32 255 use mirroredstrategy for multi gpu support if the list of device be not specify in the tf distribute mirroredstrategy constructor it will be auto detect strategy tf distribute mirroredstrategy buffer size len train image batch size per replica 64 global batch size batch size per replica strategy num replicas in sync batch and distribute datum train dataset tf datum dataset from tensor slice train image train label shuffle buffer size batch global batch size test dataset tf datum dataset from tensor slice test image test label shuffle buffer size batch global batch size train dist dataset strategy experimental distribute dataset train dataset test dist dataset strategy experimental distribute dataset test dataset fix seed random seed 0 define model def create model model tf keras sequential tf keras layer conv2d 32 3 activation relu tf keras layer maxpooling2d tf keras layer conv2d 64 3 activation relu tf keras layer maxpooling2d tf keras layer flatten tf keras layer dense 64 activation relu tf keras layer dense 10 return model define loss and accuracyc metric with strategy scope set reduction to none so reduction can be do afterwards and divide by global batch size loss object tf keras loss sparsecategoricalcrossentropy from logit true reduction tf keras loss reduction none def 
compute loss label prediction per example loss loss object label prediction return tf nn compute average loss per example loss global batch size global batch size test loss tf keras metric mean name test loss train accuracy tf keras metric sparsecategoricalaccuracy name train accuracy test accuracy tf keras metric sparsecategoricalaccuracy name test accuracy define model optimizer training and test step with strategy scope model create model optimizer tf keras optimizer adam def train step input image label input with tf gradienttape as tape prediction model image train true loss compute loss label prediction gradient tape gradient loss model trainable variable optimizer apply gradient zip gradient model trainable variable train accuracy update state label prediction return loss def test step input image label input prediction model image training false t loss loss object label prediction test loss update state t loss test accuracy update state label prediction with strategy scope run replicate the provide computation and run it with the distribute input tf function def distribute train step dataset input per replica loss strategy run train step args dataset input return strategy reduce tf distribute reduceop sum per replica loss axis none tf function def distribute test step dataset input return strategy run test step args dataset input gpu runtime time time for epoch in range args epoch train loop total loss 0 0 num batch 0 for dist dataset in train dist dataset total loss distribute train step dist dataset num batch 1 train loss total loss num batch test loop for dist dataset in test dist dataset distribute test step dist dataset print f epoch epoch 1 loss train loss accuracy train accuracy result f test loss test loss result test accuracy test accuracy result reset state test loss reset state train accuracy reset states test accuracy reset states print f gpu run time str time time gpu runtime second other info log include any log or source code that would be 
helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach def random seed seed os environ pythonhashsee str seed python general np random seed seed random seed seed python random tf random set seed seed os environ tf deterministic op 1 I guess this should cover everything the code be currently run on a single gpu even though I m plan to run it on several gpu |
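A standalone, dependency-light version of the seeding routine from the report can be useful for checking the Python-side randomness in isolation. The sketch below is an assumption-laden helper (the name set_global_seed is mine): the TensorFlow call is commented out so the snippet runs without TensorFlow, and note that PYTHONHASHSEED only affects hash randomization if it is set before the interpreter starts, so setting it inside the process mainly helps subprocesses.

```python
import os
import random

import numpy as np


def set_global_seed(seed):
    """Seed every RNG a typical training script touches.

    Caveats: PYTHONHASHSEED must be set before the Python process starts
    to influence hash randomization in this process, and
    TF_DETERMINISTIC_OPS must be set before TensorFlow selects kernels.
    """
    os.environ["PYTHONHASHSEED"] = str(seed)
    os.environ["TF_DETERMINISTIC_OPS"] = "1"
    random.seed(seed)     # Python's RNG
    np.random.seed(seed)  # NumPy's RNG
    # tf.random.set_seed(seed)  # uncomment when TensorFlow is imported
```

Even with all of these fixed, the report above shows GPU runs still diverging, i.e. the remaining nondeterminism comes from the GPU kernels rather than the Python-side RNGs.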
tensorflow/tensorflow | Error while passing initial state to Bidirectional LSTM | Bug | System information: custom code: yes; OS: macOS Mojave 10.14.6; TensorFlow installed from binary (Anaconda); Python version: 3.6.10; TensorFlow version: 2.1.0.

Describe the current behavior: an error is raised while passing an initial state to tf.keras.layers.Bidirectional with tf.keras.layers.LSTM as the wrapped cell.

Describe the expected behavior: it should be possible to pass the initial state.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import tensorflow.keras.layers as layers

    encoder_units = 100
    batch_size = 5
    embed_dim = 300
    lstm = layers.Bidirectional(layers.LSTM(
        encoder_units, return_sequences=True, return_state=True,
        time_major=False))
    initial_state = [tf.zeros((batch_size, encoder_units)),
                     tf.zeros((batch_size, encoder_units))]
    embed_inp = tf.zeros((batch_size, encoder_units, embed_dim))
    encoder_op, state_h, state_c = lstm(embed_inp, initial_state=initial_state)

Other info / logs (full traceback):

    /anaconda3/envs/tf2/bin/python /Users/jshah02/Library/Preferences/PyCharmCE2019.2/scratches/scratch_8.py
    2020-04-03 16:02:31.592278: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2020-04-03 16:02:31.606727: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fe10d5c24e0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2020-04-03 16:02:31.606745: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
    Traceback (most recent call last):
      File ".../scratches/scratch_8.py", line 15, in <module>
        encoder_op, state_h, state_c = lstm(embed_inp, initial_state=initial_state)
      File ".../tensorflow_core/python/keras/layers/wrappers.py", line 605, in __call__
        return super(Bidirectional, self).__call__(inputs, **kwargs)
      File ".../tensorflow_core/python/keras/engine/base_layer.py", line 818, in __call__
        self._maybe_build(inputs)
      File ".../tensorflow_core/python/keras/engine/base_layer.py", line 2116, in _maybe_build
        self.build(input_shapes)
      File ".../tensorflow_core/python/keras/layers/wrappers.py", line 697, in build
        self.forward_layer.build(input_shape)
      File ".../tensorflow_core/python/keras/layers/recurrent.py", line 574, in build
        self._validate_state_spec(state_size, self.state_spec)
      File ".../tensorflow_core/python/keras/layers/recurrent.py", line 605, in _validate_state_spec
        raise validation_error
    ValueError: An initial_state was passed that is not compatible with `cell.state_size`. Received `state_spec`=ListWrapper([InputSpec(shape=(5, 100), ndim=2)]); however `cell.state_size` is [100, 100]

    Process finished with exit code 1
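A workaround commonly suggested for this error: the Bidirectional wrapper expects initial states for both directions, i.e. a list of four tensors (forward h, forward c, backward h, backward c), not the two-tensor list a plain LSTM takes. A dependency-free sketch of a helper that reuses one state pair for both directions (the name expand_initial_state is mine, not a Keras API):

```python
def expand_initial_state(states):
    """Turn a single-direction [h, c] state pair into the four-element
    list a Bidirectional(LSTM(...)) layer expects: forward [h, c]
    followed by backward [h, c]. Here the same pair is reused for both
    directions."""
    if len(states) != 2:
        raise ValueError("expected a [hidden_state, cell_state] pair")
    h, c = states
    return [h, c, h, c]
```

With the reproduction above, calling the layer as lstm(embed_inp, initial_state=expand_initial_state(initial_state)) avoids the state-spec mismatch.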
tensorflow/tensorflow | Error on conflict between custom op argument name and Python builtin name | Bug | System information: custom code: yes; OS: Linux; TensorFlow installed from binary; TensorFlow version: 1.13.1; Python version: 3.6.

Describe the current behavior: if we define a custom op such as

    REGISTER_OP("MyCustomOp")
        .Input("list: <list of int32/float32>")
        .Attr("my_list_attr: list(int)")
        // other settings

and invoke it in Python as

    lib = tf.load_op_library('my_custom_op.so')
    output = lib.my_custom_op(arguments)

the error "TypeError: isinstance() arg 2 must be a type or tuple of types" is raised.

Other info / logs: referring to L484, I found that the Python-side wrapper is generated as

    def my_custom_op(list, ...):

If we define the input name as "list", then for the attribute my_list_attr the check code is generated as

    if not isinstance(my_list_attr, (list, tuple)):

but `list` no longer refers to the builtin type, which results in the Python TypeError. Could some name checking be performed to avoid this kind of confusing error?
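The failure mode can be reproduced without TensorFlow: in the generated wrapper, a parameter named `list` shadows the builtin, so the isinstance check receives the op's input value where a type is expected. The function below is a hypothetical stand-in for the generated wrapper, not real generated code:

```python
def my_custom_op(list, my_list_attr=None):
    # The parameter named `list` shadows the builtin, so in the check
    # below `list` is the op's input value, not the builtin type.
    if not isinstance(my_list_attr, (list, tuple)):
        raise TypeError("my_list_attr must be a list")
    return my_list_attr
```

Calling my_custom_op([1, 2], my_list_attr=[3]) raises the same "isinstance() arg 2 must be a type or tuple of types" TypeError as in the report, because isinstance's second argument now contains a list value instead of the list type.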
tensorflow/tensorflow | Example person_detection test cannot be made from source code of TensorFlow | Bug | Describe the problem: the third party's download shell file should be updated (image attached).
tensorflow/tensorflow | fftshift is failing for negative axes | Bug | System information: custom code: yes; OS: Linux Ubuntu 16.04; TensorFlow installed from binary (pip); TensorFlow version: 2.1.0; Python version: 3.6.8.

Describe the current behavior: when using the fftshift op, I would like to specify the shift axes using negative indices. Right now the op fails if I specify negative axes.

Describe the expected behavior: I would like the op not to fail.

Standalone code to reproduce the issue:

    import tensorflow as tf
    tf.signal.fftshift(tf.ones([1, 32, 32]), axes=[-2, -1])

Other info / logs:

    InvalidArgumentError                      Traceback (most recent call last)
    ----> 1 tf.signal.fftshift(tf.ones([1, 32, 32]), axes=[-2, -1])

    ~/workspace/fastmri-reproducible-benchmark/venv/lib/python3.6/site-packages/tensorflow_core/python/ops/signal/fft_ops.py in fftshift(x, axes, name)
        389       shift = array_ops.shape(x) // 2
        390     else:
        391       shift = array_ops.gather(array_ops.shape(x), axes) // 2
        392
        393   return manip_ops.roll(x, shift, axes, name)

    (further frames through util/dispatch.py, ops/array_ops.py (gather),
    ops/gen_array_ops.py (gather_v2 and its eager fallback), and
    eager/execute.py (quick_execute), ending with six.raise_from)

    InvalidArgumentError: indices[0] = -2 is not in [0, 3) [Op:GatherV2]
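Until the underlying gather handles negative indices, a simple workaround is to normalize negative axes to their non-negative equivalents before calling tf.signal.fftshift. The helper below is dependency-free; its name and signature are mine, and it mirrors Python/NumPy negative-index semantics:

```python
def normalize_axes(axes, rank):
    """Map possibly-negative axis indices into the [0, rank) range that
    fftshift's internal gather can handle."""
    if isinstance(axes, int):
        axes = (axes,)
    normalized = []
    for axis in axes:
        if not -rank <= axis < rank:
            raise ValueError(f"axis {axis} out of range for rank {rank}")
        normalized.append(axis % rank)  # -1 -> rank-1, -2 -> rank-2, ...
    return normalized
```

With the reproduction above, tf.signal.fftshift(x, axes=normalize_axes([-2, -1], 3)) then behaves like axes=[1, 2] and avoids the GatherV2 error.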
tensorflow/tensorflow | With experimental_run_functions_eagerly(True), a tf.function run by Dataset doesn't get eager tensors | Bug | System information: custom code: yes; OS: Ubuntu 18.04; mobile device: n/a; TensorFlow installed from binary; TensorFlow version: 2.1.0 and 2.2.0rc2; Python version: 3.7.6.

The manual suggests switching eager computation on for tf.functions, if debugging is desired, via tf.config.experimental_run_functions_eagerly(True). However, this is not possible in situations like the one shown below. It seems that in the run-eagerly case, the function passed to map would need to be executed with tensorflow.python.framework.ops.EagerTensor arguments, not regular tensorflow.python.framework.ops.Tensor ones.

Describe the current behavior: the example below fails with an exception:

    Traceback (most recent call last):
      File "test.py", line 14, in <module>
        perform_test()
      File "test.py", line 8, in perform_test
        print(list(tf.data.Dataset.from_tensor_slices([-1.0, 1.0]).map(non_negative)))
      File "tensorflow/python/data/ops/dataset_ops.py", line 1621, in map
        return MapDataset(self, map_func, preserve_cardinality=True)
      (further frames through dataset_ops.py (StructuredFunctionWrapper),
      eager/function.py (_get_concrete_function, _maybe_define_function,
      _create_graph_function), framework/func_graph.py
      (func_graph_from_py_func), dataset_ops.py (wrapper_fn,
      _wrapper_helper), and eager/def_function.py (__call__))
      File "test.py", line 5, in non_negative
        return 1.0 if value >= 0.0 else 0.0
      File "tensorflow/python/framework/ops.py", line 778, in __bool__
        self._disallow_bool_casting()
      File "tensorflow/python/framework/ops.py", line 542, in _disallow_bool_casting
        "using a `tf.Tensor` as a Python `bool`")
      File "tensorflow/python/framework/ops.py", line 527, in _disallow_when_autograph_disabled
        " Try decorating it directly with @tf.function.".format(task))
    tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph is disabled in this function. Try decorating it directly with @tf.function.

Describe the expected behavior: the example below works as desired (output omitted in this report).

Standalone code to reproduce the issue:

    import tensorflow as tf

    @tf.function
    def non_negative(value):
        return 1.0 if value >= 0.0 else 0.0

    def perform_test():
        print(list(tf.data.Dataset.from_tensor_slices([-1.0, 1.0]).map(non_negative)))

    perform_test()
    tf.config.experimental_run_functions_eagerly(True)
    perform_test()
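The mechanism behind the failure can be shown without TensorFlow: Dataset.map always traces its function into a graph, so non_negative receives symbolic placeholder tensors, and a symbolic tensor refuses to be collapsed into a Python bool, regardless of the run-eagerly flag. The classes below are illustrative stand-ins I wrote for this sketch, not TensorFlow types:

```python
class SymbolicTensor:
    """Minimal stand-in for a graph-mode (non-eager) tensor."""

    def __ge__(self, other):
        # comparisons build new symbolic values instead of evaluating
        return SymbolicTensor()

    def __bool__(self):
        # graph tensors cannot be collapsed to a Python truth value
        raise TypeError(
            "using a symbolic tensor as a Python `bool` is not allowed")


def non_negative(value):
    # the `if` forces bool(value >= 0.0), which fails for symbolic input
    return 1.0 if value >= 0.0 else 0.0
```

non_negative works on plain floats but raises on a SymbolicTensor, which is why flipping experimental_run_functions_eagerly has no effect on functions traced by Dataset.map.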
tensorflow/tensorflow | make_csv_dataset: ValueError: Received a feature column from TensorFlow v1 | Bug | System information: OS: macOS 10.14.6; mobile device: n/a; TensorFlow installed from binary (pip install); TensorFlow version: 2.1.0 (v2.1.0-rc2-17-ge5bf8de410); Python version: 3.6.

Describe the current behavior: when trying to train a DNNRegressor using a dataset from make_csv_dataset, I obtain a very strange error message:

    ValueError: Received a feature column from TensorFlow v1, but this is a TensorFlow v2 Estimator. Please either use v2 feature columns (accessible via tf.feature_column in TF 2.x) with this Estimator, or switch to a v1 Estimator for use with v1 feature columns (accessible via tf.compat.v1.estimator and tf.compat.v1.feature_column, respectively).

Describe the expected behavior: I was expecting that this would work directly.

Standalone code to reproduce the issue:

    def train_input_fn():
        df = tf.data.experimental.make_csv_dataset(
            file_pattern, batch_size, column_names=None,
            column_defaults=None, label_name=label_names[0],
            select_columns=column_names, field_delim=',',
            use_quote_delim=True, na_value='', header=True,
            num_epochs=None, shuffle=True, shuffle_buffer_size=10000,
            shuffle_seed=None, prefetch_buffer_size=None,
            num_parallel_reads=None, sloppy=False,
            num_rows_for_inference=100, compression_type=None,
            ignore_errors=False)
        df_batches = df.cache().repeat().shuffle(500).prefetch(
            tf.data.experimental.AUTOTUNE)
        return df_batches

    tf.keras.backend.set_floatx('float32')
    nfeats = len(feature_names)
    ncovs = nfeats * (nfeats + 1) // 2
    model = tf.estimator.DNNRegressor(
        [ncovs], feature_names, activation_fn=tf.nn.relu, dropout=0.3,
        optimizer='Adam', weight_column='weights')
    history = model.train(train_input_fn, steps=40000)

I do not understand how to make this work. Honestly, I cannot find an end-to-end minimal example that uses a CSV input data file that is too large to fit in memory.
tensorflow/tensorflow | AttributeError: module 'tensorflow.keras.optimizers' has no attribute 'Adam' | Bug | System information: custom code: yes; platform: Google Colab; TensorFlow installed from binary; TensorFlow version: 2.2.0-rc2.

Describe the current behavior: TensorFlow 2.2.0-rc2 cannot find Adam in tf.keras.optimizers.

Standalone code to reproduce the issue:

    optimizer = tf.keras.optimizers.Adam(2e-4)
tensorflow/tensorflow | Unable to batch TensorFlow Lite object detection model | Bug | System information: custom code: yes; OS: macOS; TensorFlow installed from binary; TensorFlow version: 1.13.1; Python version: 3.7.

Describe the current behavior: I am using the TFLite object detection model in Python, and upon trying to resize the input to increase the batch size, I get an error as follows:

    Traceback (most recent call last):
      File "test_crop_face.py", line 70, in <module>
        interpreter.allocate_tensors()
      File "/Users/harshitdwivedi/Desktop/tf_env/lib/python3.7/site-packages/tensorflow/lite/python/interpreter.py", line 73, in allocate_tensors
        return self._interpreter.AllocateTensors()
      File "/Users/harshitdwivedi/Desktop/tf_env/lib/python3.7/site-packages/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
        return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
    RuntimeError: tensorflow/lite/kernels/reshape.cc:58 num_input_elements != num_output_elements (3276800 != 65536) Node number 88 (RESHAPE) failed to prepare.

Describe the expected behavior: setting a custom batch size for an image classification model works without any issue, so I expected the same thing to happen here as well.

Standalone code to reproduce the issue:

    interpreter = tf.contrib.lite.Interpreter(model_path="crop_face.tflite")
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    # resize the input to run inference on more than 1 image at a time;
    # the default size is (1, 512, 512, 3)
    interpreter.resize_tensor_input(input_details[0]['index'], (50, 512, 512, 3))
    interpreter.allocate_tensors()

Here's the model I'm using. I trained it via GCP's Cloud Vision dashboard.
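The numbers in the error hint at the cause: 3,276,800 = 50 × 65,536, i.e. the model contains a RESHAPE node whose output shape was frozen for batch size 1, so resizing the input multiplies the incoming element count while the reshape target stays fixed. A dependency-free sketch of the prepare-time check TFLite performs (the shapes below are illustrative, assuming a 256×256 intermediate so that one batch holds 65,536 elements):

```python
from functools import reduce
from operator import mul


def elements(shape):
    """Total element count of a tensor shape."""
    return reduce(mul, shape, 1)


def reshape_prepares(input_shape, frozen_output_shape):
    """Mirror of the reshape.cc prepare-time check: a RESHAPE node only
    prepares when the input and the (frozen) output element counts
    agree."""
    return elements(input_shape) == elements(frozen_output_shape)
```

reshape_prepares((1, 256, 256), (65536,)) holds, but after resizing the batch to 50 the input carries 3,276,800 elements against the frozen 65,536, reproducing the reported failure; classification models resize fine when they contain no such fixed-shape reshape.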
tensorflow/tensorflow | Java SavedModelBundle import of LookupTable core dumps | Bug | I used libtensorflow.jar to load a model in the SavedModel format; a core dump occurs in the LookupTableImportOp compute stage. However, this model can be loaded successfully via the C++ TensorFlow Serving executable or the Python tf.saved_model loader. TensorFlow 1.12, macOS, CPU only.

    public static void main(String[] args) {
        System.out.println(TensorFlow.version());
        SavedModelBundle b = SavedModelBundle.load("model", "serve");
        b.close();
    }

Error message:

    1.12.0
    2020-04-02 12:09:07.285904: I tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: model
    2020-04-02 12:09:07.295754: I tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
    2020-04-02 12:09:07.342467: I tensorflow/cc/saved_model/loader.cc:162] Restoring SavedModel bundle.
    [thread 23555 also had an error]
    [thread 42243 also had an error]
    # A fatal error has been detected by the Java Runtime Environment:
    #  SIGSEGV (0xb) at pc=0x00000001163dec0e
    [thread 23299 also had an error]
    # pid=93091, tid=0x000000000000a103
    [thread 41475 also had an error]
    [thread 41731 also had an error]
    # JRE version: Java(TM) SE Runtime Environment (8.0_201-b09) (build 1.8.0_201-b09)
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.201-b09 mixed mode bsd-amd64 compressed oops)
    [thread 42499 also had an error]
    # Problematic frame:
    # C  [libtensorflow_framework.so+0x44c0e]  tensorflow::lookup::LookupInterface::CheckKeyAndValueTensorsHelper(tensorflow::Tensor const&, tensorflow::Tensor const&)+0x6e
    # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again

    Stack: [0x000070000e240000,0x000070000e2c0000], sp=0x000070000e2bf4b0, free space=509k
    Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
    C  [libtensorflow_framework.so+0x44c0e]  tensorflow::lookup::LookupInterface::CheckKeyAndValueTensorsHelper(tensorflow::Tensor const&, tensorflow::Tensor const&)+0x6e
    C  [libtensorflow_framework.so+0x44e2e]  tensorflow::lookup::LookupInterface::CheckKeyAndValueTensorsForImport(tensorflow::Tensor const&, tensorflow::Tensor const&)+0xe
    C  [libtensorflow_jni.dylib+0x10454e0]  tensorflow::LookupTableImportOp<...>::Compute(tensorflow::OpKernelContext*)+0x140
    C  [libtensorflow_framework.so+0x23b362]  tensorflow::(anonymous namespace)::ExecutorState::Process(tensorflow::(anonymous namespace)::ExecutorState::TaggedNode, long long)+0x1f12
    C  [libtensorflow_framework.so+0x2434ba]  std::__1::__function::__func<...>::operator()()+0x3a
    C  [libtensorflow_framework.so+0x29c58f]  Eigen::NonBlockingThreadPoolTempl<...>::WorkerLoop(int)+0x54f
    C  [libtensorflow_framework.so+0x29bf3f]  std::__1::__function::__func<lambda, std::__1::allocator<lambda>, void ()>::operator()()+0x2f
    C  [libtensorflow_framework.so+0x2c0990]  void* std::__1::__thread_proxy<... std::__1::function<void ()>>(void*)+0x30
    C  [libsystem_pthread.dylib+0x3305]  _pthread_body+0x7e
    C  [libsystem_pthread.dylib+0x626f]  _pthread_start+0x46
    C  [libsystem_pthread.dylib+0x2415]  thread_start+0xd
tensorflow/tensorflow | Extra metric added to model.metrics in TF 2.2 | Bug | System information: custom code: no; OS: Ubuntu 18.04 (run on Colab); mobile device: n/a; TensorFlow installed from binary (run on Colab); TensorFlow version: 2.2.0-rc2; Python version: 3.6.9; Bazel / GCC: run on Colab; CUDA/cuDNN: no GPU; GPU model and memory: no GPU.

Describe the current behavior: when I compile my model with model.compile(..., metrics=["mae"]), I end up with an extra metric in the first position. This breaks my existing code, for example when accessing model.metrics[0]: now I get another metric than the one that is expected.

Describe the expected behavior: when I compile the model with n metrics, I expect model.metrics to return a list with n metrics, not 1 + n.

Standalone code to reproduce the issue (you can run the following code in this Colab):

    import numpy as np
    from tensorflow import keras

    X_train = np.random.rand(100, 10)
    y_train = np.random.rand(100, 1)

    model = keras.models.Sequential([
        keras.layers.Dense(2, activation="relu", input_shape=(10,)),
        keras.layers.Dense(1),
    ])
    model.compile(loss="mse", optimizer="sgd", metrics=["mae", "mse"])
    model.fit(X_train, y_train, epochs=2)
    assert len(model.metrics) == 2  # the assertion fails

Other info / logs: this may be related to issue #37990, but it feels more severe. There is another behavior change that really confuses me: the model.metrics list is empty until model.fit is called. In short, model.metrics seems really broken and unintuitive now.
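Until the positional change settles, looking metrics up by name instead of by index keeps calling code working across versions. A small dependency-free helper (the name metric_by_name is mine; it works with any objects exposing a .name attribute, as Keras metrics do):

```python
def metric_by_name(metrics, name):
    """Return the first metric whose `.name` matches, so callers do not
    depend on the position of the loss-tracking metric that tf.keras
    2.2 prepends to `model.metrics`."""
    for metric in metrics:
        if metric.name == name:
            return metric
    raise KeyError(f"no metric named {name!r}")
```

For the example above, metric_by_name(model.metrics, "mae") keeps returning the MAE metric whether or not a loss tracker occupies index 0.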
tensorflow/tensorflow | Keras model errors on loading: 'list' object has no attribute 'items', with TF 2.1 | Bug | System information: custom code: no; OS: macOS 10.15.2 (build 19C57); TensorFlow installed from binary (pip); TensorFlow version: 2.1.0; Python version: 3.6.8; CUDA/cuDNN: none; GPU model and memory: none.

Describe the current behavior: when trying to load one of my models using tf.keras.models.load_model, an error is thrown at the following location:

    tensorflow_core/python/keras/utils/generic_utils.py, line 254, in class_and_config_for_serialized_keras_object:
        for key, item in cls_config.items():
    AttributeError: 'list' object has no attribute 'items'

This code expects cls_config to be a dictionary, while for this model it is a list of dictionaries. I can successfully load and run this model using TensorFlow versions 2.0.0, 1.15.0 and 1.14.0. This section of code was introduced when adding support for passive serialization in Keras.

Describe the expected behavior: can successfully load a model from an HDF5 file.

Code to reproduce the issue:

    import tensorflow as tf
    model = tf.keras.models.load_model('cnn_multichannel_dense_f0_b0.h5', compile=False)

Other info / logs: I am also attaching a dummy HDF5 model below, which can be used to test. Complete stack trace of the error:

    File "lib/python3.6/site-packages/tensorflow_core/python/keras/saving/save.py", line 146, in load_model
        return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
    File "lib/python3.6/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 168, in load_model_from_hdf5
        custom_objects=custom_objects)
    File "lib/python3.6/site-packages/tensorflow_core/python/keras/saving/model_config.py", line 55, in model_from_config
        return deserialize(config, custom_objects=custom_objects)
    File "lib/python3.6/site-packages/tensorflow_core/python/keras/layers/serialization.py", line 106, in deserialize
        printable_module_name='layer')
    File "lib/python3.6/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 292, in deserialize_keras_object
        config, module_objects, custom_objects, printable_module_name)
    File "lib/python3.6/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 254, in class_and_config_for_serialized_keras_object
        for key, item in cls_config.items():
    AttributeError: 'list' object has no attribute 'items'

When loaded with tf.keras in v2.0.0, the layers, model config, inputs, outputs, summary, etc. are all parsed correctly, and I am able to run data through the model.
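The shape mismatch can be illustrated with a defensive shim: the loader assumes a dict config, while this HDF5 file stores a list of dicts. The helper below is hypothetical (my own sketch, not the eventual Keras fix) and simply accepts both shapes, with list indices standing in for keys:

```python
def config_items(cls_config):
    """Yield (key, item) pairs from a serialized Keras class config,
    tolerating both the usual dict form and the list-of-dicts form
    found in some older HDF5 files (keys become list indices)."""
    if isinstance(cls_config, dict):
        return list(cls_config.items())
    if isinstance(cls_config, (list, tuple)):
        return list(enumerate(cls_config))
    raise TypeError(f"unsupported config type: {type(cls_config)!r}")
```

A loader written against config_items(cls_config) instead of cls_config.items() would not raise the AttributeError reported above.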
tensorflow/tensorflow | test | Bug |
tensorflow/tensorflow | Error converting MobileNet and MobileNetV2 to TFLite: FusedBatchNormV3 | Bug | System information: Linux Ubuntu 18.04; installed with pip install / conda install; versions tried: 2.0.0, 2.1.0, tf-nightly 2.2.0.dev20200401. Command used to run the converter (Python API; if possible, please share a link to a Colab/Jupyter or any notebook):

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import tensorflow as tf
import tensorflow_addons as tfa
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import pathlib
from sklearn.utils import class_weight

print(tf.__version__)
print("eagerly enabled:", tf.executing_eagerly())
model.load_weights("mobilenet model3 with reg 6 c.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True

The output from the converter invocation:

ConverterError: see console for info.

/home/vectorweb4/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516-525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. (This warning repeats for np.qint8, np.quint8, np.qint16, np.quint16, np.qint32 and np.resource, and the same set of warnings repeats from tensorboard/compat/tensorflow_stub/dtypes.py:541-550.)

2020-04-01 07:24:52 I tensorflow/lite/toco/import_tensorflow.cc:1336 Converting unsupported operation: FusedBatchNormV3 (this line repeats for every batch-norm layer in the model)
2020-04-01 07:24:52 I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39 Before Removing unused ops: 118 operators, 397 arrays (0 quantized)
2020-04-01 07:24:52 I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39 Before general graph transformations: 118 operators, 397 arrays (0 quantized)
2020-04-01 07:24:52 I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39 After general graph transformations pass 1: 90 operators, 396 arrays (0 quantized); pass 2: 89 operators, 395 arrays; pass 3: 88 operators, 393 arrays
2020-04-01 07:24:52 I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39 Before Group bidirectional sequence lstm/rnn and before dequantization graph transformations: 88 operators, 393 arrays (0 quantized)
2020-04-01 07:24:52 I tensorflow/lite/toco/allocate_transient_arrays.cc:345 Total transient array allocated size: 4513344 bytes, theoretical optimal value: 4513344 bytes
2020-04-01 07:24:52 E tensorflow/lite/toco/toco_tooling.cc:462 We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a GitHub issue and pasting the following. Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: CONV_2D, DEPTHWISE_CONV_2D, FULLY_CONNECTED, PAD, RELU6, SOFTMAX. Here is a list of operators for which you will need custom implementations: FusedBatchNormV3.

Traceback (most recent call last):
  File "/home/vectorweb4/.local/bin/toco_from_protos", line 11: sys.exit(main())
  File ".../tensorflow/lite/toco/python/toco_from_protos.py", line 59, in main: app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File ".../tensorflow/python/platform/app.py", line 40, in run: _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File ".../absl/app.py", line 299, in run / line 250, in _run_main: sys.exit(main(argv))
  File ".../tensorflow/lite/toco/python/toco_from_protos.py", line 33, in execute: output_str = tensorflow_wrap_toco.TocoConvert(model_str, toco_str, input_str)
Exception: (the same "some of the operators in the model are not supported" message as above, ending with: Here is a list of operators for which you will need custom implementations: FusedBatchNormV3.)
tensorflow/tensorflow | Bug in the map method of tf.data.Dataset (TensorFlow version 2.1.0) | Bug | One example of the map method on the official website says that map_func gets the same shape and dtype as a tf.Tensor. However, it doesn't. The docs for map say: "map_func: takes a single argument of type tf.Tensor with the same shape and dtype", with the example: dataset = Dataset.range(5); result = dataset.map(lambda x: x + 1). According to the official example, I think `item` in `func` in the following code should be an EagerTensor, but it turns out to be a Tensor instead:

import tensorflow as tf

def func(item):
    # I expected an EagerTensor, but I get a Tensor here
    print(type(item))
    return item

tensor = tf.convert_to_tensor(['hello', 'world'])
print(type(tensor))
dataset = tf.data.Dataset.from_tensor_slices(tensor)
dataset = dataset.map(func)

I want to use numpy to convert an EagerTensor to a numpy array and then do some operations using numpy, but very surprisingly I get a Tensor in func, and sadly I don't know how to do that with a plain Tensor.
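Dataset.map traces map_func once as a graph function, so inside it `item` is a symbolic Tensor with no .numpy() method. One common workaround (a sketch of the usual approach, not the only possible fix) is to wrap the Python/NumPy logic in tf.py_function, which executes eagerly and hands the wrapped function real EagerTensors; the string values here are illustrative:

```python
import tensorflow as tf

def func(item):
    # Inside tf.py_function, item is an EagerTensor, so .numpy() works.
    return item.numpy().decode("utf-8").upper()

dataset = tf.data.Dataset.from_tensor_slices([b"hello", b"world"])
# Wrap func so it runs eagerly instead of being traced into a graph.
dataset = dataset.map(lambda x: tf.py_function(func, [x], tf.string))
print([e.numpy() for e in dataset])  # [b'HELLO', b'WORLD']
```

Note that tf.py_function runs Python code on every element, so it bypasses graph optimizations; it is a pragmatic escape hatch rather than a replacement for pure-TF ops in map_func.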
tensorflow/tensorflow | Solved: while converting SavedModel to TFLite, ValueError: as_list() is not defined on an unknown TensorShape | Bug | System information: OS platform and distribution: Ubuntu 16.04. TensorFlow installed from: binary. TensorFlow version: 2.1.0. Command used to run the converter:

tflite_convert --output_file=model.tflite --saved_model_dir=... --input_arrays=serving_default_input_1 --input_shapes=1,800,800,3 --output_arrays=StatefulPartitionedCall:1

The output from the converter invocation (condensed; the same dynamic-library and device messages appear three times in the full log):

2020-04-01 15:38:11 I tensorflow/stream_executor/platform/default/dso_loader.cc:44 Successfully opened dynamic libraries libnvinfer.so.6, libnvinfer_plugin.so.6, libcuda.so.1, libcudnn.so.7
2020-04-01 15:38:12 I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555 Found devices 0 and 1 with properties: name: GeForce RTX 2080 Ti, computeCapability: 7.5, coreClock: 1.545GHz, coreCount: 68, deviceMemorySize: 10.73GiB, deviceMemoryBandwidth: 573.69GiB/s
2020-04-01 15:38:12 W tensorflow/stream_executor/platform/default/dso_loader.cc:55 Could not load dynamic libraries libcudart.so.10.1, libcublas.so.10, libcufft.so.10, libcurand.so.10, libcusolver.so.10, libcusparse.so.10 (cannot open shared object file: No such file or directory)
2020-04-01 15:38:12 W tensorflow/core/common_runtime/gpu/gpu_device.cc:1592 Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly. Skipping registering GPU devices...
2020-04-01 15:38:13 I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816 function_optimizer: Graph size after: 1542 nodes (1413), 2635 edges (2506), time = 71.453ms; function_optimizer: function_optimizer did nothing, time = 1.68ms
2020-04-01 15:38:15 I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816 constant_folding: Graph size after: 1300 nodes (-204), 2189 edges (-408), time = 140.857ms; constant_folding: Graph size after: 1300 nodes (0), 2189 edges (0), time = 89.953ms

Traceback (most recent call last):
  File "/home/bigdata/anaconda3/envs/cartoongan_v2/bin/tflite_convert", line 10: sys.exit(main())
  File ".../tensorflow_core/lite/python/tflite_convert.py", line 594, in main: app.run(main=run_main, argv=sys.argv[:1])
  File ".../tensorflow_core/python/platform/app.py", line 40, in run; .../absl/app.py", lines 300 and 251: sys.exit(main(argv))
  File ".../tensorflow_core/lite/python/tflite_convert.py", line 577, in run_main: _convert_tf2_model(tflite_flags)
  File ".../tensorflow_core/lite/python/tflite_convert.py", line 235, in _convert_tf2_model: tflite_model = converter.convert()
  File ".../tensorflow_core/lite/python/lite.py", line 442, in convert: shapes_list = tensor.shape.as_list()
  File ".../tensorflow_core/python/framework/tensor_shape.py", line 1166, in as_list: raise ValueError("as_list() is not defined on an unknown TensorShape.")
ValueError: as_list() is not defined on an unknown TensorShape.

Also, please include a link to the saved model or GraphDef: model.zip (it's small, about 1.7 MB). Failure details: cannot convert to tflite. (A second run at 16:44 produced the identical constant-folding log and traceback.)

Any other info / logs: the model was trained with TensorFlow 2.0.0a0, but I converted it under TensorFlow 2.1.0.
tensorflow/tensorflow | Information about Keras Tuner is missing from the tensorflow.org website | Bug | URL(s) with the issue: (none yet). Description of issue (what needs changing): in the TF Dev Summit 2020, Paige Bailey talked about Keras Tuner and showed its implementation. I like the functionality, but I couldn't find information/documentation about it on the tensorflow.org site. Clear description: this is a new functionality; documentation about it on the website would help the community. Correct links (is the link to the source code correct?): N/A. Parameters defined (are all parameters defined and formatted correctly?): N/A. Returns defined (are return values defined?): N/A. Raises listed and defined (are the errors defined?): N/A. Usage example (is there a usage example?): N/A.
tensorflow/tensorflow | padded_batch() missing 1 required positional argument: 'padded_shapes' in the line train_data = train_data.padded_batch(BATCH_SIZE) | Bug | padded_batch() missing 1 required positional argument: 'padded_shapes' in the line train_data = train_data.padded_batch(BATCH_SIZE).
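In TF versions before 2.2, Dataset.padded_batch required an explicit padded_shapes argument (it only became optional in later releases), so the call needs something like train_data.padded_batch(BATCH_SIZE, padded_shapes=([None], [])) for a (sequence, label) dataset — the exact shapes depend on the element structure and are an assumption here. Conceptually, padded_batch pads each element to the longest element of its batch; a plain-Python sketch of that behavior (not TF's implementation):

```python
def padded_batch(sequences, batch_size, pad_value=0):
    """Group variable-length lists into batches, padding every list in a
    batch to the length of its longest member (what padded_shapes=[None] means)."""
    batches = []
    for i in range(0, len(sequences), batch_size):
        batch = sequences[i:i + batch_size]
        max_len = max(len(s) for s in batch)
        batches.append([s + [pad_value] * (max_len - len(s)) for s in batch])
    return batches

print(padded_batch([[1], [2, 3], [4, 5, 6]], batch_size=2))
# [[[1, 0], [2, 3]], [[4, 5, 6]]]
```

Note that padding is per batch, not global: the second batch above is not padded to length 3 because its only member already has the batch maximum.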
tensorflow/tensorflow | Unclear explanation in the "Text classification with TensorFlow Hub: movie reviews" example | Bug | URL(s) with the issue: the "Build the model" section (please provide a link to the documentation entry, for example "Build the model"). Description of issue (what needs changing): the last layer in the model is model.add(tf.keras.layers.Dense(1)). However, the following description says "the last layer is densely connected with a single output node, using the sigmoid activation function." I checked the API docs and found that the default activation is None for a Dense layer. Without a sigmoid activation, the predictions are not in the range [0, 1], as shown below, which is not interpretable: preds = model.predict(test_data.batch(512)); print(preds) gives values such as 0.29496038, 1.2088487, 0.11580676, 1.610341, 0.8496179, 1.3117154. So should the example code be model.add(tf.keras.layers.Dense(1, activation='sigmoid'))?
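The raw Dense(1) outputs are unbounded logits; applying a sigmoid maps any real logit into the open interval (0, 1), which is why the tutorial's description only holds if activation='sigmoid' is set (or a sigmoid is applied to the predictions afterwards). A small stdlib sketch using the reported values:

```python
import math

def sigmoid(z):
    # Maps any real-valued logit into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

logits = [0.29496038, 1.2088487, 0.11580676, 1.610341, 0.8496179, 1.3117154]
probs = [sigmoid(z) for z in logits]

assert all(0.0 < p < 1.0 for p in probs)  # valid probabilities
assert any(z > 1.0 for z in logits)       # the raw logits were not
```

Either form is fine for training (BinaryCrossentropy(from_logits=True) works on raw logits), but for human-readable probabilities the sigmoid is needed somewhere.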
tensorflow/tensorflow | 'TFLiteConverter' object has no attribute '_experimental_new_quantizer' (TF 2.2.0-rc2) | Bug | System information: Have I written custom code: see below. OS platform and distribution: Ubuntu 18.04. TensorFlow installed from: source. TensorFlow version: 2.2.0-rc2. Python version: 3.6.9. Bazel version (if compiling from source): 2.0.0. Compiler version: Xcode 11.4. CUDA/cuDNN version: 10.2. GPU model and memory: GTX 1080 Ti.

Describe the current behavior: using the current code

converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file(dp_model_name)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_quant_model = converter.convert()

I get the following error and crash using TF 2.2.0-rc2:

tflite_quant_model = converter.convert()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py", line 1094, in convert: result = ... inference_input_type, inference_output_type ...
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py", line 267, in _calibrate_quantize_model: ... inference_output_type, allow_float, self._experimental_new_quantizer ...
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py", line 944, in __getattribute__: return object.__getattribute__(self, name)
AttributeError: 'TFLiteConverter' object has no attribute '_experimental_new_quantizer'

Describe the expected behavior: note, this error is specific to TF 2.2.0-rc2; all works well with TF 2.2.0-rc1.
tensorflow/tensorflow | Invalid output shape due to invalid input | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): yes. Mobile device: no. TensorFlow installed from (source or binary): TensorFlow 2.1, source. Python version: 3.7. CUDA/cuDNN version, GPU model and memory: no GPU.

Describe the current behavior: when kernel_size=0, the spatial size of the image increases.

Describe the expected behavior: a ValueError.

Standalone code to reproduce the issue:

import tensorflow as tf
inputs = tf.keras.layers.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(64, kernel_size=0)(inputs)
x = tf.keras.layers.Flatten()(x)
output = tf.keras.layers.Dense(64)(x)
model = tf.keras.Model(inputs, output)
model.summary()

Output: issue1 (screenshot attached).
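With 'valid' padding, a convolution's spatial output size is floor((in - kernel) / stride) + 1, so kernel_size=0 with stride 1 gives 32 - 0 + 1 = 33 > 32 — which is exactly why the reported model's feature map grows instead of an error being raised. A stdlib sketch of the formula with the missing validation (the guard is the suggested fix for illustration, not TF's actual code):

```python
def conv_output_size(input_size, kernel_size, stride=1):
    """Spatial output size of a 'valid'-padding convolution along one axis."""
    if kernel_size < 1 or stride < 1:
        raise ValueError("kernel_size and stride must be >= 1")
    return (input_size - kernel_size) // stride + 1

print(conv_output_size(32, 3))  # 30: a 3x3 kernel shrinks a 32-pixel axis
# Without the guard, kernel_size=0 would give (32 - 0) // 1 + 1 == 33 > 32,
# reproducing the "output larger than input" behavior in the report.
```

The same formula explains why kernel_size=1 leaves the size unchanged: (32 - 1) // 1 + 1 == 32.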
tensorflow/tensorflow | Colab TPUs broken on latest tf-nightly | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): ~6 lines of adaptation to a stock example script
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colaboratory
- TensorFlow version (use command below): 2.2.0.dev20200330
- Python version: 3.6.9

**Describe the current behavior**
On Google Colaboratory, TPUs no longer work with the tf-nightly builds. I am using the trick mentioned here (issuecomment-598399912) to run TPUs on tf-nightly, but it suddenly stopped working. When doing the same thing as before, it throws an error while trying to initialize the TPU system:

```
InvalidArgumentError: NodeDef expected inputs 'string' do not match 0 inputs specified;
Op<attr=T:type; attr=tensor_name:string; attr=send_device:string;
attr=send_device_incarnation:int; attr=recv_device:string;
attr=client_terminated:bool,default=false; is_stateful=true>; NodeDef: node _Send
```

**Describe the expected behavior**
The expected behavior is for it to not throw an error, so that the TPU works.

**Standalone code to reproduce the issue**
I have a minimal reproduction based on the TensorFlow TPU tutorial, with the trick from above (issuecomment-598399912) added before the first cell.

**Other info / logs**
The NodeDef error occurs at this cell:

```python
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
```

and the error it throws is:

```
INFO:tensorflow:Initializing the TPU system: grpc://10.79.85.146:8470
INFO:tensorflow:Initializing the TPU system: grpc://10.79.85.146:8470
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Clearing out eager caches
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>()
      1 resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
      2 tf.config.experimental_connect_to_cluster(resolver)
----> 3 tf.tpu.experimental.initialize_tpu_system(resolver)

3 frames
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: NodeDef expected inputs 'string' do not match 0 inputs specified; Op<attr=T:type; attr=tensor_name:string; attr=send_device:string; attr=send_device_incarnation:int; attr=recv_device:string; attr=client_terminated:bool,default=false; is_stateful=true>; NodeDef: node _Send
```

The Colab banner reads: "Note: The current TensorFlow version is 2.2.0.dev20200330. To use TF 1.x instead, restart your runtime (Ctrl+M .) and run `%tensorflow_version 1.x` before you run `import tensorflow`."

I have been trying to find out in which nightly this was introduced, but I sometimes get errors that I am trying too frequently. Anyway, I will list all the versions I tried:

- 2.2.0.dev20200327: works
- Nightlies that fail with the NodeDef error: 2.2.0.dev20200328, 2.2.0.dev20200329, 2.2.0.dev20200330

Some potentially useful, but maybe unrelated, information: I couldn't directly find which ones were working and which weren't, so I also tested older versions first, and noticed there were a lot of versions with a different error. Maybe this was fixed explicitly, or maybe implicitly, in which case it could add additional information:

- 2.2.0.dev20200311: works
- Nightlies that fail with a different error, namely a mesh_shape error (see error details below): 2.2.0.dev20200312, 2.2.0.dev20200313, 2.2.0.dev20200316, 2.2.0.dev20200319, 2.2.0.dev20200323

The mesh_shape error that occurs on certain nightlies, with the same notebook, in the same cell:

```
INFO:tensorflow:Initializing the TPU system: grpc://10.18.110.18:8470
INFO:tensorflow:Initializing the TPU system: grpc://10.18.110.18:8470
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Finished initializing TPU system.
INFO:tensorflow:Finished initializing TPU system.
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
      1 resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
      2 tf.config.experimental_connect_to_cluster(resolver)
----> 3 tf.tpu.experimental.initialize_tpu_system(resolver)

2 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/tpu/topology.py in _parse_topology(self, serialized)
    107     if len(self._mesh_shape) != 4 or any(self._mesh_shape < 1):
    108       raise ValueError("`mesh_shape` must be a vector of size 4 with positive "
    109                        "entries; got {}".format(self._mesh_shape))
    110 
    111     if proto.num_tasks < 0:

ValueError: `mesh_shape` must be a vector of size 4 with positive entries; got [2 2 2]
```
tensorflow/tensorflow | Custom loss: order of arguments | Bug | I've found a little mistake in the documentation on the following website: the order of `y_true` and `y_pred` is reversed.

```python
def loss(predicted_y, target_y):
  return tf.reduce_mean(tf.square(predicted_y - target_y))
```

It's usually the other way around: Keras losses are `mean_squared_error(y_true, y_pred)`. It makes no difference for MSE, since this loss is symmetric, but it does make a difference for a masked MSE (MMSE), where random values of the target are mapped to zero.
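The asymmetry the reporter mentions can be shown concretely: a masked MSE derives its mask from the *targets*, so swapping the arguments masks the wrong tensor. A plain-Python sketch (no TensorFlow; the `masked_mse` helper here is illustrative, not a Keras API):

```python
def masked_mse(y_true, y_pred, mask_value=0.0):
    # Ignore positions where the target equals the mask value, then average
    # squared error over the remaining positions.
    pairs = [(t, p) for t, p in zip(y_true, y_pred) if t != mask_value]
    return sum((t - p) ** 2 for t, p in pairs) / len(pairs)

y_true = [0.0, 2.0, 4.0]   # first entry is masked out
y_pred = [9.0, 1.0, 5.0]

correct = masked_mse(y_true, y_pred)  # ((2-1)^2 + (4-5)^2) / 2 = 1.0
swapped = masked_mse(y_pred, y_true)  # masks nothing: every prediction is nonzero
print(correct, swapped)  # 1.0 vs. 83/3
```

With the documented (reversed) order, the huge error at the masked position leaks into the loss, which is exactly why the argument order in the docs matters.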
tensorflow/tensorflow | UnboundLocalError: local variable 'logs' referenced before assignment, on training with little data | Bug | I found an error caused by an attempt to copy training `logs` from a not-yet-assigned variable. The error occurs on my machine (Arch Linux, TensorFlow v2.2-rc2 compiled from source), and I managed to reproduce it on Colab in a stock environment. It only happens when the model's `fit` method is called with very little training/eval data: the `logs` variable is assigned inside a `for` loop that never runs when there is not enough data. The code lives here (L793); the notebook gist (link) reproduces the bug.
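The failure pattern described above is easy to reproduce in isolation: a name bound only inside a loop body stays unbound when the iterable is empty. The sketch below mimics the pattern only; the function and names are illustrative, not TensorFlow's actual training-loop code.

```python
def run_epoch(batches):
    # `logs` is assigned only inside the loop body, so an empty `batches`
    # leaves it unbound, mirroring the reported bug.
    for batch in batches:
        logs = {"loss": sum(batch) / len(batch)}
    return logs  # UnboundLocalError when the loop never ran

print(run_epoch([[1.0, 2.0]]))  # {'loss': 1.5}
try:
    run_epoch([])  # too little data: the loop body never executes
except UnboundLocalError as e:
    print(e)
```

The usual fix is to initialize the variable before the loop (e.g. `logs = {}`), which is effectively what the linked patch location needs.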
tensorflow/tensorflow | MaxPooling1D layer causes ESP32 to crash | Bug | **TensorFlow Micro system information**
- Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Catalina 10.15.4
- TensorFlow installed from (source or binary): installed with pip (`pip install --upgrade tensorflow`)
- TensorFlow version (commit SHA if source): 2.1.0
- Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): ESP32

**Describe the problem**
Using a MaxPooling1D layer in my model causes the ESP32 to crash; however, the model works fine when I remove the MaxPooling1D layer. Here is the error from the exception:

```
Didn't find op for builtin opcode 'MAX_POOL_2D' version '2'
Failed to get registration from op code MAX_POOL_2D
AllocateTensors() failed
Guru Meditation Error: Core 1 panic'ed (LoadProhibited). Exception was unhandled.
```

**Please provide the exact sequence of commands/steps when you ran into the problem**
Here is my model:

```python
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(n_timesteps, 6)))
model.add(Conv1D(filters=32, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(4, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(lr=1e-3), metrics=['accuracy'])
```

To convert the model:

```python
converter = lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.optimizations = [lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
tfmodel = converter.convert()
open(path + "model.tflite", "wb").write(tfmodel)
```

For the op resolver I am using AllOpsResolver:

```cpp
static tflite::ops::micro::AllOpsResolver resolver;
```

If I look in the file all_ops_resolver.cc, there is no min/max version for MAX_POOL_2D:

```cpp
AddBuiltin(BuiltinOperator_MAX_POOL_2D, Register_MAX_POOL_2D());
```

Best regards, Victor Douet
tensorflow/tensorflow | Error with TensorFlow 2: "Inputs to eager execution function cannot be Keras symbolic tensors" | Bug | Hi everyone, the TensorFlow 2 release notes request that an issue be filed when experiencing problems with the new single execution code path. I regularly work with custom loss functions that require additional information other than the predictors and the observed outcome. A simple example is estimating a general binomial regression model, where the number of trials is part of the likelihood (loss) function but is not part of the predictors or the observed outcome (sample R code was provided). Would you please make available a permanent option in TensorFlow 2 to pass additional input layers into custom loss functions, so that we can keep using TensorFlow to estimate such models? I guess that setting `experimental_run_tf_function=False` is only a temporary fix. Thank you.
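One workaround that stays within the two-argument `loss(y_true, y_pred)` contract is to bind the extra information (here, the trial counts) into the loss via a closure factory. The sketch below shows the pattern in plain Python with a standard binomial negative log-likelihood; the factory name and data are illustrative, and in real Keras code the closed-over values would need to be constants or tensors aligned with the batch.

```python
import math

def make_binomial_nll(n_trials):
    """Return a two-argument loss with the per-observation trial counts baked in."""
    def loss(y_true, y_pred):
        # y_true: observed success counts; y_pred: predicted success probabilities.
        total = 0.0
        for k, p, n in zip(y_true, y_pred, n_trials):
            log_lik = math.log(math.comb(n, k)) + k * math.log(p) + (n - k) * math.log(1 - p)
            total += -log_lik
        return total / len(y_true)
    return loss

loss_fn = make_binomial_nll(n_trials=[10, 10])
print(loss_fn([3, 7], [0.3, 0.7]))
```

This sidesteps passing a third input into the loss, but it breaks down when the extra information varies per batch, which is exactly why the reporter asks for first-class support.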
tensorflow/tensorflow | saved_model_cli broken on 2.2rc | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): all platforms (Linux, Windows, macOS)
- TensorFlow installed from: binary
- TensorFlow version: 2.2rc0/1/2
- Python version: 3.6

Hi, `saved_model_cli` appears to be broken on all platforms and all versions except for rc0 on Linux. Simply trying to use `saved_model_cli` from the command line yields, on all versions and all platforms:

```
Traceback (most recent call last):
  File "C:\Users\Marco\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\Marco\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Marco\repos\tfutils\venv\Scripts\saved_model_cli.exe\__main__.py", line 5, in <module>
  File "C:\Users\Marco\repos\tfutils\venv\lib\site-packages\tensorflow\python\tools\saved_model_cli.py", line 51, in <module>
    from tensorflow.python.tools import saved_model_aot_compile
ImportError: cannot import name 'saved_model_aot_compile' from 'tensorflow.python.tools' (C:\Users\Marco\repos\tfutils\venv\lib\site-packages\tensorflow\python\tools\__init__.py)
```

Linux with 2.2rc0 appears to be the only exception:

```
usage: saved_model_cli [-h] [-v] {show,run,scan,convert,aot_compile_cpu} ...
saved_model_cli: error: too few arguments
```

Best regards, Marco
tensorflow/tensorflow | Using SavedModel with the low-level API in TF 2.x | Bug | I am not exactly sure whether this is a bug, but it seems so.

**System information**
- I have custom code
- OS: Windows
- TensorFlow 2.1.0 (conda, MKL version)
- Python 3.7

**Describe the current behavior**
I am developing a system that uses TensorFlow's SavedModel format to serve models in a custom embedded environment, so I wanted to create a basic computational graph and move on from there, to test my toolchain and find bugs easily. For that, I tried to create a simple graph as below.

**Standalone code to reproduce the issue**

```python
import pathlib

import tensorflow as tf
import tensorflow_core as tfcore

graph = tf.Graph()
with graph.as_default():
    a = tf.raw_ops.Placeholder(dtype=tf.dtypes.float32, shape=(None,), name="a")
    b = tf.raw_ops.Placeholder(dtype=tf.dtypes.float32, shape=(None,), name="b")
    result = tf.raw_ops.Add(x=a, y=b, name="result")

with tfcore.python.Session(graph=graph) as sess:
    script_dir = pathlib.Path(__file__).resolve().parent
    builder = tfcore.python.saved_model.builder.SavedModelBuilder(
        str(script_dir / "saved_model" / "save"))
    signature = tfcore.python.saved_model.signature_def_utils.predict_signature_def(
        inputs={"a": a, "b": b}, outputs={"result": result})
    builder.add_meta_graph_and_variables(
        sess=sess,
        signature_def_map={"predict": signature},
        tags=["test_tag"],
        main_op=result)
    builder.save(as_text=True)
```

However, this throws an exception when executed:

```
2020-03-30 13:10:56.894798: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2020-03-30 13:10:56.897115: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
WARNING:tensorflow:From C:\Users\ongun\miniconda3\envs\npu-tf\lib\site-packages\tensorflow_core\python\saved_model\signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
Traceback (most recent call last):
  File "C:\Users\ongun\Code\02-npu\simple-tf-models\sum_graph.py", line 25, in <module>
    main_op=result)
  File "C:\Users\ongun\miniconda3\envs\npu-tf\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "C:\Users\ongun\miniconda3\envs\npu-tf\lib\site-packages\tensorflow_core\python\saved_model\builder_impl.py", line 582, in add_meta_graph_and_variables
    main_op=main_op or legacy_init_op,
  File "C:\Users\ongun\miniconda3\envs\npu-tf\lib\site-packages\tensorflow_core\python\framework\ops.py", line 757, in __bool__
    self._disallow_bool_casting()
  File "C:\Users\ongun\miniconda3\envs\npu-tf\lib\site-packages\tensorflow_core\python\framework\ops.py", line 526, in _disallow_bool_casting
    self._disallow_in_graph_mode("using a `tf.Tensor` as a Python `bool`")
  File "C:\Users\ongun\miniconda3\envs\npu-tf\lib\site-packages\tensorflow_core\python\framework\ops.py", line 515, in _disallow_in_graph_mode
    " this function with @tf.function.".format(task))
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
```

I traced the bug a little bit, and the line builder_impl.py#L537 seems to cause the error by trying to treat `main_op` as a boolean. A simple fix seems viable by changing the line to:

```python
main_op = main_op if main_op is not None else legacy_init_op
```

I can create a PR if that's an appropriate fix.
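The difference between `main_op or legacy_init_op` and the reporter's proposed `main_op if main_op is not None else legacy_init_op` can be shown without TensorFlow: `or` implicitly calls `__bool__` on its left operand, which graph-mode tensors forbid, while the `is not None` form never does. The `GraphTensor` class below is an illustrative stand-in for a graph-mode `tf.Tensor`.

```python
class GraphTensor:
    """Stand-in for a graph-mode tensor that forbids truthiness checks."""
    def __bool__(self):
        raise TypeError("using a tensor as a Python bool is not allowed "
                        "in graph execution")

main_op, legacy_init_op = GraphTensor(), None

try:
    chosen = main_op or legacy_init_op  # `or` triggers __bool__ and blows up
except TypeError as e:
    print(e)

chosen = main_op if main_op is not None else legacy_init_op  # no __bool__ call
print(chosen is main_op)  # True
```

This is why the proposed one-line change is sufficient: it replaces a truthiness test with an identity test against `None`.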
tensorflow/tensorflow | Custom metrics and losses: AttributeError: 'Tensor' object has no attribute 'numpy' raised during training | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution: Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.1.0
- Python version: 3.6.9 (64-bit)
- GCC/Compiler version (if compiling from source): Ubuntu 7.5.0-3ubuntu1~18.04
- CUDA/cuDNN version / GPU model and memory: no

**Describe the current behavior**
I am trying to implement a custom metric function as well as a custom loss function. Both implementations face the same issue, so I am going to focus this post on just one of them. As an example, we have the dummy code below. The current behavior is `AttributeError: 'Tensor' object has no attribute 'numpy'`. The full log is also shown below.

**Describe the expected behavior**
My goal is to access the values of a tensor during the `fit` method, in order to make calculations based on, say, values stored in both `y_true` and `y_pred`. These calculations cannot be done using built-in Keras backend functions. I wrote the dummy function `test` just to illustrate the issue. If only `tf.print` is used, the code runs and the values in the tensor are printed on stdout after the fit is done. However, if I try something like `y_true.numpy()` or `print(y_true.numpy())`, the code returns `AttributeError: 'Tensor' object has no attribute 'numpy'`. I have tried several methods from several StackOverflow and GitHub threads (e.g. #27519, #36979), including combinations of `sess = tf.Session()` with `eval()` and `tf.GradientTape`, but somehow failed to implement any of them successfully. Does anyone know how to solve this problem?

**Standalone code to reproduce the issue**

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.metrics import Metric

X, Y = [], []
for _ in range(10):
    X.append(np.arange(10))
    Y.append(np.random.randint(0, 2))
X = np.reshape(X, (len(X), 1, len(X[0])))
Y = np.asarray(Y)
print(tf.convert_to_tensor(X).numpy())

class CustomMetric(Metric):
    def __init__(self, name='custom_metric', **kwargs):
        super(CustomMetric, self).__init__(name=name, **kwargs)
        self.true_positives = self.add_weight(name='tp', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        self.test(y_true, y_pred)
        # In a real application, new_metric would be a function that depends
        # on the values stored in both y_true and y_pred
        new_metric = 0.1
        self.true_positives.assign_add(tf.reduce_sum(new_metric))

    def result(self):
        return self.true_positives

    def reset_states(self):
        self.true_positives.assign(0.)

    def test(self, y_true, y_pred):
        tf.print(y_true)
        print(y_true.numpy())

model = Sequential([
    LSTM(5,
         input_shape=(np.asarray(X).shape[1], np.asarray(X).shape[2]),
         return_sequences=True,
         recurrent_initializer='glorot_uniform',
         activation='tanh',
         recurrent_dropout=0.2,
         dropout=0.2),
    Dense(2, activation='softmax')
])
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['sparse_categorical_accuracy', CustomMetric()])
model.run_eagerly = True
model.fit(X, Y, epochs=1, batch_size=1)
```

**Other info / logs** (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached)

```
array([[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]],
       [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]],
       [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]],
       [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]],
       [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]],
       [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]],
       [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]],
       [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]],
       [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]],
       [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]])
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/path/to/file.py in <module>
     54     optimizer='adam',
     55     loss='sparse_categorical_crossentropy',
---> 56     metrics=['sparse_categorical_accuracy', CustomMetric()])
     57 
     58 model.run_eagerly = True

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, distribute, **kwargs)
    437         targets=self._targets,
    438         skip_target_masks=self._prepare_skip_target_masks(),
--> 439         masks=self._prepare_output_masks())
    440 
    441     # Prepare sample weight modes. List with the same length as model outputs.

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py in _handle_metrics(self, outputs, targets, skip_target_masks, sample_weights, masks, return_weighted_and_unweighted_metrics, return_weighted_metrics)
   2002       metric_results.extend(
   2003           self._handle_per_output_metrics(self._per_output_metrics[i],
-> 2004                                           target, output, output_mask))
   2005       if return_weighted_and_unweighted_metrics or return_weighted_metrics:
   2006         metric_results.extend(

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py in _handle_per_output_metrics(self, metrics_dict, y_true, y_pred, mask, weights)
   1953       with K.name_scope(metric_name):
   1954         metric_result = training_utils.call_metric_function(
-> 1955             metric_fn, y_true, y_pred, weights=weights, mask=mask)
   1956         metric_results.append(metric_result)
   1957     return metric_results

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py in call_metric_function(metric_fn, y_true, y_pred, weights, mask)
   1153 
   1154   if y_pred is not None:
-> 1155     return metric_fn(y_true, y_pred, sample_weight=weights)
   1156   # `Mean` metric only takes a single value.
   1157   return metric_fn(y_true, sample_weight=weights)

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/metrics.py in __call__(self, *args, **kwargs)
    194     from tensorflow.python.keras.distribute import distributed_training_utils  # pylint:disable=g-import-not-at-top
    195     return distributed_training_utils.call_replica_local_fn(
--> 196         replica_local_fn, *args, **kwargs)
    197 
    198   @property

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/distribute/distributed_training_utils.py in call_replica_local_fn(fn, *args, **kwargs)
   1133     with strategy.scope():
   1134       return strategy.extended.call_for_each_replica(fn, args, kwargs)
-> 1135   return fn(*args, **kwargs)
   1136 
   1137 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/metrics.py in replica_local_fn(*args, **kwargs)
    177     def replica_local_fn(*args, **kwargs):
    178       """Updates the state of the metric in a replica-local context."""
--> 179       update_op = self.update_state(*args, **kwargs)  # pylint: disable=not-callable
    180       with ops.control_dependencies([update_op]):
    181         result_t = self.result()  # pylint: disable=not-callable

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/utils/metrics_utils.py in decorated(metric_obj, *args, **kwargs)
     74 
     75     with tf_utils.graph_context_for_symbolic_tensors(*args, **kwargs):
---> 76       update_op = update_state_fn(*args, **kwargs)
     77     if update_op is not None:  # update_op will be None in eager execution.
     78       metric_obj.add_update(update_op)

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
    566         xla_context.Exit()
    567     else:
--> 568       result = self._call(*args, **kwds)
    569 
    570     if tracing_count == self._get_tracing_count():

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
    613       # This is the first call of __call__, so we have to initialize.
    614       initializers = []
--> 615       self._initialize(args, kwds, add_initializers_to=initializers)
    616     finally:
    617       # At this point we know that the initialization is complete (or less

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
    495     self._concrete_stateful_fn = (
    496         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
--> 497             *args, **kwds))
    498 
    499     def invalid_creator_scope(*unused_args, **unused_kwds):

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   2387       args, kwargs = None, None
   2388     with self._lock:
-> 2389       graph_function, _, _ = self._maybe_define_function(args, kwargs)
   2390     return graph_function
   2391 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   2701 
   2702       self._function_cache.missed.add(call_context_key)
-> 2703       graph_function = self._create_graph_function(args, kwargs)
   2704       self._function_cache.primary[cache_key] = graph_function
   2705       return graph_function, args, kwargs

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   2591             arg_names=arg_names,
   2592             override_flat_arg_shapes=override_flat_arg_shapes,
-> 2593             capture_by_value=self._capture_by_value),
   2594         self._function_attributes,
   2595         # Tell the ConcreteFunction to clean up its graph once it goes out of

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    976         converted_func)
    977 
--> 978     func_outputs = python_func(*func_args, **func_kwargs)
    979 
    980     # invariant: `func_outputs` contains only Tensors, CompositeTensors,

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    437           # Wrapped allows AutoGraph to swap in a converted function. We give
    438           # the function a weak reference to itself to avoid a reference cycle.
--> 439           return weak_wrapped_fn().__wrapped__(*args, **kwds)
    440         weak_wrapped_fn = weakref.ref(wrapped_fn)
    441 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/func_graph.py in wrapper(*args, **kwargs)
    966             except Exception as e:  # pylint:disable=broad-except
    967               if hasattr(e, "ag_error_metadata"):
--> 968                 raise e.ag_error_metadata.to_exception(e)
    969               else:
    970                 raise

AttributeError: in converted code:

    /path/to/file.py:7 update_state
        self.test(y_true, y_pred)
    /path/to/file.py:20 test
        print(y_true.numpy())

    AttributeError: 'Tensor' object has no attribute 'numpy'
```
tensorflow/tensorflow | Couldn't match files for checkpoint | Bug | TensorFlow can't load weights from a folder with a symbol in its name, such as `a b`, but if I simply change the folder name to `a`, it works well. See the notebook to reproduce this issue yourself.

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution: macOS 10.15.2
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below): v2.2.0-rc0-43-gacf4951a2f 2.2.0-rc1
- Python version: Python 3.7.7 (default, Mar 10 2020, 15:43:33)

**Describe the current behavior**

```python
model.load_weights("data/dev/checkpoints/a b/cp1.ckpt")
```

```
Traceback (most recent call last):
  File "/Users/izhangzhihao/Library/Caches/pypoetry/virtualenvs/ve-0mkn22n3-py3.7/lib/python3.7/site-packages/tensorflow/python/training/py_checkpoint_reader.py", line 95, in NewCheckpointReader
    return CheckpointReader(compat.as_bytes(filepattern))
RuntimeError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for data/dev/checkpoints/a b/cp1.ckpt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/izhangzhihao/Library/Caches/pypoetry/virtualenvs/ve-0mkn22n3-py3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 249, in load_weights
    return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
  File "/Users/izhangzhihao/Library/Caches/pypoetry/virtualenvs/ve-0mkn22n3-py3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py", line 1226, in load_weights
    py_checkpoint_reader.NewCheckpointReader(filepath)
  File "/Users/izhangzhihao/Library/Caches/pypoetry/virtualenvs/ve-0mkn22n3-py3.7/lib/python3.7/site-packages/tensorflow/python/training/py_checkpoint_reader.py", line 99, in NewCheckpointReader
    error_translator(e)
  File "/Users/izhangzhihao/Library/Caches/pypoetry/virtualenvs/ve-0mkn22n3-py3.7/lib/python3.7/site-packages/tensorflow/python/training/py_checkpoint_reader.py", line 35, in error_translator
    raise errors_impl.NotFoundError(None, None, error_message)
tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for data/dev/checkpoints/a b/cp1.ckpt
```

**Describe the expected behavior**

```python
model.load_weights("data/dev/checkpoints/a/cp1.ckpt")  # works
```

**Standalone code to reproduce the issue**
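A plausible mechanism for the failure above is that the checkpoint reader locates its shards via a file *pattern* rather than a literal path, so pattern metacharacters in a directory name match nothing. The exact special character in the reporter's folder name is mangled in this record; the plain-Python sketch below uses `[b]` as a hypothetical example of a character sequence that glob interprets as a character class, with made-up directory names.

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Two checkpoint folders: a plain name and one with glob metacharacters.
    for name in ("a", "a[b]"):
        d = os.path.join(root, name)
        os.makedirs(d)
        open(os.path.join(d, "cp1.ckpt.index"), "w").close()

    # Pattern matching works for the plain directory name...
    print(glob.glob(os.path.join(root, "a", "cp1.ckpt*")))
    # ...but 'a[b]' is read as the character class [b], so nothing matches.
    print(glob.glob(os.path.join(root, "a[b]", "cp1.ckpt*")))  # []
```

Whether TensorFlow's TensorSliceReader uses exactly these glob semantics is an assumption here; the sketch only illustrates why a special symbol in a directory name can make "find matching files" come back empty while the file plainly exists.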
tensorflow/tensorflow | When I replace fit_generator with fit, the behavior is inconsistent | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installed from (source or binary): conda
- TensorFlow version (use command below): 2.0.0
- Python version: 3.7.6

**Describe the current behavior**
I train a multi-output model with a custom data generator. It runs successfully with the API `model.fit_generator`, but when I swap to `model.fit`, it breaks. I found that `fit` cannot handle multi-output targets in a list, such as `yield x, [y1, y2]`, correctly, while a tuple, such as `yield x, (y1, y2)`, is OK.

**Describe the expected behavior**
I think both `fit_generator` and `fit` should have consistent behavior on the same generator.

**Standalone code to reproduce the issue**

```python
import numpy as np
from tensorflow.keras import layers, optimizers, losses, Model, Input

inputs = Input(shape=(10,), name='img_input')
x1 = layers.Dense(5)(inputs)
x2 = layers.Dense(2)(inputs)
model = Model(inputs=inputs, outputs=[x1, x2])
model.compile(optimizer=optimizers.Adam(), loss=losses.categorical_crossentropy)

img_data = np.random.random_sample(size=(1, 10))
targets_0 = np.random.random_sample(size=(1, 5))
targets_1 = np.random.random_sample(size=(1, 2))

def generator_tuple():
    while True:
        yield img_data, (targets_0, targets_1)

def generator_list():
    while True:
        yield img_data, [targets_0, targets_1]

model.fit_generator(generator_tuple(), steps_per_epoch=1, epochs=3)  # OK
model.fit(generator_tuple(), steps_per_epoch=1, epochs=3)            # OK
model.fit_generator(generator_list(), steps_per_epoch=1, epochs=3)   # OK
model.fit(generator_list(), steps_per_epoch=1, epochs=3)             # raises error
```

**Other info / logs** (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached)

```
Traceback (most recent call last):
  File "C:/Users/yuyang/Documents/CodeHub/YOLOv3/bug.py", line 40, in <module>
    model.fit(generator_list(), steps_per_epoch=1, epochs=3)
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 728, in fit
    use_multiprocessing=use_multiprocessing)
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 224, in fit
    distribution_strategy=strategy)
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 547, in _process_training_inputs
    use_multiprocessing=use_multiprocessing)
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 606, in _process_inputs
    use_multiprocessing=use_multiprocessing)
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py", line 566, in __init__
    reassemble, nested_dtypes, output_shapes=nested_shape)
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 540, in from_generator
    output_types, tensor_shape.as_shape, output_shapes)
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\data\util\nest.py", line 471, in map_structure_up_to
    results = [func(*tensors) for tensors in zip(*all_flattened_up_to)]
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\data\util\nest.py", line 471, in <listcomp>
    results = [func(*tensors) for tensors in zip(*all_flattened_up_to)]
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\framework\tensor_shape.py", line 1216, in as_shape
    return TensorShape(shape)
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\framework\tensor_shape.py", line 776, in __init__
    self._dims = [as_dimension(d) for d in dims_iter]
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\framework\tensor_shape.py", line 776, in <listcomp>
    self._dims = [as_dimension(d) for d in dims_iter]
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\framework\tensor_shape.py", line 718, in as_dimension
    return Dimension(value)
  File "C:\Users\yuyang\Miniconda3\envs\tf2\lib\site-packages\tensorflow_core\python\framework\tensor_shape.py", line 193, in __init__
    self._value = int(value)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'tuple'
```
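The shape of the traceback (a whole shape tuple being fed to `int()`) suggests a structure-inference mismatch: when tuples and lists are normalized differently, a nested shape lands where a single dimension was expected. The toy sketch below mimics, but does not reproduce, that mechanism in plain Python; `as_shape` and `infer_shapes` are illustrative names, not Keras internals.

```python
def as_shape(dims):
    # Stand-in for TensorShape(dims): each element must be a single dimension.
    return tuple(int(d) for d in dims)

def infer_shapes(targets):
    if isinstance(targets, tuple):   # multi-output: one shape per target
        return tuple(as_shape(t) for t in targets)
    return as_shape(targets)         # single output: targets *are* the dims

print(infer_shapes(((1, 5), (1, 2))))  # ((1, 5), (1, 2))
try:
    # A list of shapes falls into the single-output branch, so a whole
    # shape tuple reaches int(), just like the reported TypeError.
    infer_shapes([(1, 5), (1, 2)])
except TypeError as e:
    print(e)
```

Treating lists and tuples of targets identically during structure inference (as `fit_generator` effectively did) removes the inconsistency the reporter describes.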
tensorflow/tensorflow | Wrong usage of tf.keras.layers.Layer._maybe_build | Bug | The function `tf.keras.layers.Layer._maybe_build` (base_layer.py L2357-L2394) is used inappropriately by some functions inside the TensorFlow implementation. For example, `tf.keras.layers.Layer.compute_output_shape` (L663-L705) calls `_maybe_build` (L2357-L2394) with an `input_shape` argument (L687), where it is supposed to receive `inputs`. `_maybe_build` tries to detect the input shape by accessing the `.shape` attribute (L2371), which doesn't exist, because the argument is not a tensor but already an input shape. Hence, this function doesn't work as expected. This incident is eventually reported to the user as a layer output-shape inference failure, with a suggestion to override `compute_output_shape`.

Although this error can be solved by overriding `compute_output_shape` as suggested by the error message, there are many cases where users wouldn't need to do so if the function worked properly. Possible ways to resolve this that I can think of are:

1. Add a keyword argument `input_shape` to `_maybe_build`, so that it can receive either `inputs` or `input_shape`.
2. Create a dummy tensor inside `compute_output_shape`, which can be used as the `inputs` argument to `_maybe_build`.

I think either way won't take that much time to implement, so I hope this can be fixed soon. If the developers on the TensorFlow team are busy and can't spare time for this issue, I can work on it and open a pull request. In that case, please give me some suggestions or opinions regarding the implementation or the modification, so that it can be approved and merged smoothly. Thank you.
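The mix-up described above is a duck-typing mismatch: a helper that expects *tensors* (objects carrying a `.shape` attribute) is handed a bare shape tuple, so the attribute lookup fails. A plain-Python sketch of the pattern, with illustrative names rather than the actual Keras internals:

```python
class FakeTensor:
    """Minimal tensor-like object: the only contract is a .shape attribute."""
    def __init__(self, shape):
        self.shape = shape

def maybe_build(inputs):
    # Assumes a tensor-like input, mirroring _maybe_build's `inputs.shape`.
    return inputs.shape

def compute_output_shape(input_shape):
    # Bug pattern: passes the *shape* where a *tensor* is expected.
    return maybe_build(input_shape)

print(maybe_build(FakeTensor((None, 32))))  # (None, 32): correct usage
try:
    compute_output_shape((None, 32))        # a tuple has no .shape attribute
except AttributeError as e:
    print(e)
```

Both fixes proposed in the report map onto this sketch: either let `maybe_build` accept a shape directly, or wrap the shape in a dummy tensor-like object before the call.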
tensorflow/tensorflow | Segfault in FlexDelegate on Android | Bug | I'm hoping to run a custom TensorFlow tflite model, one that uses TFLite's select ops, on-device in an Android app. My understanding is that I need to configure the TFLite interpreter with a FlexDelegate, but when I try to do this in the Android Studio emulator, the app segfaults, apparently in the FlexDelegate constructor. I've managed to reproduce the crash in minimal code, which I link to and describe below. Thanks in advance for any help on this, and thanks also to all the devs for creating TensorFlow.

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): only a little. I've added a call to the FlexDelegate constructor in the MainActivity of the default Flutter app that Android Studio generates when you tell it to start a new project.
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): the crash I'm seeing happens in an Android phone emulator, but the box the emulator runs on is running Gentoo Linux.
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: the Android emulator that Android Studio provides. I've tested a few configurations, including API 27, 29, and R, as well as x86 and x86_64 ABIs.
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): in app/build.gradle:

```gradle
implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly'
```

**Describe the current behavior**
The app crashes while attempting to construct a FlexDelegate instance while running in the emulator. (I actually don't have a physical device handy, so I can't test whether it happens on real hardware right now.)

**Describe the expected behavior**
FlexDelegate should be created with no segfault.

**Standalone code to reproduce the issue**
The line that crashes is:

```java
FlexDelegate delegate = new FlexDelegate();
```

which I've added to the `configureFlutterEngine` method of the app's MainActivity. I've put the code for the full example app in this repository.

**Other info / logs** (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached)

The error message in the logcat is:

```
2020-03-29 16:22:01.392 11042-11042/com.example.tflitebugreport A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0xfffffff4 in tid 11042 (tflitebugreport), pid 11042 (tflitebugreport)
```
tensorflow/tensorflow | Unable to build micro_speech for SparkFun Edge | Bug | TensorFlow Micro.

System information:
- Host OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu
- TensorFlow installed from (source or binary): source
- TensorFlow version (commit SHA if from source): 44400dfcde6e39aca68c4bc103c2e4e15b5379c5
- Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): SparkFun Edge

Describe the problem: I'm trying to build the micro_speech example for SparkFun Edge from the master branch, but I get a few compile errors.

Please provide the exact sequence of commands/steps when you ran into the problem: I was following the documented steps. The very first step, `make -f tensorflow/lite/micro/tools/make/Makefile TARGET=sparkfun_edge TAGS=cmsis-nn micro_speech_bin`, results in the following compile errors and warnings:

tensorflow/lite/micro/kernels/cmsis-nn/softmax.cc: In function 'void tflite::ops::micro::activations::SoftmaxQuantized(const TfLiteTensor*, TfLiteTensor*, const tflite::SoftmaxParams&)':
tensorflow/lite/micro/kernels/cmsis-nn/softmax.cc:97:30: error: 'input_shape' was not declared in this scope
    const int trailing_dim = input_shape.DimensionsCount() - 1;
tensorflow/lite/micro/kernels/cmsis-nn/softmax.cc:97:30: note: suggested alternative: 'InitState'
    const int trailing_dim = input_shape.DimensionsCount() - 1;  // InitState
tensorflow/lite/micro/kernels/cmsis-nn/softmax.cc:99:60: error: 'output_shape' was not declared in this scope
    MatchingFlatSizeSkipDim(input_shape, trailing_dim, output_shape);
tensorflow/lite/micro/kernels/cmsis-nn/softmax.cc:99:60: note: suggested alternative: 'outer_size'
    MatchingFlatSizeSkipDim(input_shape, trailing_dim, output_shape);  // outer_size

Any help is super appreciated. Cheers!
tensorflow/tensorflow | Failed to get convolution algorithm — checked existing solutions but none work | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No, using this example
- OS platform and distribution: Ubuntu 19.10
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 2.1.0
- Python version: 3.7.4
- GCC/compiler version (if compiling from source): 7.3.0
- CUDA/cuDNN version: 10.1.243 and 7.6.4
- GPU model and memory: RTX 2060S, 8 GB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"` 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior: Training fails.
Describe the expected behavior: Training proceeds.
Standalone code to reproduce the issue: Jupyter notebook, downloaded locally.

Other info / logs:

WARNING:tensorflow: From <ipython-input-6>: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version. Instructions for updating: Please use Model.fit, which supports generators.
WARNING:tensorflow: sample_weight modes were coerced from ... to [...]
Train for 500 steps, validate for 250 steps
Epoch 1/15
1/500 [..............................] - ETA: 9:01

UnknownError Traceback (most recent call last)
<ipython-input> in <module>: model.fit_generator(..., epochs=epochs, validation_data=val_data_gen, validation_steps=total_val // batch_size)
.local/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py:324 new_func: return func(*args, **kwargs)
.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py:1306 fit_generator: initial_epoch=initial_epoch
.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py:819 fit: use_multiprocessing=use_multiprocessing
.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py:342 fit: total_epochs=epochs
.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py:128 run_one_epoch: batch_outs = execution_function(iterator)
.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py:98 execution_function: distributed_function(input_fn)
.local/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py:568 __call__: result = self._call(*args, **kwds)
.local/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py:632 _call: return self._stateless_fn(*args, **kwds)
.local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py:2363 __call__: return graph_function._filtered_call(args, kwargs)
.local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py:1611 _filtered_call: self.captured_inputs
.local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py:1692 _call_flat: ctx, args, cancellation_manager=cancellation_manager
.local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py:545 call: ctx=ctx
.local/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py:67 quick_execute: six.raise_from(core._status_to_exception(e.code, message), None)
conda/envs/fcgf/lib/python3.7/site-packages/six.py in raise_from(value, from_value)

UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node sequential/conv2d/Conv2D (defined at <ipython-input-6>)]] [Op:__inference_distributed_function_1027] Function call stack: distributed_function
tensorflow/tensorflow | tf.estimator.add_metrics ends in "Shapes (None, 12) and (None,) are incompatible" | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10, 64-bit
- TensorFlow installed from (source or binary): PyCharm
- TensorFlow version (use command below): 2.0.0
- Python version: 3.7.6

Describe the current behavior: I am using a DNNClassifier as my estimator and want to add some additional metrics to it. The code I am using is basically the one from the tf.estimator.add_metrics documentation. The model works fine without the add_metrics statement, but runs into a ValueError "Shapes (None, 12) and (None,) are incompatible" when including it. The error occurs in the line `auc_metric.update_state(y_true=labels, y_pred=predictions['logits'])`, which is called by `est.evaluate(validation_data)`. It is not clear to me why this happens, but it seems like the y_true parameter is not filled correctly; that is, the label column is not passed correctly to the function. This seems strange, since the model works correctly without the additional metric. The training and validation data are created by the following function:

def get_dataset_from_tensor_slices(data_input, label_column, n_epochs=None, shuffle=True):
    def get_dataset():
        dataset = tf.data.Dataset.from_tensor_slices((dict(data_input), label_column))
        if shuffle:
            dataset = dataset.shuffle(len(label_column))
        # For training, cycle through the dataset as many times as needed (n_epochs=None)
        dataset = dataset.repeat(n_epochs)
        # In-memory training doesn't use batching
        dataset = dataset.batch(len(label_column))
        return dataset
    return get_dataset

Describe the expected behavior: It should be possible to add an additional metric to the estimator.

Standalone code to reproduce the issue:

def my_auc(labels, predictions):
    auc_metric = tf.keras.metrics.AUC(name="my_auc")
    auc_metric.update_state(y_true=labels, y_pred=predictions['logits'])
    return {'auc': auc_metric}

def model_evaluation(features, training_data, validation_data, labels, validation_label_column):
    hidden_layer = len(training_data().element_spec[0])
    final_layer = len(labels)
    est = tf.estimator.DNNClassifier(
        feature_columns=features,
        hidden_units=[hidden_layer, hidden_layer * 2, hidden_layer * 4, final_layer],
        n_classes=final_layer,
        label_vocabulary=labels)
    est = tf.estimator.add_metrics(est, my_auc)
    est.train(training_data, max_steps=100)
    result = est.evaluate(validation_data)

Other info / logs: As far as I have debugged it, the problem goes back to the fact that the labels created by the get_dataset_from_tensor_slices method have shape (None,) — maybe that's the problem; how can I fix it? — whereas the predictions are generated with shape (None, 12), where 12 is the number of possible labels. Does anybody know why this happens? Any help is appreciated.
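The mismatch above — sparse labels of shape (None,) against per-class predictions of shape (None, 12) — can be illustrated in plain Python by one-hot encoding the labels before the metric update. This is only a sketch of the shape relationship, not the estimator's actual code; the class count 12 and the label values are made up for illustration. In TensorFlow the equivalent step would be something like `tf.one_hot(labels, depth=12)` applied before `auc_metric.update_state`.

```python
def one_hot(labels, num_classes):
    """Expand sparse integer labels of shape (N,) into one-hot rows of shape (N, num_classes)."""
    return [[1 if i == label else 0 for i in range(num_classes)]
            for label in labels]

labels = [0, 3, 11]            # sparse labels, shape (3,)
encoded = one_hot(labels, 12)  # one-hot labels, shape (3, 12)
print(len(encoded), len(encoded[0]))  # 3 12
```

Once both sides have a trailing class dimension of 12, the metric's shape check no longer fails.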
tensorflow/tensorflow | module 'tensorflow.python.keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects' | Bug | Please make sure that this is a bug as per our GitHub policy. The function populate_dict_with_module_objects is missing in 2.2.0rc2.

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"` 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior:
Describe the expected behavior:
Standalone code to reproduce the issue: Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter or any notebook.
Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow | TF 2 ignores one of 2 GPUs | Bug | Dear community, I have a problem regarding TensorFlow computation on 2 GPUs connected via SLI technology: only one of them is working and the second one is not, although both GPUs are recognized by TF.

Setup: Ubuntu 18.04, Python 3, TensorFlow 2.1, CUDA 10.1, NVIDIA driver (official) 440.64, AMD Ryzen 2700, ASUS X470 Prime, two GTX 1070 GPUs connected via SLI.

I have already tested many things that I found on the internet. Concretely:
1. I started with TensorFlow 2.0; it did not work, so I updated to TF 2.1 — the problem remains.
2. I purged and reinstalled the NVIDIA drivers (430.50) and updated them to 440.64 — the problem remains.
3. I verified each of my GPUs separately: I physically removed one of them and launched the code on the remaining one. It works, so the GPUs seem to be OK.
4. I verified each of the GPU ports on my motherboard separately. They work, which means each of the ports is fine.
5. I inserted the two GPUs, with and without the hardware SLI connection, and launched the following code:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import Xception
import numpy as np

num_samples = 100
height = 224
width = 224
num_classes = 50

strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
with strategy.scope():
    parallel_model = Xception(weights=None, input_shape=(height, width, 3), classes=num_classes)
    parallel_model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
# Works only for the first GPU:
# parallel_model = Xception(weights=None, input_shape=(height, width, 3), classes=num_classes)
# parallel_model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

print("Num GPUs Available:", len(tf.config.experimental.list_physical_devices('GPU')))
# Generate dummy data
x = np.random.random((num_samples, height, width, 3))
y = np.random.random((num_samples, num_classes))
parallel_model.summary()
# This fit call will be distributed on 8 GPUs; since the batch size is 256, each GPU will process 32 samples
parallel_model.fit(x, y, epochs=20, batch_size=16)

As a result, with `strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0"])` the code runs fine. However, with `devices=["/gpu:1"]` or `devices=["/gpu:0", "/gpu:1"]`, nvidia-smi shows some processes on the 2nd GPU but the code execution gets stuck at these lines:

2020-03-28 21:51:14.891325: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7162 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:08:00.0, compute capability: 6.1)
2020-03-28 21:51:14.891805: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-03-28 21:51:14.892399: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 7624 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1070, pci bus id: 0000:09:00.0, compute capability: 6.1)

So I have to reboot the computer, because it's dead.

6. Initially my X11 configuration (xorg.conf) was not configured for SLI:

Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
EndSection
Section "Device"
    Identifier "Device1"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
EndSection
Section "Screen"
    Identifier "Screen0"
    Device "Device0"
    Monitor "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection

After a Google search I played with `sudo nvidia-xconfig --sli=on`, `sudo nvidia-xconfig --sli=auto`, etc. As a result, after reboot I got a bootloop with these two lines:

recovering journal
/dev/nvme0n1p2: clean, xxx/xxx files, xxx/xxx blocks

Every 3 seconds the screen goes black and then these two lines show again. It is impossible to reach a TTY because it is in the bootloop as well. So I kept the previous X11 config without SLI.

I have looked at everything I could find on this subject and nothing works. If you have experienced this type of problem, do not hesitate to share it. Any advice would help. Thanks!
tensorflow/tensorflow | KerasClassifier.score is broken | Bug | I am using the scikit-learn wrapper to wrap a Keras model and train/evaluate it in scikit-learn. Calling KerasClassifier.score should return the accuracy of the classifier; however, no matter what I do, it just doesn't. Looking at the source, the code does two things: 1. In the case of sparse labels, it converts them to a one-hot matrix (lines 296-300). 2. It calls Sequential.evaluate and then hopes to find a metric called 'acc' or 'accuracy', which it treats as the accuracy of the model (lines 302-307). If it doesn't manage to find a named metric with the right name, it raises an exception. I don't understand how this could possibly work, and it doesn't work for me. Given that the target labels are one-hot encoded, the correct metric to use is CategoricalAccuracy; however, it is named 'categorical_accuracy' (L758). Logically, KerasClassifier.score raises an exception. Worse, the error message suggests adding the 'accuracy' metric to the model. This can be misleading, as it makes the error disappear and returns a value — but that value is not accurate (pun intended). I suggest renaming 'accuracy' in L306 to 'categorical_accuracy', and while at it, I suggest adding `_estimator_type = 'classifier'` as a class variable: scikit-learn checks for it to identify KerasClassifier as a classifier, and without it a lot of functionality doesn't work as intended. If there is agreement on this change, I can submit a PR.
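The fix the reporter proposes — accepting the categorical-accuracy metric name when labels are one-hot — can be sketched in plain Python. The function name, the accepted-names tuple, and the result layout are illustrative assumptions, not the actual wrapper code; `evaluate()` in Keras typically returns `[loss, metric1, ...]` aligned with `metrics_names`.

```python
def score(eval_results, metrics_names):
    """Return the accuracy entry from evaluate() output, accepting any accuracy-style metric name."""
    accepted = ("acc", "accuracy", "categorical_accuracy")
    for name, value in zip(metrics_names, eval_results):
        if name in accepted:
            return value
    raise ValueError("The model is not configured to compute accuracy; "
                     "add an accuracy-style metric at compile time.")

print(score([0.35, 0.91], ["loss", "categorical_accuracy"]))  # 0.91
```

With 'categorical_accuracy' in the accepted set, the lookup succeeds for one-hot targets instead of raising.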
tensorflow/tensorflow | Documentation corresponding to arguments for pretrained models is missing | Bug | URL(s) with the issue: The information related to the arguments of the pretrained models defined there is missing. Some example links are shown below.

Description of issue (what needs changing): The information corresponding to the arguments should be specified, as it is specified on the Keras website (VGG16, for example).
Why should someone use this method? How is it useful? If someone wants to know which arguments should be passed while trying to use these pretrained models, the information is lacking on the tensorflow.org site and the developer has to go to the Keras website. The information is not available in the source code corresponding to those TF pretrained models either.
Correct links: Is the link to the source code correct? Yes.
Parameters defined: Are all parameters defined and formatted correctly?
Returns defined: Are return values defined? Yes.
Usage example: Is there a usage example? No.
Submit a pull request? No. Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.
tensorflow/tensorflow | tf.keras model.metrics_names bug in TensorFlow 2.2.0 | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS
- TensorFlow version (use command below): 2.2.0rc0
- Python version: 3.7

Describe the current behavior: model.metrics_names returns an empty list (see the example below) for a compiled model. This is new, unexpected behavior as of TensorFlow 2.2.0 — not the case in TensorFlow 2.1.0. These metric names are important at compile time because they can be used to check against monitored quantities in callbacks; e.g., if a ModelCheckpoint callback is trying to monitor 'val_lss', we can easily catch the typo before calling model.fit or finishing the first epoch of training.

Describe the expected behavior: model.metrics_names returns the metric and loss names (see the example below).

Standalone code to reproduce the issue:

import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2, name="out_1")(inputs)
net = tf.keras.Model(inputs=inputs, outputs=outputs)
net.compile(optimizer="adam", loss="mse", metrics=["mae"])
net.metrics_names

TensorFlow 2.2.0rc2: net.metrics_names -> []
TensorFlow 2.1.0: net.metrics_names -> ['loss', 'mae']
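The use case described — catching a typo like 'val_lss' before training starts — can be sketched as a plain-Python check against the compiled metric names. The function and the hard-coded names list are illustrative; in practice the list would come from model.metrics_names once it is populated again.

```python
def validate_monitor(monitor, metrics_names):
    """Raise early if a callback's monitored quantity matches no known metric name."""
    valid = set(metrics_names) | {"val_" + name for name in metrics_names}
    if monitor not in valid:
        raise ValueError(
            f"Unknown monitored quantity {monitor!r}; expected one of {sorted(valid)}")
    return monitor

validate_monitor("val_loss", ["loss", "mae"])   # OK
# validate_monitor("val_lss", ["loss", "mae"])  # would raise ValueError before any training
```

This is exactly the kind of early check that an empty metrics_names list silently disables.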
tensorflow/tensorflow | AttributeError with TF 2.2.0-rc1 using Keras model.train_on_batch inside tf.function | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab, both CPU and GPU runtimes
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): the default Colab TF 2.x version
- TensorFlow version (use command below): v2.2.0-rc1-0-gacf4951a2f 2.2.0-rc1
- Python version: 3
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior: When trying to train a Keras model using train_on_batch inside a tf.function:

@tf.function(input_signature=(tf.TensorSpec(shape=(None, 10), dtype=tf.float32),
                              tf.TensorSpec(shape=(None, 10), dtype=tf.int32)))
def train(inp, extra):
    expected = calc_expected(inp, extra)
    return model_1.train_on_batch((inp, extra), expected)

TensorFlow raises an AttributeError:

AttributeError: in user code:
    <ipython-input>:6 train: return model_1.train_on_batch((inp, extra), expected)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1287 train_on_batch: logs = tf_utils.to_numpy_or_python_type(logs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/tf_utils.py:523 to_numpy_or_python_type: return nest.map_structure(_to_single_numpy_or_python_type, tensors)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py:617 map_structure: structure[0], [func(*x) for x in entries]
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/tf_utils.py:519 _to_single_numpy_or_python_type: x = t.numpy()
    AttributeError: 'Tensor' object has no attribute 'numpy'

Describe the expected behavior: The code works fine in eager execution and should not raise an error when run in a tf.function. This also works fine with TensorFlow 2.1, so it looks to be a recent regression.

Standalone code to reproduce the issue: (see above)

Other info / logs: Full backtrace copied from Colab:

AttributeError Traceback (most recent call last)
<ipython-input> in <module>: b = tf.random.uniform(shape=(2, 10), maxval=10, dtype=tf.int32); train(a, b)
[8 frames]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:580 __call__: result = self._call(*args, **kwds)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:627 _call: self._initialize(args, kwds, add_initializers_to=initializers)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:506 _initialize: *args, **kwds
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:2446 _get_concrete_function_internal_garbage_collected: graph_function, _, _ = self._maybe_define_function(args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:2777 _maybe_define_function: graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py:2667 _create_graph_function: capture_by_value=self._capture_by_value
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py:981 func_graph_from_py_func: func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:441 wrapped_fn: return weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py:968 wrapper: raise e.ag_error_metadata.to_exception(e)

AttributeError: in user code: (same frames as above, ending at tf_utils.py:519, x = t.numpy()) — AttributeError: 'Tensor' object has no attribute 'numpy'
tensorflow/tensorflow | Calling next() with a default value on an exhausted dataset iterator raises an OutOfRangeError in graph mode | Bug | System information:
- Have I written custom code: Yes
- OS platform and distribution: Windows 10
- TensorFlow installed from binary: 2.1.0

Describe the current behavior: `next(iterator, default)` is supposed to give the next element in the iterator, or the value given as default if the iterator is at its end. However, when using the above construction in a function decorated with tf.function, the default value is not returned and an error (tensorflow.python.framework.errors_impl.OutOfRangeError) is produced when calling next() on an iterator that is at its end. When running this code in eager mode, the default value is returned as expected.

Describe the expected behavior: In graph mode, the default value should be returned when at the end of an iterator.

Standalone code to reproduce the issue:

import tensorflow as tf

x = tf.convert_to_tensor([1, 2, 3])
ds = tf.data.Dataset.from_tensor_slices(x)
dsi = iter(ds)

@tf.function  # remove this to get the expected behaviour
def func():
    for _ in range(4):
        tf.print(next(dsi, -1))

func()

Output (see below for the full stacktrace):
1
2
3
2020-03-27 18:56:09.523946: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[node IteratorGetNext_3]]

Expected output:
1
2
3
-1

Other info / logs: Colab link; stacktrace.txt
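For comparison, Python's built-in iterator protocol is what the report expects tf.data iterators to mirror in graph mode: the two-argument form of next() never raises once the iterator is exhausted — it returns the default instead.

```python
it = iter([1, 2, 3])
values = [next(it, -1) for _ in range(4)]
print(values)  # [1, 2, 3, -1]

# The one-argument form raises StopIteration instead of returning a default:
try:
    next(iter([]))
except StopIteration:
    print("exhausted")
```

The bug is that under tf.function the two-argument form behaves like the one-argument form, surfacing OutOfRangeError rather than the default.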
tensorflow/tensorflow | Micro: crash in ErrorReporter::Report | Bug | System information: TensorFlow Lite for Microcontrollers, running greedy_memory_planner_test from master, building with a clang-based compiler where va_list has the type void*.

Describe the current behavior: The test crashes on `TF_LITE_REPORT_ERROR(error_reporter, s, line);`, most likely because the string argument matches two prototypes in this configuration of clang and it picks the wrong one. Ultimately it tries to dereference the literal string, leading to a crash.

Describe the expected behavior: No crash; it prints some ASCII art.

Standalone code to reproduce the issue: If you have Docker up and running, then call this from the tensorflow folder:

rm -rf tensorflow/lite/micro/tools/make/downloads
make -f tensorflow/lite/micro/tools/make/Makefile clean
docker run -it -v $(pwd):/home/builder:z --rm xcoreai/build_tools:latest make -f tensorflow/lite/micro/tools/make/Makefile TARGET=xcore test_greedy_memory_planner_test

Other info / logs: PR #37976 fixes the immediate issue in greedy_memory_planner by adding an explicit typecast: `TF_LITE_REPORT_ERROR(error_reporter, (const char*)s, line);`
tensorflow/tensorflow | zh-cn notebooks failing | Bug | site/zh-cn/tutorials/distribute/multi_worker_with_keras.ipynb — nbconvert.preprocessors.execute.CellExecutionError: An error occurred while executing the following cell:

options = tf.data.Options()
options.experimental_distribute.auto_shard = False
train_datasets_no_auto_shard = train_datasets.with_options(options)

AttributeError Traceback (most recent call last)
/tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_core/python/data/util/options.py in __setattr__(self, name, value): 55: raise AttributeError("Cannot set the property %s on %s." % (name, type(self).__name__))
AttributeError: Cannot set the property auto_shard on DistributeOptions.

site/zh-cn/tutorials/generative/style_transfer.ipynb — nbconvert.preprocessors.execute.CellExecutionError: An error occurred while executing the following cell:

file_name = 'kadinsky-turtle.png'
mpl.image.imsave(file_name, image[0])
try:
    from google.colab import files
except ImportError:
    pass
else:
    files.download(file_name)

TypeError Traceback (most recent call last)
lib/python3.6/site-packages/matplotlib/image.py in imsave(fname, arr, vmin, vmax, cmap, format, origin, dpi, metadata, pil_kwargs): 1549: arr = arr[::-1]; 1550: rgba = sm.to_rgba(arr, bytes=True)
lib/python3.6/site-packages/matplotlib/cm.py in to_rgba(self, x, alpha, bytes, norm): 217: xx = np.empty(shape=(m, n, 4), dtype=x.dtype)
TypeError: Data type not understood

site/zh-cn/tutorials/keras/text_classification_with_hub.ipynb — nbconvert.preprocessors.execute.CellExecutionError: An error occurred while executing the following cell:

# Split the training set 6:4 — 15,000 for training and 10,000 for validation out of the 25,000 examples
train_validation_split = tfds.Split.TRAIN.subsplit([6, 4])
train_data, validation_data, test_data = tfds.load(
    name="imdb_reviews",
    split=(train_validation_split, tfds.Split.TEST),
    as_supervised=True)

AssertionError Traceback (most recent call last)
lib/python3.6/site-packages/tensorflow_datasets/core/api_utils.py in disallow_positional_args_dec(fn, instance, args, kwargs): 52: return fn(*args, **kwargs)
lib/python3.6/site-packages/tensorflow_datasets/core/tfrecord_reader.py in _str_to_relative_instruction(spec): 356: raise AssertionError('Unrecognized instruction format: %s' % spec)
AssertionError: Unrecognized instruction format: NamedSplit('train')(tfds.percent[0:60])
tensorflow/tensorflow | Graph Transform Tool: remove_nodes is unable to remove Switch nodes, but removes Identity nodes | Bug | I can remove all the Identity nodes from my .pb model with these commands:

bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=m.pb --out_graph=new.pb \
  --inputs='batch_size,phase_train' \
  --outputs='label_batch,embeddings' \
  --transforms='strip_unused_nodes(type=float, shape="1,299,299,3") remove_nodes(op=Identity) fold_old_batch_norms fold_constants(ignore_errors=true)'

However, I cannot do the same thing if I change Identity to Switch. That is, the command below does not remove any nodes:

bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=m.pb --out_graph=new.pb \
  --inputs='batch_size,phase_train' \
  --outputs='label_batch,embeddings' \
  --transforms='strip_unused_nodes(type=float, shape="1,299,299,3") remove_nodes(op=Switch) fold_old_batch_norms fold_constants(ignore_errors=true)'

This is how I check the model's nodes:

bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=new.pb

The result is as below:

Found 2 possible inputs: (name=phase_train, type=bool(10), shape=None) (name=batch_size, type=int32(3), shape=None)
No variables spotted.
Found 2 possible outputs: (name=label_batch, op=Identity) (name=embeddings, op=Mul)
Found 23512506 (23.51M) const parameters, 0 (0) variable parameters, and 676 control_edges
Op types used: 2019 Switch, 1105 Const, 566 Identity, 449 Merge, 448 Sub, 249 Mul, 224 FusedBatchNormV3, 132 Conv2D, 131 Relu, 23 ConcatV2, 21 BiasAdd, 21 AddV2, 3 Shape, 3 MaxPool, 3 Reshape, 2 Placeholder, 1 Maximum, 1 Pack, 1 MatMul, 1 QueueDequeueUpToV2, 1 RandomUniform, 1 GreaterEqual, 1 FIFOQueueV2, 1 Rsqrt, 1 AvgPool, 1 Square, 1 StridedSlice, 1 Cast, 1 Sum, 1 Add
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=new.pb --show_flops --input_layer=phase_train,batch_size --input_layer_type=bool,int32 --input_layer_shape= --output_layer=label_batch,embeddings

My question is: how can I remove the Switch nodes?