repository | issue title | labels | body |
|---|---|---|---|
tensorflowtensorflow | "Size 1 must be non-negative" error when using boolean_mask (TensorFlow) | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): maybe. OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.4 LTS. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.1.0. Python version: 3.6.8. CUDA/cuDNN version: n/a (using IPU, Poplar version 1.2.100). GPU model and memory: Mk1 IPU. Describe the current behavior: when working with too-big boolean arrays it crashes, probably because of using int32 internally for indexing. If I change the first dimension in the example below to 20000, processing is not an issue. Describe the expected behavior: the input array for the boolean_mask function has the same size as the expected output of boolean_mask, hence I would expect that either the input objects do not get created (with an error that their size exceeds the limit), or that it just works. Also it would be good to have an internal size check to provide more meaningful output. Standalone code to reproduce the issue:
import tensorflow as tf
t_vector = tf.ones((8, 35583, 100000), dtype=tf.float32)
acceptance_vector = tf.cast(tf.ones((35583, 100000)), tf.bool)
eval_vector = tf.boolean_mask(t_vector, acceptance_vector, axis=1)
Other info / logs: boolean_mask_error.log |
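The internal size check the reporter asks for can be sketched in plain Python (a hypothetical `check_mask_size` helper, not part of the TensorFlow API): it refuses element counts that overflow int32 indexing, which is also why shrinking the first dimension to 20000 avoids the crash.

```python
INT32_MAX = 2**31 - 1  # largest element count an int32 index can address

def check_mask_size(shape):
    """Hypothetical pre-flight check: raise a clear error instead of
    letting the kernel crash on int32 index overflow."""
    n = 1
    for dim in shape:
        n *= dim
    if n > INT32_MAX:
        raise ValueError(
            f"tensor with {n} elements exceeds the int32 indexing "
            f"limit of {INT32_MAX}")
    return n

check_mask_size((20000, 100000))        # 2.0e9 elements: still fits
# check_mask_size((8, 35583, 100000))   # would raise: ~28.5e9 elements
```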
tensorflowtensorflow | tf.keras.activations.deserialize documentation refers to x as a parameter | Bug | URL(s) with the issue: . Description of issue (what needs changing): clear description: the documentation of tf.keras.activations.deserialize refers to `x` as a parameter in the Arguments section, but it is not in the signature and is not accepted by the function [image]. Usage example (Python):
import tensorflow as tf
tf.keras.activations.deserialize(x='softmax')
Running the code above gives the exception:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: deserialize() got an unexpected keyword argument 'x'
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS Mojave 10.14. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.0.0. Python version: 3.6.9. |
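The mismatch is easy to mimic with a minimal registry-based `deserialize` (a sketch, not Keras internals): the first positional argument is the serialized name, so passing it as the keyword `x` fails exactly as in the report.

```python
def softmax(x):
    """Stand-in activation so the registry has something to return."""
    exps = [2.718281828459045 ** v for v in x]
    total = sum(exps)
    return [e / total for e in exps]

_ACTIVATIONS = {"softmax": softmax}

def deserialize(name, custom_objects=None):
    """Sketch of the documented contract: look an activation up by its
    serialized name. There is no `x` parameter in the signature."""
    table = dict(_ACTIVATIONS)
    if custom_objects:
        table.update(custom_objects)
    if name not in table:
        raise ValueError(f"Unknown activation: {name}")
    return table[name]

deserialize("softmax")       # fine: positional name lookup
# deserialize(x="softmax")   # TypeError: unexpected keyword argument 'x'
```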
tensorflowtensorflow | Retraining a TensorFlow model from a .pb file | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS Catalina. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 1.15.0. Python version: 3.6. I am trying to retrain a TensorFlow model from a .pb file. I'm using the following function to retrieve it and load the graph in Python:
# Load protobuf as graph, given filepath
def load_pb(path_to_pb):
    with tf.gfile.GFile(path_to_pb, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        return graph
From here, I try to list its trainable variables and operations:
with tf.Session(graph=tf.Graph()) as sess:
    print('Trainable variables: {}'.format(tf.trainable_variables()))
    variables = [op for op in tf.get_default_graph().get_operations()]
    for var in variables:
        print('{}'.format(var.name), end=' ')
This is the output from the above code: [code output 1]. As shown above, it says that there are no variables that can be trained, and when I try the following code to train the graph:
with tf.Session() as sess:
    random_input = tf.convert_to_tensor(np.random.rand(1, 3, 2848, 4256))  # input dimensions
    random_output = sess.run(random_input)
    random_y0 = tf.convert_to_tensor(np.random.rand(1, 3, 2848, 4256))
    loss = tf.reduce_sum(tf.square(random_y0 - random_output))
    train = tf.train.GradientDescentOptimizer(1e-4).minimize(loss)
    sess.run(tf.global_variables_initializer())
    print('Training...')
    for step in range(1):
        sess.run(train)
it gives me the error:
train = tf.train.GradientDescentOptimizer(1e-4).minimize(loss)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/training/optimizer.py", line 410, in minimize
    ([str(v) for v in grads_and_vars], loss))
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables and loss.
I believe there is an issue with the conversion such that it is failing to find the training variables. Could someone please clarify this? I can also provide the exact .pb file if needed. Thanks a lot! |
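The empty trainable-variables list is expected for a frozen graph: freezing rewrites every Variable node into a Const node, so a later scan finds nothing to train and no gradients can flow. A toy model of the GraphDef (plain dicts, not the real proto) shows the scan coming up empty:

```python
# Illustrative node list standing in for a frozen GraphDef: freezing has
# already turned the weight variables into constants.
frozen_graph_nodes = [
    {"name": "conv1/weights", "op": "Const"},   # was VariableV2 before freezing
    {"name": "conv1/BiasAdd", "op": "BiasAdd"},
    {"name": "logits/weights", "op": "Const"},  # was VariableV2 before freezing
]

VARIABLE_OPS = {"VariableV2", "VarHandleOp"}
trainable = [n["name"] for n in frozen_graph_nodes if n["op"] in VARIABLE_OPS]
print(trainable)  # [] -- gradients cannot be computed for constants
```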
tensorflowtensorflow | Wrong link for JNI download on website | Bug | The JNI download link on the Java page is still pointing to 1.14.0. It should be updated to 2.3.0. Page source: [Linux CPU-only] [Linux GPU support] |
tensorflowtensorflow | [Bug] tf.sqrt inconsistent behaviour | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v2.3.0-rc2-23-gb36436b087 2.3.0. Python version: 3.5. Bazel version (if compiling from source): no. GCC/compiler version (if compiling from source): no. CUDA/cuDNN version: v10.1.243. GPU model and memory: RTX 2070 Max-Q, 8 GB. Describe the current behavior: when I run this code I get the result and gradient:
x = tf.Variable([[1.0, 0.0]])
with tf.GradientTape() as tape:
    x = tf.square(x)
    x = tf.sqrt(x)
    z = tf.reduce_sum(x, axis=1)
    loss = tf.nn.l2_loss(z)
tf.print(z, loss)
tf.print(tape.gradient(loss, x))
Output:
[1] 0.5
[[1 1]]
However, when I refactor the three lines inside a function, the gradient becomes NaN:
@tf.function  # happens in both eager and graph
def fun(v):
    v = tf.square(v)
    v = tf.sqrt(v)
    return tf.reduce_sum(v, axis=1)

x = tf.Variable([[1.0, 0.0]])
with tf.GradientTape() as tape:
    z = fun(x)
    loss = tf.nn.l2_loss(z)
tf.print(z, loss)
tf.print(tape.gradient(loss, x))
Output:
[1] 0.5
[[1 nan]]
Changing x = tf.sqrt(x) to x = tf.sqrt(x + 1e-7) makes the gradient non-NaN. Describe the expected behavior: either result is acceptable, as long as the behaviour is consistent. |
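The NaN is explainable with the chain rule alone: the gradient of sqrt(u) is 1/(2*sqrt(u)), which is infinite at u = 0, and multiplying that by the zero gradient of x**2 at x = 0 gives inf * 0 = NaN. A pure-Python sketch of that backward pass (a stand-in for TF's per-op gradients, not TF code) reproduces both the NaN and the reporter's epsilon workaround:

```python
import math

def dsqrt(u):
    """Derivative of sqrt at u; infinite at 0, like the sqrt gradient."""
    return math.inf if u == 0.0 else 1.0 / (2.0 * math.sqrt(u))

def grad_sqrt_of_square(x, eps=0.0):
    """Chain rule for d/dx sqrt(x**2 + eps): dsqrt(u) * 2x."""
    u = x * x + eps
    return dsqrt(u) * (2.0 * x)

print(grad_sqrt_of_square(1.0))        # 1.0
print(grad_sqrt_of_square(0.0))        # nan: inf * 0
print(grad_sqrt_of_square(0.0, 1e-7))  # 0.0: the reporter's workaround
```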
tensorflowtensorflow | [Bug] The file copied by TF from HDFS to local may be wrong when the HDFS file is being overwritten | Bug | This is an issue from the Taiji AI platform in Tencent. System information: OS platform and distribution: Linux version 4.14.105-1-tlinux3-0010. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): 1.13.1 (the version we use; the latest version also has this problem). Python version: 3.6. C++ version: 11.
Describe the current behavior: our training sample data are generated by a Spark program and stored on HDFS. An example of a training sample file: hdfs://xxxx/example/20200822/part-r-0000036.tfr.gz; the file data are compressed with gzip. The trigger condition of the training program is that the SUCCESS file appears under hdfs://xxxx/example/20200822. The training program first downloads the training samples from HDFS to local disk and then reads the local data for training. When the training program and the Spark program run at the same time, the downloaded HDFS file may be overwritten by the Spark program, causing the gzip file downloaded to local disk to be damaged. Once the gzip file is wrong, our TensorFlow training program stays stuck unzipping it and CPU utilization is high. The wrong local gzip file is composed of parts of the data of the HDFS file from before and after the overwrite.
Code: auto env = tensorflow::Env::Default(); auto st = env->CopyFile(src_file, des_file);
Process pstack info: [image]. top info: [image].
Describe the expected behavior: the local gzip file should be consistent with the data of the HDFS file either before the overwrite or after the overwrite, instead of containing data of the HDFS file from before and after the overwrite.
Issue analysis: in order to solve issue #5438 (TensorBoard needs to get the latest data written), the HDFS file is reopened in HdfsRandomAccessFile::Read (L226): when n > 0 && r == 0, hdfsOpenFile is called to reopen the HDFS file (please refer to this commit for details). Before calling hdfsOpenFile, if the HDFS file is overwritten, a new HDFS file is generated; after calling hdfsOpenFile, the handle points to the new HDFS file. If the size of the new HDFS file is larger than the size of the old HDFS file, the HDFS file copied to the local file system by FileSystemCopyFile contains parts of the data of both the new and the old HDFS file, causing the local gzip file to be wrong.
Temporary solution (patch 1): in FileSystemCopyFile (L466), avoid triggering the hdfsOpenFile operation inside HdfsRandomAccessFile::Read: the amount of data copied is based on the file size, not on kCopyFileBufferSize. The implementation principle of the temporary solution is the same as that of ReadFileToString (L423). It is still possible for the file copied to local disk to be wrong, because GetFileSize and Read cannot form an atomic operation; for example, after the file size is obtained through GetFileSize, the HDFS file may be overwritten and the data of the new file read based on the size of the old file. However, the probability that the local file is wrong under the temporary solution is far smaller than under the original solution: generally speaking, reading the file data to the end of the file is a time-consuming operation, while obtaining the file size takes negligible time.
Maybe a better solution (patch 2): I'm not sure if this solution is better in some scenarios I don't know; it may require further discussion. RandomAccessFile::Read is an abstraction of an operation supported by each file system, and the specific implementation is transparent to users. Adding the hdfsOpenFile call to HdfsRandomAccessFile::Read (L226) to read the latest data is a hidden and dangerous behavior, because hdfsOpenFile may point to new file data, and inconsistency may occur. More fatally, there are a large number of methods that depend on Read, which may cause behavior that is not what we expect, which is the root of all the errors. I think it is better for users to use read-and-reopen explicitly to obtain the latest data in their program.
Accepted solution (patch 3), quoting @mihaimaruseac's comment: patch 1 has the issue of breaking the separation-of-concerns design principle (what happens if there is a new scheme for HDFS? We would have a bug in there until someone remembers the additional if). Patch 2 has the issue of removing a test that was added, thereby creating a bug. In order to overcome the shortcomings of patch 1 and patch 2, a switch is added in patch 3: the default hdfs_disable_read_EOF_retry is false. Patch 3 does not remove the WriteWhileRead (L202) test case, and it can also solve the problem we encountered. If you need to turn off the HDFS read-EOF retry, set the environment variable HDFS_DISABLE_READ_EOF_RETRY=1. For more details, please refer to the comments below. |
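The failure mode, independent of HDFS specifics, can be sketched with a toy file object (hypothetical classes, not the TF filesystem API): a chunked copy that spans an overwrite yields a destination mixing old and new bytes, which is fatal for a gzip stream.

```python
class OverwritableFile:
    """Toy stand-in for an HDFS file whose reopen-on-EOF behavior lets
    later reads observe a concurrently written replacement."""
    def __init__(self, data):
        self.data = data
        self.pos = 0
    def read(self, n):
        chunk = self.data[self.pos:self.pos + n]
        self.pos += len(chunk)
        return chunk
    def overwrite(self, data):
        # The Spark job replacing the file; the reader's offset survives,
        # like a reopen at the same position.
        self.data = data

src = OverwritableFile(b"OLD!" * 4)   # 16 bytes of the old file
dst = b""
dst += src.read(8)                    # first chunk: old contents
src.overwrite(b"NEWDATA!" * 4)        # file replaced mid-copy
dst += src.read(8)                    # next chunk: NEW contents
print(dst)  # b'OLD!OLD!NEWDATA!' -- a corrupt mix of both versions
```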
tensorflowtensorflow | FusedBatchNorm vs FusedBatchNormV3 | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): . OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: . TensorFlow installed from (source or binary): source. TensorFlow version (use command below): 1.13.1. Python version: 3.6. Bazel version (if compiling from source): . GCC/compiler version (if compiling from source): . CUDA/cuDNN version: . GPU model and memory: . You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with: 1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)" 2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)". Describe the problem: describe the issue here. |
tensorflowtensorflow | Converting a TensorFlow op (Cast) to a TensorFlow tensor | Bug | System information: Have I written custom code: yes. OS platform and distribution: Linux Ubuntu 18.04. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): 1.15. Python version: 3.6. Bazel version (if compiling from source): 0.25.0. GCC/compiler version (if compiling from source): 7.5.0. CUDA/cuDNN version: 10.2.89 / 7.6.5.32. GPU model and memory: NVIDIA GTX 1060. OpenCV: 4.4.2.
I have written custom code that tries to read an image with OpenCV and convert it to a TensorFlow tensor. I do it successfully and can do inference on my model, but my model gives me a tensor of int64, and now I want to cast it to a tensor of int32:
auto root = tensorflow::Scope::NewRootScope();
Tensor inputImg(tensorflow::DT_UINT8, tensorflow::TensorShape({1, input_height, input_width, 3}));
uint8_t *image1_p = inputImg.flat<uint8_t>().data();
cv::Mat cv_image1(input_height, input_width, CV_8UC3, image1_p);
cv::Mat image = imread("test_img.jpg");
cv::Mat resized_img;
resize(image, resized_img, cv::Size(input_width, input_height));
cvtColor(resized_img, resized_img, COLOR_BGR2RGB);
resized_img.convertTo(cv_image1, CV_8UC3);
std::vector<std::pair<string, Tensor>> inputs = {{"ImageTensor:0", inputImg}};
std::vector<Tensor> outputs;
Status status = session->Run(inputs, {"SemanticPredictions:0"}, {}, &outputs);
if (!status.ok()) {
  std::cout << status.ToString() << "\n";
  return 1;
}
auto int64_caster = Cast(root.WithOpName("int64_caster"), outputs[0], tensorflow::DT_INT32);
I searched and found the tensorflow Cast function; it gives me a tensorflow ops Cast output with the name int64_caster, but I don't know how to convert it to a tensorflow::Tensor object. |
tensorflowtensorflow | Bazel fix for other toolchains should be global | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g., Linux Ubuntu 16.04): RedHat 7. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): 1.15.3+nv20.07. Python version: 3.8.5. Bazel version (if compiling from source): 0.26.1. GCC/compiler version (if compiling from source): 9.3.0. CUDA/cuDNN version: CUDA 11.0.207, cuDNN 8.0.1.13. GPU model and memory: NVIDIA A100.
Describe the current behavior:
external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -shared -o bazel-out/k8-py2-opt/bin/tensorflow/python/_tf_stack.so -Wl,-rpath,$ORIGIN/ -Wl,-rpath,$ORIGIN/ -Wl,--version-script bazel-out/k8-py2-opt/bin/tensorflow/python/_tf_stack-version-script.lds -Wl,--no-as-needed -Wl,-z,relro,-z,now -Wl,--build-id=md5 -Wl,--hash-style=gnu -no-canonical-prefixes -fno-canonical-system-headers -B/usr/bin -Wl,--gc-sections -Wl,@bazel-out/k8-py2-opt/bin/tensorflow/python/_tf_stack.so-2.params [execution platform: @bazel_tools//platforms:host_platform]
/usr/bin/ld.gold: --push-state: unknown option
/usr/bin/ld.gold: use the --help option for usage information
collect2: error: ld returned 1 exit status
Describe the expected behavior: fix the ld.gold version check so it works. Code to reproduce the issue: compile with standard RedHat gcc. Other info / logs: you describe the issue yourself here (L1192), but that does not help with toolchains other than your own. So the solution is to use a properly patched libtool and remove the hardcoded path to /usr/bin. The fine gentlemen of the EasyBuild project have a patch that does exactly this. |
tensorflowtensorflow | GatherV2 checks batch_dims falsely in graph mode | Bug | Describe the current behavior: in graph mode, GatherV2 checks batch_dims against the rank of params. Describe the expected behavior: by definition (and per the kernel implementation) it should check batch_dims against the rank of indices. Standalone code to reproduce the issue (Python):
import tensorflow as tf
import numpy as np

@tf.function
def gather_fn(x, indices, axis, batch_dims):
    return tf.gather(x, indices, axis=axis, batch_dims=batch_dims)

# 2-D input with shape (2, 3)
x = tf.constant(np.arange(6).reshape(2, 3), dtype=tf.int32)
# 3-D indices with shape (2, 1, 2)
indices = tf.constant([[[0, 1]], [[1, 0]]], dtype=tf.int32)
axis = 1
batch_dims = -3
# eager mode computes correctly
print("eager gather:", tf.gather(x, indices, axis=axis, batch_dims=batch_dims))
# error in graph mode
print("function gather:", gather_fn(x, indices, axis=axis, batch_dims=batch_dims))
Other info / logs:
eager gather: tf.Tensor([[[[0 1]] [[1 0]]] [[[3 4]] [[4 3]]]], shape=(2, 2, 1, 2), dtype=int32)
Traceback (most recent call last):
  File "gather_function.py", line 14, in <module>
    print("function gather:", gather_fn(x, indices, axis=axis, batch_dims=batch_dims))
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
    result = self._call(*args, **kwds)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 627, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
    capture_by_value=self._capture_by_value,
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 968, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
    gather_function.py:6 gather_fn  --  return tf.gather(x, indices, axis=axis, batch_dims=batch_dims)
    /usr/local/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:180 wrapper  --  return target(*args, **kwargs)
    /usr/local/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:4541 gather_v2  --  batch_dims=batch_dims
    /usr/local/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:180 wrapper  --  return target(*args, **kwargs)
    /usr/local/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:4518 gather  --  params, indices, axis, batch_dims=batch_dims, name=name
    /usr/local/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py:3762 gather_v2  --  batch_dims=batch_dims, name=name
    /usr/local/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:744 _apply_op_helper  --  attrs=attr_protos, op_def=op_def
    /usr/local/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py:595 _create_op_internal  --  compute_device
    /usr/local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:3327 _create_op_internal  --  op_def=op_def
    /usr/local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:1817 __init__  --  control_input_ops, op_def
    /usr/local/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:1657 _create_c_op  --  raise ValueError(str(e))
    ValueError: Shape must be at least rank 3 but is rank 2 for node GatherV2 (GatherV2) with Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_INT32, batch_dims=-3; inputs: x, indices, GatherV2/axis; with input shapes: [2,3], [2,1,2], [], and with computed input tensors: input[2] = <1>.
This issue might come from L1219-L1224. |
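The eager result in the log can be reproduced with a pure-Python stand-in for gathering along axis 1 (an illustrative `gather_axis1`, not the TF kernel). Note that only the nesting of the *indices* determines the output structure, which matches the report that the batch-dims check should be based on the rank of the indices:

```python
def gather_axis1(params, indices):
    """Gather along axis 1 of a 2-D params list; `indices` may be nested
    to any depth, and the output nests the same way per row."""
    def pick(row, idx):
        if isinstance(idx, list):
            return [pick(row, i) for i in idx]
        return row[idx]
    return [pick(row, indices) for row in params]

x = [[0, 1, 2], [3, 4, 5]]        # shape (2, 3)
indices = [[[0, 1]], [[1, 0]]]    # shape (2, 1, 2)
result = gather_axis1(x, indices)
print(result)  # [[[[0, 1]], [[1, 0]]], [[[3, 4]], [[4, 3]]]] -- shape (2, 2, 1, 2)
```

Flattened, this is 0 1 1 0 3 4 4 3, the same values the eager call prints.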
tensorflowtensorflow | Python crash when running inference (interpreter.invoke()) on a BERT (official.nlp.bert) converted SavedModel | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.4.0-dev20200819. Python version: 3.7.6. CUDA/cuDNN version: . GPU model and memory: Quadro P2000, computeCapability 6.1, 4 GB.
Describe the current behavior: basically, I'm trying to quantize a BERT model with a classifier head using the dynamic-range post-training quantization technique, in order to improve serving speed. This is how I proceed: I use the BERT code of Google official; I execute this (scrollTo y acvkpsvuxc) notebook to obtain a BERT classifier model in the SavedModel format (it's easy to obtain, just run all cells and save the model; I can also link mine if needed). I was able to convert the model to a TFLite model and serialize it to a FlatBuffer. I get this log, which kind of sounds like things are doing OK for the conversion part: INFO: TfLiteFlexDelegate delegate: 96 nodes delegated out of 620 nodes with 60 partitions. And then, when trying out the inference following the basic TFLite inference tutorial in Python, my program just crashes.
Describe the expected behavior: I would like to be able to serve a TFLite model converted from a BERT classifier of official.nlp.bert, i.e. to be able to execute inference :D
Standalone code to reproduce the issue: these are the few lines that I try to run but just can't seem to make work:
import numpy as np
import tensorflow as tf

path_saved = 'path/to/saved_model'
converter = tf.lite.TFLiteConverter.from_saved_model(path_saved)
converter.target_spec.supported_ops = [tf.lite.OpsSet.SELECT_TF_OPS, tf.lite.OpsSet.TFLITE_BUILTINS]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_quant_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.int32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
Also, I'm able to save the TFLite model to a FlatBuffer easily; I've tried loading it in a different process, but I get the same crash. Other info / logs: this is the full log I obtain: 2020 08 20 16 51 03 475275 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cudart64 101 dll 2020 08 20 16 51 05 781037 I tensorflow compiler jit xla cpu device cc 41 not create xla device tf xla enable xla device not set 2020 08 20 16 51 05 782856 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library nvcuda dll 2020 08 20 16 51 06 621258 I tensorflow core common runtime gpu gpu device cc 1716 find device 0 with property pcibusid 0000 01 00 0 name quadro p2000 computecapability 6 1 coreclock 1 468ghz corecount 6 devicememorysize 4 00gib devicememorybandwidth 89 53gib s 2020 08 20 16 51 06 621417 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cudart64 101 dll 2020 08 20 16 51 06 626743 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cublas64 10 dll 2020 08 20 16 51 06 630233 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cufft64 10 dll 2020 08 20 16 51 06 631445 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library curand64 10 dll 2020 08 20 16 51 06 635791 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cusolver64 10 dll 2020 08 20 16 51 06 637905 I tensorflow
stream executor platform default dso loader cc 48 successfully open dynamic library cusparse64 10 dll 2020 08 20 16 51 06 646389 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cudnn64 7 dll 2020 08 20 16 51 06 647139 I tensorflow core common runtime gpu gpu device cc 1858 add visible gpu device 0 2020 08 20 16 51 06 647268 I tensorflow compiler jit xla gpu device cc 69 not create xla device tf xla enable xla device not set 2020 08 20 16 51 06 647593 I tensorflow core platform cpu feature guard cc 142 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2020 08 20 16 51 06 648170 I tensorflow compiler jit xla cpu device cc 54 not create xla device tf xla enable xla device not set 2020 08 20 16 51 06 649130 I tensorflow core common runtime gpu gpu device cc 1716 find device 0 with property pcibusid 0000 01 00 0 name quadro p2000 computecapability 6 1 coreclock 1 468ghz corecount 6 devicememorysize 4 00gib devicememorybandwidth 89 53gib s 2020 08 20 16 51 06 649270 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cudart64 101 dll 2020 08 20 16 51 06 649361 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cublas64 10 dll 2020 08 20 16 51 06 649436 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cufft64 10 dll 2020 08 20 16 51 06 649597 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library curand64 10 dll 2020 08 20 16 51 06 649683 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cusolver64 10 dll 2020 08 20 16 51 06 649799 I tensorflow stream executor platform default dso loader cc 48 successfully open 
dynamic library cusparse64 10 dll 2020 08 20 16 51 06 649901 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cudnn64 7 dll 2020 08 20 16 51 06 650420 I tensorflow core common runtime gpu gpu device cc 1858 add visible gpu device 0 2020 08 20 16 51 07 326160 I tensorflow core common runtime gpu gpu device cc 1257 device interconnect streamexecutor with strength 1 edge matrix 2020 08 20 16 51 07 326260 I tensorflow core common runtime gpu gpu device cc 1263 0 2020 08 20 16 51 07 327667 I tensorflow core common runtime gpu gpu device cc 1276 0 n 2020 08 20 16 51 07 328315 I tensorflow core common runtime gpu gpu device cc 1402 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 2984 mb memory physical gpu device 0 name quadro p2000 pci bus i d 0000 01 00 0 compute capability 6 1 2020 08 20 16 51 07 329077 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2020 08 20 16 51 13 831685 w tensorflow compiler mlir lite python tf tfl flatbuffer helper cc 315 ignore output format 2020 08 20 16 51 13 831813 w tensorflow compiler mlir lite python tf tfl flatbuffer helper cc 318 ignore drop control dependency 2020 08 20 16 51 13 835168 I tensorflow cc save model reader cc 32 reading savedmodel from c tensorflow model master save model 2020 08 20 16 51 13 884559 I tensorflow cc save model reader cc 55 read meta graph with tag serve 2020 08 20 16 51 13 885104 I tensorflow cc save model reader cc 93 reading savedmodel debug info if present from c tensorflow model master save model 2020 08 20 16 51 13 886647 I tensorflow compiler jit xla cpu device cc 54 not create xla device tf xla enable xla device not set 2020 08 20 16 51 13 886780 I tensorflow core common runtime gpu gpu device cc 1257 device interconnect streamexecutor with strength 1 edge matrix 2020 08 20 16 51 13 886911 I tensorflow core common runtime gpu gpu device cc 1263 2020 08 20 16 51 13 887017 I 
tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set 2020 08 20 16 51 14 027709 I tensorflow compiler mlir mlir graph optimization pass cc 198 none of the mlir optimization pass be enable register 0 pass 2020 08 20 16 51 14 046369 I tensorflow cc save model loader cc 190 restore savedmodel bundle 2020 08 20 16 51 14 692613 I tensorflow cc save model loader cc 174 running initialization op on savedmodel bundle at path c tensorflow model master save model 2020 08 20 16 51 14 805651 I tensorflow cc save model loader cc 261 savedmodel load for tag serve status success ok take 970481 microsecond 2020 08 20 16 51 15 612211 I tensorflow compiler jit xla cpu device cc 54 not create xla device tf xla enable xla device not set 2020 08 20 16 51 15 612614 I tensorflow core common runtime gpu gpu device cc 1716 find device 0 with property pcibusid 0000 01 00 0 name quadro p2000 computecapability 6 1 coreclock 1 468ghz corecount 6 devicememorysize 4 00gib devicememorybandwidth 89 53gib s 2020 08 20 16 51 15 614030 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cudart64 101 dll 2020 08 20 16 51 15 614146 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cublas64 10 dll 2020 08 20 16 51 15 614256 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cufft64 10 dll 2020 08 20 16 51 15 614370 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library curand64 10 dll 2020 08 20 16 51 15 614484 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cusolver64 10 dll 2020 08 20 16 51 15 614595 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cusparse64 10 dll 2020 08 20 16 51 15 614705 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic 
library cudnn64 7 dll 2020 08 20 16 51 15 615136 I tensorflow core common runtime gpu gpu device cc 1858 add visible gpu device 0 2020 08 20 16 51 15 615291 I tensorflow core common runtime gpu gpu device cc 1257 device interconnect streamexecutor with strength 1 edge matrix 2020 08 20 16 51 15 615398 I tensorflow core common runtime gpu gpu device cc 1263 0 2020 08 20 16 51 15 615500 I tensorflow core common runtime gpu gpu device cc 1276 0 n 2020 08 20 16 51 15 615944 I tensorflow core common runtime gpu gpu device cc 1402 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 2984 mb memory physical gpu device 0 name quadro p2000 pci bus i d 0000 01 00 0 compute capability 6 1 2020 08 20 16 51 15 616086 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set info create tensorflow lite delegate for select tf op 2020 08 20 16 51 20 577713 I tensorflow compiler jit xla cpu device cc 54 not create xla device tf xla enable xla device not set 2020 08 20 16 51 20 578074 I tensorflow core common runtime gpu gpu device cc 1716 find device 0 with property pcibusid 0000 01 00 0 name quadro p2000 computecapability 6 1 coreclock 1 468ghz corecount 6 devicememorysize 4 00gib devicememorybandwidth 89 53gib s 2020 08 20 16 51 20 578209 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cudart64 101 dll 2020 08 20 16 51 20 578327 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cublas64 10 dll 2020 08 20 16 51 20 578447 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cufft64 10 dll 2020 08 20 16 51 20 578568 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library curand64 10 dll 2020 08 20 16 51 20 578695 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cusolver64 10 dll 2020 08 20 16 
51 20 578818 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cusparse64 10 dll 2020 08 20 16 51 20 578939 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cudnn64 7 dll 2020 08 20 16 51 20 579359 I tensorflow core common runtime gpu gpu device cc 1858 add visible gpu device 0 2020 08 20 16 51 20 579508 I tensorflow core common runtime gpu gpu device cc 1257 device interconnect streamexecutor with strength 1 edge matrix 2020 08 20 16 51 20 579604 I tensorflow core common runtime gpu gpu device cc 1263 0 2020 08 20 16 51 20 579694 I tensorflow core common runtime gpu gpu device cc 1276 0 n 2020 08 20 16 51 20 580139 I tensorflow core common runtime gpu gpu device cc 1402 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 2984 mb memory physical gpu device 0 name quadro p2000 pci bus i d 0000 01 00 0 compute capability 6 1 2020 08 20 16 51 20 580258 I tensorflow compiler jit xla gpu device cc 99 not create xla device tf xla enable xla device not set info tfliteflexdelegate delegate 96 node delegate out of 620 node with 60 partition 2020 08 20 16 51 20 586343 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library cublas64 10 dll
PS1: if that can help, when running it with PyCharm I get a weird exit code: Process finished with exit code -1073741819 (0xC0000005).
PS2: I've also linked the end of the logs when I activate the cpp logs with TF_CPP_MIN_VLOG_LEVEL=2, if that can help: log_cpp.txt. I guess my next step will be to try to run inference in cpp directly. |
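A hard process crash (exit code 0xC0000005 is a Windows access violation) frequently comes from feeding the native interpreter a buffer whose dtype or shape does not match what `get_input_details()` reports. A cheap pre-flight check, sketched here as a hypothetical helper over plain dicts (not the TFLite API), fails loudly in Python instead of faulting in native code:

```python
def validate_input(expected, given):
    """Hypothetical guard to run before Interpreter.set_tensor: compare
    the buffer's dtype/shape against the interpreter's reported details."""
    if expected["dtype"] != given["dtype"]:
        raise TypeError(
            f"dtype mismatch: model wants {expected['dtype']}, "
            f"got {given['dtype']}")
    if list(expected["shape"]) != list(given["shape"]):
        raise ValueError(
            f"shape mismatch: model wants {expected['shape']}, "
            f"got {given['shape']}")
    return True

# e.g. a BERT input signature typically takes int32 token ids; feeding
# float32 or the wrong sequence length should be caught here, not in C++.
detail = {"dtype": "int32", "shape": [1, 128]}
validate_input(detail, {"dtype": "int32", "shape": [1, 128]})  # ok
```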
tensorflowtensorflow | can not confine tensorflow c api to use not more than 1 thread in total | Bug | tensorflow c api be generate at least one thread on each of the available cpus the available instruction guidline do not take effect to confine tensorflow c api to only one cpu I have 8 cpus and want tensorflow c api to use only 1 thus generate one and only one thread how can I confine tensorflow to use only one cpu and only one thread out of available cpus processor lscpu command on ubuntu architecture x86 64 cpu op mode s 32 bit 64 bit byte order little endian cpu s 8 on line cpu s list 0 7 thread s per core 2 core s per socket 4 socket s 1 numa node s 1 vendor i d genuineintel cpu family 6 model 142 model name intel r core tm i5 8250u cpu 1 60ghz step 10 cpu mhz 1122 143 cpu max mhz 3400 0000 cpu min mhz 400 0000 bogomip 3600 00 virtualization vt x l1d cache 32k l1i cache 32k l2 cache 256k l3 cache 6144k numa node0 cpu s 0 7 the follow code can reduce the number of thread to 1 per core and the top command cou system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 cento 7 6 1810 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device no tensorflow instal from source or binary tensorflow version use command below tensorflow c api 1 15 0 python version 3 6 bazel version if compile from source no gcc compiler version if compile from source 4 8 5 20150623 red hat 4 8 5 39 gcc cuda cudnn version no gpu model and memory no you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior tensorflow c api be generate multiple thread at least one thread on each of the available cpus describe 
the expect behavior one and only one thread for tensorflow no matter how many cpu core or socket standalone code to reproduce the issue graph tf newgraph status tf newstatus sessionopt tf newsessionoption limit number of thread uint8 t intra op parallelism thread 1 uint8 t inter op parallelism thread 1 uint8 t device count 1 uint8 t config 15 0xa 0x7 0xa 0x3 0x43 0x50 0x55 0x10 device count 0x10 intra op parallelism thread 0x28 intra op parallelism thread 0x40 0x1 tf setconfig sessionopt void config 13 status if tf getcode status tf ok std cout nerror tf message status runopt null load model session tf loadsessionfromsavedmodel sessionopt runopt save model dir tag ntag graph null status if tf getcode status tf ok std cout nerror fail to load savedmodel tf message status return 1 assert tf getcode status tf ok other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach |
tensorflowtensorflow | duplicate target in tflite makefile | Bug | describe the current behavior it seem that there be a duplicate target in lite tool make makeflie l318 l320 makefile for normal manually create tensorflow lite c source file objdir o cpp mkdir p dir cxx cxxflag include c o objdir o cc mkdir p dir cxx cxxflag include c o for normal manually create tensorflow lite c source file objdir o c mkdir p dir cc cflag include c o objdir o cpp be this proper target mkdir p dir cxx cxxflag include c o describe the expect behavior makefile for normal manually create tensorflow lite c source file objdir o cpp mkdir p dir cxx cxxflag include c o objdir o cc mkdir p dir cxx cxxflag include c o for normal manually create tensorflow lite c source file objdir o c mkdir p dir cc cflag include c o objdir o cpp mkdir p dir cxx cxxflag include c o |
tensorflowtensorflow | can not load the dataset | Bug | when I run import tensorflow dataset as tfds mnist train tfds load name mnist split train wrong occur like this 2020 08 20 09 52 45 879679 w tensorflow core platform cloud google auth provider cc 178 all attempt to get a google authentication bearer token fail return an empty token retrieve token from file fail with not find could not locate the credential file retrieve token from gce fail with abort all 10 retry attempt fail the last failure unavailable error execute an http request libcurl code 6 mean couldn t resolve host name error detail couldn t resolve host metadata 2020 08 20 09 53 46 902189 e tensorflow core platform cloud curl http request cc 596 the transmission of request 0x56396edb1ff0 uri have be stick at 0 of 0 byte for 61 second and will be abort curl timing information lookup time 0 038829 no error connect time 0 no error pre transfer time 0 no error start transfer time 0 no error 2020 08 20 09 54 48 722926 e tensorflow core platform cloud curl http request cc 596 the transmission of request 0x56396eeb1ef0 uri have be stick at 0 of 0 byte for 61 second and will be abort curl timing information lookup time 0 070107 no error connect time 0 no error pre transfer time 0 no error start transfer time 0 no error 2020 08 20 09 55 50 452114 e tensorflow core platform cloud curl http request cc 596 the transmission of request 0x56396edee160 uri have be stick at 0 of 0 byte for 61 second and will be abort curl timing information lookup time 0 039491 no error connect time 0 no error pre transfer time 0 no error start transfer time 0 no error 2020 08 20 09 56 57 020900 e tensorflow core platform cloud curl http request cc 596 the transmission of request 0x56396efbecd0 uri have be stick at 0 of 0 byte for 61 second and will be abort curl timing information lookup time 5 08334 no error connect time 0 no error pre transfer time 0 no error start transfer time 0 no error 2020 08 20 09 57 59 414296 e tensorflow 
core platform cloud curl http request cc 596 the transmission of request 0x56396f20f780 uri have be stick at 0 of 0 byte for 61 second and will be abort curl timing information lookup time 0 039584 no error connect time 0 no error pre transfer time 0 no error start transfer time 0 no error I want to know how can I solve it thank you |
tensorflowtensorflow | miss momentum in documentation of resourceapplycenteredrmsprop | Bug | the issue be for this page momentum be miss from the list of argument current doc have this scope a scope object var should be from a variable mg should be from a variable ms should be from a variable mom should be from a variable lr scaling factor must be a scalar rho decay rate must be a scalar epsilon ridge term must be a scalar grad the gradient this should be instead like this momentum add after rho scope a scope object var should be from a variable mg should be from a variable ms should be from a variable mom should be from a variable lr scaling factor must be a scalar rho decay rate must be a scalar momentum momentum scale must be a scalar epsilon ridge term must be a scalar grad the gradient clear description this could be due to a bug in the script that auto generate the api def this be the op registration from l1056 l1069 register op resourceapplycenteredrmsprop input var resource input mg resource input ms resource input mom resource input lr t input rho t input momentum t input epsilon t input grad t attr t numbertype attr use lock bool false setshapefn applycenteredrmspropshapefn be sparse false be resource true |
tensorflowtensorflow | tf dynamic partition cause crash when use multiple gpu via tf distribute mirroredstrategy | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 ubuntu 18 04 2 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary tensorflow version use command below 2 3 0 python version 3 6 9 bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version cuda 10 1 cudnn 7 6 1 gpu model and memory rtx 2080 8 gb describe the current behavior the tf dynamic partition operation crash when run on multiple gpu use tf distribute mirroredstrategy describe the expect behavior the same code also use tf distribute mirroredstrategy run succesfully when limit to a single gpu by set cuda visible device 0 standalone code to reproduce the issue import tensorflow as tf n 100 m 4 distribute strategy tf distribute mirroredstrategy def op datum tf random uniform n partition tf random uniform n maxval m dtype tf int32 return tf dynamic partition datum partition m distribute strategy run op other info log full output of the above code 2020 08 19 12 05 36 898086 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudart so 10 1 2020 08 19 12 05 37 828508 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcuda so 1 2020 08 19 12 05 37 900555 I tensorflow core common runtime gpu gpu device cc 1716 find device 0 with property pcibusid 0000 09 00 0 name geforce rtx 2080 computecapability 7 5 coreclock 1 83ghz corecount 46 devicememorysize 7 79gib devicememorybandwidth 417 23gib s 2020 08 19 12 05 37 901044 I tensorflow core common runtime gpu gpu device cc 1716 find device 1 with property pcibusid 0000 42 00 0 name geforce rtx 2080 computecapability 
7 5 coreclock 1 83ghz corecount 46 devicememorysize 7 79gib devicememorybandwidth 417 23gib s 2020 08 19 12 05 37 901070 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudart so 10 1 2020 08 19 12 05 37 902404 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcubla so 10 2020 08 19 12 05 37 903988 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcufft so 10 2020 08 19 12 05 37 904184 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcurand so 10 2020 08 19 12 05 37 905511 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcusolver so 10 2020 08 19 12 05 37 906204 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcusparse so 10 2020 08 19 12 05 37 908753 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudnn so 7 2020 08 19 12 05 37 910997 I tensorflow core common runtime gpu gpu device cc 1858 add visible gpu device 0 1 2020 08 19 12 05 37 911427 I tensorflow core platform cpu feature guard cc 142 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2020 08 19 12 05 37 938447 I tensorflow core platform profile util cpu util cc 104 cpu frequency 2994045000 hz 2020 08 19 12 05 37 941540 I tensorflow compiler xla service service cc 168 xla service 0x49ac240 initialize for platform host this do not guarantee that xla will be use device 2020 08 19 12 05 37 941596 I tensorflow compiler xla service service cc 176 streamexecutor device 0 host default version 2020 08 19 12 05 42 073849 I tensorflow compiler xla service service cc 168 
xla service 0x4a182c0 initialize for platform cuda this do not guarantee that xla will be use device 2020 08 19 12 05 42 073925 I tensorflow compiler xla service service cc 176 streamexecutor device 0 geforce rtx 2080 compute capability 7 5 2020 08 19 12 05 42 073967 I tensorflow compiler xla service service cc 176 streamexecutor device 1 geforce rtx 2080 compute capability 7 5 2020 08 19 12 05 42 075578 I tensorflow core common runtime gpu gpu device cc 1716 find device 0 with property pcibusid 0000 09 00 0 name geforce rtx 2080 computecapability 7 5 coreclock 1 83ghz corecount 46 devicememorysize 7 79gib devicememorybandwidth 417 23gib s 2020 08 19 12 05 42 076318 I tensorflow core common runtime gpu gpu device cc 1716 find device 1 with property pcibusid 0000 42 00 0 name geforce rtx 2080 computecapability 7 5 coreclock 1 83ghz corecount 46 devicememorysize 7 79gib devicememorybandwidth 417 23gib s 2020 08 19 12 05 42 076367 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudart so 10 1 2020 08 19 12 05 42 076406 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcubla so 10 2020 08 19 12 05 42 076433 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcufft so 10 2020 08 19 12 05 42 076457 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcurand so 10 2020 08 19 12 05 42 076478 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcusolver so 10 2020 08 19 12 05 42 076499 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcusparse so 10 2020 08 19 12 05 42 076524 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudnn so 7 2020 08 19 12 05 42 079838 I tensorflow core common runtime gpu gpu device cc 1858 add visible gpu 
device 0 1 2020 08 19 12 05 42 079887 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudart so 10 1 2020 08 19 12 05 42 939356 I tensorflow core common runtime gpu gpu device cc 1257 device interconnect streamexecutor with strength 1 edge matrix 2020 08 19 12 05 42 939406 I tensorflow core common runtime gpu gpu device cc 1263 0 1 2020 08 19 12 05 42 939413 I tensorflow core common runtime gpu gpu device cc 1276 0 n n 2020 08 19 12 05 42 939417 I tensorflow core common runtime gpu gpu device cc 1276 1 n n 2020 08 19 12 05 42 941258 w tensorflow core common runtime gpu gpu bfc allocator cc 39 override allow growth set because the tf force gpu allow growth environment variable be set original config value be 0 2020 08 19 12 05 42 941298 I tensorflow core common runtime gpu gpu device cc 1402 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 7252 mb memory physical gpu device 0 name geforce rtx 2080 pci bus i d 0000 09 00 0 compute capability 7 5 2020 08 19 12 05 42 942216 w tensorflow core common runtime gpu gpu bfc allocator cc 39 override allow growth set because the tf force gpu allow growth environment variable be set original config value be 0 2020 08 19 12 05 42 942237 I tensorflow core common runtime gpu gpu device cc 1402 create tensorflow device job localhost replica 0 task 0 device gpu 1 with 2566 mb memory physical gpu device 1 name geforce rtx 2080 pci bus i d 0000 42 00 0 compute capability 7 5 warning tensorflow use mirroredstrategy eagerly have significant overhead currently we will be work on improve this in the future but for now please wrap call for each replica or experimental run or experimental run v2 inside a tf function to get the good performance 2020 08 19 12 05 42 972712 f tensorflow core kernel dynamic partition op gpu cu cc 108 non ok status gpulaunchkernel gatheropkernel config block count config thread per block 0 d stream param indice out gather dim size indice 
size slice size out size status internal invalid resource handle abort core dump |
tensorflowtensorflow | tensorflow lite converter emit incorrect mask for stridedslice when use ellipsis | Bug | system information os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 tensorflow instal from source or binary binary tensorflow version or github sha if from source tf nightly cpu 2 4 0 dev20200818 command use to run the converter or code if you re use the python api python import tensorflow as tf def main graph tf graph create a basic graph with only an overlap and add on some random data shape 1 1 1024 512 with graph as default input tf random uniform shape ola tf signal overlap and add input 256 name output try execute that graph in regular tensorflow with tf compat v1 session graph graph as session print f with regular tensorflow result be session run ola convert to tflite use the v1 interface for simplicity converter tf compat v1 lite tfliteconverter graph as graph def input ola tflite model converter convert write the model to disk model path f file tflite with open model path wb as f f write tflite model so that we can load it into an interpreter and see the error interpreter tf lite interpreter model path model path interpreter allocate tensor this line should throw as of tf nightly cpu 2 4 0 dev20200818 runtimeerror tensorflow lite kernel reshape cc 66 num input element num output element 0 524800 node number 5 reshape fail to prepare interpreter invoke print if we get here the bug do not appear if name main main failure detail the provide graph work in regular tensorflow and the tensorflow lite converter execute with no error but the tflite model fail at runtime with runtimeerror tensorflow lite kernel reshape cc 66 num input element num output element 0 524800 node number 5 reshape fail to prepare a visualization of the result graph in netron show that the node in question should have an input shape of 1 1 2050 256 with no unknown dimension image |
tensorflowtensorflow | error execute quickstart noteboook | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 win 10 tensorflow instal from source or binary conda gpu version tensorflow version use command below 2 1 0 python version 3 7 7 cuda cudnn version cudatoolkit 10 1 243 h74a9793 0 anaconda cudnn 7 6 5 cuda10 1 0 anaconda gpu model and memory gtx 750 ti 2 gb describe the problem I want to test the tf quick start notebook and get the follow error message unknownerror traceback most recent call last in 9 10 for image label in train ds 11 train step image label 12 13 for test image test label in test ds anaconda3 envs tf lib site package tensorflow core python eager def function py in call self args kwd 566 xla context exit 567 else 568 result self call args kwd 569 570 if trace count self get trace count anaconda3 envs tf lib site package tensorflow core python eager def function py in call self args kwd 630 lifting succeed so variable be initialize and we can run the 631 stateless function 632 return self stateless fn args kwd 633 else 634 canon args canon kwd anaconda3 envs tf lib site package tensorflow core python eager function py in call self args kwargs 2361 with self lock 2362 graph function args kwargs self maybe define function args kwargs 2363 return graph function filter call args kwargs pylint disable protect access 2364 2365 property anaconda3 envs tf lib site package tensorflow core python eager function py in filter call self args kwargs 1609 if isinstance t op tensor 1610 resource variable op baseresourcevariable 1611 self capture input 1612 1613 def call flat self args capture input cancellation manager none anaconda3 envs tf lib site package tensorflow core python eager function py in call flat self args capture input cancellation manager 1690 no tape be watch skip to run the function 1691 return self build call output self 
inference function call 1692 ctx args cancellation manager cancellation manager 1693 forward backward self select forward and backward function 1694 args anaconda3 envs tf lib site package tensorflow core python eager function py in call self ctx args cancellation manager 543 input args 544 attrs executor type executor type config proto config 545 ctx ctx 546 else 547 output execute execute with cancellation anaconda3 envs tf lib site package tensorflow core python eager execute py in quick execute op name num output input attrs ctx name 65 else 66 message e message 67 six raise from core status to exception e code message none 68 except typeerror as e 69 keras symbolic tensor anaconda3 envs tf lib site package six py in raise from value from value unknownerror fail to get convolution algorithm this be probably because cudnn fail to initialize so try look to see if a warning log message be print above node my model conv2d conv2d define at 10 op inference train step 566 error may have originate from an input operation input source operation connect to node my model conv2d conv2d image define at 11 function call stack train step I instal tf cuda and cudnn use conda install c anaconda tensorflow gpu |
tensorflowtensorflow | cosinesimilarity documentation range incorrect | Bug | url s with the issue description of issue what need change clear description note that it be a negative quantity between 1 and 0 should be change to note that it be a negative quantity between 1 and 1 correct link parameter define return define raise list and define usage example request visual if applicable submit a pull request don t plan to submit pull request |
tensorflowtensorflow | what grappler optimizer turn on by default | Bug | url s with the issue description of issue what need change it should be clear which optimizer grappler apply by default for now it s not clear if I should turn on a lot of feature by myself parameter define be all parameter define and format correctly no |
tensorflowtensorflow | should the custom loss function in keras return a single loss value for the batch or an arrary of loss for every sample in the training batch | Bug | I ask a question on stackoverflow regard as the return value of a custom loss funtion but I didn t get a clear answer in this guide custom loss on tensorflow website I find an example of custom loss funciton def custom mean square error y true y pre return tf math reduce mean tf square y true y pre the reduce mean function in this custom loss function will return an scalar but I think the custom loss function should return an array of loss for every example in a training batch rather than a single loss value accord to the source code of model l159 l2634 class the custom loss function be use to construct a lossfunctionwrapper object I read the source code of the loss module I think it s lossfunctionwrapper call method that be responsible for get the mean loss value for the training batch lossfunctionwrapper call method first call the lossfunctionwrapper call method to get an array of loss for every example in the training batch it s in the lossfunctionwrapper call method that our custom loss function be call in addition in the souece code of loss module the meanabsoluteerror class use the mean squared error function to construct a lossfunctionwrapper class we can see that the mean squared error function return k mean math op square difference y pre y true axis 1 which be an array not a single value I think our custom loss function shoud just be like this so why do the custom loss function in the guide custom loss on tensorflow website return a scalar be it wrong to define a custom function like this |
tensorflowtensorflow | erroneously trigger tf function retracing warning when rapidly create new tf model | Bug | system information os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 google colab tensorflow instal from source or binary binary tensorflow version use command below occur in v2 3 0 and tf nightly v2 4 0a20200817 not in v2 2 0 python version 3 7 7 relate issue describe the current behavior the function retracing warning see below be trigger constantly when create multiple independent model similar bug occur recently in the above mention issue though while those occur when do rapid prediction on the same model do this bug occur solely when create and predict multiple new model rapidly which be necessary e g for tf evolutionary framework the tf function retracing warning be trigger after the creation of the first 5 model the bug occur in v2 3 0 and today tf nightly though not in v2 2 0 I attempt workaround mention in previous issue set experimental relax shape to true disable eager execution and set step to 1 though none work I also attempt to provide fix input shape and batch size as mention in the warning message though this be also unsuccessful while I be no expert in tf function do it seem to I that the counter to trigger the warning seem to be a global variable and not seperate for each model therefore trigger the warning that excessive retracing for the current model occur even though it be only fast consecutive initial retracing for each one of multiple model just a hunch though exact warning message warning tensorflow 5 out of the last 5 call to predict function at 0x7f3c67c45a60 trigger tf function retracing tracing be expensive and the excessive number of tracing could be due to 1 create tf function repeatedly in a loop 2 pass tensor with different shape 3 pass python object instead of tensor for 1 please define your tf function outside of the loop for 2 tf function have experimental relax shape true option that relax 
argument shape that can avoid unnecessary retracing for 3 please refer to python or tensor args and for more detail standalone code to reproduce the issue import numpy as np import tensorflow as tf x np array 0 0 0 1 1 0 1 1 y np array 0 1 1 0 for I in range 50 tf print f model I model tf keras model sequential model add tf keras layer dense unit 2 model add tf keras layer dense unit 1 prediction model predict x google colab reproduce code |
tensorflowtensorflow | keras layer call args signature inconsistent with actual code | Bug | url s with the issue description of issue what need change clear description the call method be list as call input kwargs and document that input be an input tensor or list tuple of input tensor but actual code usage show that call may accept multiple positional argument depend on the actual layer implementation for instance this be confusing when review code that define a custom layer because the doc for call don t reflect the actual usage submit a pull request be you plan to also submit a pull request to fix the issue no I don t have enough domain expertise to be confident about the detail |
tensorflowtensorflow | tfliteconverter fail when convert a quantize model train with distribute strategy | Bug | most of the code be take from tensorflow tutorial run on linux 18 04 tf version v2 3 0 rc2 23 gb36436b087 2 3 0 instal with pip python 3 7 cuda 10 1 after train a quantize model I m try to to convert to tflite but it fail same code with quant enable false work fine standalone code to reproduce the issue import numpy as np import tensorflow as tf from tensorflow keras import layer input model regularizer model optimizer loss import tensorflow model optimization as tfmot from functools import partial from pathlib import path from model network factory import get network def get model conv param pad same use bias false stride 2 kernel regularizer regularizer l2 0 01 input input shape 32 32 3 x layer conv2d 16 3 conv param input x layer batchnormalization x x layer relu x x layer conv2d 32 3 conv param x x layer batchnormalization x x layer relu x x layer conv2d 64 3 conv param x x layer batchnormalization x x layer relu x x layer flatten x x layer dropout 0 7 x x layer dense 10 kernel regularizer regularizer l2 0 01 x return model inputs input output x name simplenet def apply quantization to layer layer default quantization if type layer in layer conv2d layer depthwiseconv2d layer dense layer batchnormalization layer add layer relu return tfmot quantization keras quantize annotate layer layer return layer def quantize model in model clone fn apply quantization to layer annotate model model clone model in model clone function partial clone fn with tfmot quantization keras quantize scope quant aware model tfmot quantization kera quantize apply annotated model return quant aware model quant enable true strategy tf distribute mirroredstrategy print f nnumber of gpu strategy num replicas in sync n with strategy scope model get model model summary quantize the model if quant enable raise exception need to debug distribute training with quantization model 
quantize model model model compile optimizer optimizer adam loss loss sparsecategoricalcrossentropy from logit true metric accuracy num replicas in sync strategy num replicas in sync batch size val batch size 5 5 ds train tf datum dataset from tensor slice np random normal size 32 32 3 for in range 100 np random randint 10 for in range 100 ds train ds train batch batch size ds train ds train prefetch tf datum experimental autotune ds test tf datum dataset from tensor slice np random normal size 32 32 3 for in range 20 np random randint 10 for in range 20 ds test ds test batch val batch size ds test ds test prefetch tf datum experimental autotune history model fit ds train epoch 4 validation datum ds test verbose 1 converter tf lite tfliteconverter from keras model model converter experimental new converter false tflite model converter convert |
tensorflowtensorflow | error in validation split percentage image classification tutorial | Bug | thank you for submit a tensorflow documentation issue per our github policy we only address code doc bug performance issue feature request and build installation issue on github the tensorflow doc be open source to get involve read the documentation contributor guide url s with the issue please provide a link to the documentation entry for example description of issue what need change under load use kera preprocesse create dataset the percentage under train ds for validation split be set to 0 2 I believe be suppose to be 0 8 clear description when you use 0 8 on the number of image in the directory it will result to 2936 but 0 2 be way small |
tensorflowtensorflow | Compilation failure: XLA has not implemented dynamic sized slice with non-trivial stride yet. Please file a bug against XLA | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Colab. TensorFlow installed from source or binary: binary. TensorFlow version: 2.3.0. Python version: 3.6. Describe the current behavior: the following error is thrown when calling model.predict(dataset) on TPUs. Traceback (most recent call last): File "bl_model.py", line 188, in <module>: app.run(main); File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run: _run_main(main, args); File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main: sys.exit(main(argv)); File "bl_model.py", line 158, in main: emb, labels = model.predict(test_dataset); File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 130, in _method_wrapper: return method(self, *args, **kwargs); File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1601, in predict: context.async_wait(); File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py", line 2319, in async_wait: context().sync_executors(); File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py", line 658, in sync_executors: pywrap_tfe.TFE_ContextSyncExecutors(self._context_handle); tensorflow.python.framework.errors_impl.InvalidArgumentError: 9 root error(s) found. (0) Invalid argument: {{function_node __inference_predict_function_94126}} Compilation failure: XLA has not implemented dynamic sized slice with non-trivial stride yet. Please file a bug against XLA. [[{{node functional_1/tf_op_layer_strided_slice/strided_slice}}]] TPU compilation failed [[tpu_compile_succeeded_assert/_16053711828520823699/_6]] [[cluster_predict_function/control_after/_1/_347]]. Root errors (1) through (7) repeat the same "XLA has not implemented dynamic sized slice with non-trivial stride yet" compilation failure for the remaining replicas (truncated). Describe the expected behavior: there is no error under TF 2.2.0. Is there any way to turn off XLA for TPUs in TF 2.3.0?
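A hedged workaround sketch, not taken from the report: the rejected op is a strided slice with stride greater than one, which the XLA/TPU compiler cannot lower. Rewriting the stride as a `tf.gather` over computed indices produces the same values using ops XLA supports. The tensor shape below is illustrative.

```python
import tensorflow as tf

# A dynamically shaped tensor standing in for the model activation.
x = tf.reshape(tf.range(48, dtype=tf.float32), (1, 6, 8))

# y = x[:, ::2, :]  # lowers to the strided slice that XLA rejects on TPU

# Equivalent gather-based formulation: build the stride-2 indices
# explicitly and gather along the same axis.
idx = tf.range(0, tf.shape(x)[1], delta=2)   # [0, 2, 4]
y = tf.gather(x, idx, axis=1)                # same values as x[:, ::2, :]
```

The gather keeps the result identical while avoiding the unimplemented strided-slice lowering.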
tensorflowtensorflow | Error: converter does not support quantization of NN with tanh activation | Bug | System information: OS platform and distribution: Ubuntu 20.04. TensorFlow installed from source or binary: binary. TensorFlow version (or GitHub SHA if from source): 2.3.0, Google Colab. Command used to run the converter, and the output from the converter invocation: WARNING:tensorflow: model.state_updates from tensorflow.python.keras.engine.training is deprecated and will be removed in a future version. Instructions for updating: this property should not be used in TensorFlow 2.0, as updates are applied automatically. WARNING:tensorflow: layer.updates from tensorflow.python.keras.engine.base_layer is deprecated and will be removed in a future version (each warning is printed twice). INFO:tensorflow: Assets written to /tmp/tmpnq1srnbk/assets. RuntimeError traceback (most recent call last): <ipython-input> in <module>, line 9: converter.representative_dataset = representative_dataset_generator; line 11: tflite_model = converter.convert(); /usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/optimize/calibrator.py in calibrate_and_quantize(self, dataset_gen, input_type, output_type, allow_float, activation_type, resize_input): np.dtype(input_type.as_numpy_dtype()).num, np.dtype(output_type.as_numpy_dtype()).num, allow_float, np.dtype(activation_type.as_numpy_dtype()).num; RuntimeError: Quantization not yet supported for op. Also, please include a link to the saved model or GraphDef: see the Google Colab notebook. Failure details: the converter fails during conversion. Any other info / logs: my network uses tanh activations. If I use tanh activation, the converter fails; if I use only relu activations, the converter completes the conversion successfully.
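A minimal hedged sketch of the failing conversion path described above; the layer sizes and calibration data here are illustrative stand-ins, not taken from the report.

```python
import numpy as np
import tensorflow as tf

# A tiny model whose only nonlinearity is tanh; swapping "tanh" for
# "relu" is what the reporter says makes TF 2.3 convert successfully.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="tanh"),
])

def representative_dataset():
    # Calibration samples for full-integer quantization.
    for _ in range(10):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# In TF 2.3, converter.convert() on this graph raises
# "RuntimeError: Quantization not yet supported for op" per the report.
```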
tensorflowtensorflow | GradientTape.gradient needs to check target type | Bug | Recently I wrote some code like the line below, which is very simple: grads = tape.gradient(loss, model.trainable_variables). But it raised "TypeError: Cannot convert value None to a TensorFlow DType". It turned out my custom loss function was returning None (yes, I know, I was being dumb). I had to check all of the related TensorFlow code lines to find this dumb problem, so I suggest adding a type-checking block inside this code (around #L991). I think this suggestion can be helpful for fools like me. Thanks in advance.
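A hedged sketch of the kind of guard being requested, done on the user side: fail fast with a clear message when the target passed to tape.gradient is None (e.g. a loss function missing its return statement), instead of the opaque DType error. The model and loss below are illustrative.

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.Input(shape=(2,)),
                             tf.keras.layers.Dense(1)])

def loss_fn(y_pred):
    return tf.reduce_mean(tf.square(y_pred))   # a correct loss returns a tensor

x = tf.ones((4, 2))
with tf.GradientTape() as tape:
    loss = loss_fn(model(x))

# The suggested check, performed before calling tape.gradient:
if loss is None:
    raise TypeError("loss is None -- did the loss function forget to return?")
grads = tape.gradient(loss, model.trainable_variables)
```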
tensorflowtensorflow | Sparsemax: tf.Cumsum unsupported by the TFLite converter | Bug | System information: OS platform and distribution: Windows 10. TensorFlow installed from source or binary: binary, 2.3. The text output from tflite convert: INFO:tensorflow: Assets written to C:\Users\<user>\AppData\Local\Temp\tmp1ty7mqo0\assets (printed twice). Exception traceback (most recent call last): AppData\Roaming\Python\Python37\site-packages\tensorflow\lite\python\convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter): return model_str; AppData\Roaming\Python\Python37\site-packages\tensorflow\lite\python\wrap_toco.py in wrapped_toco_convert(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter); Exception: ...\tensorflow\python\ops\math_ops.py:3736:0: error: 'tf.Cumsum' op is neither a custom op nor a flex op; ...\tensorflow\python\util\dispatch.py:201:0: note: called from; ...\tensorflow_addons\activations\sparsemax.py:105:0: note: called from; ...\tensorflow_addons\activations\sparsemax.py:47:0: note: called from; ...\tensorflow\python\keras\layers\convolutional.py:269:0: note: called from; ...\tensorflow\python\keras\engine\base_layer.py:985:0: note: called from; ...\tensorflow\python\keras\engine\functional.py:508:0: note: called from; ...\tensorflow\python\keras\engine\functional.py:386:0
note call from c user the author appdata roam python python37 site package tensorflow python keras engine base layer py 985 0 note call from c user the author appdata roam python python37 site package tensorflow python keras save saving util py 134 0 note call from c user the author appdata roam python python37 site package tensorflow python op math op py 3736 0 note see current operation 12 tf cumsum value cst 1 device exclusive false reverse false tensor tensor tensor 0 error fail while convert main op that need custom implementation enable via set the emit custom op flag tf cumsum device exclusive false reverse false 0 note see current operation func bb0 arg0 tensor no predecessor cst std constant value dense 0 000000e 00 tensor tensor cst 0 std constant value dense 0x7fc00000 tensor tensor cst 1 std constant value dense 1 tensor tensor cst 2 std constant value dense 0 tensor tensor cst 3 std constant value dense 1 tensor tensor cst 4 std constant value dense 1 000000e 00 tensor tensor cst 5 std constant value dense 0xb2936cbe88f3b63e70265b3ed3bfc5bedb44c63e4365f43e241b69be14cbd0bde4a013be81638fbeaeccf8be1ff1adbea9cb8abe019bbd3e6ed7753df441f53d01d5143fa511a6be5f2a97bee01fcfbeebc3e23c99c39f3e6c510e3fd29acd3e5e5f94be26b3c2be5be612bf09cab6bd97d9a83e0076bfbee1950c3fec438abd41f56ebe33d299beccfceebe70d91a3e3c6fd6bd7440483e123e17bda734813eb9a9103e98bfb3bc6fa1bfbe00b5be3e0e84aabe07ae0bbe6385b63d5b9b08be7506263edc6fa93ecf488ebc046aafbe32b567bc9b356cbefc4c803e12a3973ef17a333ece1e1c3e4f85c7bdfa8aa6bef72768bc53141f3b0e1d213ead543bbe37f004bd2720973e078c2d3eec62913e2d23863d2bf081be4b3fc5bd4beabfbdd257da3ed93fa2be20c04bbe3a34583eb16bd43e36cccbbe318360be3007823e26df61bed0ca423e866a8fbe435dac3dfd61df3d29cd723e67decdbe79b99ebd3870183cc82a863e8bb9c83d57cfe1be4a8d99be42bd263ecbaa2cbdfbf3ea3e809d4ebe2e7e04bfb6db2bbe6d670bbe7642b5be1a9ea0bd7c9c3abe7074623e51c034be10f3093e460fb1be7f2e99be tensor 4x3x3x3xf32 tensor 4x3x3x3xf32 cst 6 std constant value dense 0 000000e 00 tensor 4xf32 
tensor 4xf32 cst 7 std constant value dense 1 tensor 1xi32 tensor 1xi32 cst 8 std constant value dense 0 tensor 1xi32 tensor 1xi32 cst 9 std constant value dense 1 tensor 1xi32 tensor 1xi32 cst 10 std constant value dense 0 1 tensor 2xi32 tensor 2xi32 cst 11 std constant value dense 0 tensor 2xi32 tensor 2xi32 cst 12 std constant value dense 1 tensor 2xi32 tensor 2xi32 0 tfl conv 2d arg0 cst 5 cst 6 dilation h factor 1 i32 dilation w factor 1 i32 fuse activation function none pad same stride h 1 i32 stride w 1 i32 tensor tensor 4x3x3x3xf32 tensor 4xf32 tensor 1 tfl shape 0 tensor tensor 4xi32 2 tfl stride slice 1 cst 8 cst 7 cst 9 begin mask 1 i32 ellipsis mask 0 i32 end mask 0 i32 new axis mask 0 i32 shrink axis mask 0 i32 tensor 4xi32 tensor 1xi32 tensor 1xi32 tensor 1xi32 tensor 3xi32 3 tfl reduce prod 2 cst 8 keep dim false tensor 3xi32 tensor 1xi32 tensor 4 tfl range cst 2 3 cst 3 tensor tensor tensor tensor 5 tfl stride slice 1 cst 7 cst 8 cst 9 begin mask 0 i32 ellipsis mask 0 i32 end mask 0 i32 new axis mask 0 i32 shrink axis mask 1 i32 tensor 4xi32 tensor 1xi32 tensor 1xi32 tensor 1xi32 tensor 6 tfl cast 5 tensor tensor 7 tfl add 6 cst 4 fuse activation function none tensor tensor tensor 8 tfl range cst 4 7 cst 4 tensor tensor tensor tensor 9 tfl pack 3 5 axis 0 i32 value count 2 i32 tensor tensor tensor 2xi32 10 tfl fill 9 cst 0 tensor 2xi32 tensor tensor 11 tfl reshape 0 9 tensor tensor 2xi32 tensor value indice tfl topk v2 11 5 tensor tensor tensor tensor 12 tf cumsum value cst 1 device exclusive false reverse false tensor tensor tensor 13 tfl stride slice 12 cst 10 cst 11 cst 12 begin mask 1 i32 ellipsis mask 0 i32 end mask 1 i32 new axis mask 0 i32 shrink axis mask 2 i32 tensor tensor 2xi32 tensor 2xi32 tensor 2xi32 tensor 14 tf isnan 13 device tensor tensor 15 tfl mul 8 value fuse activation function none tensor tensor tensor 16 tfl add 15 cst 4 fuse activation function none tensor tensor tensor 17 tfl great 16 12 tensor tensor tensor 18 tfl cast 17 
tensor tensor 19 tfl sum 18 cst 1 keep dim false tensor tensor tensor 20 tfl cast 19 tensor tensor 21 tfl equal 19 cst 2 tensor tensor tensor 22 tfl logical or 21 14 tensor tensor tensor 23 tfl expand dim 22 cst 1 tensor tensor tensor 24 tfl maximum 19 cst 3 tensor tensor tensor 25 tfl sub 24 cst 3 fuse activation function none tensor tensor tensor 26 tfl reshape 25 cst 7 tensor tensor 1xi32 tensor 27 tfl pack 4 26 axis 1 i32 value count 2 i32 tensor tensor tensor 28 tfl gather nd 12 27 tensor tensor tensor 29 tfl sub 28 cst 4 fuse activation function none tensor tensor tensor 30 tfl div 29 20 fuse activation function none tensor tensor tensor 31 tfl expand dim 30 cst 1 tensor tensor tensor 32 tfl sub 11 31 fuse activation function none tensor tensor tensor 33 tfl maximum 32 cst tensor tensor tensor 34 tfl select v2 23 10 33 tensor tensor tensor tensor 35 tfl reshape 34 1 tensor tensor 4xi32 tensor std return 35 tensor sym name main tf entry function control output input input 8 output identity type tensor tensor during handling of the above exception another exception occur convertererror traceback most recent call last in 4 converter experimental new converter true 5 6 tflite model converter convert appdata roam python python37 site package tensorflow lite python lite py in convert self 829 830 return super tflitekerasmodelconverterv2 831 self convert graph def input tensor output tensor 832 833 appdata roam python python37 site package tensorflow lite python lite py in convert self graph def input tensor output tensor 631 input tensor input tensor 632 output tensor output tensor 633 converter kwargs 634 635 calibrate and quantize flag quant mode quantizer flag appdata roam python python37 site package tensorflow lite python convert py in toco convert impl input data input tensor output tensor enable mlir converter args kwargs 572 input datum serializetostre 573 debug info str debug info str 574 enable mlir converter enable mlir converter 575 return datum 576 
AppData\Roaming\Python\Python37\site-packages\tensorflow\lite\python\convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter): return model_str; except Exception as e: raise ConverterError(str(e)); ConverterError: the 'tf.Cumsum' op is neither a custom op nor a flex op error, the chain of "note: called from" frames, and the full MLIR operation dump shown above are repeated here verbatim (truncated). Standalone code to reproduce the issue:

input = Input((256, 256, 3))
output = tf.keras.layers.Conv2D(activation=tfa.activations.sparsemax, filters=4, kernel_size=(3, 3), strides=1, padding='same', kernel_initializer=tf.keras.initializers.HeNormal())(input)
model = Model(inputs=input, outputs=output)
model.compile(tf.keras.optimizers.Nadam(name='Nadam'), loss=tfa.losses.SparsemaxLoss(from_logits=True), metrics=[tf.keras.metrics.MeanIoU(4, name='mean_iou')])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

It can be "fixed" by switching from the sparsemax to the softmax activation, but of course softmax doesn't give the same results as sparsemax. I also tried tfa.layers.Sparsemax, but I guess they share the same underlying implementation, since I get an equivalent error. Thanks for your attention.
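A hedged workaround sketch, not an official TFLite recipe: a cumulative sum along the last axis can be rewritten as a matmul with an upper-triangular matrix of ones, using only ops the builtin TFLite set handles, so the graph never needs tf.Cumsum. The helper name is illustrative.

```python
import tensorflow as tf

def cumsum_last_axis(x):
    # y[..., i] = sum_{j <= i} x[..., j], expressed as x @ U where U is
    # upper-triangular ones (band_part keeps the diagonal and above).
    n = tf.shape(x)[-1]
    upper = tf.linalg.band_part(tf.ones((n, n), dtype=x.dtype), 0, -1)
    return tf.matmul(x, upper)

x = tf.constant([[1.0, 2.0, 3.0]])
y = cumsum_last_axis(x)   # [[1., 3., 6.]]
```

For small last dimensions (such as the sparsemax sorting axis here) the O(n²) matmul cost is negligible compared to avoiding the flex/custom-op path.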
tensorflowtensorflow | unique_with_counts on a multi-dimensional tensor | Bug | System information: OS platform and distribution: macOS Catalina 10.15.3. TensorFlow installed from: binary. TensorFlow version: 1.15.0. Python version: 3.7.3. Describe the current behavior: we have an input tensor tf.Tensor([[1296 266 504 190 44 60 13 2 337 6742 2667 14 1 119 580 338 785 739 855 200 37 1 3 4 5 6] [1296 266 504 190 44 60 13 2 337 6742 2667 14 1 119 580 338 785 739 855 200 37 1 3 4 5 6]], shape=(2, 29), dtype=int64) and we expect an output tf.Tensor([[0 2 1 0 0 ... 0] [0 2 1 0 0 ... 0]], shape=(2, 10000), dtype=float32), where 10000 is the dictionary size. Describe the expected behavior: we want an output vector such that, for each index, it tells the frequency of each element; i.e. in [1296 266 504 190 44 60 13 2 337 6742 2667 14 1 119 580 338 785 739 855 200 37 1 3 4 5 6] we see that 0 occurs 0 times, 1 occurs 2 times, and so on. Currently what we are getting is tf.Tensor([[0 4 2 0 0 ... 0]], shape=(1, 10000), dtype=float32); we want it of shape (2, 10000). The code we are trying is:

import tensorflow as tf
tf.enable_eager_execution()

def get_counts(feature, vocab_size):
    t1d = tf.reshape(feature, shape=[-1])
    unique_ids = tf.unique_with_counts(tf.sort(t1d))
    print('unique ids', unique_ids)
    dense_vector = tf.sparse_to_dense(unique_ids.y, [vocab_size], tf.to_float(unique_ids.count))
    print('dense vector', dense_vector)
    feature_batch = tf.reshape(dense_vector, [-1, vocab_size])
    print('vocab count feature', feature_batch)
    return feature_batch

a = tf.constant([[1296, 266, 504, 190, 44, 60, 13, 2, 337, 6742, 2667, 14, 1, 119, 580, 338, 785, 739, 855, 200, 37, 1, 3, 4, 5, 6], [1296, 266, 504, 190, 44, 60, 13, 2, 337, 6742, 2667, 14, 1, 119, 580, 338, 785, 739, 855, 200, 37, 1, 3, 4, 5, 6]])
print(get_counts(a, 10000))

Please tell me the right tf API transformation to do this.
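One possible transformation (a suggestion, not the only API): one-hot encode each id and sum over the sequence axis, which yields per-row counts of shape [batch, vocab_size] without flattening the batch the way unique_with_counts on the reshaped tensor does. The helper name and small vocabulary are illustrative.

```python
import tensorflow as tf

def per_row_counts(ids, vocab_size):
    # ids: [batch, seq_len] integer tensor.
    # one_hot -> [batch, seq_len, vocab_size]; summing over the sequence
    # axis counts how often each id appears in each row.
    return tf.reduce_sum(tf.one_hot(ids, vocab_size), axis=1)

a = tf.constant([[1, 1, 3], [2, 2, 2]])
counts = per_row_counts(a, 5)
# row 0 -> [0, 2, 0, 1, 0]; row 1 -> [0, 0, 3, 0, 0]
```

The intermediate one-hot tensor is batch × seq_len × vocab_size, so for very large vocabularies this trades memory for simplicity; newer TensorFlow releases, if available, also offer `tf.math.bincount` with an `axis` argument for the same result.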
tensorflowtensorflow | tf.data.experimental.snapshot hangs when using a GCS path | Bug | Please make sure that this is a bug; as per our GitHub policy we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Debian 9.12 (stretch). Mobile device: n/a. TensorFlow installed from source or binary: binary. TensorFlow version: 2.3. Python version: 3.5.7. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: CUDA 10.1. GPU model and memory: NVIDIA Tesla T4. Describe the current behavior: tf.data.experimental.snapshot hangs when using a Google Storage path. Describe the expected behavior: tf.data.experimental.snapshot works. Standalone code to reproduce the issue:

import tensorflow as tf
# hangs:
for _ in tf.data.Dataset.range(10).apply(tf.data.experimental.snapshot('gs://my_bucket/my_path')):
    break
# using a local path works as expected:
for _ in tf.data.Dataset.range(10).apply(tf.data.experimental.snapshot('deleteme')):
    break

Other info / logs: include any logs or source code that would be helpful to diagnose the problem.
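A hedged diagnostic sketch: in later TensorFlow releases the experimental transformation was promoted to `tf.data.Dataset.snapshot`, and running it against a local path first separates general pipeline problems from the GCS-specific hang reported above. The temp-directory path is illustrative.

```python
import tempfile
import tensorflow as tf

# Write and read a snapshot on local disk; if this completes, the hang
# is specific to the gs:// filesystem path, not the snapshot logic.
path = tempfile.mkdtemp()
ds = tf.data.Dataset.range(10).snapshot(path)
values = [int(v) for v in ds]   # first pass materializes the snapshot
```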
tensorflowtensorflow | tf.data.Dataset.list_files(file_path, shuffle=True) fails on Windows w/ CUDA | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Windows 10 LTSC build 17763. TensorFlow installed from source or binary: binary. TensorFlow version: 2.3.0. Python version: 3.6.8. CUDA/cuDNN version: 10.1, cuDNN 7.6.5. GPU model and memory: Quadro M2000M, 4096 MiB. (python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)" gives v2.3.0-rc2-23-gb36436b087 2.3.0.) Describe the current behavior: specifically on this Windows system with CUDA enabled (it does not happen with os.environ['CUDA_VISIBLE_DEVICES'] = '-1', and does not happen on my Linux system at all), calling tf.data.Dataset.list_files(tfrecord_path, shuffle=True) produces the following error message: Traceback (most recent call last): File "C:\lbortolotti\performance_method\python\generic_toolbox\bad_list_files.py", line 22, in <module>: tf.data.Dataset.list_files(tfrecord_path, shuffle=True); File "C:\python36\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1125, in list_files: dataset = dataset.shuffle(buffer_size, seed=seed); File "...\dataset_ops.py", line 1240, in shuffle: return ShuffleDataset(self, buffer_size, seed, reshuffle_each_iteration); File "...\dataset_ops.py", line 3676, in __init__: self._flat_structure; File "...\gen_dataset_ops.py", line 6215, in shuffle_dataset_v3: _ops.raise_from_not_ok_status(e, name); File "...\framework\ops.py", line 6843, in raise_from_not_ok_status: six.raise_from(core._status_to_exception(e.code, message), None); File "<string>", line 3, in raise_from: tensorflow.python.framework.errors_impl.InvalidArgumentError: buffer_size must be greater than zero [Op:ShuffleDatasetV3]. Describe the expected behavior: no error; a shuffled file list is returned. Standalone code to reproduce the issue: gist.
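A hedged workaround sketch, not from the report: `list_files(shuffle=True)` shuffles with a buffer sized to the number of matched files, so an empty glob is one plausible source of the "buffer_size must be greater than zero" error. Resolving the glob eagerly makes that case an explicit Python error and lets you shuffle with a known buffer size. Paths and file names below are illustrative.

```python
import os
import tempfile
import tensorflow as tf

# Stand-in files so the example is self-contained.
tmp = tempfile.mkdtemp()
for i in range(3):
    open(os.path.join(tmp, "part-%d.tfrecord" % i), "w").close()

pattern = os.path.join(tmp, "*.tfrecord")
files = tf.io.gfile.glob(pattern)          # resolve the glob eagerly
if not files:
    raise FileNotFoundError("no files matched %r" % pattern)

# Explicit shuffle with a known, nonzero buffer size.
ds = tf.data.Dataset.from_tensor_slices(files).shuffle(len(files))
```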
tensorflowtensorflow | tfl.cast cannot convert tensor to uint8, while TF v1 works without problems | Bug | System information: OS platform and distribution: Ubuntu 18.04. TensorFlow installed from: conda. TensorFlow version (or GitHub SHA if from source): 2.2.0. Code used to run the converter:

import tensorflow.keras as keras
import tensorflow as tf

ipt = keras.Input(shape=(256, 256, 3), dtype=tf.float32)
tmp = tf.cast(ipt, tf.uint8)
model = keras.Model(inputs=ipt, outputs=tmp)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

The output from the converter invocation: ConverterError (see console for info). 2020-08-14 08:31:08: W tensorflow/compiler/mlir/lite/python/graphdef_to_tfl_flatbuffer.cc:144 Ignored output_format; W graphdef_to_tfl_flatbuffer.cc:147 Ignored drop_control_dependency; followed by CPU feature guard, XLA service, and "failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected" / "kernel driver does not appear to be running on this host: /proc/driver/nvidia/version does not exist" messages. loc(callsite("model_5/tf_op_layer_Cast_5/Cast_5" at callsite(/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:865:0 ...))): error: 'tfl.cast' op result #0 must be tensor of 32-bit float or 1-bit integer or 32-bit integer or 64-bit integer or complex type with 32-bit float elements values, but got 'tensor<1x256x256x3x!tf.uint8>'. Traceback (most recent call last): File "/usr/local/bin/toco_from_protos", line 8, in <module>: sys.exit(main()); File ".../tensorflow/lite/toco/python/toco_from_protos.py", line 93, in main: app.run(main=execute, argv=[sys.argv[0]] + unparsed); File ".../absl/app.py", line 299, in run: _run_main(main, args); File ".../absl/app.py", line 250, in _run_main: sys.exit(main(argv)); File ".../toco_from_protos.py", line 56, in execute; Exception: /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:865:9: error: 'tfl.cast' op result #0 must be tensor of 32-bit float or 1-bit integer or 32-bit integer or 64-bit integer or complex type with 32-bit float elements values, but got 'tensor<1x256x256x3x!tf.uint8>'; with a chain of "note: called from" frames through def_function.py, lite.py, and the IPython shell (truncated). Also please include a link to the saved model or GraphDef: see the embedded model definition.
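A hedged workaround sketch, not from the report: keep the graph float32 end-to-end and perform the uint8 cast host-side after inference, so the MLIR converter never has to emit a tfl.cast producing uint8. The small spatial size is illustrative; the clamp mirrors what the in-graph cast to uint8 would saturate to.

```python
import numpy as np
import tensorflow as tf

# Float model whose output is already clamped into the uint8 value range.
ipt = tf.keras.Input(shape=(8, 8, 3), dtype=tf.float32)
out = tf.keras.layers.ReLU(max_value=255.0)(ipt)
model = tf.keras.Model(ipt, out)

x = (np.random.rand(1, 8, 8, 3) * 255.0).astype(np.float32)
y = model(x).numpy().astype(np.uint8)   # the cast happens outside the graph
```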
tensorflowtensorflow | Indexing into EagerTensor returns zeros on Windows with CUDA | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Windows 10 LTSC build 17763. TensorFlow installed from source or binary: binary. TensorFlow version: 2.3.0. Python version: 3.6.8. CUDA/cuDNN version: 10.1, cuDNN 7.6.5. GPU model and memory: Quadro M2000M, 4096 MiB. (python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)" gives v2.3.0-rc2-23-gb36436b087 2.3.0.) Describe the current behavior: specifically on this Windows system with CUDA enabled (it does not happen with os.environ['CUDA_VISIBLE_DEVICES'] = '-1', and does not happen on my Linux system at all), reading a TFRecord with tf.data returns an EagerTensor which, when indexed, produces incorrect zero output. Describe the expected behavior: indexing into the EagerTensor should return the correct output. Standalone code to reproduce the issue: gist. On a working system the output is: tf.Tensor([100. 105. 110. 115. 120.], shape=(5,), dtype=float32) 100.0 100.0. On my Windows system: tf.Tensor([100. 105. 110. 115. 120.], shape=(5,), dtype=float32) 0.0 100.0.
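A hedged sketch of the comparison the output above implies: indexing the EagerTensor directly and indexing its NumPy copy should agree; on the affected Windows+CUDA setup only the NumPy path returned the correct 100.0.

```python
import tensorflow as tf

t = tf.constant([100.0, 105.0, 110.0, 115.0, 120.0])
via_tensor = float(t[0])         # reported as 0.0 on the broken system
via_numpy = float(t.numpy()[0])  # reported as 100.0 on both systems
```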
tensorflowtensorflow | tf nest flatten crash abort when expand composite s constraint be violate | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary tensorflow version use command below 2 1 0 python version 3 7 6 bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version n a gpu model and memory n a you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior tf nest flatten crash abort when expand composite be 0d boolean be violate describe the expect behavior expect no crash standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook python import tensorflow as tf import numpy as np tf nest flatten structure np zero 1 expand composite tf one 2 other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach python terminate call after throw an instance of pybind11 error already set what systemerror return a result with an error set at root miniconda3 lib python3 7 site package numpy core arrayprint py 1388 array repr implementation root miniconda3 lib python3 7 site 
package tensorflow python util nest py 338 flatten 1 abort core dump |
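The tf.nest.flatten record above can be illustrated without TensorFlow. The sketch below is a pure-Python approximation of what flatten does for plain containers (flat list of leaves, dicts traversed in sorted-key order); it is illustrative only — the real API also handles namedtuples, attrs classes and, with `expand_composites=True`, composite tensors, and the crash reported above comes from passing a tensor where that boolean flag is expected.

```python
def flatten(structure):
    """Pure-Python sketch of tf.nest.flatten for plain containers:
    return a flat list of leaves in traversal order."""
    if isinstance(structure, dict):
        # tf.nest traverses dicts in sorted-key order
        return [leaf for key in sorted(structure)
                for leaf in flatten(structure[key])]
    if isinstance(structure, (list, tuple)):
        return [leaf for item in structure for leaf in flatten(item)]
    return [structure]

print(flatten({"b": [1, 2], "a": (3, {"c": 4})}))  # [3, 4, 1, 2]
```

A defensive fix along the lines the reporter asks for would validate that `expand_composites` is a real Python bool before traversal begins, raising a TypeError instead of aborting.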
tensorflowtensorflow | tf nest assert same structure crash abort when some constraint be violate | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary tensorflow version use command below 2 1 0 python version 3 7 6 bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version n a gpu model and memory n a you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior tf nest assert same structure crash abort systemerror when some either check type or expand composite be 0d bool be violate describe the expect behavior expect no crash standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook when check type be boolean be violate python import tensorflow as tf import numpy as np tf nest assert same structure nest1 np zero 1 nest2 tf one 1 1 1 check type tf one 2 when expand composite be boolean be violate python import tensorflow as tf import numpy as np tf nest assert same structure nest1 np zero 1 nest2 tf one 1 1 1 expand composite tf one 2 other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log 
and file should be attach python terminate call after throw an instance of pybind11 error already set what systemerror return a result with an error set at root miniconda3 lib python3 7 site package numpy core arrayprint py 1388 array repr implementation root miniconda3 lib python3 7 site package tensorflow python util nest py 397 assert same structure 1 abort core dump |
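The structural comparison that tf.nest.assert_same_structure performs can likewise be sketched in pure Python — containers must match in type, length and keys, while leaves are not compared. This is an illustrative approximation, not TensorFlow's implementation; the reported crash is again about non-boolean values passed for the `check_types` / `expand_composites` flags, which should be rejected with a Python-level error rather than aborting.

```python
def assert_same_structure(a, b):
    """Sketch of tf.nest's structure check (roughly check_types=True):
    matching container types, lengths and dict keys; leaves ignored."""
    if isinstance(a, dict) and isinstance(b, dict):
        if sorted(a) != sorted(b):
            raise ValueError("dict keys differ")
        for k in a:
            assert_same_structure(a[k], b[k])
    elif isinstance(a, (list, tuple)) and isinstance(b, (list, tuple)):
        if type(a) is not type(b) or len(a) != len(b):
            raise ValueError("sequence structure differs")
        for x, y in zip(a, b):
            assert_same_structure(x, y)
    elif isinstance(a, (dict, list, tuple)) or isinstance(b, (dict, list, tuple)):
        raise ValueError("one side is a container, the other a leaf")

assert_same_structure({"a": (1, 2)}, {"a": (9, 9)})  # same structure, no error
```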
tensorflowtensorflow | datum dataset as numpy iterator be non reentrant compare to tensorflow datum iterator | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 mac 10 15 5 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device no tensorflow instal from source or binary binary tensorflow version use command below v2 3 0 rc2 23 gb36436b087 2 3 0 python version 3 8 5 bazel version if compile from source na gcc compiler version if compile from source na cuda cudnn version na gpu model and memory na you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior when use the numpy iterator return from tensorflow dataset the iterator be not reentrant this be different compare to the dataset iterator in tensorflow describe the expect behavior the numpy iterator should be reentrant which share the same behaviour as tensorflow standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook python ds tf datum dataset from tensor slice np arange 2 iterator ds for elem in iterator print elem for elem in iterator print elem give tf tensor 0 shape dtype int64 tf tensor 1 shape dtype int64 tf tensor 0 shape dtype int64 tf tensor 1 shape dtype int64 whereas ds tf datum dataset from tensor slice np arange 2 iterator ds as numpy iterator for elem in iterator print elem for elem in 
iterator print elem give 0 1 other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach na |
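The reentrancy difference reported above maps onto a standard Python distinction, shown here without TensorFlow: a tf.data.Dataset behaves like an *iterable* (each `for` loop asks it for a fresh iterator), whereas `as_numpy_iterator()` returns a single *iterator*, which is exhausted after the first pass.

```python
class Reentrant:
    """Iterable: __iter__ returns a fresh iterator each time,
    like iterating a Dataset directly."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return iter(range(self.n))

reentrant = Reentrant(2)
first = list(reentrant)        # [0, 1]
second = list(reentrant)       # [0, 1] -- fresh iterator each pass

one_shot = iter(range(2))      # a bare iterator, like as_numpy_iterator()
first_pass = list(one_shot)    # [0, 1]
second_pass = list(one_shot)   # []    -- already exhausted
```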
tensorflowtensorflow | in tf2 3 reshape be not convert correctly | Bug | system information os platform and distribution e g linux ubuntu 16 04 osx tensorflow instal from source or binary pip tensorflow version or github sha if from source 2 3 0 command use to run the converter or code if you re use the python api if possible please share a link to colab jupyter any notebook converter tf lite tfliteconverter from save model save model dir tflite model converter convert also please include a link to the save model or graphdef attach to issue save model zip failure detail if the conversion be successful but the generate model be wrong state what be wrong when convert a reshape the result model have extra op here s the expect output as produce with converter experimental new converter false image and here s the output with the new converter note the extra op before the reshape image |
tensorflowtensorflow | tf nn ctc beam search decoder crash bad alloc when top path be large | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary tensorflow version use command below 2 1 0 python version 3 7 6 bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version n a gpu model and memory n a you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior tf nn ctc beam search decoder crash bad alloc when top path be extremely large describe the expect behavior expect no crash standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook python import tensorflow as tf tf nn ctc beam search decoder input tf one 1 1 1 sequence length 1 top path 1000000000000 other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach python what std bad alloc abort core dump |
tensorflowtensorflow | tf signal stft throw runtimeexception when pad end true | Bug | system information os platform and distribution ubuntu 16 04 tensorflow instal from binary tensorflow version v2 3 0 rc2 23 gb36436b087 2 3 0 python version 3 8 conda describe the current behavior pad end of tf signal stft throw runtimeerror when set to true describe the expect behavior should not throw a runtimeexception just pad the end of the signal standalone code to reproduce the issue python import tensorflow as tf from tensorflow import kera from tensorflow keras import layer import numpy as np def main input layer input shape none x tf signal stft input 512 20 pad end true model keras model inputs input output x signal tf constant np random rand 2 511 print model signal print all do if name main main other info log none traceback most recent call last file home sfalk tmp speech v2 asr bin tmp py line 17 in main file home sfalk tmp speech v2 asr bin tmp py line 9 in main x tf signal stft input 512 20 pad end true file home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python util dispatch py line 201 in wrapper return target args kwargs file home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python op signal spectral op py line 86 in stft frame signal shape op frame file home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python util dispatch py line 201 in wrapper return target args kwargs file home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python op signal shape op py line 162 in frame padding array op concat file home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python util dispatch py line 201 in wrapper return target args kwargs file home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python op array op py line 1654 in concat return gen array op concat v2 value value axis axis name name file home sfalk miniconda3 envs asr2 lib python3 8 site package 
tensorflow python ops gen array op py line 1221 in concat v2 op output op def library apply op helper file home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python framework op def library py line 409 in apply op helper value op internal convert n to tensor file home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python framework op py line 1561 in internal convert n to tensor convert to tensor file home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python framework op py line 1465 in convert to tensor raise runtimeerror attempt to capture an eagertensor without runtimeerror attempt to capture an eagertensor without build a function process finish with exit code 1 workaround do the padding yourself python import tensorflow as tf from tensorflow import kera from tensorflow keras import layer import numpy as np def main frame length 512 input layer input shape none x input pad frame length tf math mod tf shape x 1 frame length x tf pad x 0 0 0 pad x tf signal stft x 512 20 pad end false model keras model inputs input output x signal tf constant np random rand 2 511 print model signal print all do if name main main |
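The padding arithmetic in the workaround above can be checked in plain Python. It extends the signal to the next multiple of frame_length; note that when the length is already a multiple, this formula adds a full extra frame, mirroring the workaround's arithmetic as written.

```python
def pad_amount(signal_len, frame_length):
    """Padding used in the stft workaround: frame_length - (len mod frame_length).
    Extends the signal so its length becomes a multiple of frame_length."""
    return frame_length - (signal_len % frame_length)

print(pad_amount(511, 512))   # 1
print(pad_amount(1024, 512))  # 512 (a whole extra frame when already aligned)
```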
tensorflowtensorflow | tf keras backend reverse crash float point exception when first dimension of x be 0 | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary tensorflow version use command below 2 1 0 python version 3 7 6 bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version n a gpu model and memory n a you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior tf keras backend reverse crash segfault when first dimension of x be 0 describe the expect behavior expect no crash standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook python import tensorflow as tf import numpy as np tf keras backend reverse x np ndarray shape 0 1 1 axis 1 other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach python float point exception core dump |
tensorflowtensorflow | tf convert to tensor and tf constant ignore tf device | Bug | url s with the issue description of issue what need change at least on 2 3 0 it seem to I that import numpy random as npr import tensorflow as tf with tf device gpu a tf convert to tensor npr randn 500 will create an eager tensor a on the cpu device it will not allocate ram on the gpu this be counter intuitive to someone who have only read the doc as it be write my understanding be that this happen because tf convert to tensor isn t an op and tf device only deal with op clear description the doc be pretty short now and I don t think it would hurt to add a little remark something like this note tf convert to tensor do not create an op as such it ignore the context create by tf device to ensure a give tensor be assign memory on a particular device one can wrap convert to tensor inside a tf identity op thought |
tensorflowtensorflow | comp datum link invalid in the analyze tf datum performance with the tf profiler guide | Bug | thank you for submit a tensorflow documentation issue per our github policy we only address code doc bug performance issue feature request and build installation issue on github the tensorflow doc be open source to get involve read the documentation contributor guide url s with the issue please provide a link to the documentation entry for example 3 be you reach high cpu utilization description of issue what need change refer to the follow line tf datum achieve high throughput by try to make the good possible use of available resource in general even when run your model on an accelerator like a gpu or tpu the tf datum pipeline be run on the cpu you can check your utilization with tool like sar and htop or in the cloud monitoring console if you re run on gcp the link attach to cloud monitoring console be invalid |
tensorflowtensorflow | typo in tflite coreml framework build command example | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary tensorflow version use command below python version bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory describe the current behavior there be a single typo in build command example l82 l94 this introduce bazel build fail to newbie error skip tensorflow lite experimental ios tensorflowliteccoreml framework no such target tensorflow lite experimental ios tensorflowliteccoreml framework target tensorflowliteccoreml framework not declare in package tensorflow lite experimental io do you mean tensorflowliteccoreml framework define by user mumu hpcnt tensorflow tensorflow lite experimental io build warn target pattern parse fail error no such target tensorflow lite experimental ios tensorflowliteccoreml framework target tensorflowliteccoreml framework not declare in package tensorflow lite experimental io do you mean tensorflowliteccoreml framework define by user mumu hpcnt tensorflow tensorflow lite experimental io build info elapse time 21 011s info 0 process fail build do not complete successfully 1 package load describe the expect behavior at line 82 tensorflowliteccoreml framework should be fix to tensorflowliteccoreml framework standalone code to reproduce the issue other info log include any log or source code that would be helpful to |
tensorflowtensorflow | tf nn avg pool3d crash float point exception when input contain large value and stride 0 | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary tensorflow version use command below 2 1 0 python version 3 6 9 bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version n a gpu model and memory n a you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior tf nn avg pool3d crash float point exception when input contain large value and stride 0 relate 42206 describe the expect behavior expect no crash standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook python import tensorflow as tf import numpy as np input tf constant 1e 40 dtype np float64 tf nn avg pool3d input input ksize 1 stride 0 padding same other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach python float point exception core dump relate 42206 |
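The avg_pool3d crash above is consistent with the pooling output size being computed as a division by the stride at the C++ level, so stride 0 surfaces as a floating point exception instead of a Python error. The sketch below shows the output-size arithmetic and the kind of up-front validation the reporter is asking for; the formula is the usual SAME-padding one, stated here as an assumption rather than TensorFlow's exact code.

```python
import math

def same_pool_output_size(input_size, stride):
    """SAME-padding pooling output size, with the guard the report requests."""
    if stride <= 0:
        raise ValueError("stride must be positive, got %d" % stride)
    return math.ceil(input_size / stride)

print(same_pool_output_size(8, 2))  # 4
```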
tensorflowtensorflow | confuse error in gradient of assign add while loop gather | Bug | describe the current behavior autograph produce a static graph with the wrong number of input when a trainable variable be in a separate class I also get a warning about convert a sparse op to a dense one which seem to be relate while to gather problem go away when remove describe the expect behavior the function should execute without error standalone code to reproduce the issue scrollto o6gzpt2gzipf |
tensorflowtensorflow | adam optimizer valueerror tf function decorate function try to create variable on non first call | Bug | I be use tensorflow 2 3 the code below import tensorflow as tf y n tf variable 1 2 3 name dd tf function def loss return tf reduce mean input tensor tf reduce sum input tensor tf math log y n axis 0 tf function def run tf keras optimizer adam 0 5 minimize loss var list y n run give exception valueerror tf function decorate function try to create variable on non first call problem look like tf keras optimizer adam 0 5 minimize loss var list y n create new variable on first call while use tf function if I must wrap adam optimizer under tf function be it possible look like a bug |
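The Adam/tf.function error above is the documented rule that a tf.function may only create variables on its first trace, while Keras optimizers create their slot variables (Adam's m and v) lazily inside minimize(). The toy below models that rule in pure Python — it is an illustration of the failure mode, not TensorFlow code; the usual fix is to construct the optimizer once outside the decorated function.

```python
class TracedFunction:
    """Toy model of tf.function's rule: variables may only be
    created the first time the decorated function runs."""
    def __init__(self, fn):
        self.fn = fn
        self.traced = False
    def __call__(self, variables):
        before = set(variables)
        self.fn(variables)
        if self.traced and set(variables) != before:
            raise ValueError("tf.function-decorated function tried to "
                             "create variables on non-first call.")
        self.traced = True

def step(variables):
    # Constructing a fresh optimizer per call creates new slot
    # variables each time -- the reported failure mode.
    variables["slot_%d" % len(variables)] = 0.0

run = TracedFunction(step)
state = {}
run(state)      # first call: creation is allowed
try:
    run(state)  # second call creates another slot -> ValueError
except ValueError as err:
    print(err)
```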
tensorflowtensorflow | tensorarray can not be convert with tfliteconverter | Bug | I want to use a tf tensorarray in a decode loop in order to collect predict id from the model for late conversion to text however it seem that tf tensorarray make issue when try to convert it with tfliteconverter system information os platform and distribution ubuntu 16 04 tensorflow instal from binary tensorflow version 2 3 0 python version 3 8 conda cuda cudnn version 10 1 describe the current behavior convert a function which use tf tensorarray throw an exception say none error fail to legalize operation tf tensorlistreserve that be explicitly mark illegal describe the expect behavior convert the model without error standalone code to reproduce the issue python import tensorflow as tf tf function def tensor array output tf tensorarray dtype tf int32 size 1 dynamic size true output output write 0 1 output output write 1 2 output output write 2 3 return output gather tf range output size def main concrete fn tensor array get concrete function converter tf lite tfliteconverter from concrete function concrete fn converter experimental new converter true converter optimization tf lite optimize default converter target spec support op tf lite opsset tflite builtin tf lite opsset select tf op tflite model converter convert with open model tflite wb as f f write tflite model if name main main other info log none 2020 08 10 10 28 45 806093 I tensorflow core grappler cluster single machine cc 356 start new session 2020 08 10 10 28 45 809971 I tensorflow core grappler optimizer meta optimizer cc 816 optimization result for grappler item graph to optimize 2020 08 10 10 28 45 809981 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer function optimizer do nothing time 0 004ms 2020 08 10 10 28 45 809988 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer function optimizer do nothing time 0ms 2020 08 10 10 28 45 822697 w tensorflow compiler 
mlir lite python tf tfl flatbuffer helper cc 313 ignore output format 2020 08 10 10 28 45 822721 w tensorflow compiler mlir lite python tf tfl flatbuffer helper cc 316 ignore drop control dependency loc callsite tensorarrayv2 home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python op tensor array op py 464 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python op tensor array op py 1071 0 at callsite home sfalk tmp my speech v2 asr bin tensor array py 6 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python framework func graph py 962 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python eager def function py 600 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python framework func graph py 986 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python eager function py 3065 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python eager function py 3213 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python eager function py 2855 0 at home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python eager def function py 696 0 error require element shape to be 1d tensor during tf lite transformation pass loc callsite tensorarrayv2 home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python op tensor array op py 464 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python op tensor array op py 1071 0 at callsite home sfalk tmp my speech v2 asr bin tensor array py 6 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python framework func graph py 962 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python eager def function py 600 0 at callsite home sfalk miniconda3 envs asr2 
lib python3 8 site package tensorflow python framework func graph py 986 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python eager function py 3065 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python eager function py 3213 0 at callsite home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python eager function py 2855 0 at home sfalk miniconda3 envs asr2 lib python3 8 site package tensorflow python eager def function py 696 0 error fail to legalize operation tf tensorlistreserve that be explicitly mark illegal |
tensorflowtensorflow | update tensorflow doc for a11y | Bug | description hey lamberta markdaoust yashk2810 I ve put together a few small commit to update the tensorflow doc for more inclusive language it s to do with native build in source an a11y presentation slide i d g6fe49527a0 0 334 by heyawhite a tech writer at google link to diff if you re ok with these change I can submit a pr submit a pull request yes can do affect doc tf testing good practice guide tensorflow r1 c api guide tf 1 x eager mode notebook tensorflow customization basic notebook build tensorflow on window guide tensorflow 2 migration notebook tf function notebook tf create an op in c guide tf r1 add a new op in c guide |
tensorflowtensorflow | importerror dll load fail while import pywrap tensorflow internal a dynamic link library dll initialization routine fail | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary tensorflow version use command below python version bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior describe the expect behavior standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach |
tensorflowtensorflow | tf nn ctc beam search decoder expect output of softmax whereas documentation say it expect logit | Bug | system information os platform and distribution e g linux ubuntu 16 04 window 10 tensorflow instal from source or binary binary tensorflow version use command below 2 3 0 python version 3 7 describe the current behavior currently say it expect input to be logit whereas it actually expect softmax already apply see which expect output of softmax and it directly pass the input to see l6037 l6088 describe the expect behavior the documentation of should say it expect softmax output
tensorflowtensorflow | hb | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary tensorflow version use command below python version bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior describe the expect behavior standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach |
tensorflowtensorflow | blank index in tf nn ctc loss and tf nn ctc beam search decoder have different default value | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 window 10 tensorflow instal from source or binary binary tensorflow version use command below 2 3 0 python version 3 7 describe the current behavior currently and have different default blank index set default blank index to be 0 whereas doesn t have an api for set blank index and it assume to be num category 1 see l331 describe the expect behavior this be very unexpected I would assume they have the same default value since they both work with ctc or at least both should provide api to change the blank index
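Why the mismatched blank-index defaults above matter can be shown with the standard CTC post-processing step (merge repeats, then drop blanks), in plain Python: the same integer sequence decodes to different labels depending on whether blank is index 0 or the last index. This is an illustration of the convention clash, not TensorFlow's decoder.

```python
def collapse(ids, blank):
    """Standard CTC collapse: merge consecutive repeats, then drop blanks."""
    out, prev = [], None
    for i in ids:
        if i != prev and i != blank:
            out.append(i)
        prev = i
    return out

ids = [0, 1, 1, 2, 0, 2]
print(collapse(ids, blank=0))  # [1, 2, 2]  -- blank-first convention
print(collapse(ids, blank=2))  # [0, 1, 0]  -- blank-last convention (3 classes)
```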
tensorflowtensorflow | segfault in tf image crop and resize when box contain large value | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary tensorflow version use command below v2 2 0 rc4 8 g2b96f3662b 2 2 0 python version 3 6 9 bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version n a gpu model and memory n a describe the current behavior tf image crop and resize segfault when there be a very large value in box can also be reproduce in nightly version describe the expect behavior expect no segfault standalone code to reproduce the issue python import tensorflow as tf tf image crop and resize image tf zeros 2 1 1 1 box 1 0e 40 0 0 0 box indice 1 crop size 1 1 other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach python segmentation fault core dump |
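The crop_and_resize segfault above is consistent with a box coordinate of 1e40 being scaled to a pixel index far outside the image and then used unchecked in C++. The sketch below shows the scaling step and the clamp-or-reject validation that would keep it safe; the coordinate formula is the usual normalized-box one, stated as an assumption rather than the kernel's exact code.

```python
def sample_coord(box_coord, image_size):
    """Scale a nominally-[0, 1] box coordinate to pixel space,
    clamping instead of trusting the input (a validation sketch)."""
    y = box_coord * (image_size - 1)
    return min(max(y, 0.0), float(image_size - 1))

print(sample_coord(1.0e40, 64))  # 63.0 -- huge input clamped in range
print(sample_coord(-5.0, 64))    # 0.0
```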
tensorflowtensorflow | tensorflow hang forever in multinode training with nccl and certain model with syncbatchnorm layer | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 14 04 run in docker container host be 18 04 tensorflow instal from source or binary source tensorflow version use command below 2 2 0 python version 3 6 8 bazel version if compile from source 0 29 1 1 0 gcc compiler version if compile from source 4 8 5 cuda cudnn version 10 0 gpu model and memory various e g nvidia rtx 2080 ti describe the current behavior in certain circumstance tensorflow appear to hang forever before start training multinode training use multiworkermirroredstrategy the bug do not appear when use one worker with multiple gpu or with one gpu nccl communication not 100 sure on this a specific model which seem to require syncbatchnorm layer interleave with conv2d layer this seem to happen both on the estimator and keras framework the output be slightly different but my guess be that it s the same bug I ve try to test with tf nightly but unfortunately I ve have difficulty get the correct cuda library for our particular setup and haven t be able to verify whether it be reproducible on tf nightly however if you e g be unable to reproduce and would like I to take another look please let I know describe the expect behavior tensorflow should not hang forever this toy model in particular finish training in second but when the bug be trigger hang for hour with no output standalone code to reproduce the issue please note that this code must be run with multiple worker to reproduce the issue and the tf config environment variable should be set correspondingly accord to your specific setup as far as I can see on colab it be possible to have multiple gpu on one worker but I m not sure how to set it up with multiple worker with one gpu per worker so I m not sure 
if this problem can be reproduced on Colab; but if there is a way to use multiple workers, please point me at a guide and I can try to reproduce on Colab.

Estimator code:

import logging
import os
import shutil
import sys
import numpy as np
import tensorflow as tf

def setup_multi_node_training():
    # IMPORTANT: set up TF_CONFIG for multinode training here!
    os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
    tf.config.set_soft_device_placement(True)
    mirrored_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
        tf.distribute.experimental.CollectiveCommunication.NCCL)
    # Construct the configuration
    run_config = tf.estimator.RunConfig(train_distribute=mirrored_strategy)
    return run_config

def input_fn():
    dataset = tf.data.Dataset.from_tensors(tf.random.normal(shape=[496, 496, 64, 3]))
    dataset = dataset.repeat()
    return dataset

def batch_norm(x, is_training):
    layer = tf.keras.layers.experimental.SyncBatchNormalization(axis=-1)
    x_norm = layer(x, is_training)
    with tf.control_dependencies(layer.get_updates_for(x)):
        x_norm = tf.identity(x_norm)
    return x_norm

def inference(features, is_training):
    conv1 = tf.keras.layers.Conv2D(32, 3, padding="same")(features)
    conv1bn = batch_norm(conv1, is_training)
    deconv1bn = batch_norm(conv1bn, is_training)
    conv2 = tf.keras.layers.Conv2D(32, 3, padding="same")(conv1bn)
    conv2bn = batch_norm(conv2, is_training)
    return tf.keras.layers.concatenate([conv1bn, deconv1bn, conv2bn])

def compute_loss(predictions, labels, is_training):
    return tf.reduce_mean(predictions)

def model_fn(features, labels, mode):
    global_step = tf.compat.v1.train.get_global_step()
    is_training = mode == tf.estimator.ModeKeys.TRAIN
    predictions = inference(features, is_training)
    loss = compute_loss(predictions, labels, is_training)
    optimizer = tf.compat.v1.train.GradientDescentOptimizer(1e-5)
    train_op = optimizer.minimize(loss, global_step=global_step)
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

def main():
    model_dir = "/tmp/output"
    run_config_params = dict(
        save_checkpoints_steps=100,
        save_summary_steps=100,
        log_step_count_steps=100,
        tf_random_seed=0,
        keep_checkpoint_max=1,
        model_dir=model_dir)
    run_config = setup_multi_node_training().replace(**run_config_params)
    estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
    train_spec = tf.estimator.TrainSpec(input_fn=input_fn, max_steps=1000)
    eval_spec = tf.estimator.EvalSpec(input_fn=input_fn, steps=100,
                                      throttle_secs=0, start_delay_secs=0)
    print("training and evaluating model")
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

if __name__ == "__main__":
    main()

Keras code:

import os
import tensorflow as tf
from tensorflow import keras

class MyModel(keras.Model):
    def __init__(self, **kwargs):
        super().__init__(name="my_model", **kwargs)
        self.conv1 = keras.layers.Conv2D(16, 3, padding="same")
        self.sbn1 = keras.layers.experimental.SyncBatchNormalization()
        self.sbn2 = keras.layers.experimental.SyncBatchNormalization()
        self.conv2 = keras.layers.Conv2D(32, 3, padding="same")
        self.sbn3 = keras.layers.experimental.SyncBatchNormalization()
        self.concat = keras.layers.Concatenate()

    def call(self, inputs, training=False):
        conv1 = self.conv1(inputs)
        conv1bn = self.sbn1(conv1, training)
        conv1bn2 = self.sbn2(conv1bn, training)
        conv2 = self.conv2(conv1bn)
        conv2bn = self.sbn3(conv2, training)
        return self.concat([conv1bn, conv1bn2, conv2bn])

def get_dataset():
    dataset = tf.data.Dataset.from_tensors(tf.random.normal(shape=[496, 496, 64, 3]))
    dataset = dataset.repeat()
    dataset = tf.data.Dataset.zip((dataset, dataset))
    return dataset

def main():
    model_dir = "/tmp/keras_example"
    # IMPORTANT: set up TF_CONFIG for multinode training here!
    os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
    tf.config.set_soft_device_placement(True)
    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
        tf.distribute.experimental.CollectiveCommunication.NCCL)
    # Create dataset
    train_dataset = get_dataset()
    with strategy.scope():
        model = MyModel()
        model.compile(optimizer=keras.optimizers.Adam(),
                      loss=keras.losses.MeanSquaredError())
    model.fit(x=train_dataset, steps_per_epoch=100, epochs=1)

if __name__ == "__main__":
    main()

Other info / logs: there are three log files: estimator.log; an estimator log with more debug output (TF_CPP_MIN_LOG_LEVEL=0 and TF_CPP_MIN_VLOG_LEVEL=2), note this is a Google Drive link as the file is 20 MB; keras.log. I had trouble getting the Keras log with debug output, so I haven't
included that, but will update if I can get it. Note that the second estimator log is gigantic, but most of the log is output in about a minute, and then the last hundred lines or so seem to be of the job actually hanging until it is killed after about 15 minutes, with this output repeating:

[0] 2020-08-06 00:22:30.510236: I external/org_tensorflow/tensorflow/core/framework/model.cc:892] Starting optimization of tunable parameters with HillClimb
[0] 2020-08-06 00:22:30.510355: I external/org_tensorflow/tensorflow/core/framework/model.cc:943] Number of tunable parameters: 0
[0] 2020-08-06 00:22:30.510377: I external/org_tensorflow/tensorflow/core/kernels/data/model_dataset_op.cc:191] Waiting for 60000 ms

It seems somewhat bizarre that if there are 0 parameters to tune, TensorFlow should still try to repeatedly optimize the nonexistent tunable parameters. I wonder if there's some issue with the model.cc code not properly signalling that it has finished if there are no tunable parameters. I see that in lines 1485-1487 of core/framework/model.cc there is some code that seems to be doing something with the mutex and notification, but this block would be entirely skipped if there are no tunable parameters, and that might contribute to the infinite loop here. Thanks so much!
tensorflowtensorflow | problem with tf train floatlist precision | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): 18.04. TensorFlow installed from (source or binary): pip. TensorFlow version (use command below): 2.3.0. Python version: 3.6.10. CUDA/cuDNN version: 10.2. You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: (TF 1.0) python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"; (TF 2.0) python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)". Describe the current behavior: I want to convert my Pascal VOC data to TFRecord, but my bounding-box values are not the same. For example, my xmin is 0.620390625, but the value I get from the Example is 0.6203906536102295:

import tensorflow as tf

xmin = [0.620390625]
feature_dict = {"xmin": tf.train.Feature(float_list=tf.train.FloatList(value=xmin))}
example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
print(xmin)     # [0.620390625]
print(example)  # feature { key: "xmin" value { float_list { value: 0.6203906536102295 } } }

And I also tried other values; only the value 2.7182817459106445 shown in the TFRecord tutorial comes out the same, as in the screenshot below. [screenshot: Screenshot from 2020-08-06 16-57-53] Describe the expected behavior: values should be the same. Standalone code to reproduce the issue: Colab link.
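tf.train.FloatList stores each value in IEEE-754 single precision, so the difference reported above is ordinary float32 rounding rather than data corruption. A minimal sketch reproducing the same rounding with only the standard library (no TensorFlow involved; the helper name is mine, for illustration):

```python
import struct

# Round-trip a Python float (a double) through IEEE-754 single precision,
# which is effectively what tf.train.FloatList does to each stored value.
def to_float32(x):
    return struct.unpack("<f", struct.pack("<f", x))[0]

print(to_float32(0.620390625))         # 0.6203906536102295, same value the proto shows
print(to_float32(2.7182817459106445))  # already exactly a float32 value, so unchanged
```

0.620390625 has no finite binary expansion, so float32 keeps only the nearest representable value; 2.7182817459106445 from the tutorial survives because it already is a float32 value widened to a double.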
tensorflowtensorflow | qat conversion runtimeerror: quantization not yet supported for op dequantize, issue with tf nightly | Bug | UPDATE: you can now fully quantize QAT models trained in any TF 2.x version. However, this feature is only available from TF version 2.4.0-rc0 onwards (and will be available in the final TF 2.4 release as well). You will not require any workaround, i.e. you don't have to use TF 1.x. To verify that your TF version supports full quantization of QAT models, run the following code and check that it runs successfully:

import tensorflow as tf
assert tf.__version__[:3] >= "2.4", (
    "Your TF version ({}) does not support full quantization of QAT models. "
    "Upgrade to a TF 2.4 version (2.4.0-rc0, 2.4.0-rc1, 2.4 or above).".format(tf.__version__))

Issue: System information: TensorFlow version (use command below): 2.4.0-dev20200728. Describe the current behavior: error converting a quantization-aware-trained TensorFlow model to a fully integer quantized TFLite model. Error: RuntimeError: Quantization not yet supported for op: DEQUANTIZE. Describe the expected behavior: converts successfully. Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to Colab/Jupyter/any notebook.
tensorflowtensorflow | doc: ram usage when building from source | Bug | URL(s) with the issue: Description of issue (what needs changing): approximately how much RAM is used when compiling. IMO "building TensorFlow from source can use a lot of RAM" is not clear: how much is "a lot of", how many GB? I have a VM with 6 vCPUs and 10 GB RAM, and I frequently run out of memory. Running out of memory not only slows down the compile, it also makes the system freeze/become unresponsive; every time this happens I need to increase the RAM by 500 MB to unlock the system. Clear description (why we need this): because we don't want to sleep with the machine compiling overnight and, when we wake up, it turns out the compile failed or the machine froze, which is really a waste of time. An approximate RAM usage figure should help anticipate this. What about --local_ram_resources? Well, I already tried the flag --local_ram_resources=HOST_RAM*.2, as described here. About the --local_ram_resources flag: does this mean the building process will only take 20% of RAM? Is this global, or per thread? Is this flag supported on the v2.3.0 tag of this repo? I watched htop and saw many tasks at once using more than 20%, and I ran out of memory again even with this; this is ridiculous. Submit a pull request to this repo? Yes, if I can get the approximate RAM usage. I think it's at most around 3 GB per CPU core; correct me if I'm wrong. Other information: well, it seems I have a lot of RAM, but I still ran out of memory. The VM is a live CD (not installed), and as for swap, yeah, it's zram (5 GB) instead of a swapfile, because, uh, I don't want to kill the SSD. Related issue: #30047
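For reference, limiting both scheduled RAM and parallelism usually takes a combination of flags; a hedged sketch (the percentages and job count here are illustrative choices, not values from the report):

```shell
# --local_ram_resources only caps what Bazel *schedules*, not what a single
# compiler process actually allocates, so lowering --jobs is often what
# really prevents the freezes described above.
bazel build \
  --local_ram_resources=HOST_RAM*.5 \
  --local_cpu_resources=4 \
  --jobs=4 \
  //tensorflow/tools/pip_package:build_pip_package
```

With --jobs=4 at most four compile actions run concurrently, so peak memory is roughly bounded by four compiler processes regardless of how optimistic the RAM estimate is.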
tensorflowtensorflow | keras meets a problem with sync bn in multi worker | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): n/a. OS platform and distribution (e.g. Linux Ubuntu 16.04): Debian. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.3.0. Python version: 3.7.4. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. I just replaced the Keras BN layer in the official resnet.py with tf.keras.layers.experimental.SyncBatchNormalization. I found that in NCCL multi-worker mode the model.fit training process gets stuck without any error information and never begins training; the same code works well in RING mode. I tested the code with 2-4 hosts, each with 2-4 GPUs; it didn't work in any of these configurations. Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to Colab/Jupyter/any notebook. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.
tensorflowtensorflow | tf website tutorial "save and load" fails on google colab for entire model | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g. Linux Ubuntu 16.04): Google Colab (in the cloud), run in a Safari browser on macOS 10.14.6. Mobile device: n/a. TensorFlow installed from (source or binary): n/a. TensorFlow version (use command below): TF 2.3.0, git version v2.3.0-0-gb36436b087. Python version: appears to be 3.6. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: GPU model and memory: Describe the current behavior: the TensorFlow tutorial page fails even when run in Google Colab, your cloud environment. Where it fails exactly is towards the bottom, in the section of the tutorial describing how to save and load entire models with model.save and load_model. I found that the restored model, in either SavedModel or HDF5 format, fails to reproduce the original model accuracy given exactly the provided code; instead I get around 8-9% accuracy. I get the same problem running the downloaded Jupyter notebook on my own laptop (macOS 10.14.6, TF 2.4.0-dev20200727, git v1.12.1-37595-g9f2e1a7246, TF from pip3, Python 3.8.5). Describe the expected behavior: the expected behavior is for the saved and reloaded model to have exactly the same accuracy on the same test data. Standalone code to reproduce the issue: I've attached a small notebook that replicates the problem on my laptop: minimal.ipynb.txt. This is modeled after a similar tutorial, under "Save model weights and architecture together". That tutorial's code runs fine with the given example data on my laptop: the saved and reloaded model has identical accuracy to the original. Yet the very similar code in the attached notebook, using the MNIST data and a slightly different model, does not. I'm not sure what this means. Honestly, I'm pretty new to Keras, so I'm not sure if this is a bug in the Keras code or just an error in the tutorial, but again, it doesn't work with the provided tutorial in the Google Colab environment, suggesting it's not an error on my part. FWIW, I do notice that the reloaded SavedModel model and the reloaded HDF5 model, when saved from the same original model as in my example notebook, both give the identical bad accuracy. Other info / logs: not sure if relevant, but these warnings appear the first time model.save is used. From Google Colab:

WARNING:tensorflow: From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version. Instructions for updating: This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow: From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version. Instructions for updating: This property should not be used in TensorFlow 2.0, as updates are applied automatically.
INFO:tensorflow: Assets written to: saved_model/my_model/assets

On my laptop I get the same warnings with a different path to tracking.py (/Users/lilley/Library/Python/3.8/lib/python/site-packages/tensorflow/python/training/tracking/tracking.py). The example run given on the tutorial page suggests that this functionality was originally working in TF 2.2.0.
tensorflowtensorflow | timeseries example is not working with latest 2 4 code | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source; to get involved, read the documentation contributor guide. URL(s) with the issue: When it comes to the convolution model, this errors out:

history = compile_and_fit(conv_model, conv_window)
IPython.display.clear_output()
val_performance['Conv'] = conv_model.evaluate(conv_window.val)
performance['Conv'] = conv_model.evaluate(conv_window.test, verbose=0)

Error: NotFoundError: No algorithm worked! [[node sequential_3/conv1d/conv1d (defined at [12])]] [Op:__inference_train_function_127129] Function call stack: train_function
tensorflowtensorflow | tf nn conv2d transpose name is overwritten | Bug | System information: OS platform and distribution: Linux Ubuntu 18.04. TensorFlow installed from (source or binary): TF 2.2. TensorFlow version: v2.2.0-rc4-8-g2b96f3662b, 2.2.0. Python version: 3.7.7. CUDA/cuDNN version: 10.2 / 7.6. GPU model and memory: Describe the current behavior: the conv2d_transpose output tensor name is "conv2dbackpropinput:0". Describe the expected behavior: the conv2d_transpose output tensor name should be "ct:0". Standalone code to reproduce the issue:

import numpy as np
import tensorflow as tf
from tensorflow.keras import Input, Model

inputs = Input(shape=(32, 32, 3), name="net_input")
w0 = tf.Variable(np.ones((4, 4, 3, 6)).astype(np.float32) / 16 / 16, name="w0")
c0 = tf.nn.conv2d(inputs, w0, 2, "SAME", name="conv0")
r0 = tf.nn.relu(c0, name="relu0")
wt = tf.Variable(np.ones((3, 3, 6, 6)).astype(np.float32) / 16 / 16, name="wt")
ct = tf.nn.conv2d_transpose(r0, wt, [1, 16, 16, 6], strides=1, name="ct")
rt = tf.nn.relu(ct, name="relut")
wt2 = tf.Variable(np.ones((3, 3, 6, 6)).astype(np.float32) / 16 / 16, name="wt2")
ct2 = tf.nn.conv2d_transpose(rt, wt2, [1, 16, 16, 6], strides=1, name="ct2")
rt2 = tf.nn.relu(ct2, name="relut2")
w1 = tf.Variable(np.ones((16, 16, 6, 4)).astype(np.float32) / 16 / 16, name="w1")
c1 = tf.nn.conv2d(rt2, w1, 1, "VALID", name="conv1")
out = tf.nn.relu(c1, name="relu1")
m = Model(inputs=inputs, outputs=out, name="test")
print(ct.name)
tensorflowtensorflow | notfounderror: no algorithm worked, convolutional model | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g. Linux Ubuntu 16.04): 20.04. TensorFlow installed from (source or binary): latest as of today. TensorFlow version (use command below): 2.4.0. Python version: 3.8.0. Bazel version (if compiling from source): 3.1.0. GCC/compiler version (if compiling from source): CUDA/cuDNN version: 11.0. GPU model and memory: RTX 2070. The sample code to debug this issue is at:

NotFoundError: No algorithm worked! [[node sequential_3/conv1d/conv1d (defined at [12])]] [Op:__inference_train_function_127130] Function call stack: train_function
tensorflowtensorflow | typo in the order numbers for titles in guide "automatic differentiation" | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source; to get involved, read the documentation contributor guide. URL(s) with the issue: Description of issue (what needs changing): when listing the reasons in the "None gradients" chapter, under "Getting a gradient of None", the item "3. Took gradients through an integer or string" is followed by order number 5, which is "5. Took gradients through a stateful object". I suggest correcting it to "4. Took gradients through a stateful object".
tensorflowtensorflow | tf 2 4 0 built from source gets internalerror: cuda runtime implicit initialization on gpu:0 failed, status: device kernel image is invalid | Bug | nvidia-smi command issued from inside the container:

NVIDIA-SMI 450.57   Driver Version: 450.57   CUDA Version: ERR!
GPU 0: GeForce GTX 1070 | Persistence-M: Off | Bus-Id: 00000000:01:00.0 | Disp.A: Off | Volatile Uncorr. ECC: N/A
Fan: 0% | Temp: 33C | Perf: P8 | Pwr Usage/Cap: 6W / 180W | Memory-Usage: 193MiB / 8117MiB | GPU-Util: 0% | Compute M.: Default | MIG M.: N/A
(no running processes)

I am running Ubuntu 20.04. I followed the instructions to build from source. After I compiled TF inside the container, I committed and saved it. I ran the following commands to load the image and execute a Jupyter notebook:

docker run --gpus all --ipc=host -it -w /tensorflow -v $PWD:/mnt -p 8888:8888 -e HOST_PERMS="$(id -u):$(id -g)" tensorflow/tensorflow:from_src2 bash
export LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:/usr/include/x86_64-linux-gnu:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
pip install jupyter
pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws
jupyter notebook --no-browser --notebook-dir=/mnt/notebooks --ip=0.0.0.0 --debug --NotebookApp.allow_origin='*' --NotebookApp.allow_remote_access=True --allow-root

This got me a running notebook server. I tried to run the TensorFlow tutorial text_classification.ipynb file. When I run

raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
    'aclImdb/train',
    batch_size=batch_size,
    validation_split=0.2,
    subset='training',
    seed=seed)

in the Jupyter notebook, I get: TypeError: Could not build a TypeSpec for aclImdb/train/neg/4932_4.txt ... (with type list). There follow many pages of text similar to the above, and then I get the following:

During handling of the above exception, another exception occurred:

InternalError Traceback (most recent call last), abridged to the frame list (all under /usr/local/lib/python3.6/dist-packages/tensorflow/python/):
  keras/preprocessing/text_dataset.py, in text_dataset_from_directory(directory, labels, label_mode, class_names, batch_size, max_length, shuffle, seed, validation_split, subset, follow_links)
  keras/preprocessing/text_dataset.py, in paths_and_labels_to_dataset(file_paths, labels, label_mode, num_classes, max_length): path_ds = dataset_ops.Dataset.from_tensor_slices(file_paths)
  data/ops/dataset_ops.py, in from_tensor_slices(tensors): return TensorSliceDataset(tensors)
  data/ops/dataset_ops.py, in __init__(self, element): element = structure.normalize_element(element)
  data/util/structure.py, in normalize_element(element): ops.convert_to_tensor(t, name="component_%d" % i)
  framework/ops.py, in convert_to_tensor(...): ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  framework/constant_op.py, in _constant_tensor_conversion_function / constant / _constant_impl / convert_to_eager_tensor: ctx.ensure_initialized(); return ops.EagerTensor(value, ctx.device_name, dtype)
  eager/context.py, in ensure_initialized(self): context_handle = pywrap_tfe.TFE_NewContext(opts)
InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: device kernel image is invalid

This, from tensorflow/src/.bazelrc:
build:release_gpu_common --action_env TF_CUDA_COMPUTE_CAPABILITIES="sm_35,sm_37,sm_52,sm_60,sm_61,compute_70"
I believe the GeForce 1070 is sm_61 (compute level 6.1). Some software versions: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0; Python 3.6.9; nvcc: NVIDIA (R) Cuda compiler driver, Copyright (c) 2005-2019 NVIDIA Corporation, built on Sun_Jul_28_19:07:16_PDT_2019, Cuda compilation tools, release 10.1, V10.1.243. Please make sure that this is a bug.
As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (The remaining standard bug-template fields, system information, current behavior, expected behavior, and standalone code to reproduce, were left unfilled here.) Other info / logs: I will attach the full .bazelrc file and a piped output of the build from source when I can figure out how to do that. I'm on an iPad now and can copy and paste, but I can't seem to figure out how to copy a file to the iPad and then upload it to a GitHub issue.
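A "device kernel image is invalid" error at context initialization is commonly a compute-capability mismatch between the build and the GPU. A hedged sketch of pinning the build to the GTX 1070's compute capability 6.1 (the flag value here is my suggestion, not taken from the report's .bazelrc):

```shell
# Build only for compute capability 6.1 (GTX 1070), so the resulting
# wheel contains a cubin/PTX that the driver can actually load.
export TF_CUDA_COMPUTE_CAPABILITIES=6.1
./configure   # picks up the variable, or pass it explicitly:
bazel build --config=cuda \
  --action_env TF_CUDA_COMPUTE_CAPABILITIES=6.1 \
  //tensorflow/tools/pip_package:build_pip_package
```

Note that the quoted release list does include sm_61, so the mismatch, if any, would have to come from the actual flags used in this particular build, which is why the full .bazelrc mentioned above matters.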
tensorflowtensorflow | error in nce loss using intermediate layer output | Bug | Standalone code to reproduce the issue: Colab notebook. System information: TensorFlow version on my machine: v2.2.0-rc4-8-g2b96f3662b, 2.2. I am trying to use the NCE loss by passing the second-to-last layer's output to this loss function. I then get the error: OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
tensorflowtensorflow | tf math l2 normalize gradient seems to be wrong | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g. Linux Ubuntu 16.04): macOS 10.16, Linux Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from (source or binary): Docker container versions tensorflow/tensorflow:2.2.0-gpu and tensorflow/tensorflow:2.3.0. TensorFlow version (use command below): v2.2.0-rc4-8-g2b96f3662b 2.2.0 and v2.3.0-rc2-23-gb36436b087 2.3.0. Python version: 3.6. Bazel version (if compiling from source): GCC/compiler version (if compiling from source): CUDA/cuDNN version: 10.0.130 on Linux, no CUDA on macOS. GPU model and memory: GTX 1080 Ti, 12 GB, on Linux; no GPU on macOS. Describe the current behavior: the gradient of tf.math.l2_normalize seems to be wrong. I confirmed both analytically and with numerical computation that the values computed by TensorFlow are off. There is always a chance my math is wrong, but I'm quite confident in it, and the numerical computation indicates the same result as my working. Here's a unit test that compares the gradients computed by TensorFlow to expected values; since they differ, the test fails:

def test_l2_gradient_with_tensorflow():
    x = tf.constant([3., 4.], tf.float32)
    with tf.GradientTape() as gradient_tape:
        gradient_tape.watch(x)
        y = tf.math.l2_normalize(x)
    expected_y = np.array([0.6, 0.8])
    assert np.all(np.isclose(expected_y, y))
    gradients = gradient_tape.gradient(y, x)
    expected_gradients = np.array([0.128, 0.072])
    # this line fails: TensorFlow computes the gradients to be [0.032, -0.024]
    assert np.all(np.isclose(expected_gradients, gradients))

Describe the expected behavior: below is my working for what the correct dy/dx1 derivative should be for the values in the unit test above (dy/dx2 follows the same logic):

y1 = x1 / sqrt(x1^2 + x2^2)
dy1/dx1 = x2^2 / (x1^2 + x2^2)^(3/2) = 16 / 125 = 0.128

[image: IMG_20200801_205810, handwritten working for dy/dx1]

I also double-checked with numerical computation in NumPy, and it agrees with my numbers:

def test_l2_gradient_with_numpy():
    # specify x1 only
    x1 = np.arange(2.998, 3.003, 0.001)
    # index for x1 == 3
    test_index = 2
    # x2 is set to the constant 4
    y = x1 / np.sqrt(x1 * x1 + 16)
    expected_y = 0.6
    assert np.isclose(expected_y, y[test_index])
    gradients = np.gradient(y, x1)
    expected_gradient = 0.128
    assert np.isclose(expected_gradient, gradients[test_index])

Standalone code to reproduce the issue: please refer to the unit tests above. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.
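The analytic value in the report can be checked without NumPy or TensorFlow; a small sketch (function name is mine) using a central difference on y1 = x1 / sqrt(x1^2 + x2^2) at (3, 4):

```python
import math

def y1(x1, x2=4.0):
    # first component of l2_normalize([x1, x2])
    return x1 / math.sqrt(x1 * x1 + x2 * x2)

h = 1e-6
numeric = (y1(3.0 + h) - y1(3.0 - h)) / (2 * h)
analytic = 4.0 ** 2 / (3.0 ** 2 + 4.0 ** 2) ** 1.5  # x2^2 / r^3 = 16/125
print(numeric, analytic)  # both are 0.128 to within rounding
```

This confirms the diagonal Jacobian entry dy1/dx1 = 0.128. One caveat worth noting: tf.GradientTape.gradient of a vector-valued y returns the gradient of sum(y), not the per-component partials, which may explain the reported [0.032, -0.024], since d(y1+y2)/dx1 = (16 - 12)/125 = 0.032 at (3, 4).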
tensorflowtensorflow | keras 2 4 3 test misses cases | Bug | System information: OS platform and distribution: Linux Ubuntu 16.04. TensorFlow installed from (source or binary): source. TensorFlow version: 2.2.0. Python version: 3.8.5. Bazel version (if compiling from source): 3.1.0. GCC/compiler version (if compiling from source): 5.4.0. CUDA/cuDNN version: CUDA 10.0, cuDNN 7.6.5.32. GPU model and memory: GTX 1070, 8 GB. Keras version: 2.4.3. I have an MNIST script to test my TensorFlow installation:

'''Trains a simple convnet on the MNIST dataset.
Gets to 99.25% test accuracy after 12 epochs (there is still a lot of
margin for parameter tuning). 16 seconds per epoch on a GRID K520 GPU.
'''
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
          verbose=1, validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

Describe the current behavior: a lot of samples are lost. Here is the output of running the script with Keras 2.4.3 (TensorFlow backend) in my setup (python = python3), compared to the expected behaviour below with Keras 2.3.1: there are only 469 samples in an epoch instead of 60000.

$ python3 /home/bernard/python_dev/test/mnist_cnn_demo.py
2020-07-31 21:34:53.674307: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
2020-07-31 21:34:55.075415: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-07-31 21:34:55.104800: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-31 21:34:55.105872: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.695GHz coreCount: 16 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-07-31 21:34:55.105894: I ... Successfully opened dynamic library libcudart.so.10.0
2020-07-31 21:34:55.107057: I ... Successfully opened dynamic library libcublas.so.10.0
2020-07-31 21:34:55.107952: I ... Successfully opened dynamic library libcufft.so.10.0
2020-07-31 21:34:55.108219: I ... Successfully opened dynamic library libcurand.so.10.0
2020-07-31 21:34:55.109467: I ... Successfully opened dynamic library libcusolver.so.10.0
2020-07-31 21:34:55.110373: I ... Successfully opened dynamic library libcusparse.so.10.0
2020-07-31 21:34:55.113179: I ... Successfully opened dynamic library libcudnn.so.7
(the "successful NUMA node read from SysFS had negative value (-1) ..." message repeats several times during device discovery)
2020-07-31 21:34:55.114509: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-07-31 21:34:55.121289: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2904000000 Hz
2020-07-31 21:34:55.121631: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x42e2280 initialized for platform Host (this does not guarantee that XLA will be used)
2020-07-31 21:34:55.121650: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-07-31 21:34:55.182711: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x436c5e0 initialized for platform CUDA (this does not guarantee that XLA will be used)
2020-07-31 21:34:55.182743: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1
(device properties and dynamic-library lines repeat: libcudart, libcublas, libcufft, libcurand, libcusolver, libcusparse, libcudnn)
2020-07-31 21:34:55.184688: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-07-31 21:34:55.665812: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-31 21:34:55.665860: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0
2020-07-31 21:34:55.665888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N
2020-07-31 21:34:55.667435: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6988 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
Epoch 1/12
2020-07-31 21:34:56.279338: I ... Successfully opened dynamic library libcublas.so.10.0
2020-07-31 21:34:56.477766: I ... Successfully opened dynamic library libcudnn.so.7
469/469 - 4s 8ms/step - loss: 2.2858 - accuracy: 0.1543 - val_loss: 2.2619 - val_accuracy: 0.2190
Epoch 2/12
469/469 - 3s 7ms/step - loss: 2.2462 - accuracy: 0.2411 - val_loss: 2.2135 - val_accuracy: 0.3888
Epoch 3/12
469/469 - 3s 7ms/step - loss: 2.1951 - accuracy: 0.3359 - val_loss: 2.1482 - val_accuracy: 0.5546
Epoch 4/12
469/469 - 3s 7ms/step - loss: 2.1254 - accuracy: 0.4195 - val_loss: 2.0573 - val_accuracy: 0.6437
Epoch 5/12
469/469 - 3s 7ms/step - loss: 2.0294 - accuracy: 0.4904 - val_loss: 1.9342 - val_accuracy: 0.6910
Epoch 6/12
469/469 - 3s 7ms/step - loss: 1.9057 - accuracy: 0.5410 - val_loss: 1.7749 - val_accuracy: 0.7224
Epoch 7/12
469/469 - 3s 7ms/step - loss: 1.7500 - accuracy: 0.5860 - val_loss: 1.5826 - val_accuracy: 0.7497
Epoch 8/12
469/469 - 3s 7ms/step - loss: 1.5807 - accuracy: 0.6156 - val_loss: 1.3772 - val_accuracy: 0.7735
Epoch 9/12
469/469 - 3s 7ms/step - loss: 1.4153 - accuracy: 0.6406 - val_loss: 1.1840 - val_accuracy: 0.7954
Epoch 10/12
469/469 - 3s 7ms/step - loss: 1.2699 - accuracy: 0.6618 - val_loss: 1.0213 - val_accuracy: 0.8093
Epoch 11/12
469/469 - 3s 7ms/step - loss: 1.1508 - accuracy: 0.6834 - val_loss: 0.8939 - val_accuracy: 0.8220
Epoch 12/12
469/469 - 3s 7ms/step - loss: 1.0579 - accuracy: 0.7003 - val_loss: 0.7974 - val_accuracy: 0.8316
Test loss: 0.7974222898483276
Test accuracy: 0.83160001039505

Describe the expected behavior:
Downgrade Keras to version 2.3.1 and run the script again; the outcome is then consistent with older versions of Keras (below). Note that "Train on 60000 samples, validate on 10000 samples" is missing from the output above, which used Keras 2.4.3.

$ python /home/bernard/python_dev/test/mnist_cnn_demo.py
Using TensorFlow backend.
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
2020-07-31 21:17:44.616490: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-07-31 21:17:44.641796: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-07-31 21:17:44.643804: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.695GHz coreCount: 16 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-07-31 21:17:44.644032: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-07-31 21:17:44.645151: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-07-31 21:17:44.646009: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-07-31 21:17:44.646247: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-07-31 21:17:44.647392: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-07-31 21:17:44.648247: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-07-31 21:17:44.650968: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-31 21:17:44.652674: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-07-31 21:17:44.658691: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2904000000 Hz
2020-07-31 21:17:44.659154: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5cb4a20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-31 21:17:44.659167: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-07-31 21:17:44.719399: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5d43f30 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-07-31 21:17:44.719434: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1
2020-07-31 21:17:44.722447: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-31 21:17:44.722457: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0
2020-07-31 21:17:44.722479: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N
2020-07-31 21:17:44.724393: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7088 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
Train on 60000 samples, validate on 10000 samples
Epoch 1/12
60000/60000 - 5s 91us/step - loss: 0.2614 - accuracy: 0.9188 - val_loss: 0.0578 - val_accuracy: 0.9811
Epoch 2/12
60000/60000 - 4s 67us/step - loss: 0.0910 - accuracy: 0.9730 - val_loss: 0.0411 - val_accuracy: 0.9857
Epoch 3/12
60000/60000 - 4s 67us/step - loss: 0.0656 - accuracy: 0.9805 - val_loss: 0.0407 - val_accuracy: 0.9862
Epoch 4/12
60000/60000 - 4s 67us/step - loss: 0.0538 - accuracy: 0.9837 - val_loss: 0.0311 - val_accuracy: 0.9898
Epoch 5/12
60000/60000 - 4s 67us/step - loss: 0.0483 - accuracy: 0.9850 - val_loss: 0.0288 - val_accuracy: 0.9904
Epoch 6/12
60000/60000 - 4s 67us/step - loss: 0.0412 - accuracy: 0.9878 - val_loss: 0.0291 - val_accuracy: 0.9905
Epoch 7/12
60000/60000 - 4s 67us/step - loss: 0.0381 - accuracy: 0.9888 - val_loss: 0.0283 - val_accuracy: 0.9902
Epoch 8/12
60000/60000 - 4s 67us/step - loss: 0.0354 - accuracy: 0.9892 - val_loss: 0.0286 - val_accuracy: 0.9914
Epoch 9/12
60000/60000 - 4s 68us/step - loss: 0.0330 - accuracy: 0.9900 - val_loss: 0.0275 - val_accuracy: 0.9916
Epoch 10/12
60000/60000 - 4s 74us/step - loss: 0.0314 - accuracy: 0.9909 - val_loss: 0.0276 - val_accuracy: 0.9916
Epoch 11/12
60000/60000 - 5s 77us/step - loss: 0.0300 - accuracy: 0.9907 - val_loss: 0.0248 - val_accuracy: 0.9917
Epoch 12/12
60000/60000 - 5s 77us/step - loss: 0.0264 - accuracy: 0.9920 - val_loss: 0.0292 - val_accuracy: 0.9907
Test loss: 0.02923813719809523
Test accuracy: 0.9907000064849854
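The two outputs above describe the same amount of work: Keras 2.3.1 counts samples ("60000/60000") while Keras 2.4 (a thin wrapper over tf.keras) counts batches ("469/469"). A short check of that arithmetic, assuming the mnist_cnn example's usual batch size of 128 (the batch size is not stated in the report):

```python
import math

# Keras 2.3.1 printed per-sample progress ("60000/60000"); Keras 2.4 prints
# per-batch steps ("469/469"). With 60000 samples and an assumed batch size
# of 128, the step count per epoch is the ceiling of their ratio.
samples, batch_size = 60000, 128
steps = math.ceil(samples / batch_size)
print(steps)  # 469
```

So the change is a display convention, not a change in how much data is processed per epoch.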
tensorflow/tensorflow | TFLite on Android: "Didn't find op for builtin opcode 'CONV_2D' version '5'" | Bug |
System information:
- OS platform and distribution: Android emulator, Android 10.0 (Google APIs)
- TensorFlow installed from: model was generated with TF from binary
- TensorFlow version: model was generated with TF 2.2.0 and converted to TFLite with tf-nightly 2.4.0.dev20200730; using the tflite v1.1.1 Flutter bindings
- Text output from tflite_convert: no issue with tflite_convert, thanks to @amahendrakar in #41877

My issue is with using my model on the mobile side, via the tflite Flutter bindings. I get the following error when I try to load my custom model:

Unsupported value: java.lang.IllegalArgumentException: Internal error: Cannot create interpreter: Didn't find op for builtin opcode 'CONV_2D' version '5'. Registration failed.

Standalone code to reproduce the issue: see this gist. The error happens when I try to load the model file (home.dart, L31-L35). If I swap out my custom generated tflite model (see #41877 for how the model was generated) for an off-the-shelf SSD model, detection works just fine.

Any other info / logs:

Stack trace (Dart):
StandardMethodCodec.decodeEnvelope (message_codec.dart:569)
MethodChannel.invokeMethod (platform_channel.dart:321)
Tflite.loadModel (tflite.dart:15)
HomePageState.loadModel (home.dart:30)
HomePageState.initState (home.dart:26)
StatefulElement.firstBuild (framework.dart:4355)
ComponentElement.mount (framework.dart:4201)
Element.inflateWidget (framework.dart:3194)
Element.updateChild (framework.dart:2988)
(snip: a bunch of framework stuff)

Native (Android):
java.lang.IllegalArgumentException: Internal error: Cannot create interpreter: Didn't find op for builtin opcode 'CONV_2D' version '5'. Registration failed.
  at org.tensorflow.lite.NativeInterpreterWrapper.createInterpreter(Native Method)
  at org.tensorflow.lite.NativeInterpreterWrapper.init(NativeInterpreterWrapper.java:72)
  at org.tensorflow.lite.NativeInterpreterWrapper.(NativeInterpreterWrapper.java:63)
  at org.tensorflow.lite.Interpreter.(Interpreter.java:266)
  at sq.flutter.tflite.TflitePlugin.loadModel(TflitePlugin.java:232)
  at sq.flutter.tflite.TflitePlugin.onMethodCall(TflitePlugin.java:98)
  at io.flutter.plugin.common.MethodChannel$IncomingMethodCallHandler.onMessage(MethodChannel.java:230)
  at io.flutter.embedding.engine.dart.DartMessenger.handleMessageFromDart(DartMessenger.java:85)
  at io.flutter.embedding.engine.FlutterJNI.handlePlatformMessage(FlutterJNI.java:692)
  at android.os.MessageQueue.nativePollOnce(Native Method)
  at android.os.MessageQueue.next(MessageQueue.java:335)
  at android.os.Looper.loop(Looper.java:183)
  at android.app.ActivityThread.main(ActivityThread.java:7476)
  at java.lang.reflect.Method.invoke(Native Method)
  at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:549)
  at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:939)
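The likely shape of this failure (my reading, not stated in the report): each TFLite builtin op is versioned, and a model converted with a newer converter (tf-nightly here) can reference an op version that the older runtime bundled with the Flutter plugin never registered. A minimal stdlib sketch of that registration check; the names and version numbers are illustrative, not the actual TFLite API:

```python
# Illustrative sketch (not the real TFLite code): an interpreter keeps a
# registry of (opcode, version) pairs and refuses models that reference a
# version it has never registered.
registered = {("CONV_2D", v) for v in (1, 2, 3)}  # hypothetical old runtime

def find_op(opcode, version):
    if (opcode, version) not in registered:
        raise ValueError(
            f"Didn't find op for builtin opcode '{opcode}' version '{version}'")
    return (opcode, version)

find_op("CONV_2D", 3)      # an old-enough op version resolves fine
try:
    find_op("CONV_2D", 5)  # newer converter output: registration fails
except ValueError as e:
    print(e)
```

This is why converting with the same TF version as the deployed runtime, or upgrading the runtime dependency, is the usual way out of this class of error.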
tensorflow/tensorflow | Unresolved symbol EigenMatMulF64 when linking a compiled graph with the XLA AOT runtime | Bug |
System information:
- Have I written custom code: no
- OS platform and distribution: Windows 10
- TensorFlow installed from: binary
- TensorFlow version: 2.2
- Python version: 3.6.8

Describe the current behavior:
When compiling a TensorFlow graph using saved_model_cli aot_compile_cpu, the following link error occurs:

libcholesky.o : error LNK2019: unresolved external symbol __xla_cpu_runtime_EigenMatMulF64 referenced in function entry_34875272679f13979ced813466bdebb0 [C:\project\aot\aot_test_mve\build\main.vcxproj]
C:\project\aot\aot_test_mve\build\Release\main.exe : fatal error LNK1120: 1 unresolved externals [C:\project\aot\aot_test_mve\build\main.vcxproj]

I think this is because the AOT compilation references tensorflow/include/tensorflow/compiler/xla/service/cpu/runtime_matmul.h instead of tensorflow/include/tensorflow/compiler/xla/service/cpu/runtime_single_threaded_matmul.h.

Describe the expected behavior:
I expect that the exported model can be compiled by referencing only the files in tensorflow/xla_aot_runtime_src, as stated here (L193).

Standalone code to reproduce the issue:
I used and saved the following graph:

    import tensorflow as tf

    m = 4

    def cholesky(a, b):
        # Build a graph
        a = tf.linalg.cholesky(a)
        res = tf.linalg.triangular_solve(a, b)
        return res

    predict_fn = tf.function(cholesky, input_signature=[
        tf.TensorSpec(shape=[m, m], dtype=tf.float64, name='a'),
        tf.TensorSpec(shape=[m, 1], dtype=tf.float64, name='b'),
    ], experimental_compile=False)

    module_to_save = tf.Module()
    module_to_save.predict = predict_fn
    tf.saved_model.save(module_to_save, 'saved_model',
                        signatures={'serving_default': module_to_save.predict})

Then I compiled the graph using the saved_model_cli script:

    cd saved_model
    saved_model_cli aot_compile_cpu \
        --checkpoint_path variables/variables \
        --dir . \
        --signature_def_key serving_default \
        --target_triple x86_64-none-windows \
        --cpp_class Cholesky \
        --output_prefix libs64/libcholesky \
        --tag_set serve

Compiling this sample main.cpp and linking it against the generated library above and the XLA AOT CPU runtime that ships with the pip package (L193) produces the link error above (the iostream/algorithm includes were lost in the paste and are re-added here so the sample compiles):

    #include <iostream>
    #include <algorithm>
    #include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
    #include "libcholesky.h"  // generated

    #define M 4

    int main(int argc, char** argv) {
      Cholesky model;
      int i, j, idx;
      double test_a[] = {1.0583131,  0.8570645,  0.77131426, 0.9754439,
                         0.8570645,  0.8788354,  0.6582537,  0.88981044,
                         0.77131426, 0.6582537,  0.6411602,  0.7668645,
                         0.9754439,  0.88981044, 0.7668645,  1.3106735};
      double test_b[] = {1, 2, 3, 4};
      std::copy(test_a, test_a + M * M, model.arg0_data());
      std::copy(test_b, test_b + M, model.arg1_data());
      model.Run();
      std::cout << model.result0_data()[0] << std::endl;
      return 0;
    }
tensorflow/tensorflow | Documentation fix for "Text classification with movie reviews" | Bug |
This is the same issue mentioned here. Currently here (#scrollTo=zXXx5Oc3pOmN), under "Loss function and optimizer", it says:

    model.compile(optimizer='adam',
                  loss=tf.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])

This needs to be corrected to:

    model.compile(optimizer='adam',
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=[tf.metrics.BinaryAccuracy(threshold=0.0, name='accuracy')])
tensorflow/tensorflow | "Only 1 subgraph is currently supported" with CMSIS-NN | Bug |
TensorFlow Micro system information:
- Host OS platform and distribution: built with make on Linux Ubuntu 20.04; compiled with Atmel Studio on Windows 10; Python 3.7.7
- TensorFlow installed from: downloaded from master
- TensorFlow version (commit SHA): 2.3.0, e544dce
- Target platform: Atmel SAMD51, Atmel Studio

Describe the problem:
After some issues linking the TFLite-with-CMSIS-NN kernel files into my Atmel Studio project correctly, I built a static library with Atmel Studio and linked the .a file into the project. To do that, I ran make with the cmsis-nn tag, copied the files from a Keil project (I used the image_recognition example from the make directory), and compiled with Atmel Studio to get the .a file.

When I run inference compiling the files obtained without the cmsis-nn tag (in the tensorflow_lite subdirectory of the image_recognition example in the make folder) within the project, everything works fine. When I run inference with the static library obtained from make with the cmsis-nn tag, I get the following error:

Only 1 subgraph is currently supported.
Exiting with status 1.

The flatbuffer file is the same in both cases. Please refer to this colab to see the Python code; the trained model is there too, and the project is in the cmsis-nn branch. To replicate the issue: copy and paste the flatbuffer inside a working project with the cmsis-nn version of TFLite, load the model, and run inference.
tensorflow/tensorflow | Error with the TFLite hello_world example on ESP-EYE | Bug |
TensorFlow Micro system information:
- Host OS platform and distribution: Ubuntu 18.04
- TensorFlow installed from: source
- TensorFlow version (commit SHA): 2.4.0
- Target platform: ESP-EYE

Describe the problem:
I was trying to set up the ESP-EYE for TFLite. I have been able to do the basic ESP-EYE setup as per [link]. I was also able to use the AWS IoT example for TFLite at [link]. I was trying out the hello_world example for ESP-EYE at [link]. When I try to compile it per the instructions there for the ESP-EYE, I get a compiler error:

cc1plus: error: command line option '-std=c11' is valid for C/ObjC but not for C++ [-Werror]
cc1plus: error: command line option '-std=c11' is valid for C/ObjC but not for C++ [-Werror]
cc1plus: all warnings being treated as errors

Please provide the exact sequence of commands/steps when you ran into the problem:
Per the steps at the TFLite link above:

Generate the example. The example project can be generated with the following command:

    make -f tensorflow/lite/micro/tools/make/Makefile TARGET=esp generate_hello_world_esp_project

Build the example. Go to the example project directory:

    cd tensorflow/lite/micro/tools/make/gen/esp_xtensa-esp32/prj/hello_world/esp-idf

Then build with idf.py:

    idf.py build

I get this error repeatedly:

cc1plus: error: command line option '-std=c11' is valid for C/ObjC but not for C++ [-Werror]
cc1plus: all warnings being treated as errors

The error does go away if I remove -Werror in the line below in components/tfmicro/CMakeLists.txt:

    target_compile_options(${COMPONENT_LIB} PRIVATE -std=c++11 -DTF_LITE_STATIC_MEMORY -Werror -Wsign-compare -Wdouble-promotion -Wshadow -Wunused-variable -Wmissing-field-initializers -Wunused-function -DNDEBUG -O3 -Wno-return-type -Wno-strict-aliasing -Wno-ignored-qualifiers -Wno-return-type -Wno-strict-aliasing -Wno-ignored-qualifiers -Wno-return-type -Wno-strict-aliasing -Wno-return-type -Wno-strict-aliasing)

But the image, when flashed, does not work: it keeps crashing with a register dump. Any suggestions for this issue?
tensorflow/tensorflow | tf.math.reduce_euclidean_norm cannot be used in tf.function with experimental_compile=True | Bug |
System information:
- Have I written custom code: yes
- OS platform and distribution: Colab
- TensorFlow installed from: binary
- TensorFlow version: 2.4.0-dev20200730
- Python version: Colab
- GPU model and memory: no

Describe the current behavior:

EuclideanNorm unsupported op: No registered 'EuclideanNorm' OpKernel for XLA_CPU_JIT devices compatible with node {{node EuclideanNorm}}

Describe the expected behavior:
The op can run as usual.

Standalone code to reproduce the issue:

    import tensorflow as tf

    x = tf.complex(tf.random.uniform(shape=(5, 5)), tf.random.uniform(shape=(5, 5)))

    @tf.function(experimental_compile=True)
    def reduce_euclidean_norm(x):
        return tf.math.reduce_euclidean_norm(x)

    print(reduce_euclidean_norm(x))
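As a possible workaround (my suggestion, not part of the report): reduce_euclidean_norm(x) is mathematically sqrt(reduce_sum(abs(x)**2)), which is built only from ops that XLA is generally able to lower, so expressing the norm in those terms may sidestep the missing EuclideanNorm kernel. A pure-Python check of the identity for complex inputs:

```python
import math

def euclidean_norm(values):
    # sqrt of the sum of squared magnitudes; for complex z, abs(z)**2 = z * conj(z)
    return math.sqrt(sum(abs(z) ** 2 for z in values))

# abs(3+4j) = 5 and abs(0j) = 0, so the norm of the pair is exactly 5
print(euclidean_norm([3 + 4j, 0j]))  # 5.0
```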
tensorflow/tensorflow | Should this be zeros_like? | Bug | L104
tensorflow/tensorflow | TFLu r2.3 breaks memory allocation for the MicroInterpreter object with TF_LITE_STATIC_MEMORY enabled | Bug |
Compiling TFLu r2.3 with TF_LITE_STATIC_MEMORY gives an incorrect memory allocation for the MicroInterpreter object.

System information:
- TFLu r2.3
- OS platform: Linux Ubuntu 16.04 or Windows 10 64-bit
- Target: Cortex-M4F

Current behavior:
In my case the size of the network input data is interpreter.input(0)->bytes = 1920. After calling for memory allocation,

    TfLiteStatus allocate_status = interpreter.AllocateTensors();

the memory address of the input data is interpreter.input(0)->data = 0x8012174 and the memory address of the input tensor structure is interpreter.input(0) = 0x8012570. This leaves only 1020 bytes of memory space for the data, which is 1920 bytes. As a consequence, the input tensor structure is overwritten when copying the input data.

Expected behavior:
After removing TF_LITE_STATIC_MEMORY from the CCFLAGS/CXXFLAGS, the memory allocation leaves enough space between the data and the input tensor structure:

    interpreter.input(0) = 0x80124d0
    interpreter.input(0)->data = 0x80103c0

Other info:
Using TFLu r2.2 with TF_LITE_STATIC_MEMORY does not show the problem; it gives a correct memory layout. Starting with commit fbf407383c93774d10bd7c45cd66788a070b0e07 (mid of June '20) the memory layout is broken. I am not sure if this is a bug or intended behavior, i.e. whether I would be better off compiling without TF_LITE_STATIC_MEMORY.
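The overlap the report describes can be verified with plain address arithmetic on the values quoted above: the gap between the data pointer and the tensor struct is smaller than the tensor's byte count, so a full copy of the input must clobber the struct.

```python
# Addresses and size taken from the report above.
data_addr = 0x8012174    # interpreter.input(0)->data
struct_addr = 0x8012570  # interpreter.input(0)
needed = 1920            # interpreter.input(0)->bytes

gap = struct_addr - data_addr
print(gap)           # 1020 bytes available before the tensor struct begins
print(needed > gap)  # True: copying 1920 bytes overwrites the struct
```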
tensorflow/tensorflow | CUDA "illegal address" error in toy training example | Bug |
System information:
- Have I written custom code: yes
- OS platform and distribution: inside a virtual container; uname -a: Linux 3558c7dc300b 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- TensorFlow installed from: binary
- TensorFlow version: v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: 3.6.8
- CUDA/cuDNN version: nvcc: NVIDIA (R) Cuda compiler driver, Cuda compilation tools, release 10.1, V10.1.243
- GPU model and memory: NVIDIA DGX-2, 16x NVIDIA Tesla V100 32 GB, 2x Intel Xeon Platinum 8168 2.7GHz 24C/48T, 1.5 TB RAM, 30 TB internal NVMe SSD

Describe the current behavior:
Crash. The same happens in Colab and in tf-nightly (see crash report). In Colab:

2020-07-30 08:06:02.330777: E tensorflow/stream_executor/cuda/cuda_event.cc:29] Error polling for event status: failed to query event: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered

Describe the expected behavior:
It should work, as it does for other values of the batch and repeat parameters.

Standalone code to reproduce the issue (the exact nesting of the constant's brackets was lost in the paste; it is reconstructed here so each batch element is a 1x3 matrix, as train_step requires):

    import tensorflow as tf

    # This is a simplified version of my training setup.
    m_train_data = tf.constant([[[1., 1., 1.]]], dtype=tf.double)
    dataset = tf.data.Dataset.from_tensor_slices(m_train_data).repeat(8 * 1024).batch(1024)
    mirrored_strategy = tf.distribute.MirroredStrategy()
    dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)

    def train_step(inputs):
        a = tf.matmul(inputs[0], tf.linalg.matrix_transpose(inputs[0]))
        return tf.linalg.det(a)

    @tf.function
    def distributed_train_step(dist_inputs):
        losses = mirrored_strategy.run(train_step, args=(dist_inputs,))
        return mirrored_strategy.reduce(tf.distribute.ReduceOp.SUM, losses, axis=None)

    for dist_inputs in dist_dataset:
        print(distributed_train_step(dist_inputs))

Other info / logs: crash_1002.txt
tensorflow/tensorflow | Keras ModelCheckpoint callback does not raise an exception when h5 saving fails | Bug |
System information:
- Have I written custom code: yes
- TensorFlow installed from: binary
- TensorFlow version: 2.3
- Python version: 3.6

Describe the current behavior:
The callback conceals the exception thrown by save_model. The checkpoint is not saved, but no error is reported.

Describe the expected behavior:
The callback should re-throw the error after catching it.
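A minimal sketch of the behavior the report asks for, using a stand-in save function (hypothetical names, not the actual Keras internals): catch the save error, report it, and re-raise so the failure is never silent.

```python
# Sketch only: a stand-in for a failing h5 save.
def save_model(filepath):
    raise OSError(f"unable to create file {filepath}")

def on_epoch_end_saving(filepath):
    try:
        save_model(filepath)
    except OSError as err:
        # Report the failure instead of swallowing it, then re-raise
        # so callers (and the training loop) can see that saving failed.
        print(f"WARNING: checkpoint not saved: {err}")
        raise

try:
    on_epoch_end_saving("/readonly/test.h5")
except OSError:
    print("caller sees the failure and can react")
```

The key design point is the bare `raise` after logging: the exception keeps propagating instead of being converted into a silently missing checkpoint file.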
tensorflow/tensorflow | Returns tf.data.UNKNOWN_CARDINALITY when the cardinality can easily be computed | Bug |
System information:
- Have I written custom code: yes
- OS platform and distribution: dockerhub tensorflow/tensorflow:latest
- TensorFlow installed from: dockerhub image
- TensorFlow version: v2.3.0-rc2-23-gb36436b087 2.3.0

Describe the current behavior:
tf.data.Dataset.cardinality returns tf.data.UNKNOWN_CARDINALITY when the cardinality can easily be computed.

Standalone code to reproduce the issue:

    import tensorflow as tf
    print(tf.__version__)  # prints 2.3.0

    nx = 2
    t = tf.data.Dataset.from_tensor_slices(tf.constant([[0], [1]]))
    t2 = t.flat_map(lambda ti: tf.data.Dataset.from_tensors(ti).repeat(nx))
    for ti in t2:
        print(ti)
    # Correctly prints:
    # tf.Tensor([0], shape=(1,), dtype=int32)
    # tf.Tensor([0], shape=(1,), dtype=int32)
    # tf.Tensor([1], shape=(1,), dtype=int32)
    # tf.Tensor([1], shape=(1,), dtype=int32)

    print((t2.cardinality() == tf.data.UNKNOWN_CARDINALITY).numpy())  # prints True
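For context (my framing, not the reporter's): flat_map reports unknown cardinality in general because each element may expand into a differently sized sub-dataset, so the total is only knowable by enumeration. In the special case above every element expands into exactly nx items, so len(source) * nx would be computable up front. A stdlib sketch of both facts:

```python
from itertools import chain

nx = 2
source = [[0], [1]]

# flat_map analogue: each element expands to a sub-sequence of nx repeats.
flat = list(chain.from_iterable([ti] * nx for ti in source))
print(flat)  # [[0], [0], [1], [1]]

# When the expansion factor is a known constant, the cardinality is just:
print(len(source) * nx)  # 4, matching len(flat)
```

The feature request amounts to propagating that constant factor through repeat() so flat_map over fixed-size sub-datasets need not give up on the count.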
tensorflowtensorflow | cuda error when training rnn | Bug | system information os platform and distribution e g linux ubuntu 16 04 window 10 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary binary python wheel use pip tensorflow version 2 2 and 2 3 python version 3 8 2 instal use virtualenv pip conda pip bazel version if compile from source gcc compiler version if compile from source cuda cudnn version 10 1 7 6 5 gpu model and memory gtx 1070 8 gb with the current driver 451 67 describe the problem I want to test my tensorflow gpu installation it work for cnn but when I try to run the text classification tutorial where bidirectional layer with lstm be use the training process crash during early epoch with this error I leave out part of the repeat part of the stacktrace e tensorflow stream executor dnn cc 613 cudnn status internal error in tensorflow stream executor cuda cuda dnn cc 1986 cudnnrnnbackwarddata cudnn handle rnn desc handle model dim max seq length output desc handle output datum opaque output desc handle output backprop datum opaque output h desc handle output h backprop data opaque output c desc handle output c backprop data opaque rnn desc param handle param opaque input h desc handle input h datum opaque input c desc handle input c datum opaque input desc handle input backprop datum opaque input h desc handle input h backprop datum opaque input c desc handle input c backprop data opaque workspace opaque workspace size reserve space datum opaque reserve space datum size w tensorflow core framework op kernel cc 1753 op require fail at cudnn rnn op cc 1922 internal fail to call thenrnnbackward with model config rnn mode rnn input mode rnn direction mode 2 0 0 num layer input size num unit dir count max seq length batch size cell num unit 1 64 64 1 1449 32 64 provide the exact sequence of command step that you execute before run into the problem I instal cuda 10 1 and cudnn 7 
6 5 and edit my path variable accordingly restart because sometimes windows be weird about change environment variable afterwards I run pip install tensorflow gpu inside a conda environment I then execute a python file contain all the code of the tutorial any other info log the error occur within the first epoch not directly after the start but in early step during the epoch which make I wonder where the problem originate when I use a model without the tf keras layers bidirectional tf keras layers lstm 64 return sequence true layer it work just fine when I run tf config list physical device gpu it seem to load all dll s correctly I be also consider that this error be due to insufficient gpu memory even though the model be relatively small tensorflow allocate between 6 and 7 gb memory it do that also for small cnn model |
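A frequently reported workaround for this kind of CUDNN_STATUS_INTERNAL_ERROR during RNN training (not taken from the report itself, so treat it as an assumption): stop TensorFlow from preallocating nearly all GPU memory, so cuDNN can still allocate its RNN workspace when `cudnnRNNBackwardData` runs.

```shell
# Hedged workaround: TF_FORCE_GPU_ALLOW_GROWTH is honored by TF 2.x and
# makes TensorFlow grow its GPU allocation on demand instead of reserving
# 6-7 GB up front, leaving cuDNN room for its RNN workspace.
export TF_FORCE_GPU_ALLOW_GROWTH=true
python text_classification_tutorial.py  # hypothetical script name
```

On Windows (the reporter's platform) the equivalent is `set TF_FORCE_GPU_ALLOW_GROWTH=true`, or `tf.config.experimental.set_memory_growth(gpu, True)` can be called in Python before any GPU op runs.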
tensorflowtensorflow | keras callbacks logs numpy log not in sync | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 window 10 tensorflow instal from source or binary binary tensorflow version use command below 2 3 0 python version 3 6 8 describe the current behavior some callback e g progbarlogger modelcheckpoint have the flag self support tf logs true if other callback especially custom callback don t have this property then those callback do not have acce to the same log in the code example below modelcheckpoint can not use the val log loss as a monitor value from the custommetric callback this result from the commit where a new numpy log property have be introduce without make sure to sync it with the pre exist log property describe the expect behavior the two propertys numpy log and log should contain the same information or it should be make clear in the docs keras callback overview what support tf log do and that there could be compatibility issue standalone code to reproduce the issue from tensorflow keras callback import callback modelcheckpoint class custommetric callback def init self x valid y valid super init self x valid x valid self y valid y valid def on epoch end self epoch log none y pre self model predict self x valid batch size batchsize log val log loss metric log loss self y valid y pre model fit x train y train validation datum x valid y valid shuffle true batch size batchsize epoch epoch verbose 1 callback custommetric x valid y valid modelcheckpoint test h5 val log loss verbose 1 save well only true mode min other info log see commit |
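The desync can be reproduced without Keras at all; the sketch below uses plain dicts to mimic the tf-logs dict and the numpy copy handed to callbacks that lack `_supports_tf_logs` (the dict names are illustrative, not the actual Keras internals):

```python
# tf_logs stands in for the dict seen by callbacks with _supports_tf_logs=True;
# numpy_logs stands in for the converted copy handed to everyone else.
tf_logs = {"loss": 0.5}
numpy_logs = dict(tf_logs)  # snapshot taken before custom callbacks run

# a custom callback (supporting tf logs) adds its metric to tf_logs only
tf_logs["val_log_loss"] = 0.42

# a ModelCheckpoint-style callback monitoring the numpy copy never sees it
monitored = numpy_logs.get("val_log_loss")
print(monitored)  # None -- the two dicts are out of sync

# re-syncing after every callback (or sharing one dict) avoids the problem
numpy_logs.update(tf_logs)
assert numpy_logs["val_log_loss"] == 0.42
```

This mirrors why `val_log_loss` written by the custom callback never reaches `ModelCheckpoint` in the report.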
tensorflowtensorflow | exception tensorflow lite currently doesn t support control flow op merge switch | Bug | system information os platform and distribution e g linux ubuntu 18 04 tensorflow instal from source or binary tensorflow version or github sha if from source 1 14 provide the text output from tflite convert tensorflow lite python convert convertererror toco fail see console for info 2020 07 29 15 27 21 979509 I tensorflow lite toco graph transformation graph transformation cc 39 before remove unused op 1402 operator 2593 array 0 quantize 2020 07 29 15 27 22 011911 I tensorflow lite toco graph transformation graph transformation cc 39 after remove unused op pass 1 1290 operator 2425 array 0 quantize 2020 07 29 15 27 22 054821 I tensorflow lite toco graph transformation graph transformation cc 39 before general graph transformation 1290 operator 2425 array 0 quantize 2020 07 29 15 27 22 098145 I tensorflow lite toco graph transformation graph transformation cc 39 after general graph transformation pass 1 1062 operator 2194 array 0 quantize 2020 07 29 15 27 22 139738 I tensorflow lite toco graph transformation graph transformation cc 39 before group bidirectional sequence lstm rnn 1062 operator 2194 array 0 quantize 2020 07 29 15 27 22 172354 I tensorflow lite toco graph transformation graph transformation cc 39 before dequantization graph transformation 1062 operator 2194 array 0 quantize 2020 07 29 15 27 22 219537 I tensorflow lite toco allocate transient array cc 345 total transient array allocate size 1228800 byte theoretical optimal value 921600 byte 2020 07 29 15 27 22 231094 e tensorflow lite toco toco tooling cc 456 tensorflow lite currently doesn t support control flow op merge switch traceback most recent call last file home ps anaconda3 envs rknn bin toco from protos line 11 in sys exit main file home ps local lib python3 6 site package tensorflow lite toco python toco from protos py line 59 in main app run main execute argv sys argv 0 unparse 
file home ps local lib python3 6 site package tensorflow python platform app py line 40 in run run main main argv argv flag parser parse flag tolerate undef file home ps local lib python3 6 site package absl app py line 299 in run run main main args file home ps local lib python3 6 site package absl app py line 250 in run main sys exit main argv file home ps local lib python3 6 site package tensorflow lite toco python toco from protos py line 33 in execute output str tensorflow wrap toco tococonvert model str toco str input str exception tensorflow lite currently doesn t support control flow op merge switch standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook also please include a link to a graphdef or the model if possible any other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach |
tensorflowtensorflow | gpu installation instruction may need to be update | Bug | I m follow the gpu install instruction for ubuntu 18 04 and get the following after run this sudo apt get install no install recommend cuda 10 1 libcudnn7 7 6 5 32 cuda10 1 libcudnn7 dev 7 6 5 32 1 cuda10 1 I m get this reading package list do building dependency tree read state information do e version 7 6 5 32 cuda10 1 for libcudnn7 be not find 7 6 5 32 cuda10 1 may need to be change to 7 6 5 32 1 cuda10 1 since the latter work |
tensorflowtensorflow | timeserie example be a misnomer it be not a timeserie it be rather a simple series | Bug | part 2 forecast a multivariate time series I do not understand how this can be time series the datum be equidistant with each other and the time itself be not consider in predict the value but rather just as a series when you include time as a vector I would accept that it be a timeserie the title be misleading and also we need an example for vectorize time for an actual timeserie |
tensorflowtensorflow | miss model in visualize py script tflite model attributeerror module tensorflow lite python schema py generate have no attribute model | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device no tensorflow instal from source or binary source tensorflow version use command below 2 4 0 python version 3 8 1 bazel version if compile from source 3 1 0 gcc compiler version if compile from source ubuntu 7 5 0 3ubuntu1 18 04 7 5 0 cuda cudnn version n a gpu model and memory n a you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version v1 12 1 37714 gac84c5eb81 2 4 0 I follow the instruction how do I inspect a tflite file in order to inspect a tflite file compilation and installation of the python whl be successful pip freeze show tensorflow file tmp tensorflow pkg tensorflow 2 4 0 cp38 cp38 linux x86 64 whl start the python consul and import tensorflow work as expect however run python visualize py foo tflite foo html result in a an error traceback most recent call last file visualize py line 517 in main sys argv file visualize py line 513 in main createhtmlfile tflite input html output file visualize py line 429 in createhtmlfile datum createdictfromflatbuffer file datum file visualize py line 414 in createdictfromflatbuffer model obj schema fb model getrootasmodel buffer datum 0 attributeerror module tensorflow lite python schema py 
generate have no attribute model in addition bazel run tensorflow lite tool visualize model tflite visualize model html result in a similar error info option provide by the client inherit common option isatty 1 terminal column 153 info read rc option for run from home omri src tensorflow bazelrc inherit common option experimental repo remote exec info read rc option for run from home omri src tensorflow bazelrc inherit build option apple platform type macos define framework share object true define open source build true java toolchain third party toolchain java tf java toolchain host java toolchain third party toolchain java tf java toolchain define use fast cpp protos true define allow oversize protos true spawn strategy standalone c opt announce rc define grpc no are true noincompatible remove legacy whole archive noincompatible prohibit aapt1 enable platform specific config config short log config v2 info read rc option for run from home omri src tensorflow tf configure bazelrc inherit build option action env python bin path home omri pyenv version tf src bin python3 action env python lib path home omri pyenv version tf src lib python3 8 site package python path home omri pyenv version tf src bin python3 config xla action env tf configure io 0 info find applicable config definition build short log in file home omri src tensorflow bazelrc output filter do not match anything info find applicable config definition build v2 in file home omri src tensorflow bazelrc define tf api version 2 action env tf2 behavior 1 info find applicable config definition build xla in file home omri src tensorflow bazelrc action env tf enable xla 1 define with xla support true info find applicable config definition build linux in file home omri src tensorflow bazelrc copt w define prefix usr define libdir prefix lib define includedir prefix include cxxopt std c 14 host cxxopt std c 14 config dynamic kernel info find applicable config definition build dynamic kernel in file home omri 
src tensorflow bazelrc define dynamic load kernel true copt dautoload dynamic kernel info analyze target tensorflow lite tool visualize 0 package load 0 target configure info find 1 target target tensorflow lite tool visualize up to date bazel bin tensorflow lite tool visualize info elapse time 0 132s critical path 0 01s info 0 process info build complete successfully 1 total action info run command line bazel bin tensorflow lite tool visualize home omri download mobilenet thin openpose opt fullint tf1 tflite home omri downinfo build complete successfully 1 total action traceback most recent call last file home omri cache bazel bazel omri a9e9b87cb64d67149db4f28645a2ba4b execroot org tensorflow bazel out k8 opt bin tensorflow lite tool visualize runfiles org tensorflow tensorflow lite tool visualize py line 517 in main sys argv file home omri cache bazel bazel omri a9e9b87cb64d67149db4f28645a2ba4b execroot org tensorflow bazel out k8 opt bin tensorflow lite tool visualize runfiles org tensorflow tensorflow lite tool visualize py line 513 in main createhtmlfile tflite input html output file home omri cache bazel bazel omri a9e9b87cb64d67149db4f28645a2ba4b execroot org tensorflow bazel out k8 opt bin tensorflow lite tool visualize runfiles org tensorflow tensorflow lite tool visualize py line 429 in createhtmlfile datum createdictfromflatbuffer file datum file home omri cache bazel bazel omri a9e9b87cb64d67149db4f28645a2ba4b execroot org tensorflow bazel out k8 opt bin tensorflow lite tool visualize runfiles org tensorflow tensorflow lite tool visualize py line 414 in createdictfromflatbuffer model obj schema fb model getrootasmodel buffer datum 0 attributeerror module tensorflow lite python schema py generate have no attribute model finally start the python consul and try to import model from tensorflow lite python schema py generate result in the same error src tensorflow tensorflow lite tool python python 3 8 1 default mar 5 2020 13 14 49 gcc 7 4 0 on linux type 
help copyright credit or license for more information from tensorflow lite python import schema py generate schema py generate model traceback most recent call last file line 1 in attributeerror module tensorflow lite python schema py generate have no attribute model the script should generate an html file
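As a defensive measure (a sketch, not the actual visualize.py code), the script could check for the `Model` attribute up front and turn the cryptic AttributeError into an actionable message; the stand-in modules below are hypothetical:

```python
import types

def get_model_class(schema_mod):
    """Return the flatbuffer Model class, failing with a clear message
    when the generated schema module was built without it."""
    if not hasattr(schema_mod, "Model"):
        raise ImportError(
            "schema_py_generated lacks the Model class; regenerate it "
            "from schema.fbs with flatc --python (exact flags may vary)")
    return schema_mod.Model

# stand-in for a correctly generated schema module
good = types.SimpleNamespace(Model=object)
assert get_model_class(good) is object

# stand-in for the broken module described in the report
bad = types.SimpleNamespace()
try:
    get_model_class(bad)
except ImportError as err:
    print("caught:", err)
```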
tensorflowtensorflow | segmentation fault 11 | Bug | system information os platform and distribution e g linux ubuntu 16 04 mac os x 10 14 6 tensorflow instal from source or binary use pip tensorflow version or github sha if from source 2 3 0 command use to run the converter or code if you re use the python api if possible please share a link to colab jupyter any notebook python3 tensorflow lite python tflite convert py output file model float lite output format tflite inference type float inference input type float input shape 1 224 224 3 input array serve default input 1 output array statefulpartitionedcall mean value 0 std dev value 1 save model dir user z004njq project save model mobilenetv4 exp 1004 export experimental converter true the output from the converter invocation 2020 07 28 13 29 05 763703 I tensorflow core platform cpu feature guard cc 142 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2020 07 28 13 29 05 781147 I tensorflow compiler xla service service cc 168 xla service 0x14dc421f0 initialize for platform host this do not guarantee that xla will be use device 2020 07 28 13 29 05 781174 I tensorflow compiler xla service service cc 176 streamexecutor device 0 host default version 2020 07 28 13 29 14 581147 I tensorflow core grappler device cc 78 number of eligible gpu core count 8 compute capability 0 0 0 note tensorflow be not compile with cuda or rocm support 2020 07 28 13 29 14 581230 I tensorflow core grappler cluster single machine cc 356 start new session 2020 07 28 13 29 14 677045 I tensorflow core grappler optimizer meta optimizer cc 816 optimization result for grappler item graph to optimize 2020 07 28 13 29 14 677069 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer graph size after 1835 node 1614 2696 edge 2475 
time 64 086ms 2020 07 28 13 29 14 677092 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer function optimizer do nothing time 2 02ms i0728 13 29 16 315555 140734995711424 lite py 624 use experimental converter if you encounter a problem please file a bug you can opt out by set experimental new converter false 2020 07 28 13 29 16 359263 w tensorflow compiler mlir lite python tf tfl flatbuffer helper cc 313 ignore output format 2020 07 28 13 29 16 359292 w tensorflow compiler mlir lite python tf tfl flatbuffer helper cc 316 ignore drop control dependency fatal python error segmentation fault current thread 0x00007fff6b6d45c0 most recent call first file library framework python framework version 3 7 lib python3 7 site package tensorflow lite python wrap toco py line 38 in wrap toco convert file library framework python framework version 3 7 lib python3 7 site package tensorflow lite python convert py line 199 in toco convert protos file library framework python framework version 3 7 lib python3 7 site package tensorflow lite python convert py line 574 in toco convert impl file library framework python framework version 3 7 lib python3 7 site package tensorflow lite python lite py line 633 in convert file library framework python framework version 3 7 lib python3 7 site package tensorflow lite python lite py line 900 in convert file library framework python framework version 3 7 lib python3 7 site package tensorflow lite python lite py line 1076 in convert file tensorflow lite python tflite convert py line 239 in convert tf2 model file tensorflow lite python tflite convert py line 623 in run main file library framework python framework version 3 7 lib python3 7 site package absl app py line 250 in run main file library framework python framework version 3 7 lib python3 7 site package absl app py line 299 in run file library framework python framework version 3 7 lib python3 7 site package tensorflow python platform app py line 40 in run file 
tensorflow lite python tflite convert py line 640 in main file tensorflow lite python tflite convert py line 644 in segmentation fault 11 also please include a link to the save model or graphdef mobilenetv3 exp 1004 export zip
tensorflowtensorflow | tensorflow lite on ndk with c api fail to provide output no error or warning | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow base on example script os platform and distribution e g linux ubuntu 16 04 linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device samsung galaxy s10 tensorflow lite version 2 2 python version 3 6 gcc compiler version if compile from source clang version 7 0 describe the current behavior previously I be run tflite on android with the java api run the tflite model from android s java api work fine and I get the expected output now I m run tflite on ndk use the c api invoke the interpreter in the ndk doesn t seem to do anything I can verify the the model be load do not return null and set everything up interpreter option input and output tensor etc seem ok too the input and output be both float32 array I can verify input array datum integrity before tflitetensorcopyfrombuffer all float value be correct and none zero output array be a float array in the proper size initialize with zero extract the inference result with tflitetensorcopytobuffer do not change the output array it s still all zero describe the expect behavior invoke the interpreter should provide an output array a result standalone code to reproduce the issue in my native c file follow the example in c api h input input be just a single tensor in size input be vector input float array access input value with input float array at x for example provide float32 value the first few be 43 36578 72 9673 98 4356 12 7865 output output be just a single tensor in size output be vector output float array output be initialize with zero 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 and be not constant tflitemodel model tflitemodelcreatefromfile reinterpret cast model file path checkpoint model if model nullptr result model be not null model file path find in model file 
path there s a tflite file tfliteinterpreteroption option tfliteinterpreteroptionscreate tfliteinterpreteroptionssetnumthread option 2 tfliteinterpreter interpreter tfliteinterpretercreate model option tfliteinterpreterallocatetensor interpreter tflitetensor input tensor tfliteinterpretergetinputtensor interpreter 0 checkpoint before input float array have legitimate float value at this point 43 36578 72 9673 98 4356 12 7865 tflitetensorcopyfrombuffer input tensor input float array datum input float array size sizeof float tfliteinterpreterinvoke interpreter const tflitetensor output tensor tfliteinterpretergetoutputtensor interpreter 0 tflitetensorcopytobuffer output tensor output float array datum output float array size sizeof float checkpoint after output float array remain all zero at this point 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 tfliteinterpreterdelete interpreter tfliteinterpreteroptionsdelete option tflitemodeldelete model some alternative to access datum and verify integrity replace input float array datum with input float array begin and output float array datum with output float array begin initialize output float array with one instead of zero 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 I also try to change a few model all with same input output and characteristic other info log no error of any kind or anything to show the code run smoothly but nothing happen the output array remain unchanged the exact same model when run with java api work just fine |
tensorflowtensorflow | tflm 2 3 fail assert in cmsis nn fully connect cc | Bug | tensorflow micro system information host os platform and distribution e g linux ubuntu 16 04 verify on mbed os 5 13 tensorflow instal from source or binary source tensorflow version commit sha if source 0c43ad89f81b22d81d1894f5b53f9fbdda0b738a target platform e g arm mbed os arduino nano 33 etc mbed os st iot discovery kit describe the problem we have a very simple fully connect network 33x20x10x4 neuron that fail an assert when be invoke with cmsis nn kernel here in fully connect cc cmsis nn dim input dim input dim n batch input dim h input shape dim 1 input dim w input shape dim 2 here input shape be the input layer 33 neuron whose tensor only have rank 2 so dim 2 do not exist set input dim w and input dim c to 1 solve the issue please provide the exact sequence of command step when you run into the problem see above attach the tflite model ei continuous gesture nn classifier tensorflow lite int8 quantize model lite zip
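The fix described above can be sketched as a small shape helper (hypothetical code, not the actual fully_connected.cc): when the incoming tensor has rank 2, the height takes Dims(1) and the missing width/channel dimensions default to 1 instead of being read out of range:

```python
def fc_input_dims(shape, batch):
    """Map a rank-2..4 tensor shape onto (n, h, w, c), defaulting any
    missing trailing dimensions to 1 (sketch of the fix, not TFLM code)."""
    n = batch
    h = shape[1] if len(shape) > 1 else 1
    w = shape[2] if len(shape) > 2 else 1
    c = shape[3] if len(shape) > 3 else 1
    return n, h, w, c

# rank-2 input like the 33-neuron layer above: w and c fall back to 1
print(fc_input_dims((1, 33), batch=1))       # (1, 33, 1, 1)
# rank-4 conv-style input keeps every dimension
print(fc_input_dims((1, 8, 8, 3), batch=1))  # (1, 8, 8, 3)
```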
tensorflowtensorflow | savedmodel export fail on rnns by set wrong input dtype | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary tensorflow version use command below v2 3 0 rc2 23 gb36436b087 2 3 0 python version 3 7 7 bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version n a gpu model and memory n a describe the current behavior when try to export concrete function in a keras model to the savedmodel format if a function use tf keras layer rnn the dtype of the rnn cell input argument be wrong in particular actual dtype seem to be ignore and replace with tf float32 this do not happen when call the same concrete function manually but only when try to export it describe the expect behavior the input dtype should be preserve allow to export the model at the very least the behavior should be consistent with call the concrete function standalone code to reproduce the issue python import tensorflow as tf class testrnncell tf keras layers layer def init self super init self unit 10 self state size 20 def call self indice state this assertion fail assert index dtype tf int32 if the assertion be remove this line fail with typeerror value pass to parameter index have datatype float32 not in list of allow value int32 int64 output tf gather tf range 5 index return output state class testmodel tf keras model def init self super init self rnn tf keras layer rnn testrnncell tf function def do stuff self index assert index dtype tf int32 return self rnn index model testmodel tf save model save model test model signature do stuff model do stuff get concrete function indice tf tensorspec none none 5 tf int32 use model save instead of tf save model save do not help either here s 
a version that fail just in the same way python class testmodel tf keras model continue the class above def call self index assert index dtype tf int32 return self rnn index model testmodel this work fine model tf zeros 10 10 5 tf int32 this fail because the rnn input be now tf float32 model save test model save format tf any possible workaround for this would be much appreciate I can not figure out any since this problem only affect savedmodel export and not the actual concrete function be export |
tensorflowtensorflow | doc be partial batch support with multiworkermirroredstrategy | Bug | url s with the issue partial batch description of issue what need change the release note state tf distribute experimental multiworkermirroredstrategy add support for partial batch however the documentation tutorial state currently this be support for all strategy except tf distribute experimental multiworkermirroredstrategy partial batch be support for all strategy except tf distribute experimental multiworkermirroredstrategy this sound like those contradict each other what be the actual reality can the documentation or the release note be update to clarify |
tensorflowtensorflow | tflite relocate tensor fail | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device no tensorflow instal from source or binary pip install tensorflow gpu tensorflow version use command below v1 14 0 rc1 22 gaf24dc91b5 1 14 0 python version 3 5 2 bazel version if compile from source no gcc compiler version if compile from source no cuda cudnn version gpu model and memory describe the current behavior step to generate tflite 1 train a ckpt 2 create save model pb use none none input parameter 3 generate tflite save pb model by specify default input because none none do not work generate tflite input detail shape array 1 256 256 3 dtype int32 quantization 0 0 0 dtype index 0 name image shape array 1 256 256 1 dtype int32 quantization 0 0 0 dtype index 1 name mask shape array 1 64 64 1 dtype int32 quantization 0 0 0 dtype index 2 name mask2 shape array 1 128 128 1 dtype int32 quantization 0 0 0 dtype index 3 name mask4 actual input detail 1 432 492 3 1 432 492 1 1 216 246 1 1 108 123 1 allocate tensor base on actual input value interpreter resize tensor input input detail 0 index 1 h w 3 interpreter resize tensor input input detail 1 index 1 h w 1 interpreter resize tensor input input detail 2 index 1 int h 4 int w 4 1 interpreter resize tensor input input detail 3 index 1
int h 2 int w 2 1 interpreter allocate tensor error file testtlitenone py line 141 in interpreter allocate tensor file homelib python3 5 site package tensorflow lite python interpreter py line 95 in allocate tensor return self interpreter allocatetensor file home lib python3 5 site package tensorflow lite python interpreter wrapper tensorflow wrap interpreter wrapper py line 106 in allocatetensor return tensorflow wrap interpreter wrapper interpreterwrapper allocatetensor self runtimeerror tensorflow lite kernels kernel util cc 233 d1 d2 d1 1 d2 1 be not true node number 4 mul fail to prepare describe the expect behavior allocation should succeed
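The failing check at kernel_util.cc:233 is TFLite's broadcast rule for MUL: every aligned dimension pair must be equal, or one of the two must be 1. It can be restated in a few lines and run against the resized shapes before `allocate_tensors` (which tensors actually meet at the failing MUL depends on the graph, so the second shape pair below is only illustrative):

```python
def broadcast_compatible(shape_a, shape_b):
    """Restatement of the TFLite prepare-time check: for each aligned
    dimension pair, d1 == d2 or d1 == 1 or d2 == 1 must hold."""
    for d1, d2 in zip(reversed(shape_a), reversed(shape_b)):
        if not (d1 == d2 or d1 == 1 or d2 == 1):
            return False
    return True

# the shapes the model was converted with broadcast fine
assert broadcast_compatible((1, 256, 256, 3), (1, 256, 256, 1))
# a full-scale tensor against a half-scale mask trips the same
# "d1 == d2 || d1 == 1 || d2 == 1 was not true" error (illustrative pair)
assert not broadcast_compatible((1, 432, 492, 1), (1, 216, 246, 1))
```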
tensorflowtensorflow | incompatible with expected float ref | Bug | traceback most recent call last file convert tflite py line 90 in convert from savedmodel savedmodeldir tflitefile quan true integeronly true file convert tflite py line 50 in convert from savedmodel tflitemodel converter convert file usr local lib python3 5 dist package tensorflow lite python lite py line 459 in convert self func 0 low control flow false file usr local lib python3 5 dist package tensorflow python framework convert to constant py line 707 in convert variable to constant v2 as graph frozen func construct concrete function func graph def convert input file usr local lib python3 5 dist package tensorflow python framework convert to constant py line 406 in construct concrete function new output name file usr local lib python3 5 dist package tensorflow python eager wrap function py line 633 in function from graph def wrap import wrap function import graph def file usr local lib python3 5 dist package tensorflow python eager wrap function py line 611 in wrap function collection file usr local lib python3 5 dist package tensorflow python framework func graph py line 981 in func graph from py func func output python func func args func kwargs file usr local lib python3 5 dist package tensorflow python eager wrap function py line 86 in call return self call with variable creator scope self fn args kwargs file usr local lib python3 5 dist package tensorflow python eager wrap function py line 92 in wrap return fn args kwargs file usr local lib python3 5 dist package tensorflow python eager wrap function py line 631 in import graph def importer import graph def graph def name file usr local lib python3 5 dist package tensorflow python util deprecation py line 507 in new func return func args kwargs file usr local lib python3 5 dist package tensorflow python framework importer py line 405 in import graph def producer op list producer op list file usr local lib python3 5 dist package tensorflow 
python framework importer py line 501 in import graph def internal raise valueerror str e valueerror input 0 of node conv21 pointwise batchnorm cond assign switch be pass float from conv21 pointwise batchnorm move mean 0 incompatible with expected float ref when I use tf lite tfliteconverter from save model to convert checkpoint to tflite I get above err about batchnorm please help I ths originally post by zhoukai90 in issuecomment 663382551 |
tensorflowtensorflow | fail call to cuinit unknown error 1 | Bug | run the late docker with docker run it p 8888 8888 tensorflow tensorflow late gpu jupyter jupyter notebook notebook dir tf ip 0 0 0 0 no browser allow root notebookapp allow origin code import tensorflow as tf print num gpu available len tf config experimental list physical device gpu give I 2020 07 27 19 44 03 826149 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcuda so 1 2020 07 27 19 44 03 826179 e tensorflow stream executor cuda cuda driver cc 313 fail call to cuinit unknown error 1 2020 07 27 19 44 03 826201 I tensorflow stream executor cuda cuda diagnostic cc 163 no nvidia gpu device be present dev nvidia0 do not exist I m on pop os 20 04 have try instal the cuda driver from the pop repository as well as from nvidia no dice any help appreciate run docker run gpu all nvidia cuda 10 0 base nvidia smi give I nvidia smi 450 51 05 driver version 450 51 05 cuda version 11 0 gpu name persistence m bus i d disp a volatile uncorr ecc fan temp perf pwr usage cap memory usage gpu util compute m mig m 0 geforce rtx 2080 on 00000000 09 00 0 on n a 0 52c p5 15w 225w 513mib 7959mib 17 default n a process gpu gi ci pid type process name gpu memory i d i d usage |
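One detail stands out in the report (an observation, not a confirmed fix): the working `nvidia-smi` test passes `--gpus all`, but the Jupyter command does not. Without that flag the container gets no `/dev/nvidia*` devices, which matches the `no nvidia gpu devices are present, /dev/nvidia0 does not exist` log line.

```shell
# same container as in the report, but with GPU access granted;
# without --gpus all the /dev/nvidia* devices never appear inside it
docker run --gpus all -it -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter \
    jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root
```

The original `--NotebookApp.allow_origin` flag is left out here because its value did not survive in the report.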
tensorflowtensorflow | model with Conv2D layer causes segmentation fault when invoked in C++ | Bug | System information: Linux Ubuntu 18.04; TensorFlow binary via pip, tested on 2.2.0 and 2.3.0-rc2.
Command used to run the converter, or code if you're using the Python API:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Flatten, Input, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(10, 5, 1))
x = inputs
x = Conv2D(32, (3, 3))(x)  # functions correctly when removed
x = Flatten()(x)
x = Dense(1)(x)
model = Model(inputs, x)
model.compile()

# Convert the model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Save the TF Lite model
with tf.io.gfile.GFile('min_model.tflite', 'wb') as f:
    f.write(tflite_model)

The output from the converter invocation: none. Model in SavedModel/tflite format, as well as the minimal example, attached; built for Linux x86-64.
Failure details: conversion is successful, but the generated model causes a segmentation fault on invoke when run via the tflite C++ minimal example ("Load and run a model in C++"). It invokes fine without the Conv2D layer.

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"
#include <cstdio>

using namespace tflite;

#define TFLITE_MINIMAL_CHECK(x)                              \
  if (!(x)) {                                                \
    fprintf(stderr, "Error at %s:%d\n", __FILE__, __LINE__); \
    exit(1);                                                 \
  }

int main(int argc, char* argv[]) {
  if (argc != 2) {
    fprintf(stderr, "minimal <tflite model>\n");
    return 1;
  }
  const char* filename = argv[1];

  // Load model
  std::unique_ptr<tflite::FlatBufferModel> model =
      tflite::FlatBufferModel::BuildFromFile(filename);
  TFLITE_MINIMAL_CHECK(model != nullptr);

  // Build the interpreter
  tflite::ops::builtin::BuiltinOpResolver resolver;
  InterpreterBuilder builder(*model, resolver);
  std::unique_ptr<Interpreter> interpreter;
  builder(&interpreter);
  TFLITE_MINIMAL_CHECK(interpreter != nullptr);

  // Allocate tensor buffers
  TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
  printf("=== Pre-invoke Interpreter State ===\n");
  tflite::PrintInterpreterState(interpreter.get());

  // Run inference
  TFLITE_MINIMAL_CHECK(interpreter->Invoke() == kTfLiteOk);  // fails here

  printf("\n\n=== Post-invoke Interpreter State ===\n");
  tflite::PrintInterpreterState(interpreter.get());
  return 0;
}

Called via: ./minimal min_model.tflite
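A segfault inside Invoke() can have many causes; one cheap sanity check before handing a freshly written .tflite file to the C++ minimal example is to confirm it is at least a plausible TensorFlow Lite flatbuffer, which carries the file identifier "TFL3" at byte offset 4 (after the 4-byte root-table offset). An illustrative stdlib-only check, not a fix for the Conv2D crash itself:

```python
def looks_like_tflite(raw: bytes) -> bool:
    """Return True if the buffer starts like a TFLite flatbuffer.

    A truncated or mis-saved model file can be caught here in Python
    before a bad buffer surfaces as a crash in the C++ interpreter.
    """
    return len(raw) >= 8 and raw[4:8] == b"TFL3"

# Usage: looks_like_tflite(open("min_model.tflite", "rb").read())
```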
tensorflowtensorflow | Conv1DTranspose dilation support (might be a bug, idk) | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.
The TensorFlow docs are open source! To get involved, read the documentation contributor guide.
URL(s) with the issue: please provide a link to the documentation entry.
Description of issue (what needs changing): the Conv1DTranspose dilation documentation does not inform users that dilation doesn't work for any value of dilation > 1, because it isn't implemented yet.
Clear description: currently the documentation says "an integer, specifying the dilation rate to use for dilated convolution. Currently, specifying a dilation_rate value != 1 is incompatible with specifying a stride value != 1." This may have been implemented in newer nightly builds, but with my tf-nightly 2.5.0.dev20200629 build this didn't work. I fear updating to newer nightly builds in case it breaks my research code, which relies on nightly builds until Conv1DTranspose is released in a supported build.
InvalidArgumentError: Current libxsmm and customized CPU implementations do not yet support dilation rates larger than 1.
[[node test1_ae/decoder_conv1d_transpose/conv1d_transpose (defined at D:\Users\<username>\Desktop\libs\python\nn4n\autoencoder4.py:120)]] [Op:__inference_train_function_2185]
Function call stack: train_function
This is with strides=1.
Correct links: yes.
Parameters defined: setting my dilation to 1 gets rid of the issue.
Returns defined: not necessary? I'm not sure if you are asking whether I have defined returns in my code, or if my code returns a defined value, or if the documentation claims to return something.
Raises listed and defined: InvalidArgumentError: Current libxsmm and customized CPU implementations do not yet support dilation rates larger than 1. [[node test1_ae/decoder_conv1d_transpose/conv1d_transpose (defined at D:\Users\<username>\Desktop\libs\python\nn4n\autoencoder4.py:120)]] [Op:__inference_train_function_2185] Function call stack: train_function
Usage example: nightly build, so no.
Request visuals, if applicable: no.
Submit a pull request? I will not be doing so.
Test code:
Note 1: this was built with tf-nightly 2.5.0.dev20200626, which was removed from the PyPI archive for unknown reasons.
Note 2: model.fit must be called for the error to occur; simply constructing and compiling the network is not enough to reproduce the error.

import tensorflow as tf
import tensorflow.keras as krs
import numpy as np

train_data = np.random.uniform(-1, 1, (20, 20))
inputs = krs.Input((20, 1))
x = inputs
x = krs.layers.Conv1D(1, 3, strides=1, padding="same", dilation_rate=2, activation="relu")(x)
x = krs.layers.Flatten()(x)
x = krs.layers.Dense(10, activation="relu")(x)
x = krs.layers.Dense(2, activation="relu")(x)
x = krs.layers.Dense(10, activation="relu")(x)
x = krs.layers.Dense(20, activation="relu")(x)
x = krs.layers.Reshape(target_shape=(20, 1))(x)
x = krs.layers.Conv1DTranspose(1, 3, strides=1, dilation_rate=2, padding="same", activation="relu", output_padding=0)(x)
outputs = krs.layers.Flatten()(x)
model = krs.Model(inputs, outputs, name="test")
model.compile(optimizer="adam", loss="mse")
model.summary()
model.fit(train_data, train_data)
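The InvalidArgumentError above is about the kernel implementation (libxsmm / the CPU path), not the shape math: the output length of a dilated transposed 1-D convolution is well defined. A stdlib-only helper mirroring the usual Keras deconvolution length formula, under the assumption that padding is "same" or "valid" (the function name is illustrative):

```python
def conv_transpose_output_length(n, kernel_size, strides=1, dilation_rate=1,
                                 padding="same"):
    """Output length of a 1-D transposed convolution.

    Dilation spreads the kernel taps apart, enlarging the effective kernel.
    """
    k_eff = dilation_rate * (kernel_size - 1) + 1
    if padding == "same":
        # "same" transposed convolution simply expands by the stride.
        return n * strides
    # "valid": stride expansion plus the overhang of the effective kernel.
    return n * strides + max(k_eff - strides, 0)

# The reported configuration (n=20, kernel 3, strides=1, dilation 2,
# padding "same") would keep the length at 20 if the kernel supported it.
```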
tensorflowtensorflow | converter.inference_input_type = tf.int8 is being ignored | Bug | System information: Docker image tensorflow/tensorflow:2.2.0 (same issue on Windows with Python 3 and TensorFlow 2.2.0 installed via pip).
Command used to run the converter, or code if you're using the Python API (the converter.inference_input_type and converter.inference_output_type lines are the important part):

k_model = tf.keras.models.load_model(model_path)
converter = tf.lite.TFLiteConverter.from_keras_model(k_model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

def representative_data_gen():
    for input_value in data_set:
        yield [input_value.astype(np.float32).reshape(1, 10)]

converter.representative_dataset = representative_data_gen
tflite_model_quant = converter.convert()

The output from the converter invocation: the output of the conversion is the same as without specifying the inference input and output types. The tflite output files with and without the specification of the inference_input_type are attached: tflite_conv_test.zip.
Failure details (if the conversion is successful but the generated model is wrong, state what is wrong): I expected a model without the Quantize and Dequantize layers at the beginning and at the end. The generated model has an inference input type of float32, not the expected int8.
Any other info / logs: with TensorFlow 1.15, the inference input of the generated model is int8 when specifying the inference_input_type; also, no quantization or dequantization layers are put in the generated model.
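For context on what full-integer conversion is supposed to produce: with int8 inference types, float values are mapped through the affine quantization q = round(x / scale) + zero_point, clamped to the int8 range [-128, 127], so the exported model would accept int8 directly instead of prepending Quantize/Dequantize ops. A stdlib-only sketch of that mapping; the function names and the example scale/zero-point are illustrative, not taken from the attached models:

```python
def quantize_to_int8(x, scale, zero_point):
    """Affine quantization of one float value to int8."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize_from_int8(q, scale, zero_point):
    """Inverse mapping back to float."""
    return (q - zero_point) * scale
```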
tensorflowtensorflow | not able to create libtensorflow-lite.a from build_ios_universal_lib.sh | Bug | System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS platform and distribution: macOS 10.15.4
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy), if the issue happens on a mobile device: no
TensorFlow installed from: source
TensorFlow version: 2.1.1
Python version: Python 2.7.16 and Python 3.7.3
Bazel version (if compiling from source): bazel 3.4.1 (homebrew)
GCC/compiler version (if compiling from source): -
CUDA/cuDNN version: -
GPU model and memory: -
Describe the current behavior: I was able to create libtensorflow-lite.a on my Mac for TensorFlow version 1.8.0 before. Now I was trying to do the same for version 2.1.0, but it gives the following errors:
clang: error: no such file or directory: 'i386'
clang: warning: no such sysroot directory: '-arch' [-Wmissing-sysroot]
clang: error: no such file or directory: 'i386'
clang: warning: no such sysroot directory: '-arch' [-Wmissing-sysroot]
make: *** [/Users/mymac/tflite_2.1/tensorflow/lite/tools/make/gen/ios_i386/obj/tensorflow/lite/core/api/flatbuffer_conversions.o] Error 1
make: *** Waiting for unfinished jobs....
make: *** [/Users/mymac/tflite_2.1/tensorflow/lite/tools/make/gen/ios_i386/obj/tensorflow/lite/allocation.o] Error 1
clang: error: no such file or directory: 'i386'
clang: warning: no such sysroot directory: '-arch' [-Wmissing-sysroot]
clang: error: no such file or directory: 'i386'
clang: warning: no such sysroot directory: '-arch' [-Wmissing-sysroot]
make: *** [/Users/mymac/tflite_2.1/tensorflow/lite/tools/make/gen/ios_i386/obj/tensorflow/lite/arena_planner.o] Error 1
make: *** [/Users/mymac/tflite_2.1/tensorflow/lite/tools/make/gen/ios_i386/obj/tensorflow/lite/core/api/error_reporter.o] Error 1
clang: error: no such file or directory: 'i386'
clang: warning: no such sysroot directory: '-arch' [-Wmissing-sysroot]
clang: error: no such file or directory: 'i386'
clang: warning: no such sysroot directory: '-arch' [-Wmissing-sysroot]
make: *** [/Users/mymac/tflite_2.1/tensorflow/lite/tools/make/gen/ios_i386/obj/tensorflow/lite/core/api/op_resolver.o] Error 1
clang: error: no such file or directory: 'i386'
clang: warning: no such sysroot directory: '-arch' [-Wmissing-sysroot]
make: *** [/Users/mymac/tflite_2.1/tensorflow/lite/tools/make/gen/ios_i386/obj/tensorflow/lite/c/c_api_internal.o] Error 1
make: *** [/Users/mymac/tflite_2.1/tensorflow/lite/tools/make/gen/ios_i386/obj/tensorflow/lite/core/api/tensor_utils.o] Error 1
make: *** [/Users/mymac/tflite_2.1/tensorflow/lite/tools/make/gen/ios_i386/obj/tensorflow/lite/core/subgraph.o] Error 1
Standalone code to reproduce the issue:
1. Downloaded the code.
2. First ran the download_dependencies.sh script and then build_ios_universal_lib.sh.
Faced the same issue in version 2.2. I have also tried version 2.3, but it shows the warning "This build script is deprecated."
tensorflowtensorflow | tf.lite.TFLiteConverter crashes when converting a Keras model | Bug | System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS platform and distribution: OSX 10.15.5
TensorFlow installed from: binary
TensorFlow version: v1.12.1-37224-ga6cd18a133 2.4.0-dev20200722
Python version: 3.7
Describe the current behavior: it throws an exception (2020-07-24 3:37:41).
Describe the expected behavior: it should finish the conversion successfully.
Standalone code to reproduce the issue:

import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Input(shape=(28, 28), name='input'),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(20, return_sequences=True)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax, name='output')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()

run_model = tf.function(lambda x: model(x))
# This is important, let's fix the input size.
BATCH_SIZE = 1
STEPS = 28
INPUT_SIZE = 28
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([BATCH_SIZE, STEPS, INPUT_SIZE], model.inputs[0].dtype))

# model directory
MODEL_DIR = "keras_lstm"
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_DIR)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
tflite_model = converter.convert()
with tf.io.gfile.GFile('tflite_test.tflite', 'wb') as f:
    f.write(tflite_model)
tensorflowtensorflow | InaccessibleTensorError in custom model using add_loss and build | Bug | System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS platform and distribution: macOS 10.15.5
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy), if the issue happens on a mobile device: no
TensorFlow installed from: binary
TensorFlow version: v2.2.0-rc4-8-g2b96f3662b 2.2.0
Python version: 3.7.6
Bazel version (if compiling from source): n/a
GCC/compiler version (if compiling from source): n/a
CUDA/cuDNN version: n/a
GPU model and memory: n/a
Describe the current behavior: when running the following code, which uses add_loss and creates a layer in the build method, I get an InaccessibleTensorError:

import tensorflow as tf
from tensorflow import keras

class MyModel(keras.models.Model):
    def build(self, batch_input_shape):
        self.output_layer = keras.layers.Dense(1)
        super().build(batch_input_shape)

    def call(self, inputs, training=None):
        self.add_loss(1.0)
        return self.output_layer(inputs)

model = MyModel()
model.compile(loss="mse", optimizer="nadam")
X = tf.random.uniform([100, 10])
y = tf.random.uniform([100, 1])
history = model.fit(X, y, epochs=2)

Describe the expected behavior: I expect the model to be trained normally, without errors.
Standalone code to reproduce the issue: see the code above; you can run it in this colab.
Other info / logs: here is the full stacktrace:
InaccessibleTensorError: in user code:
  /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function
    outputs = self.distribute_strategy.run(
  /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
  /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
  /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
    return fn(*args, **kwargs)
  /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:533 train_step
    y, y_pred, sample_weight, regularization_losses=self.losses)
  /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:231 __call__
    reg_loss = math_ops.add_n(regularization_losses)
  /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:180 wrapper
    return target(*args, **kwargs)
  /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3239 add_n
    return gen_math_ops.add_n(inputs, name=name)
  /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py:420 add_n
    "AddN", inputs=inputs, name=name)
  /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:744 _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py:591 _create_op_internal
    inp = self.capture(inp)
  /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py:641 capture
    tensor.graph is not self
InaccessibleTensorError: The tensor 'Tensor("Const:0", shape=(), dtype=float32)' cannot be accessed here: it is defined in another function or code block. Use return values, explicit Python locals or TensorFlow collections to access it. Defined in: FuncGraph(name=build_graph, id=139933898535488); accessed from: FuncGraph(name=train_function, id=139933898618920).
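The error text names the two graphs involved: the constant produced by add_loss(1.0) is created in FuncGraph(name=build_graph) but captured from FuncGraph(name=train_function). A toy stdlib-only model of the capture rule that produces this failure; the class and method names are invented for illustration, but the rule mirrors the check in func_graph.capture — a graph may use tensors from itself or an enclosing graph, never from a sibling graph:

```python
class Graph:
    """Toy stand-in for a TensorFlow FuncGraph."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def can_capture(self, tensor_graph):
        # Walk the chain of enclosing graphs; a tensor is accessible
        # only if it was created in this graph or an ancestor of it.
        g = self
        while g is not None:
            if g is tensor_graph:
                return True
            g = g.parent
        return False

outer = Graph("outer")
build_graph = Graph("build_graph", parent=outer)
train_function = Graph("train_function", parent=outer)
```

Here `train_function.can_capture(build_graph)` is False, which is exactly the reported situation: the two traced functions are siblings, so the constant from build is out of reach during training.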
tensorflowtensorflow | BroadcastTo | Bug | MacBook Pro. Provide the text output from tflite_convert:
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them, you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, AVERAGE_POOL_2D, CONV_2D, DEPTHWISE_CONV_2D, DIV, FULLY_CONNECTED, HARD_SWISH, MAXIMUM, MEAN, MINIMUM, MUL, PACK, RESHAPE, SHAPE, SOFTMAX, STRIDED_SLICE, SUB. Here is a list of operators for which you will need custom implementations: BroadcastTo.
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.7/bin/toco_from_protos", line 10, in <module>
    sys.exit(main())
  File "/Users/z004njq/Library/Python/3.7/lib/python/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 93, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/Users/z004njq/Library/Python/3.7/lib/python/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/Users/z004njq/Library/Python/3.7/lib/python/site-packages/tensorflow/lite/toco/python/toco_from_protos.py", line 56, in execute
    enable_mlir_converter)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a GitHub issue and pasting the following:
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them, you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, AVERAGE_POOL_2D, CONV_2D, DEPTHWISE_CONV_2D, DIV, FULLY_CONNECTED, HARD_SWISH, MAXIMUM, MEAN, MINIMUM, MUL, PACK, RESHAPE, SHAPE, SOFTMAX, STRIDED_SLICE, SUB. Here is a list of operators for which you will need custom implementations: BroadcastTo.
Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook. Also, please include a link to a GraphDef or the model if possible.
Any other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
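For reference, the unsupported BroadcastTo op implements NumPy-style shape broadcasting: trailing dimensions are matched pairwise, and a dimension of 1 stretches to match its partner. A stdlib-only sketch of the shape rule (illustrative, not TensorFlow Lite code):

```python
from itertools import zip_longest

def broadcast_shape(a, b):
    """Broadcast two shapes the NumPy/TF way, or raise if incompatible."""
    out = []
    # Align the shapes at their trailing dimensions, padding with 1s.
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x != 1 and y != 1 and x != y:
            raise ValueError(f"cannot broadcast {a} with {b}")
        out.append(max(x, y))
    return tuple(reversed(out))
```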