| repository | issue title | labels | body |
|---|---|---|---|
tensorflow/tensorflow | Unsupported operation MEAN while trying to apply GpuDelegate to tflite | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub (tag: bug_template). System information — Have I written custom code: no, just some assembled statements. OS platform and distribution (e.g. Linux Ubuntu 16.04, macOS 10.14): mobile. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy), if the issue happens on a mobile device: Samsung S9. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): tf-nightly 1.15.0.dev20190812. Python version: 3.6.5. Describe the current behavior: I am trying to convert the Keras MobileNet model with float16 precision for GPU inference, but when running the task I encounter the following error: "Caused by: java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: Next operations are not supported by GPU delegate: MEAN: Operation is not supported. First 88 operations will run on the GPU, and the remaining 5 on the CPU. tensorflow/lite/kernels/conv.cc:259 bias->type != input_type (10 != 1). Node number 90 (CONV_2D) failed to prepare. tensorflow/lite/kernels/conv.cc:259 bias->type != input_type (10 != 1). Node number 3 (CONV_2D) failed to prepare." Describe the expected behavior: (not provided). Code to reproduce the issue — the Python script for the conversion is: `import tensorflow as tf; import tensorflow.keras as keras; model = keras.applications.mobilenet.MobileNet(input_shape=None, alpha=1.0, depth_multiplier=1, dropout=1e-3, include_top=True, weights='imagenet', input_tensor=None, pooling=None, classes=1000); converter = tf.lite.TFLiteConverter.from_keras_model_file('mobilenet.h5'); converter.optimizations = [tf.lite.Optimize.DEFAULT]; converter.target_spec.supported_types = [tf.lite.constants.FLOAT16]; tflite_model = converter.convert(); open('mobilenet.tflite', 'wb').write(tflite_model)`. In the Android code I call `tfliteOptions.setAllowFp16PrecisionForFp32(true)`. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback (large logs and files should be attached). |
tensorflow/tensorflow | TF 2.0.0-rc0: tensor shape check fails when using tf.function in tf.distribute strategy scope | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub (tag: bug_template). System information — Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g. Linux Ubuntu 16.04): CentOS 7. TensorFlow installed from (source or binary): binary (docker image 2.0.0rc0-gpu-py3-jupyter). TensorFlow version (use command below): 2.0.0-rc0. Python version: 3.6.8. CUDA/cuDNN version: V10.0.130. GPU model and memory: P40. Describe the current behavior: I followed the documentation for writing a custom training loop (ref); in short, I catch a ValueError exception: "Input tensor 'Const_1:0' enters the loop with shape (), but has shape (None, 110, 110, 1) after one iteration. To allow the shape to vary across iterations, use the shape_invariants argument of tf.while_loop to specify a less-specific shape." If I remove the @tf.function decorator, the code works fine with a warning: "WARNING:tensorflow: Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `experimental_run_v2` inside a tf.function to get the best performance." Describe the expected behavior: it should work, and run faster than eager mode. Code to reproduce the issue (Python): `import tensorflow as tf`; `mirrored_strategy = tf.distribute.MirroredStrategy()`; a helper `get_net()` that builds a `tf.keras.Sequential()` with `tf.keras.layers.Conv2D(filters=10, kernel_size=(3, 3))` and `tf.keras.layers.Dense(1)`; `data = tf.random.normal(shape=[1280, 112, 112, 3])`; `labels = tf.random.normal(shape=[1280])`; `multi_db = tf.data.Dataset.from_tensor_slices((data, labels)).batch(80)`; `dist_dataset = mirrored_strategy.experimental_distribute_dataset(multi_db)`; inside `with mirrored_strategy.scope():` create `net = get_net()`; a `@tf.function`-decorated `replica_fn(inputs)` that unpacks `d, l = inputs` and returns `net(d)`; a `@tf.function`-decorated `distributed_train_epoch(dataset)` that, for each `x` in the dataset, computes `per_replica_result = mirrored_strategy.experimental_run_v2(replica_fn, args=(x,))` and accumulates `total_result += mirrored_strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_result, axis=None)`, returning `total_result`; and finally `for _ in range(100): distributed_train_epoch(dist_dataset)`. |
tensorflow/tensorflow | Distributed training not working properly: "Waiting for model to be ready. Ready_for_local_init_op: Variables not initialized: global_step, ..." | Bug | Hello, I am using the TF-Hub ELMo embedding for NER model training. Training works fine on a single machine. When I try to do training on GCP ML Engine with 1 master node, 2 parameter servers and 3 worker nodes, training starts on the master node, but on all three worker nodes I have been getting this log continuously for an hour; it looks like training is not getting started on the worker nodes. I am using these versions, which are currently supported by Google Cloud ML Engine: Python 3.5, TensorFlow 1.13.1. ML Engine resource usage (trainingInput): scaleTier: CUSTOM, masterType: large_model, workerType: standard, parameterServerType: standard, workerCount: 3, parameterServerCount: 2. Log: "Waiting for model to be ready. Ready_for_local_init_op: Variables not initialized: global_step, module/bilm/char_embed, module/bilm/CNN/{W_cnn,b_cnn}_0 through _6, module/bilm/CNN_high_0/{W_carry, b_carry, W_transform, b_transform}, module/bilm/CNN_high_1/{W_carry, b_carry, W_transform, b_transform}, module/bilm/CNN_proj/{W_proj, b_proj}, module/bilm/RNN_0/rnn/multi_rnn_cell/cell_0/rnn/lstm_cell/{kernel, bias, projection/kernel}, the same for cell_1, the same for RNN_1 (cell_0 and cell_1), module/aggregation/weights, module/aggregation/scaling, lstm_fused_cell/{kernel, bias}, lstm_fused_cell_1/{kernel, bias}, dense/{kernel, bias}, crf, beta1_power, beta2_power, and the corresponding .../Adam and .../Adam_1 slot variables for the lstm_fused_cell, dense and crf variables; ready: None." |
tensorflow/tensorflow | TF 2.0rc: TFRecord file size bigger than the original CSV file consisting of text data | Bug | I am trying to transform a CSV file consisting of two columns: short_description and label. The descriptions consist of sentences of length 1 to 4, and the label is just an integer number. The CSV file has a size of 740 MB; however, on converting it to TFRecord, the size of the new file increases to 1.2 GB, which I think is not correct, since TFRecord files are supposed to be smaller. For the short_description column (1) I am using BytesList, and for the label column (2) I am using Int64List, as shown in this tutorial. Just to be clear, the data read from the new TFRecord file is fine, i.e. there is no problem; it is just the size of the file. I do not understand what the problem is. Please help. Thank you. |
tensorflow/tensorflow | Installer for C API does not include copyright/license information for TensorFlow | Bug | System information — OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows, Linux and OSX. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy), if the issue happens on a mobile device: N/A. TensorFlow installed from (source or binary): binary, using the instructions from the C API page. TensorFlow version: 1.14.0. Python version: N/A. Installed using virtualenv/pip/conda: installed by following the instructions on the C API page (basically extracted from a tarball or zip file). Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A. Describe the problem: the problem I see is that there is no copyright or license information for the TensorFlow library itself. There is a LICENSE file under include/tensorflow/c/LICENSE, but that gives all of the license information for the third-party libraries that TensorFlow uses; there is nothing for TensorFlow itself. Provide the exact sequence of commands/steps that you executed before running into the problem: Windows — unzip the zip file; Linux/Darwin — untar the compressed tarball. Any other info/logs: none. |
tensorflow/tensorflow | Users can't add losses w/ input dependence in custom layers in eager mode (or docs unclear) | Bug | System information — TensorFlow version you are using: 2.0. Are you willing to contribute it (yes/no): yes. Describe the feature and the current behavior/state: thanks for making TensorFlow. I want to make custom layers which handle their own losses, to write less code (e.g. layer-wise reconstruction/KL-divergence losses in stacked autoencoders). The docs (L1083) indicate this isn't possible in eager mode, which sucks, because eager is the default in 2.0. I like tf.function/graph mode, but frankly it's a different dialect of TF which forces users to waste time translating code into the graph dialect, so I want to use eager even if performance is bad; it's better than debugging tf.function. Without modular custom losses, users must write complicated, annoying training loops to apply the correct loss function to the correct combination of inputs and outputs from a model, and models return arrays, so the ordering gets complicated (can we name model outputs?). Would it be possible to permit TF users to define module-specific, input-dependent losses for custom layers in eager mode? Will this change the current API? How could users add custom input-dependent losses to custom layers? Thus users could dramatically simplify training loops. Who will benefit from this feature? 2.0 Keras users with custom layers that use custom losses. Any other info: I want to make these coder bricks into layers which handle reconstruction/KL divergence in a modular way: `def get_sensor_and_actuator(agents, in_spec): if in_spec.rank is 3: sensor = use_image_sensor(agents); actuator = use_image_actuator(agents); elif in_spec.rank is 2 and in_spec.shape[1] is None: sensor = use_ragged_sensor(agents, in_spec); actuator = use_ragged_actuator(agents, in_spec); else: sensor = use_resizer(agents, code_spec.shape); actuator = use_resizer(agents, in_spec.shape); return sensor, actuator` and `def use_coder(agents, in_spec): log('use_coder', in_spec, color='blue'); normalizer = use_norm(); if in_spec.rank is 3: h, w = get_hw(in_spec.shape); hw = (h, w); def resize_then_norm(x): x = tf.image.resize(x, hw); return norm(x); normalizer = resize_then_norm; coordinator = L.Lambda(concat_coords); sensor, actuator = get_sensor_and_actuator(agents, in_spec); def call(x): normie = normalizer(x); normie_w_coords = coordinator(normie); code = sensor(normie_w_coords); reconstruction = actuator(code); return normie, code, reconstruction; return call`. Alas, since I can't use input-dependent modular losses, I need to: 1) design my model to return a bunch of extra outputs; 2) keep track of the order of those outputs (which one is a code, which one is a reconstruction); 3) unpack the outputs of the model to organize codes, reconstructions, normalized inputs and the actual desired outputs; 4) organize pairs of these normalized inputs and reconstructions; 5) loop over the pairs and apply a loss function; 6) append the error terms to a list of losses. Is there some reason we cannot add a loss function to the layers themselves and avoid all this BS? This would really simplify modular agents. Please advise. |
tensorflow/tensorflow | TensorFlow on Windows Server 2019 and WSL | Bug | System information: Windows Server 2019 Essentials and Ubuntu 18.04 LTS in WSL. TensorFlow installed with pip3. Version: all versions above 1.6.0 throw an error. Python version: 3.6.8. Installed using pip, CPU only. I am trying to use this GitHub repository, and all versions above 1.6.0 don't work; I get an error (core dumped). If I install older versions I get different errors, like this on version 1.5.0: AttributeError: module 'tensorflow.python.ops.rnn_cell_impl' has no attribute 'assert_like_rnncell'. Google searching didn't help. I run this command: `sudo python3 train.py`. Log after I run the command above: "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:493: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)])", with analogous FutureWarnings at dtypes.py:494-497 and :502 for quint8, qint16, quint16, qint32 and resource. Checkpoint path: logs-tacotron/model.ckpt. Loading training data from training/train.txt. Using model: tacotron. Hyperparameters: adam_beta1: 0.9, adam_beta2: 0.999, attention_depth: 128, batch_size: 32, cleaners: english_cleaners, decay_learning_rate: True, decoder_depth: 1024, embed_depth: 512, encoder_depth: 256, fmax: 7600, fmin: 125, frame_length_ms: 50, frame_shift_ms: 12.5, griffin_lim_iters: 60, initial_learning_rate: 0.001, max_abs_value: 4, max_frame_num: 1000, max_iters: 300, min_level_db: -100, num_freq: 1025, num_mels: 160, outputs_per_step: 5, postnet_depth: 512, power: 1.2, preemphasis: 0.97, prenet_depths: [256, 256], ref_level_db: 20, reg_weight: 1e-06, sample_rate: 24000, use_cmudict: False. Loaded metadata for 32 examples (0.09 hours). Traceback (most recent call last): File "train.py", line 157, in <module>: main(); File "train.py", line 153, in main: train(log_dir, args); File "train.py", line 66, in train: model.initialize(feeder.inputs, feeder.input_lengths, feeder.mel_targets, feeder.linear_targets, feeder.stop_token_targets, global_step); File "/mnt/d/TTS/tacotron-master/models/tacotron.py", line 77, in initialize: CustomDecoder(decoder_cell, helper, decoder_init_state); File "/mnt/d/TTS/tacotron-master/models/custom_decoder.py", line 46, in __init__: rnn_cell_impl.assert_like_rnncell(type(cell), cell); AttributeError: module 'tensorflow.python.ops.rnn_cell_impl' has no attribute 'assert_like_rnncell'. |
tensorflow/tensorflow | Creating a boolean constant prints a deprecation warning | Bug | System information — Have I written custom code: yes. OS platform and distribution: Ubuntu 16.04. TensorFlow installed from: binary. TensorFlow version: 2.0.0rc0. Python version: 3.6. Describe the current behavior: creating a boolean constant prints a deprecation warning: "W0828 15:45:36.142576 139852094695168 deprecation.py:323] From .../lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py:253: _EagerTensorBase.cpu (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.identity instead." Describe the expected behavior: no deprecation warning. Code to reproduce the issue (Python): `import tensorflow as tf; tf.zeros(10, dtype=tf.bool)`. |
tensorflow/tensorflow | static int64 GetDirectConvCost: integer overflow | Bug | In tensorflow/core/kernels/deep_conv2d.cc, line 74: `static int64 GetDirectConvCost(int filter_rows, int filter_cols, int in_depth, int out_depth, int out_rows, int out_cols) { return filter_rows * filter_cols * in_depth * out_depth * out_rows * out_cols; }` — this can lead to integer overflow and weird results. I think it should be something like: `return int64(filter_rows) * int64(filter_cols) * int64(in_depth) * int64(out_depth) * int64(out_rows) * int64(out_cols);` |
tensorflow/tensorflow | TF 2.0.0rc0 cannot connect to TPU device | Bug | I created a VM and a v3-8 TPU with the `ctpu up` command and updated the TF version to tf2.0.0rc0 via pip3. When I try to connect to the TPU device, it returns the error: "InvalidArgumentError: Unable to find a context_id matching the specified one (5613663074031560004). Perhaps the worker was restarted, or the context was GC'd? Additional GRPC error information: {created: 1566994715.938381293, description: 'Error received from peer', file: 'external/grpc/src/core/lib/surface/call.cc', file_line: 1039, grpc_message: 'Unable to find a context_id matching the specified one (5613663074031560004). Perhaps the worker was restarted, or the context was GC'd?', grpc_status: 3}. 2019-08-28 12:18:36.196440: E tensorflow/core/distributed_runtime/rpc/eager/grpc_eager_client.cc:72] Remote EagerContext with id 5613663074031560004 does not seem to exist." I also tried the same in Colab with the rc0 version, and I get the same error. The code I used is the one given in the documentation: `resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU); tf.config.experimental_connect_to_host(resolver.master()); tf.tpu.experimental.initialize_tpu_system(resolver); tpu_strategy = tf.distribute.experimental.TPUStrategy(resolver)`. |
tensorflow/tensorflow | micro: riscv32_mcu build fails with undefined references | Bug | System information — OS platform and distribution: Ubuntu 18.04. TensorFlow installed from: source. TensorFlow version: 298534b745db43b2ad18256ed781bd2f142e5bc7. Python version: 2.7.15 / 3.6.8. Installed using virtualenv/pip/conda: pip3. Describe the problem: the TF Lite for micro riscv32_mcu build fails with undefined references when running `make -f tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=riscv32_mcu TARGET_ARCH=riscv32_mcu hello_world_bin`. From /home/ehirdoy/src/tensorflow, the link step reports: undefined reference to `__wrap_exit' (from exit.c, .text.exit+0x2e), `__wrap_sbrk' (sbrkr.c, .text._sbrk_r+0x12), `__wrap_write' (writer.c, .text._write_r+0x16), `__wrap_close' (closer.c, .text._close_r+0x12), `__wrap_lseek' (lseekr.c, .text._lseek_r+0x16), `__wrap_read' (readr.c, .text._read_r+0x16), `__wrap_fstat' (fstatr.c, .text._fstat_r+0x14), and `__wrap_isatty' (isattyr.c, .text._isatty_r+0x12). Provide the exact sequence of commands/steps that you executed before running into the problem: `export PATH=$PATH:tensorflow/lite/experimental/micro/tools/make/downloads/riscv_toolchain/bin`; `make -f tensorflow/lite/experimental/micro/tools/make/Makefile clean`; `make -f tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=riscv32_mcu TARGET_ARCH=riscv32_mcu hello_world_bin`. Any other info/logs: make log attached. |
tensorflow/tensorflow | When I follow the guide to create an mbed folder I get this error; this is the first time I use TFLite, can anyone help me please? | Bug | When I follow the guide demonstrating the absolute basics of using TensorFlow Lite for Microcontrollers, I create a folder for mbed, but I get this error: "make: *** No rule to make target 'tensorflow/lite/experimental/micro/tools/make/gen/mbed_cortex-m4/prj/hello_world/mbed/tensorflow/lite/experimental/micro/tools/make/downloads/cmsis_ext/arm_cmplx_mag_squared_q10p6.c', needed by 'generate_hello_world_mbed_project'." OS platform and distribution: Linux Ubuntu 18.04. |
tensorflow/tensorflow | Customized Keras TensorBoard callback | Bug | System information — Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 16.04. TensorFlow installed from (source or binary): installed by pip in an Anaconda environment. TensorFlow version (use command below): tensorflow-gpu 2.0.0-rc0, tensorboard 1.14. Python version: 3.6.9. CUDA/cuDNN version: cudatoolkit 10.0.130, cudnn 7.6.0. GPU model and memory: GeForce GTX 1080, 8117 MiB. Describe the current behavior: in TensorFlow 1.13.1 I created my own TensorBoard callback, to add images while training, by customizing the Keras TensorBoard callback (e.g. at L1114-L1122). I added the code: `if hasattr(layer, 'output'): if isinstance(layer.output, list): for i, output in enumerate(layer.output): tf.summary.histogram('{}_out_{}'.format(layer.name, i), output) else: tf.summary.histogram('{}_out'.format(layer.name), layer.output)`; then (my code) `input1 = self.model.get_layer('input1').input; tf.summary.image('input1', input1, max_outputs=max_out)` (end my code); then `self.merged = tf.summary.merge_all()`. With that, I could see the images while training. But now, at L1491, I can't find `if self.histogram_freq and self.merged is None: self.merged = tf.summary.merge_all()` in `def set_model(self, model)`. Describe the expected behavior: where can I add `input1 = self.model.get_layer('input1').input; tf.summary.image('input1', input1, max_outputs=max_out)` to see the images while training? |
tensorflow/tensorflow | tensorflow.keras model compute_output_shape gives wrong results | Bug | System information — Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 18.04. TensorFlow installed from (source or binary): conda. TensorFlow version (use command below): tried with 1.12.0 and 1.14.0. Python version: 3.6. Describe the current behavior: using a Keras model stored in a variable `mm`, in tensorflow.keras I would like to calculate the output shape for a given input. This works correctly only the first time I call `mm.compute_output_shape`; the subsequent results of calling the same function with different shapes are inconsistent. Using the standalone Keras method I get different, consistent results. An example of the problem is implemented in the tf_bug.py script that you find in the zip. If you call it without parameters, it loads a fully convolutional model from a JSON file provided in the zip and does: `import json; import tensorflow.keras as keras; with open('model_tf_bug.json', 'r') as fi: kk = json.load(fi); mm = keras.models.model_from_json(json.dumps(kk)); for n in range(999, 1020): ss = (1, n, 1, 1); print(ss, mm.compute_output_shape(input_shape=ss))`. The results (input and corresponding output shape on each line) are: (1, 999, 1, 1) → (1, 481, 1, 1); (1, 1000, 1, 1) → (1, 481, 1, 1); (1, 1001, 1, 1) → (1, 482, 1, 1); (1, 1002, 1, 1) → (1, 482, 1, 1); (1, 1003, 1, 1) → (1, 483, 1, 1); (1, 1004, 1, 1) → (1, 483, 1, 1); (1, 1005, 1, 1) → (1, 484, 1, 1); (1, 1006, 1, 1) → (1, 484, 1, 1); (1, 1007, 1, 1) → (1, 482, 1, 1); (1, 1008, 1, 1) → (1, 485, 1, 1) — I kept only the relevant lines. You see that after the first lines, which are correct, starting with input shape 1007 the output shape decreases and starts to show erratic behavior, while for a fully convolutional model it should increase monotonically with the input size. Describe the expected behavior: running the same script with the argument `keras` uses the vanilla Keras version (2.2.4), and in this case the output shape increases as expected: (1, 999, 1, 1) → (1, 481, 1, 1); (1, 1000, 1, 1) → (1, 481, 1, 1); (1, 1001, 1, 1) → (1, 482, 1, 1); (1, 1002, 1, 1) → (1, 482, 1, 1); (1, 1003, 1, 1) → (1, 483, 1, 1); (1, 1004, 1, 1) → (1, 483, 1, 1); (1, 1005, 1, 1) → (1, 484, 1, 1); (1, 1006, 1, 1) → (1, 484, 1, 1); (1, 1007, 1, 1) → (1, 485, 1, 1); (1, 1008, 1, 1) → (1, 485, 1, 1). Note that I can get correct results with tf.keras as well if I clear the model's output-shape cache before I compute the output shape (run the script with the argument `clear`), using a modified loop as follows: `for n in range(999, 1020): ss = (1, n, 1, 1); if len(sys.argv) > 1 and sys.argv[1] == 'clear': mm._output_shape_cache.clear(); print(ss, mm.compute_output_shape(input_shape=ss))`; the results are then correct, as expected. Looking into the function `mm.compute_output_shape`, I found that, compared to Keras, you changed the cache-key generation: where Keras does `cache_key = '_'.join(str(x) for x in input_shape)`, tf.keras does `cache_key = generic_utils.object_list_uid(input_shape)`. It appears that the cache key in tf.keras confuses different input shapes as the same and returns wrong results from the cache. Code to reproduce the issue: you find the script, model and output files in the zip tf_compute_output_shape_bug.zip. |
tensorflow/tensorflow | TF 2.0: "Unsupported op for node" error messages in latest tf-nightly (8/27/19) | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub (tag: bug_template). System information — Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): 16.04. Mobile device, if the issue happens on a mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): tf-nightly-gpu-2.0-preview 2.0.0.dev20190827. Python version: 3.7.4. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: 10.0 / 7.6.2. GPU model and memory: Titan Xp, 12 GB. (You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with, for TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`, or for TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`.) Describe the current behavior: when running a model that previously yielded no errors, I'm getting errors of the form: "2019-08-27 11:35:15.004095: E tensorflow/compiler/jit/compilability_check_util.cc:346] Unsupported op for node: policy_fn/call/build_one_time_effects_or_recurrence/get_effects/get_phic/add_collision/while/Enter_15 (id=223, op: Enter, T=DT_INT32, frame_name='policy_fn/.../while', is_constant=false, parallel_iterations=10, device: /job:localhost/replica:0/task:0/device:GPU:0)", followed by analogous "Unsupported op for node" messages for the same while-loop's Enter_9 (id=224), Enter_4 (id=227) and Enter_2 (id=228) nodes (T=DT_INT32) and its Enter_11 node (id=242, T=DT_BOOL), and for its Merge_29 (id=255), Merge_23 (id=256), Merge_18 (id=259) and Merge_16 (id=260) nodes (op: Merge, N=2, T=DT_INT32, each taking the corresponding Enter node and a NextIteration node as inputs), when using AutoGraph with the @tf.function decorator. Describe the expected behavior / code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem): not provided. Other info/logs: I'm just curious how we are supposed to interpret these logs; so far it's non-blocking, but I'm wondering if this indicates AutoGraph is failing to convert parts of my model. |
tensorflow/tensorflow | Training stalls after "Saving checkpoints for 0" | Bug | Hello, I'm trying to run the librispeech problem using tensor2tensor on Google Colab's GPU runtime, but the training stalls after "Saving checkpoints for 0" and "opened dynamic library libcublas.so.10.0". There is no error message; it just stops there forever. I'm posting it here because the stalling point happens in TensorFlow's package. Python version: 3.6.8. TensorFlow version: 1.14.0. tensor2tensor version: 1.14.0. CUDA version: 10.1. OS: Ubuntu 18.04. This is the code: `from tensor2tensor import models; from tensor2tensor.utils import registry`, then `t2t-trainer --tmp_dir='/content/gdrive/My Drive/tcc/t2t/librispeech/tmp' --problem=librispeech_clean_small --model=transformer --train_steps=10 --hparams_set=transformer_librispeech --data_dir='/content/gdrive/My Drive/tcc/t2t/librispeech/data' --output_dir='/content/gdrive/My Drive/tcc/t2t/librispeech/output' --worker_gpu=0`. And here's the output: "WARNING: Logging before flag parsing goes to stderr.", followed by a series of deprecation warnings (W0827 17:43:33 through 17:43:38) from tensor2tensor, mesh_tensorflow and tensorflow_gan, including: "The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead" (tensor2tensor/utils/expert_utils.py:68); "The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see ...; for IO-related ops, if you depend on functionality not listed there, please file an issue"; "The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead" (adafactor.py:27); "The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead" (multistep_optimizer.py:32); "The name tf.train.CheckpointSaverListener is deprecated. Please use tf.estimator.CheckpointSaverListener instead" (mesh_tensorflow/ops.py:4237); "The name tf.train.SessionRunHook is deprecated. Please use tf.estimator.SessionRunHook instead" (mesh_tensorflow/ops.py:4260); "The name tf.nn.rnn_cell.RNNCell is deprecated. Please use tf.compat.v1.nn.rnn_cell.RNNCell instead" (neural_stack.py:38); "The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead" (rl/gym_utils.py:235); "The name tf.OptimizerOptions is deprecated. Please use tf.compat.v1.OptimizerOptions instead" (trainer_lib.py:111); "The name tf.estimator.tpu.TPUEstimator is deprecated. Please use tf.compat.v1.estimator.tpu.TPUEstimator instead" (tensorflow_gan/python/contrib_utils.py:305); "The name tf.estimator.tpu.TPUEstimatorSpec is deprecated. Please use tf.compat.v1.estimator.tpu.TPUEstimatorSpec instead" (contrib_utils.py:310); "The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead" (t2t-trainer:32); "The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead" (t2t-trainer:32). W0827 17:43:38
449807 139969908836224 deprecation wrapper py 119 from usr local bin t2 t trainer 33 the name tf app run be deprecate please use tf compat v1 app run instead i0827 17 43 38 450179 139969908836224 t2 t trainer py 155 find unparsed command line argument check if any start with hp and interpret those as hparam setting w0827 17 43 38 450768 139969908836224 deprecation wrapper py 119 from usr local lib python3 6 dist package tensor2tensor bin t2 t trainer py 165 the name tf logging warn be deprecate please use tf compat v1 log warn instead w0827 17 43 38 450837 139969908836224 t2 t trainer py 165 find unknown flag worker gpu 0 w0827 17 43 38 451183 139969908836224 deprecation wrapper py 119 from usr local lib python3 6 dist package tensor2tensor util hparams lib py 49 the name tf gfile exist be deprecate please use tf io gfile exist instead w0827 17 43 38 451832 139969908836224 deprecation wrapper py 119 from usr local lib python3 6 dist package tensor2tensor util trainer lib py 839 the name tf set random seed be deprecate please use tf compat v1 set random seed instead w0827 17 43 38 452693 139969908836224 deprecation wrapper py 119 from usr local lib python3 6 dist package tensor2tensor util trainer lib py 123 the name tf graphoption be deprecate please use tf compat v1 graphoption instead w0827 17 43 38 452859 139969908836224 deprecation wrapper py 119 from usr local lib python3 6 dist package tensor2tensor util trainer lib py 129 the name tf gpuoption be deprecate please use tf compat v1 gpuoption instead w0827 17 43 38 453019 139969908836224 deprecation py 323 from usr local lib python3 6 dist package tensor2tensor util trainer lib py 242 runconfig init from tensorflow contrib learn python learn estimator run config be deprecate and will be remove in a future version instruction for update when switch to tf estimator estimator use tf estimator runconfig instead i0827 17 43 38 453181 139969908836224 trainer lib py 265 configure dataparallelism to replicate the model 
i0827 17 43 38 453252 139969908836224 device py 76 schedule continuous train and eval i0827 17 43 38 453314 139969908836224 device py 77 worker gpu 1 i0827 17 43 38 453381 139969908836224 device py 78 sync false w0827 17 43 38 453437 139969908836224 device py 141 schedule continuous train and eval assume that training be run on a single machine i0827 17 43 38 453504 139969908836224 device py 170 datashard device gpu 0 i0827 17 43 38 453559 139969908836224 device py 171 cache device none i0827 17 43 38 454001 139969908836224 device py 172 ps devices gpu 0 i0827 17 43 38 454567 139969908836224 estimator py 209 use config task type none task i d 0 cluster spec master num ps replicas 0 num worker replicas 0 environment local be chief true evaluation master train distribute none eval distribute none experimental max worker delay sec none device fn none tf config gpu option per process gpu memory fraction 1 0 tf random seed none save summary step 100 save checkpoint sec none log step count step 100 protocol none session config gpu option per process gpu memory fraction 0 95 allow soft placement true graph option optimizer option global jit level off isolate session state true save checkpoint step 1000 keep checkpoint max 20 keep checkpoint every n hour 10000 model dir content gdrive my drive tcc t2 t librispeech output use tpu false t2 t device info num async replicas 1 datum parallelism w0827 17 43 38 454751 139969908836224 model fn py 630 estimator s model fn wrapping model fn at 0x7f4cda9e7ae8 include param argument but param be not pass to estimator w0827 17 43 38 454877 139969908836224 trainer lib py 783 validationmonitor only work with schedule train and evaluate w0827 17 43 38 455530 139969908836224 deprecation wrapper py 119 from usr local lib python3 6 dist package tensor2tensor bin t2 t trainer py 328 the name tf gfile makedirs be deprecate please use tf io gfile makedirs instead w0827 17 43 38 458196 139969908836224 deprecation wrapper py 119 from usr local 
lib python3 6 dist package tensor2tensor bin t2 t trainer py 344 the name tf gfile open be deprecate please use tf io gfile gfile instead i0827 17 43 38 487565 139969908836224 estimator training py 186 not use distribute coordinator i0827 17 43 38 487942 139969908836224 training py 612 run training and evaluation locally non distribute i0827 17 43 38 488237 139969908836224 training py 700 start train and evaluate loop the evaluate will happen after every checkpoint checkpoint frequency be determine base on runconfig argument save checkpoint step 1000 or save checkpoint sec none w0827 17 43 38 493283 139969908836224 deprecation py 323 from usr local lib python3 6 dist package tensorflow python training training util py 236 variable initialize value from tensorflow python op variable be deprecate and will be remove in a future version instruction for update use variable read value variable in 2 x be initialize automatically both in eager and graph inside tf defun contexts i0827 17 43 38 502703 139969908836224 problem py 644 read datum file from content gdrive my drive tcc t2 t librispeech data librispeech clean small train i0827 17 43 38 543926 139969908836224 problem py 670 partition 0 num datum file 100 w0827 17 43 38 545797 139969908836224 deprecation py 323 from usr local lib python3 6 dist package tensor2tensor datum generator problem py 680 parallel interleave from tensorflow python datum experimental op interleave op be deprecate and will be remove in a future version instruction for update use tf datum dataset interleave map func cycle length block length num parallel call tf data experimental autotune instead if sloppy execution be desire use tf datum option experimental determinstic w0827 17 43 38 581830 139969908836224 deprecation py 323 from usr local lib python3 6 dist package tensor2tensor layer common audio py 92 to int32 from tensorflow python op math op be deprecate and will be remove in a future version instruction for update use tf cast instead 
w0827 17 43 38 823341 139969908836224 deprecation py 323 from usr local lib python3 6 dist package tensor2tensor layer common audio py 115 to float from tensorflow python op math op be deprecate and will be remove in a future version instruction for update use tf cast instead w0827 17 43 38 987241 139969908836224 deprecation py 323 from usr local lib python3 6 dist package tensor2tensor util data reader py 275 tf record iterator from tensorflow python lib io tf record be deprecate and will be remove in a future version instruction for update use eager execution and tf datum tfrecorddataset path w0827 17 43 40 327878 139969908836224 deprecation py 323 from usr local lib python3 6 dist package tensor2tensor util data reader py 395 datasetv1 output shape from tensorflow python data op dataset op be deprecate and will be remove in a future version instruction for update use tf compat v1 datum get output shape dataset w0827 17 43 40 328149 139969908836224 deprecation wrapper py 119 from usr local lib python3 6 dist package tensor2tensor util datum reader py 398 the name tf logging warning be deprecate please use tf compat v1 log warning instead w0827 17 43 40 328256 139969908836224 datum reader py 399 shape be not fully define assume batch size mean token w0827 17 43 40 374079 139969908836224 deprecation py 323 from usr local lib python3 6 dist package tensorflow python datum experimental op group py 193 add dispatch support wrapper from tensorflow python op array op be deprecate and will be remove in a future version instruction for update use tf where in 2 0 which have the same broadcast rule as np where w0827 17 43 40 414666 139969908836224 deprecation wrapper py 119 from usr local lib python3 6 dist package tensor2tensor util data reader py 231 the name tf summary scalar be deprecate please use tf compat v1 summary scalar instead i0827 17 43 40 470206 139969908836224 estimator py 1145 calling model fn i0827 17 43 40 481091 139969908836224 t2 t model py 2248 set 
t2tmodel mode to train w0827 17 43 40 552857 139969908836224 deprecation wrapper py 119 from usr local lib python3 6 dist package tensor2tensor util t2 t model py 244 the name tf summary text be deprecate please use tf compat v1 summary text instead i0827 17 43 41 160171 139969908836224 api py 255 use variable initializer uniform unit scale i0827 17 43 41 531091 139969908836224 t2 t model py 2248 transforming feature input with speech recognition modality bottom w0827 17 43 41 532868 139969908836224 deprecation py 323 from usr local lib python3 6 dist package tensor2tensor layer modalitie py 439 conv2d from tensorflow python layer convolutional be deprecate and will be remove in a future version instruction for update use tf keras layer conv2d instead i0827 17 43 41 922302 139969908836224 t2 t model py 2248 transforming feature target with symbol modality 256 384 target bottom i0827 17 43 42 037450 139969908836224 t2 t model py 2248 building model body w0827 17 43 42 094394 139969908836224 deprecation py 506 from usr local lib python3 6 dist package tensor2tensor model transformer py 96 call dropout from tensorflow python op nn op with keep prob be deprecate and will be remove in a future version instruction for update please use rate instead of keep prob rate should be set to rate 1 keep prob w0827 17 43 42 130389 139969908836224 deprecation wrapper py 119 from usr local lib python3 6 dist package tensor2tensor layer common layer py 3077 the name tf layer dense be deprecate please use tf compat v1 layer dense instead w0827 17 43 42 473380 139969908836224 deprecation wrapper py 119 from usr local lib python3 6 dist package tensor2tensor layer common attention py 1249 the name tf summary image be deprecate please use tf compat v1 summary image instead i0827 17 43 49 011597 139969908836224 t2 t model py 2248 transform body output with symbol modality 256 384 top w0827 17 43 49 118912 139969908836224 deprecation wrapper py 119 from usr local lib python3 6 dist package 
tensor2tensor util learn rate py 120 the name tf train get or create global step be deprecate please use tf compat v1 train get or create global step instead i0827 17 43 49 120072 139969908836224 learning rate py 29 base learning rate 2 000000 i0827 17 43 49 131614 139969908836224 optimize py 338 trainable variable total size 70343552 i0827 17 43 49 131888 139969908836224 optimize py 338 non trainable variable total size 5 i0827 17 43 49 132170 139969908836224 optimize py 193 use optimizer adam i0827 17 43 59 596418 139969908836224 estimator py 1147 do call model fn i0827 17 43 59 597772 139969908836224 basic session run hook py 541 create checkpointsaverhook i0827 17 44 03 685569 139969908836224 monitor session py 240 graph be finalize 2019 08 27 17 44 03 685968 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2019 08 27 17 44 03 708726 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcuda so 1 2019 08 27 17 44 03 898700 I tensorflow stream executor cuda cuda gpu executor cc 1005 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 08 27 17 44 03 899340 I tensorflow compiler xla service service cc 168 xla service 0x207fb80 execute computation on platform cuda device 2019 08 27 17 44 03 899389 I tensorflow compiler xla service service cc 175 streamexecutor device 0 tesla t4 compute capability 7 5 2019 08 27 17 44 03 901408 I tensorflow core platform profile util cpu util cc 94 cpu frequency 2200000000 hz 2019 08 27 17 44 03 901570 I tensorflow compiler xla service service cc 168 xla service 0x207ea00 execute computation on platform host device 2019 08 27 17 44 03 901594 I tensorflow compiler xla service service cc 175 streamexecutor device 0 2019 08 27 17 44 03 901797 I tensorflow stream executor cuda cuda gpu executor cc 1005 successful numa node 
read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 08 27 17 44 03 902276 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name tesla t4 major 7 minor 5 memoryclockrate ghz 1 59 pcibusid 0000 00 04 0 2019 08 27 17 44 03 902614 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcudart so 10 0 2019 08 27 17 44 03 907500 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcubla so 10 0 2019 08 27 17 44 03 908556 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcufft so 10 0 2019 08 27 17 44 03 911851 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcurand so 10 0 2019 08 27 17 44 03 916549 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcusolver so 10 0 2019 08 27 17 44 03 917606 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcusparse so 10 0 2019 08 27 17 44 03 925044 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcudnn so 7 2019 08 27 17 44 03 925147 I tensorflow stream executor cuda cuda gpu executor cc 1005 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 08 27 17 44 03 925681 I tensorflow stream executor cuda cuda gpu executor cc 1005 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 08 27 17 44 03 926137 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 2019 08 27 17 44 03 926182 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcudart so 10 0 2019 08 27 17 44 03 927269 I tensorflow 
core common runtime gpu gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix 2019 08 27 17 44 03 927290 I tensorflow core common runtime gpu gpu device cc 1187 0 2019 08 27 17 44 03 927300 I tensorflow core common runtime gpu gpu device cc 1200 0 n 2019 08 27 17 44 03 927408 I tensorflow stream executor cuda cuda gpu executor cc 1005 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 08 27 17 44 03 927907 I tensorflow stream executor cuda cuda gpu executor cc 1005 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 08 27 17 44 03 928376 w tensorflow core common runtime gpu gpu bfc allocator cc 40 override allow growth set because the tf force gpu allow growth environment variable be set original config value be 0 2019 08 27 17 44 03 928411 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 14325 mb memory physical gpu device 0 name tesla t4 pci bus i d 0000 00 04 0 compute capability 7 5 2019 08 27 17 44 07 049904 w tensorflow compiler jit mark for compilation pass cc 1412 one time warn not use xla cpu for cluster because envvar tf xla flag tf xla cpu global jit be not set if you want xla cpu either set that envvar or use experimental jit scope to enable xla cpu to confirm that xla be active pass vmodule xla compilation cache 1 as a proper command line flag not via tf xla flag or set the envvar xla flag xla hlo profile i0827 17 44 09 037412 139969908836224 session manager py 500 run local init op i0827 17 44 09 280463 139969908836224 session manager py 502 do run local init op i0827 17 44 18 882892 139969908836224 basic session run hook py 606 save checkpoint for 0 into content gdrive my drive tcc t2 t librispeech output model ckpt 2019 08 27 17 44 39 361151 I tensorflow stream executor platform default 
dso loader cc 42 successfully open dynamic library libcubla so 10 0 |
tensorflowtensorflow | Python/C++ API interpreter example for hybrid model | Bug | I have a hybrid tflite model, i.e. it was converted with the option converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS], so it contains both TFLite ops and normal TensorFlow ops. (I tested a lot, so this is quicker than implementing the missing parts myself.) When trying to load the model with an interpreter, either with the Python or the C++ API, I get errors. Python: RuntimeError: Regular TensorFlow ops are not supported by this interpreter. Make sure you invoke the Flex delegate before inference. Node number 4 (Flex) failed to prepare. C++: INFO: Initialized TensorFlow Lite runtime. ERROR: Regular TensorFlow ops are not supported by this interpreter. Make sure you invoke the Flex delegate before inference. ERROR: Node number 4 (FlexSoftplus) failed to prepare. There don't seem to be docs that cover how to treat this error and load such a hybrid model correctly. If I have missed any docs by any chance, please share the link.
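For context, a minimal sketch of the conversion path that produces such a hybrid model (assumed setup: a toy Keras model using softplus, which has no TFLite builtin kernel, so with SELECT_TF_OPS enabled it is exported as a Flex op):

```python
import tensorflow as tf

# Toy model using softplus -- a hypothetical stand-in for the reporter's
# model; softplus has no TFLite builtin kernel, so it becomes a Flex op.
inp = tf.keras.Input(shape=(4,))
out = tf.keras.layers.Activation(tf.math.softplus)(inp)
model = tf.keras.Model(inp, out)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # regular TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TensorFlow (Flex) kernels
]
tflite_model = converter.convert()  # bytes of the hybrid .tflite model
```

Loading such a model then requires an interpreter build that links the Flex delegate library (e.g. the tensorflow-lite-select-tf-ops dependency on Android, or tensorflowlite_flex for C++); a stock TFLite interpreter fails with exactly the "Regular TensorFlow ops are not supported" error quoted above.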
tensorflowtensorflow | the title is "the shared embeddings module" but the document introduces the shared_embedding_columns module | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example: ... Description of issue (what needs changing): the description in the document shows how to use the shared_embedding_columns module, but the title says "the shared embeddings module", and the shared_embedding_columns module has been removed in TensorFlow 2.0.
tensorflowtensorflow | SparseTensor stopped working on tf.keras when moving from 2.0.0-beta1 to 2.0.0-rc0 | Bug | I just moved from 2.0.0-beta1 to 2.0.0-rc0, and some code for handling sparse categorical variables stopped working for me. Here is some minimal code to reproduce the issue:

import tensorflow as tf
import numpy as np

class SparseSlice(tf.keras.layers.Layer):
    def __init__(self, feature_column):
        super(SparseSlice, self).__init__()
        self.fc = feature_column

    def build(self, input_shape):
        self.kernel = self.add_weight("kernel_{}".format(self.fc.name),
                                      shape=(self.fc.num_buckets,),
                                      dtype=tf.float32)

    def call(self, input):
        ids = self.fc._transform_input_tensor(input)
        return tf.expand_dims(tf.gather(self.kernel, ids.values), axis=1)

batch_size = 10
c = "smth"
col = tf.feature_column.categorical_column_with_hash_bucket(c, 10000, dtype=tf.int64)
example_spec = tf.feature_column.make_parse_example_spec([col])
input = tf.keras.layers.Input(name=c, shape=(None,), batch_size=batch_size, sparse=True, dtype=tf.int64)
sparse_out = SparseSlice(col)(input)
output = tf.keras.layers.Dense(1, activation="sigmoid")(sparse_out)
model = tf.keras.Model([input], output)
model.compile(optimizer="adam", loss="mse")

features = {c: tf.sparse.SparseTensor(indices=[[i, 0] for i in range(batch_size)],
                                      values=np.random.randint(0, 1000, batch_size).tolist(),
                                      dense_shape=(batch_size, 1))}
ys = tf.constant(np.random.rand(batch_size).tolist(), dtype=tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((features, ys)).batch(batch_size)
model.fit(x=dataset, epochs=1)

On 2.0.0-rc0 I am getting the following error:

ValueError: The two structures don't have the same nested structure.
First structure: type=SparseTensorSpec str=SparseTensorSpec(TensorShape([None, 1]), tf.int32)
Second structure: type=SparseTensor str=SparseTensor(indices=Tensor("smth/indices:0", shape=(None, 2), dtype=int64), values=Tensor("smth/values:0", shape=(None,), dtype=int64), dense_shape=Tensor("smth/shape:0", shape=(2,), dtype=int64))
More specifically: Incompatible CompositeTensor TypeSpecs: type=SparseTensorSpec str=SparseTensorSpec(TensorShape([None, 1]), tf.int32) vs. type=SparseTensorSpec str=SparseTensorSpec(TensorShape([None, None]), tf.int64)
Entire first structure: ... Entire second structure: ...

whereas everything runs fine in 2.0.0-beta1.
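As a minimal illustration of the kind of spec-vs-value mismatch in that traceback (a sketch only; the shapes and dtypes mirror the error message, not the reporter's exact internals), a SparseTensorSpec recorded with one dtype will report a SparseTensor of another dtype as incompatible:

```python
import tensorflow as tf

# Sketch: a spec recorded as shape (None, 1) / tf.int32, checked against
# an int64 SparseTensor -- the dtypes differ, so they are incompatible.
spec = tf.SparseTensorSpec(shape=[None, 1], dtype=tf.int32)
st = tf.sparse.SparseTensor(indices=[[0, 0]],
                            values=tf.constant([7], dtype=tf.int64),
                            dense_shape=[1, 1])
compatible = spec.is_compatible_with(st)  # False: the dtypes differ
```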
tensorflowtensorflow | distributed tensorflow | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): example script provided by TensorFlow. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): tensorflow-gpu 1.14.0. Python version: Python 3.6.5. Bazel version (if compiling from source): -. GCC/compiler version (if compiling from source): -. CUDA/cuDNN version: CUDA 10.0. GPU model and memory: Tesla M40, each with 24 GB. You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)" 2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)". Describe the current behavior: Hi, I was using the script resnet_cifar_main.py to test the multi-worker distributed strategy. I added some code to configure the cluster in this file, as follows:

import os
import json
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": [":8080", ":8080", ":8080", ":8080"]},  # host:port entries; host names are missing in this report
    "task": {"type": "worker", "index": 0},
})

Both machines can log in to each other without authentication using SSH. I used

python resnet_cifar_main.py --data_dir=cifar-10-batches-bin --distribution_strategy=multi_worker_mirrored --num_gpus=8

to run it. However, the code stalls. Here is the information: data1 username local lib python3 6 site package tensorflow python framework dtype py 516 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np qint8 np dtype qint8 np int8 1 data1 username local lib python3 6 site package tensorflow python framework dtype py 517 futurewarne
pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np quint8 np dtype quint8 np uint8 1 data1 username local lib python3 6 site package tensorflow python framework dtype py 518 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np qint16 np dtype qint16 np int16 1 data1 username local lib python3 6 site package tensorflow python framework dtype py 519 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np quint16 np dtype quint16 np uint16 1 data1 username local lib python3 6 site package tensorflow python framework dtype py 520 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np qint32 np dtype qint32 np int32 1 data1 username local lib python3 6 site package tensorflow python framework dtype py 525 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np resource np dtype resource np ubyte 1 data1 username local lib python3 6 site package tensorboard compat tensorflow stub dtype py 541 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np qint8 np dtype qint8 np int8 1 data1 username local lib python3 6 site package tensorboard compat tensorflow stub dtype py 542 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np quint8 np dtype quint8 np uint8 1 data1 username local lib python3 6 site package tensorboard compat tensorflow stub dtype py 543 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np qint16 np dtype 
qint16 np int16 1 data1 username local lib python3 6 site package tensorboard compat tensorflow stub dtype py 544 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np quint16 np dtype quint16 np uint16 1 data1 username local lib python3 6 site package tensorboard compat tensorflow stub dtype py 545 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np qint32 np dtype qint32 np int32 1 data1 username local lib python3 6 site package tensorboard compat tensorflow stub dtype py 550 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np resource np dtype resource np ubyte 1 w0826 11 51 16 547584 140208701863744 deprecation wrapper py 119 from data1 username model official util misc keras util py 154 the name tf session be deprecate please use tf compat v1 session instead 2019 08 26 11 51 16 563363 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcuda so 1 2019 08 26 11 51 16 616212 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 04 00 0 2019 08 26 11 51 16 617656 I tensorflow core common runtime gpu gpu device cc 1640 find device 1 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 05 00 0 2019 08 26 11 51 16 619078 I tensorflow core common runtime gpu gpu device cc 1640 find device 2 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 08 00 0 2019 08 26 11 51 16 620503 I tensorflow core common runtime gpu gpu device cc 1640 find device 3 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 09 00 0 2019 08 26 11 51 16 621998 I tensorflow core 
common runtime gpu gpu device cc 1640 find device 4 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 84 00 0 2019 08 26 11 51 16 623482 I tensorflow core common runtime gpu gpu device cc 1640 find device 5 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 85 00 0 2019 08 26 11 51 16 624936 I tensorflow core common runtime gpu gpu device cc 1640 find device 6 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 88 00 0 2019 08 26 11 51 16 626392 I tensorflow core common runtime gpu gpu device cc 1640 find device 7 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 89 00 0 2019 08 26 11 51 16 626574 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcudart so 10 0 2019 08 26 11 51 16 627793 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcubla so 10 0 2019 08 26 11 51 16 629061 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcufft so 10 0 2019 08 26 11 51 16 629329 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcurand so 10 0 2019 08 26 11 51 16 630910 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcusolver so 10 0 2019 08 26 11 51 16 632135 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcusparse so 10 0 2019 08 26 11 51 16 635787 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcudnn so 7 2019 08 26 11 51 16 658649 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 1 2 3 4 5 6 7 2019 08 26 11 51 16 659053 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not 
```
compiled to use: AVX2 FMA
2019-08-26 11:51:18.127150: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f84e4413620 executing computations on platform CUDA. Devices:
2019-08-26 11:51:18.127186: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Tesla M40 24GB, Compute Capability 5.2
2019-08-26 11:51:18.127194: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (1): Tesla M40 24GB, Compute Capability 5.2
2019-08-26 11:51:18.127200: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (2): Tesla M40 24GB, Compute Capability 5.2
2019-08-26 11:51:18.127205: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (3): Tesla M40 24GB, Compute Capability 5.2
2019-08-26 11:51:18.127210: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (4): Tesla M40 24GB, Compute Capability 5.2
2019-08-26 11:51:18.127215: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (5): Tesla M40 24GB, Compute Capability 5.2
2019-08-26 11:51:18.127221: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (6): Tesla M40 24GB, Compute Capability 5.2
2019-08-26 11:51:18.127227: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (7): Tesla M40 24GB, Compute Capability 5.2
2019-08-26 11:51:18.133237: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2400090000 Hz
2019-08-26 11:51:18.135546: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f84e6c0fe90 executing computations on platform Host. Devices:
2019-08-26 11:51:18.135571: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2019-08-26 11:51:18.142054: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: name: Tesla M40 24GB major: 5 minor: 2 memoryClockRate(GHz): 1.112 pciBusID: 0000:04:00.0
2019-08-26 11:51:18.143568: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 1 with properties: name: Tesla M40 24GB major: 5 minor: 2 memoryClockRate(GHz): 1.112 pciBusID: 0000:05:00.0
2019-08-26 11:51:18.145001: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 2 with properties: name: Tesla M40 24GB major: 5 minor: 2 memoryClockRate(GHz): 1.112 pciBusID: 0000:08:00.0
2019-08-26 11:51:18.146484: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 3 with properties: name: Tesla M40 24GB major: 5 minor: 2 memoryClockRate(GHz): 1.112 pciBusID: 0000:09:00.0
2019-08-26 11:51:18.147938: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 4 with properties: name: Tesla M40 24GB major: 5 minor: 2 memoryClockRate(GHz): 1.112 pciBusID: 0000:84:00.0
2019-08-26 11:51:18.149383: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 5 with properties: name: Tesla M40 24GB major: 5 minor: 2 memoryClockRate(GHz): 1.112 pciBusID: 0000:85:00.0
2019-08-26 11:51:18.150933: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 6 with properties: name: Tesla M40 24GB major: 5 minor: 2 memoryClockRate(GHz): 1.112 pciBusID: 0000:88:00.0
2019-08-26 11:51:18.152413: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 7 with properties: name: Tesla M40 24GB major: 5 minor: 2 memoryClockRate(GHz): 1.112 pciBusID: 0000:89:00.0
2019-08-26 11:51:18.152461: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-08-26 11:51:18.152480: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-08-26 11:51:18.152498: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2019-08-26 11:51:18.152515: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2019-08-26 11:51:18.152532: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2019-08-26 11:51:18.152548: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2019-08-26 11:51:18.152566: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-08-26 11:51:18.175302: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0, 1, 2, 3, 4, 5, 6, 7
2019-08-26 11:51:18.175343: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-08-26 11:51:18.188053: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-26 11:51:18.188075: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0 1 2 3 4 5 6 7
2019-08-26 11:51:18.188101: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N Y Y Y N N N N
2019-08-26 11:51:18.188112: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 1:   Y N Y Y N N N N
2019-08-26 11:51:18.188119: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 2:   Y Y N Y N N N N
2019-08-26 11:51:18.188126: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 3:   Y Y Y N N N N N
2019-08-26 11:51:18.188140: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 4:   N N N N N Y Y Y
2019-08-26 11:51:18.188147: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 5:   N N N N Y N Y Y
2019-08-26 11:51:18.188154: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 6:   N N N N Y Y N Y
2019-08-26 11:51:18.188161: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 7:   N N N N Y Y Y N
2019-08-26 11:51:18.203869: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 20545 MB memory) -> physical GPU (device: 0, name: Tesla M40 24GB, pci bus id: 0000:04:00.0, compute capability: 5.2)
2019-08-26 11:51:18.205583: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 20545 MB memory) -> physical GPU (device: 1, name: Tesla M40 24GB, pci bus id: 0000:05:00.0, compute capability: 5.2)
2019-08-26 11:51:18.207222: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 20545 MB memory) -> physical GPU (device: 2, name: Tesla M40 24GB, pci bus id: 0000:08:00.0, compute capability: 5.2)
2019-08-26 11:51:18.208873: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 20545 MB memory) -> physical GPU (device: 3, name: Tesla M40 24GB, pci bus id: 0000:09:00.0, compute capability: 5.2)
2019-08-26 11:51:18.210517: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:4 with 20545 MB memory) -> physical GPU (device: 4, name: Tesla M40 24GB, pci bus id: 0000:84:00.0, compute capability: 5.2)
2019-08-26 11:51:18.212232: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:5 with 20545 MB memory) -> physical GPU (device: 5, name: Tesla M40 24GB, pci bus id: 0000:85:00.0, compute capability: 5.2)
2019-08-26 11:51:18.213911: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:6 with 20545 MB memory) -> physical GPU (device: 6, name: Tesla M40 24GB, pci bus id: 0000:88:00.0, compute capability: 5.2)
2019-08-26 11:51:18.215580: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:7 with 20545 MB memory) -> physical GPU (device: 7, name: Tesla M40 24GB, pci bus id: 0000:89:00.0, compute capability: 5.2)
W0826 11:51:18.218294 140208701863744 deprecation_wrapper.py:119] From /data1/username/models/official/utils/misc/keras_utils.py:155: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.
I0826 11:51:18.352102 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:CPU:0
I0826 11:51:18.352772 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:XLA_GPU:0
I0826 11:51:18.352954 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:XLA_GPU:1
I0826 11:51:18.353148 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:XLA_GPU:2
I0826 11:51:18.353321 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:XLA_GPU:3
I0826 11:51:18.353487 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:XLA_GPU:4
I0826 11:51:18.353651 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:XLA_GPU:5
I0826 11:51:18.353815 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:XLA_GPU:6
I0826 11:51:18.353976 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:XLA_GPU:7
I0826 11:51:18.354135 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:XLA_CPU:0
I0826 11:51:18.354294 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:GPU:0
I0826 11:51:18.354458 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:GPU:1
I0826 11:51:18.354618 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:GPU:2
I0826 11:51:18.354776 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:GPU:3
I0826 11:51:18.354935 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:GPU:4
I0826 11:51:18.355093 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:GPU:5
I0826 11:51:18.355252 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:GPU:6
I0826 11:51:18.355409 140208701863744 cross_device_ops.py:1174] Device is available but not used by distribute strategy: /device:GPU:7
W0826 11:51:18.355474 140208701863744 cross_device_ops.py:1177] Not all devices in `tf.distribute.Strategy` are visible to TensorFlow.
I0826 11:51:18.356196 140208701863744 collective_all_reduce_strategy.py:226] Multi-worker CollectiveAllReduceStrategy with cluster_spec = {'worker': [':8080', ':8080']}, task_type = 'worker', task_id = 0, num_workers = 2, local_devices = ('/job:worker/task:0/device:GPU:0', '/job:worker/task:0/device:GPU:1', '/job:worker/task:0/device:GPU:2', '/job:worker/task:0/device:GPU:3', '/job:worker/task:0/device:GPU:4', '/job:worker/task:0/device:GPU:5', '/job:worker/task:0/device:GPU:6', '/job:worker/task:0/device:GPU:7'), communication = CollectiveCommunication.AUTO
W0826 11:51:19.012401 140208701863744 lazy_loader.py:50] The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see: (for I/O related ops) If you depend on functionality not listed there, please file an issue.
W0826 11:51:19.093180 140208701863744 deprecation.py:323] From /data1/username/.local/lib/python3.6/site-packages/tensorflow/python/ops/image_ops_impl.py:1514: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Deprecated in favor of operator or tf.math.divide.
W0826 11:51:19.094039 140208701863744 deprecation.py:323] From /data1/username/models/official/vision/image_classification/cifar_preprocessing.py:80: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version. Instructions for updating: Create a tf.sparse.SparseTensor and use tf.sparse.to_dense instead.
I0826 11:51:19.311534 140208701863744 cross_device_ops.py:1032] Collective batch_all_reduce: 1 all-reduces, num_workers = 2
I0826 11:51:19.311678 140208701863744 cross_device_ops.py:1053] Collective batch_all_reduce: 1 all-reduces, num_workers = 2
W0826 11:51:50.855791 140208701863744 deprecation.py:506] From /data1/username/.local/lib/python3.6/site-packages/tensorflow/python/keras/initializers.py:143: calling RandomNormal.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version. Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor.
I0826 11:51:50.961928 140208701863744 distribute_coordinator.py:776] Running Distribute Coordinator with mode = independent_worker, cluster_spec = {'worker': [':8080', ':8080']}, task_type = 'worker', task_id = 0, environment = None, rpc_layer = 'grpc'
W0826 11:51:50.962039 140208701863744 distribute_coordinator.py:825] `eval_fn` is not passed in. The `worker_fn` will be used if an "evaluator" task exists in the cluster.
W0826 11:51:50.962105 140208701863744 distribute_coordinator.py:829] `eval_strategy` is not passed in. No distribution strategy will be used for evaluation.
I0826 11:51:51.037485 140208701863744 distribute_coordinator.py:438] Starting standard TensorFlow server, target = 'grpc://:8080', session_config = allow_soft_placement: true graph_options { rewrite_options { scoped_allocator_optimization: ON scoped_allocator_opts { enable_op: "CollectiveReduce" } } } experimental { collective_group_leader: "/job:worker/replica:0/task:0" }
2019-08-26 11:51:51.135823: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:worker/replica:0/task:0/device:GPU:0 with 20545 MB memory) -> physical GPU (device: 0, name: Tesla M40 24GB, pci bus id: 0000:04:00.0, compute capability: 5.2)
2019-08-26 11:51:51.165756: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:250] Initialize GrpcChannelCache for job worker -> {0 -> localhost:8080, 1 -> :8080}
2019-08-26 11:51:51.169005: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:365] Started server with target: grpc://localhost:8080
2019-08-26 11:51:51.169031: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:369] Server already started (target: grpc://localhost:8080)
```
tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 89 00 0 2019 08 26 11 51 51 191660 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcudart so 10 0 2019 08 26 11 51 51 191688 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcubla so 10 0 2019 08 26 11 51 51 191708 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcufft so 10 0 2019 08 26 11 51 51 191726 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcurand so 10 0 2019 08 26 11 51 51 191749 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcusolver so 10 0 2019 08 26 11 51 51 191768 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcusparse so 10 0 2019 08 26 11 51 51 191801 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcudnn so 7 2019 08 26 11 51 51 217200 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 1 2 3 4 5 6 7 2019 08 26 11 51 51 217612 I tensorflow core common runtime gpu gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix 2019 08 26 11 51 51 217632 I tensorflow core common runtime gpu gpu device cc 1187 0 1 2 3 4 5 6 7 2019 08 26 11 51 51 217643 I tensorflow core common runtime gpu gpu device cc 1200 0 n y y y n n n n 2019 08 26 11 51 51 217668 I tensorflow core common runtime gpu gpu device cc 1200 1 y n y y n n n n 2019 08 26 11 51 51 217689 I tensorflow core common runtime gpu gpu device cc 1200 2 y y n y n n n n 2019 08 26 11 51 51 217705 I tensorflow core common runtime gpu gpu device cc 1200 3 y y y n n n n n 2019 08 26 11 51 51 217720 I tensorflow core common runtime gpu gpu device cc 1200 4 n n n n n y y y 2019 08 26 11 51 51 217739 I tensorflow core common 
runtime gpu gpu device cc 1200 5 n n n n y n y y 2019 08 26 11 51 51 217747 I tensorflow core common runtime gpu gpu device cc 1200 6 n n n n y y n y 2019 08 26 11 51 51 217758 I tensorflow core common runtime gpu gpu device cc 1200 7 n n n n y y y n 2019 08 26 11 51 51 235327 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 0 with 20545 mb memory physical gpu device 0 name tesla m40 24 gb pci bus i d 0000 04 00 0 compute capability 5 2 2019 08 26 11 51 51 236996 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 1 with 20545 mb memory physical gpu device 1 name tesla m40 24 gb pci bus i d 0000 05 00 0 compute capability 5 2 2019 08 26 11 51 51 238641 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 2 with 20545 mb memory physical gpu device 2 name tesla m40 24 gb pci bus i d 0000 08 00 0 compute capability 5 2 2019 08 26 11 51 51 240324 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 3 with 20545 mb memory physical gpu device 3 name tesla m40 24 gb pci bus i d 0000 09 00 0 compute capability 5 2 2019 08 26 11 51 51 242048 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 4 with 20545 mb memory physical gpu device 4 name tesla m40 24 gb pci bus i d 0000 84 00 0 compute capability 5 2 2019 08 26 11 51 51 243783 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 5 with 20545 mb memory physical gpu device 5 name tesla m40 24 gb pci bus i d 0000 85 00 0 compute capability 5 2 2019 08 26 11 51 51 245461 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 6 with 20545 mb memory physical gpu device 6 name tesla m40 24 gb pci bus i d 0000 88 00 0 compute capability 5 2 2019 08 26 11 51 51 247184 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device 
gpu 7 with 20545 mb memory physical gpu device 7 name tesla m40 24 gb pci bus i d 0000 89 00 0 compute capability 5 2 i0826 11 51 51 248429 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device cpu 0 i0826 11 51 51 248695 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 0 i0826 11 51 51 248878 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 1 i0826 11 51 51 249054 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 2 i0826 11 51 51 249228 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 3 i0826 11 51 51 249400 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 4 i0826 11 51 51 249571 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 5 i0826 11 51 51 249751 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 6 i0826 11 51 51 249931 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 7 i0826 11 51 51 250112 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla cpu 0 i0826 11 51 51 250284 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 0 i0826 11 51 51 250463 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 1 i0826 11 51 51 250636 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 2 i0826 11 51 51 250803 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 3 i0826 11 51 51 250995 
140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 4 i0826 11 51 51 251197 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 5 i0826 11 51 51 251389 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 6 i0826 11 51 51 251568 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 7 w0826 11 51 51 251645 140208701863744 cross device op py 1177 not all device in tf distribute strategy be visible to tensorflow i0826 11 51 51 252460 140208701863744 collective all reduce strategy py 226 multi worker collectiveallreducestrategy with cluster spec worker 8080 8080 task type worker task i d 0 num worker 2 local device job worker task 0 device gpu 0 job worker task 0 device gpu 1 job worker task 0 device gpu 2 job worker task 0 device gpu 3 job worker task 0 device gpu 4 job worker task 0 device gpu 5 job worker task 0 device gpu 6 job worker task 0 device gpu 7 communication collectivecommunication auto i0826 11 51 55 961434 140208701863744 distribute coordinator py 776 run distribute coordinator with mode independent worker cluster spec worker 8080 8080 task type worker task i d 0 environment none rpc layer grpc w0826 11 51 55 961624 140208701863744 distribute coordinator py 825 eval fn be not pass in the worker fn will be use if an evaluator task exist in the cluster w0826 11 51 55 961693 140208701863744 distribute coordinator py 829 eval strategy be not pass in no distribution strategy will be use for evaluation 2019 08 26 11 51 55 967643 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 04 00 0 2019 08 26 11 51 55 969124 I tensorflow core common runtime gpu gpu device cc 1640 find device 1 with property name tesla m40 24 gb major 5 minor 2 
memoryclockrate ghz 1 112 pcibusid 0000 05 00 0 2019 08 26 11 51 55 970583 I tensorflow core common runtime gpu gpu device cc 1640 find device 2 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 08 00 0 2019 08 26 11 51 55 972132 I tensorflow core common runtime gpu gpu device cc 1640 find device 3 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 09 00 0 2019 08 26 11 51 55 973567 I tensorflow core common runtime gpu gpu device cc 1640 find device 4 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 84 00 0 2019 08 26 11 51 55 974998 I tensorflow core common runtime gpu gpu device cc 1640 find device 5 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 85 00 0 2019 08 26 11 51 55 976440 I tensorflow core common runtime gpu gpu device cc 1640 find device 6 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 88 00 0 2019 08 26 11 51 55 977881 I tensorflow core common runtime gpu gpu device cc 1640 find device 7 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 89 00 0 2019 08 26 11 51 55 977933 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcudart so 10 0 2019 08 26 11 51 55 977954 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcubla so 10 0 2019 08 26 11 51 55 977972 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcufft so 10 0 2019 08 26 11 51 55 978010 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcurand so 10 0 2019 08 26 11 51 55 978030 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcusolver so 10 0 2019 08 26 11 51 55 978050 I tensorflow stream executor platform 
default dso loader cc 42 successfully open dynamic library libcusparse so 10 0 2019 08 26 11 51 55 978068 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcudnn so 7 2019 08 26 11 51 56 003848 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 1 2 3 4 5 6 7 2019 08 26 11 51 56 004245 I tensorflow core common runtime gpu gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix 2019 08 26 11 51 56 004263 I tensorflow core common runtime gpu gpu device cc 1187 0 1 2 3 4 5 6 7 2019 08 26 11 51 56 004274 I tensorflow core common runtime gpu gpu device cc 1200 0 n y y y n n n n 2019 08 26 11 51 56 004295 I tensorflow core common runtime gpu gpu device cc 1200 1 y n y y n n n n 2019 08 26 11 51 56 004313 I tensorflow core common runtime gpu gpu device cc 1200 2 y y n y n n n n 2019 08 26 11 51 56 004322 I tensorflow core common runtime gpu gpu device cc 1200 3 y y y n n n n n 2019 08 26 11 51 56 004337 I tensorflow core common runtime gpu gpu device cc 1200 4 n n n n n y y y 2019 08 26 11 51 56 004346 I tensorflow core common runtime gpu gpu device cc 1200 5 n n n n y n y y 2019 08 26 11 51 56 004361 I tensorflow core common runtime gpu gpu device cc 1200 6 n n n n y y n y 2019 08 26 11 51 56 004372 I tensorflow core common runtime gpu gpu device cc 1200 7 n n n n y y y n 2019 08 26 11 51 56 022906 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 0 with 20545 mb memory physical gpu device 0 name tesla m40 24 gb pci bus i d 0000 04 00 0 compute capability 5 2 2019 08 26 11 51 56 024706 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 1 with 20545 mb memory physical gpu device 1 name tesla m40 24 gb pci bus i d 0000 05 00 0 compute capability 5 2 2019 08 26 11 51 56 027009 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 2 with 20545 mb memory 
physical gpu device 2 name tesla m40 24 gb pci bus i d 0000 08 00 0 compute capability 5 2 2019 08 26 11 51 56 028847 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 3 with 20545 mb memory physical gpu device 3 name tesla m40 24 gb pci bus i d 0000 09 00 0 compute capability 5 2 2019 08 26 11 51 56 030286 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 4 with 20545 mb memory physical gpu device 4 name tesla m40 24 gb pci bus i d 0000 84 00 0 compute capability 5 2 2019 08 26 11 51 56 031725 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 5 with 20545 mb memory physical gpu device 5 name tesla m40 24 gb pci bus i d 0000 85 00 0 compute capability 5 2 2019 08 26 11 51 56 033410 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 6 with 20545 mb memory physical gpu device 6 name tesla m40 24 gb pci bus i d 0000 88 00 0 compute capability 5 2 2019 08 26 11 51 56 035645 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 7 with 20545 mb memory physical gpu device 7 name tesla m40 24 gb pci bus i d 0000 89 00 0 compute capability 5 2 i0826 11 51 56 036307 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device cpu 0 i0826 11 51 56 036504 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 0 i0826 11 51 56 036674 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 1 i0826 11 51 56 036839 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 2 i0826 11 51 56 036999 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 3 i0826 11 51 56 037156 140208701863744 cross device op py 1174 device be 
available but not use by distribute strategy device xla gpu 4 i0826 11 51 56 037314 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 5 i0826 11 51 56 037470 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 6 i0826 11 51 56 037625 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 7 i0826 11 51 56 037780 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla cpu 0 i0826 11 51 56 037934 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 0 i0826 11 51 56 038089 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 1 i0826 11 51 56 038243 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 2 i0826 11 51 56 038396 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 3 i0826 11 51 56 038550 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 4 i0826 11 51 56 038703 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 5 i0826 11 51 56 038856 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 6 i0826 11 51 56 039009 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 7 w0826 11 51 56 039075 140208701863744 cross device op py 1177 not all device in tf distribute strategy be visible to tensorflow i0826 11 51 56 039798 140208701863744 collective all reduce strategy py 226 multi worker collectiveallreducestrategy with cluster spec worker 8080 8080 task type worker task i d 0 num worker 2 local device job worker task 0 device 
gpu 0 job worker task 0 device gpu 1 job worker task 0 device gpu 2 job worker task 0 device gpu 3 job worker task 0 device gpu 4 job worker task 0 device gpu 5 job worker task 0 device gpu 6 job worker task 0 device gpu 7 communication collectivecommunication auto 2019 08 26 11 51 56 045441 I tensorflow core common runtime gpu gpu device cc 1640 find device 0 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 04 00 0 2019 08 26 11 51 56 046901 I tensorflow core common runtime gpu gpu device cc 1640 find device 1 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 05 00 0 2019 08 26 11 51 56 048481 I tensorflow core common runtime gpu gpu device cc 1640 find device 2 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 08 00 0 2019 08 26 11 51 56 050782 I tensorflow core common runtime gpu gpu device cc 1640 find device 3 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 09 00 0 2019 08 26 11 51 56 053113 I tensorflow core common runtime gpu gpu device cc 1640 find device 4 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 84 00 0 2019 08 26 11 51 56 054669 I tensorflow core common runtime gpu gpu device cc 1640 find device 5 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 85 00 0 2019 08 26 11 51 56 056100 I tensorflow core common runtime gpu gpu device cc 1640 find device 6 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 88 00 0 2019 08 26 11 51 56 057525 I tensorflow core common runtime gpu gpu device cc 1640 find device 7 with property name tesla m40 24 gb major 5 minor 2 memoryclockrate ghz 1 112 pcibusid 0000 89 00 0 2019 08 26 11 51 56 057556 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcudart so 10 0 2019 08 26 11 51 56 
057575 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcubla so 10 0 2019 08 26 11 51 56 057599 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcufft so 10 0 2019 08 26 11 51 56 057619 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcurand so 10 0 2019 08 26 11 51 56 057639 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcusolver so 10 0 2019 08 26 11 51 56 057658 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcusparse so 10 0 2019 08 26 11 51 56 057678 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcudnn so 7 2019 08 26 11 51 56 114764 I tensorflow core common runtime gpu gpu device cc 1763 add visible gpu device 0 1 2 3 4 5 6 7 2019 08 26 11 51 56 125872 I tensorflow core common runtime gpu gpu device cc 1181 device interconnect streamexecutor with strength 1 edge matrix 2019 08 26 11 51 56 125889 I tensorflow core common runtime gpu gpu device cc 1187 0 1 2 3 4 5 6 7 2019 08 26 11 51 56 125916 I tensorflow core common runtime gpu gpu device cc 1200 0 n y y y n n n n 2019 08 26 11 51 56 125924 I tensorflow core common runtime gpu gpu device cc 1200 1 y n y y n n n n 2019 08 26 11 51 56 125941 I tensorflow core common runtime gpu gpu device cc 1200 2 y y n y n n n n 2019 08 26 11 51 56 125951 I tensorflow core common runtime gpu gpu device cc 1200 3 y y y n n n n n 2019 08 26 11 51 56 125967 I tensorflow core common runtime gpu gpu device cc 1200 4 n n n n n y y y 2019 08 26 11 51 56 125981 I tensorflow core common runtime gpu gpu device cc 1200 5 n n n n y n y y 2019 08 26 11 51 56 125988 I tensorflow core common runtime gpu gpu device cc 1200 6 n n n n y y n y 2019 08 26 11 51 56 125996 I tensorflow core common runtime gpu gpu device cc 1200 7 n n n n y 
y y n 2019 08 26 11 51 56 160059 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 0 with 20545 mb memory physical gpu device 0 name tesla m40 24 gb pci bus i d 0000 04 00 0 compute capability 5 2 2019 08 26 11 51 56 161525 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 1 with 20545 mb memory physical gpu device 1 name tesla m40 24 gb pci bus i d 0000 05 00 0 compute capability 5 2 2019 08 26 11 51 56 163024 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 2 with 20545 mb memory physical gpu device 2 name tesla m40 24 gb pci bus i d 0000 08 00 0 compute capability 5 2 2019 08 26 11 51 56 164528 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 3 with 20545 mb memory physical gpu device 3 name tesla m40 24 gb pci bus i d 0000 09 00 0 compute capability 5 2 2019 08 26 11 51 56 165979 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 4 with 20545 mb memory physical gpu device 4 name tesla m40 24 gb pci bus i d 0000 84 00 0 compute capability 5 2 2019 08 26 11 51 56 167440 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 5 with 20545 mb memory physical gpu device 5 name tesla m40 24 gb pci bus i d 0000 85 00 0 compute capability 5 2 2019 08 26 11 51 56 168887 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 6 with 20545 mb memory physical gpu device 6 name tesla m40 24 gb pci bus i d 0000 88 00 0 compute capability 5 2 2019 08 26 11 51 56 170380 I tensorflow core common runtime gpu gpu device cc 1326 create tensorflow device device gpu 7 with 20545 mb memory physical gpu device 7 name tesla m40 24 gb pci bus i d 0000 89 00 0 compute capability 5 2 i0826 11 51 56 170995 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device cpu 0 
i0826 11 51 56 171188 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 0 i0826 11 51 56 171359 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 1 i0826 11 51 56 171522 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 2 i0826 11 51 56 171682 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 3 i0826 11 51 56 171840 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 4 i0826 11 51 56 171997 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 5 i0826 11 51 56 172154 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 6 i0826 11 51 56 172314 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla gpu 7 i0826 11 51 56 172470 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device xla cpu 0 i0826 11 51 56 172623 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 0 i0826 11 51 56 172777 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 1 i0826 11 51 56 172931 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 2 i0826 11 51 56 173083 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 3 i0826 11 51 56 173235 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 4 i0826 11 51 56 173387 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 5 i0826 11 51 
56 173537 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 6 i0826 11 51 56 173689 140208701863744 cross device op py 1174 device be available but not use by distribute strategy device gpu 7 w0826 11 51 56 173753 140208701863744 cross device op py 1177 not all device in tf distribute strategy be visible to tensorflow i0826 11 51 56 174468 140208701863744 collective all reduce strategy py 226 multi worker collectiveallreducestrategy with cluster spec worker 8080 8080 task type worker task i d 0 num worker 2 local device job worker task 0 device gpu 0 job worker task 0 device gpu 1 job worker task 0 device gpu 2 job worker task 0 device gpu 3 job worker task 0 device gpu 4 job worker task 0 device gpu 5 job worker task 0 device gpu 6 job worker task 0 device gpu 7 communication collectivecommunication auto w0826 11 51 56 174733 140208701863744 distribute training util py 1082 modelcheckpoint callback be not provide worker will need to restart training if any fail nvidia smi show that only a tiny part of gpu memory be use in node 0 and the other machine I e the worker node 1 do not run any program the worker node have the same python and tensorflow version here be my question 1 should I set up some configuration in worker node 1 what and how 2 be there anything wrong use the resnet cifar main py for distribute training 3 I find the port on node 0 be listen command pid user fd type device size off node name resnet ci 51888 username 93u ipv4 8730954 0t0 tcp webcache listen however the port on node 1 be not open describe the expect behavior code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach |
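Regarding question 1: multi-worker strategies in TensorFlow commonly read the cluster layout from the TF_CONFIG environment variable, which must be set on every node with the same cluster spec but a different task index. A minimal sketch of that configuration (the hostnames "node0"/"node1" are placeholders, not the reporter's real addresses):

```python
import json
import os

# Same cluster spec on every node; only the task index differs.
# "node0"/"node1" are placeholder hostnames for the two workers.
cluster = {"worker": ["node0:8080", "node1:8080"]}

def make_tf_config(task_index):
    """Serialize the TF_CONFIG value for the worker with the given index."""
    return json.dumps({"cluster": cluster,
                       "task": {"type": "worker", "index": task_index}})

# On node 1 this would be exported before launching the training script;
# node 0 would use make_tf_config(0).
os.environ["TF_CONFIG"] = make_tf_config(1)
```

With this set on both machines, each worker opens its own port 8080, which would explain why only node 0 (task 0) is listening here.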
tensorflowtensorflow | Crash course issue | Bug | 31958. URL(s) with the issue: Description of issue (what needs changing): At 1:50 it prompts me to do the gradient descent practice; when I click the button, it redirects to the wrong page. Correct link:
tensorflowtensorflow | Broken links in XLA page | Bug | All the links on the following page forward to a 404 page.
tensorflowtensorflow | TensorFlow 2.0 tf.function internal error | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Colab, Ubuntu, Windows. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.0-beta, 2.0-rc, 2.0-nightly (v1.12.1-9694-g006e2933, 2.0.0-dev20190825). Python version: 3.6. Bazel version (if compiling from source): GCC/compiler version (if compiling from source): CUDA/cuDNN version: 10 (also without CUDA). GPU model and memory:
Describe the current behavior: Decorating a training loop that consumes a tf.data.Dataset with @tf.function causes an internal error in TensorFlow: an object is returned to Python with an error set, and I am unable to understand the actual origin of the error. Without @tf.function, the code works fine.
Describe the expected behavior: The model gradients should be computed correctly regardless of whether the computation is inside a @tf.function trace or not.
Code to reproduce the issue: Code is provided in the Colab notebook available here.
Other info / logs: While working with a rather complex module that operates on irregular data (graphs), using @tf.function worsens performance in TF 2.0 due to bug #29075. While following the workaround described in that issue, I stumbled upon this one. Because of these issues, TF 2.0 is not a good fit for DL on irregularly shaped data. Relevant traceback:
137, in train_step: gradients = tape.gradient(loss, model.trainable_variables)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/backprop.py:1015, in gradient: unconnected_gradients=unconnected_gradients
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/imperative_grad.py:76, in imperative_grad: compat.as_str(unconnected_gradients.value)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/backprop.py:599, in _aggregate_grads: if len(gradients) == 1:
SystemError: returned a result with an error set
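The shape of the failing pattern — a @tf.function-wrapped step computing gradients with a GradientTape — can be sketched as below. Note this is a stand-in (a Dense layer on regular tensors, not the reporter's graph-data model), so on its own it runs without triggering the reported SystemError:

```python
import tensorflow as tf

# Stand-in model; the reporter's module operates on irregular graph data.
model = tf.keras.layers.Dense(1)

@tf.function  # removing this decorator is what made the reporter's code work
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    # The reported SystemError surfaced inside this gradient call.
    return tape.gradient(loss, model.trainable_variables)

grads = train_step(tf.ones([4, 3]), tf.ones([4, 1]))  # kernel and bias gradients
```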
tensorflowtensorflow | eager mode: accessing the content of a scalar in a tf.function | Bug | please make sure that this is a bug; as per our github policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on github. system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes; os platform and distribution: any; mobile device: no; tensorflow installed from source or binary: binary; tensorflow version: 1.14.0; python version: 3.6.9; bazel version (if compiled from source): -; gcc/compiler version (if compiled from source): -; cuda/cudnn version: -; gpu model and memory: -. describe the current behavior: I am trying to download some images in parallel. I am using a tf.data Dataset with the image URLs as content. I want to store them in GCS, so I am using functions from the tf.io.gfile package inside a tf.function; this function will be called through tf.data.Dataset.map. When the different tf.io.gfile functions are called inside the tf.function (like makedirs), they raise an error indicating that they require a binary or unicode string as input: TypeError: Expected binary or unicode string. If I try to use .numpy() it is not available, as expected. The result is that I cannot download the images into GCS. describe the expected behavior: as part of a tensorflow package, I would expect the tf.io.gfile functions to allow the use of scalar string tensors, or I would expect tensorflow to provide a solution similar to the .numpy() function inside tf.function for these cases. If not, at least there should be a warning in the documentation that these functions cannot be used inside a tf.function. code to reproduce the issue:
    @tf.function
    def test_string(value):
        return tf.io.gfile.exists(value)

    test_string(tf.constant("test"))
The result in this case is: TypeError: Expected binary or unicode string. other info/logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached.
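One workaround (a sketch, not an official recommendation) is to wrap the file-system call in tf.py_function, inside which arguments are eager tensors, so .numpy() is available and the decoded Python string can be passed to tf.io.gfile. The names `_exists_py` and `check_exists` below are illustrative, not TF APIs.

```python
import tensorflow as tf

def _exists_py(path_tensor):
    # Inside tf.py_function the argument is an eager tensor,
    # so .numpy() works and yields a plain bytes object.
    path = path_tensor.numpy().decode("utf-8")
    return tf.io.gfile.exists(path)

@tf.function
def check_exists(path):
    # tf.py_function escapes graph mode for the gfile call only.
    return tf.py_function(_exists_py, inp=[path], Tout=tf.bool)
```

The same pattern can be used inside a Dataset.map pipeline, at the cost of running the wrapped code in Python rather than as graph ops.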
tensorflowtensorflow | LossScaleOptimizer does not work | Bug | system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): no; os platform and distribution: Windows 10 x64; mobile device: n/a; tensorflow installed from source or binary: binary; tensorflow version: 1.14.0; python version: 3.7.3; bazel version (if compiled from source): n/a; gcc/compiler version (if compiled from source): n/a; cuda/cudnn version: 10.0; gpu model and memory: GTX 1080 Ti. describe the current behavior: I was trying to run the sample code and got the following error when no gradient can be computed for some variable:
    ValueError Traceback (most recent call last)
    C:\Users\Admin\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
        526     as_ref=input_arg.is_ref,
        527     preferred_dtype=default_dtype)
        528 except TypeError as err:
    C:\Users\Admin\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx, accept_symbolic_tensors, accept_composite_tensors)
        1223 if ret is None:
        1224     ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
        1225
    C:\Users\Admin\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
        304     _ = as_ref
        305     return constant(v, dtype=dtype, name=name)
        306
    C:\Users\Admin\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in constant(value, dtype, shape, name)
        245     return _constant_impl(value, dtype, shape, name, verify_shape=False,
        246                           allow_broadcast=True)
        247
    C:\Users\Admin\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
        283     value, dtype=dtype, shape=shape, verify_shape=verify_shape,
        284     allow_broadcast=allow_broadcast))
        285     dtype_value = attr_value_pb2.AttrValue(type=tensor_value.tensor.dtype)
    C:\Users\Admin\Anaconda3\lib\site-packages\tensorflow\python\framework\tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape, allow_broadcast)
        453     if values is None:
        454         raise ValueError("None values not supported.")
        455     # if dtype is provided, forces numpy array to be the type
    ValueError: None values not supported.

    During handling of the above exception, another exception occurred:

    ValueError Traceback (most recent call last)
    [the same conversion frames repeat, entered from op_def_library.py _apply_op_helper lines 540-542, again ending in]
    ValueError: None values not supported.

    During handling of the above exception, another exception occurred:

    ValueError Traceback (most recent call last)
    <ipython-input> in <module>
        13
        14 # Call minimize() on the loss scale optimizer.
        15 train_op = loss_scale_optimizer.minimize(loss)
    C:\Users\Admin\Anaconda3\lib\site-packages\tensorflow\python\training\optimizer.py in minimize(self, loss, global_step, var_list, gate_gradients, aggregation_method, colocate_gradients_with_ops, name, grad_loss)
        411
        412     return self.apply_gradients(grads_and_vars, global_step=global_step,
        413                                 name=name)
        414
        415 def compute_gradients(self, loss, var_list=None, ...)
    C:\Users\Admin\Anaconda3\lib\site-packages\tensorflow\contrib\mixed_precision\python\loss_scale_optimizer.py in apply_gradients(self, grads_and_vars, global_step, name)
        148     is_finite_grad = []
        149     for g in grads:
        150         is_finite_grad.append(math_ops.reduce_all(gen_math_ops.is_finite(g)))
        151     is_overall_finite = math_ops.reduce_all(is_finite_grad)
        152
    C:\Users\Admin\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_math_ops.py in is_finite(x, name)
        4919 try:
        4920     _, _, _op = _op_def_lib._apply_op_helper(
        4921         "IsFinite", x=x, name=name)
        4922 except (TypeError, ValueError):
        4923     result = _dispatch.dispatch(
    C:\Users\Admin\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
        543     raise ValueError(
        544         "Tried to convert '%s' to a tensor and failed. Error: %s" %
        545         (input_name, err))
        546     prefix = ("Input '%s' of '%s' Op has type %s that does not match" %
        547               (input_name, op_type_name, observed))
    ValueError: Tried to convert 'x' to a tensor and failed. Error: None values not supported.
describe the expected behavior: no error occurs for some other optimizers, such as AdamOptimizer and MovingAverageOptimizer, even if no gradient can be computed for some variables. code to reproduce the issue:
    import tensorflow as tf

    a1 = tf.Variable(1., name="a1")
    a2 = tf.Variable(2., name="a2")
    model_params = [var for var in tf.global_variables() if "a" in var.name]
    loss = a1 ** 2
    opt = tf.train.AdamOptimizer(learning_rate=1., beta1=0., beta2=0.9)
    # Choose a loss scale manager which decides how to pick the right loss scale
    # throughout the training process.
    loss_scale_manager = tf.contrib.mixed_precision.FixedLossScaleManager(5000)
    # Wrap the original optimizer in a LossScaleOptimizer.
    loss_scale_optimizer = tf.contrib.mixed_precision.LossScaleOptimizer(opt, loss_scale_manager)
    # Call minimize() on the loss scale optimizer.
    train_op = loss_scale_optimizer.minimize(loss, var_list=model_params)
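A common workaround for optimizers that choke on None gradients (as LossScaleOptimizer does here, while AdamOptimizer tolerates them) is to drop (gradient, variable) pairs whose gradient is None before calling apply_gradients. The helper below is a plain-Python sketch of that filtering step, with stand-in values instead of real tensors.

```python
def filter_none_grads(grads_and_vars):
    """Drop pairs whose gradient is None, as produced for variables
    that do not influence the loss (like a2 in the report above)."""
    return [(g, v) for g, v in grads_and_vars if g is not None]

# Plain-Python illustration with stand-in values:
pairs = [(0.5, "a1"), (None, "a2")]
filtered = filter_none_grads(pairs)  # -> [(0.5, "a1")]
```

In the TF1 optimizer API this would sit between compute_gradients() and apply_gradients() instead of calling minimize() directly.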
tensorflowtensorflow | tensorflow lite GPU delegate on android with gpu-nightly fails to run on GPU; it seems to run on CPU | Bug | system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): no for the demo app and the classification app; yes for the object detection app, but I only added the GPU delegate and used a self-converted SSD float model; mobile device: Google Pixel 2 XL, android 8.0.0 and android 9; tensorflow lite version: nightly 0.0.0, gpu-nightly 0.0.0; os platform and distribution: Windows 10 64-bit; tensorflow installed from: binary; tensorflow version: 1.13.1; python version: 3.6.8; cuda/cudnn version: 10.0 / 7.6.0; gpu model and memory: GeForce GTX 970, AMD64. describe the current behavior: I cannot get any app to run on the GPU using gpu-nightly; all the apps seem to run on the CPU, even though the GPU delegate is used by the interpreter and no error occurs. describe the expected behavior: the apps should run on the GPU, and a model running on the GPU should be faster than the same model running on the CPU. code to reproduce the issue: I tried the demo app and the classification app without changes to the code. For the object detection app I added the GPU delegate, changed isQuantized to false, and used a self-converted SSD MobileNet model. I wrote the delegate as described in the android java docs, and I converted the model this way: issuecomment-517165233. demo app / classification app / object detection app. other info/logs: performance results testing the apps on a Google Pixel 2 XL with android 8.0.0 and android 9, gpu-nightly. Current demo app: quantized model CPU 75ms; float model CPU 135ms; float model GPU 135ms. Object detection app (self-converted float model): GPU 100ms. CPU-only object detection app (I only changed targetSdkVersion to 28 to get the original code working): quantized model CPU 45-50ms. Thanks in advance.
tensorflowtensorflow | tf 2.0.0-rc0: running model.evaluate crashes the notebook (chrome) | Bug | system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): no; os platform and distribution: mac; tensorflow installed from source or binary: pip; tensorflow version: tensorflow 2.0.0-rc0; python version: 3.6.5; print(tf.version.GIT_VERSION, tf.version.VERSION): v2.0.0-beta1-5101-gc75bb66a99 2.0.0-rc0. describe the current behavior: when I run results = model.evaluate(x_train, y_train), my jupyter notebook crashes. describe the expected behavior: I should be able to run another cell in the jupyter notebook, but I can't. code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem): I used the sample code in the tensorflow 2.0.0-rc "classify images" tutorial. other info/logs:
tensorflowtensorflow | tf.estimator: training with tf.estimator + tf.keras and with tf.keras only yields inconsistent results | Bug | system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes; os platform and distribution: macOS 10.14.6 (18G87); mobile device: n/a; tensorflow installed from source or binary: binary; tensorflow version: v1.14.0-rc1-22-gaf24dc91b5 1.14.0; python version: 3.7.3; bazel version (if compiled from source): n/a; gcc/compiler version (if compiled from source): n/a; cuda/cudnn version: n/a; gpu model and memory: none. describe the current behavior: following the official guide to (1) create an estimator from a keras model and (2) use a custom model_fn, I am training PSENet for text detection. Even though training metrics improve to an almost perfect level and the loss remains stable and low, after a while the inference I get is gibberish. PSENet works as follows: run the image through the feature pyramid network to obtain segmentation maps, then apply a custom algorithm to extract bboxes. The image below shows a segmentation map, with yellow regions corresponding to the predicted text and purple to everything else. After 186 attempts to make it work on the AI Platform and 400 GSoC credits, I realized that the problem is deeper than the implementation details and decided to overfit on a single sample using the tf.keras implementation of FPN from segmentation_models by qubvel. I tweaked his implementation for PSENet and ran into the same problem with tf.estimator, so it seems that tf.estimator is indeed the culprit. For this sample image (sample image) and one of the labels (sample label), after 300 epochs with the same loss and optimizer, the predicted labels are shown: with a pure tf.keras implementation; with the tf.keras model converted using tf.keras.estimator.model_to_estimator; and with the tf.keras model used in the tf.estimator model function. I have tried the tf.keras-in-model_fn setup on 10,000 images for 30-50 epochs, and the results are much worse than this, which is itself not perfect. describe the expected behavior: (1) tensorflow documentation should state clearly the preferred way to use tf.keras models inside tf.estimator, given that tf.estimator is built on tf.keras layers and thus the expectation that interop is seamless; (2) the discrepancy between training with tf.keras and with tf.estimator + tf.keras should be minimal or non-existent. code to reproduce the issue: a minimal failing example with the code and data is here. other info/logs: is this a related issue? Similar findings are documented here. I can also confirm that training with tf.estimator takes longer than with pure tf.keras, in alignment with other reports of this behavior. The model itself is sensible, and the following example shows that it does generalize well for this input (not in the original data): image 2019-08-23 17:13:30. The output from another implementation written in tf-slim is as follows: image 2019-08-23 17:14:20. The output from the pytorch implementation by the original authors is this: image 2019-08-23 17:16:54.
tensorflowtensorflow | tf1.x TPU: how to perform preprocessing steps for text classification for training on TPU | Bug | I have been trying for 5 days to train a simple text classification model on TPUs, but because of the lack of documentation it is very difficult. I just cannot perform tokenization/encoding/padding without tf.py_func. Please add some examples for doing these steps on TPU devices so that dumb people like me can understand tf. I will be greatly thankful to everyone at google. I was following this tutorial: split the dataset into text and train batches.
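Since tf.py_func is not available on TPU, one option (a sketch, not an official recipe) is to do tokenization, integer encoding, and padding in plain Python before building the input pipeline, so the dataset only carries fixed-shape integer tensors. The vocabulary and reserved ids below are made up for illustration.

```python
def encode_and_pad(texts, vocab, max_len, pad_id=0, unk_id=1):
    """Tokenize on whitespace, map tokens to ids via `vocab`
    ({token: id}), then pad/truncate each sequence to max_len.
    Ids 0 and 1 are assumed reserved for padding and unknown tokens."""
    encoded = []
    for text in texts:
        ids = [vocab.get(tok, unk_id) for tok in text.lower().split()]
        ids = ids[:max_len] + [pad_id] * max(0, max_len - len(ids))
        encoded.append(ids)
    return encoded

# Illustration with a made-up vocabulary:
vocab = {"good": 2, "movie": 3, "bad": 4}
batch = encode_and_pad(["Good movie", "very bad movie indeed"], vocab, max_len=3)
# batch -> [[2, 3, 0], [1, 4, 3]]
```

The resulting fixed-shape integer lists can then be fed to tf.data.Dataset.from_tensor_slices without any per-element Python ops in the graph.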
tensorflowtensorflow | intermittent crash with CUDA_ERROR_LAUNCH_FAILED | Bug | system information: os platform and distribution: Linux-based ML Engine, runtime version 1.12, which means tensorflow 1.12 with python 3.6; cuda/cudnn version: CUDA 10.1 with cudnn libcudnn.so.7.4.2; gpu model and memory: Tesla K80; exact command to reproduce: run custom code on an object segmentation task. The problem: training crashes after reaching thousands of steps (a few hours). The crash is intermittent: the same code and dataset, when rerun, are fine on another occasion. This problem happens quite frequently (20x in my testing). One observation is that this did not happen with the earlier tensorflow 1.8; my current workaround is to downgrade to tf 1.8. This could be a bug in the library or the CUDA driver; the same problem happens on an offline workstation. The logs:
    Instructions for updating: Use standard file APIs to delete files with this prefix.
    INFO:tensorflow:Recording summary at step 1239.
    INFO:tensorflow:global step 1240: loss = 0.2247 (1.349 sec/step)
    INFO:tensorflow:global step 1260: loss = 0.2513 (1.189 sec/step)
    INFO:tensorflow:global step 1280: loss = 0.3016 (1.181 sec/step)
    INFO:tensorflow:Recording summary at step 1291.
    INFO:tensorflow:global step 1300: loss = 0.2227 (1.151 sec/step)
    INFO:tensorflow:global step 1320: loss = 0.2227 (1.139 sec/step)
    INFO:tensorflow:global step 1340: loss = 0.2224 (1.273 sec/step)
    INFO:tensorflow:Recording summary at step 1342.
    INFO:tensorflow:global step 1360: loss = 0.2220 (1.186 sec/step)
    INFO:tensorflow:global step 1380: loss = 0.2297 (1.157 sec/step)
    INFO:tensorflow:Recording summary at step 1393.
    INFO:tensorflow:global step 1400: loss = 0.2217 (1.160 sec/step)
    INFO:tensorflow:global step 1420: loss = 0.2213 (1.169 sec/step)
    INFO:tensorflow:global step 1440: loss = 0.2209 (1.134 sec/step)
    INFO:tensorflow:Recording summary at step 1443.
    2019-08-01 01:43:15.655519: E tensorflow/stream_executor/cuda/cuda_driver.cc:1131] failed to enqueue async memcpy from host to device: CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure; GPU dst: 0x71133f800; host src: 0x7ff986bfcc80; size: 131072=0x20000
    2019-08-01 01:43:15.655586: E tensorflow/stream_executor/cuda/cuda_driver.cc:1000] could not wait stream on event: CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure
    2019-08-01 01:43:15.655630: E tensorflow/stream_executor/cuda/cuda_driver.cc:1000] could not wait stream on event: CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure
    2019-08-01 01:43:15.655630: E tensorflow/stream_executor/cuda/cuda_driver.cc:1000] could not wait stream on event: CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure
    2019-08-01 01:43:15.655644: E tensorflow/stream_executor/cuda/cuda_event.cc:48] Error polling for event status: failed to query event: CUDA_ERROR_LAUNCH_FAILED: unspecified launch failure
    2019-08-01 01:43:15.655731: I tensorflow/stream_executor/stream.cc:5027] [stream=0x5c23080,impl=0x5c23120] did not memcpy host-to-device; source: 0x7ff97e0fd7c0
    2019-08-01 01:43:15.655731: I tensorflow/stream_executor/stream.cc:5027] [stream=0x5c23080,impl=0x5c23120] did not memcpy host-to-device; source: 0x7ff97e6241c0
    2019-08-01 01:43:15.655717: I tensorflow/stream_executor/stream.cc:5027] [stream=0x5c23080,impl=0x5c23120] did not memcpy host-to-device; source: 0x7ff97e62e9c0
    2019-08-01 01:43:15.655755: F tensorflow/core/common_runtime/gpu/gpu_event_mgr.cc:274] Unexpected Event status: 1
additional information: the ML engine support team kindly provided machine debug information as follows. The VM logs of failing jobs all have the same errors, like:
    I 2019-07-25T17:00:20.282471416Z [37582.281240] NVRM: Xid (PCI:0000:00:04): 13, Graphics Exception: missing inline data
    I 2019-07-25T17:00:20.282477291Z [37582.289153] NVRM: Xid (PCI:0000:00:04): 13, Graphics Exception: ESR 0x404600=0x80000002
    I 2019-07-25T17:00:20.282557397Z [37582.297688] NVRM: Xid (PCI:0000:00:04): 13, Graphics Exception: ChID 0017, Class 0000a1c0, Offset 000001b4, Data 00002000
FYI, I did not succeed in reproducing the error with cuda-memcheck, cuda-gdb, or CUDA_DEVICE_WAITS_ON_EXCEPTION=1. This issue is probably similar to 20356.
tensorflowtensorflow | all_reduce of collective ops hangs in a distributed environment | Bug | system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes; os platform and distribution: Linux CentOS 7.6.1810; mobile device: no; tensorflow installed from source or binary: no; tensorflow version: v1.13.1-0-g6612da8951; python version: 3.6.8; bazel version (if compiled from source): none; gcc/compiler version (if compiled from source): none; cuda/cudnn version: none; gpu model and memory: none. describe the current behavior: the monitored session hangs while fetching the reduced weights. describe the expected behavior: the all-reduced tensor reduced_weights gives the proper answer on all workers. code to reproduce the issue:
    # illustrate allreduce
    import multiprocessing as mp

    mp_method = "fork"  # fork: unix, spawn: windows
    num_processes = 2

    def process_fn(worker_hosts, task_index):
        # allreduce process
        import time
        import tensorflow as tf
        from tensorflow.python.ops import collective_ops

        num_workers = len(worker_hosts)
        cluster_spec = tf.train.ClusterSpec({"worker": worker_hosts})
        server = tf.train.Server(cluster_spec, job_name="worker", task_index=task_index)
        group_key = 0
        instance_key = 0
        with tf.Graph().as_default():
            weights_list = []
            reduced_weights = None
            for worker_index in range(num_workers):
                with tf.variable_scope("worker%d" % worker_index), \
                        tf.device("/job:worker/task:%d/device:CPU:0" % worker_index):
                    weight = tf.get_variable("weight", shape=[])
                    weights_list.append(weight)
                    if worker_index == task_index:
                        reduced_weights = collective_ops.all_reduce(
                            weight, num_workers, group_key, instance_key, "Add", "Div")
            session_creator = tf.train.ChiefSessionCreator(master=server.target)
            with tf.train.MonitoredSession(session_creator=session_creator) as mon_sess:
                print("task %d has %s" % (task_index, mon_sess.run(weights_list)))
                result = mon_sess.run(reduced_weights)
                print("task %d reduced %s" % (task_index, result))
                time.sleep(1)

    def start_processes():
        port = 60000
        host_fmt = "localhost:%d"
        worker_hosts = []
        for process_index in range(num_processes):
            worker_hosts.append(host_fmt % (port + process_index))
        mp_ctx = mp.get_context(mp_method)
        processes = []
        for process_index in range(num_processes):
            process = mp_ctx.Process(target=process_fn, args=(worker_hosts, process_index))
            processes.append(process)
            process.start()
        for process in processes:
            process.join()

    if __name__ == "__main__":
        start_processes()
other info/logs: console:
    (tf-1.13-py3) huwh1@huwh1-centos worksync$ python tf_distribute_collective_ops.py
    2019-08-23 10:55:20.150797: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2019-08-23 10:55:20.152951: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2019-08-23 10:55:20.163464: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz
    2019-08-23 10:55:20.163852: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x4364100 executing computations on platform Host. Devices:
    2019-08-23 10:55:20.163883: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0)
    2019-08-23 10:55:20.165614: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:252] Initialize GrpcChannelCache for job worker -> {0 -> localhost:60000, 1 -> localhost:60001}
    2019-08-23 10:55:20.165828: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz
    2019-08-23 10:55:20.166148: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x4363fe0 executing computations on platform Host. Devices:
    2019-08-23 10:55:20.166174: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0)
    2019-08-23 10:55:20.166519: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:391] Started server with target: grpc://localhost:60001
    2019-08-23 10:55:20.167632: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:252] Initialize GrpcChannelCache for job worker -> {0 -> localhost:60000, 1 -> localhost:60001}
    2019-08-23 10:55:20.168829: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:391] Started server with target: grpc://localhost:60000
    WARNING:tensorflow:From /home/huwh1/virtualenv/tf-1.13-py3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer.
    WARNING:tensorflow:From /home/huwh1/virtualenv/tf-1.13-py3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer.
    2019-08-23 10:55:20.262106: I tensorflow/core/distributed_runtime/master_session.cc:1192] Start master session c45b1693e334d401 with config:
    2019-08-23 10:55:20.269965: I tensorflow/core/distributed_runtime/master_session.cc:1192] Start master session a7c551a16b557bd8 with config:
    task 1 has [0.82924074, 0.72853804]
    task 0 has [0.82924074, 0.72853804]
The collective op all_reduce seems to be referenced only once, in build_collective_reduce (L360-L362), where the docstring suggests "input_tensors: tensors within a single worker graph that are to be reduced together; must be one per device". Does that mean all_reduce is only applicable to in-graph replication (replicated training)?
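For reference, the expected semantics of the "Add"/"Div" all-reduce used above: every participant should end up with the elementwise sum of all workers' tensors divided by the group size. The pure-Python sketch below only models those semantics; it is not the collective op itself and involves no TF runtime.

```python
def all_reduce_add_div(worker_tensors):
    """Reference semantics of an all-reduce with merge_op='Add' and
    final_op='Div': every participant receives the elementwise mean."""
    group_size = len(worker_tensors)
    length = len(worker_tensors[0])
    reduced = [sum(t[i] for t in worker_tensors) / group_size
               for i in range(length)]
    # Every worker receives the same reduced result.
    return [list(reduced) for _ in worker_tensors]

results = all_reduce_add_div([[0.8, 0.7], [0.2, 0.3]])
# each of the two workers receives [0.5, 0.5]
```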
tensorflowtensorflow | fix tensorflow lite documentation | Bug | hi, on the page the sample interpreter code is wrong (image): from tflite_runtime import Interpreter should be changed to from tflite_runtime.interpreter import Interpreter. Thanks, Hakan.
tensorflowtensorflow | tf 2.0.0rc0-gpu tf.data bug | Bug | please make sure that this is a bug; as per our github policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on github. system information: have I written custom code: yes; os platform and distribution: ubuntu 18.04 LTS; mobile device: -; tensorflow installed from source or binary: docker image; tensorflow version: tensorflow/tensorflow:2.0.0rc0-gpu-py3; python version: 3.6; bazel version (if compiled from source): -; gcc/compiler version (if compiled from source): -; cuda/cudnn version: CUDA 10; gpu model and memory: -. describe the current behavior: I am using tf.keras for training with the dataset API and tfrecords. At the beginning of training, the dataset API tries to load and shuffle the whole training dataset, so the memory fills up and the process dies. I cannot train my model at all. I am not using any shuffle call in my dataset pipeline. describe the expected behavior: the dataset should not load and shuffle the records unless a shuffle call is issued. code to reproduce the issue:
    import numpy as np
    import tensorflow as tf
    import os
    import cv2
    import tensorflow.keras.layers as k

    def extract_fn(data_record):
        features = {"data": tf.io.FixedLenFeature([], tf.string)}
        sample = tf.io.parse_single_example(data_record, features)
        data = tf.image.decode_image(sample["data"])
        return data, 1

    class DataGenerator(tf.keras.utils.Sequence):
        def __init__(self, dataset_iterator, length):
            self.dataset_iterator = dataset_iterator
            self.length = length

        def __len__(self):
            # number of batches per epoch
            return self.length

        def __getitem__(self, index):
            # generate one batch of data
            next_element = next(self.dataset_iterator)
            x = next_element[0]
            y = next_element[1]
            return x, y

    with tf.io.TFRecordWriter("dummy_dataset.tfrecord") as writer:
        data = np.float32(np.random.random(size=(1000, 1000, 3)) * 255)
        data = cv2.imencode(".png", data)[1].tostring()
        example = tf.train.Example(features=tf.train.Features(feature={
            "data": tf.train.Feature(bytes_list=tf.train.BytesList(value=[data]))}))
        for i in range(10000):
            writer.write(example.SerializeToString())

    dataset = tf.data.TFRecordDataset("dummy_dataset.tfrecord")
    dataset = dataset.map(extract_fn)
    n_batch = 3
    dataset = dataset.batch(batch_size=n_batch, drop_remainder=True)
    dataset = dataset.repeat(5)
    dataset_iterator = iter(dataset)
    data_generator = DataGenerator(dataset_iterator, int(10000 / n_batch))

    inputs = k.Input(shape=(1000, 1000, 3), name="input")
    net = k.Conv2D(1, 3, activation="sigmoid")(inputs)
    outputs = k.GlobalAveragePooling2D()(net)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.compile(loss="mse", optimizer="sgd")
    model.fit(x=data_generator, epochs=10)
other info/logs: note that the code generates about 28 GB of dummy data in a tfrecord file, since I was having this issue when trying to use the dataset API with tf.keras.utils.Sequence on a tfrecord file. (edit by robieta: code formatting)
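Since tf.keras in 2.0 accepts a tf.data.Dataset directly, one way to sidestep the Sequence-around-an-iterator pattern above is to pass the dataset straight to fit(). This is only a sketch of that shape with tiny in-memory stand-in data, not the tfrecord pipeline from the report.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in data instead of the 28 GB tfrecord file.
x = np.random.rand(12, 8).astype("float32")
y = np.random.rand(12, 1).astype("float32")
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(3).repeat()

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(loss="mse", optimizer="sgd")

# Pass the dataset directly; steps_per_epoch bounds each epoch
# since the dataset repeats indefinitely.
history = model.fit(dataset, steps_per_epoch=4, epochs=1, verbose=0)
```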
tensorflowtensorflow | tf2: tf.saved_model.save fails on a model reusing another model | Bug | system information: have I written custom code: yes; os platform and distribution: ubuntu 18.04; tensorflow installed from source or binary: binary; tensorflow version: 2.0.0-dev20190821; python version: 3.7.3; cuda/cudnn version: 10.0; gpu model and memory: GTX 1080 Ti. This bug might be related to the behavior observed by cysmnl in issuecomment-514736847; however, I think the bug presented is distinct from the original bug described there. describe the current behavior: the minimal working example below fails with the following exception:
    Traceback (most recent call last):
      File "mwe.py", line 33, in <module>
        tf.saved_model.save(second_convolution_model, "/tmp/model2")  # does not work
      File "python3.7/site-packages/tensorflow_core/python/saved_model/save.py", line 860, in save
        meta_graph_def, saveable_view, signatures
      File "python3.7/site-packages/tensorflow_core/python/saved_model/save.py", line 590, in _fill_meta_graph_def
        signatures = _generate_signatures(signature_functions, resource_map)
      File "python3.7/site-packages/tensorflow_core/python/saved_model/save.py", line 464, in _generate_signatures
        function_map_inputs, resource_map
      File "python3.7/site-packages/tensorflow_core/python/saved_model/save.py", line 416, in _call_function_with_mapped_captures
        function, graph_captures, resource_map
      File "python3.7/site-packages/tensorflow_core/python/saved_model/save.py", line 338, in _map_captures_to_created_tensors
        format(interior))
    AssertionError: Tried to export a function which references untracked object Tensor("StatefulPartitionedCall/args_1:0", shape=(), dtype=resource).TensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly.
As a side note, the error message is missing a space after the period (L334). describe the expected behavior: it should be possible to save models with tf.saved_model.save if they are saveable with model.save. code to reproduce the issue: this example is nonsensical, but it shows the problem; I've boiled the problem down to a mwe:
    import tensorflow as tf
    from tensorflow.python.keras import Model
    from tensorflow.python.keras.layers import Input, Conv1D, Lambda

    first_input = Input(shape=(1,))
    first_result = Conv1D(filters=1, kernel_size=1)(first_input[tf.newaxis])
    first_convolution_model = Model(inputs=first_input, outputs=first_result)

    def inner_loop(tensor):
        length = tf.shape(tensor)[0]
        collector = tf.TensorArray(tf.float32, size=length)
        _, collector = tf.while_loop(
            cond=lambda i, _: i < length,
            body=lambda i, c: (i + 1, c.write(i, first_convolution_model(tensor[i]))),
            loop_vars=(0, collector))
        return collector.stack()

    second_input = Input(shape=(1,))
    second_result = Lambda(inner_loop)(second_input)[0]
    second_convolution_model = Model(inputs=second_input, outputs=second_result)

    print(first_convolution_model.predict([[1], [2], [3]]))
    first_convolution_model.save("/tmp/model1.h5")  # works
    tf.saved_model.save(first_convolution_model, "/tmp/model1")  # works
    print(second_convolution_model.predict([[1], [2], [3]]))
    second_convolution_model.save("/tmp/model2.h5")  # works
    tf.saved_model.save(second_convolution_model, "/tmp/model2")  # does not work
    print("not reached")
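The assertion message itself points at the usual fix: whatever captures the inner model must track it. A sketch of that pattern, holding the inner model as an attribute of a tf.Module, is below; it is a generic illustration (tiny stand-in models and a hypothetical `Wrapper` class), not verified against the exact MWE above.

```python
import os
import tempfile
import tensorflow as tf

class Wrapper(tf.Module):
    """Sketch: holding the inner model as an attribute makes its
    variables tracked, which is what the AssertionError asks for."""
    def __init__(self, inner):
        super().__init__()
        self.inner = inner  # tracked attribute

    @tf.function(input_signature=[tf.TensorSpec([None, 1], tf.float32)])
    def __call__(self, x):
        return self.inner(x)

inner = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
wrapper = Wrapper(inner)

export_dir = os.path.join(tempfile.mkdtemp(), "wrapped_model")
tf.saved_model.save(wrapper, export_dir)
```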
tensorflowtensorflow | no attribute python | Bug | thank you for submitting a tensorflow documentation issue. Per our github policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on github. The tensorflow docs are open source; to get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): clear description; for example, why should someone use this method, and how is it useful? Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined (for example, raises)? Usage example: is there a usage example? Request visuals, if applicable: are there currently visuals, and if not, will they clarify the content? Submit a pull request: are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflow/tensorflow | Invalid links in the official TensorFlow Keras documentation | Bug | URL(s) with the issue: Description of issue (what needs changing): the last two links lead to 404s, which means they do not point to valid Colab or GitHub files.
tensorflow/tensorflow | TF_CPP_MIN_LOG_LEVEL does not work with tf2.0-dev20190820 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Windows 10 x64 1903. TensorFlow installed from (source or binary): pip. TensorFlow version: 2.0.0-dev20190820. Python version: 3.6.7. CUDA/cuDNN version: CUDA 10, cuDNN 7.6.2.24. GPU model and memory: GeForce RTX 2080 Ti, 11 GB. Describe the current behavior: setting TF_CPP_MIN_LOG_LEVEL does not work with the latest TF 2.0. If I set TF_CPP_MIN_LOG_LEVEL to 2, TF still shows INFO and WARNING logs, including libpng warnings. With dev20190504 this issue did not occur. Describe the expected behavior: setting TF_CPP_MIN_LOG_LEVEL=2 should prevent INFO/WARNING logs, including libpng warnings, from being shown. Code to reproduce the issue:

```python
import os
import tensorflow as tf
import numpy as np

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

a = tf.Variable(np.array([0, 1, 2]))
print(a)
```

You can see many INFO logs using dev20190820, but with dev20190504 there are no logs.
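A note on the repro above: the environment variable is set after `import tensorflow`, but the native library reads TF_CPP_MIN_LOG_LEVEL when it is loaded, so the standard advice is to export it before the import. A minimal sketch of the ordering (the TF import is commented out so the sketch stands alone; the level meanings are the documented ones):

```python
import os

# Set the level BEFORE importing tensorflow: the C++ runtime reads
# TF_CPP_MIN_LOG_LEVEL at load time, not on each log call.
# 0 = all logs, 1 = filter INFO, 2 = filter INFO+WARNING,
# 3 = filter INFO+WARNING+ERROR.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

# import tensorflow as tf  # only import TF after the variable is set
print(os.environ["TF_CPP_MIN_LOG_LEVEL"])
```

Whether this ordering explains the regression reported here is a separate question, but it is worth ruling out first.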
tensorflow/tensorflow | Some of the operators in the model are not supported by the standard TensorFlow Lite runtime: TFLite_Detection_PostProcess | Bug | System information: Windows 10; TensorFlow installed using pip; TensorFlow version 1.14. Text output from tflite_convert: "Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: CONCATENATION, CONV_2D, DEPTHWISE_CONV_2D, LOGISTIC, RESHAPE. Here is a list of operators for which you will need custom implementations: TFLite_Detection_PostProcess." Model: ssd_mobilenet_v1_coco. The following code was used to convert the graph file to tflite:

```python
import tensorflow as tf

graph_def_file = 'C:/tensorflow1/models/research/object_detection/inference_graph/tflite_graph.pb'
input_arrays = ['normalized_input_image_tensor']
input_shapes = {'normalized_input_image_tensor': [1, 300, 300, 3]}
output_arrays = ['TFLite_Detection_PostProcess',
                 'TFLite_Detection_PostProcess:1',
                 'TFLite_Detection_PostProcess:2',
                 'TFLite_Detection_PostProcess:3']
# output shapes: {'raw_outputs/box_encodings': [1, 10, 4]}

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays, input_shapes)
tflite_model = converter.convert()
open('detect.tflite', 'wb').write(tflite_model)
```
tensorflow/tensorflow | Attempting to build with NDK only in configure.py results in invalid android.bzl | Bug | System information: OS platform and distribution: Ubuntu 18.04 (Docker). TensorFlow installed from: source. TensorFlow version: 1.14.0. Python version: 3.7, installed via pip. Bazel version: 0.24.1. Compiler: clang from NDK r17b. Describe the problem: I am attempting to build TensorFlow Lite using the Android NDK compiler without plugging in the SDK, so I set TF_SET_ANDROID_WORKSPACE to 0 while having ANDROID_NDK_HOME pointing to the location of a valid NDK installation. Attempting such a build results in:

```
ERROR: /root/.cache/bazel/_bazel_root/9deba62a2f90185d207a955a930b77dc/external/local_config_android/android.bzl:10:2: indentation error
ERROR: error loading package '': Extension 'android.bzl' has errors
ERROR: error loading package '': Extension 'android.bzl' has errors
```

This happens because configure.py adds the NDK location specified in the env variable by appending to an android.bzl whose function body contains only `pass`, and the new lines are added with 2-space indentation while the rest of android.bzl uses 4 spaces. This is the first problem: I had to manually remove `pass` and leave only the NDK setup; there should not be such a difference in indentation when generating android.bzl. The second issue is whether it should be possible at all to build TensorFlow Lite (libtensorflowlite.so) with the NDK only; I was able to do so after manually correcting android.bzl. Exact sequence of steps executed before running into the problem. Environment variables:

```
PYTHON_BIN_PATH: use default
PYTHON_LIB_PATH: 1
TF_ENABLE_XLA=0
TF_NEED_ROCM=0
TF_NEED_CUDA=0
TF_NEED_MPI=0
TF_DOWNLOAD_CLANG=0
TF_SET_ANDROID_WORKSPACE=0
TF_CONFIGURE_IOS=0
ANDROID_NDK_HOME=<ndk location>
ANDROID_NDK_API_LEVEL=18
TF_NEED_OPENCL_SYCL=1
TF_NEED_COMPUTECPP=0
TRISYCL_INCLUDE_DIR=<...>/include
HOST_CXX_COMPILER=<...>/toolchains/llvm/prebuilt/linux-x86_64/bin/clang++
HOST_C_COMPILER=<...>/toolchains/llvm/prebuilt/linux-x86_64/bin/clang
CC_OPT_FLAGS=-Wno-sign-compare
```

Run `python configure.py`, run `bazel shutdown`, then run the build with:

```shell
bazel build --config=opt --config=v2 --cxxopt=--std=c++11 \
  --define=no_tensorflow_py_deps=true --config=android_arm64 \
  //tensorflow/lite:libtensorflowlite.so --verbose_failures
```

Any other info/logs: manual correction of android.bzl results in a successful build, but there seem to be no symbols related to the GPU delegate; I guess a separate target is required.
tensorflow/tensorflow | TFLite is going to die with serious and ridiculous bugs it has carried for a long time | Bug | (no body)
tensorflow/tensorflow | Keras model.fit does not work with SparseTensor input with the functional API | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04. Mobile device: no. TensorFlow installed from: binary. TensorFlow version: 1.14.0. Describe the current behavior: the program exits with the following error:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'sparse_tensor/indices' with dtype int64 and shape [?,2]
         [[node sparse_tensor/indices]]
```

Describe the expected behavior: no error occurs and training continues. Code to reproduce the issue:

```python
import numpy as np
import tensorflow.compat.v2 as tf


def dummy_parse_fn(iterable):
    # The input is always a constant feature.
    feature_sparse_tensor = tf.SparseTensor(
        indices=tf.constant([[0, 0], [1, 1]], dtype=tf.int64),
        values=tf.constant([1.0, 1.0], dtype=tf.float32),
        dense_shape=tf.constant([2, 2], dtype=tf.int64))
    features = {
        'sparse_tensor/indices': feature_sparse_tensor.indices,
        'sparse_tensor/values': feature_sparse_tensor.values,
        'sparse_tensor/dense_shape': feature_sparse_tensor.dense_shape,
    }
    labels = tf.constant([1.0, 1.0], dtype=tf.float32)
    return features, labels


def get_dummy_dataset():
    iterable = np.random.random((128, 1)).astype(np.float32)
    return (tf.data.Dataset.from_tensor_slices(iterable)
            .map(dummy_parse_fn)
            .take(1024))


if __name__ == '__main__':
    print(tf.__version__)
    inputs = tf.keras.layers.Input(shape=(2,), sparse=True, name='sparse_tensor')
    weights = tf.Variable(name='weights', shape=(2, 1), initial_value=[[1.0], [1.0]])
    outputs = tf.sparse.sparse_dense_matmul(inputs, weights)
    model = tf.keras.Model(inputs, outputs)
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9, nesterov=True)
    loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
    model.fit(get_dummy_dataset(), epochs=2)
```
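For reference, `tf.sparse.sparse_dense_matmul` computes an ordinary matrix product whose left operand is given in COO form (indices, values, dense_shape). A minimal pure-Python sketch of that semantics, applied to the constant sparse tensor and weights from the repro (function name and layout are illustrative, not TF API):

```python
def coo_matmul(indices, values, dense_shape, dense):
    """Multiply a COO-format sparse matrix by a dense matrix (list of rows)."""
    rows, _ = dense_shape
    cols = len(dense[0])
    out = [[0.0] * cols for _ in range(rows)]
    # Each stored entry (r, c) -> v contributes v * dense[c] to output row r.
    for (r, c), v in zip(indices, values):
        for j in range(cols):
            out[r][j] += v * dense[c][j]
    return out

# The 2x2 identity-like sparse matrix from the repro times the weights [[1], [1]]:
result = coo_matmul([[0, 0], [1, 1]], [1.0, 1.0], [2, 2], [[1.0], [1.0]])
# result == [[1.0], [1.0]]
```

This is what the model's forward pass should evaluate to for every example, independent of the feed-dict plumbing that fails in the bug.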
tensorflow/tensorflow | Processing batches with different sequence lengths using stacked LSTM layers | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS 10.14.4. TensorFlow installed from: source. TensorFlow version: 2.0.0-beta1. Python version: 3.6.5. Describe the current behavior: an exception is raised when trying to stack multiple tf.keras.layers.LSTM layers while the sequence length changes across batches. This behavior occurs only if the tf.keras model is built with model subclassing; if the model is built using the functional API, everything works as intended. Describe the expected behavior: because the implementations are identical apart from how the model is built (subclassing vs. functional API), I would expect the results to be the same. In other words, I am confused why an exception is raised at all when using model subclassing. Code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf


def train_generator():
    while True:
        sequence_length = np.random.randint(10, 100)
        x_train = np.random.random((1000, sequence_length, 5))
        y_train = np.random.random((1000, sequence_length, 2))
        yield x_train, y_train


# Works as intended:
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(32, return_sequences=True, input_shape=(None, 5)))
model.add(tf.keras.layers.LSTM(8, return_sequences=True))
model.add(tf.keras.layers.Dense(2))
model.compile(optimizer='adam', loss='mse')
model.fit_generator(train_generator(), steps_per_epoch=2, epochs=2, verbose=1)


# Throws an exception:
class LSTMModel(tf.keras.Model):
    def __init__(self):
        super(LSTMModel, self).__init__()
        self.lstm_0 = tf.keras.layers.LSTM(32, return_sequences=True,
                                           input_shape=(None, 5))
        self.lstm_1 = tf.keras.layers.LSTM(8, return_sequences=True)
        self.dense = tf.keras.layers.Dense(2)

    def call(self, inputs, training=False):
        output = self.lstm_0(inputs)
        output = self.lstm_1(output)
        output = self.dense(output)
        return output


model = LSTMModel()
model.compile(optimizer='adam', loss='mse')
model.fit_generator(train_generator(), steps_per_epoch=2, epochs=2, verbose=1)
```

Other info/logs:

```
InvalidArgumentError: Derived operation expects a list with 58 elements but got a list with 88 elements
  [[{{node gradients/TensorArrayUnstack/TensorListFromTensor_grad/TensorListStack}}]]
  [[Adam/gradients_24/lstm_model_22/lstm_56/StatefulPartitionedCall_grad/StatefulPartitionedCall]]
  (op: __inference_keras_scratch_graph_75269)
```
tensorflow/tensorflow | tf.load_op_library unable to load manylinux2010-repaired custom op | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no, using the custom-op example, but it breaks for addons too. OS platform and distribution: Ubuntu 16.04. TensorFlow installed from: binary. TensorFlow version: tf-nightly, tf-nightly-2.0-preview. Describe the current behavior: currently, when I build a custom op in the tensorflow/tensorflow:custom-op-ubuntu16 Docker image using the defined steps, I get an installable pip package, tensorflow_zero_out-0.0.1-cp27-cp27mu-linux_x86_64.whl, and it works fine. However, if I repair that wheel to be manylinux2010-compliant, then tf.load_op_library fails to find the custom op:

```
$ python -c "import tensorflow as tf; print(dir(tf.load_op_library('manylinux/tensorflow_zero_out/python/ops/_zero_out_ops.so')))"
['LIB_HANDLE', 'OP_LIST', 'ZeroOut', '_InitOpDefLibrary', '__builtins__', '__doc__', '__name__', '__package__', '_collections', '_common_shapes', '_context', '_core', '_dispatch', '_doc_controls', '_dtypes', '_errors', '_execute', '_kwarg_only', '_op_def_lib', '_op_def_library', '_op_def_pb2', '_op_def_registry', '_ops', '_pywrap_tensorflow', '_six', '_tensor_shape', 'deprecated_endpoints', 'tf_export', 'zero_out', 'zero_out_eager_fallback']

$ python -c "import tensorflow as tf; print(dir(tf.load_op_library('manylinux2010/tensorflow_zero_out/python/ops/_zero_out_ops.so')))"
['LIB_HANDLE', 'OP_LIST', '_InitOpDefLibrary', '__builtins__', '__doc__', '__name__', '__package__', '_collections', '_common_shapes', '_context', '_core', '_dispatch', '_doc_controls', '_dtypes', '_errors', '_execute', '_kwarg_only', '_op_def_lib', '_op_def_library', '_op_def_pb2', '_op_def_registry', '_ops', '_pywrap_tensorflow', '_six', '_tensor_shape', 'deprecated_endpoints', 'tf_export']
```

Notice that 'ZeroOut', 'zero_out' and 'zero_out_eager_fallback' are not found in the loaded library for manylinux2010. Code to reproduce the issue:

```shell
git clone <custom-op repo URL>
cd custom-op
docker run -it --rm -v ${PWD}:/working_dir -w /working_dir tensorflow/tensorflow:custom-op-ubuntu16 /bin/bash
pip install tf-nightly
./configure.sh
bazel build build_pip_pkg
bazel-bin/build_pip_pkg artifacts
# The installed auditwheel is too old for manylinux2010:
pip3 install --upgrade auditwheel
# libtensorflow_framework needs to be on the LD path:
export LD_LIBRARY_PATH=/usr/local/lib/python2.7/dist-packages/tensorflow_core
# Repair log looks more or less okay:
auditwheel -v repair --plat manylinux2010_x86_64 artifacts/tensorflow_zero_out-0.0.1-cp27-cp27mu-linux_x86_64.whl 2>&1 | tee repair.txt
```

Other info/logs: here is the auditwheel repair log (repair.txt), the readelf inspection of the .so files (readelf.txt, readelf_manylinux2010.txt), and the .so files (so_files.zip). cc @perfinion @gunan @yifeif. Edit: here is the extracted .whl directory which will work with the python tf.load_op_library command from above; the manylinux2010 repair makes the custom op depend on a newly copied libtensorflow_framework.so, which is part of the new .whl (custom_op_dir.zip).
tensorflow/tensorflow | RuntimeError: Quantization not yet supported for op: FAKE_QUANT | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04. TensorFlow installed from: Docker image latest-gpu-py3. TensorFlow version: 1.14. Python version: 3.6. GPU model and memory: RTX 2080 Ti. Describe the current behavior: I have trained an autoencoder and want to convert it to a tflite model. I successfully froze the graph and was able to convert the non-quantized 32-bit float version. When trying to convert the very same frozen graph file with the uint8 option, I get the error "RuntimeError: Quantization not yet supported for op: FAKE_QUANT". Describe the expected behavior: the frozen model does not seem to be corrupt, because I was able to deploy the other version successfully; FAKE_QUANT should inherently be supported. Code to reproduce the issue: conversion is attempted with:

```python
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, inputs, outputs)
converter.representative_dataset = representative_dataset_gen
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```

Other info/logs: the error traceback:

```
INFO: Initialized TensorFlow Lite runtime.
Traceback (most recent call last):
  File "train_and_save.py", line 289, in <module>
    vae.create_tflite_model()
  File "train_and_save.py", line 249, in create_tflite_model
    tflite_model = converter.convert()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py", line 908, in convert
    inference_output_type)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py", line 200, in _calibrate_quantize_model
    inference_output_type, allow_float)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/optimize/calibrator.py", line 78, in calibrate_and_quantize
    np.dtype(output_type.as_numpy_dtype()).num, allow_float)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/optimize/tensorflow_lite_wrap_calibration_wrapper.py", line 115, in QuantizeModel
    return _tensorflow_lite_wrap_calibration_wrapper.CalibrationWrapper_QuantizeModel(self, input_py_type, output_py_type, allow_float)
RuntimeError: Quantization not yet supported for op: FAKE_QUANT
```

If it helps, I could provide the frozen graph file.
tensorflow/tensorflow | FIPS-enabled computers fail due to MD5 use | Bug | System information: OS platform and distribution: Red Hat Enterprise Linux Workstation release 7.7; Python 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2. TensorFlow version: tf.GIT_VERSION = v1.14.0-rc1-22-gaf24dc91b5. Installed using pip. Importing tflearn fails due to TensorFlow's use of MD5 on a FIPS-enabled machine:

```
>>> import tflearn
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../lib/python2.7/site-packages/tflearn/__init__.py", line 4, in <module>
    from . import config
  File ".../lib/python2.7/site-packages/tflearn/config.py", line 5, in <module>
    from .variables import variable
  File ".../lib/python2.7/site-packages/tflearn/variables.py", line 7, in <module>
    from tensorflow.contrib.framework.python.ops import add_arg_scope as contrib_add_arg_scope
  File ".../lib/python2.7/site-packages/tensorflow/contrib/__init__.py", line 31, in <module>
    from tensorflow.contrib import cloud
  File ".../lib/python2.7/site-packages/tensorflow/contrib/cloud/__init__.py", line 28, in <module>
    from tensorflow.contrib.bigtable.python.ops.bigtable_api import BigtableClient
  File ".../lib/python2.7/site-packages/tensorflow/contrib/bigtable/__init__.py", line 29, in <module>
    from tensorflow.contrib.bigtable.python.ops.bigtable_api import BigtableClient
  File ".../lib/python2.7/site-packages/tensorflow/contrib/bigtable/python/ops/bigtable_api.py", line 44, in <module>
    resource_loader.get_path_to_datafile("_bigtable.so"))
  File ".../lib/python2.7/site-packages/tensorflow/contrib/util/loader.py", line 56, in load_op_library
    ret = load_library.load_op_library(path)
  File ".../lib/python2.7/site-packages/tensorflow/python/framework/load_library.py", line 73, in load_op_library
    module_name = hashlib.md5(wrapper).hexdigest()
ValueError: error:060800A3:digital envelope routines:EVP_DigestInit_ex:disabled for fips
```

Replacing all MD5 calls with SHA-1 calls should work.
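The suggested fix is mechanical: the MD5 call in `load_library.py` only generates a module name (a cache key, not a security primitive), so any FIPS-approved digest serves equally well. A sketch of the substitution (the function name and the `wrapper` argument are taken from the traceback context, not from TF source):

```python
import hashlib

def module_name_for(wrapper):
    # hashlib.md5(...) raises on FIPS-enabled hosts because MD5 is not a
    # FIPS-approved algorithm; SHA-1 (or SHA-256) is, and works fine as a
    # non-security cache key for the generated module name.
    return hashlib.sha1(wrapper).hexdigest()
```

On Python 3.9+ another option is `hashlib.md5(data, usedforsecurity=False)`, which some FIPS-patched OpenSSL builds accept, but the SHA-1 swap is portable to the Python 2.7 environment in this report.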
tensorflow/tensorflow | Deep Learning image tensorflow-1-14-0-m33 on Google Cloud produces wrong and non-deterministic loss after backpropagation | Bug | System information: Have I written custom code: yes, the code is attached. OS platform and distribution: Linux thomas-tf-14 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u4 (2019-07-19) x86_64 GNU/Linux. Mobile device: tested on Google Cloud. TensorFlow installed from: binary, Deep Learning image tensorflow-1-14-0-m33 on Google Cloud. TensorFlow version: v1.14.0-0-g87989f6 1.14.0. Python version: 2.7.13 / 3.5.3. Not compiled from source; CUDA not used; no GPU. Describe the current behavior: on Google Cloud, using the Deep Learning image tensorflow-1-14-0-m33, results are non-deterministic: across multiple runs, the loss after performing backpropagation and updating a variable differs between runs and does not match the loss of the non-optimized standard TensorFlow installation. This behaviour exists both with Python 2 and with Python 3. The instance was created with:

```shell
gcloud compute instances create tf-1-14-cpu --zone=us-west1-b \
  --image-family=tf-1-14-cpu --image-project=deeplearning-platform-release
```

Run 1: loss during step 0: 0.41999998688697815; step 1: 3.698721931466375e-19; step 2: 7.38836981337104e-19.
Run 2: loss during step 0: 0.41999998688697815; step 1: 0.41999998688697815; step 2: 0.41999998688697815.
Run 3: loss during step 0: 0.41999998688697815; step 1: 9.46872814455392e-21; step 2: 1.8914226722229864e-22.

Describe the expected behavior: on the same machine, using a virtualenv to force the non-optimized TensorFlow:

```shell
virtualenv -p python3 test
source test/bin/activate
pip3 install tensorflow==1.14.0
```

Runs 1, 2 and 3 all give: loss during step 0: 0.41999998688697815; step 1: 1.4199999570846558; step 2: 2.4200000762939453.

Code to reproduce the issue:

```python
import tensorflow as tf

if __name__ == '__main__':
    session = tf.Session()
    image = tf.get_variable(name='image', shape=(1, 1, 1, 1),
                            initializer=tf.constant_initializer(0.42))
    session.run(tf.global_variables_initializer())
    kernel = tf.constant(1.0, shape=(1, 1, 1, 1), dtype=tf.float32)
    conv_out = tf.nn.conv2d(image, kernel, strides=(1, 1, 1, 1), padding='SAME')
    max_conv_out = tf.math.reduce_max(conv_out, axis=2)
    loss = tf.reduce_sum(max_conv_out)
    opt = tf.train.GradientDescentOptimizer(learning_rate=1.0, name='sgd')
    optimizer = opt.minimize(loss, var_list=[image], name='sgd_minimize')
    for step in range(3):
        loss_value, _ = session.run((loss, optimizer))
        print('Loss during step {}: {}'.format(step, loss_value))
```

Other info/logs: python_code.txt, tf_env.txt attached.
tensorflow/tensorflow | Cannot use a tokenizer to tokenize input words for text classification in TF 1.14 without eager execution (for TPUs) | Bug | I am trying to write my code to use TPUs; however, I cannot simply tokenize the text. I have tried many things to make it work, but nothing does. There is no documentation from TensorFlow on how to do this without enabling eager execution in TF 1.14:

```python
tokenizer = tfds.features.text.Tokenizer()
vocabulary_set = set()
for text_tensor, _ in all_labeled_data:
    some_tokens = tokenizer.tokenize(text_tensor)
    vocabulary_set.update(some_tokens)
```

I am receiving the following error: "TypeError: Expected binary or unicode string, got <tf.Tensor ...>". I cannot get the value of this iterator even by using session.run. There is literally no help from TensorFlow on how to tokenize input for simple text classification without eager mode.
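The error arises because without eager execution `text_tensor` is a symbolic tensor, so `tokenizer.tokenize` never receives an actual string. The vocabulary-building step itself is plain Python once real strings are in hand; as a stand-in illustration (a simple regex tokenizer, not the TFDS API), the intended logic looks like:

```python
import re

def tokenize(text):
    # Split on runs of non-alphanumeric characters, roughly mirroring a
    # word-level tokenizer, and lowercase for a case-insensitive vocabulary.
    return [t for t in re.split(r"\W+", text.lower()) if t]

vocabulary_set = set()
for text in ["TensorFlow makes tokenizing easy", "easy text classification"]:
    vocabulary_set.update(tokenize(text))
# vocabulary_set == {"tensorflow", "makes", "tokenizing", "easy",
#                    "text", "classification"}
```

In graph mode, the missing piece is obtaining the string values at all (e.g. evaluating the dataset elements in a session first), which is exactly what the reporter could not find documented.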
tensorflow/tensorflow | How to check that a model restored OK with tf.train.Saver.restore | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): Clear description: for example, why should someone use this method? How is it useful? Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined? Usage example: is there a usage example? Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflow/tensorflow | tf.data.Dataset.list_files samples a subset and repeatedly iterates over each file when shuffle is enabled | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS Mojave 10.14.6. Mobile device: n/a. TensorFlow installed from: binary. TensorFlow version: v1.14.0-rc1-22-gaf24dc91b5, 1.14.0. Python version: 3.7.3. Describe the current behavior: while testing a tf.data setup, I want to select a subset of the data to read from disk. Doing that with tf.data.Dataset.list_files(..., shuffle=True).take(2) iterates over the entire folder despite the explicit take. Describe the expected behavior: when shuffle is enabled in tf.data.Dataset.list_files, I should be able to repeat the random sample indefinitely. Code to reproduce the issue: first, create dummy data:

```shell
mkdir scratchpad
cd scratchpad
for i in {0..100}; do touch $i; done
```

Then, in IPython:

```python
import os
import tensorflow as tf

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
it = (tf.data.Dataset.list_files(os.path.join(...))  # path elided in report
      .take(2)
      .repeat(None)
      .make_one_shot_iterator()
      .get_next())
[sess.run(it) for _ in range(12)]
```

The above works as expected with shuffle disabled:

```python
it = (tf.data.Dataset.list_files(os.path.join(...), shuffle=False)
      .take(2)
      .repeat(None)
      .make_one_shot_iterator()
      .get_next())
[sess.run(it) for _ in range(12)]
```
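The behavior the reporter expects can be stated precisely with a plain-Python sketch: pick one random subset of the file list once, then cycle over only that subset on repeat (names are illustrative; the actual `list_files` pipeline reshuffles the file list on each repetition, which is why more files get touched):

```python
import random

# The 0..100 dummy files created in the repro.
files = [str(i) for i in range(101)]

# Expected semantics of shuffle + take(2): pin down ONE random
# two-element subset of the file list...
subset = random.sample(files, 2)

# ...and repeat(None) then cycles over that same subset indefinitely,
# never touching the other 99 files.
seen = [subset[i % 2] for i in range(12)]
assert set(seen) == set(subset)
```

The discrepancy between this expectation and the observed "iterates over the entire folder" behavior is the substance of the report.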
tensorflow/tensorflow | TFLite: incorrect quantization scale application in unit test utils | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, the bug was discovered when evaluating new quantization modes. OS platform and distribution: discovered on Ubuntu 16.04, but it should be applicable to any platform where the tests are run. Mobile device: n/a. TensorFlow installed from: source, master branch. Python version: 3.6. Bazel version: 0.26.1. GCC version: 5.4.0. CUDA/cuDNN: n/a. GPU: n/a. Describe the current behavior: the quantization scale in PerChannelQuantizeBias (line 239 in tensorflow/lite/kernels/test_util.h) appears to be applied incorrectly: the floating-point input data is multiplied by the scale value, whereas it should be divided by it. The corresponding tests in conv_test.cc and depthwise_conv_test.cc (e.g. SimplePerChannelTest, line 1343) appear to contain the wrong expected values. Describe the expected behavior: the division operation should be used to convert the floating-point values to quantized fixed-point values, and the corresponding tests changed respectively. Code to reproduce the issue: n/a. Other info/logs: n/a.
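For context on why the direction of the operation matters: affine quantization maps a float to fixed point by dividing by the scale, and maps back by multiplying, so a multiply on the quantize side silently changes every stored value by a factor of scale squared. A minimal sketch of the correct round trip (illustrative helper names, zero_point kept for generality):

```python
def quantize(x, scale, zero_point=0):
    # float -> fixed point: DIVIDE by the scale (the operation the
    # test utility should be performing).
    return int(round(x / scale)) + zero_point

def dequantize(q, scale, zero_point=0):
    # fixed point -> float: multiply by the scale.
    return (q - zero_point) * scale

# With scale 0.25, the float 0.5 is stored as the integer 2, and
# dequantizing recovers 0.5 exactly.
assert quantize(0.5, 0.25) == 2
assert dequantize(quantize(0.5, 0.25), 0.25) == 0.5
```

Multiplying instead would store 0.5 * 0.25 = 0.125, rounding to 0, which is the class of error the report describes in PerChannelQuantizeBias.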
tensorflow/tensorflow | Getting a 404 on the XLA JIT page | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: Description of issue (what needs changing): getting "Not Found" when opening the XLA JIT page.
tensorflow/tensorflow | A guide is needed for transforming models to be suitable for TensorFlow Lite with the GPU delegate | Bug | Hello, where can one find out how to transform a model to be suitable for use with TensorFlow Lite on the GPU? What should I do to transfer images from 3 components to 4 components? I was trying to work with MTCNN; I converted it to tflite successfully and it works fine on the CPU. Errors:

```
ERROR: Next operations are not supported by GPU delegate:
NEG: Operation is not supported.
First 2 operations will run on the GPU, and the remaining 40 on the CPU.
WARN: CompileToBinary: 256: C:\fakepath(86,169-182): warning X3556: integer divides may be much slower, try using uints if possible
C:\fakepath(86,246-259): warning X3556: integer modulus may be much slower, try using uints if possible
ERROR: TfLiteGpuDelegate Invoke: ConvertToPHWC4: Input data size does not match expected size: 12288000 != 6912
ERROR: Node number 27 (TfLiteGpuDelegate) failed to invoke.
```
tensorflow/tensorflow | Unable to learn model weights using tf.nn.nce_loss | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: KDE neon 5.16, based on Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from: source. TensorFlow version: 1.12.0 (git: v1.12.0-0-ga6d8ffae09). Python version: 3.6.8. Bazel version: 0.19.1. GCC version: 8.0.1 20180414 (experimental, trunk revision 259383). CUDA/cuDNN version: CUDA 10.0, cuDNN 7.4.2. GPU model and memory: NVIDIA GeForce GTX 1050 Ti Max-Q with 4 GB of memory. Describe the current behavior: I am trying to build a word2vec model by following the TensorFlow tutorial. I have changed the code a little, mainly in the part where I load training data. The problem is that the model won't learn anything: the loss keeps fluctuating up and down, and the weights of my network stay constant throughout the training process (I have checked this using TensorBoard). I am convinced there is something wrong either with the code posted in the tutorial or with the tf.nn.nce_loss function, as I don't see any problem with the code I wrote. Describe the expected behavior: I am expecting the model to learn something, which doesn't mean I expect it to reach a specific accuracy. In other words, I expect the network's weights to be updated after a training step. Code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

VOCABULARY_SIZE = 13046
EMBEDDING_SIZE = 256
NUM_NOISE = 1
LEARNING_RATE = 1e-3
BATCH_SIZE = 1024
EPOCHS = 10


def make_hparam_string(embedding_size, num_noise, learning_rate, batch_size, epochs):
    return f'es={embedding_size},nn={num_noise},lr={learning_rate},bs={batch_size},e={epochs}'


# These are the hidden-layer weights.
embeddings = tf.get_variable(
    name='embeddings',
    initializer=tf.random_uniform([VOCABULARY_SIZE, EMBEDDING_SIZE], -1.0, 1.0),
    trainable=True)

# NCE stands for noise-contrastive estimation and represents a particular loss
# function. nce_weights and nce_biases are simply the output weights and biases.
# Note: for some reason, even though the output weights would have shape
# (EMBEDDING_SIZE, VOCABULARY_SIZE), we have to initialize them with shape
# (VOCABULARY_SIZE, EMBEDDING_SIZE).
nce_weights = tf.get_variable(
    name='output_weights',
    initializer=tf.truncated_normal([VOCABULARY_SIZE, EMBEDDING_SIZE],
                                    stddev=1.0 / np.sqrt(EMBEDDING_SIZE)),
    trainable=True)
nce_biases = tf.get_variable(
    name='output_biases',
    initializer=tf.constant_initializer(0.1),
    shape=[VOCABULARY_SIZE],
    trainable=True)

# Placeholders for the inputs.
train_inputs = tf.placeholder(tf.int32, shape=[None])     # (BATCH_SIZE,)
train_labels = tf.placeholder(tf.int32, shape=[None, 1])  # (BATCH_SIZE, 1)

# This lets us quickly retrieve the corresponding embedding for each
# word in train_inputs.
matched_embeddings = tf.nn.embedding_lookup(embeddings, train_inputs)

# Compute the NCE loss using a sample of the negative labels each time.
loss = tf.reduce_mean(tf.nn.nce_loss(
    weights=nce_weights,
    biases=nce_biases,
    labels=train_labels,
    inputs=matched_embeddings,
    num_sampled=NUM_NOISE,
    num_classes=VOCABULARY_SIZE))

# Use the SGD optimizer to minimize the loss function.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=LEARNING_RATE).minimize(loss)

# Some summaries for TensorBoard.
loss_summary = tf.summary.scalar('nce_loss', loss)
input_embedding_summary = tf.summary.histogram('input_embeddings', embeddings)
output_embedding_summary = tf.summary.histogram('output_embeddings', nce_weights)

# Load data.
target_words = np.genfromtxt('target_words.txt', dtype=int, delimiter='\n').reshape(-1, 1)
context_words = np.genfromtxt('context_words.txt', dtype=int, delimiter='\n').reshape(-1, 1)

# Convert to tensors.
target_words_tensor = tf.convert_to_tensor(target_words)
context_words_tensor = tf.convert_to_tensor(context_words)

# Create a tf.data.Dataset object representing our dataset.
dataset = tf.data.Dataset.from_tensor_slices((target_words_tensor, context_words_tensor))
dataset = dataset.shuffle(buffer_size=target_words.shape[0])
dataset = dataset.batch(BATCH_SIZE)

# Create an iterator to iterate over the dataset.
iterator = dataset.make_initializable_iterator()
next_batch = iterator.get_next()

# Train the model.
with tf.Session() as session:
    # Initialize variables.
    session.run(tf.global_variables_initializer())
    # Merged summaries; FileWriter for TensorBoard.
    merged_summaries = tf.summary.merge_all()
    hparam_string = make_hparam_string(EMBEDDING_SIZE, NUM_NOISE, LEARNING_RATE,
                                       BATCH_SIZE, EPOCHS)
    loss_writer = tf.summary.FileWriter(f'./tensorboard/{hparam_string}')
    global_step = 0
    for epoch in range(EPOCHS):
        session.run(iterator.initializer)
        while True:
            try:
                inputs, labels = session.run(next_batch)
                feed_dict = {train_inputs: inputs[:, 0], train_labels: labels}
                _, cur_loss, all_summaries = session.run(
                    [optimizer, loss, merged_summaries], feed_dict=feed_dict)
                # Write summaries to disk.
                loss_writer.add_summary(all_summaries, global_step=global_step)
                global_step += 1
                print(f'Current loss: {cur_loss}')
            except tf.errors.OutOfRangeError:
                print(f'Finished epoch {epoch}')
                break
```

Other info/logs: I am attaching the training samples (target_words.txt) and the corresponding labels (context_words.txt) in case you want to reproduce this issue.
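As a sanity check on what `tf.nn.nce_loss` approximates: the negative-sampling simplification of NCE scores one true (target, context) pair against k sampled noise pairs with a sigmoid cross-entropy. A pure-Python sketch of that objective (illustrative only; the actual TF implementation additionally uses log-uniform candidate sampling and correction terms):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def negative_sampling_loss(positive_score, negative_scores):
    # Push sigmoid(score) toward 1 for the true pair and toward 0 for
    # each of the k sampled noise pairs.
    loss = -math.log(sigmoid(positive_score))
    for s in negative_scores:
        loss -= math.log(sigmoid(-s))
    return loss

# With uninformative (zero) dot products, the loss is (1 + k) * ln 2:
assert abs(negative_sampling_loss(0.0, [0.0]) - 2 * math.log(2)) < 1e-9
```

With NUM_NOISE = 1 as in the repro, a per-example loss hovering around 2 ln 2 ≈ 1.39 is what an unlearning model would show, which matches the "loss keeps fluctuating" symptom the reporter describes.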
tensorflowtensorflow | TensorFlow Serving batching configuration docs have a bunch of broken links | Bug | Broken URL: Batching configuration. All the links under the "Batching configuration" section return 404. Example links: 1. Servers with multiple models, model versions, or subtasks; 2. …; 3. Batch scheduling parameters and tuning. |
tensorflowtensorflow | TF Lite fails on --define=with_select_tf_ops=true | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code:

```
cc_binary(
    name = "prophet.so",
    srcs = [
        "minimal.h",
        "prophet.cc",
    ],
    linkopts = tflite_linkopts() + select({
        "//tensorflow:android": [
            "-pie",  # Android 5.0 and later supports only PIE
            "-lm",   # some builtin ops, e.g. tanh, need -lm
        ],
        "//conditions:default": [],
    }),
    deps = [
        "//tensorflow/lite:framework",
        "//tensorflow/lite/delegates/flex:delegate",  # on/off
        "//tensorflow/lite/c:c_api_internal",
        "//tensorflow/lite/kernels:builtin_ops",
        "//tensorflow/lite:builtin_op_data",
        "//tensorflow/lite/schema:schema_fbs",
    ],
    linkshared = True,
)
```

- OS platform and distribution: VMware Linux Ubuntu 18.04 x64 on Windows 10 x64
- TensorFlow installed from: source (GitHub, 2019-08-17)
- TensorFlow version: 1.14.0
- Python version: 3.7.4
- Bazel version (if compiling from source): 0.26
- GCC/compiler version (if compiling from source): 7.4
- CUDA/cuDNN version: none (CPU)
- GPU model and memory: CPU

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`.

**Describe the current behavior**: Target `//tensorflow/lite/examples/l17:prophet.so` fails to build.

**Describe the expected behavior**: The build succeeds.

**Code to reproduce the issue** (a reproducible test case that is the bare minimum necessary to generate the problem):

```
bazel build --config=monolithic --cxxopt=--std=c++11 -c opt \
  --crosstool_top=//external:android/crosstool \
  --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
  --define=with_select_tf_ops=true \
  --cpu=armeabi-v7a --verbose_failures \
  //tensorflow/lite/examples/l17:prophet.so
```

**Other info / logs** (bazel command log):

```
ERROR: /root/tensorflow-master/tensorflow/core/kernels/BUILD:6576:1: C++ compilation of rule '//tensorflow/core/kernels:android_tensorflow_kernels' failed (Exit 254): clang failed: error executing command
  (cd /root/.cache/bazel/_bazel_root/510cb3499e6983b92a09020af9102ff3/execroot/org_tensorflow && \
  exec env - \
    ANDROID_BUILD_TOOLS_VERSION=29.0.2 \
    ANDROID_NDK_API_LEVEL=18 \
    ANDROID_NDK_HOME=/root/vender/android-ndk-r17c \
    ANDROID_SDK_API_LEVEL=18 \
    ANDROID_SDK_HOME=/root/vender/android-sdk \
    PATH=/anaconda3/bin:/root/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin \
    PWD=/proc/self/cwd \
    PYTHON_BIN_PATH=/root/anaconda3/bin/python \
    PYTHON_LIB_PATH=/root/anaconda3/lib/python3.7/site-packages \
    TF_CONFIGURE_IOS=0 \
  external/androidndk/ndk/toolchains/llvm/prebuilt/linux-x86_64/bin/clang -D__ANDROID_API__=18 -isystemexternal/androidndk/ndk/sysroot/usr/include/arm-linux-androideabi -target armv7-none-linux-androideabi -march=armv7-a -mfloat-abi=softfp -mfpu=vfpv3-d16 --gcc-toolchain=external/androidndk/ndk/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64 -fpic -ffunction-sections -funwind-tables -fstack-protector-strong -Wno-invalid-command-line-argument -Wno-unused-command-line-argument -no-canonical-prefixes -mthumb -Os -g0 -DNDEBUG -MD -MF bazel-out/armeabi-v7a-opt/bin/tensorflow/core/kernels/_objs/android_tensorflow_kernels/cwise_op_not_equal_to_2.pic.d -frandom-seed=bazel-out/armeabi-v7a-opt/bin/tensorflow/core/kernels/_objs/android_tensorflow_kernels/cwise_op_not_equal_to_2.pic.o -fPIC -D__CLANG_SUPPORT_DYN_ANNOTATION__ -DEIGEN_MPL2_ONLY -DEIGEN_MAX_ALIGN_BYTES=64 -DEIGEN_HAS_TYPE_TRAITS=0 -iquote . -iquote bazel-out/armeabi-v7a-opt/bin -iquote external/com_google_absl -iquote bazel-out/armeabi-v7a-opt/bin/external/com_google_absl -iquote external/nsync -iquote bazel-out/armeabi-v7a-opt/bin/external/nsync -iquote external/com_google_protobuf -iquote bazel-out/armeabi-v7a-opt/bin/external/com_google_protobuf -iquote external/zlib_archive -iquote bazel-out/armeabi-v7a-opt/bin/external/zlib_archive -iquote external/eigen_archive -iquote bazel-out/armeabi-v7a-opt/bin/external/eigen_archive -iquote external/local_config_sycl -iquote bazel-out/armeabi-v7a-opt/bin/external/local_config_sycl -iquote external/double_conversion -iquote bazel-out/armeabi-v7a-opt/bin/external/double_conversion -iquote external/farmhash_archive -iquote bazel-out/armeabi-v7a-opt/bin/external/farmhash_archive -iquote external/fft2d -iquote bazel-out/armeabi-v7a-opt/bin/external/fft2d -iquote external/gemmlowp -iquote bazel-out/armeabi-v7a-opt/bin/external/gemmlowp -isystem external/nsync/public -isystem bazel-out/armeabi-v7a-opt/bin/external/nsync/public -isystem external/com_google_protobuf/src -isystem bazel-out/armeabi-v7a-opt/bin/external/com_google_protobuf/src -isystem external/zlib_archive -isystem bazel-out/armeabi-v7a-opt/bin/external/zlib_archive -isystem external/eigen_archive -isystem bazel-out/armeabi-v7a-opt/bin/external/eigen_archive -isystem external/double_conversion -isystem bazel-out/armeabi-v7a-opt/bin/external/double_conversion -isystem external/farmhash_archive/src -isystem bazel-out/armeabi-v7a-opt/bin/external/farmhash_archive/src -std=c++11 -DEIGEN_AVOID_STL_ARRAY -Iexternal/gemmlowp -Wno-sign-compare -fno-exceptions -ftemplate-depth=900 -mfpu=neon -DTENSORFLOW_MONOLITHIC_BUILD -DTF_LEAN_BINARY -Wno-narrowing -fomit-frame-pointer -O2 --sysroot=external/androidndk/ndk/platforms/android-18/arch-arm -isystem external/androidndk/ndk/sources/cxx-stl/llvm-libc++/include -isystem external/androidndk/ndk/sources/cxx-stl/llvm-libc++abi/include -isystem external/androidndk/ndk/sources/android/support/include -isystemexternal/androidndk/ndk/sysroot/usr/include -c tensorflow/core/kernels/cwise_op_not_equal_to_2.cc -o bazel-out/armeabi-v7a-opt/bin/tensorflow/core/kernels/_objs/android_tensorflow_kernels/cwise_op_not_equal_to_2.pic.o)
Execution platform: @bazel_tools//platforms:host_platform
clang: error: unable to execute command: Killed
clang: error: clang frontend command failed due to signal (use -v to see invocation)
Android (4691093 based on r316199) clang version 6.0.2 (183abd29fc496f55536e7d904e0abae47888fc7f) (34361f192e41ed6e4e8f9aca80a4ea7e9856f327) (based on LLVM 6.0.2svn)
Target: armv7-none-linux-android
Thread model: posix
InstalledDir: external/androidndk/ndk/toolchains/llvm/prebuilt/linux-x86_64/bin
clang: note: diagnostic msg: PLEASE submit a bug report and include the crash backtrace, preprocessed source, and associated run script.
clang: note: diagnostic msg: Preprocessed source(s) and associated run script(s) are located at:
clang: note: diagnostic msg: /tmp/cwise_op_not_equal_to_2-9cbb9b.cpp
clang: note: diagnostic msg: /tmp/cwise_op_not_equal_to_2-9cbb9b.sh
clang: note: diagnostic msg:
Target //tensorflow/lite/examples/l17:prophet.so failed to build
``` |
tensorflowtensorflow | The arguments and return of tf.keras.layers.GRUCell.call make no sense at all | Bug | URL(s) with the issue: recurrent.py, L1615-L1718. Please provide a link to the documentation entry.

**Description of issue (what needs changing)**: The `states` argument to `tf.keras.layers.GRUCell.call` is indexed with `h_tm1 = states[0]  # previous memory`, and the function returns `h, [h]`, which are the same value.

**Clear description (why does this occur?)**: It is inconsistent with the PyTorch `torch.nn.GRUCell` implementation. I noticed the states issue when I was converting a project from PyTorch to tf.keras, and the same code, with just the GRUCell swapped from PyTorch to tf.keras, did not work. The error message was `tensorflow.python.framework.errors_impl.InvalidArgumentError: In[0] is not a matrix. Instead it has shape [200] (op: MatMul, name: transition/gru_cell/MatMul)`, and the solution was to replace my `hidden` with `[hidden]` for the states parameter. Furthermore, when I got my return values, they were a tuple rather than the output; upon further inspection, the tuple contains the same value twice. Is there any reason it does this? In the docs this is not explained at all.

**Correct links**: L1615-L1718. **Parameters defined**: the `states` parameter is the confusing one in question. **Returns defined**: returning `h, [h]` makes no sense. **Raises listed and defined**: irrelevant.

**Usage example**: `self.rnn = tf.keras.layers.GRUCell(num_units)`; `self.rnn(input, hidden)`, where `input`/`hidden` have shape `(n, num_units)`, doesn't work and needs to be changed to `self.rnn(input, [hidden])` to execute. Furthermore, on the LHS of `self.rnn(...)`, rather than just `x = self.rnn(...)`, I need to do `x, _ = self.rnn(...)`. Why?

**Submit a pull request?** I would gladly change this if someone would confirm this is an issue. |
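For readers hitting the same confusion, here is a minimal NumPy sketch of the GRU update (my own illustrative implementation with made-up weight packing, not the Keras source) showing the convention the issue describes: `states` is a one-element list, and the returned output and new state are literally the same array:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell_call(inputs, states, W, U, b):
    # `states` is a list with one entry, mirroring Keras's `h_tm1 = states[0]`.
    h_tm1 = states[0]
    z = sigmoid(inputs @ W[0] + h_tm1 @ U[0] + b[0])          # update gate
    r = sigmoid(inputs @ W[1] + h_tm1 @ U[1] + b[1])          # reset gate
    hh = np.tanh(inputs @ W[2] + (r * h_tm1) @ U[2] + b[2])   # candidate state
    h = z * h_tm1 + (1 - z) * hh
    # The output and the (single) new state are the same tensor, hence (h, [h]).
    return h, [h]

rng = np.random.default_rng(0)
n, d_in, d_h = 2, 3, 4
W = rng.normal(size=(3, d_in, d_h))  # input kernels for the three gates
U = rng.normal(size=(3, d_h, d_h))   # recurrent kernels
b = np.zeros((3, d_h))
x = rng.normal(size=(n, d_in))
h0 = np.zeros((n, d_h))
out, new_states = gru_cell_call(x, [h0], W, U, b)
```

The list-of-states signature exists so that all RNN cells (GRU with one state, LSTM with two) share one interface, which is also why the workaround `self.rnn(input, [hidden])` is required.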
tensorflowtensorflow | Docs could mention that Dataset sharding is deterministic | Bug | Hi everyone, thanks for your work on maintaining and developing TensorFlow. I wish to raise a suggestion for the documentation of the tf.data.Dataset Python API; in particular, I am referring to the documentation of the `shard` operation. URL(s) with the issue: r1.14 docs (shard), r2.0 docs (shard).

**Description of issue (what needs changing)**: From my understanding of the source code, the `shard` operation is deterministic, i.e. if we apply `shard` on a dataset A with some fixed values of `num_shards` and `index`, the operation will always return the same subset of dataset A. Perhaps the documentation should mention that this operation is deterministic; this would help readers understand that the sharding does not involve any randomness. Currently the docs do not mention this aspect of `shard`'s behaviour.

**Correct links** (is the link to the source code correct?): yes. **Parameters defined**: all parameters are defined and formatted correctly. **Returns defined**: yes. **Raises listed and defined**: yes. **Usage example**: yes. **Request visuals, if applicable**: no visuals currently, but this issue does not require visuals. **Submit a pull request?** I can submit a PR to update the docs if this is indeed considered a useful fix. Thanks for your time. |
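As an illustration of the behaviour being described (a toy model, not the tf.data implementation): `shard(num_shards, index)` keeps exactly the elements whose position in the dataset is congruent to `index` modulo `num_shards`, so repeated applications with the same arguments always yield the same subset:

```python
def shard(elements, num_shards, index):
    # Toy model of tf.data.Dataset.shard: keep every element whose position
    # modulo num_shards equals index. No randomness is involved.
    return [x for pos, x in enumerate(elements) if pos % num_shards == index]

data = list(range(10))
first = shard(data, num_shards=3, index=0)
second = shard(data, num_shards=3, index=0)
# first == second == [0, 3, 6, 9]
```

Determinism here depends only on element order, so applying `shard` before order-changing transformations like `shuffle` keeps the subsets stable across runs.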
tensorflowtensorflow | tf.gather not supported when params dtype is int and params is a ResourceVariable | Bug | tf.gather has both GPU and CPU kernels for params dtype int64/int32. However, when the params is a tensor wrapped in a Variable, it does not work when the dtype is int32 or int64. Wrapping the params input in e.g. a tf.identity solves the issue, but the behaviour seems like a bug. Minimal case to reproduce:

```python
import tensorflow as tf

print(tf.__version__)

tensor_float = tf.range(10, dtype=tf.float32)
tensor_int = tf.range(10, dtype=tf.int64)
var_float = tf.Variable(tensor_float)
var_int = tf.Variable(tensor_int)

for i in [tensor_float, tensor_int, var_float, var_int]:
    try:
        tf.gather(i, tf.constant(1, dtype=tf.int64))
        print("worked for {}".format(i))
    except tf.errors.NotFoundError as e:
        print("didn't work for {}".format(i))
        print(e)
```

Output:

```
2.0.0-beta1
worked for [0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
worked for [0 1 2 3 4 5 6 7 8 9]
worked for <tf.Variable ...>
didn't work for <tf.Variable ...>
No registered 'ResourceGather' OpKernel for GPU devices compatible with node {{node ResourceGather}}
 (OpKernel was found, but attributes didn't match) Requested Attributes: Tindices=DT_INT64, batch_dims=0, dtype=DT_INT64, validate_indices=true
 Registered:
  device='XLA_CPU'; Tindices in [DT_INT32, DT_INT64]; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, DT_BFLOAT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]
  device='XLA_GPU'; Tindices in [DT_INT32, DT_INT64]; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, DT_QINT32, DT_BFLOAT16, DT_HALF, DT_UINT32, DT_UINT64]
  device='XLA_CPU_JIT'; Tindices in [DT_INT32, DT_INT64]; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, DT_BFLOAT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]
  device='XLA_GPU_JIT'; Tindices in [DT_INT32, DT_INT64]; dtype in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT8, DT_QINT32, DT_BFLOAT16, DT_HALF, DT_UINT32, DT_UINT64]
  device='GPU'; dtype in [DT_VARIANT]; Tindices in [DT_INT64]
  device='GPU'; dtype in [DT_VARIANT]; Tindices in [DT_INT32]
  device='GPU'; dtype in [DT_DOUBLE]; Tindices in [DT_INT64]
  device='GPU'; dtype in [DT_DOUBLE]; Tindices in [DT_INT32]
  device='GPU'; dtype in [DT_FLOAT]; Tindices in [DT_INT64]
  device='GPU'; dtype in [DT_FLOAT]; Tindices in [DT_INT32]
  device='GPU'; dtype in [DT_HALF]; Tindices in [DT_INT64]
  device='GPU'; dtype in [DT_HALF]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_QINT32]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_QINT32]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_QUINT8]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_QUINT8]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_QINT8]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_QINT8]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_VARIANT]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_VARIANT]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_RESOURCE]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_RESOURCE]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_STRING]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_STRING]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_BOOL]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_BOOL]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_COMPLEX128]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_COMPLEX128]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_COMPLEX64]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_COMPLEX64]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_DOUBLE]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_DOUBLE]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_FLOAT]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_FLOAT]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_BFLOAT16]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_BFLOAT16]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_HALF]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_HALF]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_INT8]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_INT8]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_UINT8]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_UINT8]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_INT16]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_INT16]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_UINT16]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_UINT16]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_INT32]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_INT32]; Tindices in [DT_INT32]
  device='CPU'; dtype in [DT_INT64]; Tindices in [DT_INT64]
  device='CPU'; dtype in [DT_INT64]; Tindices in [DT_INT32]
 [Op:ResourceGather] name: Gather/
``` |
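A hypothetical sketch of the dispatch behind that error (illustrative Python, not TensorFlow code): kernels are registered per (op, device, dtype), and reading a ResourceVariable goes through ResourceGather, which, per the log above, has no GPU kernel for integer param dtypes even though plain Gather does:

```python
# Toy kernel registry keyed by (op, device, dtype), modeled on the log above.
REGISTRY = {
    ("Gather", "GPU", "int64"), ("Gather", "GPU", "int32"),
    ("Gather", "GPU", "float32"),
    ("ResourceGather", "GPU", "float32"), ("ResourceGather", "GPU", "half"),
    ("ResourceGather", "CPU", "int64"), ("ResourceGather", "CPU", "int32"),
}

def lookup_kernel(op, device, dtype):
    # Mirrors the NotFoundError: lookup fails when no matching registration exists.
    if (op, device, dtype) not in REGISTRY:
        raise LookupError(f"No registered '{op}' OpKernel for {device} devices "
                          f"compatible with dtype {dtype}")
    return (op, device, dtype)
```

This also models why `tf.identity(var)` helps: the identity produces a plain tensor, so the subsequent gather dispatches to the Gather kernels instead of ResourceGather.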
tensorflowtensorflow | TPUStrategy incompatibility with tf.io.read_file | Bug | **System information**
- I am using Colaboratory and Google Cloud
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Debian 4.9.168-1+deb9u5
- Mobile device: no
- TensorFlow installed from (source or binary): unknown
- TensorFlow version: 1.14
- Python version: 3.5.3
- Bazel version / GCC version / CUDA/cuDNN version / GPU model and memory: n/a

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`.

**Describe the current behavior**: the script ends with a segmentation fault or abort.

**Describe the expected behavior**: just a clean run.

**Code to reproduce the issue**:

```python
# Import all necessary libraries
import tensorflow as tf
import cv2
import random as rnd
import os

if int(tf.__version__.split('.')[0]) < 2:
    print('TF version:', tf.__version__)
    tf.enable_eager_execution()

@tf.function
def read_test(filename):
    img_raw = tf.io.read_file(tf.squeeze(filename))
    return img_raw

import numpy as np
image = np.array([[rnd.randint(0, 255) for _ in range(936)] for _ in range(1024)])
cv2.imwrite('0.png', image)
raw = read_test(tf.constant(['0.png']))

tf.keras.backend.clear_session()
if 'TPU_NAME' in os.environ:
    tpu_worker = 'grpc://' + os.environ['TPU_NAME']
    resolver = tf.contrib.cluster_resolver.TPUClusterResolver(tpu=tpu_worker)
    tf.config.experimental_connect_to_host(resolver.master())
    tf.contrib.distribute.initialize_tpu_system(resolver)
    strategy = tf.contrib.distribute.TPUStrategy(resolver)
elif 'COLAB_TPU_ADDR' in os.environ:
    tpu_worker = 'grpc://' + os.environ['COLAB_TPU_ADDR']
    resolver = tf.contrib.cluster_resolver.TPUClusterResolver(tpu=tpu_worker)
    tf.config.experimental_connect_to_host(resolver.master())
    tf.contrib.distribute.initialize_tpu_system(resolver)
    strategy = tf.contrib.distribute.TPUStrategy(resolver)
else:
    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

print(strategy)
with strategy.scope():
    pass
print('success')
```

**Other info / logs**: the previous code is part of an image-processing neural network. The image files are read while running the code to minimize the disk usage. The code runs smoothly in a CPU or GPU environment; however, it crashes in a TPU one. |
tensorflowtensorflow | Debugging tf.keras model with tfdbg gets "TypeError: Fetch argument None has invalid type" | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version: tensorflow-gpu 1.14.0
- Python version: Python 3.7.1
- CUDA/cuDNN version: CUDA 10.1, cuDNN 7.5
- GPU model and memory: GTX 2060

**Describe the current behavior**: I want to debug a tf.keras model but get an error message like:

```
Traceback (most recent call last):
  File "debug_keras.py", line 82, in <module>
    vae.add_loss(loss_fn(x_in, x_out))
  File "/home/zqh/miniconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 917, in add_loss
    new_layers = base_layer_utils.create_keras_history(symbolic_losses)
  File "/home/zqh/miniconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 200, in create_keras_history
    _, created_layers = _create_keras_history_helper(tensors, set(), [])
  File "/home/zqh/miniconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 244, in _create_keras_history_helper
    constants[i] = backend.function([], op_input)([])
  File "/home/zqh/miniconda3/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 3292, in __call__
    run_metadata=self.run_metadata)
  File "/home/zqh/miniconda3/lib/python3.7/site-packages/tensorflow/python/debug/wrappers/framework.py", line 628, in wrapped_runner
    callable_runner(*args, feed_values)
  File "/home/zqh/miniconda3/lib/python3.7/site-packages/tensorflow/python/debug/wrappers/framework.py", line 569, in run
    run_metadata=run_metadata)
  File "/home/zqh/miniconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/home/zqh/miniconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1158, in _run
    self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
  File "/home/zqh/miniconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 474, in __init__
    self._fetch_mapper = _FetchMapper.for_fetch(fetches)
  File "/home/zqh/miniconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 261, in for_fetch
    type(fetch)))
TypeError: Fetch argument None has invalid type <class 'NoneType'>
```

**Describe the expected behavior**: it should work.

**Code to reproduce the issue**: run this Python script, then type `run -t 10` in the tfdbg CLI; you will get the error message above. (Some variable names below, e.g. `z_mean`/`z_log_var`, were lost in this report and have been restored from the standard VAE structure.)

```python
import tensorflow.python as tf
from tensorflow.python import keras as k
from tensorflow.python.keras import layers as kl
from tensorflow.python.keras import activations as ka
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import norm
from scipy.special import expit
from tensorflow.python import debug as tfdebug

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
k.backend.set_session(tfdebug.LocalCLIDebugWrapperSession(tf.Session(config=config)))
# k.backend.set_session(tf.Session(config=config))

(x_train, y_train), (x_test, y_test) = k.datasets.fashion_mnist.load_data()
x_train = np.expand_dims(x_train, -1) / 255
x_test = np.expand_dims(x_test, -1) / 255

image_size = 28
input_shape = (image_size, image_size, 1)
batch_size = 100
kernel_size = 3
filters = 16
latent_dim = 2 * 2
epochs = 30

tf.set_random_seed(9102)

def encoder_fn(inputs, filters):
    x = inputs
    for i in range(2):
        filters *= 2
        x = kl.Conv2D(filters=filters, kernel_size=kernel_size,
                      activation='relu', strides=2, padding='same')(x)
    x = kl.Flatten()(x)
    x = kl.Dense(32, activation='relu')(x)
    z_mean = kl.Dense(latent_dim)(x)
    z_log_var = kl.Dense(latent_dim)(x)
    return z_mean, z_log_var

def sample(args):
    z_mean, z_log_var = args
    epsilon = tf.random_normal(shape=tf.shape(z_mean))
    return z_mean + tf.exp(z_log_var / 2) * epsilon

def decoder_fn(z, filters):
    x = kl.Dense(7 * 7 * 32, activation='relu')(z)
    x = kl.Reshape((7, 7, 32))(x)
    for i in range(2):
        x = kl.Conv2DTranspose(filters=filters, kernel_size=kernel_size,
                               activation='relu', strides=2, padding='same')(x)
        filters //= 2
    x = kl.Conv2DTranspose(1, kernel_size, activation=None, padding='same')(x)
    return x

def loss_fn(x_in, x_out):
    xent_loss = tf.reduce_sum(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=x_in, logits=x_out),
        axis=[1, 2, 3])
    # xent_loss = tf.reduce_sum(k.backend.binary_crossentropy(x_in, x_out), axis=[1, 2, 3])
    kl_loss = -0.5 * tf.reduce_sum(
        1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
    vae_loss = tf.reduce_mean(xent_loss + kl_loss)
    return vae_loss

x_in = k.Input(shape=(image_size, image_size, 1))
z_mean, z_log_var = encoder_fn(x_in, filters)
z = kl.Lambda(sample, output_shape=(latent_dim,))([z_mean, z_log_var])
latent_in = k.Input(shape=(latent_dim,), dtype=tf.float32)
outputs = decoder_fn(latent_in, filters)
decoder = k.Model(latent_in, outputs)
x_out = decoder(z)
encoder = k.Model(x_in, [z_mean, z_log_var])
vae = k.Model(x_in, x_out)
vae.add_loss(loss_fn(x_in, x_out))
vae.compile(k.optimizers.Nadam(0.001))
vae.fit(x=x_train, batch_size=batch_size, epochs=epochs, shuffle=True,
        validation_data=(x_test, None))
```

**Other info / logs**: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback; large logs and files should be attached. |
tensorflowtensorflow | TF-TRT: Assertion `mParams.k > 0` failed | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution: Linux Ubuntu 16.04/18.04
- Mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 1.14
- Python version: 2.7
- Bazel version / GCC version (if compiling from source):
- CUDA/cuDNN version: 10.1
- GPU model and memory: T4

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`.

**Describe the current behavior**: see the following error:

```
2019-08-16 03:21:42.299651: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:41] DefaultLogger Parameter check failed at: ../builder/Layers.cpp::TopKLayer::2009, condition: k > 0 && k <= MAX_TOPK_K
b_engine[0]: ../builder/Layers.cpp:2048: virtual bool nvinfer1::TopKLayer::validate(const std::vector<const nvinfer1::NetworkLayer*>&, ValidationContext&) const: Assertion `mParams.k > 0' failed.
```

**Describe the expected behavior**: What is this error for? How do I stop it from happening?

**Code to reproduce the issue**: trying to create a snippet.

**Other info / logs**: |
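For what it's worth, that message comes from TensorRT's layer validation rather than from TensorFlow itself. A toy sketch of the kind of check involved (the function name and the numeric bound below are illustrative, not TensorRT's actual API):

```python
# Illustrative only: models TensorRT's TopK parameter check
# ("condition: k > 0 && k <= MAX_TOPK_K" / assertion `mParams.k > 0`).
MAX_TOPK_K = 3840  # hypothetical bound; the real limit is defined by TensorRT

def validate_topk(k):
    if not (0 < k <= MAX_TOPK_K):
        raise ValueError(
            f"Parameter check failed: condition: k > 0 && k <= MAX_TOPK_K (got k={k})")
    return k
```

In this model, a converted graph that ends up creating a TopK layer with k == 0 trips the assertion, so one debugging direction is to find which op in the frozen graph gets converted to TopK with a zero or undetermined k.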
tensorflowtensorflow | tf.math ops do not work on MirroredVariable | Bug | **System information**
- Have I written custom code: yes
- OS platform and distribution: macOS Mojave 10.14.5
- TensorFlow installed from (source or binary): binary (tf-nightly)
- TensorFlow version: 1.15.0.dev20190729
- Python version: 3.7.4

**Describe the current behavior**: when I am using tf.distribute.MirroredStrategy with multiple replicas, tf.math ops do not work on a MirroredVariable when inside a cross-replica scope.

**Describe the expected behavior**: the MirroredVariable class is a subclass of the DistributedDelegate class, which, if I understand correctly, means that MirroredVariable is supposed to act like a regular tensor that you can perform ops on. This works fine with most standard ops like multiplication, division, subtraction, etc. However, if you try to use any tf.math op, you get `TypeError: Failed to convert object of type <MirroredVariable> to Tensor`.

**Code to reproduce the issue**:

```python
import tensorflow as tf

def merge(strategy, var):
    var = strategy.extended.reduce_to(tf.distribute.ReduceOp.SUM, var, var)
    return var + 1              # this does work
    # return tf.math.add(var, 1)  # this doesn't work

def run(var):
    var = var + var
    return tf.distribute.get_replica_context().merge_call(merge, args=(var,))

strategy = tf.distribute.MirroredStrategy(['/cpu:0', '/cpu:1'])
with strategy.scope():
    var = tf.Variable(2, dtype=tf.float32)
result = strategy.experimental_run_v2(run, args=(var,))
print(result)
``` |
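A toy model of the mechanism (purely illustrative Python, not the TF source): a delegate object forwards Python operators to its wrapped value, so `var + 1` works, while a library function that first converts its arguments to a known tensor type rejects the wrapper with a TypeError much like the one above:

```python
class DistributedDelegateToy:
    """Wraps a per-replica value and forwards Python operators to it."""
    def __init__(self, value):
        self._value = value

    def __add__(self, other):
        # `delegate + 1` is handled by this dunder method, so it works.
        return self._value + other

def convert_to_tensor_toy(x):
    # A library function that insists on a known type rejects the wrapper.
    if isinstance(x, DistributedDelegateToy):
        raise TypeError(f"Failed to convert object of type {type(x)} to Tensor")
    return x

def math_add_toy(a, b):
    return convert_to_tensor_toy(a) + convert_to_tensor_toy(b)

var = DistributedDelegateToy(2.0)
```

Consistent with this model, one workaround is to explicitly materialize the delegate's value (e.g. read the variable) before passing it to a tf.math op.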
tensorflowtensorflow | valueerror can not add function trtengineop 0 native segment because a different function with the same name already exist | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary tensorflow version use command below 1 14 and dev python version 2 7 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version 10 1 gpu model and memory t4 you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior not able to call trt create inference graph more than once to create tf trt node for disjoint sub graph throw the above error describe the expect behavior should not throw the above error code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem from tensorflow contrib slim net import resnet v1 import tensorflow as tf import tensorflow contrib slim as slim import numpy as np import time import os from pil import image import tensorflow contrib tensorrt as tftrt from tensorflow python compiler tensorrt import trt convert as tftrt import argparse path to ckpt resnet v1 50 ckpt test image path elephant small jpg tabby tiger cat jpg batch size 25 max batch size 1000 height 224 width 224 channel 3 def load image into numpy array image batch size 1 I m width I m height image size x np 
array image getdata reshape height width channel astype np uint8 x np expand dim x axis 0 xsl list x shape xsl 0 batch size max batch size x np broadcast to x 0 xsl return x def run resnet 50 create graph input tf placeholder tf float32 shape batch size height width channel with slim arg scope resnet v1 resnet arg scope net end point resnet v1 resnet v1 50 input be train false saver tf train saver with tf session as sess saver restore sess path to ckpt representation tensor sess graph get tensor by name resnet v1 50 pool5 0 if you don t know name like these consider refer to corresponding model file or generate pbtxt file as mention in civilman628 s answer above img np one batch size height width channel load image here with size 1 224 224 3 feature sess run representation tensor placeholder 0 img print feature feature def rename ckpt save name with tf session as sess restore the tf checkpoint for var name var shape in tf contrib framework list variable path to ckpt var tf contrib framework load variable path to ckpt var name new name part name var name split 1 new name join new name part var tf variable var name new name print var name var shape var name ckpt dir tmp name if not os path exist ckpt dir os mkdir ckpt dir sess run tf global variable initializer saver tf train saver saver save sess ckpt dir def rename ckpt mem sess name for var name var shape in tf contrib framework list variable path to ckpt var tf contrib framework load variable path to ckpt var name new name part name var name split 1 new name join new name part var tf variable var name new name print var name var shape var name def build graph sess input graph name graph1 input tf placeholder tf float32 shape batch size height width channel with slim arg scope resnet v1 resnet arg scope net end point resnet v1 resnet v1 50 input be train false scope name with input graph as default get handle to input and output tensor op tf get default graph get operation all tensor name output name for op in op 
for output in op output tensor dict for key in pool5 if name tensor name key 0 else tensor name name key 0 if tensor name in all tensor name tensor dict key tf get default graph get tensor by name tensor name print tensor name tensor name tensor dict key else print tensor name tensor name not find restore the tf checkpoint saver tf train saver saver restore sess tmp name return tensor dict class tftrt staticmethod def build sess graph name import name import tensor dict build graph sess graph name name outputl for I nname in enumerate tensor dict nvalue tensor dict nname name split 0 print I nname tensor dict nname name outputl append nvalue print outputl outputl node name n name for n in graph as graph def node import pdb pdb set trace freeze the graph frozen graph tf graph util convert variable to constant sess sess graph def output node name outputl remove training node frozen graph tf compat v1 graph util remove training node frozen graph now you can create a tensorrt inference graph from your frozen graph tftrt graph tftrt create inference graph input graph def frozen graph output outputl max batch size max batch size max workspace size byte 1024 1024 1024 precision mode fp16 output nod tf import graph def tftrt graph return element outputl name import name tensor dict for opname opnode in zip outputl output nodes tensor dict opnode name opnode output 0 print tensor dict tensor dict return tensor dict staticmethod def run sess tensor dict image tensor image np expand output dict sess run tensor dict feed dict image tensor image np expand return output dict class tf staticmethod def build sess graph name import name none return build graph sess graph name name staticmethod def run sess tensor dict image tensor image np expand output dict sess run tensor dict feed dict image tensor image np expand return output dict def run runtime tf if runtime tf runclass tf import name1 import name2 placeholder name1 placeholder 0 placeholder name2 placeholder 1 0 elif 
runtime tftrt runclass tftrt import name1 import1 import name2 import2 placeholder name1 import1 placeholder 0 placeholder name2 import2 placeholder 1 0 graph tf graph config tf configproto config gpu option allow growth true config gpu option per process gpu memory fraction 0 33 config graph option rewrite option auto mixed precision 1 with tf session graph graph config config as sess rename ckpt mem sess resnet v1 50 1 rename ckpt mem sess resnet v1 50 2 tensor dict1 runclass build sess graph name resnet v1 50 1 import name import name1 tensor dict2 runclass build sess graph name resnet v1 50 2 import name import name2 tensorboard dir os environ tensorboard dir file writer tf summary filewriter tensorboard dir sess graph image path test image path 0 print image path format image path image image open image path image image resize width height image np expand load image into numpy array image batch size batch size image tensor1 graph get tensor by name placeholder name1 image tensor2 graph get tensor by name placeholder name2 time0 time time for I in range 1 1001 output dict1 runclass run sess tensor dict1 image tensor1 image np expand output dict2 runclass run sess tensor dict2 image tensor2 image np expand if I 100 0 time take time time time0 I 1 0 print I time take time take time time time0 I 1 0 print time take time take output dict output dict1 output dict2 if name main parser argparse argumentparser parser add argument runtime default tf help tf or tftrt parser add argument rewrite ckpt action store true help rename checkpoint args parser parse args if args rewrite ckpt rename ckpt save resnet v1 50 1 rename ckpt save resnet v1 50 2 run args runtime to run pythonpath pythonpath model research slim python tftrt resnet2x py runtime tftrt other info log file usr local lib python2 7 dist package tensorflow python util deprecation py line 507 in new func return func args kwargs file usr local lib python2 7 dist package tensorflow python framework importer py line 
430, in import_graph_def: raise ValueError(str(e)) -- ValueError: Cannot add function 'TRTEngineOp_0_native_segment' because a different function with the same name already exists.
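The ValueError at the end of this log comes from registering two functions with the same name in one graph after two TF-TRT imports. A plain-Python sketch (no TensorFlow; `FunctionRegistry` and its method names are hypothetical) of the failure mode and of the scoped-import idea that avoids it:

```python
class FunctionRegistry:
    """Sketch of per-graph function registration: re-registering the same
    name with a *different* body must fail, which is the ValueError the
    log shows when two TF-TRT conversions collide in one graph."""

    def __init__(self):
        self.funcs = {}

    def register(self, name, body, scope=""):
        key = "%s/%s" % (scope, name) if scope else name
        if key in self.funcs and self.funcs[key] != body:
            raise ValueError(
                "Cannot add function '%s' because a different function "
                "with the same name already exists" % key)
        self.funcs[key] = body


reg = FunctionRegistry()
reg.register("TRTEngineOp_0_native_segment", body="segment-A")

# A second, different segment under the same bare name collides:
try:
    reg.register("TRTEngineOp_0_native_segment", body="segment-B")
    collided = False
except ValueError:
    collided = True

# Registering the second copy under a distinct scope avoids the clash,
# analogous to giving each import a unique name prefix:
reg.register("TRTEngineOp_0_native_segment", body="segment-B", scope="import2")
```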
tensorflowtensorflow | tf.keras.backend.function ignores input shape | Bug | System information: Have I written custom code: yes. OS platform and distribution: Windows 10. TensorFlow installed from: conda install/update tensorflow. TensorFlow version: 1.14.0. Python version: 3.7.3. CUDA/cuDNN version: N/A. GPU model and memory: N/A. Describe the current behavior: tf.keras.backend.function seems to ignore the explicitly defined input shape, almost as if the model were defined using a None shape (dynamic reshape). Describe the expected behavior: I expect an error to be thrown when the wrong input shape is fed to the model; instead it seems as if the model is run using a dynamic reshape. Code to reproduce the issue: import numpy as np; import tensorflow.keras as k; import tensorflow.keras.backend as kb; sub_in = k.layers.Input(shape=(1, 5, 1)); x = k.layers.Conv2D(1, 1)(sub_in); x = k.layers.Flatten()(x); sub = k.models.Model(inputs=sub_in, outputs=x); main_in = k.layers.Input(shape=(1, 10, 1)); main_in1 = k.layers.Lambda(lambda x: x[:, :, :5])(main_in); main_in2 = k.layers.Lambda(lambda x: x[:, :, 8:])(main_in); x1 = sub(main_in1); x2 = sub(main_in2); x = k.layers.concatenate([x1, x2]); main = k.models.Model(inputs=main_in, outputs=x); arr = np.arange(30).reshape(3, 1, 10, 1); pred = main.predict(arr); print(pred.shape); suboutfunc = kb.function([sub.input], [sub.output]); subout = suboutfunc([arr]); print(subout[0].shape). Run this and you should see output shapes of (3, 7) and (3, 10), which in my opinion should not be possible: the sub model should only accept inputs of shape (1, 5, 1), yet in the two examples above inputs of shape (1, 2, 1) and (1, 10, 1) are fed to it and it works. The first example uses the predict method, giving it the (1, 2, 1) input via the second Lambda split of the main input -- no error. The second example calls backend.function directly on the sub model with a (1, 10, 1) input -- also no error. Maybe this is the desired behavior of backend.function; it is not what I expected, hence raising it in case this is a bug. Thank you.
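Until backend.function enforces declared shapes, the error the reporter expected can be raised by validating batch shapes by hand before the call. `check_input_shape` below is a hypothetical helper (plain numpy, not a Keras API):

```python
import numpy as np


def check_input_shape(expected, batch):
    """Raise if a batch of samples does not match the declared per-sample
    shape. None means "any size" on that axis. Hypothetical helper --
    the check the issue expected backend.function to perform itself."""
    got = batch.shape[1:]
    if len(got) != len(expected) or any(
            e is not None and e != g for e, g in zip(expected, got)):
        raise ValueError("expected shape %s, got %s" % (expected, got))


declared = (1, 5, 1)                 # the sub-model's declared Input shape
good = np.zeros((3, 1, 5, 1))
bad = np.zeros((3, 1, 10, 1))        # the shape the issue feeds unnoticed

check_input_shape(declared, good)    # passes silently
try:
    check_input_shape(declared, bad)
    silently_accepted = True
except ValueError:
    silently_accepted = False
```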
tensorflowtensorflow | tf keras model load model take forever to load | Bug | system information have I write custom code yes os platform and distribution window 10 version 1607 tensorflow instal from pip tensorflow version git version v1 12 1 8794 ge36271a61d version 1 15 0 dev20190814 cuda cudnn version 10 0 7 gpu model and memory nvidia gtx 1050 ti 4096 mb describe the current behavior the code below produce this output save successfull loading w0815 15 02 11 705600 17844 deprecation py 506 from c tensorflow anduin lib site package tensorflow core python op init op py 97 call glorotuniform init from tensorflow python op init op with dtype be deprecate and will be remove in a future version instruction for update call initializer instance with the dtype argument instead of pass it to the constructor w0815 15 02 11 706604 17844 deprecation py 506 from c tensorflow anduin lib site package tensorflow core python op init op py 97 call zero init from tensorflow python op init op with dtype be deprecate and will be remove in a future version instruction for update call initializer instance with the dtype argument instead of pass it to the constructor even after 30 min it doesn t print the expect output load successfull describe the expect behavior succesful loading in under 1 min code to reproduce the issue import tensorflow as tf from tensorflow keras layer import dense input lambda from tensorflow keras model import model sequential from scipy import sparse import numpy as np def layer lambda input x sparse input x 0 dense input x 1 dense tf transpose dense y tf sparse sparse dense matmul sparse dense return tf transpose y dense mat np eye 30 30 dtype np float32 sparse mat sparse coo matrix dense mat sparse indice np mat sparse mat row sparse mat col transpose sparse tensor tf sparsetensor sparse indice sparse mat data sparse mat shape model sequential model input input shape 20 x dense 20 model input x dense 30 x x lambda layer lambda output shape none 30 30 sparse 
tensor x model model model input x model predict np one 20 model save model h5 print save successfull print loading model load tf keras model load model model h5 custom object layer lambda layer lambda print load successfull this error occur after fix issue 31607 |
tensorflowtensorflow | Masks are not propagated into nested Sequential Keras layers | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS. TensorFlow installed from: Docker / pip. TensorFlow version: 1.14.0 / 2.0.0b. Python version: 3.6.8. Describe the current behavior: when a masked input is fed to a tf.keras.models.Sequential, the mask is ignored. Describe the expected behavior: I would expect the mask to be propagated as if the inner layers were not in a Sequential layer. Code to reproduce the issue -- this works: import tensorflow as tf; def check_mask(inputs, mask=None): assert mask is not None; return inputs; model = tf.keras.models.Sequential([tf.keras.layers.Masking(0., input_shape=(1,)), tf.keras.layers.Lambda(check_mask)]). This fails due to the failing assert: model = tf.keras.models.Sequential([tf.keras.layers.Masking(0., input_shape=(1,)), tf.keras.models.Sequential([tf.keras.layers.Lambda(check_mask)])]). Other info / logs: I think I have narrowed this down to a cache-invalidation issue in tf.keras.layers.Layer, _should_compute_mask (L2055-L2059). From my debugging, it seems that whenever the issue above appears, _should_compute_mask is cached as False, even though evaluating the actual property expression results in True. Since this problem is in a base layer, the issue might be affecting other layers as well.
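The suspected cause -- a mask-support flag cached before the inner layers have settled -- can be illustrated without TensorFlow. All names below are hypothetical; this only sketches the cache-invalidation failure mode the report describes, not the actual Keras code:

```python
class Layer:
    """Sketch of a layer whose mask-support decision is cached on first
    access. If the flag is read before masking is enabled (e.g. before a
    nested model is wired up), the stale False sticks."""

    def __init__(self):
        self.supports_masking = False
        self._cached = None

    @property
    def should_compute_mask(self):
        if self._cached is None:
            # Cached on first access and never invalidated afterwards.
            self._cached = self.supports_masking
        return self._cached


layer = Layer()
stale = layer.should_compute_mask   # accessed before masking is enabled
layer.supports_masking = True       # masking becomes relevant later
fresh = layer.should_compute_mask   # still False: the stale cache wins
```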
tensorflowtensorflow | custom op use eigen matrix | Bug | system information os platform and distribution e g linux ubuntu 16 04 win 10 pro mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary source tensorflow version 1 13 1 python version 3 6 7 instal use virtualenv pip conda from source bazel version if compile from source 0 21 0 gcc compiler version if compile from source n a cuda cudnn version cuda 10 0 cudnn 7 6 2 gpu model and memory rog strix geforce gtx 1080ti 11 g describe the problem I m try to develop a tensorflow custom op use cuda and inside the op I need to use eigen to run some matrix operation but when I try to calculate the inverse of a size 8 8 matrix I ve encounter several problem this function be write in cuda kernel in cuda kernel function file cu cc I have include c include include include third party eigen3 unsupported eigen cxx11 tensor include tensorflow core framework op h include tensorflow core framework op kernel h include include and in the kernel function I just try a simple test c typedef eigen matrix matrix8f global void csmtorcmkernel eigen map csm map eigen map reg vec eigen map offset8 12160 8 3 eigen map offset3 9728 3 3 eigen map offset4 12160 4 3 eigen map offset6 9728 6 3 eigen map rcm map matrix8f a matrixxf identity 8 8 a 0 0 2 inv a inv a matrix8f inv a a inverse you can ignore the input parameter they aren t use in this test program and when I try to build this op with bazel I get follow error c user administrator bazel zhaozixiao sr6amwum execroot org tensorflow external eigen archive eigen src core solvetriangular h 185 error function eigen block operator const eigen block 0 eigen stride 0 0 1 1 false 1 1 false with xprtype eigen block 0 eigen stride 0 0 1 1 false blockrow 1 blockcol 1 innerpanel false declare implicitly can not be reference it be a deleted function c user administrator bazel zhaozixiao sr6amwum execroot org tensorflow 
external/eigen_archive/Eigen/src/Core/GenericPacketMath.h:368: error: asm operand type size (8) does not match type/size implied by constraint 'r'. If I try to calculate the inverse of a 4x4 or smaller matrix, there isn't any problem. In fact, I find that TensorFlow itself can perform large matrix operations (matrix_inverse, for example), but it seems they don't use Eigen structures. So I'd like to know whether CUDA kernels support operations on matrices larger than 4x4 based on Eigen. Thank you.
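As a host-side sanity check, the 8x8 inverse the kernel attempts is easy to reproduce with numpy. Eigen's closed-form analytic inverse only covers sizes up to 4x4; larger sizes go through an LU decomposition, which is apparently the part that fails to build in device code. This sketch verifies only the math, not the CUDA build:

```python
import numpy as np

# Same test matrix the kernel builds: an 8x8 identity with a(0,0) = 2.
a = np.eye(8, dtype=np.float32)
a[0, 0] = 2.0

# LU-based inverse -- the same route Eigen takes for sizes above 4x4.
inv_a = np.linalg.inv(a)

identity = a @ inv_a   # should recover the identity matrix
```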
tensorflowtensorflow | strange result when fetch variable under a distribute strategy | Bug | system information have I write custom code yes os platform and distribution e g linux ubuntu 16 04 macos mojave 10 14 5 tensorflow instal from source or binary binary tf nightly tensorflow version use command below 1 15 0 dev20190729 python version 3 7 4 describe the current behavior when I be use any distribute strategy and I fetch a tf variable I get a result that look like array 10 44 47 dtype resource u1 instead I have to wrap the variable in a tf identity call in order to get its value properly describe the expect behavior the fetched result should be the value of the variable code to reproduce the issue python import tensorflow as tf import numpy as np import sys class runhook tf train sessionrunhook def before run self run context return tf train sessionrunarg fetch var 0 def after run self run context run value print run value result sys exit 0 def model fn feature label mode param var tf get variable initializer tf constant 1 0 dtype tf float32 name var dtype tf float32 trainable true loss tf identity var opt tf train adamoptimizer 0 001 global step tf train get or create global step train op opt minimize loss global step global step return tf estimator estimatorspec mode mode loss loss train op train op strategy tf distribute mirroredstrategy session config tf configproto config tf estimator runconfig train distribute strategy session config session config log step count step 1 save checkpoint step float inf classifi tf estimator estimator model fn model fn config config x np array 1 2 3 4 y np array 5 6 7 8 train input fn tf estimator input numpy input fn x y batch size 1 num epoch none shuffle true tf estimator train and evaluate classifier train spec tf estimator trainspec input fn lambda train input fn hook runhook eval spec tf estimator evalspec input fn lambda train input fn |
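The odd `array([...], dtype='...u1')` result suggests the raw variable object resolves to its resource handle, while an explicit read (the tf.identity wrapping described above) resolves to the value. A plain-Python sketch of that distinction (all names hypothetical, no TensorFlow):

```python
class ResourceVariable:
    """Toy stand-in: the variable object carries a resource handle; only
    an explicit read op yields the value (mirrors the tf.identity
    workaround from the report)."""

    def __init__(self, value):
        self._value = value
        self.handle = b"resource-handle-bytes"

    def read_value(self):
        return self._value


def fetch(x):
    # Mimic the reported behavior: fetching the variable itself returns
    # the handle, which prints like an array of raw uint8 bytes.
    return x.handle if isinstance(x, ResourceVariable) else x


var = ResourceVariable(1.0)
raw = fetch(var)                 # the surprising handle bytes
value = fetch(var.read_value())  # the workaround: fetch an explicit read
```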
tensorflowtensorflow | incorrect value in result when use tf math log | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device no idea tensorflow instal from source or binary binary use pip3 install tensorflow version use command below 1 14 python version 3 6 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version cuda9 0 cudnn 7 6 gpu model and memory rtx 2080ti when I run the follow code use cpu result tf math log tf constant 280 41303540865516 100 dtype tf float32 numpy I get 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 6362634 5 636264 5 636264 5 636264 5 636264 I find result 0 be not equal to result 1 the first one be 5 636263370513916 while the latter one be equal to 5 636263847351074 I believe it be a bug code to reproduce the issue from future import absolute import division print function unicode literal import 
tensorflow as tf; from random import ...; tf.enable_eager_execution(); with tf.device('cpu:0'): result = tf.math.log(tf.constant([280.41303540865516] * 100, dtype=tf.float32)).numpy(); print('{} vs {}'.format(result[0], result[1])). The GPU seems to have no such issue.
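The two answers differ by exactly one float32 ulp, which is typical when a vectorized log kernel handles most of the array in SIMD lanes and the tail in a scalar loop: each result is within rounding error of the true value, so this is a precision quirk rather than corrupted data. The gap can be checked with numpy:

```python
import numpy as np

# The two values reported above.
a = 5.636263370513916   # result[0]
b = 5.636263847351074   # result[1]

# Spacing between adjacent float32 values near 5.636 (one ulp).
ulp = float(np.spacing(np.float32(a)))

# The reported results are neighbouring float32 values: b - a == one ulp.
gap_is_one_ulp = abs((b - a) - ulp) < 1e-15
```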
tensorflowtensorflow | dataset prefetch not work as expect not store datum in memory | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 lt tensorflow instal from source or binary conda forge tensorflow version use command below unknown 1 14 0 python version python 3 7 3 cuda cudnn version nvidia smi 418 67 driver version 418 67 cuda version 10 1 gpu model and memory quadro rtx 6000 24190mib exact command to reproduce describe the current behavior I be train a small lstm model and until recently I could use dataset from tensor slice read numpy array directly because all training datum fit into memory unfortunately after add some new datum I run into the 2 gb graph memory limitation and be force to switch to use tfrecord and tfrecorddataset however the actual training datum still fit into ram and I want to make sure it be prefetche even when use the tfrecorddataset therefore I try to use the dataset prefetch methodology to achieve this assume a buffer will be create and constantly fill with datum however it do not work in fact there seem to be little to no difference compare a version with and without a final prefetch x in the data pipeline see the animate gif below tf prefetch the actual dataset be filter in the pipeline and the training stall whenever a sequence of value that be filter out be occur in the datum only a few value in each datum tfrecord file of which many exist be relevant to illustrate this further the datum layout be similar to this file 1 file 2 file 3 file 4 where denote irrelevant and denote relevant datum point in a time series when hold all value in memory as previously be the case the filter be rather fast and irrelevant value be skip unnoticeable the datum pipeline be set up like this feature description feature tf fixedlenfeature 132 tf float32 label tf fixedlenfeature 1 tf float32 def parse function 
example proto return tf parse single example example proto feature description ds tf datum tfrecorddataset f as posix for f in fs train ds ds map parse function ds ds flat map lambda v tf datum dataset from tensor v feature 2 v label filter datum only allow ls 0 and ls 1 ds ds filter lambda y tf reshape tf logical or tf equal y ls 0 tf equal y ls 1 relabel and re map label to 0 and 1 ds ds flat map lambda x y tf datum dataset from tensor x tf relabel y base label value create slide window for lstm ds ds window size window size shift shift stride stride drop remainder true ds ds flat map lambda x y tf datum dataset zip x batch window size y batch window size batch and prefetch ds ds batch batch size drop remainder true ds ds prefetch 1000000000000000 try many value nothing work describe the expect behavior I expect to find some value for dataset prefetch that read all or enough datum to memory to allow for fast training without stall code to reproduce the issue see datum pipeline above I can not provide the datum as it be proprietary |
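Note that prefetch only keeps a bounded look-ahead buffer; it never materializes the filtered stream, so every epoch re-reads and re-filters the TFRecord files. What the report is after is caching -- in tf.data terms, inserting `.cache()` right after the filter (assuming the filtered data fits in RAM, as stated). The difference can be sketched in plain Python with hypothetical stand-ins, no TensorFlow:

```python
def slow_source():
    # Stand-in for reading TFRecord files: mostly irrelevant records.
    for i in range(1000):
        yield i


def relevant(x):
    # Stand-in for the label filter that keeps only a few values.
    return x % 100 == 0


class CachedDataset:
    """Materialize the filtered stream once, then replay it from memory.
    This is the role of tf.data's cache(); prefetch() only keeps a small
    rolling buffer and still stalls on long irrelevant runs."""

    def __init__(self, source):
        self._source = source
        self._cache = None

    def __iter__(self):
        if self._cache is None:
            self._cache = [x for x in self._source() if relevant(x)]
        return iter(self._cache)


ds = CachedDataset(slow_source)
first_epoch = list(ds)    # pays the read/filter cost once
second_epoch = list(ds)   # served from memory, no re-read or re-filter
```

In the pipeline above, the tf.data equivalent would be `ds = ds.filter(...).cache()` before the windowing and batching steps, so later epochs replay the filtered records from memory.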
tensorflowtensorflow | init node weight assign doesn t exist in graph happen when use convert in tflite | Bug | system information os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 tensorflow instal from source or binary tensorflow version tensorflow nightly python version 3 6 instal use virtualenv pip conda pip cuda cudnn version 7 10 describe the problem when I try to convert a tensorflow graphdef into a tensorflow lite flatbuffer from a tf session object a error happend such like this 2019 08 14 16 01 23 946453 I tensorflow core grappler cluster single machine cc 356 start new session 2019 08 14 16 01 23 947157 e tensorflow core grappler grappler item builder cc 656 init node weight assign doesn t exist in graph and my code all show below def main def loss function weight logit label label tf one hot label 4 label tf cast label tf float32 first tf reduce sum tf multiply label logit 1 second 0 tf add tf exp logit 0 tf exp logit 1 second 1 tf add tf exp logit 2 tf exp logit 3 log tf log tf add second 1 second 0 weight tf transpose tf reduce sum tf multiply label weight 1 output tf multiply weight tf add first log return output def normalize stft stft 1 numpy empty stft shape 0 128 128 stft 2 numpy empty stft 1 shape 0 stft 1 shape 1 stft 1 shape 2 1 for I in range stft 1 shape 0 image image fromarray stft I image image resize 128 128 stft 1 I numpy array image min numpy min stft 1 I max numpy max stft 1 I stft 1 I stft 1 I min max min stft 2 I stft 1 I reshape stft 1 shape 1 stft 1 shape 2 1 return stft 2 get the datum stft training mfcc training label train joblib load open flag input mode rb stft test mfcc test label test joblib load open flag test mode rb stft test numpy array stft test mfcc test numpy array mfcc test label test numpy array labels test stft test normalize stft test mfcc test normalize mfcc test stft training numpy array stft train mfcc training numpy array mfcc training label train numpy array label train stft training 
normalize stft training mfcc training normalize mfcc training stft shape stft training shape stft shape none stft shape 1 stft shape 2 1 mfcc shape mfcc training shape mfcc shape none mfcc shape 1 mfcc shape 2 1 label shape label training shape label shape none stft placeholder tf placeholder stft training dtype stft shape label placeholder tf placeholder label training dtype label shape mfcc placeholder tf placeholder mfcc training dtype mfcc shape dataset training tf datum dataset from tensor slice stft placeholder mfcc placeholder label placeholder dataset training dataset training apply tf datum experimental shuffle and repeat len stft train none dataset training dataset training batch batch size dataset training dataset training prefetch 1 iterator training dataset training make initializable iterator next element training iterator training get next num epoch flag epoch train size label training shape 0 with tf name scope input stft tf placeholder name stft dtype data type shape batch size image height image weith num channel mfcc tf placeholder name mfcc dtype data type shape batch size image height image weith num channel label tf placeholder tf int64 shape batch size with tf name scope test input stft t tf placeholder datum type shape eval batch size image height image weith num channel mfcc t tf placeholder datum type shape eval batch size image height image weith num channel model brn logit model forward stft mfcc logit tf add 0 logit name logit try scalar summary tf scalar summary summarywrite tf train summarywrite merge summary tf merge summary except scalar summary tf summary scalar summarywrite tf summary filewriter merge summary tf summary merge with tf name scope loss weight 1 0 1 7 4 1 5 7 mid loss function weight logit logit label label loss tf reduce sum mid loss summary scalar summary loss loss regularizer tf nn l2 loss model conv1 weight tf nn l2 loss model conv2 weight tf nn l2 loss model fc weight tf nn l2 loss model fc bias batch tf variable 
0 dtype datum type with tf name scope train optimizer tf train adamoptimizer 0 001 minimize loss train prediction tf nn softmax logit eval prediction tf nn softmax model forward stft t mfcc t start time time time def eval in batch stft datum mfcc datum sess type size stft datum shape 0 if size eval batch size raise valueerror batch size for eval large than dataset d size prediction numpy ndarray shape size num label dtype numpy float32 for begin in xrange 0 size eval batch size end begin eval batch size if end size if type train prediction begin end sess run train prediction feed dict stft stft datum begin end mfcc mfcc datum begin end else prediction begin end sess run eval prediction feed dict stft t stft datum begin end mfcc t mfcc datum begin end else if type train batch prediction sess run train prediction feed dict stft stft datum eval batch size mfcc mfcc datum eval batch size else batch prediction sess run eval prediction feed dict stft t stft datum eval batch size mfcc t mfcc datum eval batch size prediction begin batch prediction begin size return prediction config tf configproto config gpu option allow growth true with tf session config config as sess tf global variable initializer run merge tf summary merge all writer summarywrite flag log train sess graph sess run iterator training initializer feed dict stft placeholder stft training mfcc placeholder mfcc training label placeholder label train for step in xrange int num epoch train size batch size batch stft batch mfcc batch label sess run next element training feed dict stft batch stft mfcc batch mfcc label batch label sess run optimizer feed dict feed dict if step eval frequency 0 summary l sess run merge loss feed dict feed dict writer add summary summary step elapse time time time start time start time time time rate acc error rate eval in batch stft train mfcc training sess train label train acc summary scalar summary accuracy acc print step d epoch 2f minibatch loss 3f minibatch error 1f accuracy 
4f step float step batch size train size l rate acc sys stdout flush test error test acc error rate eval in batch stft test mfcc test sess test label test print testset error 1f accuracy 4f test error test acc converter tf lite tfliteconverter from session sess stft mfcc logit tflite model converter convert open brn tflite wb write tflite model writer close when I run the official demo of convert a tensorflow graphdef into a tensorflow lite flatbuffer from a tf session object the error also happen do that ok I mean can I use the weight train in tensorflow lite or the file doesn t save the weight |
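On the closing question: yes -- freezing the graph bakes the trained weights into the GraphDef as constants, which is what TFLite consumes. In TF 1.x that is `tf.graph_util.convert_variables_to_constants` run on the live session before conversion, and it also removes the variable/initializer nodes that grappler is complaining about ("init node weight/Assign doesn't exist"). A plain-Python sketch of the folding step (hypothetical node dicts, not the real API):

```python
def freeze(graph_nodes, variable_values):
    """Replace each variable node with a constant carrying its trained
    value -- a sketch of what convert_variables_to_constants does before
    a GraphDef is handed to TFLiteConverter."""
    frozen = []
    for node in graph_nodes:
        if node["op"] == "VariableV2":
            frozen.append({"name": node["name"], "op": "Const",
                           "value": variable_values[node["name"]]})
        else:
            frozen.append(dict(node))
    return frozen


nodes = [{"name": "weight", "op": "VariableV2"},
         {"name": "logits", "op": "MatMul"}]
trained = {"weight": [0.1, 0.2]}        # values read from the session
frozen = freeze(nodes, trained)          # no variables left to initialize
```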
tensorflowtensorflow | tfliteconverter fail with tf gather when the param argument be a layer attribute | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information os platform and distribution e g linux ubuntu 16 04 linux ubuntu 16 04 5 lts tensorflow instal from source or binary conda tensorflow version use command below 2 0 0 dev20190807 python version 3 6 8 cuda cudnn version 10 0 gpu model and memory 8 x tesla p100 pcie 16 gb describe the current behavior I be not able to convert a savedmodel to a flatbuffer use tfliteconverter when the correspond tf keras model contain a layer with a tf gather op for which the param argument come from a variable that be initialize in the build method of that say layer when the param argument be from a locally define variable or when use tf nn embed lookup instead of tf gather everything work perfectly fine it also apply to tf gather nd describe the expect behavior I expect tf gather to work for the case in which the param argument be an attribute of the tf keras layers layer just as it do for the other case mention code to reproduce the issue I write a toy example to reproduce the issue it might be clear than the description above import numpy as np import tensorflow as tf print tf version class embed tf keras layers layer def init self vocab size hide size super embed self init self vocab size vocab size self hide size hide size def build self input shape self share weight self add weight weight shape self vocab size self hide size dtype tf float32 initializer tf random normal initializer mean 0 0 stddev self hide size 0 5 def call self input return tf nn embed lookup self share weight input return tf gather tf zeros shape self vocab size self hide size input return tf gather self share weight input class simplemodel tf keras model def init self vocab size hide size super 
simplemodel self init self embed layer embed vocab size hide size tf function input signature tf tensorspec shape none dtype tf int64 name input def call self input return self embed layer input vocab size 20000 hide size 300 build the model model simplemodel vocab size hide size input tf random uniform shape 20 dtype tf int64 maxval 100 model input export to savedmodel save model dir simple model tf save model save model save model dir tflite conversion converter tf lite tfliteconverter from save model save model dir tflite model converter convert other info log traceback most recent call last file home michael conda envs tf20 bin toco from protos line 10 in sys exit main file home michael conda envs tf20 lib python3 6 site package tensorflow core lite toco python toco from protos py line 89 in main app run main execute argv sys argv 0 unparse file home michael conda envs tf20 lib python3 6 site package tensorflow core python platform app py line 40 in run run main main argv argv flag parser parse flag tolerate undef file home michael conda envs tf20 lib python3 6 site package absl app py line 300 in run run main main args file home michael conda envs tf20 lib python3 6 site package absl app py line 251 in run main sys exit main argv file home michael conda envs tf20 lib python3 6 site package tensorflow core lite toco python toco from protos py line 52 in execute enable mlir converter exception placeholder statefulpartitionedcall args 1 should be specie by input array |
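Functionally, the failing op and the workaround should agree: both tf.gather and tf.nn.embedding_lookup select rows of the weight table, so swapping one for the other (as the commented-out lines in the toy example do) should not change results. A numpy sketch of that equivalence:

```python
import numpy as np

vocab_size, hidden = 10, 4
# A small stand-in for the layer's shared embedding weight.
table = np.arange(vocab_size * hidden,
                  dtype=np.float32).reshape(vocab_size, hidden)
ids = np.array([3, 0, 7])

gathered = table[ids]                            # what tf.gather computes
looked_up = np.stack([table[i] for i in ids])    # embedding lookup, row by row
```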
tensorflowtensorflow | performance issue per device memory usage increase with number of device within tf distribute strategy | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device no tensorflow instal from source or binary tf nightly gpu 2 0 preview tensorflow version use command below v1 12 1 8566 g207bd43 2 0 0 dev20190812 python version 3 7 4 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version 10 0 7 4 2 gpu model and memory 4 titan xp 12 gb you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior tensor memory allocation fail for the same batch size determine by n acton sample as I scale the number of gpu in my strategy describe the expect behavior without the distribute strategy I can readily pass a batch size of 300 to a give gpu within the distribute strategy this drop to 200 on 2 gpu and 70 on 4 gpu code to reproduce the issue example code snippet def tree search input self model world model plan sequence n action sample 1000 tree value tf zero n action sample 1 dtype tf float32 for idx action in enumerate plan sequence print idx idx if idx 0 and action candidate action tf random uniform n action sample 1 num finger 7 minval 1 0 maxval 1 0 dtype tf float32 elif idx 0 and action pass elif idx 0 and not action candidate action tf zero n action sample 1 num 
finger 7 dtype tf float32 else pass tree value policy model candidate action input input world model candidate action input retval action candidate action tree value tree value return retval tf function def policy input policy model world model strategy take action true false false distribute output strategy experimental run v2 tree search args input self model world model plan sequence distribute output tree value tf concat strategy experimental local result distribute output tree value axis 0 distribute output action tf concat strategy experimental local result distribute output action axis 0 action choose max value action distribute output action distribute output tree value return action other info log on a relate note it seem like there s a regression in the tf summary start trace base method of profiling model that be require by 2 0 the new tensorboard interface allow visualize execution time of op but there doesn t seem to be a way to show memory usage |
tensorflowtensorflow | TF2: unhashable Variable breaks ExponentialMovingAverage | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 16.04. TensorFlow installed from: binary. TensorFlow version: tf-nightly-2.0-preview. Describe the current behavior: as described in diff ae1a8f7b66539f000615a4ab7e4b2151, Variables are no longer hashable in TF2. This causes the dictionary tracking of variables to break (L371, L448, L462). Describe the expected behavior: we can likely just keep the variable name as the dictionary key. Code to reproduce the issue: import tensorflow as tf; foo = tf.Variable(3.0); ema = tf.train.ExponentialMovingAverage(0.1); ema.apply([foo]). Traceback (most recent call last): File "break.py", line 5 (the ema.apply call); File ".../tensorflow_core/python/training/moving_averages.py", line 425, in apply: if var not in self._averages; File ".../tensorflow_core/python/ops/variables.py", line 1085, in __hash__: raise TypeError("Variable is unhashable if Tensor equality is enabled."). TypeError: Variable is unhashable if Tensor equality is enabled. Instead, use tensor.experimental_ref() as the key.
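The fix proposed in the report -- keying the tracking dictionary by variable name rather than by the now-unhashable variable object -- can be sketched without TensorFlow. The update rule follows TF's `avg -= (1 - decay) * (avg - value)` form, but the class itself is hypothetical (the error message's alternative, `var.experimental_ref()` as the key, would work the same way):

```python
class MovingAverageTracker:
    """Track per-variable moving averages keyed by variable *name*,
    sidestepping unhashable variable objects. Sketch only, no TF."""

    def __init__(self, decay):
        self.decay = decay
        self.averages = {}   # name -> current average

    def apply(self, variables):
        # `variables` is an iterable of (name, value) pairs.
        for name, value in variables:
            if name not in self.averages:
                self.averages[name] = value           # initialize shadow
            else:
                avg = self.averages[name]
                self.averages[name] = avg - (1 - self.decay) * (avg - value)


tracker = MovingAverageTracker(decay=0.1)
tracker.apply([("foo", 3.0)])   # shadow initialized to 3.0
tracker.apply([("foo", 5.0)])   # moves to 3.0 - 0.9 * (3.0 - 5.0) = 4.8
```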
tensorflowtensorflow | tensorflow lite broadcastto | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 window 10 64 bit tensorflow instal from source or binary source tensorflow version use command below tf nightly python version python 3 6 gcc compiler version if compile from source n a cuda cudnn version n a gpu model and memory nvidia geforce gtx 1050 ti provide the text output from tflite convert some of the operator in the model be not support by the standard tensorflow lite runtime if those be native tensorflow operator you might be able to use the extended runtime by pass enable select tf op or by set target op tflite builtin select tf op when call tf lite tfliteconverter otherwise if you have a custom implementation for they you can disable this error with allow custom op or by set allow custom op true when call tf lite tfliteconverter here be a list of builtin operator you be use concatenation conv 2d div fully connect maximum max pool 2d mean pad reshape softmax split v sqrt square stride slice sub transpose here be a list of operator for which you will need custom implementation broadcastto frozen graph pb full log txt any other info log I be try to convert a person re identification model to tflite to run on android however it seem that the operator broadcastto be unsupported I m not sure where this operator be even use because search on tensorboard return nothing I be aware that it be possible to create custom operator however I instantly get lose at the c code I don t even know where to place the new operator if I manage to make it and even then I d have to compile a new aar and use jni in order to use it on android the command I use be toco graph def file output freeze graph pb output file output model tflite input shape 2 1 160 60 3 input array image output array softmax output format tflite inference type float input datum type 
float |
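Per the converter's own message, the unsupported BroadcastTo can fall back to the extended (select TF ops) runtime instead of requiring a custom op. Below is a hedged Python-API equivalent of the toco command from the report (TF 1.x tf.lite API; the paths, array names, and the five-element input shape are copied from the issue, so adjust as needed -- and note that select-TF-ops models also need the matching TFLite AAR with Flex support on Android):

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "output/frozen_graph.pb",                 # graph_def_file from the issue
    input_arrays=["image"],
    output_arrays=["softmax_output"],
    input_shapes={"image": [2, 1, 160, 60, 3]})

# Let ops without builtin kernels (here BroadcastTo) run on TF kernels:
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]

tflite_model = converter.convert()
open("output/model.tflite", "wb").write(tflite_model)
```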
tensorflow/tensorflow | TensorFlow for C doesn't compile: cannot open source file "tensorflow/c/tf_attrtype.h" | Bug |
My team is requesting that we use the C API for TensorFlow. I followed the instructions outlined under "Install TensorFlow for C"; however, when I attempt to compile, it fails with the message: cannot open source file "tensorflow/c/tf_attrtype.h". I am using Windows 10 with Visual Studio 2017. The steps used for installation/compilation were as follows:

1. Download the Windows CPU-only zip file.
2. Extract to a desktop folder: C:\Users\<user>\Desktop\tf_demo
3. Start Visual Studio 2017.
4. Create a console application project.
5. Incorporate the example program into Visual Studio's main file/routine.
6. Update the project's Additional Include Directories:
   6.1 Right-click on the project
   6.2 Select Properties
   6.3 Expand Configuration
   6.4 Expand C/C++
   6.5 Select General
   6.6 Select Additional Include Directories
   6.7 Enter the path to the include files: C:\Users\<user>\Desktop\tf_demo\include
   6.8 OK
   6.9 OK
7. Disable precompiled headers:
   7.1 Right-click on the project
   7.2 Select Properties
   7.3 Expand Configuration
   7.4 Expand C/C++
   7.5 Select Precompiled Headers
   7.6 Change "Precompiled Header" to "Not Using"
   7.7 OK

After this I compile and it fails with:

```
Severity  Code   Description                                                   Project   File                                                            Line
Error (active) E1696  cannot open source file "tensorflow/c/tf_attrtype.h"     DemoProg  C:\Users\<user>\Desktop\tf_demo\include\tensorflow\c\c_api.h    22
Error     C1083  Cannot open include file: 'tensorflow/c/tf_attrtype.h':
                 No such file or directory                                     DemoProg  C:\Users\<user>\Desktop\tf_demo\include\tensorflow\c\c_api.h    22
```

I reviewed the contents of the tf_demo directory and there is no tf_attrtype.h anywhere in the installed package:

```
C:\Users\<user>\Desktop\tf_demo> dir /b /s
C:\Users\<user>\Desktop\tf_demo\include
C:\Users\<user>\Desktop\tf_demo\lib
C:\Users\<user>\Desktop\tf_demo\include\tensorflow
C:\Users\<user>\Desktop\tf_demo\include\tensorflow\c
C:\Users\<user>\Desktop\tf_demo\include\tensorflow\c\c_api.h
C:\Users\<user>\Desktop\tf_demo\include\tensorflow\c\eager
C:\Users\<user>\Desktop\tf_demo\include\tensorflow\c\LICENSE
C:\Users\<user>\Desktop\tf_demo\include\tensorflow\c\eager\c_api.h
C:\Users\<user>\Desktop\tf_demo\lib\tensorflow.dll
C:\Users\<user>\Desktop\tf_demo\lib\tensorflow.lib
```
tensorflow/tensorflow | Error "Check failed: dims.size() == 4 (5 vs. 4)" when using CPU and MKL instead of Eigen or GPU | Bug |
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: macOS 10.13.6 (High Sierra)
- Mobile device: N/A
- TensorFlow installed from: binary
- TensorFlow version: 1.14
- Python version: 3.7
- Bazel / GCC / CUDA / cuDNN / GPU: N/A (not compiled from source, CPU only)

Describe the current behavior: running my code on CPU with TensorFlow 1.14 using MKL from Anaconda throws the following error:

```
2019-08-12 17:42:33.158451: F tensorflow/core/util/mkl_util.h:636] Check failed: dims.size() == 4 (5 vs. 4)
Abort trap: 6
```

The error trace gives no hint on how to localize the problem (see below). The issue does not occur when installing a TensorFlow build using Eigen.

Describe the expected behavior: the code should work using both MKL and Eigen.

Code to reproduce the issue: I was not able to localize the exact issue; it occurs when running `ndnet.py` from the linked repository.

Other info / logs — full output of `python ndnet.py`:

```
WARNING: Logging before flag parsing goes to stderr.
W0812 18:24:24.249456 140736187437952 deprecation.py:323] From /Users/soenke/anaconda/envs/tensorflow/lib/python3.7/site-packages/tensorflow/python/compat/v2_compat.py:61: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
testing training
2019-08-12 18:24:26.506695
2019-08-12 18:24:26.621728: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
2019-08-12 18:24:26.691965: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 4. Tune using inter_op_parallelism_threads for best performance.
W0812 18:24:26.750053 140736187437952 deprecation.py:323] From .../tensorflow/python/data/util/random_seed.py:58: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W0812 18:24:26.794230 140736187437952 deprecation.py:323] From .../unet_deconv/dataset_handlers/tfdata_dataset_handler.py:332: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two options available in V2.
- tf.py_function takes a python function which manipulates tf eager tensors instead of numpy arrays. It's easy to convert a tf eager tensor to an ndarray (just call tensor.numpy()) but having access to eager tensors means tf.py_functions can use accelerators such as GPUs as well as being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func (it is not differentiable, and manipulates numpy arrays). It drops the stateful argument making all functions stateful.
cropping to nearest allowed input image size
crop_to_nearest_allowed_input_image_size
W0812 18:24:27.274060 140736187437952 deprecation.py:323] From .../unet_deconv/dataset_handlers/tfdata_dataset_handler.py:139: DatasetV1.make_one_shot_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `for ... in dataset:` to iterate over a dataset. If using tf.estimator, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)`.
input shape: (400, 100, 100, 1)
building UNet v3 for training
net input shape: (398, 98, 98, 1)
W0812 18:24:27.390805 140736187437952 deprecation.py:323] From .../unet_deconv/network_architectures/ops.py:173: conv3d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.keras.layers.Conv3D` instead.
input block1: 2 (396, 96, 96, 2)
W0812 18:24:27.957751 140736187437952 deprecation.py:323] From .../unet_deconv/network_architectures/ops.py:116: dropout (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.Dropout instead.
W0812 18:24:28.167316 140736187437952 deprecation.py:323] From .../unet_deconv/network_architectures/ops.py:232: average_pooling3d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.AveragePooling3D instead.
down block2: 4 (196, 46, 46, 4)
bottom block4: 4 (192, 42, 42, 4)
up block4: 2 (380, 80, 80, 2)
output block2: 1 (378, 78, 78, 1)
net output shape: (378, 78, 78, 1)
output shape: (378, 78, 78, 1)
loss is l2loss
determining number of trainable vars (except batch norm) for regularization: done (2183)
saving new ckpts and logs in model/unetv3_small_valid_fp0_pp0_bn00_chlast_poisson_n1000_wl520_seed1_bs1_do0.0_loss-l2loss0_weightreg-0.001l2_loss_datareg-1e-08none_example_run8
saving 2 logs per epoch by default
saving checkpoints every 2 epochs by default
start training with start step 0
epoch 1/2, saves: 0, summaries: 0
2019-08-12 18:24:44.145176: F tensorflow/core/util/mkl_util.h:636] Check failed: dims.size() == 4 (5 vs. 4)
2019-08-12 18:24:44.145177: F tensorflow/core/util/mkl_util.h:636] Check failed: dims.size() == 4 (5 vs. 4)
Abort trap: 6
```
tensorflow/tensorflow | tf.estimator starts with GPU and switches to CPU | Bug |
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 16.04.4 LTS (Xenial Xerus)
- TensorFlow installed from: pip
- TensorFlow version: 1.14 (but earlier also for 1.11)
- Python version: 3.6
- CUDA/cuDNN version: 10.0.130 / 7.6.2
- GPU model and memory: 1080 Ti, 11 GB

A few days ago I asked a StackOverflow question, sadly without a response, so I'm trying here. Not sure if this is a bug. Copied from the question:

I'm training a tf.estimator model (5,025,056 trainable parameters). Simplified code:

```python
model = create_custom_model()  # prepare my model here
tf.reset_default_graph()
estimator = tf.estimator.Estimator(model_fn=model.get_model_fn())

evaluation_hook = tf.contrib.estimator.InMemoryEvaluatorHook(
    estimator=estimator,
    input_fn=lambda: model.eval_input_fn())

estimator.train(input_fn=lambda: model.train_input_fn(),
                hooks=[evaluation_hook])
```

and the dataset is prepared here:

```python
def train_input_fn():
    dataset = tf.data.TFRecordDataset(some_filenames)
    dataset = dataset.shuffle(3000)
    dataset = dataset.repeat()
    dataset = dataset.batch(384)
    dataset = dataset.prefetch(1)
    return dataset
```

My dataset consists of images (9000 samples) stored in TFRecords (163 MB). The GPU is a GeForce GTX 1080 Ti and the CPU an i5-6600 CPU @ 3.30GHz.

During the training, at first everything looks fine: htop shows that each core is working approximately the same way, mostly jumping between 0-50% utilization, and the GPU stats shown using

```
nvidia-smi --query-gpu=timestamp,pstate,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv -l 1
```

indicate that the card is working at 100% of its power: temperature around 85°C, 100% utilization of GPU and memory, and most of the memory in use (~10 GB). Up to step ~4000 of 10000, TensorFlow prints global_step/sec with values ranging from 1.0 to 1.1; 15 GB of the available 16 GB of RAM are used, and no swap.

After that step, global_step/sec gets lower and lower: for step 5000 it was 0.617275, around step 2000 it was only 0.0938672, decreasing down to 0.0368357 at the end of the training. During this process nvidia-smi shows that utilization.gpu and utilization.memory are 0% more and more frequently, despite the fact that memory.used shows the same amount the whole time (10 GB), and the CPU is working at 100% of a single core. RAM stays at the same level as when training started and no swapping occurs. Periodically (maybe once per 20 steps) GPU utilization is slightly higher but quickly returns to 0.

It looks like after some epochs TensorFlow trains on CPU instead of GPU. What can be the possible cause of that?
tensorflow/tensorflow | model.fit_generator multithreading is broken in tf.keras | Bug |
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: platform independent
- Mobile device: no
- TensorFlow installed from: pip
- TensorFlow version: 1.14.0 and 2.0
- Python version: 3.6.7

Summary: fit_generator has an option called `workers`. Setting this > 1 will use multithreading to queue up batches from a generator. It raises an exception if the generator is not thread-safe; this is expected. However, it does not accept thread-safe generators either.

Describe the current behavior: calling model.fit_generator on a Keras model in TF 2.0 (or compat.v2) using a generator object subclassed from collections.Generator raises an exception that the given generator object does not have a `shape` attribute. This is rooted in the call to model_iteration, which then unsuccessfully attempts to find out whether the generator is in fact a generator by using inspect.isgenerator, which only recognizes native Python generators constructed by a function containing a yield statement. However, native Python generators cannot be thread-safe; thus fit_generator with workers > 1 and use_multiprocessing=False is broken in tf.keras.

Describe the expected behavior: in Keras 2.2.4, fit_generator simply calls the next(gen) function on the generator provided to fit_generator. This works as expected.

Code to reproduce the issue:

```python
import numpy as np

# switch here to switch between working keras and non-working tf.keras code
do_break = True
if do_break:
    import tensorflow.compat.v2 as tf
    from tensorflow.compat.v2 import keras
    from tensorflow.compat.v2.keras.layers import Dense
    from tensorflow.compat.v2.keras.models import Sequential
    tf.enable_v2_behavior()
else:
    import keras
    from keras.layers import Dense
    from keras.models import Sequential

import threading
from collections import Generator


class MWE_gen(Generator):
    def __init__(self, train_data, train_labels, batch_size):
        self.train_data = train_data
        self.train_labels = train_labels
        self.batch_size = batch_size
        self.batch = 0
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        return self.next()

    def next(self):
        with self.lock:
            batch = self.batch
            batch_size = self.batch_size
            self.batch = self.batch + self.batch_size
            if self.batch >= len(self.train_data):
                self.batch = 0
            batch_data = self.train_data[batch:batch + batch_size]
            batch_labels = self.train_labels[batch:batch + batch_size]
            return batch_data, batch_labels

    def send(self, arg):
        return self.next()

    def close(self):
        """Raise GeneratorExit inside generator."""
        try:
            self.throw(GeneratorExit)
        except (GeneratorExit, StopIteration):
            pass
        else:
            raise RuntimeError('generator ignored GeneratorExit')

    def throw(self, typ=None, value=None, traceback=None):
        raise StopIteration


train_data = np.random.normal(size=(10, 1))
train_labels = np.random.normal(size=(10, 1))
gen = MWE_gen(train_data, train_labels, 5)

model = Sequential()
model.add(Dense(1, input_shape=(1,)))
model.compile(loss='mse', optimizer='sgd')
model.fit_generator(gen, steps_per_epoch=2)
```

Other info / logs:

```
Traceback (most recent call last):
  File "multithreaded_gen.py", line 72, in <module>
    model.fit_generator(gen, steps_per_epoch=2)
  File "C:\Users\pyrestone\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1433, in fit_generator
    steps_name='steps_per_epoch')
  File "C:\Users\pyrestone\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\training_generator.py", line 144, in model_iteration
    shuffle=shuffle)
  File "C:\Users\pyrestone\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\training_generator.py", line 480, in convert_to_generator_like
    num_samples = int(nest.flatten(data)[0].shape[0])
AttributeError: 'MWE_gen' object has no attribute 'shape'
```
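For readers reproducing this, the thread-safety property the MWE relies on can be shown with the standard library alone. This is an illustrative sketch (the class and variable names are made up, not from the report): a lock-guarded class-based iterator hands out each batch exactly once across threads, yet `inspect.isgenerator` still reports `False` for it, which is exactly why `fit_generator`'s generator check rejects it.

```python
import inspect
import threading

class LockedBatchIter:
    """Thread-safe iterator: the lock makes the index update + slice atomic."""
    def __init__(self, data, batch_size):
        self.data = data
        self.batch_size = batch_size
        self.pos = 0
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        with self.lock:
            if self.pos >= len(self.data):
                raise StopIteration
            batch = self.data[self.pos:self.pos + self.batch_size]
            self.pos += self.batch_size
            return batch

data = list(range(100))
it = LockedBatchIter(data, 10)
seen = []
seen_lock = threading.Lock()

def worker():
    for batch in it:               # several threads share one iterator
        with seen_lock:
            seen.extend(batch)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert sorted(seen) == data        # every element consumed exactly once
assert not inspect.isgenerator(it) # ...but isgenerator() does not recognize it
```

A native `yield`-based generator would pass `inspect.isgenerator`, but its suspended frame cannot be entered from two threads at once, which is why the class-based pattern above is the only way to get thread safety.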
tensorflow/tensorflow | CocoaPods cannot install TensorFlowLiteC | Bug |
I am getting an error installing TensorFlowLiteC when trying to install pods. I am using `pod 'TensorFlowLiteSwift'` in my Podfile. Error:

```
[!] Error installing TensorFlowLiteC
/usr/bin/curl -f -L -o /var/folders/8h/w7cb5w9x4m9dg8l9rr3h1qwas0000gn/T/d20208412-2822-n13n01/file.tgz ... --create-dirs --netrc-optional --retry 2
```
tensorflow/tensorflow | TFLite Metal GPU delegate: ADD operator does not support broadcasting | Bug |
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: iOS 12.3.1
- Mobile device: iPhone XR
- TensorFlow installed from: binary (but TFLite compiled from source)
- TensorFlow version: 'v1.13.2-5-g04256c89d8' 1.13.2
- Python version: 3.6.8
- Bazel version (if compiling from source): 0.26.1
- Compiler version (if compiling from source): clang-1001.0.46.4
- CUDA/cuDNN version: N/A
- GPU model and memory: iPhone XR

Describe the current behavior: running the single-operation model given below using the Metal GPU delegate gives a different output than when executed on CPU or with the full TensorFlow interpreter.

Describe the expected behavior: the Metal GPU delegate gives the same output as the CPU interpreter.

Code to reproduce the issue (the bare minimum necessary to reproduce the problem): model_breaking.tflite.zip (attached).
tensorflow/tensorflow | TF 2.0: tf.GradientTape.gradient — second gradient is None | Bug |
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Windows, Anaconda
- TensorFlow version: tensorflow 2.0b1
- Python version: 3.7

Describe the current behavior: the second gradient with respect to the input is None.

Code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np
from matplotlib import pyplot as plt
import time
import math
import tensorflow.keras as keras
from tensorflow.keras import layers
from tensorflow.keras.utils import plot_model
from scipy.stats import multivariate_normal as normal

d = 2
batch_size = 3

def func(x):
    x = x * x * x
    x = keras.layers.Dense(d)(x)
    return x

inputs = keras.Input(shape=(d,), dtype=tf.float64, name='x')
outputs = func(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)

x_train = tf.random.uniform([batch_size, d], minval=0, maxval=1,
                            dtype=tf.float64, seed=1000)

for epoch in range(1):
    with tf.GradientTape() as t1:
        t1.watch(x_train)
        with tf.GradientTape() as t2:
            t2.watch(x_train)
            prediction = model(x_train)
        dy_dx = t2.gradient(prediction, x_train)
    print('dy_dx', dy_dx)
    dyy_dx = t1.gradient(dy_dx, x_train)
    print('dyy_dx', dyy_dx)
```

Other info / logs:

```
WARNING: Entity <...> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <...>: AssertionError
W0811 23:13:04.473541 11804 ag_logging.py:145] Entity <...> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <...>: AssertionError
dy_dx tf.Tensor(
[[0.03395048 0.86635576]
 [0.52642456 0.80100264]
 [1.05554004 0.05975098]], shape=(3, 2), dtype=float64)
dyy_dx None
```
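One pattern that does produce a defined second derivative in TF 2.x is computing the inner gradient while the outer tape is still recording, as the report already does; stripped down to a plain variable (no Keras model — a simplification for illustration, not the reporter's code), the nesting works and the values are checkable by hand:

```python
import tensorflow as tf

x = tf.Variable(3.0, dtype=tf.float64)

with tf.GradientTape() as outer:
    with tf.GradientTape() as inner:
        y = x ** 3                      # y = x^3
    dy_dx = inner.gradient(y, x)        # 3x^2 = 27, computed while `outer` records
d2y_dx2 = outer.gradient(dy_dx, x)      # 6x = 18

print(dy_dx.numpy(), d2y_dx2.numpy())   # 27.0 18.0
```

If `inner.gradient` is instead called after the outer `with` block has exited, the outer tape never records the gradient computation and the second gradient comes back `None` — which suggests the bug report's failure is specific to the Keras-model path, not to tape nesting in general.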
tensorflow/tensorflow | TensorFlow 2.0: <tensorflow.python.eager.function.TfMethodTarget object at ...> could not be transformed and will be executed as-is | Bug |

```
WARNING: Entity <...> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <...>: AssertionError
W0811 12:49:24.207197 2284 ag_logging.py:145] Entity <...> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <...>: AssertionError
```

System information: Windows 10, Anaconda, Python 3.7, TensorFlow 2.0b1.

Code to reproduce the issue:

```python
import time
import math
import tensorflow as tf
import numpy as np
import tensorflow.keras as keras
from tensorflow.keras import layers
from tensorflow.keras.utils import plot_model
from scipy.stats import multivariate_normal as normal

d = 10
T = 0.1
n_time = 5
n_sample = 100
batch_size = 100
n_maxstep = 400
h = (T - 0.0) / n_time
t_stamp = np.arange(0, n_time) * h

def f_tf(t, x, y, z):
    v = y + tf.math.sin(y)
    return v

def g_tf(t, x):
    v = tf.math.reduce_sum(x ** 3, 1, keepdims=True)
    return v

def k_tf(n_sample):
    w = np.zeros([n_sample, d, n_time], dtype=np.float64)
    x_sample = np.zeros([n_sample, d, n_time + 1], dtype=np.float64)
    for i in range(n_time):
        w[:, :, i] = np.reshape(
            normal.rvs(mean=np.zeros(d, dtype=np.float64), cov=1, size=n_sample),
            (n_sample, d))
        x_sample[:, :, i + 1] = w[:, :, i]
    return w, x_sample

def nn_tf(x):
    x = keras.layers.BatchNormalization()(x)  # batch_size = n_sample
    x = keras.layers.Dense(d)(x)
    x = keras.layers.BatchNormalization()(x)
    return x

dw = keras.Input(shape=(d, n_time), batch_size=n_sample, dtype=tf.float64, name='dw')
xx = keras.Input(shape=(d, n_time + 1), batch_size=n_sample, dtype=tf.float64, name='x')
x = xx
y = tf.zeros([n_sample, 1], dtype=tf.float64)
z = tf.zeros([n_sample, d], dtype=tf.float64)

for it in range(n_time - 1):
    with tf.name_scope(str(it + 1)):
        y = y + tf.math.reduce_sum(z * dw[:, :, it + 1], 1, keepdims=True)
        subx = tf.reshape(x[:, :, it], shape=[n_sample, d])
        z = nn_tf(subx) / d
y = y + tf.math.reduce_sum(z * dw[:, :, n_time - 1], 1, keepdims=True)

model = keras.Model(inputs=[xx, dw], outputs=y)
optimizer = keras.optimizers.Adam(learning_rate=1e-3)
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

dw_train, x_train = k_tf(n_sample)

for epoch in range(10):
    with tf.GradientTape() as tape:
        prediction = model([x_train, dw_train])
        labels = g_tf(T, x_train[:, :, n_time])
        loss_value = tf.reduce_sum(tf.keras.losses.MSE(labels, prediction))
    grads = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    accuracy = train_accuracy(labels, prediction)
    print('step:', epoch, 'loss:', loss_value.numpy(), 'accuracy:', accuracy.numpy())
```
tensorflow/tensorflow | Bug in keras.layers.DepthwiseConv2D when using strides and dilation | Bug |
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes (attached)
- OS platform and distribution: Linux Ubuntu 18.04
- TensorFlow installed from: binary
- TensorFlow version: 1.13.1
- Python version: 3.6.8
- CUDA/cuDNN version: 10.0 / 7.4.2
- GPU model and memory: NVIDIA Quadro P2000 with 4 GB

Describe the current behavior: when executing the following lines of code I get an assertion, because the result is not as expected but actually the values 29, 19, 1, ... I could see how the 19 would be correct if I had a stride of 1 along the y axis; however, then it would not match up again with the third number being 1 instead of 4. So it is not only that either dilation or stride is ignored — actually providing non-trivial values for both results in some strange behavior, which I think is a bug.

```python
import numpy as np
import tensorflow as tf

kernel = np.array([5, 5], dtype=np.float32).reshape(1, 2, 1, 1)
bias = np.array([4], dtype=np.float32)
img = np.array([2, 1, 3, 2, 3, 1, 2, 1, 2], dtype=np.float32).reshape(1, 1, 9, 1)
strides = (2, 2)
dilation = (1, 2)

depthconv2d_layer = tf.keras.layers.DepthwiseConv2D(
    depth_multiplier=1,
    kernel_size=(1, 2),
    strides=strides,
    dilation_rate=dilation,
    padding='valid',
    depthwise_initializer=lambda *args, **kwargs: tf.constant(kernel),
    bias_initializer=lambda *args, **kwargs: tf.constant(bias))

depthconv2d_output = depthconv2d_layer(tf.constant(img))
with tf.Session() as s:
    s.run(tf.global_variables_initializer())
    result = s.run(depthconv2d_output)
    print(result)
    assert np.all(result == np.array([29, 4, 1, 4]).reshape(1, 1, 4, 1))
```

Describe the expected behavior: no assertion, as I computed the values by hand and expect them to be right.

Other info / logs: nothing else.
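To make the by-hand arithmetic explicit, here is a pure-NumPy sketch of how a 1×2 depthwise kernel should combine a dilation of 2 (taps two pixels apart) with a stride of 2 (output positions two pixels apart) on the row above. The resulting values follow from this arithmetic alone; they are an independent check, not numbers taken from the report:

```python
import numpy as np

x = np.array([2, 1, 3, 2, 3, 1, 2, 1, 2], dtype=np.float32)  # the 1x9 row
k = np.array([5, 5], dtype=np.float32)                        # the 1x2 kernel
bias = 4.0
stride, dilation = 2, 2

# Effective kernel extent along the width: 1 + (len(k) - 1) * dilation = 3.
extent = 1 + (len(k) - 1) * dilation
out = []
for j in range(0, len(x) - extent + 1, stride):
    taps = x[j:j + extent:dilation]              # picks x[j] and x[j + 2]
    out.append(float(np.dot(taps, k) + bias))

print(out)  # [29.0, 34.0, 29.0, 24.0]
```

With valid padding this gives an output width of 4, matching the shape asserted in the report, so whichever values TensorFlow emits can be compared tap-by-tap against this reference loop.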
tensorflow/tensorflow | TF 1.14 on AI Platform: MirroredStrategy fails on 2 GPUs with RuntimeError | Bug |
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: AI Platform, runtime 1.14
- Mobile device: N/A
- TensorFlow installed from: pip
- TensorFlow version: 1.14
- Python version: 3.5
- Bazel / GCC version: N/A
- CUDA/cuDNN version: AI Platform
- GPU model and memory: V100, 16 GB

Describe the current behavior: I have written a custom Estimator and want to train it on 2 GPUs using tf.distribute.MirroredStrategy. Submitting this job (see line 4 of the linked config) to the AI Platform unfortunately fails after 900 steps of training with:

```
RuntimeError: variable_creator_scope nesting error: move call to tf.distribute.set_strategy() out of `with` scope
```

Describe the expected behavior: the model should train on the two GPUs asynchronously.

Code to reproduce the issue: the Estimator definition starts at line 193 of the linked file.

Other info / logs: the entire AI Platform log is attached. Another curious detail is that the Estimator seems to execute on only a single GPU; turning on placement logging with a similar setup confirmed that training just used the first GPU before failing. Here is the usage graph, where you can see a short spike before the failure.
tensorflow/tensorflow | BaseCollectiveExecutor::StartAbort Out of range warnings when fitting model in graph mode (TF 2.0 nightly) | Bug |
System information:
- OS platform and distribution: Windows 10
- TensorFlow installed from: binary
- TensorFlow version: tf 2.0 nightly GPU preview
- Python version: 3.6
- CUDA/cuDNN version: 10.1
- GPU model and memory: 960M

I have a very simple model that I have made by inheriting from tf.keras.Model, which I feed with a dataset, i.e.:

```python
model = MyModel()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01, amsgrad=True),
              loss=loss_fn,
              run_eagerly=False)

dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.shuffle(buffer_size=10000)
dataset = dataset.batch(batch_size=1000)

model.fit(dataset, epochs=100, verbose=0,
          callbacks=[LossAndErrorPrintingCallback()])
```

If I run this using TF 2.0 beta, it works perfectly fine (i.e. with run_eagerly=False). If I run it using tf-nightly-preview with run_eagerly=True, again fine. However, if I try with run_eagerly=False using the nightly preview, I get a stream of the following warnings:

```
2019-08-10 16:35:40.168418: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
         [[{{node IteratorGetNext}}]]
```
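For context, the "Out of range: End of sequence" message is how a finite tf.data iterator signals the end of an epoch in graph mode, so it is log noise rather than a failure. One commonly suggested way to keep the iterator from ever running off the end — sketched here with made-up shapes and a trivial stand-in model, not the reporter's code — is to repeat the dataset and bound each epoch with `steps_per_epoch`:

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(100, 4).astype('float32')
y = np.random.rand(100, 1).astype('float32')

# Finite dataset: 10 batches, then the iterator raises OutOfRange internally.
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(100).batch(10)

# Repeat indefinitely and tell fit() how many batches make up an epoch,
# so the iterator is never exhausted mid-training.
repeated = dataset.repeat()

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse')
history = model.fit(repeated, epochs=2, steps_per_epoch=10, verbose=0)
print(len(history.history['loss']))  # 2
```

Without `repeat()`, fit() simply restarts the iterator each epoch and logs the end-of-sequence event, which is presumably the stream of warnings seen above.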
tensorflow/tensorflow | TF2.0beta1: Not JSON Serializable when using tf.keras.experimental.export_saved_model | Bug |
Most of the code is from one of the TensorFlow 2.0 beta guides ("Writing layers and models with TensorFlow Keras — putting it all together: an end-to-end example").

Describe the current behavior:

```
TypeError: ('Not JSON Serializable:', b'\n\x06square\x12\x06Square\x1a\x0fz_mean/Identity\x07\n\x01T\x12\x020\x01')
```

Describe the expected behavior: saves the model correctly.

Code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Get training data.
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255

original_dim = 784
intermediate_dim = 64
latent_dim = 32

def sampling(inputs):
    z_mean, z_log_var = inputs
    batch = tf.shape(z_mean)[0]
    dim = tf.shape(z_mean)[1]
    epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
    return z_mean + tf.exp(0.5 * z_log_var) * epsilon

# Define encoder model.
original_inputs = tf.keras.Input(shape=(original_dim,), name='encoder_input')
x = layers.Dense(intermediate_dim, activation='relu')(original_inputs)
z_mean = layers.Dense(latent_dim, name='z_mean')(x)
z_log_var = layers.Dense(latent_dim, name='z_log_var')(x)
z = tf.keras.layers.Lambda(sampling)((z_mean, z_log_var))
encoder = tf.keras.Model(inputs=original_inputs, outputs=z, name='encoder')

# Define decoder model.
latent_inputs = tf.keras.Input(shape=(latent_dim,), name='z_sampling')
x = layers.Dense(intermediate_dim, activation='relu')(latent_inputs)
outputs = layers.Dense(original_dim, activation='sigmoid')(x)
decoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name='decoder')

# Define VAE model.
outputs = decoder(z)
vae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name='vae')

# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
vae.add_loss(kl_loss)

# Train.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)

# Save model.
tf.keras.experimental.export_saved_model(vae, 'vae_functional_saved_model')
```

Other info / logs:

```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 tf.keras.experimental.export_saved_model(vae, 'vae_functional_saved_model')

.../miniconda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model.py in export_saved_model(model, saved_model_path, custom_objects, as_text, input_signature, serving_only)
    167
    168   try:
--> 169     _export_model_json(model, saved_model_path)
    170   except NotImplementedError:
    171     logging.warning('Skipped saving model JSON, subclassed model does not have '

.../tensorflow/python/keras/saving/saved_model.py in _export_model_json(model, saved_model_path)
    175 def _export_model_json(model, saved_model_path):
    176   """Saves model configuration as a json string under assets folder."""
--> 177   model_json = model.to_json()
    178   model_json_filepath = os.path.join(
    179       saved_model_utils.get_or_create_assets_dir(saved_model_path),

.../tensorflow/python/keras/engine/network.py in to_json(self, **kwargs)
   1447     model_config = self._updated_config()
   1448     return json.dumps(
-> 1449         model_config, default=serialization.get_json_type, **kwargs)
   1450
   1451   def to_yaml(self, **kwargs):

.../miniconda/envs/tf2/lib/python3.7/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
    236         check_circular=check_circular, allow_nan=allow_nan, indent=indent,
    237         separators=separators, default=default, sort_keys=sort_keys,
--> 238         **kw).encode(obj)

.../json/encoder.py in encode(self, o)
    197         # exceptions aren't as detailed.  The list call should be roughly
    198         # equivalent to the PySequence_Fast that ''.join() would do.
--> 199         chunks = self.iterencode(o, _one_shot=True)
    200         if not isinstance(chunks, (list, tuple)):
    201             chunks = list(chunks)

.../json/encoder.py in iterencode(self, o, _one_shot)
    255                 self.key_separator, self.item_separator, self.sort_keys,
    256                 self.skipkeys, _one_shot)
--> 257         return _iterencode(o, 0)
    258
    259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,

.../tensorflow/python/util/serialization.py in get_json_type(obj)
     67     return obj.__dict__
     68
---> 69   raise TypeError('Not JSON Serializable:', obj)

TypeError: ('Not JSON Serializable:', b'\n\x06square\x12\x06Square\x1a\x0fz_mean/Identity\x07\n\x01T\x12\x020\x01')
```
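The failure bottoms out in `json.dumps` being handed raw bytes (the serialized node fragment naming `z_mean/Identity`). That failure mode, and the usual decode-before-dump fix, can be shown with the standard library alone — this illustrates why the error occurs, not how Keras should fix its serializer:

```python
import json

# A bytes value, standing in for the protobuf fragment in the traceback.
payload = {'node': b'z_mean/Identity'}

try:
    json.dumps(payload)
except TypeError as e:
    print('json refused bytes:', e)

# Decoding bytes to str first makes the structure serializable.
fixed = {k: v.decode('utf-8') if isinstance(v, bytes) else v
         for k, v in payload.items()}
print(json.dumps(fixed))  # {"node": "z_mean/Identity"}
```

In the report itself the bytes come from a graph op captured inside the Lambda layer's config, so the practical workaround until the serializer handles it is to skip the JSON config (e.g. save weights separately) rather than decode by hand.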
tensorflow/tensorflow | export_lib.get_temp_export_dir() returns incorrect value with mixed bytes and str | Bug |
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: macOS (MacBook Pro)
- TensorFlow installed from: source
- TensorFlow version: 1.14
- Python version: 3.5

Describe the current behavior: the return value of export_lib.get_temp_export_dir() mixes string and bytes, where the bytes portion appears in literal form — temp-b'1234567890', including the letter b and the quotes. This then becomes part of the directory name created.

Describe the expected behavior: the return value should be temp-1234567890.

Code to reproduce the issue:

```python
from tensorflow_estimator.python.estimator.export import export_lib
from tensorflow.python.lib.io import file_io
import os
import time

base1 = '/tmp/test_export_base'
temp1 = export_lib.get_temp_export_dir(export_lib.get_timestamped_export_dir(base1))
print('temp1:', temp1)
file_io.recursive_create_dir(temp1)
arr = os.listdir(base1)
print(arr)
os.rmdir(temp1)
```

Other info / logs — output of the above code:

```
temp1: /tmp/test_export_base/temp-b'1565380472'
["temp-b'1565380472'"]
```

As you can see, the b'...' becomes literal.
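The literal `b'...'` in the directory name is the classic symptom of formatting a bytes object straight into a str path. A stdlib-only sketch of the failure and the decode fix (the paths and timestamp value here are illustrative; the real logic lives in export_lib):

```python
import os

timestamp = b'1565380472'  # bytes, e.g. from an encoded timestamp string

# Buggy: formatting bytes into a str keeps the b'...' literal.
bad = os.path.join('/tmp/export', 'temp-{}'.format(timestamp))
print(bad)   # /tmp/export/temp-b'1565380472'

# Fixed: decode to str before building the path.
good = os.path.join('/tmp/export', 'temp-{}'.format(timestamp.decode('utf-8')))
print(good)  # /tmp/export/temp-1565380472
```

The same applies in reverse: if the rest of the path is bytes, the timestamp should stay bytes — the point is that the two halves must be the same type before they are joined.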
tensorflowtensorflow | TFLite unsupported ops: Log1p, SparseReorder, SegmentSum | Bug | System information: OS Platform and Distribution (e.g., Linux Ubuntu 16.04): . TensorFlow installed from (source or binary): . TensorFlow version (or github SHA if from source): .

Provide the text output from tflite_convert (copy and paste here):

```
TFLite unsupported ops: Log1p, SparseReorder, SegmentSum.
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime
and are not recognized by TensorFlow. If you have a custom implementation for them you can
disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling
tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST,
CONCATENATION, EXPAND_DIMS, FILL, FULLY_CONNECTED, GATHER, LESS, LOGISTIC, MUL, RESHAPE,
SELECT, SHAPE, SOFTMAX, STRIDED_SLICE, SUM, TILE, UNIQUE, ZEROS_LIKE. Here is a list of
operators for which you will need custom implementations: Log1p, SparseReorder.
```

Also, please include a link to a GraphDef or the model if possible. Any other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached. |
tensorflowtensorflow | TFLite Metal GPU delegate crashes in MUL operation | Bug | System information: Have I written custom code: yes. OS Platform and Distribution: iOS 12.3.1. Mobile device: iPhone XR. TensorFlow installed from: binary, but TFLite compiled from source. TensorFlow version: b'v1.13.2-5-g04256c89d8' 1.13.2. Python version: 3.6.8. Bazel version: 0.26.1. Compiler version: clang-1001.0.46.4. CUDA/cuDNN version: N/A. GPU model and memory: iPhone XR.

Describe the current behavior: running the single-operation model given below using the Metal GPU delegate causes the following error:

```
Execution of the command buffer was aborted due to an error during execution.
Caused GPU Hang Error (IOAF code 3)
```

Describe the expected behavior: running the model completes successfully.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem): broken.tflite.zip. This basic model fails to run on GPU with the error specified above, but will run fine on CPU.

Other info / logs: the TFLite interpreter is run using the provided sample code, and the crash only occurs on GPU; the model runs correctly on CPU. |
tensorflowtensorflow | tfdv.validate_examples_in_tfrecord can't be found in TensorFlow Data Validation | Bug | The tutorial says to use `tfdv.validate_examples_in_tfrecord` to check for errors on a per-example basis, but I can't import it and also can't find its source in the code. Please check this function. URL(s) with the issue: . |
tensorflowtensorflow | Python for loop in eager mode yields expected results for keras model.predict but not for saved model with multiple outputs | Bug | System information: Have I written custom code: yes. OS Platform and Distribution: Ubuntu 16.04. TensorFlow installed from: N/A. TensorFlow version: 1.14.0. Python version: 3.6.8. Bazel version: N/A. GCC/compiler version: N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Describe the current behavior: the tf.keras model's `predict` method predictions differ from the tf.saved_model predictions.

Describe the expected behavior: `tf.saved_model.save` should either fail to serialize or should yield correct predictions on reload.

Code to reproduce the issue:

```python
import tensorflow as tf
tf.enable_eager_execution()
print(tf.__version__)
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model
import numpy as np

in0 = Input(shape=(1,), dtype='float32', name='my_input_0')
in1 = Input(shape=(1,), dtype='float32', name='my_input_1')
concatted = Lambda(lambda inputs: tf.concat(inputs, axis=1))([in0, in1])
output = Dense(3)(concatted)

# Way 1: does NOT work
out = [Lambda(lambda output: output[:, i], name=f'output_{i}')(output)
       for i in range(3)]

# Way 2: does NOT work
out = []
for i in range(3):
    out.append(Lambda(lambda output: output[:, i], name=f'output_{i}')(output))

# Way 3: does work
out0 = Lambda(lambda output: output[:, 0], name='my_output_0_post_process')(output)
out1 = Lambda(lambda output: output[:, 1], name='my_output_1_post_process')(output)
out2 = Lambda(lambda output: output[:, 2], name='my_output_2_post_process')(output)
out = [out0, out1, out2]

my_model = Model(inputs=[in0, in1], outputs=out)
tf.keras.backend.set_learning_phase(0)
my_model.predict([np.array([5, 3]), np.array([1, 2])])
```

yields (1.14.0):

```
[array([0.13002533, 0.03461001], dtype=float32),
 array([0.45988005, 0.3892737 ], dtype=float32),
 array([0.28218567, 0.14224575], dtype=float32)]
```

On the other hand,

```python
tf.saved_model.save(my_model, 'mymodel')
reloaded = tf.saved_model.load_v2('mymodel')
sig = reloaded.signatures['serving_default']
sig(my_input_0=tf.constant(np.array([5, 3]), dtype=tf.float32),
    my_input_1=tf.constant(np.array([1, 2]), dtype=tf.float32))
```

yields outputs keyed `output_0`, `output_1`, `output_2`. Try it for yourself here. A reproducible test case that is the bare minimum necessary to generate the problem is provided above. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached. |
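One plausible explanation for why "way 1" and "way 2" misbehave while "way 3" works (an assumption about the cause, not confirmed by the issue): lambdas created in a loop capture the loop variable by reference, the classic Python late-binding pitfall, so after the loop finishes all three selectors read the same index.

```python
# All three lambdas share the same 'i'; by call time the loop has finished
# and i == 2, so every selector picks the last element.
selectors = [lambda row: row[i] for i in range(3)]
late_bound = [f([10, 20, 30]) for f in selectors]   # [30, 30, 30]

# Binding the index at definition time via a default argument restores
# per-output behaviour, matching "way 3" where each index is a literal.
selectors = [lambda row, i=i: row[i] for i in range(3)]
early_bound = [f([10, 20, 30]) for f in selectors]  # [10, 20, 30]
```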
tensorflowtensorflow | GPU race condition from tf.map_fn | Bug | System information: Have I written custom code: all the code that causes this issue uses tensorflow.keras operations. OS Platform and Distribution: Linux Ubuntu 16.04. TensorFlow installed from: binary. TensorFlow version: 1.14.0. Python version: 3.6.8. CUDA/cuDNN version: 10.0 / 7.6.2. GPU model and memory: RTX 6000 x2, 48 GB.

Describe the current behavior: I've created a custom layer called ROI in Keras that uses `tf.map_fn`, precisely because it has unknown parameters that it needs to take as tensor objects. This layer works perfectly on CPU, for both inference and training. It also works perfectly on GPU during inference, but during training on a powerful GPU an exception about GPU colocation of the ROI layer occurs:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for
operation roi/map/while/Identity_1: Could not satisfy explicit device specification ''
because the node {{node roi/map/while/Identity_1}} (defined at /path/to/custom_layer/custom.py:70)
placed on device: <no device assignments were active during op 'roi/map/while/Identity_1' creation>
Additional information about colocations: No node-device colocations were active during op
'roi/map/while/Identity_1' creation. No device assignments were active during op
'roi/map/while/Identity_1' creation.
```

Manual colocation of the ROI layer to the CPU device (with `tf.device`) works, but I want ROI to support GPU as well.

My hypothesis: the ROI layer works on CPU because only a single core at a time handles the layer; even if multiprocessing is activated, the few cores slowly balance the tasks between them. But whenever the GPU is utilized, thousands of cores work in parallel and do not wait for each other to finish their tasks, so one of the processes tries to gather data from a TensorArray that is still inside the while loop used by `tf.map_fn`, which causes the error.

Describe the expected behavior: TensorFlow should be able to handle this race condition by waiting for its own `tf.map_fn` to finish instead of raising an exception.

Code to reproduce the issue: this is the code that instantly causes the mentioned issue on my local machine.

Other info / logs: full log:

```
Traceback (most recent call last):
  File "/path/to/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
    return fn(*args)
  File "/path/to/site-packages/tensorflow/python/client/session.py", line 1339, in _run_fn
    self._extend_graph()
  File "/path/to/site-packages/tensorflow/python/client/session.py", line 1374, in _extend_graph
    tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for
operation roi/map/while/Identity_1: Could not satisfy explicit device specification ''
because the node {{colocation_node roi/map/while/Identity_1}} was colocated with a group of
nodes that required incompatible device '/job:localhost/replica:0/task:0/device:GPU:0'.
All available devices [/job:localhost/replica:0/task:0/device:CPU:0,
/job:localhost/replica:0/task:0/device:XLA_GPU:0,
/job:localhost/replica:0/task:0/device:XLA_CPU:0,
/job:localhost/replica:0/task:0/device:GPU:0].
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=1,
  requested_device_name_='/job:localhost/replica:0/task:0/device:GPU:0',
  assigned_device_name_='/job:localhost/replica:0/task:0/device:GPU:0',
  resource_device_name_='/job:localhost/replica:0/task:0/device:GPU:0',
  supported_device_types_=[CPU], possible_devices_=[])
StridedSliceGrad: GPU CPU XLA_CPU XLA_GPU
NextIteration: GPU CPU XLA_CPU XLA_GPU
Mul: GPU CPU XLA_CPU XLA_GPU
Equal: GPU CPU XLA_CPU XLA_GPU
DynamicStitch: GPU CPU XLA_CPU XLA_GPU
Fill: GPU CPU XLA_CPU XLA_GPU
FloorMod: GPU CPU XLA_CPU XLA_GPU
Shape: GPU CPU XLA_CPU XLA_GPU
```
reshape gpu cpu xla cpu xla gpu tensorarrayreadv3 gpu cpu xla cpu xla gpu tensorarrayscatterv3 gpu cpu xla cpu xla gpu tensorarraysizev3 gpu cpu xla cpu xla gpu const gpu cpu xla cpu xla gpu tensorarraywritev3 gpu cpu xla cpu xla gpu identity gpu cpu xla cpu xla gpu greaterequal gpu cpu xla cpu xla gpu exit gpu cpu xla cpu xla gpu cast gpu cpu xla cpu xla gpu controltrigger gpu cpu xla cpu xla gpu tensorarraygradv3 gpu cpu xla cpu xla gpu pack gpu cpu xla cpu xla gpu enter gpu cpu xla cpu xla gpu tensorarrayv3 gpu cpu xla cpu xla gpu merge gpu cpu xla cpu xla gpu stackv2 gpu cpu xla cpu xla gpu range gpu cpu xla cpu xla gpu tensorarraygatherv3 gpu cpu xla cpu xla gpu stackpushv2 gpu cpu xla cpu xla gpu switch gpu cpu xla cpu xla gpu realdiv gpu cpu xla cpu xla gpu add gpu cpu xla cpu xla gpu stridedslice gpu cpu xla cpu xla gpu max gpu cpu xla cpu xla gpu loopcond gpu cpu xla cpu xla gpu sum gpu cpu xla cpu xla gpu stackpopv2 gpu cpu xla cpu xla gpu sub gpu cpu xla cpu xla gpu colocation member user request device and framework assign device if any roi map tensorarray 2 tensorarrayv3 framework assign device job localhost replica 0 task 0 device gpu 0 roi map while identity 1 identity roi map while map tensorarray 1 tensorarrayv3 framework assign device job localhost replica 0 task 0 device gpu 0 roi map while map while identity 1 identity roi map while map while stride slice 4 stack pack roi map while map while stride slice 4 stack 1 pack roi map while map while stride slice 4 stridedslice roi map while map while max max roi map while map while tensorarraywrite tensorarraywritev3 enter enter roi map while map while tensorarraywrite tensorarraywritev3 tensorarraywritev3 framework assign device job localhost replica 0 task 0 device gpu 0 roi map while map while exit 2 exit roi map while map tensorarraystack tensorarraysizev3 tensorarraysizev3 framework assign device job localhost replica 0 task 0 device gpu 0 roi map while map tensorarraystack range start const roi 
map while map tensorarraystack range delta const roi map while map tensorarraystack range range roi map while map tensorarraystack tensorarraygatherv3 tensorarraygatherv3 framework assign device job localhost replica 0 task 0 device gpu 0 roi map while tensorarraywrite tensorarraywritev3 enter enter roi map while tensorarraywrite tensorarraywritev3 tensorarraywritev3 framework assign device job localhost replica 0 task 0 device gpu 0 roi map tensorarraystack tensorarraysizev3 tensorarraysizev3 framework assign device job localhost replica 0 task 0 device gpu 0 roi map tensorarraystack range start const roi map tensorarraystack range delta const roi map tensorarraystack range range roi map tensorarraystack tensorarraygatherv3 tensorarraygatherv3 framework assign device job localhost replica 0 task 0 device gpu 0 training multiplierwrapper gradient f count 3 const training multiplierwrapper gradient f count 4 enter training multiplierwrapper gradient merge 2 merge training multiplierwrapper gradient switch 2 switch training multiplierwrapper gradient add 1 y const training multiplierwrapper gradient add 1 add training multiplierwrapper gradient f count 5 exit training multiplierwrapper gradient const const training multiplierwrapper gradient f acc stackv2 training multiplierwrapper gradient enter enter training multiplierwrapper gradient stackpushv2 stackpushv2 training multiplierwrapper gradient stackpopv2 enter enter training multiplierwrapper gradient stackpopv2 stackpopv2 training multiplierwrapper gradient b count 4 const training multiplierwrapper gradient b count 5 enter training multiplierwrapper gradient merge 3 merge training multiplierwrapper gradient greaterequal 1 enter enter training multiplierwrapper gradient greaterequal 1 greaterequal training multiplierwrapper gradient b count 6 loopcond training multiplierwrapper gradient switch 3 switch training multiplierwrapper gradient sub 1 sub training multiplierwrapper gradient b count 7 exit training 
multiplierwrapper gradient roi map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 tensorarraygradv3 training multiplierwrapper gradient roi map tensorarraystack tensorarraygatherv3 grad tensorarraygrad gradient flow identity training multiplierwrapper gradient roi map tensorarraystack tensorarraygatherv3 grad tensorarrayscatter tensorarrayscatterv3 tensorarrayscatterv3 training multiplierwrapper gradient roi map while tensorarraywrite tensorarraywritev3 grad tensorarraygrad tensorarraygradv3 enter enter training multiplierwrapper gradient roi map while tensorarraywrite tensorarraywritev3 grad tensorarraygrad tensorarraygradv3 tensorarraygradv3 training multiplierwrapper gradient roi map while tensorarraywrite tensorarraywritev3 grad tensorarraygrad gradient flow identity training multiplierwrapper gradient roi map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 const const training multiplierwrapper gradient roi map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 f acc stackv2 training multiplierwrapper gradient roi map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 enter enter training multiplierwrapper gradient roi map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 stackpushv2 stackpushv2 training multiplierwrapper gradient roi map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 stackpopv2 enter enter training multiplierwrapper gradient roi map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 stackpopv2 stackpopv2 training multiplierwrapper gradient roi map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 tensorarrayreadv3 training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 const const training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 f acc stackv2 training 
multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 enter enter training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 stackpushv2 stackpushv2 training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 stackpopv2 enter enter training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 stackpopv2 stackpopv2 training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 const 1 const training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 f acc 1 stackv2 training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 enter 1 enter training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 stackpushv2 1 stackpushv2 training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 stackpopv2 1 enter enter training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 stackpopv2 1 stackpopv2 training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad tensorarraygradv3 tensorarraygradv3 training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarraygrad gradient flow identity training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarrayscatter tensorarrayscatterv3 const const training multiplierwrapper gradient roi map while map tensorarraystack 
tensorarraygatherv3 grad tensorarrayscatter tensorarrayscatterv3 f acc stackv2 training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarrayscatter tensorarrayscatterv3 enter enter training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarrayscatter tensorarrayscatterv3 stackpushv2 stackpushv2 training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarrayscatter tensorarrayscatterv3 stackpopv2 enter enter training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarrayscatter tensorarrayscatterv3 stackpopv2 stackpopv2 training multiplierwrapper gradient roi map while map tensorarraystack tensorarraygatherv3 grad tensorarrayscatter tensorarrayscatterv3 tensorarrayscatterv3 training multiplierwrapper gradient roi map while map while exit 2 grad b exit enter training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarraygrad tensorarraygradv3 const const training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarraygrad tensorarraygradv3 f acc stackv2 training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarraygrad tensorarraygradv3 enter enter training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarraygrad tensorarraygradv3 stackpushv2 stackpushv2 training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarraygrad tensorarraygradv3 stackpopv2 enter enter training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarraygrad tensorarraygradv3 stackpopv2 stackpopv2 training multiplierwrapper gradient b sync controltrigger training multiplierwrapper gradient roi map while map while tensorarraywrite 
tensorarraywritev3 grad tensorarraygrad tensorarraygradv3 enter 1 enter training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarraygrad tensorarraygradv3 tensorarraygradv3 training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarraygrad gradient flow identity training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 const const training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 f acc stackv2 training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 enter enter training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 enter 1 enter training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 stackpushv2 stackpushv2 training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 stackpopv2 enter enter training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 stackpopv2 enter 1 enter training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 stackpopv2 stackpopv2 training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 tensorarrayreadv3 training multiplierwrapper gradient roi map while map while max grad shape shape training multiplierwrapper gradient roi map while map while max grad size const training multiplierwrapper gradient roi map while map while max grad add const const training multiplierwrapper gradient roi map while map while max grad add add training multiplierwrapper gradient roi map while map while max grad mod 
floormod training multiplierwrapper gradient roi map while map while max grad shape 1 const training multiplierwrapper gradient roi map while map while max grad range start const training multiplierwrapper gradient roi map while map while max grad range delta const training multiplierwrapper gradient roi map while map while max grad range range training multiplierwrapper gradient roi map while map while max grad fill value const training multiplierwrapper gradient roi map while map while max grad fill fill training multiplierwrapper gradient roi map while map while max grad dynamicstitch const const training multiplierwrapper gradient roi map while map while max grad dynamicstitch f acc stackv2 training multiplierwrapper gradient roi map while map while max grad dynamicstitch enter enter training multiplierwrapper gradient roi map while map while max grad dynamicstitch enter 1 enter training multiplierwrapper gradient roi map while map while max grad dynamicstitch stackpushv2 stackpushv2 training multiplierwrapper gradient roi map while map while max grad dynamicstitch stackpopv2 enter enter training multiplierwrapper gradient roi map while map while max grad dynamicstitch stackpopv2 enter 1 enter training multiplierwrapper gradient roi map while map while max grad dynamicstitch stackpopv2 stackpopv2 training multiplierwrapper gradient roi map while map while max grad dynamicstitch dynamicstitch training multiplierwrapper gradient roi map while map while max grad reshape const const training multiplierwrapper gradient roi map while map while max grad reshape f acc stackv2 training multiplierwrapper gradient roi map while map while max grad reshape enter enter training multiplierwrapper gradient roi map while map while max grad reshape enter 1 enter training multiplierwrapper gradient roi map while map while max grad reshape stackpushv2 stackpushv2 training multiplierwrapper gradient roi map while map while max grad reshape stackpopv2 enter enter training 
multiplierwrapper gradient roi map while map while max grad reshape stackpopv2 enter 1 enter training multiplierwrapper gradient roi map while map while max grad reshape stackpopv2 stackpopv2 training multiplierwrapper gradient roi map while map while max grad reshape reshape training multiplierwrapper gradient roi map while map while max grad reshape 1 reshape training multiplierwrapper gradient roi map while map while max grad equal const const training multiplierwrapper gradient roi map while map while max grad equal f acc stackv2 training multiplierwrapper gradient roi map while map while max grad equal enter enter training multiplierwrapper gradient roi map while map while max grad equal enter 1 enter training multiplierwrapper gradient roi map while map while max grad equal stackpushv2 stackpushv2 training multiplierwrapper gradient roi map while map while max grad equal stackpopv2 enter enter training multiplierwrapper gradient roi map while map while max grad equal stackpopv2 enter 1 enter training multiplierwrapper gradient roi map while map while max grad equal stackpopv2 stackpopv2 training multiplierwrapper gradient roi map while map while max grad equal equal training multiplierwrapper gradient roi map while map while max grad cast cast training multiplierwrapper gradient roi map while map while max grad sum sum training multiplierwrapper gradient roi map while map while max grad reshape 2 reshape training multiplierwrapper gradient roi map while map while max grad truediv realdiv training multiplierwrapper gradient roi map while map while max grad mul mul training multiplierwrapper gradient roi map while map while stride slice 4 grad shape const training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad const const training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad f acc stackv2 training multiplierwrapper gradient roi map while map while stride slice 4 grad 
stridedslicegrad enter enter training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad enter 1 enter training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad stackpushv2 stackpushv2 training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad stackpopv2 enter enter training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad stackpopv2 enter 1 enter training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad stackpopv2 stackpopv2 training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad const 1 const training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad f acc 1 stackv2 training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad enter 2 enter training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad enter 3 enter training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad stackpushv2 1 stackpushv2 training multiplierwrapper gradient nextiteration 2 nextiteration training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad stackpopv2 1 enter enter training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad stackpopv2 1 enter 1 enter training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad stackpopv2 1 stackpopv2 training multiplierwrapper gradient roi map while map while tensorarraywrite tensorarraywritev3 grad tensorarrayreadv3 b sync controltrigger training multiplierwrapper gradient nextiteration 3 nextiteration training multiplierwrapper gradient roi map while map while stride slice 4 grad stridedslicegrad const 2 const training multiplierwrapper gradient roi map while map 
while stride slice 4 grad stridedslicegrad stridedslicegrad node roi map while identity 1 |
tensorflowtensorflow | init operation is not added automatically to the collection with tf.GraphKeys.INIT_OP | Bug | System information: Have I written custom code: yes. TensorFlow installed from: conda/pip. TensorFlow version: 1.14. Python version: 2.7.

Describe the current behavior: in Python:

```python
graph = tf.Graph()
with graph.as_default():
    with tf.variable_scope('signal_in'):
        signal_in = tf.placeholder(tf.float32, shape=(10, 40, 2, 1))
    with tf.variable_scope('dascope1'):
        conv_linear = tf.keras.layers.Conv2D(
            8, (8, 2), padding='valid', name='conv_linear', use_bias=True,
            kernel_initializer=tf.initializers.lecun_normal(seed=137),
            bias_initializer=tf.initializers.lecun_normal(seed=137))(signal_in)
    with tf.variable_scope('softmax'):
        logits = tf.contrib.layers.fully_connected(
            conv_linear, 2, activation_fn=None, normalizer_fn=None,
            normalizer_params=None,
            weights_initializer=tf.initializers.lecun_normal(seed=731),
            weights_regularizer=None,
            biases_initializer=tf.initializers.lecun_normal(seed=777),
            biases_regularizer=None, reuse=None, variables_collections=None,
            outputs_collections=None, trainable=True, scope='logits')
        softmax = tf.nn.softmax(logits, axis=0)
    with tf.variable_scope('loss'):
        l_vec = tf.placeholder(tf.float32, shape=(10, 2))
        loss = tf.keras.losses.CategoricalCrossentropy(
            from_logits=False, label_smoothing=0)(l_vec, softmax)
        minimize_op = tf.train.AdamOptimizer(learning_rate=0.05).minimize(loss)
    tf.global_variables_initializer()

print(graph.get_collection_ref(tf.GraphKeys.INIT_OP))  # returns [] to stdout
```

Describe the expected behavior: it must return the value of the collection keyed by `tf.GraphKeys.INIT_OP`; in this case it should be something like an operation named `loss/init` of type `NoOp`.

Code to reproduce the issue: given above.

Other info / logs: this must be an easy-to-circumvent bug, but for coherence I think it must be corrected. |
tensorflowtensorflow | Better documentation for Dataset.from_tensors / from_tensor_slices | Bug | URL(s) with the issue: please provide a link to the documentation entry, for example `from_tensor_slices`.

Description of issue (what needs changing): while following Google's ML crash course I found it very difficult to understand the difference between `Dataset.from_tensors` and `Dataset.from_tensor_slices`, and when to use each. One thing that confused me was that `from_tensors` only creates a single element despite the name including the plural form "tensors". Beginners get introduced to these APIs very early, but the current documentation consists of one terse sentence about behaviour plus a multi-line warning about memory usage:

- `from_tensor_slices`: "Creates a Dataset whose elements are slices of the given tensors."
- `from_tensors`: "Creates a Dataset with a single element, comprising the given tensors."

I think this would benefit from some elaboration and a clear description of how the two are related, given that users encounter this API very early; the behaviour should ideally be obvious. A small example would help communicate this, e.g.:

```python
my_data = {'my_feature': [[1, 2, 3], [4, 5, 6]]}
tf.data.Dataset.from_tensors(my_data)        # models a single 2x3 tensor
tf.data.Dataset.from_tensor_slices(my_data)  # splits on rows: models two 1x3 tensors
```

Clear description: see above. Correct links: fine AFAIK. Parameters defined: fine AFAIK. Returns defined: fine AFAIK. Raises listed and defined: fine AFAIK. Usage example: there is currently no usage example, and I think the documentation would greatly benefit from one. Request visuals, if applicable: there are currently no visuals; they might possibly help, but I think a usage example is probably sufficient. Submit a pull request?: I'm not sure if I'll submit a PR to improve this. I'd like to, but I'm still quite new to TF and wouldn't like to introduce any inaccuracies. |
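The distinction the issue asks to document can be mimicked without TensorFlow at all; this is an illustrative analogy in plain Python (not the `tf.data` API itself): `from_tensors` wraps the whole structure as one dataset element, while `from_tensor_slices` splits along the first axis, one element per row.

```python
data = [[1, 2, 3],
        [4, 5, 6]]

# Analogue of from_tensors: the entire 2x3 structure is ONE element.
as_single_element = [data]

# Analogue of from_tensor_slices: split along axis 0, one element per row.
as_slices = list(data)
```

So the single-element dataset has length 1, while the sliced dataset has one element per row of the input.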
tensorflow/tensorflow | TF 2.0 API docs: tf.keras.backend.relu | Bug | URL(s) with the issue: . Description of the issue (what needs changing): the GitHub symbol link on the official API docs redirects to a different symbol than the expected one. Correct links: no. Parameters defined: no. Returns defined: no. Raises listed and defined: no. Usage example: no. Request visuals, if applicable: yes. Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflow/tensorflow | Memory leak when converting to tensor | Bug | import numpy as np; import tensorflow as tf; for i in range(5000): print(i); array = np.random.random((1024, 1024)); tf.convert_to_tensor(array, dtype=tf.float32). TensorFlow version is 1.14.0, NumPy version is 1.17.0, Python version is 3.6.8. The process is killed at around i == 2400 on my machine. The command watch -d free -m shows that free memory decreases over time until it gets close to zero, and then the process crashes. I did not find a way to free the memory held by the unreferenced tensors. Best, Benoît
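A likely explanation for the reported growth: in TF1 graph mode, tf.convert_to_tensor adds a constant node to the default graph on every call, and the graph retains a reference to each one for its whole lifetime. A pure-Python stand-in (no TensorFlow required) of that accumulation pattern:

```python
# Stand-in for the TF1 default graph (illustrative only, not TensorFlow):
# every convert_to_tensor call appends a node that is never released,
# so memory grows linearly with the number of iterations.
class DefaultGraph:
    def __init__(self):
        self.nodes = []            # grows on every convert_to_tensor call

    def convert_to_tensor(self, array):
        self.nodes.append(array)   # the graph keeps a reference forever
        return len(self.nodes) - 1

graph = DefaultGraph()
for i in range(100):
    graph.convert_to_tensor([0.0] * 1024)   # each call pins more memory
print(len(graph.nodes))                     # 100: nothing was released
```

Under this reading, the usual TF1 workarounds are to build the conversion once outside the loop and feed new values through a placeholder, or to wrap each iteration in a fresh with tf.Graph().as_default(): block; in TF2 eager mode the tensors are garbage-collected normally.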
tensorflow/tensorflow | tools.graph_transforms.TransformGraph has no docs or example of usage | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: the provided wrapper function has no docs. Description of issue (what needs changing): provide documentation, such as an example of usage of TransformGraph(graph.as_graph_def(), ..., ["remove_nodes(op=loss/init)"]). Clear description: I want to use this method. Is there a way to do specific edits and graph redefinitions while building the model from inside Python, without having to go to the command line? Parameters defined: I think how to use the parameters is exactly the problem. The README.md from the repository gives bazel examples, but they do not work as expected through the wrapper. Usage example: this is my first try, and I could not get the desired result (stripping the init op from the graph):

graph = tf.Graph()
with graph.as_default():
    with tf.variable_scope('signal_in'):
        signal_in = tf.placeholder(tf.float32, shape=(10, 40, 2, 1))
    with tf.variable_scope('DAScope1'):
        conv_linear = tf.keras.layers.Conv2D(
            8, (8, 2), padding='valid', name='conv_linear', use_bias=True,
            kernel_initializer=tf.initializers.lecun_normal(seed=137),
            bias_initializer=tf.initializers.lecun_normal(seed=137))(signal_in)
    with tf.variable_scope('softmax'):
        logits = tf.contrib.layers.fully_connected(
            conv_linear, 2, activation_fn=None, normalizer_fn=None,
            normalizer_params=None,
            weights_initializer=tf.initializers.lecun_normal(seed=731),
            weights_regularizer=None,
            biases_initializer=tf.initializers.lecun_normal(seed=777),
            biases_regularizer=None, reuse=None, variables_collections=None,
            outputs_collections=None, trainable=True, scope='logits')
        softmax = tf.nn.softmax(logits, axis=0)
    with tf.variable_scope('loss'):
        l_vec = tf.placeholder(tf.float32, shape=(10, 2))
        loss = tf.keras.losses.CategoricalCrossentropy(
            from_logits=False, label_smoothing=0)(l_vec, softmax)
    minimize_op = tf.train.AdamOptimizer(learning_rate=0.05).minimize(loss)
    tf.global_variables_initializer()

Then:

graphdef = tf.tools.graph_transforms.TransformGraph(
    graph.as_graph_def(), ..., ["remove_nodes(op=loss/init)"])
with tf.Graph().as_default() as g:
    tf.import_graph_def(graphdef, name='')
    for op in g.get_operations():
        if op.name.split('/')[-1] == 'init':
            print(True)

This still prints True, so the init op was not removed. So how is this wrapper meant to be used? Note that the init op does not have any inputs/outputs, only dependency arrows as inputs. Request visuals, if applicable: are there currently visuals? If not, would they clarify the content? Submit a pull request: waiting for instructions from the community about the use of this function. Are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
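For intuition about what a remove_nodes-style transform does conceptually: it drops matching nodes from the GraphDef's node list and splices their inputs through to any consumers. A simplified pure-Python sketch of that idea (dict-based stand-in, not the real TensorFlow proto; all names hypothetical):

```python
# Simplified sketch of a remove_nodes-style graph transform: drop nodes
# whose op matches, and rewire consumers to the removed node's inputs.
# Dict-based stand-in for a GraphDef; not the real tensorflow API.
def remove_nodes(graph_def, op_to_remove):
    removed = {n["name"]: n["inputs"] for n in graph_def if n["op"] == op_to_remove}
    kept = []
    for node in graph_def:
        if node["op"] == op_to_remove:
            continue
        # Replace references to a removed node with that node's own inputs.
        inputs = []
        for inp in node["inputs"]:
            inputs.extend(removed.get(inp, [inp]))
        kept.append({**node, "inputs": inputs})
    return kept

graph = [
    {"name": "x",    "op": "Placeholder", "inputs": []},
    {"name": "init", "op": "NoOp",        "inputs": ["x"]},
    {"name": "y",    "op": "Identity",    "inputs": ["init"]},
]
print(remove_nodes(graph, "NoOp"))   # 'init' is gone; 'y' reads from 'x'
```

Note that the real remove_nodes transform matches on the op type (e.g. NoOp, Identity), not on a node name, which may be one reason the call in the issue above leaves the init op in place.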
tensorflow/tensorflow | MirroredStrategy fills up memory of GPUs that were not selected for training in TF 2.0 | Bug | I am using tf.distribute.MirroredStrategy to train a tf.keras model. I have two T4 GPUs from Google available. I want to train my model (consisting of LSTM layers) on only one GPU, i.e. '/gpu:1', so to select it I pass devices=['/gpu:1'] to MirroredStrategy, as suggested here: MirroredStrategy. Now when I run the training script, the memory of the first GPU ('/gpu:0') is also filled up completely, although during training only GPU '/gpu:1' is fully utilized. My question is: why is this happening, since I only want to utilize GPU '/gpu:1'? In this case the first GPU is useless. However, if I select the first GPU ('/gpu:0') for training, then it uses only 112 MB of memory on the second GPU ('/gpu:1'). nvidia-smi output:

GPU 0: Tesla T4, Persistence-M On, Bus-Id 00000000:00:04.0, Disp.A Off, Uncorr. ECC 0, Fan N/A, Temp 60C, Perf P0, Pwr 28W / 70W, Memory 14449MiB / 15079MiB, GPU-Util 0%, Compute M. Default
GPU 1: Tesla T4, Persistence-M On, Bus-Id 00000000:00:05.0, Disp.A Off, Uncorr. ECC 0, Fan N/A, Temp 75C, Perf P0, Pwr 74W / 70W, Memory 14517MiB / 15079MiB, GPU-Util 95%, Compute M. Default

Processes (GPU memory):
GPU 0, PID 20576, Type C, process name python, usage 14439MiB
GPU 1, PID 20576, Type C, process name python, usage 14507MiB
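A common workaround for this kind of report (not a fix for the underlying behaviour) is to hide the unused GPU from TensorFlow entirely before it initializes CUDA, via the CUDA_VISIBLE_DEVICES environment variable. A minimal sketch, assuming the process should only ever see the second physical GPU:

```python
import os

# Must be set before TensorFlow (or any CUDA library) is imported: the
# process will then only see physical GPU 1, exposed to it as '/gpu:0',
# and cannot allocate memory on the hidden GPU at all.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import tensorflow as tf                      # import only after masking
# strategy = tf.distribute.MirroredStrategy()  # now spans just the visible GPU

print(os.environ["CUDA_VISIBLE_DEVICES"])  # '1'
```

The same effect can be had from the shell (CUDA_VISIBLE_DEVICES=1 python train.py); the key point is that the mask must be in place before the first CUDA initialization.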