| repository | issue title | labels | body |
|---|---|---|---|
| tensorflow/tensorflow | tf.losses cosine similarity is not "a negative quantity between -1 and 0" | Bug | On the TensorFlow website, the cosine similarity loss is described as follows: "Note that it is a negative quantity between -1 and 0, where 0 indicates orthogonality and values closer to -1 indicate greater similarity." In fact, the quantity ranges from -1 to 1; it is just the negative of the normal cosine similarity. The page is at: |
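A minimal pure-Python sketch (no TensorFlow involved; the function names below are my own, not the library's) illustrating the report's point: the negated cosine similarity spans [-1, 1], not [-1, 0].

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cosine_similarity_loss(a, b):
    # A TF-style loss negates cosine similarity, so it also spans
    # [-1, 1]: -1 for identical directions, +1 for opposite directions.
    return -cosine_similarity(a, b)

print(cosine_similarity_loss([1.0, 0.0], [1.0, 0.0]))   # identical  -> -1.0
print(cosine_similarity_loss([1.0, 0.0], [0.0, 1.0]))   # orthogonal ->  0.0
print(cosine_similarity_loss([1.0, 0.0], [-1.0, 0.0]))  # opposite   ->  1.0
```

The third case is exactly what the quoted documentation rules out: a loss value of +1, which can only happen if the range extends past 0.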
| tensorflow/tensorflow | Operators in the model are not supported by the standard TensorFlow Lite runtime | Bug | System information: OS platform and distribution: Win 10 x64; TensorFlow version (or github SHA if from source): 2.0. Provide the text output from tflite_convert. Conversion code:

```python
import tensorflow as tf

filename = "model 24 11 19 3"  # path as given in the report
model = tf.saved_model.load(filename)
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape([1, 8])
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()
open("converted_model_new1.tflite", "wb").write(tflite_model)
```

Log (the repetitive grappler optimization-pass messages for the LSTM while-loop bodies and conditions are elided here): `convert()` raises `tensorflow.lite.python.convert.ConverterError: See console for info.` The toco import log repeatedly reports `Converting unsupported operation: TensorListFromTensor` / `TensorListReserve` / `TensorListStack` / `While` and `Unsupported data type in placeholder op`, and ends with: "Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST, CONCATENATION, DIV, EXP, EXPAND_DIMS, FILL, FULLY_CONNECTED, GATHER, LOGISTIC, MUL, NOT_EQUAL, PACK, REDUCE_MAX, RESHAPE, SHAPE, SPLIT, STRIDED_SLICE, SUB, SUM, TANH, TILE, TRANSPOSE, ZEROS_LIKE. Here is a list of operators for which you will need custom implementations: TensorListFromTensor, TensorListReserve, TensorListStack, While." Process finished with exit code 1. |
| tensorflow/tensorflow | Forward mode via double backprop fails on tf.square op | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Colab; TensorFlow version: 1.15; Python version: 3.x. The double-backprop trick for computing Jacobian-vector products (JVPs) fails when the `tf.square` op is in the graph. Recall that the double-backprop trick constructs an initial throwaway backward graph, which linearly depends on some dummy variable, and then backpropagates through this throwaway graph with respect to the dummy variable; the result should be constant w.r.t. the dummy variable, and the throwaway graph should be disconnected from the final result. Below I've taken the example JVP code from issue #234418055 and changed `tf.tanh` to `tf.square` to illustrate the failure. The code crashes with an `InvalidArgumentError` because the dummy placeholder created in `fwd_gradients` is not fed a value. The true underlying issue is that the dummy should have disappeared in the second call to `tf.gradients` inside `fwd_gradients`, because `g` is linear in `v`; presumably the cause is that the backward op for `tf.square` somehow depends nonlinearly on the dummy variable.

```python
# TensorFlow version: 1.x
import numpy as np
import numpy.random as npr
import tensorflow as tf

def fwd_gradients(ys, xs, d_xs):
    """Forward-mode pushforward, analogous to the pullback defined by
    tf.gradients. With tf.gradients, grad_ys is the vector being pulled
    back; here d_xs is the vector being pushed forward."""
    v = tf.placeholder(ys.dtype, shape=ys.get_shape(), name='dummy')
    g = tf.gradients(ys, xs, grad_ys=v)
    return tf.gradients(g, v, grad_ys=d_xs)

a = tf.constant(npr.randn(5, 3), dtype=tf.float32)
x = tf.placeholder(tf.float32, [1, 5])
y = tf.square(tf.matmul(x, a))
u = tf.placeholder(tf.float32, [1, 5])
jvp = fwd_gradients(y, x, u)

x_val = npr.randn(1, 5)
u_val = npr.randn(1, 5)
init_op = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init_op)
    print(sess.run(jvp, feed_dict={x: x_val, u: u_val}))
```
 |
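For intuition about what the double-backprop trick in the row above is meant to compute, forward-mode differentiation can be sketched without TensorFlow using dual numbers. This is a hedged, from-scratch illustration (the `Dual` class and `jvp` helper are my own names, not from the report), shown here on the same `square` function that triggers the bug:

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0; the b part carries the tangent."""
    def __init__(self, primal, tangent):
        self.primal = primal
        self.tangent = tangent

    def __mul__(self, other):
        # Product rule: (a + b*eps)(c + d*eps) = a*c + (a*d + b*c)*eps
        return Dual(self.primal * other.primal,
                    self.primal * other.tangent + self.tangent * other.primal)

def jvp(f, x, v):
    """Jacobian-vector product of scalar function f at x in direction v."""
    return f(Dual(x, v)).tangent

square = lambda d: d * d
# d/dx x^2 = 2x, so the JVP at x=3 in direction v=1 is 6.
print(jvp(square, 3.0, 1.0))  # -> 6.0
```

A correct forward-mode evaluation never needs a value for any "dummy" variable, which is exactly why the unfed-placeholder crash in the report signals a bug in the backward graph for `tf.square`.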
| tensorflow/tensorflow | numpy dot and tensorflow tensordot can't be used together on MacBook | Bug | Hi, I am using tensorflow 2.0.0 and numpy 1.7.3 on my MacBook with Python 3.7.5, but it fails to use both numpy dot (or numpy tensordot) and tensorflow tensordot in one Jupyter notebook: no error is reported, but it never finishes, though any of them alone works properly. I have tried the same code on Linux with the same conda environment, and there is no such issue. Code I tested:

```python
import numpy as np
import tensorflow as tf

nd, nk, n = 3, 2, 5
w_np = np.ones((nd, nk), dtype=np.float32)
z_np = np.ones((nk, n), dtype=np.float32)

# Only one of the following works on the MacBook:
tf.tensordot(w_np, z_np, axes=1)
# or: np.tensordot(w_np, z_np, axes=1)
# or: np.dot(w_np, z_np)
```
 |
| tensorflow/tensorflow | | Invalid | This template is for miscellaneous issues not covered by the other issue categories. For questions on how to work with TensorFlow, or support for problems that are not verified bugs in TensorFlow, please go to StackOverflow. If you are reporting a vulnerability, please use the dedicated reporting process. For high-level discussions about TensorFlow, please post to the discussion list; for questions about the development or internal workings of TensorFlow, or if you would like to know how to contribute to TensorFlow, please post to the developers list. | |
| tensorflow/tensorflow | RNN documentation is not clear | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example: Description of issue (what needs changing): Clear description. Problem 1: could you provide the mathematical model of the RNN? Problem 2: the documentation does not return information about the method get_initial_state(inputs=None, batch_size=None, dtype=None), nor how to define one's own get_initial_state. What is the connection between state_size and get_initial_state? Problem 3: could you write more clearly about the arguments of the methods call, build, and get_initial_state, such as their dimensions (what each dimension means)? Problem 4: is the cell a layer defined by the user? Thanks. |
| tensorflow/tensorflow | xxx | Invalid | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow); OS platform and distribution (e.g., Linux Ubuntu 16.04); mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device; TensorFlow installed from (source or binary); TensorFlow version (use command below); Python version; Bazel version (if compiling from source); GCC/compiler version (if compiling from source); CUDA/cuDNN version; GPU model and memory. You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`. Describe the current behavior. Describe the expected behavior. Code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
| tensorflow/tensorflow | xxx | Invalid | System information: OS platform and distribution (e.g., Linux Ubuntu 16.04); TensorFlow installed from (source or binary); TensorFlow version (or github SHA if from source). Provide the text output from tflite_convert (copy and paste here). Also, please include a link to a GraphDef or the model if possible. Any other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
| tensorflow/tensorflow | Keras Model does not work with Keras Input that was created with the `tensor` kwarg | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Linux Ubuntu 18.04 LTS; TensorFlow installed from: binary; TensorFlow version: 1.15.0-rc2; Python version: 3.7.4. Current behavior: `keras.Input` and `keras.layers.InputLayer` give the option to pass in a `tf.placeholder` through the argument `tensor=None`. When using the output of `keras.Input(tensor=some_placeholder)` as input to a functional `keras.Model` and subsequently calling `model.predict(batch)` on the newly created model, an error occurs: `ValueError: Error when checking model: expected no data, but got: array([[0., 1.], [2., 3.], [4., 5.], [6., 7.], [8., 9.]])`. Expected behavior: no error occurs, and the Keras model correctly feeds the input through the graph. Code to reproduce the issue: the following code breaks (snippet 1):

```python
import tensorflow as tf
import tensorflow.keras as keras
import numpy as np

print('tf version:', tf.version.VERSION)

ph = tf.placeholder(shape=[None, 2], dtype=tf.float32)
# Create input layer from a predefined tensorflow placeholder
inputs = keras.Input(tensor=ph)
# Perform some operations
a = tf.constant([[0.5, 0.0], [0.0, 0.5]], dtype=tf.float32)
x = tf.linalg.matmul(inputs, a)
output = tf.reduce_sum(x, axis=1)
# Make a model
model = keras.Model(inputs=inputs, outputs=output)
# Try to feed a batch through the model
batch = np.linspace(0, 9, 10).reshape(5, 2)
out = model.predict(batch)
print(out)
```

Explanation: I already debugged through the Keras code, and I think I spotted the issue, in tensorflow/python/keras/engine/network.py, lines 342-352 (snippet 2):

```python
for i, layer in enumerate(self._input_layers):
    self.input_names.append(layer.name)
    if layer.is_placeholder:
        self._feed_input_names.append(layer.name)
        # Use batch_input_shape here because non-eager composite tensors
        # may not have a shape attribute that's meaningful (sparse, for
        # instance, has a tensor that's non-constant and needs to be fed).
        # This means that input layers that create placeholders will need
        # to have the batch_input_shape attr to allow for input shape
        # validation.
        self._feed_input_shapes.append(layer._batch_input_shape)
        self._feed_inputs.append(layer.input)
```

Snippet 2 is taken from the Network constructor. As can be seen, only tensors coming from layers with `is_placeholder` set to True are added to the feed inputs array. In our case, this array is empty, because our InputLayer that was created from a `tf.placeholder` has `is_placeholder` set to False. Looking at the constructor code of InputLayer, we can see why this is the case, in tensorflow/python/keras/engine/input_layer.py, lines 115-138 (snippet 3):

```python
if input_tensor is None:
    if input_shape is not None:
        batch_input_shape = (batch_size,) + tuple(input_shape)
    else:
        batch_input_shape = None
    graph = backend.get_graph()
    with graph.as_default():
        input_tensor = backend.placeholder(
            shape=batch_input_shape,
            dtype=dtype,
            name=self.name,
            sparse=sparse,
            ragged=ragged)
    self.is_placeholder = True
    self._batch_input_shape = batch_input_shape
else:
    if not tf_utils.is_symbolic_tensor(input_tensor):
        raise ValueError('You should not pass an EagerTensor to `Input`. '
                         'For example, instead of creating an InputLayer, '
                         'you should instantiate your model and directly '
                         'call it on your input.')
    self.is_placeholder = False
    self._batch_input_shape = tuple(input_tensor.shape.as_list())
```

`input_tensor` has the value from the `tensor` argument; in our case this holds the placeholder we passed to `Input(tensor=ph)`. Since it is not None, `self.is_placeholder = False` is run in the else case. This subsequently results in the layer not being added to the feed inputs of the model, which then does not expect any input to the `predict` method. The issue can be worked around by manually setting the `is_placeholder` field of our input layer (snippet 4): `inputs._keras_history[0].is_placeholder = True`. Adding this line of code just below line 10 of snippet 1 resolves the error. Fixed code (snippet 5):

```python
import tensorflow as tf
import tensorflow.keras as keras
import numpy as np

print('tf version:', tf.version.VERSION)

ph = tf.placeholder(shape=[None, 2], dtype=tf.float32)
# Create input layer from a predefined tensorflow placeholder
inputs = keras.Input(tensor=ph)
inputs._keras_history[0].is_placeholder = True
# Perform some operations
a = tf.constant([[0.5, 0.0], [0.0, 0.5]], dtype=tf.float32)
x = tf.linalg.matmul(inputs, a)
output = tf.reduce_sum(x, axis=1)
# Make a model
model = keras.Model(inputs=inputs, outputs=output)
# Try to feed a batch through the model
batch = np.linspace(0, 9, 10).reshape(5, 2)
out = model.predict(batch)
print(out)
```

Output:

```
tf version: 1.15.0-rc2
[0.5 2.5 4.5 6.5 8.5]
```

Since I cannot think of any reason why an InputLayer created from a placeholder should have `is_placeholder = False`, I think this issue can be resolved by removing the respective line of code. |
| tensorflow/tensorflow | Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above | Bug | I researched that issue, but unfortunately all I was able to find was a compatibility problem. I made sure to install exactly what is specified in the tensorflow-gpu installation procedure. The only thing that is different is that I'm running the 430 nvidia driver, which got installed when I asked to install the 418. System information: Ubuntu 18.04; tensorflow-gpu installed using pip; tensorflow-gpu 2.0.0; Python version: 3.6.8; CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5; GPU model and memory: NVIDIA GeForce GTX 1050 Mobile, 2 GB. Describe the current behavior: running

```python
for image_path in TEST_IMAGE_PATHS:
    show_inference(detection_model, image_path)
```

fails with `Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.` Describe the expected behavior: expected to show the images with rectangles around detected objects. Code to reproduce the issue: I used the provided code and I'm getting this error while computing the block above. Other info / logs (traceback abbreviated; the intermediate frames pass through `eager/function.py` `__call__`, `_call_impl`, `_call_flat`, and `eager/execute.py` `quick_execute` under `.../tensorflow_core/python/`):

```
UnknownError: 2 root error(s) found.
  (0) Unknown: Failed to get convolution algorithm. This is probably because
      cuDNN failed to initialize, so try looking to see if a warning log
      message was printed above.
      [[node FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/mul_1
        (defined at tensorflow_core/python/framework/ops.py:1751)]]
      [[Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Switch_5/_970]]
  (1) Unknown: Failed to get convolution algorithm. (same as above)
0 successful operations. 0 derived errors ignored. [Op:__inference_pruned_16931]
Function call stack: pruned -> pruned
```
 |
| tensorflow/tensorflow | learn vm | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example: Description of issue (what needs changing): Clear description. For example, why should someone use this method? How is it useful? Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined? For example, raises: Usage example: is there a usage example? See the API guide on how to write testable usage examples. Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request: are you planning to also submit a pull request to fix the issue? See the docs contributor guide, the docs API guide, and the docs style guide. |
| tensorflow/tensorflow | BufferedInputStream should avoid doing Reset in Seek when the new position is still in the buffer | Bug | Check the code here (L161):

```cpp
// Position of the buffer within file.
const int64 buf_pos = Tell();
if (position < buf_pos) {
  // Reset input stream and skip `position` bytes.
  TF_RETURN_IF_ERROR(Reset());
  return SkipNBytes(position);
}
```

I see two issues (correct me if I'm wrong): 1. `Tell()` doesn't return the position of the buffer within the file, as suggested in the comment; in fact, it returns the position of `pos_` within the file. 2. When `position` is still inside the buffer, it will do a `Reset` anyway, which could impact the performance of later reads, since it would refill the buffer with the previously buffered data. Fix: PR #34515. |
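A minimal pure-Python sketch of the optimization this issue asks for (class and method names are my own, not TensorFlow's): when the seek target still lies inside the currently buffered window, just move the in-buffer cursor instead of resetting and re-reading from the underlying stream.

```python
import io

class BufferedReader:
    """Toy buffered reader whose seek() skips the reset when possible."""

    def __init__(self, raw, buf_size=4):
        self.raw = raw          # underlying seekable byte stream
        self.buf_size = buf_size
        self.buf = b""          # current buffer contents
        self.buf_start = 0      # file offset of buf[0]
        self.pos = 0            # cursor within buf
        self.resets = 0         # count of expensive resets, for demonstration

    def _fill(self):
        self.buf_start = self.raw.tell()
        self.buf = self.raw.read(self.buf_size)
        self.pos = 0

    def read(self, n):
        out = b""
        while n > 0:
            if self.pos >= len(self.buf):
                self._fill()
                if not self.buf:
                    break  # end of stream
            take = self.buf[self.pos:self.pos + n]
            self.pos += len(take)
            n -= len(take)
            out += take
        return out

    def seek(self, position):
        # The optimization: if the target is inside the buffered window,
        # only the cursor moves; no reset, no re-read of buffered data.
        if self.buf_start <= position <= self.buf_start + len(self.buf):
            self.pos = position - self.buf_start
            return
        # Otherwise fall back to a full reset and skip, as the current
        # TensorFlow code does unconditionally.
        self.resets += 1
        self.raw.seek(position)
        self.buf = b""
        self.pos = 0

r = BufferedReader(io.BytesIO(b"abcdefgh"), buf_size=4)
assert r.read(2) == b"ab"   # buffer now holds b"abcd" starting at offset 0
r.seek(1)                   # within the buffer: no reset needed
assert r.read(2) == b"bc"
assert r.resets == 0
r.seek(6)                   # outside the buffer: reset required
assert r.read(2) == b"gh"
assert r.resets == 1
```

The `resets` counter makes the performance claim in the issue concrete: a backward seek into already-buffered data costs nothing, while the unconditional-reset behavior would pay a full re-read every time.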
| tensorflow/tensorflow | How to get sample_weights and learning phase (TF2 eager) | Bug | `model.sample_weights` works with `from keras import ...` but not with `from tensorflow.keras import ...`. I need the sample weights tensor to feed to `K.function` to get layer gradients, outputs, etc. Using `model._feed_sample_weights` instead does eliminate the error, but feeding `np.ones(32)` vs. `4 * np.ones(32)` for `sample_weight` yields the same outputs, and it shouldn't. (Note: see this thread regarding learning phase.) Applied example:

```python
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
import numpy as np

ipt = Input((16,))
out = Dense(16)(ipt)
model = Model(ipt, out)
model.compile('adam', 'mse')

x = np.random.randn(32, 16)
model.train_on_batch(x, x)

grads = model.optimizer.get_gradients(model.total_loss, model.layers[1].output)
grads_fn = K.function(inputs=[model.inputs[0], model._feed_targets[0],
                              model.sample_weights[0]],
                      outputs=grads)
```

Full error trace:

```
  File "<ipython-input>", line 16, in <module>
    outputs=grads)
  File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\backend.py", line 3773, in function
    return EagerExecutionFunction(inputs, outputs, updates=updates, name=name)
  File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\keras\backend.py", line 3670, in __init__
    base_graph=source_graph)
  File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\eager\lift_to_graph.py", line 249, in lift_to_graph
    visited_ops = set(x.op for x in sources)
  File "D:\Anaconda\envs\tf2_env\lib\site-packages\tensorflow_core\python\eager\lift_to_graph.py", line 249, in <genexpr>
    visited_ops = set(x.op for x in sources)
AttributeError: 'NoneType' object has no attribute 'op'
```
 |
tensorflowtensorflow | How to get symbolic learning phase (TF2 eager)? | Bug | `K.learning_phase()` fetches the value, not the tensor itself (following backend.py). I found somewhat of a workaround, but it isn't user/API friendly. I need the learning-phase tensor to feed to `K.function` to get layer gradients, outputs, etc. This works fine with `import keras.backend as K`, but fails for `import tensorflow.keras.backend as K`: passing the symbolic learning phase into `K.function` yields `ValueError: Cannot create an execution function which is comprised of elements from multiple graphs.` Minimal applied example: `import tensorflow.keras.backend as K; from tensorflow.keras.layers import Input, Dense; from tensorflow.keras.models import Model; import numpy as np; ipt = Input((16,)); out = Dense(16)(ipt); model = Model(ipt, out); model.compile('adam', 'mse'); x = np.random.randn(32, 16); model.train_on_batch(x, x); grads = model.optimizer.get_gradients(model.total_loss, model.layers[-1].output); grads_fn = K.function(inputs=[model.inputs[0], model._feed_targets[0], K.learning_phase()], outputs=grads)`. Full error trace: Traceback (most recent call last): File "<...>", line 3, in <module>: output_grads = ...; File ".../tensorflow_core/python/keras/backend.py", line 3773, in function: return EagerExecutionFunction(inputs, outputs, updates=updates, name=name); File ".../keras/backend.py", line 3670, in __init__: base_graph=source_graph; File ".../tensorflow_core/python/eager/lift_to_graph.py", line 249, in lift_to_graph: visited_ops = set(x.op for x in sources); File ".../lift_to_graph.py", line 249, in <genexpr>: visited_ops = set(x.op for x in sources); AttributeError: 'int' object has no attribute 'op'. Partial workaround: `import tensorflow.keras.backend as K; from tensorflow.python.eager import context; from tensorflow.python.ops import array_ops; import weakref; from tensorflow.python.framework import func_graph;` then `def symbolic_learning_phase(): graph = get_graph(); with graph.as_default(): if graph not in _GRAPH_LEARNING_PHASES: with K.name_scope(''): phase = array_ops.placeholder_with_default(False, shape=(), name='keras_learning_phase'); _GRAPH_LEARNING_PHASES[graph] = phase; return _GRAPH_LEARNING_PHASES[graph]` and `def get_graph(): if context.executing_eagerly(): global _GRAPH; if _GRAPH is None: _GRAPH = func_graph.FuncGraph('keras_graph'); return _GRAPH; else: return ops.get_default_graph()` with `_GRAPH = None; _GRAPH_LEARNING_PHASES = weakref.WeakKeyDictionary()`. |
tensorflowtensorflow | ListWrapper does not support the insert method for nested lists of layers | Bug | System information: Have I written custom code: yes; OS platform and distribution: Linux Ubuntu 18.04; mobile device: n/a; TensorFlow installed from: binary; TensorFlow version: 2.0.0; Python version: 3.7.5; Bazel version: n/a; GCC/compiler version: n/a; CUDA/cuDNN version: 10.1 / 7.6.4; GPU model and memory: GeForce GTX 1070, 8 GB. Describe the current behavior: lists of layers added to another list with the `insert` method are ignored by the model; it works fine with the `append` method. This seems to be a problem with `ListWrapper`. Describe the expected behavior: all list methods should be supported for defining nested lists of layers. Code to reproduce the issue (minimal example): `class MyModel(tf.keras.Model): def __init__(self, **kwargs): super().__init__(**kwargs); self.conv1 = []; self.conv1.append([tf.keras.layers.Conv2D(8, 3, name='conv1')]); self.conv2 = []; self.conv2.insert(0, [tf.keras.layers.Conv2D(16, 3, name='conv2')]); def call(self, inputs): x = inputs; x = self.conv1[0][0](x); x = self.conv2[0][0](x); return x` then `m = MyModel(); m.build((None, None, None, 3)); m.summary(); for w in m.trainable_weights: print(w.name)`. The output is: Model: "my_model" -- Layer (type) / Output Shape / Param #: conv1 (Conv2D) / multiple / 224; Total params: 224; Trainable params: 224; Non-trainable params: 0; conv1/kernel:0; conv1/bias:0. Note that conv2 has been ignored. Other info/logs: n/a |
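The mechanism behind this report can be illustrated without TensorFlow. Below is a toy model of the behavior, not TF's actual `ListWrapper` code: a list subclass that notifies its owner on `append` but (in this sketch) never intercepts `insert`, so inserted layers exist in the list yet are invisible to the tracking machinery, which matches "conv2 has been ignored" above.

```python
class TrackingList(list):
    """Toy stand-in for a tracking list wrapper.

    It notifies an owner when items are appended, but deliberately
    leaves insert() as plain list.insert, mirroring the reported bug.
    This is NOT TensorFlow's real implementation.
    """

    def __init__(self, owner):
        super().__init__()
        self._owner = owner

    def append(self, item):
        super().append(item)
        self._owner.tracked.append(item)  # owner learns about the item

    # insert() is inherited from list unchanged, so the owner never
    # hears about inserted items -- they are "ignored by the model".


class Owner:
    def __init__(self):
        self.tracked = []


owner = Owner()
layers = TrackingList(owner)
layers.append("conv1")       # tracked
layers.insert(0, "conv2")    # present in the list, but untracked
```

Both items are in the list, but only the appended one reaches the owner's tracked collection; in Keras terms, only its weights would show up in `trainable_weights`.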
tensorflowtensorflow | Gradient computation returns None | Bug | Hello, I wrote an example to test `tf.while_loop` and `tf.TensorArray`. It returns None when computing gradients; it seems there is no connection from input to output, and I don't know why. TensorFlow version: 1.10.1. Code/log: [None, None] Traceback (most recent call last): File "1.py", line 203, in <module>: tf.app.run(main); File ".../tensorflow/python/platform/app.py", line 125, in run: sys.exit(main(argv)); File "1.py", line 188, in main: model.build_train_model(); File "1.py", line 81, in build_train_model: grads_and_vars = self.average_gradients(gv_list); File "1.py", line 100, in average_gradients: expanded_g = tf.expand_dims(g, 0); File ".../tensorflow/python/util/deprecation.py", line 454, in new_func: return func(*args, **kwargs); File ".../tensorflow/python/ops/array_ops.py", line 136, in expand_dims: return gen_array_ops.expand_dims(input, axis, name); File ".../tensorflow/python/ops/gen_array_ops.py", line 2020, in expand_dims: "ExpandDims", input=input, dim=axis, name=name; File ".../tensorflow/python/framework/op_def_library.py", line 528, in _apply_op_helper: (input_name, err); ValueError: Tried to convert 'input' to a tensor and failed. Error: None values not supported. |
tensorflowtensorflow | Where is the staging package in TF2? | Bug | System information: Have I written custom code: no; TensorFlow installed from: binary; TensorFlow version: 2.0; Python version: 3.6. Describe the current behavior: Hi there, I want to use the `StagingArea` structure, which in old versions was available via `from tensorflow.contrib.staging import StagingArea`. How can I use it in TF2? I see the definition is in tensorflow/python/ops/data_flow_ops.py. |
tensorflowtensorflow | 404 link in documentation | Bug | URL(s) with the issue: (see above). Description of issue (what needs changing): non-functioning link in the documentation. Under "Custom training data", the link in "By default, the script will download the Speech Commands dataset" leads to a non-working page. |
tensorflowtensorflow | tf.ragged.stack breaks with rank-1 regular tensors | Bug | System information: Have I written custom code: yes; OS platform and distribution: Linux Ubuntu 18.04; TensorFlow installed from: binary (pip); TensorFlow version: v2.0.0-rc2-26-g64c3d38 2.0.0; Python version: 3.6.5; CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.2.24; GPU model and memory: Quadro P2000. Describe the current behavior: when creating a ragged tensor by using `tf.ragged.stack` on several regular tensors along the 0th axis, the function crashes when the rank of the input tensors is 1. Describe the expected behavior: according to the documentation, the inputs must be a list of Tensors or RaggedTensors with the same rank R (R > axis); here R = 1 and axis = 0, so the precondition should be fulfilled. Code to reproduce the issue: `import tensorflow as tf; tf.ragged.stack([[1, 2, 3], [1, 2]], axis=0)`. Other info/logs: calling the two lines above results in Traceback (most recent call last): ... File ".../tensorflow_core/python/ops/ragged/ragged_concat_ops.py", line 113, in stack: return _ragged_stack_concat_helper(values, axis, stack_values=True); File ".../ragged_concat_ops.py", line 167, in _ragged_stack_concat_helper: return array_ops.stack(rt_inputs, axis); File ".../tensorflow_core/python/util/dispatch.py", line 180, in wrapper: return target(*args, **kwargs); File ".../tensorflow_core/python/ops/array_ops.py", line 1154, in stack: return ops.convert_to_tensor(values, name=name); File ".../tensorflow_core/python/framework/ops.py", line 1184, in convert_to_tensor: return convert_to_tensor_v2(value, dtype, preferred_dtype, name); File ".../ops.py", line 1242, in convert_to_tensor_v2: as_ref=False; File ".../ops.py", line 1296, in internal_convert_to_tensor: ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref); File ".../tensorflow_core/python/ops/array_ops.py", line 1278, in _autopacking_conversion_function: return _autopacking_helper(v, dtype, name or "packed"); File ".../array_ops.py", line 1184, in _autopacking_helper: return gen_array_ops.pack(list_or_tuple, name=name); File ".../tensorflow_core/python/ops/gen_array_ops.py", line 6293, in pack: _six.raise_from(_core._status_to_exception(e.code, message), None); tensorflow.python.framework.errors_impl.InvalidArgumentError: Shapes of all inputs must match: values[0].shape = [3] != values[1].shape = [2] [Op:Pack] name: stack. Workaround: expanding the rank-1 tensors to rank-2 tensors, followed by squeezing the redundant dimension, seems to work: `import tensorflow as tf; x = tf.ragged.stack([[[1, 2, 3]], [[1, 2]]], axis=0); print(x.bounding_shape()); x = tf.squeeze(x, axis=1); print(x.bounding_shape())`. Estimated cause: I did not check the TensorFlow source code for this, but the general ragged documentation (ragged tensor definition) states [image]. I assume the issue is that there is no uniform dimension in the regular tensors before stacking them. Intuitively, I would have assumed that I can use `tf.ragged.stack` to create this uniform dimension, forming a ragged tensor from several non-ragged, different-length tensors as above, similar to the regular `tf.stack`, which creates a new dimension. I am not sure whether this is considered a bug or an error in the documentation of `tf.ragged.stack`, but it feels like a bug from a user perspective. |
tensorflowtensorflow | Layer model is not connected, no input to return | Bug | System information: Have I written custom code: yes; OS platform and distribution: macOS Mojave; TensorFlow installed from: binary (`pip install tensorflow==2.0.0`); TensorFlow version: 2.0.0; Python version: 3.7.4. Describe the current behavior: an exception is raised when loading a saved model through `tf.keras.Model`. With TF 2.0.0 we get `AttributeError: Layer model is not connected, no input to return`. Describe the expected behavior: it should pass; it passes when saving and loading the model from h5 format. Code to reproduce the issue: `import tensorflow as tf; shape = (224, 224, 3); base_model2 = tf.keras.applications.MobileNetV2(include_top=False, weights='imagenet', input_shape=shape); inputs = tf.keras.Input(shape=shape, name='input'); x = base_model2(inputs); x = tf.keras.layers.GlobalAveragePooling2D()(x); x = tf.keras.layers.Dense(256, activation='relu', name='embedding')(x); outputs = tf.keras.layers.Dense(2, activation='softmax', name='prob')(x); model2 = tf.keras.Model(inputs=inputs, outputs=outputs); tf.keras.models.save_model(model2, 'model'); model_l2 = tf.keras.models.load_model('model')`. This raises the exception: `model_loaded = tf.keras.Model(inputs=model_l2.input, outputs=[model_l2.get_layer(layer_name).output for layer_name in ['prob', 'embedding']])`. Other info/logs: 2019-11-20 15:24:41 I tensorflow/core/platform/cpu_feature_guard.cc:142 Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA; I tensorflow/compiler/xla/service/service.cc:168 XLA service 0x7fa773aa38b0 executing computations on platform Host; I service.cc:175 StreamExecutor device (0): Host, Default Version; W tensorflow/python/util/util.cc:299 Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them; WARNING:tensorflow: .../resource_variable_ops.py:1781 calling BaseResourceVariable.__init__ with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. Traceback (most recent call last): File "save2.py", line 20, in <module>: inputs=model_l2.input, outputs=[model_l2.get_layer(layer_name).output for layer_name in ['prob', 'embedding']]; File ".../tensorflow_core/python/keras/engine/base_layer.py", line 1557, in input: ' is not connected, no input to return.'); AttributeError: Layer model is not connected, no input to return. |
tensorflowtensorflow | 'accuracy' and tf.metrics.get('accuracy') produce different results | Bug | System information: Have I written custom code: yes (see below); OS platform and distribution: openSUSE; TensorFlow installed from: pip binary within pyenv; TensorFlow version: v2.0.0-rc2-26-g64c3d38 2.0.0; Python version: 3.7.5. Describe the current behavior: the same model behaves differently depending on whether one uses `'accuracy'` or `tf.keras.metrics.get('accuracy')`; see below. Describe the expected behavior: they should behave identically. Code to reproduce the issue: `import numpy as np; import tensorflow.keras as keras; x = np.empty((10, 224, 224, 3)); y = np.empty((10, 2)); model = keras.applications.vgg16.VGG16(weights=None, classes=2); model.compile(optimizer=keras.optimizers.Adam(), loss='categorical_crossentropy', metrics=['accuracy']); model.fit(x, y, epochs=10); model.compile(optimizer=keras.optimizers.Adam(), loss='categorical_crossentropy', metrics=[keras.metrics.get('accuracy')]); model.fit(x, y, epochs=10)`. Example output: Train on 10 samples -- Epoch 1/10: loss: inf - accuracy: 0.9000; epochs 2-10 identical: loss: nan - accuracy: 0.9000. Then Train on 10 samples -- Epoch 1/10: loss: nan - accuracy: 0.0000e+00; epochs 2-10 identical: loss: nan - accuracy: 0.0000e+00. Other info/logs: closely related to #34088. |
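A plausible explanation, offered here as an assumption rather than a confirmed diagnosis: the string `'accuracy'` gets resolved per-output (to categorical accuracy for a softmax output with one-hot targets), while `metrics.get('accuracy')` returns the generic elementwise accuracy function. The two can differ drastically, as this small NumPy sketch shows (both function names below are local illustrations, not the Keras implementations):

```python
import numpy as np

def categorical_accuracy(y_true, y_pred):
    # Fraction of samples whose argmax class matches.
    return float(np.mean(np.argmax(y_true, axis=-1) == np.argmax(y_pred, axis=-1)))

def plain_accuracy(y_true, y_pred):
    # Elementwise equality, averaged -- what a generic accuracy function
    # computes when fed one-hot targets vs. raw probabilities.
    return float(np.mean(y_true == y_pred))

y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8]])  # both argmaxes are correct

print(categorical_accuracy(y_true, y_pred))  # 1.0
print(plain_accuracy(y_true, y_pred))        # 0.0
```

Probabilities are essentially never exactly equal to 0.0 or 1.0, so the elementwise variant reports 0 accuracy even when every prediction is correct by argmax, matching the 0.9000 vs. 0.0000e+00 split in the logs above.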
tensorflowtensorflow | Adapter to allow sparse matrices as target labels in Keras models | Bug | TensorFlow version: 2.0; OS: Debian 9. I am training a Sequential model as shown below: `model = Sequential(); model.add(Embedding(input_dim=170000, output_dim=100, input_length=10)); model.add(GlobalAveragePooling1D()); model.add(Dense(num_targets, activation='softmax'))  # num_targets = 3811`; `model.compile(loss='categorical_crossentropy', optimizer=Adam(learning_rate=0.001)); model.fit(x_train, y_train, batch_size=16384, epochs=200, verbose=1)`. x_train shape: (12528566, 10); y_train shape: (12528566, 3811); x_train type: numpy.ndarray; y_train type: scipy.sparse.csr.csr_matrix. Back in TensorFlow 1.13 the training worked fine, but in TensorFlow 2.0 it throws the error "Failed to find data adapter that can handle input". I can convert y_train to a dense type with `y_train.toarray()`, but that throws "MemoryError: Unable to allocate array with shape (12528566, 3811) and data type int64" -- the size of y_train when dense is 12528566 * 3811 * 8 = 381 GB. A dense y_train works fine, but only for small data. So, in order to make the above work, I changed my loss to `sparse_categorical_crossentropy` and y_train to `np.asarray(y_train.tocoo().col)`, which is basically the index into my label vector that maps to the corresponding output label, and the model trains on TensorFlow 2.0 and predicts as expected: `model = Sequential(); model.add(Embedding(input_dim=170000, output_dim=100, input_length=10)); model.add(GlobalAveragePooling1D()); model.add(Dense(num_targets, activation='softmax'))  # num_targets = 3811`; `model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam(learning_rate=0.001))`. Now I want to train a model with a `binary_crossentropy` loss (a model with sigmoid activation), for which I need to make y_train a dense matrix, since there is no sparse binary crossentropy loss type. If I don't use a dense y_train, I get the error "ValueError: A target array with shape (12528566, 1) was passed for an output of shape (None, 3811) while using as loss binary_crossentropy. This loss expects targets to have the same shape as the output." 1. Why was the adapter to handle sparse matrices removed from TensorFlow 2.0? 2. Why is there no sparse binary crossentropy loss type? |
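One workaround for the memory problem described above, sketched here with hypothetical names: keep the labels sparse in memory and densify only one batch at a time via a generator, so that at most `batch_size x num_targets` label values exist at once. The same idea works for `binary_crossentropy`, since each dense batch is small even when the full dense matrix would be hundreds of gigabytes.

```python
import numpy as np

def dense_batches(x, label_indices, num_targets, batch_size):
    """Yield (x_batch, dense_y_batch) pairs from sparse label storage.

    label_indices[i] holds the active class indices for sample i, so the
    full dense (n, num_targets) matrix is never materialized -- only one
    batch of it at a time.
    """
    n = len(x)
    for start in range(0, n, batch_size):
        idx = range(start, min(start + batch_size, n))
        y = np.zeros((len(idx), num_targets), dtype=np.float32)
        for row, i in enumerate(idx):
            y[row, label_indices[i]] = 1.0  # multi-hot for this sample
        yield x[start:start + batch_size], y

# With Keras this would be fed to training roughly as (untested sketch):
#   model.fit(dense_batches(x_train, y_idx, 3811, 16384), ...)
```

The per-batch densification trades a little CPU time for a memory footprint of `batch_size * num_targets * 4` bytes instead of the full 381 GB.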
tensorflowtensorflow | Update the guide for building TensorFlow Lite with select ops for iOS | Bug | Hi, we followed the iOS guide, which seems a bit outdated as tensorflow/contrib has been removed. We couldn't get a TFLite model with select ops working on iOS, while the same model works well on Android. It seems that Bazel could also be used for building the library together with using a private CocoaPod; we tested this approach but were also unsuccessful. Could the documentation be improved in this regard, with clear guidelines? Ideas and explanations of how to get it working are also welcome under this issue. Do you have an estimation of when we can expect the CocoaPod with the select-ops version for iOS (Swift)? URL(s) with the issue: ios. Description of issue (what needs changing): the current guide could be improved, especially because tensorflow/contrib was removed in the latest versions of TensorFlow; therefore tensorflow/contrib/makefile/build_all_ios_with_tflite.sh is no longer available for building TensorFlow Lite with select-ops support. Clear description: couldn't get a TFLite model with select ops working on iOS by following the iOS guide; the same model works successfully on Android with the nightly build. It would be nice to have a guide for how to build it with Bazel. Correct links: n/a. Parameters defined: n/a. Returns defined: n/a. Raises listed and defined: n/a. Usage example: it would be good if it were explained in more detail what has to be done. Do we only have to compile and link the libraries as explained in the documentation, or do we need to modify the code as well, e.g. adding the Flex delegate as an option for the interpreter? Request visuals, if applicable: no need for visuals. Submit a pull request: n/a |
tensorflowtensorflow | Error while building API docs | Bug | URL(s) with the issue: (see above). Description of issue (what needs changing): got a ValueError while setting up to view TensorFlow-style HTML locally according to the docs. Clear description: Traceback (most recent call last): File "generate2.py", line 284, in <module>: app.run(main); File "C:\Users\Aspire\conda\envs\tensorflow2\lib\site-packages\absl\app.py", line 299, in run: _run_main(main, args); File ".../absl/app.py", line 250, in _run_main: sys.exit(main(argv)); File "generate2.py", line 280, in main: search_hints=FLAGS.search_hints; File "generate2.py", line 273, in build_docs: doc_generator.build(output_dir); File ".../tensorflow_docs/api_generator/generate_lib.py", line 839, in build: site_path=self._site_path; File ".../generate_lib.py", line 507, in write_docs: 'Failed to generate docs for symbol: `{}`'.format(full_name)); ValueError: Failed to generate docs for symbol: `tf.compat.v1.flags.tf_decorator.tf_stack.FileAndLine` |
tensorflowtensorflow | r2.0/2.1, Python 3.8: "AutoGraph could not transform ... and will run it as-is" | Bug | System information: Have I written custom code: yes (attached file code_warn_py38.py.txt; rename it to .py and execute it); OS platform and distribution: Linux Ubuntu 18.04; TensorFlow installed from: source; TensorFlow version: this problem occurs with both r2.0 built from head and r2.1rc0; Python version: 3.8 (virtual environment) -- the problem does not occur with Python 3.7.3; Bazel version: 0.26.1; GCC/compiler version: 7.4.0; CUDA/cuDNN version: CUDA 10, cuDNN 7.6.5; GPU model and memory: NVIDIA GeForce RTX 2080 Ti. Describe the current behavior: `export AUTOGRAPH_VERBOSITY=10`, run the attached Python script (renamed to .py) with `python code_warn_py38.py > output.txt`, and read the output in the attached output.txt. Describe the expected behavior: there are no warning messages when the script is run with Python 3.7.3. Code to reproduce the issue: see both attached files, code_warn_py38.py.txt and output.txt. |
tensorflowtensorflow | Model mismatch between create_low_latency_conv_model and cnn-one-fstride4 | Bug | (L337-L339) But the strides are actually one instead of four (L388-L389), which results in a huge increase in the number of parameters of the subsequent FC layer. |
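The size of the parameter blow-up can be checked with quick arithmetic. The shapes below are assumptions chosen to be plausible for the speech-commands setting (98 frames x 40 frequency bins, an 8x8 filter, 186 conv filters, a 128-unit FC layer), not values taken from the report; the point is only the stride-1 vs. stride-4 ratio.

```python
def conv_output_len(input_len, stride):
    # 'SAME'-padding style output length: ceil(input_len / stride).
    return -(-input_len // stride)

def fc_params(time_len, freq_len, stride_t, stride_f, num_filters, fc_units):
    """Weights in the FC layer that consumes the flattened conv output."""
    out_t = conv_output_len(time_len, stride_t)
    out_f = conv_output_len(freq_len, stride_f)
    return out_t * out_f * num_filters * fc_units

# Frequency stride 4 (as the paper's cnn-one-fstride4) vs. stride 1
# (as the code actually uses, per the report).
stride4 = fc_params(98, 40, 1, 4, 186, 128)
stride1 = fc_params(98, 40, 1, 1, 186, 128)
print(stride1 // stride4)  # 4
```

With a frequency stride of 4, the conv output is 4x narrower in frequency, so the following fully connected layer carries 4x fewer weights; using stride 1 quadruples its parameter count, which is the "huge increase" the report describes.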
tensorflowtensorflow | Multiple public header files are not usable; dependencies are missing | Bug | Hi, the following files include header files from a path which does not exist: `cd /usr/local/lib/python3.6/dist-packages/tensorflow/include && grep -Rnw -e 'third_party/gpus/cuda'` gives tensorflow/stream_executor/gpu/gpu_types.h:31: `#include "third_party/gpus/cuda/include/cuComplex.h"`; gpu_types.h:32: `#include "third_party/gpus/cuda/include/cuda.h"`; tensorflow/core/util/gpu_kernel_helper.h:22: `#include "third_party/gpus/cuda/include/cuda_fp16.h"`; tensorflow/core/util/gpu_device_functions.h:34: `#include "third_party/gpus/cuda/include/cuComplex.h"`; gpu_device_functions.h:35: `#include "third_party/gpus/cuda/include/cuda.h"`. This is problematic because the headers can't be included, as the path requested does not exist: `ls /usr/local/lib/python3.6/dist-packages/tensorflow/include/third_party/gpus/cuda/include` gives "ls: cannot access ...: No such file or directory". The problem was caused by this commit (diff 3146a2ef48234a027b42c52f71bcb177). This issue leads to multiple problems as soon as a CUDA GPU is used with the C++ API. Workaround: as a temporary solution, one may create a symbolic link from the CUDA path; nonetheless, modifying the system should not be a permanent solution: `mkdir -p /usr/local/lib/python3.6/dist-packages/tensorflow/include/third_party/gpus/cuda && ln -s /usr/local/cuda/include /usr/local/lib/python3.6/dist-packages/tensorflow/include/third_party/gpus/cuda && ls .../third_party/gpus/cuda/include`. Solution: instead of `#include "third_party/gpus/cuda/include/cuComplex.h"` and `#include "third_party/gpus/cuda/include/cuda.h"`, please do `#include "cuda/include/cuComplex.h"` and `#include "cuda/include/cuda.h"`. |
tensorflowtensorflow | Error trying to convert a model using full integer quantization | Bug | System information: OS platform and distribution: Linux Ubuntu 18.04.3; TensorFlow installed from: `pip3 install --upgrade tensorflow==1.15`; TensorFlow version: 1.15. Provide the text output from tflite_convert: "Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: FULLY_CONNECTED, SOFTMAX. Here is a list of operators for which you will need custom implementations: IdentityN." Traceback (most recent call last): File "/usr/local/bin/toco_from_protos", line 8, in <module>: sys.exit(main()); File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 89, in main: app.run(main=execute, argv=[sys.argv[0]] + unparsed); File ".../tensorflow_core/python/platform/app.py", line 40, in run: _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef); File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run: _run_main(main, args); File ".../absl/app.py", line 250, in _run_main: sys.exit(main(argv)); File ".../toco_from_protos.py", line 52, in execute: enable_mlir_converter; Exception: "We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a GitHub issue at https://github.com/tensorflow/tensorflow/issues and pasting the following: Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: FULLY_CONNECTED, SOFTMAX. Here is a list of operators for which you will need custom implementations: IdentityN." Model: `import pathlib; import tensorflow as tf; mnist = tf.keras.datasets.mnist; (x_train, y_train), (x_test, y_test) = mnist.load_data(); x_train, x_test = x_train / 255.0, x_test / 255.0; model = tf.keras.models.Sequential([tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax')]); model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']); model.fit(x_train, y_train, epochs=5); model.evaluate(x_test, y_test, verbose=2)`. Save the model into SavedModel format: `saved_model_dir = pathlib.Path('saved_model'); tf.saved_model.save(model, str(saved_model_dir))`. Convert the model: `import pathlib; import tensorflow as tf; mnist = tf.keras.datasets.mnist; x_train = mnist.load_data()[0][0] / 255.0; saved_model_dir = pathlib.Path('saved_model'); images = tf.cast(x_train, tf.float32); mnist_ds = tf.data.Dataset.from_tensor_slices(images).batch(1); def representative_dataset_gen(): for input_value in mnist_ds.take(100): yield [input_value]; converter = tf.lite.TFLiteConverter.from_saved_model(str(saved_model_dir)); converter.optimizations = [tf.lite.Optimize.DEFAULT]; converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]; converter.inference_input_type = tf.uint8; converter.inference_output_type = tf.uint8; converter.representative_dataset = representative_dataset_gen; tflite_quant_model = converter.convert(); tflite_quant_model_file = saved_model_dir / 'mnist_post_quant_model_io.tflite'; tflite_quant_model_file.write_bytes(tflite_quant_model)`. Any other info/logs: /usr/bin/python3.6 /home/mapeima/code/python/edgetpu/model_converter.py; 2019-11-19 13:34:02 W tensorflow/stream_executor/platform/default/dso_loader.cc:55 Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; E tensorflow/stream_executor/cuda/cuda_driver.cc:318 failed call to cuInit: UNKNOWN ERROR (303); I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156 kernel driver does not appear to be running on this host: /proc/driver/nvidia/version does not exist; I tensorflow/core/platform/cpu_feature_guard.cc:142 Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA; I tensorflow/core/platform/profile_utils/cpu_utils.cc:94 CPU Frequency: 2712000000 Hz; I tensorflow/compiler/xla/service/service.cc:168 XLA service 0x3b64140 initialized for platform Host (this does not guarantee that XLA will be used); I service.cc:176 StreamExecutor device (0): Host, Default Version. WARNING:tensorflow: .../tensorflow_core/lite/python/convert_saved_model.py:60 load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version. Instructions for updating: this function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load; there will be a new function for importing SavedModels in TensorFlow 2.0. I tensorflow/core/grappler/devices.cc:55 Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0; I tensorflow/core/grappler/clusters/single_machine.cc:356 Starting new session; I tensorflow/core/grappler/optimizers/meta_optimizer.cc:786 Optimization results for grappler item: graph_to_optimize; I meta_optimizer.cc:788 function_optimizer: Graph size after: 189 nodes (144), 355 edges (290), time = 4.452ms; I meta_optimizer.cc:788 function_optimizer: function_optimizer did nothing, time = 0.08ms. WARNING:tensorflow: .../lite/python/util.py:249 convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version. Instructions for updating: use tf.compat.v1.graph_util.convert_variables_to_constants. WARNING:tensorflow: .../graph_util_impl.py:277 extract_sub_graph is deprecated; use tf.compat.v1.graph_util.extract_sub_graph. .../lite/python/lite.py:854 UserWarning: Property target_ops is deprecated, please use target_spec.supported_ops instead. I grappler/devices.cc:55 Number of eligible GPUs: 0; I single_machine.cc:356 Starting new session; I meta_optimizer.cc:786 Optimization results for grappler item: graph_to_optimize; I meta_optimizer.cc:788 constant_folding: Graph size after: 34 nodes (-8), 57 edges (-12), time = 3.248ms; I meta_optimizer.cc:788 constant_folding: Graph size after: 34 nodes (0), 57 edges (0), time = 0.73ms. (Repeated 16x:) "Use tf.function or defun to decorate the function." Traceback (most recent call last): File "/home/mapeima/code/python/edgetpu/model_converter.py", line 30, in <module>: tflite_quant_model = converter.convert(); File ".../lite/python/lite.py", line 983, in convert: **converter_kwargs; File ".../lite/python/convert.py", line 449, in toco_convert_impl: enable_mlir_converter=enable_mlir_converter; File ".../convert.py", line 200, in toco_convert_protos: raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr)); tensorflow.lite.python.convert.ConverterError: See console for info. 2019-11-19 13:34:04 I tensorflow/lite/toco/import_tensorflow.cc:659 Converting unsupported operation: IdentityN; I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39 Before Removing unused ops: 17 operators, 26 arrays (0 quantized); I graph_transformations.cc:39 Before general graph transformations: 17 operators, 26 arrays (0 quantized); I graph_transformations.cc:39 After general graph transformations pass 1: 6 operators, 12 arrays (0 quantized); I graph_transformations.cc:39 After general graph transformations pass 2: 5 operators, 11 arrays (0 quantized)
toco graph transformation graph transformation cc 39 after general graph transformation pass 3 4 operator 9 array 0 quantize 2019 11 19 13 34 04 121224 I tensorflow lite toco graph transformation graph transformation cc 39 before group bidirectional sequence lstm rnn 4 operator 9 array 0 quantize 2019 11 19 13 34 04 121241 I tensorflow lite toco graph transformation graph transformation cc 39 before dequantization graph transformation 4 operator 9 array 0 quantize 2019 11 19 13 34 04 121273 I tensorflow lite toco allocate transient array cc 345 total transient array allocate size 576 byte theoretical optimal value 576 byte 2019 11 19 13 34 04 121289 I tensorflow lite toco toco tooling cc 439 estimate count of arithmetic op 204042 op equivalently 102021 mac 2019 11 19 13 34 04 121293 I tensorflow lite toco toco tooling cc 454 number of parameter 101770 2019 11 19 13 34 04 121470 e tensorflow lite toco toco tooling cc 481 we be continually in the process of add support to tensorflow lite for more op it would be helpful if you could inform we of how this conversion go by open a github issue at and paste the follow some of the operator in the model be not support by the standard tensorflow lite runtime if those be native tensorflow operator you might be able to use the extended runtime by pass enable select tf op or by set target op tflite builtin select tf op when call tf lite tfliteconverter otherwise if you have a custom implementation for they you can disable this error with allow custom op or by set allow custom op true when call tf lite tfliteconverter here be a list of builtin operator you be use fully connect softmax here be a list of operator for which you will need custom implementation identityn traceback most recent call last file usr local bin toco from protos line 8 in sys exit main file usr local lib python3 6 dist package tensorflow core lite toco python toco from protos py line 89 in main app run main execute argv sys argv 0 unparse file usr local lib 
python3 6 dist package tensorflow core python platform app py line 40 in run run main main argv argv flag parser parse flag tolerate undef file usr local lib python3 6 dist package absl app py line 299 in run run main main args file usr local lib python3 6 dist package absl app py line 250 in run main sys exit main argv file usr local lib python3 6 dist package tensorflow core lite toco python toco from protos py line 52 in execute enable mlir converter exception we be continually in the process of add support to tensorflow lite for more op it would be helpful if you could inform we of how this conversion go by open a github issue at and paste the follow some of the operator in the model be not support by the standard tensorflow lite runtime if those be native tensorflow operator you might be able to use the extended runtime by pass enable select tf op or by set target op tflite builtin select tf op when call tf lite tfliteconverter otherwise if you have a custom implementation for they you can disable this error with allow custom op or by set allow custom op true when call tf lite tfliteconverter here be a list of builtin operator you be use fully connect softmax here be a list of operator for which you will need custom implementation identityn process finish with exit code 1 |
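The converter error above boils down to partitioning the model's ops into TFLite builtins versus everything else. As an illustrative sketch of that triage (the `SUPPORTED_BUILTINS` set below is a made-up stand-in, not TFLite's actual operator table):

```python
# Hypothetical sketch of the converter's op triage; the builtin set is
# illustrative, not TFLite's real supported-operator list.
SUPPORTED_BUILTINS = {"FULLY_CONNECTED", "SOFTMAX", "CONV_2D", "RESHAPE"}

def triage_ops(model_ops):
    """Split ops into TFLite builtins and ops needing Flex/custom kernels."""
    builtins = [op for op in model_ops if op in SUPPORTED_BUILTINS]
    unsupported = [op for op in model_ops if op not in SUPPORTED_BUILTINS]
    return builtins, unsupported

builtins, unsupported = triage_ops(["FULLY_CONNECTED", "SOFTMAX", "IdentityN"])
print(unsupported)  # -> ['IdentityN']
```

Anything landing in `unsupported` is what forces the choice the error message describes: select TF ops (Flex) or a custom op implementation.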
tensorflowtensorflow | tensorflow.python.keras.testing_utils.layer_test breaks when a custom layer returns a list/tuple of tensors | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: gLinux
- Mobile device (if the issue happens on a mobile device): N/A
- TensorFlow installed from (source or binary): pip3 install
- TensorFlow version: 2.0.0
- Python version: 3.6

Describe the current behavior: when we create a custom tf.keras layer, say MyCustomLayer, that returns a list/tuple of tensors from its call method, calling layer_test(MyCustomLayer, input_shape=expected_input_shape) breaks, because layer_test does not expect a list/tuple as the output of the custom layer's call method. By TensorFlow's Keras layer documentation for call, a list/tuple of tensors is a valid output.

Code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.python.keras.testing_utils import layer_test

# A dummy layer that just returns a list of its inputs
class MyCustomLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        return [inputs, inputs]

layer_test(MyCustomLayer, input_shape=(1, 2))
```

Describe the expected behavior: the above snippet should not raise the error in the log attached below. Potential solution: check the input x in the dtype method in tensorflow/python/keras/backend.py. Happy to contribute if this is helpful.

Other info / logs (error log):

```
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
----> 1 layer_test(MyCustomLayer, input_shape=(1, 2))

tensorflow-2.0.0/python3.6/tensorflow_core/python/framework/test_util.py in decorated(self, *args, **kwargs)
   1583   original_var = os.environ.get('TF_CUDNN_DETERMINISTIC')
   1584   os.environ['TF_CUDNN_DETERMINISTIC'] = 'true'
-> 1585   result = f(self, *args, **kwargs)
   1586   os.environ['TF_CUDNN_DETERMINISTIC'] = original_var
   1587   return result

tensorflow-2.0.0/python3.6/tensorflow_core/python/keras/testing_utils.py in layer_test(layer_cls, kwargs, input_shape, input_dtype, input_data, expected_output, expected_output_dtype, expected_output_shape, validate_training, adapt_data)
    140   x = keras.layers.Input(shape=input_shape[1:], dtype=input_dtype)
    141   y = layer(x)
--> 142   if keras.backend.dtype(y) != expected_output_dtype:
    143     raise AssertionError('When testing layer %s, for input %s, found output '
    144                          'dtype=%s but expected to find %s.\nFull kwargs: %s'

tensorflow-2.0.0/python3.6/tensorflow_core/python/keras/backend.py in dtype(x)
   1247
   1248
-> 1249   return x.dtype.base_dtype.name
   1250
   1251

AttributeError: 'list' object has no attribute 'dtype'
``` |
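A minimal sketch of the fix proposed above: tolerate list/tuple outputs before reading `.dtype`. The helper name and the duck-typed `FakeTensor` stand-in are hypothetical, not the actual Keras code:

```python
# Hypothetical sketch of making a dtype check tolerate list/tuple outputs,
# as suggested for keras.backend.dtype. FakeTensor is a toy stand-in for
# a real tensor with a .dtype attribute.
class FakeTensor:
    def __init__(self, dtype):
        self.dtype = dtype

def output_dtypes(outputs):
    """Return the dtype(s) of a layer output, which may be a single
    tensor or a list/tuple of tensors."""
    if isinstance(outputs, (list, tuple)):
        return [t.dtype for t in outputs]
    return outputs.dtype

single = FakeTensor("float32")
print(output_dtypes(single))            # -> 'float32'
print(output_dtypes([single, single]))  # -> ['float32', 'float32']
```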
tensorflowtensorflow | TFLite GL delegate build is leaking the Foundation library | Bug | Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template

System information:
- OS platform and distribution: macOS 10.14.6 Mojave
- Mobile device: n/a
- TensorFlow installed from (source or binary): source
- TensorFlow version: v1.15.0
- Python version: 3.7.4, installed using virtualenv/pip
- Bazel version (if compiling from source): 0.26.0
- NDK: r18

Describe the problem: the Foundation library is leaking into the GL delegate on v1.15.0. This is related to an earlier bug, but it didn't happen on v1.14.0.

Any other info / logs:

```
joe@macbook-pro-3 gpu % bazel build -c opt --config android_arm64 --copt -Os --copt -DTFLITE_GPU_BINARY_RELEASE --copt -fvisibility=hidden --linkopt -s --strip always :libtensorflowlite_gpu_gl.so
Starting local Bazel server and connecting to it...
INFO: Options provided by the client: Inherited 'common' options: --isatty=1 --terminal_columns=238
INFO: Reading rc options for 'build' from /Users/jbowser/tensorflow_source/tensorflow/.bazelrc: 'build' options: --apple_platform_type=macos --define framework_shared_object=true --define open_source_build=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone --strategy=Genrule=standalone -c opt --announce_rc --define=grpc_no_ares=true --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include
INFO: Reading rc options for 'build' from /Users/jbowser/tensorflow_source/tensorflow/.tf_configure.bazelrc: 'build' options: --action_env PYTHON_BIN_PATH=/usr/local/opt/python3/bin/python3.7 --action_env PYTHON_LIB_PATH=/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages --python_path=/usr/local/opt/python3/bin/python3.7 --config=xla --action_env ANDROID_NDK_HOME=/Users/jbowser/ndk_build/android-ndk-r18 --action_env ANDROID_NDK_API_LEVEL=18 --action_env ANDROID_BUILD_TOOLS_VERSION=29.0.2 --action_env ANDROID_SDK_API_LEVEL=29 --action_env ANDROID_SDK_HOME=/Users/jbowser/Library/Android/sdk --action_env TF_CONFIGURE_IOS=0
INFO: Found applicable config definition build:xla in file /Users/jbowser/tensorflow_source/tensorflow/.tf_configure.bazelrc: --define with_xla_support=true
INFO: Found applicable config definition build:android_arm64 in file /Users/jbowser/tensorflow_source/tensorflow/.bazelrc: --config=android --cpu=arm64-v8a --fat_apk_cpu=arm64-v8a
INFO: Found applicable config definition build:android in file /Users/jbowser/tensorflow_source/tensorflow/.bazelrc: --crosstool_top=//external:android/crosstool --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
INFO: Analysed target //tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_gl.so (45 packages loaded, 3483 targets configured).
INFO: Found 1 target...
INFO: Deleting stale sandbox base /private/var/tmp/_bazel_jbowser/e70d978bebad2fab57f73474f3b8c22a/sandbox
ERROR: /Users/jbowser/tensorflow_source/tensorflow/tensorflow/lite/delegates/gpu/BUILD:110:1: Linking of rule '//tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_gl.so' failed (Exit 1)
external/androidndk/ndk/toolchains/aarch64-linux-android-4.9/prebuilt/darwin-x86_64/lib/gcc/aarch64-linux-android/4.9.x/../../../../aarch64-linux-android/bin/ld: cannot find Foundation: No such file or directory
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Target //tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_gl.so failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 68.598s, Critical Path: 26.36s
INFO: 227 processes: 227 local.
FAILED: Build did NOT complete successfully
``` |
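Conceptually, the bug is that a macOS-only linker input (the Foundation framework) survives into an Android link line. A toy sketch of the kind of platform filtering that would prevent it; the flag list and the two-token `-framework <name>` handling are illustrative, not Bazel's real linkopts machinery:

```python
# Toy sketch: drop macOS-only '-framework <name>' pairs from a link line
# when the target platform is Android. Purely illustrative.
MACOS_ONLY_FLAGS = {"-framework"}

def filter_linkopts(linkopts, target_os):
    """Return linkopts with macOS-only framework pairs removed for Android."""
    if target_os != "android":
        return list(linkopts)
    out, skip_next = [], False
    for flag in linkopts:
        if skip_next:
            skip_next = False  # this token was the framework name
            continue
        if flag in MACOS_ONLY_FLAGS:
            skip_next = True   # also drop the framework name that follows
            continue
        out.append(flag)
    return out

print(filter_linkopts(["-s", "-framework", "Foundation", "-lGLESv3"], "android"))
# -> ['-s', '-lGLESv3']
```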
tensorflowtensorflow | TF 2.0: tf.reduce_mean crashes Python (floating point exception) if the count becomes zero due to overflow | Bug | System information:
- Have I written custom code: yes
- OS platform and distribution: Ubuntu 18.04
- TensorFlow installed from: binary (source tested as well)
- TensorFlow version: 2.0
- Python version: 3.7

Describe the current behavior: running the script

```python
import tensorflow as tf

data = tf.zeros((256,), dtype=tf.uint8)
print(tf.reduce_mean(data))
```

crashes the Python interpreter ("Floating point exception (core dumped)"), likely because the element count 256 overflows in uint8 to 0, leading to an uncaught division by zero.

Describe the expected behavior: a result (possibly incorrect due to the too-small dtype), or some other way of dealing with the issue (e.g. an assertion error or other exception), but no crashing of Python. Ideally tf.reduce_mean could yield correct results for non-floating-point dtypes as well.

Code to reproduce the issue: see above. |
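The suspected mechanism, the element count wrapping to 0 in uint8 arithmetic, can be reproduced with plain NumPy; casting to a floating dtype before averaging sidesteps it. This is a workaround sketch, not the fix TensorFlow itself would apply:

```python
import numpy as np

# The element count 256 wraps to 0 under uint8 arithmetic, so a mean
# computed as sum / count in uint8 divides by zero:
count = np.array(256, dtype=np.int64).astype(np.uint8)
print(int(count))  # -> 0

# Workaround: cast to a wider/floating dtype before averaging.
data = np.zeros(256, dtype=np.uint8)
mean = data.astype(np.float32).mean()
print(mean)  # -> 0.0
```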
tensorflowtensorflow | GPU device not found in Colab | Bug | My code worked well with a GPU in Colab yesterday, but this morning it became very slow, so I suspect the CPU is being used despite the hardware accelerator being explicitly set to GPU in "Change runtime type". So I checked the availability of the GPU following the tutorial code chunk:

```python
%tensorflow_version 2.x
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
    raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
```

Indeed, I get "SystemError: GPU device not found". I tried this with different sessions and Google user accounts, with the same result; you should be able to replicate this in a new Colab session. Maybe some TensorFlow update is involved? It could be that the Colab session provides a GPU but TensorFlow can't see it (e.g. via tf.test.gpu_device_name() or tf.test.is_gpu_available()). Please help, thank you. |
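When a runtime claims a GPU but TensorFlow cannot see it, one TensorFlow-independent sanity check (sketched here with only the standard library; whether it helps on any given Colab runtime is an assumption) is whether the NVIDIA driver tooling is even present on the machine:

```python
import shutil

def nvidia_smi_path():
    """Return the path to nvidia-smi if the NVIDIA driver tooling is
    installed on this runtime, else None."""
    return shutil.which("nvidia-smi")

path = nvidia_smi_path()
print("driver tooling present" if path else "no nvidia-smi on PATH")
```

If `nvidia-smi` is missing, the problem is upstream of TensorFlow (no GPU attached to the runtime at all); if it is present but TensorFlow still reports no device, the issue is in the TensorFlow/CUDA stack.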
tensorflowtensorflow | Memory leak in custom tensors | Bug | Hello everyone (cc @alextp), there is a memory leak for custom tensors. You can find an MWE here. Long story short: in GPflow we define parameters of the model as custom tensors (the Parameter class, base.py#L36). A Parameter is a tf.Module container with a single tf.Variable and a tfp bijector inside. The custom tensor behaves as a standard TensorFlow tensor, except that it applies a transformation to the internal tf.Variable and returns the forward-transformed tensor of the variable every time someone (the user, or TensorFlow functions) decides to read it.

When I use a model with the Parameter class and compute gradients w.r.t. the loss, I observe constant memory growth. When I use the same model with plain variables, doing the transformation of the variables manually, the memory stays the same. If I do the forward mode only, there is no issue; it happens only when GradientTape is involved.

PS: memory_profiler doesn't work in Colab; you will have to run it on your computer. macOS 10.14.6, Python 3.7.0. TensorFlow version:

```bash
python -c "import tensorflow as tf; print(tf.__git_version__, tf.__version__)"
# v2.0.0-rc2-26-g64c3d382ca 2.0.0
```

Middle of the training:

```
Computing gradients
Iteration 56 has passed with loss 494609555730402.0
Filename: mwe2.py

Line #    Mem usage    Increment   Line Contents
================================================
   118    484.5 MiB    484.5 MiB   @profile
   119                             def train_step(compute_grads: bool = True):
   120    484.6 MiB      0.1 MiB       data = next(dataset_it)
   121    484.6 MiB      0.0 MiB       variables = model.trainable_variables
   122    484.6 MiB      0.0 MiB       with tf.GradientTape(watch_accessed_variables=False) as tape:
   123    484.6 MiB      0.0 MiB           tape.watch(variables)
   124    490.6 MiB      6.1 MiB           loss = loss_fn(data)
   125
   126    490.6 MiB      0.0 MiB       if compute_grads:
   127    490.6 MiB      0.0 MiB           tf.print(f"Computing gradients")
   128    491.2 MiB      0.6 MiB           grads = tape.gradient(loss, variables)
   129    491.2 MiB      0.0 MiB           grads_and_vars = zip(grads, variables)
   130    491.2 MiB      0.0 MiB           opt.apply_gradients(grads_and_vars)
   131
   132    491.2 MiB      0.0 MiB       tf.print(f"Iteration {opt.iterations.numpy()} has passed with loss {loss.numpy()}")
```

At the end of the training (iteration 100, loss 273947779893557.2), the same profile starts at 514.8 MiB and ends at 521.5 MiB, with the same +6.1 MiB increment at the loss computation (line 124) and +0.6 MiB at tape.gradient (line 128): memory keeps growing by roughly the same amount every iteration. |
tensorflowtensorflow | RuntimeError: dst tensor is not initialized, raised while saving a checkpoint (not while training) | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: Linux Ubuntu 18 LTS
- TensorFlow installed from (source or binary): binary
- TensorFlow version: v2.0.0-rc2-26-g64c3d38 (2.0.0)
- Python version: 3.6
- CUDA/cuDNN version: NVIDIA driver 418, CUDA 10.0, libcudnn7 7.6.2.24-1+cuda10.0, libcudnn7-dev 7.6.2.24-1+cuda10.0, libnvinfer5 5.1.5-1+cuda10.0, libnvinfer-dev 5.1.5-1+cuda10.0
- GPU model and memory: T4, 16 GB

Describe the current behavior: only 58% of the GPU memory is used (detected by using the memory-growth option):

```
GPU total memory:           15843721216
GPU free memory:             6628966400
GPU used memory:             9214754816
GPU memory percentage used:  58%
```

```python
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)
```

Training runs fine, but it crashes when saving the model as a checkpoint:

```python
def createCheckpointManager(self):
    if self.model_weights_path != "":
        checkpoint_dir = self.model_weights_path + self.training_session_id + "/checkpoints/ckpt"
        print("Storing checkpoints in: " + checkpoint_dir)
        self.checkpoint = tf.train.Checkpoint(optimizer=self.optimizer,
                                              encoder_network=self.encoder_network,
                                              decoder_network=self.decoder_network)
        self.manager = tf.train.CheckpointManager(self.checkpoint, checkpoint_dir, max_to_keep=3)
    else:
        print("Model checkpoint path not found, not saving checkpoints")

# later, the crash happens in:
checkpointManager.save()
```

Log:

```
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/home/v1.py", line 107, in runTrainingEpoch
    storeCheckpoint()
  File "/home/v1.py", line 179, in storeCheckpoint
    self.manager.save()
  File ".../tensorflow_core/python/training/checkpoint_management.py", line 720, in save
    save_path = self._checkpoint.write(prefix)
  File ".../tensorflow_core/python/training/tracking/util.py", line 1819, in write
    output = self._saver.save(file_prefix=file_prefix)
  File ".../tensorflow_core/python/training/tracking/util.py", line 1155, in save
    file_prefix=file_prefix, object_graph_tensor=object_graph_tensor)
  File ".../tensorflow_core/python/training/tracking/util.py", line 1103, in _save_cached_when_graph_building
    save_op = saver.save(file_prefix)
  File ".../tensorflow_core/python/training/saving/functional_saver.py", line 230, in save
    sharded_saves.append(saver.save(shard_prefix))
  File ".../tensorflow_core/python/training/saving/functional_saver.py", line 69, in save
    tensors.append(spec.tensor)
  File ".../tensorflow_core/python/training/saving/saveable_object.py", line 52, in tensor
    return self._tensor() if callable(self._tensor) else self._tensor
  File ".../tensorflow_core/python/training/saving/saveable_object_util.py", line 94, in f
    return array_ops.identity(x)
  File ".../tensorflow_core/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File ".../tensorflow_core/python/ops/array_ops.py", line 209, in identity
    ret = input._copy()  # pylint: disable=protected-access
  File ".../tensorflow_core/python/framework/ops.py", line 1015, in _copy
    new_tensor = self._copy_nograd(ctx, device_name)
  File ".../tensorflow_core/python/framework/ops.py", line 1008, in _copy_nograd
    new_tensor = self._copy_to_device(device_name)
RuntimeError: dst tensor is not initialized.
```

Code to reproduce the issue: the model is the exact implementation of the CVAE tutorial from the TensorFlow website:

```python
self.optimizer = tf.keras.optimizers.Adam(1e-4)

self.encoder_network = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(self.width, self.height, 3)),
    tf.keras.layers.Conv2D(filters=32, kernel_size=4, strides=(2, 2), activation='relu'),
    tf.keras.layers.Conv2D(filters=64, kernel_size=2, strides=(2, 2), activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(self.latent_dim + self.latent_dim),
])

# Conv2DTranspose formula: output_shape = input_shape * strides
self.decoder_network = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(self.latent_dim,)),
    tf.keras.layers.Dense(units=64 * 64 * 128, activation=tf.nn.relu),
    tf.keras.layers.Reshape(target_shape=(64, 64, 128)),
    tf.keras.layers.Conv2DTranspose(filters=64, kernel_size=3, strides=(2, 2), padding="same", activation='relu'),
    tf.keras.layers.Conv2DTranspose(filters=32, kernel_size=3, strides=(2, 2), padding="same", activation='relu'),
    tf.keras.layers.Conv2DTranspose(filters=3, kernel_size=2, strides=(1, 1), padding="same"),
])
```

Because the layer sizes were increased, I suspect it has something to do with memory: earlier issues refer the "dst tensor" error to a GPU memory problem. However, that seems unlikely here because only 58% is utilized and training runs fine; therefore there seems to be an issue with saving the checkpoint. |
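The comment in the decoder above uses the rule of thumb for 'same'-padded transposed convolutions: output_size = input_size * stride. A few lines of arithmetic (an illustrative helper, not part of the issue's code) confirm that the decoder's Reshape to 64x64 followed by strides 2, 2, 1 recovers a 256x256 output, assuming the model's input images are 256x256:

```python
def conv2d_transpose_same_out(size, stride):
    """Spatial output size of a 'same'-padded Conv2DTranspose layer."""
    return size * stride

# Decoder above: Reshape -> 64x64 feature map, then strides 2, 2, 1.
size = 64
for stride in (2, 2, 1):
    size = conv2d_transpose_same_out(size, stride)
print(size)  # -> 256
```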
tensorflowtensorflow | Dynamic batch size bucketing for GPUs in MirroredStrategy in TF 2.0, and avoiding NaN | Bug | TensorFlow version: 2. Hi all, I am trying to do bucketing on my TensorFlow dataset so that I can change my batch size dynamically, based on different sequence lengths, in order to exploit GPU performance well. I am using strategy = tf.distribute.MirroredStrategy(). Assume I have 8 GPUs (n_gpus = 8). According to MirroredStrategy, if we have a batch size of 16 (batch_size = 16), it will distribute batch_size / n_gpus = 2 examples to each of the GPUs. Now assume the batch size is dynamic and changes on each iteration over the dataset iterator. In order to make it distributable whenever the batch size is not a multiple of n_gpus, I pad it up to the nearest possible multiple of n_gpus: if batch_size = 6, I pad with 2 rows of zeros to make it 8. Everything works well on CPUs, but when I use iterator = strategy.experimental_distribute_dataset(input_data), things become difficult. I am providing the code:

```python
import tensorflow as tf
import numpy as np

def create_dummy_squad_data(n, max_seq_length):
    vocab_size = 30000
    np.random.seed(1)
    input_ids = np.random.randint(1, vocab_size - 1, (n, max_seq_length))
    input_mask = np.random.randint(0, 2, (n, max_seq_length))
    token_type_ids = np.random.randint(1, 2, (n, max_seq_length))
    start_labels = np.random.randint(1, max_seq_length - 1, (n,))
    end_labels = np.random.randint(1, max_seq_length - 1, (n,))
    return input_ids, input_mask, token_type_ids, start_labels, end_labels

def map_to_dict(input_ids, input_mask, token_type_ids, start_labels, end_labels):
    inputs = {'input_ids': input_ids,
              'input_mask': input_mask,
              'input_segment_ids': token_type_ids}
    labels = {'start_positions': start_labels,
              'end_positions': end_labels}
    return inputs, labels

def pad_batch(inputs, labels, batch_multiple=8):
    batch_size = tf.shape(inputs['input_ids'])[0]
    mod = batch_size % batch_multiple
    has_mod = tf.cast(tf.cast(mod, tf.bool), tf.int32)
    batch_padding = (batch_multiple * has_mod) - mod

    inputs_padded = {}
    for k, feature in inputs.items():
        rank = len(feature.shape)
        paddings = [[0, 0] for _ in range(rank)]
        paddings[0][1] = batch_padding
        inputs_padded[k] = tf.pad(feature, paddings)

    labels_padded = {}
    for k, feature in labels.items():
        rank = len(feature.shape)
        paddings = [[0, 0] for _ in range(rank)]
        paddings[0][1] = batch_padding
        labels_padded[k] = tf.pad(feature, paddings)

    return inputs_padded, labels_padded

# Main code starts from here (CPU code)
input_ids, input_mask, token_type_ids, start_labels, end_labels = create_dummy_squad_data(5, 100)
d = tf.data.Dataset.from_tensor_slices((input_ids, input_mask, token_type_ids, start_labels, end_labels))
d = d.map(map_to_dict)
batch_size_per_gpu = 1
num_gpus = 8
batch_size = batch_size_per_gpu * num_gpus
d = d.batch(batch_size)
d = d.map(pad_batch)
for item in d:
    pass
# The last batch of items would have batch size 4 if we did not pad;
# as I am using padding, things are good.
```

On GPUs, this is what I want:

```python
with strategy.scope():
    iterator = strategy.experimental_distribute_dataset(d)
    all_batch_data = []
    # curious to know how things work
    for batch_data in iterator:
        all_batch_data.append(batch_data)
print("batch data shape per GPU:", all_batch_data[0][0]['input_ids'].values[0].shape)
```

As you see below, GPUs 0, 1, 2, 3 each take 4 elements and pad them individually, which is not what I expected, and GPUs 4, 5, 6, 7 do not receive anything. At modeling time the last 4 GPUs return NaN as the loss, and when I use strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None) it becomes NaN. Here per_replica_loss holds the losses from the 8 GPUs; the first 4 have values, but the last 4 are NaN.

Sample output:

```
all_batch_data[0][0]['input_ids'] =
PerReplica: {
  0: /job:localhost/replica:0/task:0/device:GPU:0 ...,
  1: /job:localhost/replica:0/task:0/device:GPU:1 ...,
  2: /job:localhost/replica:0/task:0/device:GPU:2 ...,
  3: /job:localhost/replica:0/task:0/device:GPU:3 ...,
  4: /job:localhost/replica:0/task:0/device:GPU:4 ...,
  5: /job:localhost/replica:0/task:0/device:GPU:5 ...,
  6: /job:localhost/replica:0/task:0/device:GPU:6 ...,
  7: /job:localhost/replica:0/task:0/device:GPU:7 ...
}
```

Expected behaviour: if I do not use pad_batch and batch_size = 8, each GPU gets 1 example and things are good. But as I have dynamic batch sizes, not all batch sizes are multiples of n_gpus = 8. When I try to make sure the batch size is a multiple of n_gpus, things get messed up. If the last batch is 4, we pad 4 rows of zeros to make it 8, and I expect it to be distributed 1 example per GPU, with the last 4 GPUs each getting a zero-padded row, so that they do not return NaN and things are good. But how could I achieve this? @guptapriya, this might be an extension of an earlier issue. Sorry if I confused you with the long explanation. |
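A framework-free sketch of the padding-plus-mask idea discussed above: pad the batch to a multiple of the replica count, but also carry a per-row weight vector so padded rows contribute zero, rather than NaN, to the reduced loss. All names are illustrative, and NumPy stands in for the distributed TensorFlow code:

```python
import numpy as np

def pad_to_multiple(batch, multiple=8):
    """Pad rows of `batch` with zeros up to a multiple of `multiple`,
    returning the padded batch and per-row weights (0 for padding)."""
    n = batch.shape[0]
    pad = (-n) % multiple
    padded = np.concatenate([batch, np.zeros((pad,) + batch.shape[1:], batch.dtype)])
    weights = np.concatenate([np.ones(n), np.zeros(pad)])
    return padded, weights

def weighted_mean_loss(per_example_loss, weights):
    """Reduce per-example losses, ignoring padded rows."""
    return float((per_example_loss * weights).sum() / weights.sum())

batch = np.ones((6, 4), dtype=np.float32)
padded, w = pad_to_multiple(batch)
print(padded.shape)  # -> (8, 4)

# Suppose the two padded replicas produce NaN; zero them out first,
# then the weights keep them out of the reduction entirely.
loss = np.array([1.0] * 6 + [np.nan] * 2)
print(weighted_mean_loss(np.nan_to_num(loss), w))  # -> 1.0
```

The key design choice is that the reduction divides by the number of real examples (`weights.sum()`), not the padded batch size, so the mean loss is unchanged by padding.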
tensorflowtensorflow | Error message: tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. | Bug | Python version: 3.6.7; tensorflow-gpu version: 2.0.0; Keras version: 2.3.1; cuDNN version: for CUDA 10.0; CUDA version: 10.0. mnist_mlp.py works perfectly, but the code given below gives me this error:

```
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d_7/convolution (defined at C:\Users\acseckin\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow_core\python\framework\ops.py:1751)]] [Op:__inference_keras_scratch_graph_4815]
Function call stack: keras_scratch_graph
```

Project code (derived from code from a pix2pix tutorial):

```python
from numpy import load
from numpy import zeros
from numpy import ones
from numpy.random import randint
from keras.optimizers import Adam
from keras.initializers import RandomNormal
from keras.models import Model
from keras.models import Input
from keras.layers import Conv2D
from keras.layers import Conv2DTranspose
from keras.layers import LeakyReLU
from keras.layers import Activation
from keras.layers import Concatenate
from keras.layers import Dropout
from keras.layers import BatchNormalization
from matplotlib import pyplot

# define the discriminator model
def define_discriminator(image_shape):
    # weight initialization
    init = RandomNormal(stddev=0.02)
    # source image input
    in_src_image = Input(shape=image_shape)
    # target image input
    in_target_image = Input(shape=image_shape)
    # concatenate images channel-wise
    merged = Concatenate()([in_src_image, in_target_image])
    # C64
    d = Conv2D(64, (4, 4), strides=(2, 2), padding='same', kernel_initializer=init)(merged)
    d = LeakyReLU(alpha=0.2)(d)
    # C128
    d = Conv2D(128, (4, 4), strides=(2, 2), padding='same', kernel_initializer=init)(d)
    d = BatchNormalization()(d)
    d = LeakyReLU(alpha=0.2)(d)
    # C256
    d = Conv2D(256, (4, 4), strides=(2, 2), padding='same', kernel_initializer=init)(d)
    d = BatchNormalization()(d)
    d = LeakyReLU(alpha=0.2)(d)
    # C512
    d = Conv2D(512, (4, 4), strides=(2, 2), padding='same', kernel_initializer=init)(d)
    d = BatchNormalization()(d)
    d = LeakyReLU(alpha=0.2)(d)
    # second last output layer
    d = Conv2D(512, (4, 4), padding='same', kernel_initializer=init)(d)
    d = BatchNormalization()(d)
    d = LeakyReLU(alpha=0.2)(d)
    # patch output
    d = Conv2D(1, (4, 4), padding='same', kernel_initializer=init)(d)
    patch_out = Activation('sigmoid')(d)
    # define model
    model = Model([in_src_image, in_target_image], patch_out)
    # compile model
    opt = Adam(lr=0.0002, beta_1=0.5)
    model.compile(loss='binary_crossentropy', optimizer=opt, loss_weights=[0.5])
    return model

# define an encoder block
def define_encoder_block(layer_in, n_filters, batchnorm=True):
    # weight initialization
    init = RandomNormal(stddev=0.02)
    # add downsampling layer
    g = Conv2D(n_filters, (4, 4), strides=(2, 2), padding='same', kernel_initializer=init)(layer_in)
    # conditionally add batch normalization
    if batchnorm:
        g = BatchNormalization()(g, training=True)
    # leaky relu activation
    g = LeakyReLU(alpha=0.2)(g)
    return g

# define a decoder block
def decoder_block(layer_in, skip_in, n_filters, dropout=True):
    # weight initialization
    init = RandomNormal(stddev=0.02)
    # add upsampling layer
    g = Conv2DTranspose(n_filters, (4, 4), strides=(2, 2), padding='same', kernel_initializer=init)(layer_in)
    # add batch normalization
    g = BatchNormalization()(g, training=True)
    # conditionally add dropout
    if dropout:
        g = Dropout(0.5)(g, training=True)
    # merge with skip connection
    g = Concatenate()([g, skip_in])
    # relu activation
    g = Activation('relu')(g)
    return g

# define the standalone generator model
def define_generator(image_shape=(256, 256, 3)):
    # weight initialization
    init = RandomNormal(stddev=0.02)
    # image input
    in_image = Input(shape=image_shape)
    # encoder model
    e1 = define_encoder_block(in_image, 64, batchnorm=False)
    e2 = define_encoder_block(e1, 128)
    e3 = define_encoder_block(e2, 256)
    e4 = define_encoder_block(e3, 512)
    e5 = define_encoder_block(e4, 512)
    e6 = define_encoder_block(e5, 512)
    e7 = define_encoder_block(e6, 512)
    # bottleneck, no batch norm and relu
    b = Conv2D(512, (4, 4), strides=(2, 2), padding='same', kernel_initializer=init)(e7)
    b = Activation('relu')(b)
    # decoder model
    d1 = decoder_block(b, e7, 512)
    d2 = decoder_block(d1, e6, 512)
    d3 = decoder_block(d2, e5, 512)
    d4 = decoder_block(d3, e4, 512, dropout=False)
    d5 = decoder_block(d4, e3, 256, dropout=False)
    d6 = decoder_block(d5, e2, 128, dropout=False)
    d7 = decoder_block(d6, e1, 64, dropout=False)
    # output
    g = Conv2DTranspose(3, (4, 4), strides=(2, 2), padding='same', kernel_initializer=init)(d7)
    out_image = Activation('tanh')(g)
    # define model
    model = Model(in_image, out_image)
    return model

# define the combined generator and discriminator model, for updating the generator
def define_gan(g_model, d_model, image_shape):
    # make weights in the discriminator not trainable
    d_model.trainable = False
    # define the source image
    in_src = Input(shape=image_shape)
    # connect the source image to the generator input
    gen_out = g_model(in_src)
    # connect the source input and generator output to the discriminator input
    dis_out = d_model([in_src, gen_out])
    # src image as input, generated image and classification output
    model = Model(in_src, [dis_out, gen_out])
    # compile model
    opt = Adam(lr=0.0002, beta_1=0.5)
    model.compile(loss=['binary_crossentropy', 'mae'], optimizer=opt, loss_weights=[1, 100])
    return model

# load and prepare training images
def load_real_samples(filename):
    # load compressed arrays
    data = load(filename)
    # unpack arrays
    X1, X2 = data['arr_0'], data['arr_1']
    # scale from [0,255] to [-1,1]
    X1 = (X1 - 127.5) / 127.5
    X2 = (X2 - 127.5) / 127.5
    return [X1, X2]

# select a batch of random samples, returns images and target
def generate_real_samples(dataset, n_samples, patch_shape):
    # unpack dataset
    trainA, trainB = dataset
    # choose random instances
    ix = randint(0, trainA.shape[0], n_samples)
    # retrieve selected images
    X1, X2 = trainA[ix], trainB[ix]
    # generate 'real' class labels (1)
    y = ones((n_samples, patch_shape, patch_shape, 1))
    return [X1, X2], y

# generate a batch of images, returns images and targets
def generate_fake_samples(g_model, samples, patch_shape):
    # generate fake instances
    X = g_model.predict(samples)
    # create 'fake' class labels (0)
    y = zeros((len(X), patch_shape, patch_shape, 1))
    return X, y

# generate samples, save as a plot and save the model
def summarize_performance(step, g_model, dataset, n_samples=3):
    # select a sample of input images
    [X_realA, X_realB], _ = generate_real_samples(dataset, n_samples, 1)
    # generate a batch of fake samples
    X_fakeB, _ = generate_fake_samples(g_model, X_realA, 1)
    # scale all pixels from [-1,1] to [0,1]
    X_realA = (X_realA + 1) / 2.0
    X_realB = (X_realB + 1) / 2.0
    X_fakeB = (X_fakeB + 1) / 2.0
    # plot real source images
    for i in range(n_samples):
        pyplot.subplot(3, n_samples, 1 + i)
        pyplot.axis('off')
        pyplot.imshow(X_realA[i])
    # plot generated target images
    for i in range(n_samples):
        pyplot.subplot(3, n_samples, 1 + n_samples + i)
        pyplot.axis('off')
        pyplot.imshow(X_fakeB[i])
    # plot real target images
    for i in range(n_samples):
        pyplot.subplot(3, n_samples, 1 + n_samples * 2 + i)
        pyplot.axis('off')
        pyplot.imshow(X_realB[i])
    # save plot to file
    filename1 = 'plot_%06d.png' % (step + 1)
    pyplot.savefig(filename1)
    pyplot.close()
    # save the generator model
    filename2 = 'model_%06d.h5' % (step + 1)
    g_model.save(filename2)
    print('>Saved: %s and %s' % (filename1, filename2))

# train pix2pix models
def train(d_model, g_model, gan_model, dataset, n_epochs=50, n_batch=1):
    # determine the output square shape of the discriminator
    n_patch = d_model.output_shape[1]
    # unpack dataset
    trainA, trainB = dataset
    # calculate the number of batches per training epoch
    bat_per_epo = int(len(trainA) / n_batch)
    # calculate the number of training iterations
    n_steps = bat_per_epo * n_epochs
    # manually enumerate epochs
    for i in range(n_steps):
        # select a batch of real samples
        [X_realA, X_realB], y_real = generate_real_samples(dataset, n_batch, n_patch)
        # generate a batch of fake samples
        X_fakeB, y_fake = generate_fake_samples(g_model, X_realA, n_patch)
        # update discriminator for real samples
        d_loss1 = d_model.train_on_batch([X_realA, X_realB], y_real)
        # update discriminator for generated samples
        d_loss2 = d_model.train_on_batch([X_realA, X_fakeB], y_fake)
        # update the generator
        g_loss, _, _ = gan_model.train_on_batch(X_realA, [y_real, X_realB])
        # summarize performance
        print('>%d, d1[%.3f] d2[%.3f] g[%.3f]' % (i + 1, d_loss1, d_loss2, g_loss))
        # summarize model performance
        if (i + 1) % (bat_per_epo * 10) == 0:
            summarize_performance(i, g_model, dataset)

# load image data
dataset = load_real_samples('fabric_256.npz')
print('Loaded', dataset[0].shape, dataset[1].shape)
# define input shape based on the loaded dataset
image_shape = dataset[0].shape[1:]
# define the models
d_model = define_discriminator(image_shape)
g_model = define_generator(image_shape)
# define the composite model
gan_model = define_gan(g_model, d_model, image_shape)
# train model
train(d_model, g_model, gan_model, dataset)
tensorflowtensorflow | merging two trt graphs throws an error | Bug | I want to merge two TRT graphs into a single graph, but while merging I'm getting an error. My code is as follows:

    import tensorflow as tf
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    saved_model1 = '/home/xavier2/saved_models/ssd_tomato_l1'
    saved_model2 = '/home/xavier2/saved_models/faster_rcnn_tomato_l2_grid_750x750'

    def create_trt_inference_graph(graph_path):
        converter = trt.TrtGraphConverter(input_saved_model_dir=graph_path,
                                          precision_mode=trt.TrtPrecisionMode.FP16)
        converted_graph_def = converter.convert()
        return converted_graph_def

    def get_serialized_graph(graph_path):
        converted_graph_def = create_trt_inference_graph(graph_path)
        serial_def = converted_graph_def.SerializeToString()
        return serial_def

    def get_frozen_graph(graph_path):
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(get_serialized_graph(graph_path))
        return graph_def

    def rename_frame_name(graphdef, suffix):
        # bug reported at <issuecomment-428091121>
        for n in graphdef.node:
            if 'while' in n.name:
                if 'frame_name' in n.attr:
                    n.attr['frame_name'].s = str(n.attr['frame_name']).replace(
                        'while_context', 'while_context' + suffix).encode('utf-8')

    l1_graph = tf.Graph()
    with l1_graph.as_default():
        trt_graph1 = get_frozen_graph(saved_model1)
        tf_input1, tf_scores1, tf_boxes1, tf_classes1, tf_num_detections1 = tf.import_graph_def(
            trt_graph1,
            return_elements=['import/image_tensor:0', 'import/detection_scores:0',
                             'import/detection_boxes:0', 'import/detection_classes:0',
                             'import/num_detections:0'])

    connected_graph = tf.Graph()
    with connected_graph.as_default():
        l1_graph_def = l1_graph.as_graph_def()
        g1name = 've'
        rename_frame_name(l1_graph_def, g1name)
        tf.import_graph_def(l1_graph_def, name=g1name)
        trt_graph2 = get_frozen_graph(saved_model2)
        g2name = 'level2'
        rename_frame_name(trt_graph2, g2name)
        tf_scores, tf_boxes, tf_classes, tf_num_detections = tf.import_graph_def(
            trt_graph2,
            return_elements=['import/detection_scores:0', 'import/detection_boxes:0',
                             'import/detection_classes:0', 'import/num_detections:0'])

It throws the following error:

    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/importer.py", line 427, in import_graph_def
        graph._c_graph, serialized, options)  # pylint: disable=protected-access
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot add function 'TRTEngineOp_1_native_segment' because a different function with the same name already exists.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File ..., line 10, in ...
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/importer.py", line 431, in import_graph_def
        raise ValueError(str(e))
    ValueError: Cannot add function 'TRTEngineOp_1_native_segment' because a different function with the same name already exists.

My specs are: TensorFlow 1.14.0, Python 3.6.8, platform Ubuntu 18.04.2 LTS (GNU/Linux 4.9.140-tegra aarch64). |
tensorflowtensorflow | save method shows buggy, confusing behaviour | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g., Linux Ubuntu 16.04): Google Colab. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy): n/a. TensorFlow installed from (source or binary): n/a. TensorFlow version: 2.0.0. Python version: 3.6. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: tf.keras.Model.save shows confusing behaviour with the save_format argument (see gist). Even when save_format is set to "tf", the model is saved as h5 if the filepath ends in the suffix .h5. Also, it defaults any random string argument to the "tf" format. Describe the expected behavior: the value of the save_format argument should determine the format of the saved file, irrespective of the filepath; or else there should be a boolean argument like save_as_h5 instead. Code to reproduce the issue: (Colab notebook, #scrollTo=1h73rxh5stgl). Other info / logs: source code L923-L975; outdated documentation for save; updated docs for the current behavior in a PR. More details: Model.save_weights handles it well (see gist). |
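The reported precedence can be modeled with a small helper. This is a hypothetical sketch of the behavior described in the report, not the actual Keras implementation; the function name resolve_save_format is mine, and the suffix list is taken from the report (only .h5 is explicitly mentioned there).

```python
def resolve_save_format(filepath, save_format=None):
    """Hypothetical model of the reported tf.keras Model.save behavior.

    An HDF5-style suffix on the filepath overrides an explicit save_format,
    and any unrecognized save_format string silently falls back to 'tf'.
    """
    if filepath.endswith(('.h5', '.hdf5')):
        return 'h5'          # suffix wins, even over save_format='tf'
    if save_format == 'h5':
        return 'h5'
    return 'tf'              # default, including for random strings

print(resolve_save_format('model.h5', save_format='tf'))   # h5
print(resolve_save_format('model', save_format='banana'))  # tf
```

The report argues the opposite precedence (save_format winning) would be less surprising, which is what the doc update should make explicit either way.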
tensorflowtensorflow | image preprocessing for transfer learning is confusing | Bug | I'm really confused about what preprocessing to apply to image data when working with pre-trained models for transfer learning. Of course this is very specific to each model, but I think Keras brings some disambiguation by providing the preprocess_input function: by using it, users should normally not have to worry about how to transform an image before pushing it into the pre-trained model. I recently tried to use one of the models provided in TensorFlow 2 Keras. Even though this function exists, the related documentation is, let's say, minimal. What is very disturbing is that this function is not used at all in the provided tutorials. For example, in this one, images are pre-processed by dividing raw pixels by 255: image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255). In this other tutorial, pre-processing is done like this: image = image/127.5 - 1. However, both tutorials are supposed to use a MobileNetV2 model pre-trained on ImageNet. That makes 3 different ways to process an image, without any help or documentation to shed some light on what appears to me like black magic. Any help would be welcome, along with appropriate documentation of course. |
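The three conventions mentioned above can be compared numerically. A small NumPy sketch follows; note that tf.keras.applications.mobilenet_v2.preprocess_input scales pixels to [-1, 1], i.e. it matches the x/127.5 - 1 tutorial, while rescale=1/255 maps to [0, 1] instead.

```python
import numpy as np

# A fake 8-bit image covering the full pixel range.
img = np.array([0, 127.5, 255], dtype=np.float32)

# Convention 1: ImageDataGenerator(rescale=1/255) maps [0, 255] -> [0, 1]
scaled_01 = img / 255.0

# Convention 2: the manual tutorial scaling maps [0, 255] -> [-1, 1]
scaled_pm1 = img / 127.5 - 1.0

# Convention 3: tf.keras.applications.mobilenet_v2.preprocess_input also
# maps [0, 255] to [-1, 1], i.e. it is equivalent to convention 2.

print(scaled_01)   # [0.  0.5 1. ]
print(scaled_pm1)  # [-1.  0.  1.]
```

So for MobileNetV2 the [0, 1] rescaling feeds the network inputs in a different range than it was trained on, which is exactly the kind of pitfall a documented preprocess_input call would avoid.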
tensorflowtensorflow | tf 2 0 nest gradient tape unconnected graph | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution macos 10 15 1 tensorflow instal from binary pip 19 3 1 tensorflow version v2 0 0 rc2 26 g64c3d382ca 2 0 0 python version python 3 6 5 describe the current behavior a copy of my model model copy should be train one step then I need my meta model to be train with the loss of my model copy it seem that the graph be unconnected it only work if I use the meta model for the training step describe the expect behavior I would expect that model copy be know to both gradient tape and can be use w o use meta model code to reproduce the issue python import tensorflow as tf import tensorflow kera backend as keras backend import tensorflow kera as keras class metamodel keras model def init self super init self hidden1 keras layer dense 5 input shape 1 self out kera layer dense 1 def forward self x x keras activation relu self hidden1 x x self out x return x def copy model model x copy model metamodel copy model forward x copy model set weight model get weight return copy model def compute loss model x y logit model forward x prediction of my model mse keras backend mean keras loss mean square error y logit compute loss between prediciton and label truth return mse logit optimizer outer keras optimizer adam alpha 0 01 with tf gradienttape as g meta model to learn in outer gradient tape meta model metamodel input for training x tf constant 3 0 shape 1 1 1 y tf constant 3 0 shape 1 1 1 meta model forward x model copy copy model meta model x with tf gradienttape as gg loss compute loss model copy x y gradient gg gradient loss model copy trainable variable k 0 for layer in range len model copy layer if I use meta model for update this work model copy layer layer kernel tf subtract meta model layer layer kernel tf multiply alpha gradient k model copy layer layer bias tf 
subtract meta model layer layer bias tf multiply alpha gradient k 1 if I use model copy for update instead gradient meta always will be none none model copy layer layer kernel tf subtract model copy layer layer kernel tf multiply alpha gradient k model copy layer layer bias tf subtract model copy layer layer bias tf multiply alpha gradient k 1 k 2 calculate loss of model copy test loss compute loss model copy x y build gradient for meta model update gradient meta g gradient test loss meta model trainable variable gradient always none 11 elf optimizer outer apply gradient zip gradient meta meta model trainable variable other info log be it intend to work as above this would force I not to be able to use a different optimizer in the inner loop as the network need somehow to be connect |
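For reference on the nested GradientTape pattern this report uses, here is a minimal, self-contained second-order example; it is a generic sketch unrelated to the reporter's meta-model code. The point it illustrates is that the outer tape stays connected as long as the quantity it differentiates is computed from watched variables inside its context.

```python
import tensorflow as tf

x = tf.Variable(2.0)

with tf.GradientTape() as outer:
    with tf.GradientTape() as inner:
        y = x * x                  # y = x^2
    dy_dx = inner.gradient(y, x)   # first derivative: 2x = 4.0

# Differentiating the inner gradient with the outer tape works because
# dy_dx was computed inside the outer tape's context.
d2y_dx2 = outer.gradient(dy_dx, x)  # second derivative: 2.0

print(dy_dx.numpy(), d2y_dx2.numpy())  # 4.0 2.0
```

In the reporter's case, updating the copied model's kernels with plain tf.subtract on tensors derived only from model_copy produces values the outer tape never watched, which may explain the None gradients observed.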
tensorflowtensorflow | default installed version of tensorflow lite arduino library is pre-compiled, causing confusing error reports | Bug | Hiya @petewarden, I've been re-porting my demos (hooray for Arduino boards!). Right now, when folks install the TensorFlow library, it defaults to the pre-compiled version, which causes very obscure errors about register arguments if they are not using the exact same processor ("uses VFP register arguments and libtensorflowlite.a does not", error 10 2). Please make the default the non-pre-compiled version; the Arduino IDE has a huge collection of supported boards, and as-is this will confuse a lot of people. |
tensorflowtensorflow | autocastvariable assign returns wrapped variable instead of cast version | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.0.0 / 2.1 nightly. Python version: 3.7. Describe the current behavior: AutoCastVariable forwards assign and scatter to the underlying float32 variable (L187-L188). Thus the return value of the assign method with read_value=True is a normal tf.Variable and not an AutoCastVariable. This means that calculations directly depending on the assign operation might run in float32 instead of float16, or am I missing something? Describe the expected behavior: AutoCastVariable.assign should return an AutoCastVariable instead of a tf.Variable, so that the dtype is preserved. @reedwm, is this intended behaviour? Code to reproduce the issue:

    import tensorflow as tf
    from tensorflow.python.keras.mixed_precision.experimental import autocast_variable

    var = tf.Variable(0., dtype=tf.float32)
    var = autocast_variable.AutoCastVariable(var)
    with tf.compat.v1.get_default_graph()._enable_auto_casting_variables(tf.float16):
        assert var.dtype == tf.float16
        # assign should return an AutoCastVariable, but returns a tf.Variable
        var_assign = var.assign(5.)
        assert not isinstance(var_assign, autocast_variable.AutoCastVariable)
        assert var_assign.dtype == tf.float32 |
tensorflowtensorflow | behaviour of the loss function with sample weights (keras) | Bug | URL(s) with the issue: (Model.fit documentation). Description of issue (what needs changing): I think we need a description of how the loss is computed when temporal sample weights are applied, i.e. how the loss is aggregated across samples and time steps. As addressed in #25178, the behaviour has even changed over time in the external version of Keras, from ignoring zero values to counting them, which makes the situation confusing. Even if I agree the current implementation is the right one, I think it is far from intuitive, especially when using zero weights (zero padding), which decreases the loss value and the effective learning rate. I am not very familiar with the structure to be followed in the built-in tf.keras documentation, but this could also be specified elsewhere than in the fit method doc, which is already pretty dense. Sorry if this is specified elsewhere in the docs, but I can't find it; so if it exists, it should maybe be referenced in the fit doc. Thanks a lot, Emilien |
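To make the ambiguity concrete, here is a NumPy sketch of the two plausible aggregation conventions for temporal sample weights; which of the two a given Keras version implements is exactly what this report asks to be documented (both function names are mine):

```python
import numpy as np

def weighted_loss_mean_over_all(losses, weights):
    # Zero-weighted (padded) steps still count in the denominator,
    # so padding dilutes the loss and the effective learning rate.
    return np.sum(losses * weights) / losses.size

def weighted_loss_mean_over_weights(losses, weights):
    # Zero-weighted steps are effectively ignored.
    return np.sum(losses * weights) / np.sum(weights)

losses  = np.array([1.0, 1.0, 1.0, 1.0])   # per-timestep losses
weights = np.array([1.0, 1.0, 0.0, 0.0])   # two padded timesteps

print(weighted_loss_mean_over_all(losses, weights))      # 0.5
print(weighted_loss_mean_over_weights(losses, weights))  # 1.0
```

The factor-of-two gap on identical data shows why the choice of denominator matters for the reported loss values and the effective learning rate.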
tensorflowtensorflow | tensorflow 1 15 doesn't exist within pip install | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example. Description of issue (what needs changing): it says TensorFlow 1.15 is the final version for the 1.x versions, yet pip install of 1.15 returns this: pip3 install tensorflow==1.15 / Collecting tensorflow==1.15 / Could not find a version that satisfies the requirement tensorflow==1.15 (from versions: 0.12.1, 1.0.0, 1.0.1, 1.1.0rc0, 1.1.0rc1, 1.1.0rc2, 1.1.0, 1.2.0rc0, 1.2.0rc1, 1.2.0rc2, 1.2.0, 1.2.1, 1.3.0rc0, 1.3.0rc1, 1.3.0rc2, 1.3.0, 1.4.0rc0, 1.4.0rc1, 1.4.0, 1.4.1, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.5.1, 1.6.0rc0, 1.6.0rc1, 1.6.0, 1.7.0rc0, 1.7.0rc1, 1.7.0, 1.7.1, 1.8.0rc0, 1.8.0rc1, 1.8.0, 1.9.0rc0, 1.9.0rc1, 1.9.0rc2, 1.9.0, 1.10.0rc0, 1.10.0rc1, 1.10.0, 1.10.1, 1.11.0rc0, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.12.0rc0, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.12.2, 1.12.3, 1.13.0rc0, 1.13.0rc1, 1.13.0rc2, 1.13.1, 1.13.2, 1.14.0rc0, 1.14.0rc1, 1.14.0, 2.0.0a0, 2.0.0b0, 2.0.0b1) / No matching distribution found for tensorflow==1.15. It should be corrected, I think. |
tensorflowtensorflow | how to use per-channel quantization in tflite | Bug | Hello, from the tflite conv.cc I can see that tflite now supports per-channel quantization. Is there any guide or doc on how to use per-channel quantization when converting a TF model to tflite? |
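For context on the question above: in recent TF 2.x converters, enabling post-training quantization applies per-channel quantization to conv weights by default. The following is a hypothetical configuration sketch, not runnable as-is; saved_model_dir and representative_images are placeholders the user must supply.

```python
import tensorflow as tf

# Placeholder: an iterable of sample input arrays in the model's input shape.
def representative_dataset():
    for image in representative_images:      # assumption: user-provided data
        yield [image[tf.newaxis, ...]]

# Post-training quantization sketch; with Optimize.DEFAULT plus a
# representative dataset, conv/depthwise weights are quantized per-channel.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()
```

The per-channel vs per-tensor choice is made by the converter, not exposed as a flag, which is likely why no user-facing guide mentions it directly.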
tensorflowtensorflow | could not restore weight to model with same structure when enable eager | Bug | sytem ubuntu18 04 tf version 2 0 hardware gtx1080ti and gt 840 m cuda 10 0 cudnn 7 6 class retinanet tf keras model def init self config super retinanet self init self config config self num class config num class self weight decay config weight decay self mode config mode self batch size config batch size if config mode train else 1 self lr config lr image tf keras input shape none none 3 batch size self batch size dtype tf float32 self backone backone model config backone image config backone conv trainable config backone bn trainable weight backone weight config backone self opt tf keras optimizer sgd self lr momentum 0 9 if config mode train gt tf keras input shape none 5 batch size self batch size dtype tf float32 self model self build graph image gt else self model self build graph image def build graph self image gt none num fpn layer 5 fpn channel 256 endpoint self backone image fpn fpn generator endpoint 1 fpn channel num fpn layer mode dconv p3 p4 p5 p6 p7 fpn dw rate 8 16 32 64 128 anchor 4 4 4 2 4 2 4 2 4 2 4 3 4 3 4 3 4 3 4 4 4 2 4 2 4 2 4 2 4 3 4 3 4 3 4 3 4 4 4 2 4 2 4 2 4 2 4 3 4 3 4 3 4 3 4 4 4 2 4 2 4 2 4 2 4 3 4 3 4 3 4 3 4 4 4 2 4 2 4 2 4 2 4 3 4 3 4 3 4 3 anchor all anchor generator fpn anchor dw rate flatten true anchor all tf concat anchor all axis 0 self cla head self cla head fpn channel 5 self reg head self reg head fpn channel 5 p3c self cla head p3 p3r self reg head p3 p4c self cla head p4 p4r self reg head p4 p5c self cla head p5 p5r self reg head p5 p6c self cla head p6 p6r self reg head p6 p7c self cla head p7 p7r self reg head p7 pc tf concat p3c p4c p5c p6c p7c axis 1 pr tf concat p3r p4r p5r p6r p7r axis 1 pc tf nn sigmoid pc if self mode train I 0 loss tf constant 0 0 dtype tf float32 shape 1 2 cond lambda loss I tf less I self batch size body lambda loss I tf add loss self compute one image loss tf gather gt I anchor all tf 
gather pc I tf gather pr I tf add I 1 loss tf while loop cond body loss I loss self batch size return tf keras model input image gt output loss name retinanet else nms score threshold 0 5 nms max box 100 nms iou threshold 0 45 pr pr 0 confidence pc 0 y1x1y2x2 bbox decode anchor all pr normlization 10 10 5 5 filter mask tf great equal confidence nms score threshold score class i d bbox for I in range self num class scoresi tf boolean mask confidence I filter mask I bboxi tf boolean mask y1x1y2x2 filter mask I select index tf image non max suppression bboxi scoresi nms max box nms iou threshold score append tf gather scoresi select index bbox append tf gather bboxi select index class i d append tf one like tf gather scoresi select index tf int32 I bbox tf concat bbox axis 0 score tf concat score axis 0 class i d tf concat class i d axis 0 1 detection pre score bbox class i d return tf keras model inputs image output detection pre name retinanet def compute one image loss self gt anchor pc pr slice index tf argmin gt axis 0 0 gt tf gather gt tf range 0 slice index dtype tf int64 gt bbox gt 0 4 label tf cast gt 4 5 dtype tf int32 1 pos threshold 0 5 neg threshold 0 4 gaiou bbox iou gt bbox anchor pos pc pos label pos pr pos gt bbox pos a neg pc partition pos neg sample gt bbox label gaiou pc pr anchor pos threshold neg threshold pos gr bbox encode pos gt bbox pos a normlization 10 10 5 5 reg loss tf reduce sum smooth l1 loss pos pr pos gr pos label tf one hot pos label self num class dtype tf float32 cla loss focal loss pos pc pos label neg pc alpha 0 25 gamma 2 reg loss tf reshape reg loss 1 1 cla loss tf reshape cla loss 1 1 loss tf concat cla loss reg loss axis 1 return loss def cla head self input channel anchor x tf keras input shape none none input channel dtype tf float32 conv layer conv2d 256 3 1 same kernel initializer he normal x bn layer batchnormalization 3 epsilon 1 001e 5 conv relu layer activation relu bn conv layer conv2d 256 3 1 same kernel initializer 
he normal relu bn layer batchnormalization 3 epsilon 1 001e 5 conv relu layer activation relu bn conv layer conv2d 256 3 1 same kernel initializer he normal relu bn layer batchnormalization 3 epsilon 1 001e 5 conv relu layer activation relu bn pre layer conv2d self num class anchor 3 1 same kernel initializer he normal bias initializer tf constant initializer 4 595 relu pre tf reshape pre self batch size 1 self num class return tf keras model inputs x output pre name cla head def reg head self input channel anchor x tf keras input shape none none input channel dtype tf float32 conv layer conv2d 256 3 1 same kernel initializer he normal x bn layer batchnormalization 3 epsilon 1 001e 5 conv relu layer activation relu bn conv layer conv2d 256 3 1 same kernel initializer he normal relu bn layer batchnormalization 3 epsilon 1 001e 5 conv relu layer activation relu bn conv layer conv2d 256 3 1 same kernel initializer he normal relu bn layer batchnormalization 3 epsilon 1 001e 5 conv relu layer activation relu bn pre layer conv2d 4 anchor 3 1 same kernel initializer he normal relu pre tf reshape pre self batch size 1 4 return tf keras model inputs x output pre name reg head def save weight self filepath overwrite true save format none self model save weight filepath overwrite save format def load weight self filepath by name false self model load weight filepath by name the save weight code be config num class 20 batch size batch size mode train lr lr weight decay 1e 4 backone resnetv1 18 backone conv trainable true backone bn trainable true ssd retinanet config ssd save weight save weight 1 tf the load weight code be config num class 20 batch size batch size mode test lr lr weight decay 1e 4 backone resnetv1 18 backone conv trainable true backone bn trainable true ssd retinanet config ssd load weight save weight 1 tf when enable eager the weight save under mode train could not be restore into model when mode test the error message be as follow warn tensorflow 
inconsistent reference when load the checkpoint into this object graph either the trackable object reference in the python program have change in an incompatible way or the checkpoint be generate in an incompatible program two checkpoint reference resolve to different object and warn tensorflow inconsistent reference when load the checkpoint into this object graph either the trackable object reference in the python program have change in an incompatible way or the checkpoint be generate in an incompatible program two checkpoint reference resolve to different object and traceback most recent call last file home master workspace objdect test1 py line 23 in ssd load weight save weight 1 tf file home master workspace objdect retinanet py line 170 in load weight self model load weight filepath by name file home master app anaconda3 envs tf2 lib python3 7 site package tensorflow core python keras engine training py line 181 in load weight return super model self load weight filepath by name file home master app anaconda3 envs tf2 lib python3 7 site package tensorflow core python keras engine network py line 1149 in load weight status self trackable saver restore filepath file home master app anaconda3 envs tf2 lib python3 7 site package tensorflow core python training track util py line 1270 in restore checkpoint checkpoint proto i d 0 restore self graph view root file home master app anaconda3 envs tf2 lib python3 7 site package tensorflow core python training tracking base py line 209 in restore restore op trackable restore from checkpoint position self pylint disable protect access file home master app anaconda3 envs tf2 lib python3 7 site package tensorflow core python training tracking base py line 908 in restore from checkpoint position tensor saveables python saveables file home master app anaconda3 envs tf2 lib python3 7 site package tensorflow core python training track util py line 289 in restore saveable validate saveable restore self save path tensor file 
home master app anaconda3 envs tf2 lib python3 7 site package tensorflow core python training save functional saver py line 255 in restore restore op update saver restore file prefix file home master app anaconda3 envs tf2 lib python3 7 site package tensorflow core python training save functional saver py line 102 in restore restore tensor restore shape none file home master app anaconda3 envs tf2 lib python3 7 site package tensorflow core python training save saveable object util py line 115 in restore self handle op self var shape restore tensor file home master app anaconda3 envs tf2 lib python3 7 site package tensorflow core python op resource variable op py line 291 in shape safe assign variable handle shape assert be compatible with value tensor shape file home master app anaconda3 envs tf2 lib python3 7 site package tensorflow core python framework tensor shape py line 1115 in assert be compatible with raise valueerror shape s and s be incompatible self other valueerror shape 20 and 100 be incompatible the shape 20 be the shape of reg head the shape 100 be the shape of cla head maybe the loading order be out of order when disable eager the weight save under mode train could be restore into model when mode test with some warning warn tensorflow from home master app anaconda3 envs tf2 lib python3 7 site package tensorflow core python op resource variable op py 1630 call baseresourcevariable init from tensorflow python op resource variable op with constraint be deprecate and will be remove in a future version instruction for update if use keras pass constraint argument to layer warn tensorflow inconsistent reference when load the checkpoint into this object graph either the trackable object reference in the python program have change in an incompatible way or the checkpoint be generate in an incompatible program two checkpoint reference resolve to different object and warn tensorflow inconsistent reference when load the checkpoint into this object graph either 
the trackable object reference in the python program have change in an incompatible way or the checkpoint be generate in an incompatible program two checkpoint reference resolve to different object and |
tensorflowtensorflow | tf function hangs on ragged tensor input | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab. TensorFlow installed from (source or binary): Colab. TensorFlow version: 2.0.0. Python version: 3.6. GPU model and memory: none. Describe the current behavior: when using tf.function with a number of for loops on a RaggedTensor, the function call hangs (I stopped waiting after half an hour). When running the function directly (no tf.function decorator), the function executes immediately. When converting the ragged tensor to a dense tensor, the function executes immediately. I couldn't pinpoint the exact combination of operations that causes this AutoGraph behavior, but I tried to reduce my code to the simplest combination that still causes this behavior. I struggled for hours with my own code trying to get tf.function to work until I figured out it was due to the ragged tensor for loops: tf.function hangs the kernel. I observed similar behavior on my machine and on a Colab machine as well. Describe the expected behavior: should execute in a time comparable to a dense tensor. Code to reproduce the issue (TensorFlow version 2.x):

    import tensorflow as tf
    import numpy as np

    inp = tf.ragged.constant(np.arange(1000, dtype=np.float32).reshape(10, 10, 10))

    @tf.function  # if not using tf.function, it runs well
    def ragged_example(r_tensor):
        s = tf.constant(0.0)
        for i in tf.range(10):
            inner = r_tensor[i]
            for x in inner:
                b = tf.reduce_sum(x)
                s = s + b
        return s

    # inp = inp.to_tensor()  # if this is uncommented, it runs well
    ragged_example(inp)  # this hangs

Other observations: also note a very large memory footprint as a result (9 GB and growing). |
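A workaround consistent with the observations above is to avoid Python for loops over ragged rows and express the computation as a single reduction, which traces cleanly under tf.function. This sketch assumes the per-element work is ultimately a sum, as in the repro; flat_values exposes the underlying dense values of the RaggedTensor.

```python
import numpy as np
import tensorflow as tf

inp = tf.ragged.constant(np.arange(1000, dtype=np.float32).reshape(10, 10, 10))

@tf.function
def ragged_sum(r_tensor):
    # Reducing over flat_values avoids iterating ragged rows in Python loops,
    # which is what triggered the pathological tracing in the repro.
    return tf.reduce_sum(r_tensor.flat_values)

print(ragged_sum(inp).numpy())  # 499500.0  (sum of 0..999)
```

This returns instantly because the whole computation is one traced op instead of a loop nest that AutoGraph must unroll or convert.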
tensorflowtensorflow | error description is not clear with new experimental tf lite converter | Bug | System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux (Colab). TensorFlow installed from (source or binary): binary. TensorFlow version (or github SHA if from source): tf-nightly. Command used to run the converter, or code if you're using the Python API:

    import tensorflow as tf
    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0
    # Without the following two lines it will throw:
    # ValueError: Cannot set tensor: Got value of type NOTYPE but expected type FLOAT32 for input 0, name: flatten_input
    x_train = tf.dtypes.cast(x_train, tf.float32)
    x_test = tf.dtypes.cast(x_test, tf.float32)
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=1)
    model.evaluate(x_test, y_test)
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.experimental_new_converter = True
    converter.experimental_enable_mlir_converter = True
    tflite_model = converter.convert()
    import numpy as np
    expected = model.predict(x_test[0:1])
    # Run the model with the TensorFlow Lite interpreter
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    interpreter.set_tensor(input_details[0]['index'], x_test[0:1])
    interpreter.invoke()
    result = interpreter.get_tensor(output_details[0]['index'])
    # Assert that the result of the tflite model is consistent with the tf model
    np.testing.assert_almost_equal(expected, result)
    print("Do the results of TensorFlow match the results of TensorFlow Lite?")

The output from the converter invocation: ValueError: Cannot set tensor: Got value of type NOTYPE but expected type FLOAT32 for input 0, name: flatten_input. Failure details: conversion is successful if the data type is float32. If the data type of the input data is float64, then it will throw "ValueError: Cannot set tensor: Got value of type NOTYPE but expected type FLOAT32 for input 0, name: flatten_input", which is not clear. Most of the Keras models on the TensorFlow website under tutorials use the float64 data type, so if users try to convert them into a tf lite model they will end up with this ValueError. I think we need to update the error description: instead of showing NOTYPE, it may be better to use FLOAT64, or whatever data type is not compatible. Here is the link to the Colab gist. |
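The workaround in the report, casting inputs to float32 before handing them to the interpreter, comes down to a one-line dtype conversion. Here is a NumPy-only sketch (the interpreter call itself is elided; the array is a stand-in for the MNIST data):

```python
import numpy as np

# MNIST-style input as the tutorials produce it: dividing uint8 pixels by
# 255.0 promotes the array to float64, the dtype the interpreter rejects.
x_test = np.arange(784, dtype=np.uint8).reshape(1, 28, 28) / 255.0
print(x_test.dtype)   # float64

# Cast to the dtype the TFLite input tensor expects before set_tensor():
x_test32 = x_test.astype(np.float32)
print(x_test32.dtype)  # float32
```

A clearer error message would name float64 here instead of NOTYPE, which is exactly the doc/UX change the report asks for.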
tensorflowtensorflow | brackets in directory for tf train checkpointmanager in tf2 0 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Debian (GCP notebook). TensorFlow installed from (source or binary): preinstalled on GCP notebook. TensorFlow version: 2.0.0. Python version: 3.5.3. GPU model and memory: none. Describe the current behavior: this problem concerns tf.train.CheckpointManager and tf.train.Checkpoint in TF 2.0. If the checkpoint directory contains square brackets ([ or ]), then loading the checkpoint with tf.train.Checkpoint.restore fails. CheckpointManager will save and track checkpoints as expected, but does not remove them according to max_to_keep. I see that square brackets are not recommended for Google Cloud Storage blobs; however, this happens both for checkpoints stored in Google Cloud Storage and for checkpoints stored locally. As an example, with valid checkpoints existing in /tmp/checkpoint_with_bracket, calling checkpoint_dir = '/tmp/checkpoint_with_bracket'; tf.train.Checkpoint(...).restore(tf.train.latest_checkpoint(checkpoint_dir)) gives the error: NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for /tmp/checkpoint_with_bracket/ckpt-4. Describe the expected behavior: allow square brackets in the checkpoint path, or fail at CheckpointManager creation if the directory parameter contains disallowed characters. Code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem): the code works as-is; add a bracket somewhere in the path in line 28 to see the error. |
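Until brackets are either supported or rejected early, one defensive workaround is to sanitize the directory before creating the CheckpointManager. This is a hypothetical helper (the name sanitize_ckpt_dir is mine); the motivation is that the restore path appears to be matched internally as a file pattern, where [ and ] are glob-special characters.

```python
import re

def sanitize_ckpt_dir(path):
    """Hypothetical helper: replace characters that are special in glob
    patterns ([, ], *, ?) so the checkpoint path survives pattern matching."""
    return re.sub(r'[\[\]*?]', '_', path)

print(sanitize_ckpt_dir('/tmp/checkpoint_[with]_bracket'))
# /tmp/checkpoint__with__bracket
```

Applying this once, before any checkpoints are written, keeps the save and restore paths consistent; sanitizing only at restore time would not match previously saved files.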
tensorflow/tensorflow | TFLite crashes (SIGABRT) while running Conv3D on Android | Bug | System information: Have I written custom code: yes. OS platform and distribution (build environment): Linux Ubuntu 16.04. Mobile device (runtime environment): Samsung S8. TensorFlow installed from source or binary: built from source with Select TF ops, using: bazel build --cxxopt=--std=c++11 -c opt --config=android_arm --config=monolithic //tensorflow/lite/java:tensorflow-lite-with-select-tf-ops. TensorFlow version: 1.15.0rc2. Keras version: 2.2.4-tf. Python version: 3.6.8. Bazel version (if compiling from source): 0.25.2. GCC/compiler version (if compiling from source): 5.4.0. CUDA/cuDNN version: 10.0. GPU model and memory: not relevant.

Describe the current behavior: I am trying to get a network with a Conv3D op to run on my Android system using TFLite. I have followed all the steps mentioned here, and I can convert the network without issue. At runtime I also appear to be able to load the converted TFLite model without issue; however, during my call to runForMultipleInputsOutputs, TFLite crashes, giving me a SIGABRT coming from libtensorflowlite_flex_jni.so (full stack trace below).

Describe the expected behavior: I expect it not to crash when run.

Code to reproduce the issue: I made a dummy network to try and isolate the issue. I built the network using the following:

import tensorflow as tf
import numpy as np

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv3D(1, (4, 4, 4), input_shape=(4, 8, 8, 1), name='conv'))
model.compile(loss=tf.keras.losses.sparse_categorical_crossentropy,
              optimizer=tf.keras.optimizers.RMSprop(lr=0.0001),
              metrics=[tf.keras.metrics.categorical_accuracy])
x = np.random.random((1, 4, 8, 8, 1))
y = np.random.random((1, 1, 5, 5, 1))
model.train_on_batch(x, y)
model.predict(x)

# Save tf.keras model in HDF5 format.
keras_file = 'conv3d.h5'
tf.keras.models.save_model(model, keras_file)

# Convert the model to TFLite format.
converter = tf.lite.TFLiteConverter.from_keras_model_file(keras_file)
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open('conv3d.tflite', 'wb').write(tflite_model)

I then load and run the model on Android, using ByteBuffers to hold the input/output. I can provide this code if requested, but I don't suspect it to be the problem, as I use it for other, working projects. I'm confident that this is a Conv3D issue because I have also built a Conv2D dummy network using the exact same build procedure and runtime environment, and it runs without crashing.

Other info / logs: the full Android backtrace during the call to runForMultipleInputsOutputs:

a debug a debug build fingerprint samsung dream2ltexx dream2lte 9 ppr1 180610 011 g955fxxs5dsi1 user release key a debug revision 10 a debug abi arm a debug pid 6774 tid 6817 name thread 2 com segmentation qussegservicenvw a debug signal 6 sigabrt code 6 si tkill fault addr a debug r0 00000000 r1 00001aa1 r2 00000006 r3 00000008 a debug r4 00001a76 r5 00001aa1 r6 b7628eac r7 0000010c a debug r8 b7629014 r9 b7628fa0 r10 b762903c r11 e4095c70 a debug ip b7628e48 sp b7628e98 lr e6c73e71 pc e6c6ae62 a debug backtrace a debug 00 pc 0001ce62 system lib libc so abort 58 a debug 01 pc 002181cd datum app com segmentation qussegservicenvw 52gmpws9z9fl4loxs6xvow lib arm libtensorflowlite flex jni so a debug 02 pc 002213fd datum app com segmentation qussegservicenvw 52gmpws9z9fl4loxs6xvow lib arm libtensorflowlite flex jni so a debug 03 pc 00225795 datum app com segmentation qussegservicenvw 52gmpws9z9fl4loxs6xvow lib arm libtensorflowlite flex jni so a debug 04 pc 0021fb63 datum app com segmentation qussegservicenvw 52gmpws9z9fl4loxs6xvow lib arm libtensorflowlite flex jni so a debug 05 pc 00335dcd datum app com segmentation qussegservicenvw 52gmpws9z9fl4loxs6xvow lib arm libtensorflowlite flex jni so a debug 06 pc 00338313 datum app com segmentation qussegservicenvw 52gmpws9z9fl4loxs6xvow lib arm libtensorflowlite flex jni so a debug 07 pc 0020925b datum app com segmentation qussegservicenvw 52gmpws9z9fl4loxs6xvow lib
arm libtensorflowlite flex jni so java org tensorflow lite nativeinterpreterwrapper run 26 a debug 08 pc 00415879 system lib libart so art quick generic jni trampoline 40 a debug 09 pc 00411375 system lib libart so art quick invoke stub internal 68 a debug 10 pc 003ea57b system lib libart so art quick invoke static stub 222 a debug 11 pc 000a1627 system lib libart so art artmethod invoke art thread unsigned int unsigned int art jvalue char const 154 a debug 12 pc 001e88c9 system lib libart so art interpreter artinterpretertocompiledcodebridge art thread art artmethod art shadowframe unsigned short art jvalue 236 a debug 13 pc 001e33b7 system lib libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 814 a debug 14 pc 003e60af system lib libart so mterpinvokestatic 130 a debug 15 pc 00404294 system lib libart so executemterpimpl 14612 a debug 16 pc 001aa16c dev ashmem dalvik class dex extract in memory from data app com segmentation qussegservicenvw 52gmpws9z9fl4loxs6xvow base apk 6774 6774 delete org tensorflow lite nativeinterpreterwrapper run 164 a debug 17 pc 001c7b33 system lib libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 2760711098 378 a debug 18 pc 001cc219 system lib libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 152 a debug 19 pc 001e339f system lib libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 790 a debug 20 pc 003e50d3 system lib libart so mterpinvokevirtual 442 a debug 21 pc 00404114 system lib libart so executemterpimpl 14228 a debug 22 pc 001a9962 dev ashmem dalvik class dex extract in memory from data app com segmentation qussegservicenvw 52gmpws9z9fl4loxs6xvow base apk 6774 6774 delete org tensorflow lite interpreter runformultipleinputsoutput 10 a 
debug 23 pc 001c7b33 system lib libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 2760711098 378 a debug 24 pc 001cc15f system lib libart so art interpreter enterinterpreterfromentrypoint art thread art codeitemdataaccessor const art shadowframe 82 a debug 25 pc 003d8bb9 system lib libart so artquicktointerpreterbridge 880 a debug 26 pc 004158ff system lib libart so art quick to interpreter bridge 30 a debug 27 pc 0001c0fd dev ashmem dalvik jit code cache 6774 6774 delete com segmentation qussegservicenvw tensorflowsegmentrunner segnetrunner segmentchunk 492 a debug 28 pc 004113bb system lib libart so art quick osr stub 42 a debug 29 pc 0024d8a9 system lib libart so art jit jit maybedoonstackreplacement art thread art artmethod unsigned int int art jvalue 1388 a debug 30 pc 003e9aab system lib libart so mterpmaybedoonstackreplacement 86 a debug 31 pc 00410bf4 system lib libart so executemterpimpl 66164 a debug 32 pc 0002e7b8 dev ashmem dalvik classes2 dex extract in memory from data app com segmentation qussegservicenvw 52gmpws9z9fl4loxs6xvow base apk classes2 dex 6774 6774 delete com segmentation qussegservicenvw tensorflowsegmentrunner segnetrunner segmentchunk 76 a debug 33 pc 001c7b33 system lib libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 2760711098 378 a debug 34 pc 001cc219 system lib libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 152 a debug 35 pc 001e339f system lib libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 790 a debug 36 pc 003e5f61 system lib libart so mterpinvokedirect 196 a debug 37 pc 00404214 system lib libart so executemterpimpl 14484 a debug 38 pc 0002e750 dev ashmem dalvik classes2 dex extract in memory from data app com segmentation qussegservicenvw 
52gmpws9z9fl4loxs6xvow base apk classes2 dex 6774 6774 delete com segmentation qussegservicenvw tensorflowsegmentrunner segnetrunner access 100 a debug 39 pc 001c7b33 system lib libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 2760711098 378 a debug 40 pc 001cc15f system lib libart so art interpreter enterinterpreterfromentrypoint art thread art codeitemdataaccessor const art shadowframe 82 a debug 41 pc 003d8bb9 system lib libart so artquicktointerpreterbridge 880 a debug 42 pc 004158ff system lib libart so art quick to interpreter bridge 30 a debug 43 pc 0001b5fd dev ashmem dalvik jit code cache 6774 6774 delete com segmentation qussegservicenvw tensorflowsegmentrunner segmentframe 604 a debug 44 pc 00411375 system lib libart so art quick invoke stub internal 68 a debug 45 pc 003ea479 system lib libart so art quick invoke stub 224 a debug 46 pc 000a1615 system lib libart so art artmethod invoke art thread unsigned int unsigned int art jvalue char const 136 a debug 47 pc 001e88c9 system lib libart so art interpreter artinterpretertocompiledcodebridge art thread art artmethod art shadowframe unsigned short art jvalue 236 a debug 48 pc 001e33b7 system lib libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 814 a debug 49 pc 003e50d3 system lib libart so mterpinvokevirtual 442 a debug 50 pc 00404114 system lib libart so executemterpimpl 14228 a debug 51 pc 0002edb4 dev ashmem dalvik classes2 dex extract in memory from data app com segmentation qussegservicenvw 52gmpws9z9fl4loxs6xvow base apk classes2 dex 6774 6774 delete com segmentation qussegservicenvw tensorflowsegmentrunner segmentcine 40 a debug 52 pc 001c7b33 system lib libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 2760711098 378 a debug 53 pc 001cc219 system lib libart so art interpreter 
artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 152 a debug 54 pc 001e339f system lib libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 790 a debug 55 pc 003e5ca3 system lib libart so mterpinvokeinterface 1010 a debug 56 pc 00404314 system lib libart so executemterpimpl 14740 a debug 57 pc 00028310 dev ashmem dalvik classes2 dex extract in memory from data app com segmentation qussegservicenvw 52gmpws9z9fl4loxs6xvow base apk classes2 dex 6774 6774 delete com segmentation qussegservicenvw cineplayeractivity 3 1 run 20 a debug 58 pc 001c7b33 system lib libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 2760711098 378 a debug 59 pc 001cc15f system lib libart so art interpreter enterinterpreterfromentrypoint art thread art codeitemdataaccessor const art shadowframe 82 a debug 60 pc 003d8bb9 system lib libart so artquicktointerpreterbridge 880 a debug 61 pc 004158ff system lib libart so art quick to interpreter bridge 30 a debug 62 pc 00411375 system lib libart so art quick invoke stub internal 68 a debug 63 pc 003ea479 system lib libart so art quick invoke stub 224 a debug 64 pc 000a1615 system lib libart so art artmethod invoke art thread unsigned int unsigned int art jvalue char const 136 a debug 65 pc 0034b0c5 system lib libart so art anonymous namespace invokewithargarray art scopedobjectaccessalreadyrunnable const art artmethod art anonymous namespace argarray art jvalue char const 52 a debug 66 pc 0034be1d system lib libart so art invokevirtualorinterfacewithjvalue art scopedobjectaccessalreadyrunnable const jobject jmethodid jvalue 320 a debug 67 pc 0036d203 system lib libart so art thread createcallback void 866 a debug 68 pc 00064899 system lib libc so pthread start void 140 a debug 69 pc 0001e329 system lib libc so start thread 24 |
tensorflow/tensorflow | Copy-paste error for value embedding: token embedding of query input | Bug | Should be value_input. Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example: ... Description of issue (what needs changing): Clear description. For example, why should someone use this method? How is it useful? Correct links: Is the link to the source code correct? Parameters defined: Are all parameters defined and formatted correctly? Returns defined: Are return values defined? Raises listed and defined: Are the errors defined? For example, ... Usage example: Is there a usage example? See the API guide on how to write testable usage examples. Request visuals, if applicable: Are there currently visuals? If not, will it clarify the content? Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.
tensorflow/tensorflow | Cannot convert model with GRU/GRUCell to the TFLite format | Bug | Despite the declared support, it seems it is not possible to convert an existing TF 2.0 model with GRU/GRUCell to the TFLite format. Where is the problem? Here is the sample:

from tensorflow.keras.layers import *
from tensorflow.keras import Sequential
import numpy as np
import tensorflow as tf

model = Sequential()
model.add(Input((10, 1)))
model.add(GRU(1))
# or: model.add(RNN(tf.compat.v1.nn.rnn_cell.GRUCell(1)))
model.compile(loss='mean_squared_error', optimizer='adam')
x = np.random.uniform(size=(10, 10, 1))
y = np.random.uniform(size=(10, 1))
model.fit(x, y)
model.save('model.h5', include_optimizer=False)

And the logs:

1) For the GRU:

2019-11-13 18:31:17.808698: E tensorflow/lite/toco/toco_tooling.cc:498 We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at and pasting the following:

Traceback (most recent call last):
  File "/home/harry/.local/bin/toco_from_protos", line 10, in <module>
    sys.exit(main())
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 93, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/harry/.local/lib/python3.5/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/harry/.local/lib/python3.5/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 56, in execute
    enable_mlir_converter)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at and pasting the following: Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: RESHAPE, STRIDED_SLICE. Here is a list of operators for which you will need custom implementations: TensorListFromTensor, TensorListReserve, TensorListStack, While.

2) For the GRUCell:

Traceback (most recent call last):
  File "/home/harry/.local/bin/toco", line 10, in <module>
    sys.exit(main())
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/lite/python/tflite_convert.py", line 594, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/harry/.local/lib/python3.5/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/harry/.local/lib/python3.5/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/lite/python/tflite_convert.py", line 577, in run_main
    _convert_tf2_model(tflite_flags)
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/lite/python/tflite_convert.py", line 228, in _convert_tf2_model
    model = keras.models.load_model(flags.keras_model_file)
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/keras/saving/save.py", line 146, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 168, in load_model_from_hdf5
    custom_objects=custom_objects)
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/keras/saving/model_config.py", line 55, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/keras/layers/serialization.py", line 106, in deserialize
    printable_module_name='layer')
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 303, in deserialize_keras_object
    list(custom_objects.items())
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/keras/engine/sequential.py", line 377, in from_config
    custom_objects=custom_objects)
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/keras/layers/serialization.py", line 106, in deserialize
    printable_module_name='layer')
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 303, in deserialize_keras_object
    list(custom_objects.items())
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/keras/layers/recurrent.py", line 958, in from_config
    cell = deserialize_layer(config.pop('cell'), custom_objects=custom_objects)
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/keras/layers/serialization.py", line 106, in deserialize
    printable_module_name='layer')
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 305, in deserialize_keras_object
    return cls.from_config(cls_config)
  File "/home/harry/.local/lib/python3.5/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 519, in from_config
    return cls(**config)
TypeError: __init__() missing 1 required positional argument: 'units'
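The converter error above points at the Select TF ops fallback. As a hedged sketch of what that looks like with the TF 2.x converter API (the tiny functional GRU model below is a stand-in, not the reporter's exact network or TF version):

```python
import tensorflow as tf

# Stand-in model with a GRU, which lowers to TensorList*/While ops that have
# no builtin TFLite kernel.
inputs = tf.keras.Input(shape=(10, 1))
outputs = tf.keras.layers.GRU(1)(inputs)
model = tf.keras.Model(inputs, outputs)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Prefer builtin TFLite kernels, but let unsupported ops fall back to the
# Select TF ops (Flex) runtime instead of failing conversion.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
```

Note that running the resulting model then requires a runtime with the Flex delegate linked in, which is what the Conv3D report above builds.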
tensorflow/tensorflow | API function changed from Keras 2.3.1 to Keras 2.4.2 | Bug | Please make sure that this is a feature request. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub (tag: feature_template).

System information: TensorFlow version: (a) TF 2.0.0 stable with Keras 2.4.2, and (b) TF 1.13.1 with Keras 2.3.1. Are you willing to contribute it (yes/no): yes.

Describe the feature and the current behavior/state: I want to do transfer learning on MobileNet and change the input size. I copied the code (colab ... #scrollTo=...) according to "Change input size of a pre-trained model in Keras". Everything goes fine on my machine (b) (TF 1.13.1, Keras 2.3.1): I got exactly the same output as the author, namely the input size gets changed. On my other machine (a) (TF 2.0.0, Keras 2.4.2) I had to make some changes, and the resulting input shape stays 224x224x3 instead of the expected 130x130x3.

TensorFlow 1.13.1 / Keras 2.3.1 (works fine):

# works on TensorFlow 1.13.1
import keras
import numpy as np
keras.backend.clear_session()

def change_model(model, new_input_shape=(None, 40, 40, 3)):
    # replace input shape of first layer
    model.layers[0].batch_input_shape = new_input_shape
    # rebuild model architecture by exporting and importing via json
    new_model = keras.models.model_from_json(model.to_json())
    # copy weights from old model to new one
    for layer in new_model.layers:
        try:
            layer.set_weights(model.get_layer(name=layer.name).get_weights())
            print('Loaded layer {}'.format(layer.name))
        except:
            print('Could not transfer weights for layer {}'.format(layer.name))
    return new_model

from keras.applications.mobilenet import MobileNet
from keras.preprocessing import image
from keras.applications.mobilenet import preprocess_input, decode_predictions
import numpy as np

model = MobileNet(weights='imagenet', include_top=True, input_shape=(224, 224, 3))
new_model = change_model(model, new_input_shape=(None, 130, 130, 3))
new_model.summary()

Output is 130x130x3:

Layer (type)                  Output Shape          Param #
input_1 (InputLayer)          (None, 130, 130, 3)   0
conv1_pad (ZeroPadding2D)     (None, 131, 131, 3)   0
conv1 (Conv2D)                (None, 65, 65, 32)    864
conv1_bn (BatchNormalization) (None, 65, 65, 32)    128

TensorFlow 2.0.0 / Keras 2.4.2:

# works on TensorFlow 2.0.0
import keras
import tensorflow as tf
from keras.applications.mobilenet import MobileNet

def change_model(model, new_input_shape=(None, 40, 40, 3)):
    # replace input shape of first layer
    model.layers[0].batch_input_shape = new_input_shape
    # rebuild model architecture by exporting and importing via json
    new_model = tf.keras.models.model_from_json(model.to_json())
    # copy weights from old model to new one
    for layer in new_model.layers:
        try:
            layer.set_weights(model.get_layer(name=layer.name).get_weights())
            print('Loaded layer {}'.format(layer.name))
        except:
            print('Could not transfer weights for layer {}'.format(layer.name))
    return new_model

model = MobileNet(include_top=True, weights='imagenet', input_shape=(224, 224, 3),
                  backend=tf.keras.backend, layers=tf.keras.layers,
                  models=tf.keras.models, utils=tf.keras.utils)
new_model = change_model(model, new_input_shape=(None, 130, 130, 3))
print(new_model.summary())

Output stays 224x224x3:

Layer (type)                  Output Shape           Param #
input_1 (InputLayer)          (None, 224, 224, 3)    0
conv1_pad (ZeroPadding2D)     (None, 225, 225, 3)    0
conv1 (Conv2D)                (None, 112, 112, 32)   864
conv1_bn (BatchNormalization) (None, 112, 112, 32)   128
conv1_relu (ReLU)             (None, 112, 112, 32)   0

Will this change the current api? How? Not sure.
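One workaround consistent with the report's approach: instead of mutating the layer's batch_input_shape attribute (which tf.keras may not pick up when re-serializing), patch the input shape directly in the JSON config before rebuilding. A hedged sketch with a small stand-in model (the report uses MobileNet; the key names handled below cover different Keras versions):

```python
import json
import tensorflow as tf

# Small stand-in model; weights for mismatched layers would be skipped with
# the report's try/except copy loop.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

cfg = json.loads(model.to_json())
# Drop a cached model-level build shape if this Keras version records one.
cfg.pop("build_config", None)
# Patch whichever per-layer key this Keras version uses for the input shape.
for layer_cfg in cfg["config"]["layers"]:
    lc = layer_cfg["config"]
    for key in ("batch_input_shape", "batch_shape"):
        if key in lc:
            lc[key] = [None, 16]

new_model = tf.keras.models.model_from_json(json.dumps(cfg))
```

Because the shape lives in the serialized config rather than a live attribute, model_from_json rebuilds with the new input size in both TF 1.x-style and TF 2.x tf.keras.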
tensorflow/tensorflow | Training with multiple GPUs gets error: Out of range: End of sequence | Bug | I am training with TensorFlow 2.0 on multiple GPUs and get the following error, but if I use only one GPU it runs without any error. My TensorFlow version is tensorflow-gpu 2.0.0.

tensorflow.python.framework.errors_impl.CancelledError: 4 root error(s) found.
(0) Cancelled: Operation was cancelled [[node cond_6/else/_59/IteratorGetNext]]
(1) Out of range: End of sequence [[node cond_4/else/_37/IteratorGetNext]]
(2) Out of range: End of sequence [[node cond_7/else/_70/IteratorGetNext]] [[metrics/accuracy/div_no_nan/ReadVariableOp_6/_154]]
(3) Out of range: End of sequence [[node cond_7/else/_70/IteratorGetNext]]
0 successful operations. 1 derived errors ignored. [[Op:__inference_distributed_function_83325]]
Function call stack: distributed_function -> distributed_function -> distributed_function -> distributed_function

And this is my code:

import tensorflow as tf
import tensorflow_datasets as tfds

data_name = 'uc_merced'
dataset = tfds.load(name=data_name)
train_data, test_data = dataset['train'], dataset['train']

def parse(img_dict):
    img = tf.image.resize_with_pad(img_dict['image'], 256, 256)
    label = img_dict['label']
    return img, label

train_data = train_data.map(parse)
train_data = train_data.batch(96)
test_data = test_data.map(parse)
test_data = test_data.batch(96)

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.applications.ResNet50(weights=None, classes=21, input_shape=(256, 256, 3))
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    model.fit(train_data, epochs=50, verbose=2, validation_data=test_data)

model.save('model_resnet_{}.h5'.format(data_name))
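A workaround that often avoids this "End of sequence" error under tf.distribute is to repeat the dataset and pass an explicit steps_per_epoch, so no replica runs off the end of an unevenly divided epoch. A minimal single-device sketch (the MirroredStrategy scope is omitted for brevity, and the data and model are placeholders, not the UC Merced pipeline):

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 100 samples do not divide evenly into batches of 32.
xs = np.random.rand(100, 8).astype("float32")
ys = np.random.randint(0, 2, size=(100,))
ds = tf.data.Dataset.from_tensor_slices((xs, ys)).batch(32)

batches_per_epoch = 100 // 32  # drop the ragged final batch from the count
ds = ds.repeat()               # make the iterator effectively infinite

model = tf.keras.Sequential([tf.keras.layers.Dense(2, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
history = model.fit(ds, epochs=1, steps_per_epoch=batches_per_epoch, verbose=0)
```

With `.repeat()` the iterator never raises OutOfRange, and `steps_per_epoch` tells Keras where each epoch ends instead.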
tensorflow/tensorflow | ValueError when computing Jacobian in eager mode | Bug | Please make sure that this is a feature request. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub (tag: feature_template).

I'm trying to add Jacobian regularization to our loss function to reduce input sensitivity, but in TensorFlow 1.14.0 it is hard to build the dynamic graph of the Jacobian regularizer. I tried to use GradientTape to get the Jacobian matrix, but there is a problem getting the weight parameters' gradients: it raises "ValueError: tape is already recording", which means we cannot use GradientTape to compute the Jacobian regularization function. I guess there is another way to do it, which is to build the graph of the Jacobian regularization function explicitly, but this would be very hard for me. Is there any way to do this conveniently?

System information: TensorFlow version you are using: 1.14.0. Are you willing to contribute it (yes/no): no, because I'm not very familiar with TensorFlow's framework.

Describe the feature and the current behavior/state: I want to apply Jacobian regularization to my model in TensorFlow. Will this change the current api? How? Yes. Who will benefit with this feature? Experts, researchers. Any other info.
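For reference, later TF releases let GradientTape compute full Jacobians directly, and a nested tape can then differentiate a Jacobian-based penalty with respect to the weights. A hedged sketch on a toy linear model (TF 2.x API, not the 1.14 setup the report is about):

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0]])
w = tf.Variable([[0.5, -0.5], [0.25, 0.75]])

with tf.GradientTape() as outer:          # records the penalty w.r.t. weights
    with tf.GradientTape() as inner:      # records the model output w.r.t. input
        inner.watch(x)
        y = tf.matmul(x, w)               # toy model: y = x @ w
    jac = inner.jacobian(y, x)            # dy/dx, shape (1, 2, 1, 2)
    # Jacobian (Frobenius-norm) regularizer; in the report's setting this
    # would be added to the task loss.
    penalty = tf.reduce_sum(tf.square(jac))
grad_w = outer.gradient(penalty, w)
```

For this linear model the Jacobian entries are just the weights, so the penalty is the sum of squared weights and its gradient is 2w, which is a quick sanity check on the nesting.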
tensorflow/tensorflow | TensorFlow Java 1.15.0 fails to load native lib | Bug | Describe the current behavior:

org.tensorflow.NativeLibrary: tryLoadLibraryFailed: no tensorflow_jni in java.library.path
org.tensorflow.NativeLibrary: jniResourceName: org/tensorflow/native/linux-x86_64/libtensorflow_jni.so
org.tensorflow.NativeLibrary: frameworkResourceName: org/tensorflow/native/linux-x86_64/libtensorflow_framework.so
org.tensorflow.NativeLibrary: org/tensorflow/native/linux-x86_64/libtensorflow_framework.so not found. This is fine, assuming org/tensorflow/native/linux-x86_64/libtensorflow_jni.so is not built to depend on it.
org.tensorflow.NativeLibrary: extracting native library to: xxx/tmp/tensorflow_native_libraries-1573652659702-0/libtensorflow_jni.so
org.tensorflow.NativeLibrary: copied 154073736 bytes to xxx/tmp/tensorflow_native_libraries-1573652659702-0/libtensorflow_jni.so
java.lang.UnsatisfiedLinkError: xxx/tmp/tensorflow_native_libraries-1573652659702-0/libtensorflow_jni.so: libtensorflow_framework.so.1
  at java.lang.ClassLoader$NativeLibrary.load(Native Method)
  at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
  at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
  at java.lang.Runtime.load0(Runtime.java:809)
  at java.lang.System.load(System.java:1086)
  at org.tensorflow.NativeLibrary.load(NativeLibrary.java:101)
  at org.tensorflow.TensorFlow.init(TensorFlow.java:67)
  at org.tensorflow.TensorFlow.<clinit>(TensorFlow.java:82)
  at org.tensorflow.SavedModelBundle.<clinit>(SavedModelBundle.java:170)

Describe the expected behavior: no exception.
tensorflow/tensorflow | Collective all_gather fails on polymorphic shapes | Bug | System information: OS platform and distribution: Ubuntu 18.04. TensorFlow installed from (source or binary): binary (whl). TensorFlow version: tensorflow-gpu 2.0.0. Python version: 3.6.8. CUDA/cuDNN version: 10.0 / 7.6.4. GPU model and memory: GeForce GTX 1080 Ti.

Describe the current behavior:

import numpy as np
import tensorflow as tf
from tensorflow.core.protobuf import config_pb2
from tensorflow.python.ops import collective_ops

t0 = [0, 1, 2, 3, 4, 5, 6, 7]
t1 = [10, 11, 12, 13, 14, 15, 16, 17]
expected = [0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 15, 16, 17]
group_size = 2
group_key = 1
instance_key = 123

with tf.compat.v1.Session(config=config_pb2.ConfigProto(device_count={'CPU': group_size})) as sess:
    with tf.device('CPU:0'):
        in0 = tf.compat.v1.placeholder(dtype=tf.int32, shape=[None])
        c0 = collective_ops.all_gather(in0, group_size=group_size, group_key=group_key, instance_key=instance_key)
    with tf.device('CPU:1'):
        in1 = tf.compat.v1.placeholder(dtype=tf.int32, shape=[None])
        c1 = collective_ops.all_gather(in1, group_size=group_size, group_key=group_key, instance_key=instance_key)
    # success
    results = sess.run([c0, c1], feed_dict={in0: t0, in1: t1})
    assert np.allclose(results[0], expected)
    assert np.allclose(results[1], expected)
    # fails
    results = sess.run([c0, c1], feed_dict={in0: t0[1:], in1: t1[1:]})

2019-11-13 17:45:50.521948: W tensorflow/core/common_runtime/base_collective_executor.cc:216 BaseCollectiveExecutor::StartAbort Internal: Inconsistent output shapes, got 14, but expected is 16.

In one session, if one runs the above graph a second time with feeds of a size different from the first time, it raises the error "Inconsistent output shapes, got 14, but expected is 16".

Describe the expected behavior: the graph construction above sets the expected shape of the placeholders as polymorphic (None). However, after the first session.run, the collective op caches its output shape, which is 16 in our case (in tensorflow/tensorflow/core/kernels/collective_ops.cc). Shouldn't it be expected that the collective op keeps its graph-defined polymorphic behavior? Specifically, in our case, should it allow a user to all_gather two size-7 tensors into a size-14 one?

Code to reproduce the issue: see above.
tensorflow/tensorflow | TensorFlow fails to find TRTEngineOp when building the pip package and libtensorflow_cc.so | Bug | System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from (source or binary): source. TensorFlow version: 1.14. Python version: 3.6, installed using virtualenv/pip/conda. Bazel version (if compiling from source): 0.24.1. GCC/compiler version (if compiling from source): gcc 7.4.0. CUDA/cuDNN version: 10.0. GPU model and memory: 2080 Ti, 11 GB.

Describe the problem: In brief, I build TensorFlow as both a Python pip package and a C++ libtensorflow_cc.so. I want to train a model, convert it to TF-TRT in Python, then run it in C++, and it fails when running it in C++. Here are the details: I build the Python version by "bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package" and the C++ version by "bazel build --config=opt --config=monolithic --config=cuda //tensorflow:libtensorflow_cc.so", and I want both with TensorRT support. After building them sequentially (configure first, build the pip package, then libtensorflow_cc.so), I find that the Python version can load TensorRT successfully, but when I try to run session->Run() in C++, it gives me an error saying:

Check failed: status.ok() Loading error: Op type not registered 'TRTEngineOp' in binary running on my dev docker. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

Since I'm using TF 1.14, where TensorRT has been promoted to a first-class citizen from tf.contrib, I believe I don't need to dynamically load it in my C++ code using the C API TF_LoadLibrary. So I started to suspect that maybe the TensorRT support was successfully installed for the Python build but not the C++ one, and I didn't find any good reference on the internet on how to enable TF-TRT for libtensorflow_cc.so.

The success of Python TensorFlow with TensorRT support was tested by running "from tensorflow.python.compiler.tensorrt import trt_convert as trt", although I have to run "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/TensorRT-5.1.2.2/lib" first, otherwise it says "ImportError: libnvinfer.so.5: cannot open shared object file: No such file or directory".

The following are the exact commands/settings:

PYTHON_BIN_PATH=/usr/bin/python3.6
PYTHON_LIB_PATH=/usr/local/lib/python3.6/dist-packages
TF_ENABLE_XLA=1
TF_NEED_OPENCL_SYCL=0
TF_NEED_ROCM=0
TF_NEED_CUDA=1
TF_CUDA_VERSION=10
TF_CUDA_PATHS=/usr/local/cuda,/usr/lib/x86_64-linux-gnu,/usr/include
CUDA_TOOLKIT_PATH=/usr/local/cuda
CUDNN_INSTALL_PATH=/usr/lib/x86_64-linux-gnu
TF_CUDNN_VERSION=7
TF_NEED_TENSORRT=1
TF_TENSORRT_VERSION=5
TENSORRT_INSTALL_PATH=/usr/local/TensorRT-5.1.2.2
TF_NCCL_VERSION=2
TF_CUDA_COMPUTE_CAPABILITIES=6.1,7.5
TF_CUDA_CLANG=0
TF_DOWNLOAD_CLANG=0
GCC_HOST_COMPILER_PATH=/usr/bin/gcc
CLANG_CUDA_COMPILER_PATH=/usr/local/clang-8.0.0/bin/clang
TF_NEED_MPI=0
CC_OPT_FLAGS="-mavx -Wno-sign-compare"
TF_SET_ANDROID_WORKSPACE=0

./configure
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip3 install /tmp/tensorflow_pkg/tensorflow-1.14.0-cp36-cp36m-linux_x86_64.whl
bazel build --config=opt --config=monolithic --config=cuda //tensorflow:libtensorflow_cc.so

Any other info / logs: I run the conversion in Python by:

from tensorflow.python.compiler.tensorrt import trt_convert as trt
converter = trt.TrtGraphConverter(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)

I run the converted model in C++ by:

const auto status = LoadSavedModel(session_options, run_options, net_params.model_path, {tensorflow::kSavedModelTagServe}, &bundle_gpu);

and I get the error message from status.error_message(). Could you kindly provide some hints on what I might be doing wrong?
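The ImportError mentioned in the report is the dynamic linker failing to locate the TensorRT libraries, and the same mechanism applies to a C++ binary linked against libtensorflow_cc.so. A sketch using the install prefix from the report:

```shell
# Make libnvinfer.so.5 and friends visible both to the Python interpreter and
# to C++ binaries; the path is the TENSORRT_INSTALL_PATH from the report.
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/TensorRT-5.1.2.2/lib"
```

This only fixes library lookup; it does not by itself register TRTEngineOp in a binary that was built without the TF-TRT kernels.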
tensorflow/tensorflow | CheckpointManager overwrites checkpoints it does not own | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g., Linux Ubuntu 16.04): Manjaro Linux Testing. TensorFlow installed from: PyPI (binary). TensorFlow version (use command below): v1.12.1-16854-g6778662 2.1.0-dev20191028. Python version: 3.7.4.

Describe the current behavior: a tf.train.CheckpointManager erases checkpoints in its directory path that were not written by it. This is a different behaviour from tf.train.Saver in TF1.

Describe the expected behavior: do not remove checkpoints that are not owned by a CheckpointManager instance. In the code below I expect to have checkpoints 1, 2, 4, 5 at the end; instead, only 4 and 5 survive.

Code to reproduce the issue:

import tensorflow as tf

var = tf.Variable(initial_value=12)
checkpoint = tf.train.Checkpoint(var=var)
manager = tf.train.CheckpointManager(checkpoint=checkpoint, directory="delete-me", max_to_keep=2)
manager.save(0)
manager.save(1)
manager.save(2)

manager2 = tf.train.CheckpointManager(checkpoint=checkpoint, directory="delete-me", max_to_keep=2)
manager2.save(3)
manager2.save(4)
manager2.save(5)

Why are 1 and 2 deleted by manager2?

Other info / logs:
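A workaround consistent with the current behaviour is to give each manager its own directory (or its own checkpoint_name prefix), so their max_to_keep retention policies do not garbage-collect each other's files. A hedged sketch, using a temporary directory rather than the report's "delete-me" path:

```python
import os
import tempfile
import tensorflow as tf

var = tf.Variable(12.0)
ckpt = tf.train.Checkpoint(var=var)
root = tempfile.mkdtemp()

# Separate directories: each manager only tracks and prunes its own files.
m1 = tf.train.CheckpointManager(ckpt, os.path.join(root, "run1"), max_to_keep=2)
m2 = tf.train.CheckpointManager(ckpt, os.path.join(root, "run2"), max_to_keep=2)
for step in (0, 1, 2):
    m1.save(step)
for step in (3, 4, 5):
    m2.save(step)
```

After this, each manager retains its own two latest checkpoints (1–2 and 4–5) instead of the second manager pruning the first one's files.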
tensorflow/tensorflow | After graph transforms with a pb file, something goes wrong | Bug | Environment: GPU type: Tesla T4. NVIDIA driver version: 418.87.01. CUDA version: 10.1.243. cuDNN version: 7.6.3. Python version: 3.7.4. TensorFlow version: 1.14.1. Bazel version: 0.24.1. Operating system version: Ubuntu 16.04.6 LTS (GNU/Linux 4.4.0-142-generic x86_64).

Tool: Graph Transform (a toolkit in TensorFlow; original code: tensorflow/tools/graph_transforms).

Steps:

1) Build:

bazel build tensorflow/tools/graph_transforms:transform_graph

2) Transform the pb:

bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=my.pb \
  --out_graph=out.pb \
  --inputs='inputs' \
  --outputs='BiasAdd' \
  --transforms='
    add_default_attributes
    strip_unused_nodes(type=float)
    remove_nodes(op=Identity, op=CheckNumerics)
    fold_constants(ignore_errors=true)
    fold_batch_norms
    fold_old_batch_norms
    round_weights(num_steps=256)
    quantize_weights
    quantize_nodes
    strip_unused_nodes
    sort_by_execution_order'

3) Test the original pb file (i.e., my.pb): it's OK.

4) Test the output pb file (i.e., out.pb): it goes wrong:

(0) Invalid argument: Requested output max must be >= requested output min, but got nan and 0 [[node tacotron_2/inference/decoder/while/CustomDecoderStep/mul_eightbit/requantize ... tacotron_2/inference/decoder/while/CustomDecoderStep/decoder_LSTM/decoder_LSTM/multi_rnn_cell/cell_0/decoder_LSTM_1/add ... _545]]
(1) Invalid argument: Requested output max must be >= requested output min, but got nan and 0 [[node tacotron_2/inference/decoder/while/CustomDecoderStep/mul_eightbit/requantize]]

Additionally, I use the following code to test whether a pb file works:

def pb2inference(args):
    tf.reset_default_graph()
    my_graph_def = tf.GraphDef()
    with tf.gfile.GFile(args.pb, 'rb') as fid:
        serialized_graph = fid.read()
        my_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(my_graph_def, name='')
    my_graph = tf.get_default_graph()
    inputs = my_graph.get_tensor_by_name('inputs:0')
    out = my_graph.get_tensor_by_name('BiasAdd:0')
    with tf.Session(graph=my_graph) as sess:
        feed_dict = {inputs: seq}
        out = sess.run(out, feed_dict=feed_dict)

It works on my.pb but gets the error on out.pb. In addition, the transform process had no warnings or errors.
tensorflow/tensorflow | Getting different results when training a Keras model eagerly | Bug | Problem: I get unexpected results while trying to train a simple Keras model in eager mode (so I can debug). The problem is reproducible on a local machine and on Colab.

Describe the current behavior: the same Keras model, trained in eager mode, gets different results compared to non-eager mode; the model doesn't converge.

Describe the expected behavior: results should not depend on whether eager mode is on or off.

Code to reproduce the issue (a Colab notebook reproduces the problem):

```python
import tensorflow as tf
import tensorflow_datasets as tfds

# The model doesn't work if the following line is uncommented:
# tf.config.experimental_run_functions_eagerly(True)

train_dataset = tfds.load(name='cifar10', split='train')
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
batch_size = 50
model.compile(loss='sparse_categorical_crossentropy', metrics=['accuracy'])
train_set = train_dataset.map(lambda item: (item['image'], item['label'])).batch(batch_size)
model.fit(train_set, epochs=5)
```

Other info / logs. Correct output:

```
Epoch 1/5 1000/1000 19s 19ms/step - loss: 330.7991 - accuracy: 0.1935
Epoch 2/5 1000/1000 15s 15ms/step - loss: 299.7854 - accuracy: 0.2237
Epoch 3/5 1000/1000 15s 15ms/step - loss: 292.7084 - accuracy: 0.2317
```

Output in eager mode:

```
Epoch 1/5 1000/1000 30s 30ms/step - loss: 14.5070 - accuracy: 0.0999
Epoch 2/5 1000/1000 28s 28ms/step - loss: 14.5060 - accuracy: 0.1000
Epoch 3/5 1000/1000 28s 28ms/step - loss: 14.5060 - accuracy: 0.1000
```
tensorflow/tensorflow | Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED | Bug | System information: OS platform and distribution: Linux Ubuntu 16.04. TensorFlow installed from: pip. TensorFlow version: 1.14.0 (GPU). Python version: 3.6.4, installed using virtualenv/pip (also have Anaconda). CUDA/cuDNN version: CUDA 10.0; cuDNN (how do I find the version?). GPU model and memory: RTX 2080, 10989 MiB.

```
Train on 15285 samples, validate on 3822 samples
Epoch 1/100
2019-11-13 11:58:28.507273: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-11-13 11:58:28.790550: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-11-13 11:58:28.791219: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
2019-11-13 11:58:28.791275: E tensorflow/stream_executor/cuda/cuda_dnn.cc:337] Possibly insufficient driver version: 410.48.0
2019-11-13 11:58:28.791290: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
2019-11-13 11:58:28.791312: E tensorflow/stream_executor/cuda/cuda_dnn.cc:337] Possibly insufficient driver version: 410.48.0
Traceback (most recent call last):
  File "main_resnet.py", line 229, in <module>
    shuffle=True)
  File "anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 1239, in fit
    validation_freq=validation_freq)
  File "anaconda3/lib/python3.6/site-packages/keras/engine/training_arrays.py", line 196, in fit_loop
    outs = fit_function(ins_batch)
  File "anaconda3/lib/python3.6/site-packages/tensorflow/python/keras/backend.py", line 3292, in __call__
    run_metadata=self.run_metadata)
  File "anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1458, in __call__
    run_metadata_ptr)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
  (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
    [[node conv2d_1/convolution]]
    [[Mean_417]]
  (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
    [[node conv2d_1/convolution]]
0 successful operations. 0 derived errors ignored.
```

Any advice or suggestion would be appreciated. Thx.
tensorflow/tensorflow | MultiWorkerMirroredStrategy distribution errors: BaseCollectiveExecutor::StartAbort Invalid argument: Lower bound check failed for input 1 from node mkl2tf/30 to node scoped_allocator_concat_1_1 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no-ish (the Keras multi-worker mirrored example). OS platform and distribution: Fedora Server 31. TensorFlow installed from: source, master branch, commit 73a34133f6a414a03e54971f4975584c3d6251cc (identical on both machines). TensorFlow version: v1.12.1-17924-g73a34133f6, 2.0.0. Python version: 3.7.5. Bazel version: 1.1.0. GCC version: 9.2.1. CUDA/cuDNN version: n/a. GPU model and memory: n/a.

Describe the current behavior: crashes at model.fit.

Describe the expected behavior: not crashing.

Code to reproduce the issue: exact code used on node0 and node1 (attached). Other info / logs: logs from running node0 and node1 (attached). To compile on Fedora 31 I did need a grpc version patch; the patch can be found in issuecomment-547867642 on issue #33758. It consists of running this command before compiling TensorFlow:

```bash
curl -L <patch-url> | git apply
```

bazel configure with the Python 3 directory and defaults otherwise; the bazel build command used was:

```bash
bazel build --config=mkl --config=opt //tensorflow/tools/pip_package:build_pip_package
```
tensorflow/tensorflow | TFLiteConverter GetOpWithOutput error | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Google Colab. TensorFlow installed from: binary (pip install tf-nightly). TensorFlow version: 2.1.0-dev20191111. Python version: 3.6.8.

Describe the current behavior: I'm trying to convert a BERT Keras model, imported from the transformers library, using the convert method of TFLiteConverter. The following GetOpWithOutput error happens no matter which values are passed to allow_custom_ops or target_spec.supported_ops:

```
tensorflow/lite/toco/tooling_util.cc:935] Check failed: GetOpWithOutput(model, output_array) Specified output array "Identity" is not produced by any op in this graph.
```

Describe the expected behavior: the TFLiteConverter should convert the model and return the tflite version.

Code to reproduce the issue: (attached). Other info / logs: the same bert-large-uncased-whole-word-masking-finetuned-squad model imported from the transformers library works perfectly when used directly on a QA task, without tflite conversion. Also, conversion of the DistilBERT model from the same library, using the same steps, works correctly with converter.target_spec.supported_ops = [tf.lite.OpsSet.SELECT_TF_OPS].
tensorflow/tensorflow | Error in the documentation of tf.keras.layers.Dense | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: (link). Description of issue (what needs changing): tf.keras.layers.Dense can actually take input_shape as an argument, but this is not shown in the document. In addition, the example in this document uses the function Dense, but I tried it on Google Colab and it is not defined in TensorFlow 2.
tensorflow/tensorflow | C installation still suggests 1.14.0 download | Bug | URL(s) with the issue: (link). Description of issue (what needs changing): the download instructions link to the 1.14.0 library rather than 1.15.0. The 1.15.0 library appears to exist, and 1.15.0 is a release according to the releases page.
tensorflow/tensorflow | Keras fails to combine predictions of variable-sized sequences | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Google Colab. Mobile device: no. TensorFlow installed from: binary. TensorFlow version: v1.12.1-17764-gae26958, 2.1.0-dev20191111. Python version: Google Colab py3. Bazel/GCC version: no. CUDA/cuDNN version, GPU model and memory: Google Colab.

Describe the current behavior: an exception is raised from numpy.concatenate when predicting sequence labels on batches with variable sequence lengths.

Describe the expected behavior: there should be no error, as the fit/evaluate methods work fine.

Code to reproduce the issue: (notebook). Other info / logs:

```python
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 model.predict(get_dataset(labels=False))

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
    972         max_queue_size=max_queue_size,
    973         workers=workers,
--> 974         use_multiprocessing=use_multiprocessing)
    975
    976   def reset_metrics(self):

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in predict(self, model, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing, **kwargs)
    496         model, ModeKeys.PREDICT, x=x, batch_size=batch_size, verbose=verbose,
    497         steps=steps, callbacks=callbacks, max_queue_size=max_queue_size,
--> 498         workers=workers, use_multiprocessing=use_multiprocessing, **kwargs)

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in _model_iteration(self, model, mode, x, y, batch_size, verbose, sample_weight, steps, callbacks, max_queue_size, workers, use_multiprocessing, **kwargs)
    473               mode=mode,
    474               training_context=training_context,
--> 475               total_epochs=1)
    476           cbks.make_logs(model, epoch_logs, results, mode)

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
    185
    186   # End of an epoch.
--> 187   aggregator.finalize()
    188   return aggregator.results

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py in finalize(self)
    349   def finalize(self):
    350     for result in self.results:
--> 351       result.finalize()
    352     self.results = [i.results for i in self.results]
    353     self.results = nest.pack_sequence_as(self._structure, self.results)

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py in finalize(self)
    187
    188     else:
--> 189       self.results = np.concatenate(self.results, axis=0)
    190
    191     if isinstance(self.results, ops.EagerTensor):

<__array_function__ internals> in concatenate(*args, **kwargs)

ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 7 and the array at index 1 has size 9
```
tensorflow/tensorflow | tf.size has no documentation | Bug | Example of what it should say: "Returns the number of elements in the tensor. It equals the length of the flattened tensor."
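The one-line description requested above can be illustrated with a numpy sketch (numpy stands in for TensorFlow here; the equivalence between the shape product and the flattened length is the point of the example):

```python
import numpy as np

# "Returns the number of elements in the tensor": the product of the shape,
# which equals the length of the flattened tensor.
t = np.zeros((2, 3, 4))
size = int(np.prod(t.shape))
print(size)                # 24, as tf.size would report for a (2, 3, 4) tensor
print(len(t.reshape(-1)))  # 24 as well: the length of the flattened tensor
```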
tensorflow/tensorflow | SparseCategoricalAccuracy: InvalidArgumentError | Bug | System information: OS platform and distribution: Win10. TensorFlow version: 2.0.0. Python version: 3.7.5. When I try to run the following code, an error occurs:

```python
import numpy as np
import tensorflow as tf

class MNISTLoader:
    def __init__(self):
        mnist = tf.keras.datasets.mnist
        (self.train_data, self.train_label), (self.test_data, self.test_label) = mnist.load_data()
        self.train_data = np.expand_dims(self.train_data.astype(np.float32) / 255.0, axis=-1)
        self.test_data = np.expand_dims(self.test_data.astype(np.float32) / 255.0, axis=-1)
        self.train_label = self.train_label.astype(np.int32)
        self.test_label = self.test_label.astype(np.int32)
        self.num_train_data, self.num_test_data = self.train_data.shape[0], self.test_data.shape[0]

    def get_batch(self, batch_size):
        index = np.random.randint(0, np.shape(self.train_data)[0], batch_size)
        return self.train_data[index, :], self.train_label[index]

class MLP(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.flatten = tf.keras.layers.Flatten()
        self.dense1 = tf.keras.layers.Dense(units=100, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(units=10)

    def call(self, inputs):
        x = self.flatten(inputs)
        x = self.dense1(x)
        x = self.dense2(x)
        output = tf.nn.softmax(x)
        return output

num_epochs = 5
batch_size = 50
learning_rate = 0.001
model = MLP()
data_loader = MNISTLoader()
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
num_batches = int(data_loader.num_train_data // batch_size * num_epochs)
for batch_index in range(num_batches):
    X, y = data_loader.get_batch(batch_size)
    with tf.GradientTape() as tape:
        y_pred = model(X)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
        loss = tf.reduce_mean(loss)
        print("batch %d: loss %f" % (batch_index, loss.numpy()))
    grads = tape.gradient(loss, model.variables)
    optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))

sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
num_batches = int(data_loader.num_test_data // batch_size)
for batch_index in range(num_batches):
    start_index, end_index = batch_index * batch_size, (batch_index + 1) * batch_size
    y_pred = model.predict(data_loader.test_data[start_index:end_index])
    y_true = data_loader.test_label[start_index:end_index]
    y_true = y_true.reshape(-1, 1)
    sparse_categorical_accuracy.update_state(y_true=y_true, y_pred=y_pred)
    if batch_index == 1:
        print("y_pred", y_pred.shape)
        print(y_pred)
        print("y_true", y_true.shape)
        print(y_true)
print("test accuracy: %f" % sparse_categorical_accuracy.result())
```

Describe the current behavior. The error information is as follows:

```
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>
      6     y_true = data_loader.test_label[start_index:end_index]
      7     y_true = y_true.reshape(-1, 1)
----> 8     sparse_categorical_accuracy.update_state(y_true=y_true, y_pred=y_pred)
      9
     10     if batch_index == 1:

D:\software\anaconda3\envs\tensorflow2.0\lib\site-packages\tensorflow_core\python\keras\utils\metrics_utils.py in decorated(metric_obj, *args, **kwargs)
     73
     74     with tf_utils.graph_context_for_symbolic_tensors(*args, **kwargs):
---> 75       update_op = update_state_fn(*args, **kwargs)
     76     if update_op is not None:  # update_op will be None in eager execution.
     77       metric_obj.add_update(update_op)

D:\software\anaconda3\envs\tensorflow2.0\lib\site-packages\tensorflow_core\python\keras\metrics.py in update_state(self, y_true, y_pred, sample_weight)
    579         y_pred, y_true)
    580
--> 581     matches = self._fn(y_true, y_pred, **self._fn_kwargs)
    582     return super(MeanMetricWrapper, self).update_state(
    583         matches, sample_weight=sample_weight)

D:\software\anaconda3\envs\tensorflow2.0\lib\site-packages\tensorflow_core\python\keras\metrics.py in sparse_categorical_accuracy(y_true, y_pred)
   2784     y_pred = math_ops.cast(y_pred, K.dtype(y_true))
   2785
-> 2786   return math_ops.cast(math_ops.equal(y_true, y_pred), K.floatx())

D:\software\anaconda3\envs\tensorflow2.0\lib\site-packages\tensorflow_core\python\ops\math_ops.py in equal(x, y, name)
   1304     A `Tensor` of type bool with the same size as that of x or y.
   1305   """
-> 1306   return gen_math_ops.equal(x, y, name=name)

D:\software\anaconda3\envs\tensorflow2.0\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py in equal(x, y, incompatible_shape_error, name)
   3617       else:
   3618         message = e.message
-> 3619       six.raise_from(_core._status_to_exception(e.code, message), None)
   3620   # Add nodes to the TensorFlow graph.
   3621   if incompatible_shape_error is None:

D:\software\anaconda3\envs\tensorflow2.0\lib\site-packages\six.py in raise_from(value, from_value)

InvalidArgumentError: Incompatible shapes: [0] vs. [50] [Op:Equal]
```

I don't know why this happens. I have checked the API and tried to reshape y_true; it doesn't work either.
tensorflow/tensorflow | TF-TRT optimized graph gets wrong results when batch size < 7 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04. TensorFlow installed from: binary. TensorFlow version: 1.14.0. Python version: 3.6.8. CUDA/cuDNN version: 10.2. GPU model and memory: Tesla T4 with 15001 MiB memory.

Describe the current behavior: when using the TF-TRT optimized graph for inference, it only works as expected when the batch size is at least 7. I tested batch sizes from 1 to 16, and the results are completely wrong when the batch size is below 7.

Describe the expected behavior: small batch sizes should also give correct results, since my original un-optimized graph can handle small batch sizes.

Code to reproduce the issue (setup according to tftrt/examples/object_detection):

```python
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
import cv2
from collections import namedtuple
from PIL import Image
import numpy as np
import time
import json
import subprocess
import os
import glob

from tftrt.examples.object_detection.graph_utils import force_nms_cpu as f_force_nms_cpu
from tftrt.examples.object_detection.graph_utils import replace_relu6 as f_replace_relu6
from tftrt.examples.object_detection.graph_utils import remove_assert as f_remove_assert

def optimize_model(frozen_graph,
                   use_trt=True,
                   force_nms_cpu=True,
                   replace_relu6=True,
                   remove_assert=True,
                   precision_mode='FP32',
                   minimum_segment_size=2,
                   max_workspace_size_bytes=1 << 32,
                   maximum_cached_engines=100,
                   calib_images_dir=None,
                   num_calib_images=None,
                   calib_batch_size=1,
                   calib_image_shape=None,
                   output_path=None):
    # same function copied from tftrt/examples/object_detection/object_detection.py#L328
    pass

# optimize a customized frozen graph
frozen_graph_path = 'path/to/frozen_graph.pb'
frozen_graph = tf.GraphDef()
with open(frozen_graph_path, 'rb') as f:
    frozen_graph.ParseFromString(f.read())

frozen_graph = optimize_model(
    frozen_graph,
    force_nms_cpu=False,
    replace_relu6=True,
    remove_assert=True,
    use_trt=True,
    precision_mode='FP16',
    max_workspace_size_bytes=17179869184,
    output_path='trt_frozen_graph.pb')

# run inference
frozen_graph_path = 'trt_frozen_graph.pb'
frozen_graph = tf.GraphDef()
with open(frozen_graph_path, 'rb') as f:
    frozen_graph.ParseFromString(f.read())

# images: [bs, h, w, 3]
input_names = 'inputs'
boxes_name = 'output_boxes'
classes_name = 'output_labels'
scores_name = 'output_scores'
num_detections_name = 'output_num_detections'

with tf.Graph().as_default() as tf_graph:
    with tf.Session(config=tf_config) as tf_sess:
        tf.import_graph_def(frozen_graph, name='')
        tf_input = tf_graph.get_tensor_by_name(input_names + ':0')
        tf_boxes = tf_graph.get_tensor_by_name(boxes_name + ':0')
        tf_classes = tf_graph.get_tensor_by_name(classes_name + ':0')
        tf_scores = tf_graph.get_tensor_by_name(scores_name + ':0')
        tf_num_detections = tf_graph.get_tensor_by_name(num_detections_name + ':0')
        boxes, classes, scores, num_detections = tf_sess.run(
            [tf_boxes, tf_classes, tf_scores, tf_num_detections],
            feed_dict={tf_input: batched_images})
```

Other info / logs: my frozen graph can be downloaded at (link).

```
WARNING:tensorflow:From /notebooks/tensorrt/tftrt/examples/object_detection/graph_utils.py:31: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
INFO:tensorflow:Linked TensorRT version: (6, 0, 1)
INFO:tensorflow:Loaded TensorRT version: (6, 0, 1)
INFO:tensorflow:Running against TensorRT version 6.0.1
graph_size(MB)(native_tf): 54.5
graph_size(MB)(trt): 106.0
num_nodes(native_tf): 10992
num_nodes(tftrt_total): 10156
num_nodes(trt_only): 290
time(s)(trt_conversion): 15.8726
```
tensorflow/tensorflow | Can't load optimizer weights after adding a layer without parameters | Bug | Model A:

```python
ipt = Input(batch_shape=(32, 240, 4))
x1  = Conv1D(16, 20,  strides=200, padding='same')(ipt)
x1  = BatchNormalization()(x1)
x2  = Conv1D(16, 200, strides=120, padding='same')(ipt)
x2  = BatchNormalization()(x2)
```

Model B:

```python
ipt = Input(batch_shape=(32, 250, 4))
x1  = Conv1D(16, 20,  strides=200)(ipt)
x1  = BatchNormalization()(x1)
x2  = Conv1D(16, 200, strides=120)(ipt)
x2  = BatchNormalization()(x2)
```

The two have identical weight shapes; however, A's optimizer weights cannot be loaded onto B, as B has a different build order (image/code below). This is a tiny snippet of a much larger model which needs its timesteps parameter changed every x epochs, and ZeroPadding1D appears to change the layer build order whenever it's used. This doesn't affect model weights, as they're mapped via a dictionary, whereas optimizer weights are mapped sequentially (list to list). Reproducible in both TF1 and TF2, and with both keras and tf.keras imports. What's the problem, and how to fix it? Relevant SO question: (link).

Environment: Win 10 OS, CUDA 10.0.130, cuDNN 7.6.0, Python 3.7.4, GTX 1070.

Observations: swapping any other layer (not just BatchNormalization), and with any number of layers before Concatenate, the optimizer weights end up being simply swapped in get_weights. Can change strides instead of batch_shape[1]. Can use MaxPooling1D with strides=1, padding='valid' (leading to ZeroPadding1D), but it doesn't change build order — don't know why.

Model A summary:

```
Layer (type)                    Output Shape      Param #   Connected to
input_1 (InputLayer)            (32, 240, 4)      0
conv1d (Conv1D)                 (32, 2, 16)       1296      input_1[0][0]
conv1d_1 (Conv1D)               (32, 2, 16)       12816     input_1[0][0]
bn_1 (BatchNormalization)       (32, 2, 16)       64        conv1d[0][0]
bn_2 (BatchNormalization)       (32, 2, 16)       64        conv1d_1[0][0]
concatenate (Concatenate)       (32, 2, 32)       0         bn_1[0][0]
                                                            bn_2[0][0]
gap_0 (GlobalAveragePooling1D)  (32, 32)          0         concatenate[0][0]
dense (Dense)                   (32, 1)           33        gap_0[0][0]
```

Model B summary (note the swapped layers):

```
input_2 (InputLayer)            (32, 250, 4)      0
conv1d_2 (Conv1D)               (32, 2, 16)       1296      input_2[0][0]
bn_1 (BatchNormalization)       (32, 2, 16)       64        conv1d_2[0][0]
conv1d_3 (Conv1D)               (32, 3, 16)       12816     input_2[0][0]
zero_padding1d (ZeroPadding1D)  (32, 3, 16)       0         bn_1[0][0]
bn_2 (BatchNormalization)       (32, 3, 16)       64        conv1d_3[0][0]
concatenate_1 (Concatenate)     (32, 3, 32)       0         zero_padding1d[0][0]
                                                            bn_2[0][0]
gap_0 (GlobalAveragePooling1D)  (32, 32)          0         concatenate_1[0][0]
dense_1 (Dense)                 (32, 1)           33        gap_0[0][0]
```

Minimally reproducible code:

```python
# also works with `from keras ...`
from tensorflow.keras.layers import Input, Conv1D, ZeroPadding1D, concatenate
from tensorflow.keras.layers import BatchNormalization, Dense, GlobalAveragePooling1D
from tensorflow.keras.models import Model
import numpy as np

def make_model(batch_shape):
    ipt = Input(batch_shape=batch_shape)
    x1 = Conv1D(16, 20, strides=200, padding='same')(ipt)
    x1 = BatchNormalization()(x1)
    x2 = Conv1D(16, 200, strides=120, padding='same')(ipt)
    x2 = BatchNormalization()(x2)
    x1, x2 = zero_pad(x1, x2)
    preout = concatenate([x1, x2])
    preout = GlobalAveragePooling1D()(preout)
    out = Dense(1)(preout)
    model = Model(ipt, out)
    model.compile('adam', 'mse')
    return model

def zero_pad(x1, x2):
    diff = int(x2.shape[1]) - int(x1.shape[1])
    if diff > 0:
        x1 = ZeroPadding1D((diff, 0))(x1)
    elif diff < 0:
        x2 = ZeroPadding1D((abs(diff), 0))(x2)
    return x1, x2

def make_data(batch_shape):
    return (np.random.randn(*batch_shape),
            np.random.randint(0, 2, (batch_shape[0], 1)))

batch_shape_A = (32, 240, 4)
batch_shape_B = (32, 250, 4)
batch_shape_C = (32, 240, 4)

model_A = make_model(batch_shape_A)
model_B = make_model(batch_shape_B)
model_C = make_model(batch_shape_C)  # control group

x_A, y_A = make_data(batch_shape_A)
x_B, y_B = make_data(batch_shape_B)
x_C, y_C = make_data(batch_shape_C)

model_A.train_on_batch(x_A, y_A)
model_B.train_on_batch(x_B, y_B)
model_C.train_on_batch(x_C, y_C)

optimizer_weights_A = model_A.optimizer.get_weights()
model_C.optimizer.set_weights(optimizer_weights_A)
print("model_C optimizer weights set successfully")
model_B.optimizer.set_weights(optimizer_weights_A)
print("model_B optimizer weights set successfully")  # will not print
```

Output:

```
model_C optimizer weights set successfully
ValueError: Optimizer weight shape (16,) not compatible with provided weight shape (200, 4, 16)
```
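Since the report notes that model weights survive because they are mapped via a dictionary while optimizer weights are matched list-to-list, one possible direction is to reorder the saved optimizer weight list by shape before calling set_weights. The sketch below is pure Python/numpy; the helper name and the greedy matching are assumptions, not a tested fix, and duplicate shapes among real optimizer slots can still be paired wrongly:

```python
import numpy as np

# Hypothetical workaround sketch: reorder a flat weight list so its shapes
# line up with a target shape order, instead of relying on layer build order.
def reorder_by_shape(weights, target_shapes):
    pool = list(weights)
    ordered = []
    for shape in target_shapes:
        # greedily take the first remaining weight with a matching shape
        idx = next(i for i, w in enumerate(pool) if w.shape == shape)
        ordered.append(pool.pop(idx))
    return ordered

# Toy example using the two shapes from the error message above.
w = [np.zeros((16,)), np.zeros((200, 4, 16))]
target = [(200, 4, 16), (16,)]
reordered = reorder_by_shape(w, target)
print([x.shape for x in reordered])  # [(200, 4, 16), (16,)]
```

In practice the target shape order would come from the destination model's own freshly initialized optimizer weights, e.g. `[w.shape for w in model_B.optimizer.get_weights()]`.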
tensorflow/tensorflow | tf.keras.backend.sqrt(tf.constant(-1.0)) is 0, which is misleading; tf.sqrt(tf.constant(-1.0)) is nan, which is the way it should be | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Linux Ubuntu 16.04. Mobile device: no. TensorFlow installed from: source. TensorFlow version: 1.15. Python version: 3.7. Bazel/GCC version: no. CUDA/cuDNN version: no. GPU model and memory: running on CPU.

Describe the current behavior: tf.keras.backend.sqrt(tf.constant(-1.0)) returns 0, because a clip_by_value is done in the source code. This is highly misleading, as it can be seen only in the source and not in the function documentation, whereas tf.sqrt(tf.constant(-1.0)) returns nan, which is the expected behavior of any sqrt function. This caused some bugs that were very difficult to track down.

Describe the expected behavior: make the sqrt function return only the expected behavior, and remove the clip_by_value.

Code to reproduce the issue:

```python
import tensorflow as tf
tf.enable_eager_execution()
tf.keras.backend.sqrt(tf.constant(-1.0)).numpy()  # 0.0
tf.sqrt(tf.constant(-1.0)).numpy()                # nan
```

Other info / logs: none.
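The two behaviours can be reproduced in plain numpy; the clip range [0, inf) below mirrors what the report says the Keras backend source does, and is an assumption about that implementation:

```python
import numpy as np

x = np.float32(-1.0)

# Like tf.sqrt: square root of a negative number is nan.
with np.errstate(invalid='ignore'):  # suppress the RuntimeWarning
    plain = np.sqrt(x)

# Like tf.keras.backend.sqrt as described in the report: clip to [0, inf)
# first, so the negative input silently becomes 0.
clipped = np.sqrt(np.clip(x, 0.0, np.inf))

print(plain, clipped)  # nan 0.0
```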
tensorflow/tensorflow | No performance improvement on tf.matmul when building with AVX2 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: CentOS 7. TensorFlow version: v1.14. Python version: 3.6.5. Bazel version: 0.26.1. GCC version: 8.3.1.

Describe the current behavior: I ran the benchmark test for the tf.matmul operation under two different TensorFlow packages. 1. The pip package v1.14 released by TensorFlow. This package is not compiled with AVX2, as indicated by this log: "I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA". 2. A TensorFlow package built with AVX2 from source, using the command:

```bash
bazel build -c opt --copt=-march=native //tensorflow/tools/pip_package:build_pip_package
```

However, there is no big performance difference between the two packages: 1. TensorFlow v1.14 (CPU): 8192 x 8192 matmul took 1.54 sec, 713.30 G ops/sec. 2. Built from source with AVX2: 8192 x 8192 matmul took 1.60 sec, 687.73 G ops/sec.

Describe the expected behavior: the benchmark with the second package should be faster than the first one.

Code to reproduce the issue (benchmark code borrowed from here):

```python
import os
import sys
import tensorflow as tf
import time

n = 8192
dtype = tf.float32
matrix1 = tf.Variable(tf.ones((n, n), dtype=dtype))
matrix2 = tf.Variable(tf.ones((n, n), dtype=dtype))
product = tf.matmul(matrix1, matrix2)

# avoid optimizing away redundant nodes
config = tf.ConfigProto(graph_options=tf.GraphOptions(
    optimizer_options=tf.OptimizerOptions(opt_level=tf.OptimizerOptions.L0)))
sess = tf.Session(config=config)

sess.run(tf.global_variables_initializer())
iters = 10

# pre-warming
sess.run(product.op)

start = time.time()
for i in range(iters):
    sess.run(product.op)
end = time.time()
ops = n ** 3 + n ** 2 * (n - 1)  # n^2*(n-1) additions, n^3 multiplications
elapsed = end - start
rate = iters * ops / elapsed / 10 ** 9
print('\n %d x %d matmul took: %.2f sec, %.2f G ops/sec' % (n, n, elapsed / iters, rate))
```
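As a TensorFlow-independent sanity check, the same operation count and rate formula from the benchmark can be timed against numpy's GEMM. The helper name and the small n=512 size are arbitrary choices for a quick run, not part of the report above:

```python
import time
import numpy as np

# Hypothetical helper: measure the effective GFLOP rate of an n x n matmul,
# using the same operation count as the benchmark (n^3 multiplies plus
# n^2*(n-1) additions).
def matmul_gflops(n=512, iters=5):
    a = np.ones((n, n), dtype=np.float32)
    b = np.ones((n, n), dtype=np.float32)
    a @ b  # warm up, so one-time setup cost is excluded
    start = time.time()
    for _ in range(iters):
        a @ b
    elapsed = time.time() - start
    ops = n ** 3 + n ** 2 * (n - 1)
    return iters * ops / elapsed / 1e9

rate = matmul_gflops()
print('%.2f G ops/sec' % rate)
```

A comparison like the reporter's would run this helper under both builds of the library in question and compare the returned rates.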
tensorflow/tensorflow | Loss and metric differ with masking or sample weights | Bug | Update on Jan 8, 2021: I updated the title from "metrics incorrect for RNN with masking" as I discovered more information that widens the scope of this issue; see the comments on that date.

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Windows 10. Mobile device: n/a. TensorFlow installed from: binary (conda). TensorFlow version: 2.0.0. Python version: 3.6.8. Bazel/GCC version: n/a. CUDA/cuDNN version: 10.0.130 / 7.6.0. GPU model and memory: 1080 Ti.

Describe the current behavior: I am training an RNN (GRU) where my varying-length sequences are right-padded with 0s and a mask is applied (many sequences are more than half 0s padding). I compile the model with a loss of mean squared error and a metric of mean squared error, but the outputs are different when the mask is in effect:

```python
model.compile(optimizer=keras.optimizers.RMSprop(), loss='mean_squared_error',
              metrics=['mean_squared_error'])
# or equivalently
model.compile(optimizer=keras.optimizers.RMSprop(), loss=keras.losses.MeanSquaredError(),
              metrics=[keras.metrics.MeanSquaredError()])
```

Example output — note the different values for loss vs mean_squared_error, for both training and validation:

```
Epoch 1/50  210328/210328 610s 3ms/sample - loss: 4.5338e-06 - mean_squared_error: 1.1923e-05 - val_loss: 2.5456e-06 - val_mean_squared_error: 6.7928e-06
Epoch 2/50  210328/210328 525s 2ms/sample - loss: 2.1835e-06 - mean_squared_error: 5.7421e-06 - val_loss: 2.2920e-06 - val_mean_squared_error: 6.1160e-06
Epoch 3/50  210328/210328 513s 2ms/sample - loss: 1.9939e-06 - mean_squared_error: 5.2437e-06 - val_loss: 2.2535e-06 - val_mean_squared_error: 6.0133e-06
...
Epoch 50/50 210328/210328 527s 3ms/sample - loss: 1.5595e-06 - mean_squared_error: 4.1011e-06 - val_loss: 1.7867e-06 - val_mean_squared_error: 4.7677e-06
```

When I disable the masking, I get the following output:

```
Epoch 1/3 210328/210328 516s 2ms/sample - loss: 7.1682e-06 - mean_squared_error: 7.1682e-06 - val_loss: 6.7091e-06 - val_mean_squared_error: 6.7091e-06
Epoch 2/3 210328/210328 434s 2ms/sample - loss: 5.9133e-06 - mean_squared_error: 5.9133e-06 - val_loss: 6.7091e-06 - val_mean_squared_error: 6.7091e-06
Epoch 3/3 210328/210328 442s 2ms/sample - loss: 5.9085e-06 - mean_squared_error: 5.9085e-06 - val_loss: 6.7073e-06 - val_mean_squared_error: 6.7073e-06
```

Without the mask, the values for loss and mean_squared_error agree. For the validation set the values are not really improving, and the value of ~6.7e-06 seems to be what you get when you evaluate on the 0s that would otherwise be ignored by the masking. Comparing the values between the runs suggests that the mean_squared_error calculation is not using the mask when it is in effect, but the loss calculation does use the mask: we'd expect lower values when we correctly ignore irrelevant timesteps.

Describe the expected behavior: the values for loss and mean_squared_error should agree, and both should use the mask.

Code to reproduce the issue: I don't have full code and data to share, since my model and data are proprietary. Other info / logs: I can't think of any relevant logs.
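The suspected discrepancy is easy to reproduce by hand. This numpy sketch (toy values, not the reporter's data) computes MSE once over all timesteps and once with a padding mask, giving two different numbers just as loss vs. mean_squared_error differ above:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 0.0, 0.0])  # last two steps are right-padding
y_pred = np.array([1.5, 2.5, 0.1, 0.9])
mask   = np.array([1.0, 1.0, 0.0, 0.0])  # 1 = real step, 0 = padded step

# Unmasked metric: padding steps are averaged in.
unmasked_mse = np.mean((y_true - y_pred) ** 2)

# Masked loss: padded steps contribute nothing, and the mean is taken
# over the number of real steps only.
masked_mse = np.sum(mask * (y_true - y_pred) ** 2) / np.sum(mask)

print(unmasked_mse)  # ~0.33: padding errors inflate/deflate the average
print(masked_mse)    # 0.25: only the two real timesteps count
```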
tensorflow/tensorflow | Unable to convert pb to tflite | Bug | System information: OS platform and distribution: Windows 10. TensorFlow installed from: python -m pip install tensorflow==1.15. TensorFlow version: 1.15.

I am currently trying to convert a pb file from export_tflite_ssd_graph.py into a tflite file, and encountered these errors:

```
2019-11-11 06:45:44.042588: F tensorflow/lite/toco/tooling_util.cc:935] Check failed: GetOpWithOutput(model, output_array) Specified output array "TFLite_Detection_PostProcess" is not produced by any op in this graph. Is it a typo? This should not happen. If you trigger this error please send a bug report, with code to reproduce this error, to the TensorFlow Lite team.
Fatal Python error: Abort
```

This is what I tried to run:

```
tflite_convert --output_file=C:\tensorflow1\models\research\object_detection\inference_graph\detect.tflite --graph_def_file=C:\tensorflow1\models\research\object_detection\inference_graph\tflite_graph.pb --inference_type=FLOAT --inference_input_type=QUANTIZED_UINT8 --input_arrays=normalized_input_image_tensor --input_shapes=1,300,300,3 --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --mean_values=128 --std_dev_values=128 --change_concat_input_ranges=false --allow_custom_ops
```

Linked is the full traceback. I've also checked the graph and confirmed that the output array is TFLite_Detection_PostProcess and the input name is normalized_input_image_tensor.
tensorflowtensorflow | tf2.0 cannot load keras saved model with customized loss function | Bug | I run tensorflow.keras on colab.research.google.com. In TF 2.0 I trained a model with a customized loss function named `loss`, then saved it with keras model.save. When I try to load it with keras.models.load_model(filename, custom_objects={'loss': loss}), it raises (tensorflow 2.0.0, python 3.6):

tensorflow_core/python/keras/engine/training_utils.py in get_loss_function(loss)
   1092     return loss
   1093   return LossFunctionWrapper(
-> 1094       loss_fn, name=loss_fn.__name__,
   1095       reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE)
   1096
AttributeError: 'loss' object has no attribute '__name__'

Even setting loss.__name__ = 'xxx' before calling load_model does not help. If I set the compile param to False for load_model it may load successfully, but the loaded model does not work with evaluate or predict. Loading files saved by Keras under TF 1.5 is all OK; those models were trained with the same customized loss function. The model was saved to a file by TF 1.5, while to a path by TF 2.0. How can I solve it? |
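What `custom_objects` does during deserialization can be sketched without Keras: the saved config stores only the loss's name, and loading works only if that name resolves to something exposing `__name__`. The names `my_loss`, `CallableLoss`, and `resolve_loss` below are hypothetical stand-ins, not the report's code or the real Keras internals.

```python
def my_loss(y_true, y_pred):
    # A plain function carries a __name__ attribute.
    return (y_true - y_pred) ** 2

class CallableLoss:
    # A callable *instance* does not expose __name__.
    def __call__(self, y_true, y_pred):
        return (y_true - y_pred) ** 2

def resolve_loss(saved_name, custom_objects):
    # Mirrors the failing pattern: wrapping the resolved callable
    # requires fn.__name__ to exist.
    fn = custom_objects[saved_name]
    return fn, fn.__name__

fn, name = resolve_loss("loss", {"loss": my_loss})
print(name)  # "my_loss"

try:
    resolve_loss("loss", {"loss": CallableLoss()})
except AttributeError as e:
    print("fails like the report:", e)
```

This is why passing a callable object (or anything whose `__name__` is missing) through `custom_objects` can trip the `AttributeError` seen above, while a plain function does not.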
tensorflowtensorflow | only one GPU is used during fit validation phase | Bug | system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes; os platform and distribution (e.g. linux ubuntu 16.04): Linux Ubuntu 16.04; tensorflow version (use command below): 2.0.0.

Describe the current behavior: during the validation phase of keras fit, only one GPU is used, while all GPUs are used during the training phase.

Code to reproduce the issue: this issue can be easily reproduced using the code of this official tutorial, by feeding the validation dataset (eval_dataset) to model.fit. |
tensorflowtensorflow | logits and labels must have the same first dimension | Bug | system information: macOS 10.14.6, tensorflow 1.14.

Working model (python):

input_layer = Input(shape=(x.shape[1],))
model = Embedding(input_dim=len(vocab) + 1, output_dim=32, input_length=x.shape[1])(input_layer)
model = Bidirectional(LSTM(units=50, return_sequences=True, recurrent_dropout=0.2))(model)
output_layer = Dense(3, activation='softmax')(model)
model = Model(input_layer, output_layer)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])
model.summary()

Trying to create the same model with the TF Lite layers for bi-directional LSTM (python):

import os
os.environ['TF_ENABLE_CONTROL_FLOW_V2'] = '1'
import tensorflow as tf
import numpy as np
from tensorflow.lite.experimental.examples.lstm.rnn import bidirectional_dynamic_rnn

def build_lstm_layer(num_layers):
    lstm_layers = []
    for i in range(num_layers):
        lstm_layers.append(tf.lite.experimental.nn.TFLiteLSTMCell(
            num_units=50, name='rnn{}'.format(i), forget_bias=1.0))
    final_lstm_layer = tf.keras.layers.StackedRNNCells(lstm_layers)
    return final_lstm_layer

def build_bidirectional(inputs, num_layers, use_dynamic_rnn=True):
    # lstm_inputs: transposed inputs (time-major)
    lstm_inputs = tf.transpose(inputs, [1, 0, 2])
    outputs, output_states = bidirectional_dynamic_rnn(
        build_lstm_layer(num_layers), build_lstm_layer(num_layers),
        lstm_inputs, dtype='float32', time_major=True)
    fw_lstm_output, bw_lstm_output = outputs
    final_out = tf.concat([fw_lstm_output, bw_lstm_output], axis=2)
    final_out = tf.unstack(final_out, axis=0)
    resultant_out = final_out[-1]
    return resultant_out

tf.reset_default_graph()
model_tf = tf.keras.models.Sequential([
    tf.keras.layers.Input(shape=(x.shape[1],), name='input'),
    tf.keras.layers.Embedding(input_dim=len(vocab) + 1, output_dim=32, input_length=x.shape[1]),
    tf.keras.layers.Lambda(build_bidirectional, arguments={'num_layers': 2, 'use_dynamic_rnn': True}),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(3, activation=tf.nn.softmax, name='output'),
])
model_tf.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model_tf.summary()

Inputs are token sequences, and the output should be NER tags, which I get from the Keras model but not from the above model.

x.shape = (30, 16), y.shape = (30, 16, 1)
i/p: array([15, 10, 38, 4, 32, 57, 39, 0, 0, 0, 0, 0, 0, 0, 0, 0])
o/p: array([1, 1, 1, 1, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])

Output log / error:

Epoch 1/10
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>
      2 train_x, test_x, train_y, test_y = train_test_split(x, y, test_size=0.2)
      3
----> 4 history = model_tf.fit(train_x, train_y, epochs=10, batch_size=3)

.../lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    778         validation_steps=validation_steps,
    779         validation_freq=validation_freq,
--> 780         steps_name='steps_per_epoch')
    781
    782   def evaluate(self, ...):

.../keras/engine/training_arrays.py in model_iteration(...)
    361
    362         # Get outputs.
--> 363         batch_outs = f(ins_batch)
    364         if not isinstance(batch_outs, list):
    365           batch_outs = [batch_outs]

.../keras/backend.py in __call__(self, inputs)
   3290
   3291     fetched = self._callable_fn(*array_vals,
-> 3292                                 run_metadata=self.run_metadata)
   3293     self._call_fetch_callbacks(fetched[-len(self._fetches):])
   3294     output_structure = nest.pack_sequence_as(...)

.../client/session.py in __call__(self, *args, **kwargs)
   1456       ret = tf_session.TF_SessionRunCallable(self._session._session,
   1457                                             self._handle, args,
-> 1458                                             run_metadata_ptr)
   1459       if run_metadata:
   1460         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

InvalidArgumentError: logits and labels must have the same first dimension, got logits shape [3,3] and labels shape [48]
[[node loss/output_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits]] |
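The shape mismatch above is pure bookkeeping: the Keras model predicts one tag per time step, while the Lambda in the TF Lite variant keeps only the last step, so the logits lose the time axis while the labels keep it. A framework-free sketch of that arithmetic (shapes taken from the report; the `first_dims` helper is hypothetical):

```python
# Shapes matching the report: batch_size=3, timesteps=16, classes=3.
batch, timesteps, classes = 3, 16, 3

logits_keras = (batch, timesteps, classes)  # return_sequences=True: per-step logits
logits_lite = (batch, classes)              # Lambda keeps only the last time step
labels = (batch, timesteps, 1)              # y has one tag per time step

def first_dims(logits_shape, labels_shape):
    # Sparse softmax cross-entropy flattens everything except the class axis
    # of the logits, and flattens the labels completely.
    n_logits = 1
    for d in logits_shape[:-1]:
        n_logits *= d
    n_labels = 1
    for d in labels_shape:
        n_labels *= d
    return n_logits, n_labels

print(first_dims(logits_keras, labels))  # (48, 48): shapes agree
print(first_dims(logits_lite, labels))   # (3, 48): the reported error
```

With batch_size=3 this reproduces exactly the reported "logits shape [3,3] and labels shape [48]".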
tensorflowtensorflow | BatchNorm doesn't work in custom model or layer | Bug | system information: Windows 10; tensorflow 2.0.0; python version 3.7.

Describe the current behavior:

InaccessibleTensorError: The tensor 'Tensor("batch_normalization/batch_normalization_trainable:0", dtype=bool)' cannot be accessed here: it is defined in another function or code block. Use return values, explicit Python locals or TensorFlow collections to access it. Defined in: FuncGraph(name=build_graph, id=3078774714504); accessed from: FuncGraph(name=keras_graph, id=3077450685512).

Code to reproduce the issue (python):

import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, BatchNormalization, LeakyReLU

def get_norm(norm_type):
    if norm_type == 'batch':
        return BatchNormalization()
    else:
        raise ValueError(f'unrecognized norm type {norm_type}')

class Discriminator(Model):
    def __init__(self, base_filters=32, lrelu_alpha=0.2, pad_type='same', norm_type='batch'):
        super(Discriminator, self).__init__(name='discriminator')
        # 1
        self.conv1 = Conv2D(filters=base_filters, kernel_size=3, padding=pad_type)  # 32
        self.relu1 = LeakyReLU(alpha=lrelu_alpha)
        # 2
        self.conv2a = Conv2D(filters=base_filters * 2, strides=2, kernel_size=3, padding=pad_type)  # 64
        self.relu2a = LeakyReLU(alpha=lrelu_alpha)
        self.conv2b = Conv2D(filters=base_filters * 4, kernel_size=3, padding=pad_type)  # 128
        self.norm2 = get_norm(norm_type)
        self.relu2b = LeakyReLU(alpha=lrelu_alpha)
        # 3
        self.conv3a = Conv2D(filters=base_filters * 4, strides=2, kernel_size=3, padding=pad_type)  # 128
        self.relu3a = LeakyReLU(alpha=lrelu_alpha)
        self.conv3b = Conv2D(filters=base_filters * 8, kernel_size=3, padding=pad_type)  # 256
        self.norm3 = get_norm(norm_type)
        self.relu3b = LeakyReLU(alpha=lrelu_alpha)
        # 4
        self.conv4 = Conv2D(filters=base_filters * 8, kernel_size=3, padding=pad_type)  # 256
        self.norm4 = get_norm(norm_type)
        self.relu4 = LeakyReLU(alpha=lrelu_alpha)
        # final
        self.conv_final = Conv2D(filters=1, kernel_size=3, padding=pad_type)

    def build(self, input_shape):
        super(Discriminator, self).build(input_shape)

    def call(self, input_tensor, training=False):
        # 1
        x = self.conv1(input_tensor, training=training)
        x = self.relu1(x, training=training)
        # 2
        x = self.conv2a(x, training=training)
        x = self.relu2a(x, training=training)
        x = self.conv2b(x, training=training)
        x = self.norm2(x, training=training)
        x = self.relu2b(x, training=training)
        # 3
        x = self.conv3a(x, training=training)
        x = self.relu3a(x, training=training)
        x = self.conv3b(x, training=training)
        x = self.norm3(x, training=training)
        x = self.relu3b(x, training=training)
        # 4
        x = self.conv4(x, training=training)
        x = self.norm4(x, training=training)
        x = self.relu4(x, training=training)
        # final
        x = self.conv_final(x, training=training)
        return x

if __name__ == '__main__':
    import numpy as np
    shape = (1, 128, 128, 3)
    nx = np.random.rand(*shape).astype(np.float32)
    t = tf.keras.Input(shape=nx.shape[1:], batch_size=nx.shape[0])
    tf.keras.backend.clear_session()
    d = Discriminator()
    out = d(t)
    d.summary()
    print(f'input shape: {t.shape}')
    print(f'output shape: {out.shape}') |
tensorflowtensorflow | impossible to use tf.keras.callbacks.ModelCheckpoint in distributed training | Bug | system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes; os platform and distribution (e.g. linux ubuntu 16.04): Google Colab; tensorflow version (use command below): 2.0.0.

Describe the current behavior: it is not possible to use tf.keras.callbacks.ModelCheckpoint in distributed training:

RuntimeError: `add_update` was called in a cross-replica context. This is not expected. If you require this feature, please file an issue.

Code to reproduce the issue: see this Colab notebook. |
tensorflowtensorflow | repeated numbers in uniform random tensor created on GPU | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template)

system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes; os platform and distribution (e.g. linux ubuntu 16.04): Windows 10; mobile device (e.g. iphone 8, pixel 2, samsung galaxy) if the issue happens on a mobile device: ; tensorflow installed from (source or binary): source; tensorflow version (use command below): r1.12; python version: 3.6.6; bazel version (if compiling from source): 0.15.1; gcc/compiler version (if compiling from source): ; cuda/cudnn version: 9.0 / 7.4; gpu model and memory: Quadro M2200, 4 GiB.

Describe the current behavior: I need to generate a tensor of uniform randoms inside a custom op. I basically copied the code for generating a tensor of zeros, as is done here (#L31) (C++):

template <typename Device, typename T>
struct TensorRandom {
  void operator()(const Device& d, typename TTypes<T>::Flat t) {
    t.device(d) = t.random();  // this is my only change to the TensorZero functor
  }
};

Inside my kernel I have something like the following:

TensorShape my_shape({n, h, w});
Tensor* random_mat;
OP_REQUIRES_OK(ctx, ctx->allocate_output("random_mat", my_shape, &random_mat));
const Device& device = ctx->eigen_device<Device>();  // this will be a GPU functor
TensorRandom<Device, float>()(device, random_mat->flat<float>());
VLOG(1) << "random_mat: " << random_mat->shape().DebugString()
        << random_mat->SummarizeValue(n * h * w);

When I compile this and run my op (which I've defined in the Python API), I get something like this for n, h, w = 2, 4, 2:

random_mat: [2,4,2] [0.93335256 0.53328224 0.18036943 0.12565934 0.93335256 0.53328224 0.042617 0.61869474 0.93335256 0.53328224 0.70387461 0.88239244 0.93335256 0.53328224 0.76217792 0.65087953]

Notice the first value is repeated every 4 elements; the second value is also repeated every 4 values; the rest are seemingly random.

Describe the expected behavior: I would expect the output to be random; instead the first and second values are always repeated every fourth element.

Code to reproduce the issue: see above; the only way I was able to test it was by also creating the Python bindings for the op.

Other info / logs: this issue occurs for all shapes of tensors I have tried. I've also checked that the tensors are indeed aligned. I'm using single-precision floating point numbers. I highly suspect this is a GPU-specific problem. If there is a better way to generate a random tensor, I'd be happy to use that as well. |
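A common cause of periodic repeats in parallel random generation is that each parallel worker (re)seeds its generator identically, so the workers' draws coincide. The sketch below is a loose stdlib analogy of that failure mode, not the actual Eigen/CUDA code path; `draws_with_shared_seed` and the worker layout are hypothetical.

```python
import random

def draws_with_shared_seed(n_workers, per_worker):
    # Each "worker" seeds its own generator with the same constant, the way
    # a per-thread RNG seeded per launch would; every worker then produces
    # the same sequence, yielding values that repeat with period per_worker.
    out = []
    for _ in range(n_workers):
        rng = random.Random(42)  # identical seed in every worker
        out.extend(rng.random() for _ in range(per_worker))
    return out

vals = draws_with_shared_seed(n_workers=4, per_worker=4)
print(vals[0] == vals[4] == vals[8] == vals[12])  # True
```

The fix in such designs is to give each worker a distinct seed or stream offset, which is why the repeats in the report point at how the GPU functor initializes its per-thread state.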
tensorflowtensorflow | warn when using fit_generator | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: fit_generator, predict_generator.

Description of issue (what needs changing, clear description): I have had pain debugging here, so it would be better if the documentation mentioned the fallacy. My experience: I used ctypes (C) to generate training data into a fixed numpy buffer, and the generator merely invokes the C method and returns the same buffer in each iteration, with different content. Obviously this is problematic, as the internal implementation of fit_generator will iterate in advance. The correct usage is to make a copy of the buffer each time the iterator is iterated. The docs should probably mention explicitly that the method does not make deep copies of the generator's return values at generation time, so e.g. it is probably not a good idea to share the same numpy buffer across different batches. If one wants to yield from the same buffer for some reason, one good practice is to make an np.array copy before yielding. |
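The buffer-reuse pitfall described above can be shown with plain Python generators, no framework needed: a consumer that collects batches ahead of time (as a prefetching loop does) sees every yielded reference alias the final state of a reused buffer. The generator names below are hypothetical.

```python
def reusing_generator():
    # Yields the same mutable buffer every time, like returning one
    # fixed numpy buffer that C code overwrites in place.
    buf = [0]
    for i in range(3):
        buf[0] = i
        yield buf

def copying_generator():
    # Snapshots the buffer before handing it out.
    buf = [0]
    for i in range(3):
        buf[0] = i
        yield list(buf)

reused = list(reusing_generator())   # consumer prefetches all batches
copied = list(copying_generator())
print(reused)  # [[2], [2], [2]] -- every batch aliases the final state
print(copied)  # [[0], [1], [2]] -- copies preserve each batch
```

With numpy, the equivalent snapshot is `yield np.array(buf)` (or `buf.copy()`) rather than yielding the shared buffer itself.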
tensorflowtensorflow | cannot load saved model fitted via CSV dataset | Bug | system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes; os platform and distribution (e.g. linux ubuntu 16.04): macOS 10.14.6; mobile device (e.g. iphone 8, pixel 2, samsung galaxy) if the issue happens on a mobile device: ; tensorflow installed from (source or binary): binary; tensorflow version (use command below): 2.0.0; python version: 3.7; bazel version (if compiling from source): ; gcc/compiler version (if compiling from source): ; cuda/cudnn version: ; gpu model and memory: .

Describe the current behavior: I load data from a CSV file to create the dataset for fitting my model. After saving the model I can't load it again; I get the following error message: ValueError: We expected a dictionary here. Instead we got: ...

Describe the expected behavior: the program should load the model.

Code to reproduce the issue (python):

import tensorflow as tf
from tensorflow import keras

dataset = tf.data.experimental.make_csv_dataset('xor.csv', 4, label_name='result')

columns = [tf.feature_column.numeric_column('a'),
           tf.feature_column.numeric_column('b')]
input_column = keras.layers.DenseFeatures(columns)
layers = [input_column,
          keras.layers.Dense(4, activation='relu'),
          keras.layers.Dense(2)]
model = keras.Sequential(layers)
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['binary_accuracy'])
model.fit(dataset, steps_per_epoch=4, epochs=2000)
keras.models.save_model(model, 'test_model.h5')  # model.save('test_model.h5')
model = keras.models.load_model('test_model.h5')

xor.csv:
a,b,result
0,0,0
0,1,1
1,0,1
1,1,0

Other info / logs:

Traceback (most recent call last):
  File "main.py", line 76, in main
    model = keras.models.load_model('test_model.h5')
  File ".../tensorflow_core/python/keras/saving/save.py", line 146, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File ".../tensorflow_core/python/keras/saving/hdf5_format.py", line 168, in load_model_from_hdf5
    custom_objects=custom_objects)
  File ".../tensorflow_core/python/keras/saving/model_config.py", line 55, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File ".../tensorflow_core/python/keras/layers/serialization.py", line 106, in deserialize
    printable_module_name='layer')
  File ".../tensorflow_core/python/keras/utils/generic_utils.py", line 303, in deserialize_keras_object
    list(custom_objects.items())))
  File ".../tensorflow_core/python/keras/engine/sequential.py", line 380, in from_config
    model.build(build_input_shape)
  File ".../tensorflow_core/python/keras/engine/sequential.py", line 260, in build
    super(Sequential, self).build(input_shape)
  File ".../tensorflow_core/python/keras/engine/network.py", line 682, in build
    self.call(x, **kwargs)
  File ".../tensorflow_core/python/keras/engine/sequential.py", line 281, in call
    outputs = layer(inputs, **kwargs)
  File ".../tensorflow_core/python/keras/engine/base_layer.py", line 778, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File ".../tensorflow_core/python/autograph/impl/api.py", line 237, in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in converted code:
    .../tensorflow_core/python/feature_column/dense_features.py:129 call
    ValueError: We expected a dictionary here. Instead we got: ... |
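The ValueError comes from a feature layer that only accepts a dict of named features: during fit the CSV dataset supplies `{'a': ..., 'b': ...}`, but when load_model rebuilds the Sequential model it calls the layer with a plain tensor. A framework-free sketch of that contract (the `dense_features` helper is a hypothetical stand-in, not the real DenseFeatures implementation):

```python
def dense_features(columns, inputs):
    # Like DenseFeatures, accept only a mapping from column name to value.
    if not isinstance(inputs, dict):
        raise ValueError(
            "We expected a dictionary here. Instead we got: %r" % (inputs,))
    return [inputs[name] for name in columns]

columns = ["a", "b"]

# The fit() path: the CSV dataset yields a dict of named features.
print(dense_features(columns, {"a": 0.0, "b": 1.0}))  # [0.0, 1.0]

# The load_model rebuild path: the layer is called with a plain tensor-like
# input, which trips the same error as in the report.
try:
    dense_features(columns, [0.0, 1.0])
except ValueError as e:
    print("fails like the report:", e)
```

This is why the model trains fine from the CSV dataset yet fails to round-trip through save/load.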
tensorflowtensorflow | an op called "int" appears in tensorflow | Bug | system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): the dependency @com_google_ortools//ortools/constraint_solver:cp was introduced in the tensorflow/compiler/xla/service/cpu:cpu_compiler deps; os platform and distribution (e.g. linux ubuntu 16.04): Linux Ubuntu 18.04; tensorflow installed from (source or binary): source; tensorflow version (use command below): 1.14.0; python version: Python 3.6.8; bazel version (if compiling from source): 0.25.1; gcc/compiler version (if compiling from source): gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0; cuda/cudnn version: none; gpu model and memory: none.

Describe the current behavior: I was trying to re-develop XLA. When I introduced the @com_google_ortools//ortools/constraint_solver:cp dependency in the tensorflow/compiler/xla/service/cpu:cpu_compiler deps, the source code compiled and the pip package installed normally, but once any tensorflow interface is called, an op registration error (invalid name "int") is thrown. This is the change to tensorflow/compiler/xla/service/cpu/BUILD (cpu_compiler):

cc_library(
    name = "cpu_compiler",
    srcs = ["cpu_compiler.cc"],
    hdrs = ["cpu_compiler.h"],
    deps = [
        ":compiler_functor",
        ":buffer_info_util",
        ":conv_canonicalization",
        ":cpu_executable",
        ":cpu_hlo_support_checker",
        ":cpu_instruction_fusion",
        ":cpu_layout_assignment",
        ":cpu_options",
        ":disassembler",
        ":dot_op_emitter",
        ":ir_emission_utils",
        ":ir_emitter",
        ":parallel_task_assignment",
        ":simple_orc_jit",
        "@com_google_absl//absl/memory",
        "@com_google_absl//absl/strings",
        ":target_machine_features",
        "@com_google_absl//absl/types:span",
        "//tensorflow/compiler/xla/service:copy_insertion",
        "//tensorflow/compiler/xla/service:hlo_casting_utils",
        "//tensorflow/compiler/xla/service:dump",
        "//tensorflow/compiler/xla/service:map_inliner",
        "//tensorflow/compiler/xla/service:hlo_get_dimension_size_rewriter",
        "//tensorflow/compiler/xla/service:conditional_to_select",
        "//tensorflow/compiler/xla/service:scatter_expander",
        "//tensorflow/compiler/xla/service:slice_sinker",
        "//tensorflow/compiler/xla:cpu_function_runtime",
        "//tensorflow/compiler/xla:literal",
        "//tensorflow/compiler/xla:protobuf_util",
        "//tensorflow/compiler/xla:status_macros",
        "//tensorflow/compiler/xla:statusor",
        "//tensorflow/compiler/xla:types",
        "//tensorflow/compiler/xla:util",
        "//tensorflow/compiler/xla:xla_data_proto",
        "//tensorflow/compiler/xla/service:algebraic_simplifier",
        "//tensorflow/compiler/xla/service:batch_dot_simplification",
        "//tensorflow/compiler/xla/service:batchnorm_expander",
        "//tensorflow/compiler/xla/service:buffer_assignment",
        "//tensorflow/compiler/xla/service:buffer_liveness",
        "//tensorflow/compiler/xla/service:call_inliner",
        "//tensorflow/compiler/xla/service:cholesky_expander",
        "//tensorflow/compiler/xla/service:conditional_simplifier",
        "//tensorflow/compiler/xla/service:convolution_group_converter",
        "//tensorflow/compiler/xla/service:dot_decomposer",
        "//tensorflow/compiler/xla/service:dynamic_index_splitter",
        "//tensorflow/compiler/xla/service:executable",
        "//tensorflow/compiler/xla/service:flatten_call_graph",
        "//tensorflow/compiler/xla/service:hlo",
        "//tensorflow/compiler/xla/service:hlo_constant_folding",
        "//tensorflow/compiler/xla/service:hlo_cse",
        "//tensorflow/compiler/xla/service:hlo_dce",
        "//tensorflow/compiler/xla/service:hlo_element_type_converter",
        "//tensorflow/compiler/xla/service:hlo_ordering",
        "//tensorflow/compiler/xla/service:hlo_pass",
        "//tensorflow/compiler/xla/service:hlo_pass_pipeline",
        "//tensorflow/compiler/xla/service:hlo_proto",
        "//tensorflow/compiler/xla/service:hlo_proto_util",
        "//tensorflow/compiler/xla/service:hlo_memory_scheduler",
        "//tensorflow/compiler/xla/service:hlo_subcomputation_unification",
        "//tensorflow/compiler/xla/service:hlo_verifier",
        "//tensorflow/compiler/xla/service:indexed_array_analysis",
        "//tensorflow/compiler/xla/service:llvm_compiler",
        "//tensorflow/compiler/xla/service:reduce_precision_insertion",
        "//tensorflow/compiler/xla/service:reshape_mover",
        "//tensorflow/compiler/xla/service:rng_expander",
        "//tensorflow/compiler/xla/service:sort_simplifier",
        "//tensorflow/compiler/xla/service:transpose_folding",
        "//tensorflow/compiler/xla/service:triangular_solve_expander",
        "//tensorflow/compiler/xla/service:tuple_simplifier",
        "//tensorflow/compiler/xla/service:while_loop_constant_sinking",
        "//tensorflow/compiler/xla/service:while_loop_invariant_code_motion",
        "//tensorflow/compiler/xla/service:while_loop_simplifier",
        "//tensorflow/compiler/xla/service:zero_sized_hlo_elimination",
        "//tensorflow/compiler/xla/service/llvm_ir:llvm_util",
        "//tensorflow/core:lib",  # fixdeps: keep
        "//tensorflow/core:stream_executor_no_cuda",
        "@llvm//:aarch64_code_gen",  # fixdeps: keep
        "@llvm//:aarch64_disassembler",  # fixdeps: keep
        "@llvm//:arm_code_gen",  # fixdeps: keep
        "@llvm//:arm_disassembler",  # fixdeps: keep
        "@llvm//:core",
        "@llvm//:mc",  # fixdeps: keep
        "@llvm//:object",
        "@llvm//:support",
        "@llvm//:target",  # fixdeps: keep
        "@llvm//:x86_code_gen",  # fixdeps: keep
        "@llvm//:x86_disassembler",  # fixdeps: keep
        "@com_google_ortools//ortools/constraint_solver:cp",
    ] + select({
        "//tensorflow:linux_ppc64le": [
            "@llvm//:powerpc_disassembler",
            "@llvm//:powerpc_code_gen",
        ],
        "//conditions:default": [],
    }),
    alwayslink = True,  # contains compiler registration
)

This is where the introduction of ortools depends on, third_party/ortools/workspace.bzl:

load("//third_party:repo.bzl", "third_party_http_archive")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository", "new_git_repository")

def repo():
    git_repository(
        name = "com_google_protobuf_cc",
        commit = "0974557",  # release v3.8.0
        remote = "...",
    )
    http_archive(
        name = "com_google_ortools",
        urls = ["..."],
        sha256 = "13a4de5dba1f64e2e490394f8f63fe0a301ee55466ef65fe309ffd5100358ea8",
        strip_prefix = "or-tools-7.2",
        build_file = "//third_party/ortools:BUILD.bazel",
        patch_cmds = [
            "find . -name BUILD -print0 | xargs -0 sed -i 's/com_google_protobuf_cc/com_google_protobuf/g'",
        ],
    )
    http_archive(
        name = "com_github_glog_glog",
        urls = ["..."],
        sha256 = "f28359aeba12f30d73d9e4711ef356dc842886968112162bc73002645139c39c",
        strip_prefix = "glog-0.4.0",
        build_file = "com_github_glog_glog.BUILD",
        patch_cmds = [
            "mkdir glog_internal",
            "mkdir build && cd build && cmake ..",
            "cp config.h glog_internal",
            "cp config.h src",
            "sed -i 's/config.h/src config.h/g' bazel/glog.bzl",
        ],
    )

This is the running environment and error information after installing the tensorflow source code:

Python 3.6.8 (default, Oct 7 2019, 12:59:55) [GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
/home/ubuntu/virtualenvs/run/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)])
(the same FutureWarning repeats for quint8, qint16, quint16, qint32 and np_resource at dtypes.py:517-525, and again from tensorboard/compat/tensorflow_stub/dtypes.py:541-550.)
>>> tf.Graph()
<tf.Graph>
2019-11-08 09:13:06.526896: F tensorflow/core/framework/op.cc:200] Non-OK-status: RegisterAlreadyLocked(deferred_[i]) status: Invalid argument: Invalid name: "int" (Did you use CamelCase?); in OpDef: name: "int" input_arg { name: "int" description: "int" type: DT_FLOAT type_attr: "int" number_attr: "int" type_list_attr: "int" } input_arg { name: "int" description: "int" type: DT_FLOAT type_attr: "int" number_attr: "int" type_list_attr: "int" } attr { name: "int" type: "int" default_value { i: 1 } description: "int" has_minimum: true minimum: 1 } attr { name: "int" type: "int" default_value { s: "" } description: "int" } attr { name: "int" type: "int" description: "int" } attr { name: "int" type: "int" description: "int" } summary: "int" description: "int" is_stateful: true
Aborted (core dumped) |
tensorflowtensorflow | can't find tensorflow.examples.tutorials | Bug | system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): none; os platform and distribution (e.g. linux ubuntu 16.04): Ubuntu 18.04.3 LTS; mobile device (e.g. iphone 8, pixel 2, samsung galaxy) if the issue happens on a mobile device: none; tensorflow installed from (source or binary): source; tensorflow version (use command below): v1.12.1-17556-g63c45aacf3 2.0.0; python version: Python 3.7.4; bazel version (if compiling from source): 0.27.1; gcc/compiler version (if compiling from source): 7.4.0; cuda/cudnn version: v10.1.243; gpu model and memory: GP107GL Quadro P1000, 4031 MiB.

Describe the current behavior:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorflow.examples'

As the error suggests, it can't find the module. Am I missing something while generating the wheel package from bazel?

Describe the expected behavior: it is there in the source code repo, hence it should work. No monkey fixes, please: I have tried locally copying the directory, but then some other tensorflow modules are missing (tensorflow.contrib, and so on).

Code to reproduce the issue:

python -c "from tensorflow.examples.tutorials import input_data"

Other info / logs:

bazel build --verbose_failures --config=monolithic //tensorflow/tools/pip_package:build_pip_package

(the command I used to create the wheel package from tensorflow source) |
tensorflowtensorflow | can't apply map on dataset | Bug | system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes; os platform and distribution (e.g. linux ubuntu 16.04): Manjaro; tensorflow installed from (source or binary): pip repository (binary); tensorflow version (use command below): v2.0.0-rc2-26-g64c3d38 2.0.0; python version: 3.7.4; cuda/cudnn version: n/a (CPU); gpu model and memory: n/a (CPU).

Describe the current behavior: trying to apply a function to any dataset, created either from from_tensor_slices or from_generator, returns the following error:

TypeError: Failed to convert object of type <class ...> to Tensor. Contents: <VariantDataset shapes: (2,), types: tf.int64>. Consider casting elements to a supported type.

Describe the expected behavior: the operation should return no error.

Code to reproduce the issue (python):

import numpy as np
import tensorflow as tf

@tf.function
def expand(x):
    return tf.expand_dims(x, axis=2)

m = np.array([[1, 1], [1, 1]])
ds = tf.data.Dataset.from_tensor_slices(m)
ds = ds.apply(expand)

Note: trying to apply any other function returns the same error.

Other info / logs: full stack trace:

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
      9
     10 ds = tf.data.Dataset.from_tensor_slices(m)
---> 11 ds = ds.apply(expand)

.../tensorflow_core/python/data/ops/dataset_ops.py in apply(self, transformation_func)
   1367       dataset.
   1368     """
-> 1369     dataset = transformation_func(self)
   1370     if not isinstance(dataset, DatasetV2):
   1371       raise TypeError(

.../tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
    455
    456     tracing_count = self._get_tracing_count()
--> 457     result = self._call(*args, **kwds)
    458     if tracing_count == self._get_tracing_count():
    459       self._call_counter.called_without_tracing()

.../tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
    501       # This is the first call of __call__, so we have to initialize.
    502       initializer_map = object_identity.ObjectIdentityDictionary()
--> 503       self._initialize(args, kwds, add_initializers_to=initializer_map)
    504     finally:
    505       # At this point we know
that the initialization be complete or less usr lib python3 7 site package tensorflow core python eager def function py in initialize self args kwd add initializer to 406 self concrete stateful fn 407 self stateful fn get concrete function internal garbage collect pylint disable protect access 408 args kwd 409 410 def invalid creator scope unused args unused kwd usr lib python3 7 site package tensorflow core python eager function py in get concrete function internal garbage collect self args kwargs 1846 if self input signature 1847 args kwargs none none 1848 graph function self maybe define function args kwargs 1849 return graph function 1850 usr lib python3 7 site package tensorflow core python eager function py in maybe define function self args kwargs 2148 graph function self function cache primary get cache key none 2149 if graph function be none 2150 graph function self create graph function args kwargs 2151 self function cache primary cache key graph function 2152 return graph function args kwargs usr lib python3 7 site package tensorflow core python eager function py in create graph function self args kwargs override flat arg shape 2039 arg name arg name 2040 override flat arg shape override flat arg shape 2041 capture by value self capture by value 2042 self function attribute 2043 tell the concretefunction to clean up its graph once it go out of usr lib python3 7 site package tensorflow core python framework func graph py in func graph from py func name python func args kwargs signature func graph autograph autograph option add control dependency arg name op return value collection capture by value override flat arg shape 913 convert func 914 915 func output python func func args func kwargs 916 917 invariant func output contain only tensor compositetensor usr lib python3 7 site package tensorflow core python eager def function py in wrap fn args kwd 356 wrap allow autograph to swap in a converted function we give 357 the function a weak reference to 
itself to avoid a reference cycle 358 return weak wrap fn wrap args kwd 359 weak wrap fn weakref ref wrap fn 360 usr lib python3 7 site package tensorflow core python framework func graph py in wrapper args kwargs 903 except exception as e pylint disable broad except 904 if hasattr e ag error metadata 905 raise e ag error metadata to exception e 906 else 907 raise typeerror in convert code 6 expand return tf expand dim x axis 2 usr lib python3 7 site package tensorflow core python util dispatch py 180 wrapper return target args kwargs usr lib python3 7 site package tensorflow core python op array op py 325 expand dim v2 return gen array op expand dim input axis name usr lib python3 7 site package tensorflow core python ops gen array op py 2465 expand dim expanddim input input dim axis name name usr lib python3 7 site package tensorflow core python framework op def library py 530 apply op helper raise err usr lib python3 7 site package tensorflow core python framework op def library py 527 apply op helper prefer dtype default dtype usr lib python3 7 site package tensorflow core python framework op py 1296 internal convert to tensor ret conversion func value dtype dtype name name as ref as ref usr lib python3 7 site package tensorflow core python framework constant op py 286 constant tensor conversion function return constant v dtype dtype name name usr lib python3 7 site package tensorflow core python framework constant op py 227 constant allow broadcast true usr lib python3 7 site package tensorflow core python framework constant op py 265 constant impl allow broadcast allow broadcast usr lib python3 7 site package tensorflow core python framework tensor util py 545 make tensor proto support type type value value typeerror fail to convert object of type to tensor content variantdataset shape 2 type tf int64 consider cast element to a support type |
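The error above comes from passing an element-wise function to apply: Dataset.map expects a function on elements, while Dataset.apply expects a transformation that takes and returns a whole dataset. A framework-free sketch of that distinction (the `Pipeline` class is hypothetical, not the tf.data implementation):

```python
class Pipeline:
    def __init__(self, items):
        self.items = list(items)

    def map(self, fn):
        # fn operates on each element.
        return Pipeline(fn(x) for x in self.items)

    def apply(self, transform):
        # transform operates on the whole pipeline and must return one.
        result = transform(self)
        if not isinstance(result, Pipeline):
            raise TypeError(
                "transformation must return a Pipeline, got %r" % (result,))
        return result

expand = lambda x: [x]   # element-wise, like tf.expand_dims

ds = Pipeline([1, 1])
print(ds.map(expand).items)  # [[1], [1]] -- the correct, element-wise usage
try:
    ds.apply(expand)         # wrong: expand receives the whole Pipeline
except TypeError as e:
    print("fails like the report:", e)
```

In the report, `ds.apply(expand)` hands the entire dataset to `tf.expand_dims`, which cannot convert a dataset to a tensor; `ds.map(expand)` is the intended call.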
tensorflowtensorflow | bug: wrong device placement for tf.constant with int32 | Bug | Colab example (remember to use a GPU instance). When dtype is int32, tf.constant doesn't place the tensor on the GPU but on the CPU:

with tf.device('/gpu:0'):
    a = tf.constant([0, 1], dtype=tf.float32)
print(a.device)
# /job:localhost/replica:0/task:0/device:GPU:0

with tf.device('/gpu:0'):
    b = tf.constant([0, 1], dtype=tf.int32)
print(b.device)
# /job:localhost/replica:0/task:0/device:CPU:0

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below): 2.0.0
- Python version:
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory: |
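The behavior in the report matches TensorFlow's long-standing placement heuristic: small int32 tensors are kept in host (CPU) memory even under a GPU device scope, because they typically feed shape and index computations executed by the host. A toy stand-in that reproduces only the observable outcome — this is NOT TensorFlow's actual placer, and the function name is invented for illustration:

```python
# Hypothetical mimic of the observed placement, NOT TensorFlow's real placer.
# Assumption: small int32 tensors are deliberately kept in host memory.
def placed_device(requested_device: str, dtype: str) -> str:
    if dtype == "int32":
        return "/job:localhost/replica:0/task:0/device:CPU:0"
    gpu_index = requested_device.rsplit(":", 1)[-1]
    return f"/job:localhost/replica:0/task:0/device:GPU:{gpu_index}"

print(placed_device("/gpu:0", "float32"))  # ends with device:GPU:0
print(placed_device("/gpu:0", "int32"))    # ends with device:CPU:0
```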
tensorflowtensorflow | tf.distribute.MirroredStrategy crashes | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Progress Linux 5 (engywuck-backports), Linux Debian Buster
- TensorFlow installed from (source or binary): binary (problem happens from source as well)
- TensorFlow version (use command below): v1.12.1-16854-g6778662 2.1.0-dev20191028
- Python version: Python 3.7.3
- CUDA/cuDNN version: 10.0 / 7.0
- GPU model and memory: 2x ASUS GeForce RTX 2080 Ti (compute capability 7.5), no NVLink

Describe the current behavior:
Since we are not allowed to share our data, I tried to reproduce our problem with a dataset from tensorflow_datasets. The current code might not make much sense, but I was able to deliver reproducible code with it. Training on only one GPU works without a problem; with tf.distribute.MirroredStrategy it crashes (see dump). What we already tried: building TensorFlow from source, building TensorFlow from source against CUDA 10.1, using TensorFlow via pip (tensorflow-gpu).

Describe the expected behavior:
tf.distribute.MirroredStrategy should lead to similar results as training on one GPU only.

Code to reproduce the issue:
I tried to reproduce the problem using Google Colab, but since only one GPU is provided it is not really reproducible there. I tried it with two virtual GPUs, but it didn't lead to behavior similar to our problem. On my setup I used the following code:

# coding=utf-8
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50
import tensorflow as tf
import tensorflow_datasets as tfds

length_dataset = 17509
num_classes = 9
img_shape = (256, 256, 3)
batch_size = 32

def mymap_func(features):
    return features["image"], features["label"]

autotune = tf.data.experimental.AUTOTUNE

# create input pipeline
dataset = tfds.load(name="deep_weeds", split="train")
dataset = dataset.map(mymap_func, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.cache()
dataset = dataset.shuffle(buffer_size=length_dataset, seed=42, reshuffle_each_iteration=True)
dataset = dataset.batch(batch_size=batch_size, drop_remainder=True).repeat()
dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

# create model
img_width, img_height = 270, 270
shape, classes = (img_width, img_height, 1), 3

strategy = tf.distribute.MirroredStrategy()
print("Number of devices in strategy: {}".format(strategy.num_replicas_in_sync))
with strategy.scope():
    model = ResNet50(include_top=True, weights=None, input_tensor=None,
                     input_shape=img_shape, pooling=None, classes=num_classes)
    model.compile(optimizer=tf.optimizers.Adam(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

train_steps = np.ceil(length_dataset / batch_size)
history = model.fit(x=dataset, epochs=10, verbose=1,
                    steps_per_epoch=train_steps,
                    use_multiprocessing=False, workers=8)

Other info / logs — dump of the above script (python src/test_multi_gpu_training_colab.py): 2019 11 07 10 44 52 905250 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcuda so 1 2019 11 07 10 44 52 950697 I tensorflow core common runtime gpu gpu device cc 1546 find device 0 with property name geforce rtx 2080 ti major 7 minor 5 memoryclockrate ghz 1 545 pcibusid 0000 86 00 0 2019 11 07 10 44 52 951317 I tensorflow core common runtime gpu gpu device cc 1546 find device 1 with property name geforce rtx 2080 ti major 7 minor 5 memoryclockrate ghz 1 545 pcibusid 0000 af 00 0 2019 11 07 10 44 52 951554 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 0 2019 11 07 10 44 52 952809 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 0 2019 11 07 10 44 52 953947 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 0 2019 11 07 10 44 52 954263 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 0 2019 11 07 10 44 52 955764 I tensorflow stream executor platform default dso loader
cc 44 successfully open dynamic library libcusolver so 10 0 2019 11 07 10 44 52 956986 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 0 2019 11 07 10 44 52 960430 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2019 11 07 10 44 52 962770 I tensorflow core common runtime gpu gpu device cc 1674 add visible gpu device 0 1 2019 11 07 10 44 52 963103 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 avx512f fma 2019 11 07 10 44 53 001056 I tensorflow core platform profile util cpu util cc 94 cpu frequency 2100000000 hz 2019 11 07 10 44 53 008873 I tensorflow compiler xla service service cc 168 xla service 0x5460110 initialize for platform host this do not guarantee that xla will be use device 2019 11 07 10 44 53 008905 I tensorflow compiler xla service service cc 176 streamexecutor device 0 host default version 2019 11 07 10 44 53 233141 I tensorflow compiler xla service service cc 168 xla service 0x5521500 initialize for platform cuda this do not guarantee that xla will be use device 2019 11 07 10 44 53 233177 I tensorflow compiler xla service service cc 176 streamexecutor device 0 geforce rtx 2080 ti compute capability 7 5 2019 11 07 10 44 53 233185 I tensorflow compiler xla service service cc 176 streamexecutor device 1 geforce rtx 2080 ti compute capability 7 5 2019 11 07 10 44 53 234101 I tensorflow core common runtime gpu gpu device cc 1546 find device 0 with property name geforce rtx 2080 ti major 7 minor 5 memoryclockrate ghz 1 545 pcibusid 0000 86 00 0 2019 11 07 10 44 53 234646 I tensorflow core common runtime gpu gpu device cc 1546 find device 1 with property name geforce rtx 2080 ti major 7 minor 5 memoryclockrate ghz 1 545 pcibusid 0000 af 00 0 2019 11 07 10 44 53 234685 I tensorflow stream executor platform default dso loader cc 44 
successfully open dynamic library libcudart so 10 0 2019 11 07 10 44 53 234699 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 0 2019 11 07 10 44 53 234711 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 0 2019 11 07 10 44 53 234723 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 0 2019 11 07 10 44 53 234736 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 0 2019 11 07 10 44 53 234748 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 0 2019 11 07 10 44 53 234760 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2019 11 07 10 44 53 237620 I tensorflow core common runtime gpu gpu device cc 1674 add visible gpu device 0 1 2019 11 07 10 44 53 237669 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 0 2019 11 07 10 44 53 239881 I tensorflow core common runtime gpu gpu device cc 1087 device interconnect streamexecutor with strength 1 edge matrix 2019 11 07 10 44 53 239900 I tensorflow core common runtime gpu gpu device cc 1093 0 1 2019 11 07 10 44 53 239912 I tensorflow core common runtime gpu gpu device cc 1106 0 n n 2019 11 07 10 44 53 239922 I tensorflow core common runtime gpu gpu device cc 1106 1 n n 2019 11 07 10 44 53 242394 I tensorflow core common runtime gpu gpu device cc 1232 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 10312 mb memory physical gpu device 0 name geforce rtx 2080 ti pci bus i d 0000 86 00 0 compute capability 7 5 2019 11 07 10 44 53 243757 I tensorflow core common runtime gpu gpu device cc 1232 create tensorflow device job localhost replica 0 task 0 device gpu 1 with 10312 mb 
memory physical gpu device 1 name geforce rtx 2080 ti pci bus i d 0000 af 00 0 compute capability 7 5 number of device in strategy 2 train for 548 0 step epoch 1 10 2019 11 07 10 45 11 764680 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 0 2019 11 07 10 45 13 747993 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2019 11 07 10 45 15 307911 e tensorflow stream executor cuda cuda driver cc 948 fail to synchronize the stop event cuda error illegal instruction an illegal instruction be encounter 2019 11 07 10 45 15 307949 e tensorflow stream executor gpu gpu timer cc 55 internal error destroy cuda event cuda error illegal instruction an illegal instruction be encounter 2019 11 07 10 45 15 307957 e tensorflow stream executor gpu gpu timer cc 60 internal error destroy cuda event cuda error illegal instruction an illegal instruction be encounter 2019 11 07 10 45 15 307992 I tensorflow stream executor cuda cuda driver cc 801 fail to allocate 8b 8 byte from device cuda error illegal instruction an illegal instruction be encounter 2019 11 07 10 45 15 308001 e tensorflow stream executor stream cc 5452 internal fail to enqueue async memset operation cuda error illegal instruction an illegal instruction be encounter 2019 11 07 10 45 15 308017 w tensorflow core kernel gpu util cc 68 fail to check cudnn convolution for out of bound read and write with an error message fail to load in memory cubin cuda error illegal instruction an illegal instruction be encounter skip this check this only mean that we win t check cudnn for out of bound read and write this message will only be print once 2019 11 07 10 45 15 308032 I tensorflow stream executor cuda cuda driver cc 801 fail to allocate 8b 8 byte from device cuda error illegal instruction an illegal instruction be encounter 2019 11 07 10 45 15 308044 I tensorflow stream executor stream cc 4963 stream 
0x62f2a40 impl 0x62f1230 do not memzero gpu location source 0x7fcf977fbfd0 2019 11 07 10 45 15 308500 w tensorflow core common runtime base collective executor cc 217 basecollectiveexecutor startabort internal cudnn launch failure input shape 16 3 262 262 filter shape 7 7 3 64 node replica 1 resnet50 conv1 conv conv2d loss mul 10 2019 11 07 10 45 15 308571 w tensorflow core common runtime base collective executor cc 217 basecollectiveexecutor startabort internal cudnn launch failure input shape 16 3 262 262 filter shape 7 7 3 64 node replica 1 resnet50 conv1 conv conv2d metric accuracy div no nan addn 1 32 2019 11 07 10 45 15 308780 w tensorflow core common runtime base collective executor cc 217 basecollectiveexecutor startabort internal cudnn launch failure input shape 16 3 262 262 filter shape 7 7 3 64 node replica 1 resnet50 conv1 conv conv2d 2019 11 07 10 45 15 332853 e tensorflow stream executor cuda cuda dnn cc 329 could not create cudnn handle cudnn status internal error 2019 11 07 10 45 15 333219 e tensorflow stream executor cuda cuda dnn cc 329 could not create cudnn handle cudnn status internal error 1 548 eta 2 39 09traceback most recent call last file src test multi gpu training colab py line 81 in worker 8 file home sam2 workspace python venvs tf 2 lib python3 7 site package tensorflow core python keras engine training py line 778 in fit use multiprocesse use multiprocesse file home sam2 workspace python venvs tf 2 lib python3 7 site package tensorflow core python keras engine training v2 py line 338 in fit total epoch epoch file home sam2 workspace python venvs tf 2 lib python3 7 site package tensorflow core python keras engine training v2 py line 128 in run one epoch batch out execution function iterator file home sam2 workspace python venvs tf 2 lib python3 7 site package tensorflow core python keras engine training v2 util py line 86 in execution function distribute function input fn file home sam2 workspace python venvs tf 2 lib python3 7 site 
package tensorflow core python eager def function py line 568 in call result self call args kwd file home sam2 workspace python venvs tf 2 lib python3 7 site package tensorflow core python eager def function py line 632 in call return self stateless fn args kwd file home sam2 workspace python venvs tf 2 lib python3 7 site package tensorflow core python eager function py line 2339 in call return graph function filter call args kwargs pylint disable protect access file home sam2 workspace python venvs tf 2 lib python3 7 site package tensorflow core python eager function py line 1589 in filter call self capture input file home sam2 workspace python venvs tf 2 lib python3 7 site package tensorflow core python eager function py line 1670 in call flat ctx args cancellation manager cancellation manager file home sam2 workspace python venvs tf 2 lib python3 7 site package tensorflow core python eager function py line 521 in call ctx ctx file home sam2 workspace python venvs tf 2 lib python3 7 site package tensorflow core python eager execute py line 67 in quick execute six raise from core status to exception e code message none file line 3 in raise from tensorflow python framework error impl internalerror 2 root error s find 0 internal cudnn launch failure input shape 16 3 262 262 filter shape 7 7 3 64 node replica 1 resnet50 conv1 conv conv2d define at usr lib python3 7 thread py 917 loss mul 10 1 internal cudnn launch failure input shape 16 3 262 262 filter shape 7 7 3 64 node replica 1 resnet50 conv1 conv conv2d define at usr lib python3 7 thread py 917 0 successful operation 1 derive error ignore op inference distribute function 36243 function call stack distribute function distribute function 2019 11 07 10 45 15 777040 I tensorflow stream executor stream cc 1990 stream 0x5ab9030 impl 0x62f1330 do not wait for stream 0x62f2a40 impl 0x62f1230 2019 11 07 10 45 15 777095 I tensorflow stream executor stream cc 4938 stream 0x5ab9030 impl 0x62f1330 do not memcpy host to 
device source 0x7fcf8007e000
2019-11-07 10:45:15.777129: E tensorflow/stream_executor/stream.cc:332] Error recording event in stream: error recording CUDA event: CUDA_ERROR_ILLEGAL_INSTRUCTION: an illegal instruction was encountered; not marking stream as bad, as the Event object may be at fault. Monitoring for further errors.
2019-11-07 10:45:15.777161: E tensorflow/stream_executor/cuda/cuda_event.cc:29] Error polling for event status: failed to query event: CUDA_ERROR_ILLEGAL_INSTRUCTION: an illegal instruction was encountered
2019-11-07 10:45:15.777181: F tensorflow/core/common_runtime/gpu/gpu_event_mgr.cc:273] Unexpected Event status: 1

The device interconnect matrix seems a bit odd, but we don't know if that's an issue for a distributed strategy:

2019-11-07 10:44:53.239881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1087] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-07 10:44:53.239900: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1093]      0 1
2019-11-07 10:44:53.239912: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1106] 0:   N N
2019-11-07 10:44:53.239922: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1106] 1:   N N

It seems the low-level drivers work fine; see the dumps of nvidia-smi, nvidia-smi topo, CUDA deviceQuery and the NCCL all-reduce test: nvidia-smi-topo.txt, nvidia-smi.txt, nccl-allreduce.txt, cuda-device-query.txt |
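The numbers in the failing log are internally consistent with the script in the report: the crashing conv sees input shape 16x3x262x262, i.e. the global batch of 32 split across 2 replicas, and the 548 training steps are ceil(17509/32). A plain-Python consistency check (my own arithmetic, not part of the report; no TensorFlow needed):

```python
import math

# Numbers taken from the report above: 17509 samples, global batch 32, 2 GPUs.
length_dataset, batch_size, num_replicas = 17509, 32, 2

train_steps = math.ceil(length_dataset / batch_size)
per_replica_batch = batch_size // num_replicas
stem_input = 256 + 2 * 3  # ResNet50's 7x7 stem zero-pads 3 pixels per side

print(train_steps)        # 548 -- matches "train for 548 steps" in the log
print(per_replica_batch)  # 16  -- leading dim of the failing conv input
print(stem_input)         # 262 -- spatial size in the 16x3x262x262 error
```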
tensorflowtensorflow | TF 2.0 distributed training fails after adding an Embedding layer | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): somewhat
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38 2.0.0
- Python version: 3.7.4
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior:
Trying to train, with MultiWorkerMirroredStrategy, a simple model with and without an Embedding layer. In the former case the distributed training fails (see below). The following code is executed on two workers, with TF_CONFIG cluster worker localhost:65535, localhost:65532, task index 0, type worker, and TF_CONFIG cluster worker localhost:65535, localhost:65532, task index 1, type worker, respectively.

Code to reproduce the issue:

import tensorflow as tf
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import losses
import numpy as np

batch_size = 32
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
num_workers = strategy.num_replicas_in_sync
global_batch_size = batch_size * num_workers
n_cat = 47

def some_func(*args: tf.Tensor):
    tensor_dict_x = {}
    tensor_dict_y = {}
    for index in range(1):
        tensor_dict_x[f"input_{index + 1}"] = tf.expand_dims(args[index], axis=-1)
        tensor_dict_y[f"dense"] = tf.expand_dims(args[index], axis=-1)
    return tensor_dict_x, tensor_dict_y

def read_data():
    train_data = np.random.randint(1, n_cat, size=9000)
    val_data = np.random.randint(1, n_cat, size=999)
    dataset_train = tf.data.Dataset.from_tensor_slices(train_data).prefetch(1) \
        .map(map_func=some_func).batch(batch_size=global_batch_size).shuffle(1000).repeat()
    dataset_val = tf.data.Dataset.from_tensor_slices(val_data).prefetch(1) \
        .map(map_func=some_func).batch(batch_size=global_batch_size).shuffle(1000).repeat()
    return dataset_train, dataset_val

with strategy.scope():
    optimizer = Adam(lr=0.1)
    loss = losses.sparse_categorical_crossentropy
    model = build_and_compile_model(optimizer, loss)
    dataset_train, dataset_val = read_data()
    model.fit(x=dataset_train, epochs=5, steps_per_epoch=9000 / batch_size,
              validation_data=dataset_val, validation_steps=999 / batch_size)

where build_and_compile_model is:

def build_and_compile_model(optimizer, loss):
    my_input = tf.keras.layers.Input(shape=(1,))
    my_dense = tf.keras.layers.Dense(n_cat)(my_input)
    model = tf.keras.Model(my_input, my_dense)
    model.compile(optimizer=optimizer, loss=loss)
    return model

The above case works fine, with the output: 2019 11 07 09 23 57 387629 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2019 11 07 09 23 57 410325 I tensorflow core platform profile util cpu util cc 94 cpu frequency 2208000000 hz 2019 11 07 09 23 57 411228 I tensorflow compiler xla service service cc 168 xla service 0x562e2c78c6a0 execute computation on platform host device 2019 11 07 09 23 57 411241 I tensorflow compiler xla service service cc 175 streamexecutor device 0 host default version 2019 11 07 09 23 57 413306 I tensorflow core distribute runtime rpc grpc channel cc 258 initialize grpcchannelcache for job worker 0 localhost 65535 1 localhost 65532 2019 11 07 09 23 57 414734 I tensorflow core distribute runtime rpc grpc server lib cc 365 start server with target grpc localhost 65535 warn tensorflow eval fn be not pass in the worker fn will be use if an evaluator task exist in the cluster warn tensorflow eval strategy be not pass in no distribution strategy will be use for evaluation warn tensorflow modelcheckpoint callback be not provide worker will need to restart training if any fail 2019 11 07 09 24 09 204390 w tensorflow core grappler optimizer data
auto shard cc 400 can not find shardable dataset add a shard node at the end of the dataset instead this may have performance implication train for 281 step validate for 31 step epoch 1 5 2019 11 07 09 24 09 208710 w tensorflow core grappler optimizer data auto shard cc 400 can not find shardable dataset add a shard node at the end of the dataset instead this may have performance implication 281 281 4s 13ms step loss 15 9838 val loss 16 1050 epoch 2 5 281 281 1s 3ms step loss 16 0496 val loss 16 1015 epoch 3 5 281 281 1s 3ms step loss 16 0210 val loss 16 1007 epoch 4 5 281 281 1s 3ms step loss 16 0517 val loss 16 0995 epoch 5 5 281 281 1s 3ms step loss 16 0147 val loss 16 0992 2019 11 07 09 24 16 090800 w tensorflow core common runtime eager context cc 290 unable to destroy server object so release instead server don t support clean shutdown process finish with exit code 0

However, adding just one Embedding layer, so that build_and_compile_model now is:

def build_and_compile_model(optimizer, loss):
    my_input = tf.keras.layers.Input(shape=(1,))
    emb_layer = tf.keras.layers.Embedding(n_cat, 5)
    emb_inp = emb_layer(my_input)
    my_dense = tf.keras.layers.Dense(n_cat)(emb_inp)
    model = tf.keras.Model(my_input, my_dense)
    model.compile(optimizer=optimizer, loss=loss)
    return model

leads to the error: 2019 11 07 09 26 45 812362 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2019 11 07 09 26 45 834388 I tensorflow core platform profile util cpu util cc 94 cpu frequency 2208000000 hz 2019 11 07 09 26 45 835018 I tensorflow compiler xla service service cc 168 xla service 0x55ea66430da0 execute computation on platform host device 2019 11 07 09 26 45 835032 I tensorflow compiler xla service service cc 175 streamexecutor device 0 host default version 2019 11 07 09 26 45 836899 I tensorflow core distribute runtime rpc grpc channel cc 258 initialize grpcchannelcache for job worker 0 localhost 65535 1 localhost 65532 2019 11 07 09 26 45
838638 I tensorflow core distribute runtime rpc grpc server lib cc 365 start server with target grpc localhost 65535 warn tensorflow eval fn be not pass in the worker fn will be use if an evaluator task exist in the cluster warn tensorflow eval strategy be not pass in no distribution strategy will be use for evaluation warn tensorflow modelcheckpoint callback be not provide worker will need to restart training if any fail 2019 11 07 09 26 50 351284 w tensorflow core grappler optimizer data auto shard cc 400 can not find shardable dataset add a shard node at the end of the dataset instead this may have performance implication train for 281 step validate for 31 step epoch 1 5 2019 11 07 09 26 50 355581 w tensorflow core grappler optimizer data auto shard cc 400 can not find shardable dataset add a shard node at the end of the dataset instead this may have performance implication 1 281 eta 9 05 loss 9 83752019 11 07 09 26 52 321935 e tensorflow core common runtime ring alg cc 279 aborting ringgather with internal derive inconsistent output shape get 16 but expect be 64 node adam allreduce 1 collectivegather 1 additional grpc error information create 1573115212 321897522 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message derive inconsistent output shape get 16 but expect be 64 n t node adam allreduce 1 collectivegather 1 grpc status 13 2019 11 07 09 26 52 321956 w tensorflow core common runtime base collective executor cc 216 basecollectiveexecutor startabort internal derive inconsistent output shape get 16 but expect be 64 node adam allreduce 1 collectivegather 1 additional grpc error information create 1573115212 321897522 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message derive inconsistent output shape get 16 but expect be 64 n t node adam allreduce 1 collectivegather 1 grpc status 13 2019 11 07 09 26 52 321966 e tensorflow core common runtime 
ring alg cc 279 abort ringreduce with internal derive inconsistent output shape get 16 but expect be 64 node adam allreduce 1 collectivegather 1 additional grpc error information create 1573115212 321897522 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message derive inconsistent output shape get 16 but expect be 64 n t node adam allreduce 1 collectivegather 1 grpc status 13 2019 11 07 09 26 52 321971 w tensorflow core common runtime base collective executor cc 216 basecollectiveexecutor startabort internal derive inconsistent output shape get 16 but expect be 64 node adam allreduce 1 collectivegather 1 additional grpc error information create 1573115212 321897522 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message derive inconsistent output shape get 16 but expect be 64 n t node adam allreduce 1 collectivegather 1 grpc status 13 2019 11 07 09 26 52 321976 e tensorflow core common runtime ring alg cc 279 aborting ringgather with internal derive inconsistent output shape get 16 but expect be 64 node adam allreduce 1 collectivegather 1 additional grpc error information create 1573115212 321897522 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message derive inconsistent output shape get 16 but expect be 64 n t node adam allreduce 1 collectivegather 1 grpc status 13 2019 11 07 09 26 52 321996 w tensorflow core common runtime base collective executor cc 216 basecollectiveexecutor startabort internal derive inconsistent output shape get 16 but expect be 64 node adam allreduce 1 collectivegather 1 additional grpc error information create 1573115212 321897522 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message derive inconsistent output shape get 16 but expect be 64 n t node adam allreduce 1 collectivegather 1 grpc status 13 2019 11 07 09 26 
52 322010 e tensorflow core common runtime ring alg cc 279 abort ringreduce with internal derive inconsistent output shape get 16 but expect be 64 node adam allreduce 1 collectivegather 1 additional grpc error information create 1573115212 321897522 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message derive inconsistent output shape get 16 but expect be 64 n t node adam allreduce 1 collectivegather 1 grpc status 13 2019 11 07 09 26 52 322017 w tensorflow core common runtime base collective executor cc 216 basecollectiveexecutor startabort internal derive inconsistent output shape get 16 but expect be 64 node adam allreduce 1 collectivegather 1 additional grpc error information create 1573115212 321897522 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message derive inconsistent output shape get 16 but expect be 64 n t node adam allreduce 1 collectivegather 1 grpc status 13 2019 11 07 09 26 52 322045 e tensorflow core common runtime ring alg cc 279 abort ringreduce with cancel derive cancel additional grpc error information create 1573115212 321980682 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message cancel grpc status 1 2019 11 07 09 26 52 322051 w tensorflow core common runtime base collective executor cc 216 basecollectiveexecutor startabort cancel derive cancel additional grpc error information create 1573115212 321980682 description error receive from peer file external grpc src core lib surface call cc file line 1039 grpc message cancel grpc status 1 2019 11 07 09 26 52 322053 w tensorflow core framework op kernel cc 1622 op require fail at collective op cc 125 internal derive inconsistent output shape get 16 but expect be 64 node adam allreduce 1 collectivegather 1 additional grpc error information create 1573115212 321897522 description error receive from peer file external grpc src 
core/lib/surface/call.cc","file_line":1039,"grpc_message":"Derived inconsistent output shapes, got [16], but expected to be [64]\n\t [[node Adam/allreduce_1/CollectiveGather_1]]","grpc_status":13}
2019-11-07 09:26:52.322054: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at collective_ops.cc:234 : Internal: Derived inconsistent output shapes, got [16], but expected to be [64]
	 [[node Adam/allreduce_1/CollectiveGather_1]]
Additional GRPC error information:
{"created":"@1573115212.321897522","description":"Error received from peer","file":"external/grpc/src/core/lib/surface/call.cc","file_line":1039,"grpc_message":"Derived inconsistent output shapes, got [16], but expected to be [64]\n\t [[node Adam/allreduce_1/CollectiveGather_1]]","grpc_status":13}
2019-11-07 09:26:52.322101: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Internal: Derived inconsistent output shapes, got [16], but expected to be [64]
	 [[node Adam/allreduce_1/CollectiveGather_1]]
2019-11-07 09:26:52.322097: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at collective_ops.cc:125 : Internal: Derived inconsistent output shapes, got [16], but expected to be [64]
	 [[node Adam/allreduce_1/CollectiveGather_1]]
2019-11-07 09:26:52.322123: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at collective_ops.cc:234 : Cancelled: Derived: Cancelled
Additional GRPC error information:
{"created":"@1573115212.321980682","description":"Error received from peer","file":"external/grpc/src/core/lib/surface/call.cc","file_line":1039,"grpc_message":"Cancelled","grpc_status":1}
2019-11-07 09:26:52.322108: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at collective_ops.cc:234 : Internal: Derived inconsistent output shapes, got [16], but expected to be [64]
	 [[node Adam/allreduce_1/CollectiveGather_1]]

Traceback (most recent call last):
  File ".PyCharmCE2019.2/config/scratches/scratch_4.py", line 71, in <module>
    validation_steps=999 // batch_size)
  File ".../lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 728, in fit
    use_multiprocessing=use_multiprocessing)
  File ".../lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_distributed.py", line 789, in fit
    *args, **kwargs)
  File ".../lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_distributed.py", line 776, in wrapper
    mode=dc.CoordinatorMode.INDEPENDENT_WORKER,
  File ".../lib/python3.7/site-packages/tensorflow_core/python/distribute/distribute_coordinator.py", line 853, in run_distribute_coordinator
    task_id, session_config, rpc_layer)
  File "mostly-engine-tf20-p374/lib/python3.7/site-packages/tensorflow_core/python/distribute/distribute_coordinator.py", line 360, in _run_single_worker
    return worker_fn(strategy)
  File "mostly-engine-tf20-p374/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_distributed.py", line 771, in _worker_fn
    return method(model, **kwargs)
  File "mostly-engine-tf20-p374/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 324, in fit
    total_epochs=epochs)
  File "mostly-engine-tf20-p374/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 123, in run_one_epoch
    batch_outs = execution_function(iterator)
  File "mostly-engine-tf20-p374/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 86, in execution_function
    distributed_function(input_fn))
  File "mostly-engine-tf20-p374/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 457, in __call__
    result = self._call(*args, **kwds)
  File "mostly-engine-tf20-p374/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 487, in _call
    return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable
  File "mostly-engine-tf20-p374/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1823, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "mostly-engine-tf20-p374/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1141, in _filtered_call
    self.captured_inputs)
  File ".../lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1224, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File ".../lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 511, in call
    ctx=ctx)
  File ".../lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InternalError: Derived inconsistent output shapes, got [16], but expected to be [64]
	 [[node Adam/allreduce_1/CollectiveGather_1 (defined at miniconda3/envs/mostly-engine-tf20-p374/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
Additional GRPC error information:
{"created":"@1573115212.321897522","description":"Error received from peer","file":"external/grpc/src/core/lib/surface/call.cc","file_line":1039,"grpc_message":"Derived inconsistent output shapes, got [16], but expected to be [64]\n\t [[node Adam/allreduce_1/CollectiveGather_1 (defined at miniconda3/envs/mostly-engine-tf20-p374/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]","grpc_status":13}
[Op:__inference_distributed_function_888]

Function call stack:
distributed_function

2019-11-07 09:26:52.540347: W tensorflow/core/common_runtime/eager/context.cc:290] Unable to destroy server_ object, so releasing instead. Servers don't support clean shutdown.

Process finished with exit code 1 |
tensorflowtensorflow | BatchNorm generates NaN moving variance on GPU with fused set to True for some inputs | Bug | Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.15.0
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version, GPU model and memory: the ones on Colab

Describe the current behavior: I have an input of shape (1, 1, 1, num_channels) and I run it through a tf.keras.layers.BatchNormalization in training mode. When run on CPU (fused or not) or on GPU (not fused), the batch norm has the expected moving variance of [0.99, 0.99]. When run on GPU with fused=True, the moving variance is [nan, nan].

Describe the expected behavior: the moving variance should not be NaN.

Code to reproduce the issue: check this gist.

Other info / logs: it works fine on CPU.
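The expected value of 0.99 follows from the moving-average update moving_variance <- momentum * moving_variance + (1 - momentum) * batch_variance: with tf.keras's default momentum of 0.99, an initial moving variance of 1, and a batch variance of 0 (each channel sees exactly one value), the result is 0.99 * 1 + 0.01 * 0 = 0.99. A plausible source of the NaN (an editor's assumption here, not confirmed against the fused kernel's source) is an unbiased variance estimate dividing by n - 1 = 0. A minimal NumPy sketch of both estimates and of the update:

```python
import numpy as np

# Input of shape (1, 1, 1, num_channels): each channel sees exactly one value.
x = np.random.randn(1, 1, 1, 2).astype(np.float32)
per_channel = x.reshape(-1, x.shape[-1])          # n = 1 sample per channel

biased_var = per_channel.var(axis=0, ddof=0)      # divides by n     -> exactly [0, 0]
with np.errstate(invalid="ignore", divide="ignore"):
    unbiased_var = per_channel.var(axis=0, ddof=1)  # divides by n - 1 = 0 -> [nan, nan]

# Moving-average update with the tf.keras defaults (momentum=0.99, init variance 1).
momentum = 0.99
moving_var = momentum * np.ones(x.shape[-1]) + (1 - momentum) * biased_var  # -> [0.99, 0.99]
```

Feeding unbiased_var instead of biased_var into the same update would propagate the NaN into the moving variance, matching the reported [nan, nan].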
tensorflowtensorflow | model.reset_states() does not work for Bidirectional RNNs in tf.keras | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.14.6
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0
- Python version: 3.7.4
- GPU model and memory: none (MacBook Pro, Core i5, Iris Graphics 6100, 1.5 GB)

Describe the current behavior: state handling in RNNs with a Bidirectional wrapper has changed in tf.keras from Keras with TF 1.x. In the old Keras with TF 1.x, using stateful=True in a bidi RNN had no effect, i.e. all bidi RNN models behaved as if stateful=False; therefore model.reset_states() did not do anything. In the new tf.keras, stateful=True in a bidi RNN does have an effect: the fwd RNN is stateful and the bwd RNN is stateful. This is a good change IMO; even though stateful bidi RNNs are unusual, this is the right way to implement it. However, in tf.keras, model.reset_states() does not do anything for bidi RNN models (SimpleRNN, GRU, LSTM).

Describe the expected behavior: for the minimal example script provided below, here is the output:

fwd
non-stateful: [1.   0.5  0.25]
stateful:     [1.   0.5  0.25]
delta:        [0. 0. 0.]
bwd
non-stateful: [1. 0. 0.]
stateful:     [1. 0. 0.]
delta:        [0. 0. 0.]
fwd
non-stateful: [1.   0.5  0.25]
stateful:     [0.875   0.4375  0.21875]
delta:        [-0.125   -0.0625  -0.03125]
bwd
non-stateful: [1. 0. 0.]
stateful:     [0.875 0.25  0.5]
delta:        [-0.125  0.25   0.5]

resetting states in the stateful model

fwd
non-stateful: [1.   0.5  0.25]
stateful:     [0.890625   0.4453125  0.22265625]
delta:        [-0.109375   -0.0546875  -0.02734375]
bwd
non-stateful: [1. 0. 0.]
stateful:     [0.890625 0.21875  0.4375]
delta:        [-0.109375  0.21875   0.4375]

The results after the state reset should be the same as the first set of results, i.e. the last (third) set of results should produce the same output for the stateful and the non-stateful model, same as the first set of results.

Code to reproduce the issue:

```python
import numpy as np

TF2 = True

if TF2:
    # Currently there is a bug in tf.keras: model.reset_states() does not work.
    from tensorflow.keras.layers import Input, Dense, SimpleRNN, GRU, LSTM, Bidirectional
    from tensorflow.keras.models import Model
else:
    # In the old Keras, bidi RNNs with stateful=True behave the same as stateful=False.
    from keras.layers import Input, Dense, SimpleRNN, GRU, LSTM, Bidirectional
    from keras.models import Model

sequence_length = 3
feature_dim = 1

feature_input = Input(batch_shape=(1, sequence_length, feature_dim))
rnn_out = Bidirectional(SimpleRNN(1, activation=None, use_bias=False,
                                  return_sequences=True, return_state=False,
                                  stateful=False))(feature_input)
stateless_model = Model(inputs=feature_input, outputs=rnn_out)

stateful_rnn_out = Bidirectional(SimpleRNN(1, activation=None, use_bias=False,
                                           return_sequences=True, return_state=False,
                                           stateful=True))(feature_input)
stateful_model = Model(inputs=feature_input, outputs=stateful_rnn_out)

toy_weights = [np.asarray([[1.0]], dtype=np.float32), np.asarray([[0.5]], dtype=np.float32),
               np.asarray([[1.0]], dtype=np.float32), np.asarray([[0.5]], dtype=np.float32)]
stateless_model.set_weights(toy_weights)
stateful_model.set_weights(toy_weights)

x_in = np.zeros(sequence_length)
x_in[0] = 1
x_in = x_in.reshape((1, sequence_length, feature_dim))


def print_bidi_out(non_stateful_out, stateful_out):
    fb = ['fwd', 'bwd']
    for i in range(2):
        print(fb[i])
        print(f'non-stateful: {non_stateful_out.T[i]}')
        print(f'stateful:     {stateful_out.T[i]}')
        print(f'delta:        {stateful_out.T[i] - non_stateful_out.T[i]}')


non_stateful_out = stateless_model.predict(x_in).reshape((sequence_length, 2))
stateful_out = stateful_model.predict(x_in).reshape((sequence_length, 2))
print_bidi_out(non_stateful_out, stateful_out)

non_stateful_out = stateless_model.predict(x_in).reshape((sequence_length, 2))
stateful_out = stateful_model.predict(x_in).reshape((sequence_length, 2))
print_bidi_out(non_stateful_out, stateful_out)

print('\nresetting states in the stateful model\n')
stateful_model.reset_states()

non_stateful_out = stateless_model.predict(x_in).reshape((sequence_length, 2))
stateful_out = stateful_model.predict(x_in).reshape((sequence_length, 2))
print_bidi_out(non_stateful_out, stateful_out)
```
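As a possible workaround (an editor's sketch, not part of the original report), the wrapped layers can be reset directly: tf.keras's Bidirectional wrapper exposes its sub-layers as forward_layer and backward_layer. The helper below uses duck typing for those attribute names, so the traversal logic itself has no hard TensorFlow dependency:

```python
def reset_all_states(model):
    """Reset RNN states, descending into Bidirectional-style wrappers.

    Workaround sketch: in tf.keras 2.0, model.reset_states() does not reach
    the layers wrapped by Bidirectional, so walk model.layers and reset the
    forward/backward sub-layers explicitly. The attribute names
    forward_layer / backward_layer match the tf.keras wrapper.
    """
    for layer in model.layers:
        if hasattr(layer, "forward_layer") and hasattr(layer, "backward_layer"):
            # A Bidirectional-style wrapper: reset both directions directly.
            layer.forward_layer.reset_states()
            layer.backward_layer.reset_states()
        elif hasattr(layer, "reset_states"):
            # A plain stateful RNN layer.
            layer.reset_states()
```

Calling reset_all_states(stateful_model) instead of stateful_model.reset_states() before the third predict() pass should restore the first set of outputs.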
tensorflowtensorflow | Unsupported numpy type: NPY_LONGLONG | Bug | System information:
- Linux Mint 19, Dell XPS 7590 laptop
- Python 3.7.4 (Anaconda)
- TensorFlow version: v2.0.0-rc2-26-g64c3d38 2.0.0 (pip tensorflow-gpu, with GPU disabled)
- numpy version: 1.17.3

Describe the current behavior: TensorFlow 2.0 raises a ValueError if a numpy array of type np.longlong is converted to a tf.Tensor.

Describe the expected behavior: an np.array of dtype np.longlong should automatically convert to a tf.Tensor of dtype np.int64, as is the case in TensorFlow 1.15.

Code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

x = np.ones(10, dtype=np.longlong)
x_tf = tf.convert_to_tensor(x)
```

Traceback (most recent call last):
  File "/home/wibble/.conda/envs/drum_model/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<...>", line 1, in <module>
    tf.constant(x)
  File "/home/wibble/.conda/envs/drum_model/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 227, in constant
    allow_broadcast=True)
  File "/home/wibble/.conda/envs/drum_model/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 235, in _constant_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/home/wibble/.conda/envs/drum_model/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported numpy type: NPY_LONGLONG).
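Until the conversion is fixed, a workaround (an editor's sketch, not from the report) is to cast explicitly before calling tf.convert_to_tensor: on typical 64-bit platforms np.longlong and np.int64 are both 8 bytes wide, so the cast is lossless.

```python
import numpy as np

x = np.ones(10, dtype=np.longlong)

# np.longlong and np.int64 have the same item size on common 64-bit
# platforms, so the cast changes only the dtype's C-level identity.
assert np.dtype(np.longlong).itemsize == np.dtype(np.int64).itemsize
x64 = x.astype(np.int64)
# tf.convert_to_tensor(x64) then succeeds and yields a tf.int64 tensor.
```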
tensorflowtensorflow | tf.data.Dataset fixed-size batch with subsequent map under tf.distribute.MirroredStrategy leads to a crash | Bug | System information: the same environment as in "Code to reproduce the issue". It took me a few weeks of debugging to reproduce. Important: I do not think it will reproduce in Colab; you need at least 2 GPUs.

```python
#!/usr/bin/env python3
import sys

import tensorflow as tf


def main():
    strategy = tf.distribute.MirroredStrategy()
    batch_size = 12
    features_shape = (372, 558, 3)
    labels = 10
    sample = tf.random.uniform(features_shape)

    def batch_print(b, l):
        tf.print('shape(b):', tf.shape(b))
        tf.print(b[10])  # crashes here
        return b, l

    ds_train = tf.data.Dataset.from_tensors(sample) \
        .map(lambda s: (tf.squeeze(s), tf.ones(labels))) \
        .repeat() \
        .batch(batch_size, drop_remainder=True) \
        .map(batch_print)
    ds_val = tf.data.Dataset.from_tensors(sample) \
        .map(lambda s: (tf.squeeze(s), tf.ones(labels))) \
        .repeat() \
        .batch(batch_size, drop_remainder=True) \
        .take(10)

    import tensorflow_core.python.keras.backend
    original_input = tensorflow_core.python.keras.layers.Input

    def create_input(*args, **kwargs):
        return original_input(*args, batch_size=batch_size, **kwargs)

    # Monkey-patch the Input layer to ensure fixed tensor shapes.
    tensorflow_core.python.keras.layers.Input = create_input

    with strategy.scope():
        model = tf.keras.applications.DenseNet121(weights=None,
                                                  input_shape=features_shape,
                                                  classes=labels)
        model.build((batch_size,) + features_shape)
        model.summary()
        optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001)
        cross_entropy = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
        model.compile(optimizer=optimizer, loss=cross_entropy, metrics=['accuracy'])
        model.fit(ds_train, validation_data=ds_val, epochs=1, steps_per_epoch=100)


if __name__ == '__main__':
    sys.exit(main())
```

As you can see, I am feeding a tf.data.Dataset pipeline to a Keras model under tf.distribute.MirroredStrategy (in my case there are 4 GPUs). Here is the log, which indicates a crash. Full log:

2019-11-06 11:09:37.077575: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0, 1, 2, 3
2019-11-06 11:09:37.077858: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-06 11:09:37.077880: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 1 2 3
2019-11-06 11:09:37.077894: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N Y N N
2019-11-06 11:09:37.077904: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 1:   Y N N N
2019-11-06 11:09:37.077914: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 2:   N N N Y
2019-11-06 11:09:37.077923: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 3:   N N Y N
2019-11-06 11:09:37.084775: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/device:GPU:0 with 10470 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0, compute capability: 6.1)
2019-11-06 11:09:37.086075: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/device:GPU:1 with 10470 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
2019-11-06 11:09:37.087140: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/device:GPU:2 with 10470 MB memory) -> physical GPU (device: 2, name: GeForce GTX 1080 Ti, pci bus id: 0000:82:00.0, compute capability: 6.1)
2019-11-06 11:09:37.088126: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/device:GPU:3 with 10470 MB memory) -> physical GPU (device: 3, name: GeForce GTX 1080 Ti, pci bus id: 0000:83:00.0, compute capability: 6.1)

Model: "densenet121"
Layer (type)                     Output Shape          Param #   Connected to
input_1 (InputLayer)             (3, 372, 558, 3)      0
zero_padding2d (ZeroPadding2D)   (3, 378, 564, 3)      0         input_1[0][0]
conv1/conv (Conv2D)              (3, 186, 279, 64)     9408      zero_padding2d[0][0]
conv1/bn (BatchNormalization)    (3, 186, 279, 64)     256       conv1/conv[0][0]
conv1/relu (Activation)          (3, 186, 279, 64)     0         conv1/bn[0][0]
zero_padding2d_1 (ZeroPadding2D) (3, 188, 281, 64)     0         conv1/relu[0][0]
pool1 (MaxPooling2D)             (3, 93, 140, 64)      0         zero_padding2d_1[0][0]
conv2_block1_0_bn (BatchNormalization) (3, 93, 140, 64) 256      pool1[0][0]
conv2_block1_0_relu
(Activation) (3, 93, 140, 64) 0 conv2_block1_0_bn[0][0]
[The summary continues through DenseNet121's four dense blocks in the same repeating pattern: each conv{B}_block{N} contributes 0_bn (BatchNormalization) and 0_relu (Activation) rows, a 1x1 bottleneck 1_conv (Conv2D, 128 filters) with its own 1_bn and 1_relu, a 3x3 2_conv (Conv2D, 32 filters, 36864 params), and a concat (Concatenate) that grows the channel count by 32; each transition (pool2_bn/relu/conv and pool2_pool through pool4) halves the channels with a 1x1 Conv2D and the spatial size with an AveragePooling2D. Output shapes per stage: (3, 93, 140, ·) in conv2, (3, 46, 70, ·) in conv3, (3, 23, 35, ·) in conv4, and (3, 11, 17, ·) in conv5.]
conv5 block4 concat concatenat 3 11 17 640 0 conv5 block3 concat 0 0 conv5 block4 2 conv 0 0 conv5 block5 0 bn batchnormali 3 11 17 640 2560 conv5 block4 concat 0 0 conv5 block5 0 relu activation 3 11 17 640 0 conv5 block5 0 bn 0 0 conv5 block5 1 conv conv2d 3 11 17 128 81920 conv5 block5 0 relu 0 0 conv5 block5 1 bn batchnormali 3 11 17 128 512 conv5 block5 1 conv 0 0 conv5 block5 1 relu activation 3 11 17 128 0 conv5 block5 1 bn 0 0 conv5 block5 2 conv conv2d 3 11 17 32 36864 conv5 block5 1 relu 0 0 conv5 block5 concat concatenat 3 11 17 672 0 conv5 block4 concat 0 0 conv5 block5 2 conv 0 0 conv5 block6 0 bn batchnormali 3 11 17 672 2688 conv5 block5 concat 0 0 conv5 block6 0 relu activation 3 11 17 672 0 conv5 block6 0 bn 0 0 conv5 block6 1 conv conv2d 3 11 17 128 86016 conv5 block6 0 relu 0 0 conv5 block6 1 bn batchnormali 3 11 17 128 512 conv5 block6 1 conv 0 0 conv5 block6 1 relu activation 3 11 17 128 0 conv5 block6 1 bn 0 0 conv5 block6 2 conv conv2d 3 11 17 32 36864 conv5 block6 1 relu 0 0 conv5 block6 concat concatenat 3 11 17 704 0 conv5 block5 concat 0 0 conv5 block6 2 conv 0 0 conv5 block7 0 bn batchnormali 3 11 17 704 2816 conv5 block6 concat 0 0 conv5 block7 0 relu activation 3 11 17 704 0 conv5 block7 0 bn 0 0 conv5 block7 1 conv conv2d 3 11 17 128 90112 conv5 block7 0 relu 0 0 conv5 block7 1 bn batchnormali 3 11 17 128 512 conv5 block7 1 conv 0 0 conv5 block7 1 relu activation 3 11 17 128 0 conv5 block7 1 bn 0 0 conv5 block7 2 conv conv2d 3 11 17 32 36864 conv5 block7 1 relu 0 0 conv5 block7 concat concatenat 3 11 17 736 0 conv5 block6 concat 0 0 conv5 block7 2 conv 0 0 conv5 block8 0 bn batchnormali 3 11 17 736 2944 conv5 block7 concat 0 0 conv5 block8 0 relu activation 3 11 17 736 0 conv5 block8 0 bn 0 0 conv5 block8 1 conv conv2d 3 11 17 128 94208 conv5 block8 0 relu 0 0 conv5 block8 1 bn batchnormali 3 11 17 128 512 conv5 block8 1 conv 0 0 conv5 block8 1 relu activation 3 11 17 128 0 conv5 block8 1 bn 0 0 conv5 block8 2 conv conv2d 3 11 17 32 
36864 conv5 block8 1 relu 0 0 conv5 block8 concat concatenat 3 11 17 768 0 conv5 block7 concat 0 0 conv5 block8 2 conv 0 0 conv5 block9 0 bn batchnormali 3 11 17 768 3072 conv5 block8 concat 0 0 conv5 block9 0 relu activation 3 11 17 768 0 conv5 block9 0 bn 0 0 conv5 block9 1 conv conv2d 3 11 17 128 98304 conv5 block9 0 relu 0 0 conv5 block9 1 bn batchnormali 3 11 17 128 512 conv5 block9 1 conv 0 0 conv5 block9 1 relu activation 3 11 17 128 0 conv5 block9 1 bn 0 0 conv5 block9 2 conv conv2d 3 11 17 32 36864 conv5 block9 1 relu 0 0 conv5 block9 concat concatenat 3 11 17 800 0 conv5 block8 concat 0 0 conv5 block9 2 conv 0 0 conv5 block10 0 bn batchnormal 3 11 17 800 3200 conv5 block9 concat 0 0 conv5 block10 0 relu activatio 3 11 17 800 0 conv5 block10 0 bn 0 0 conv5 block10 1 conv conv2d 3 11 17 128 102400 conv5 block10 0 relu 0 0 conv5 block10 1 bn batchnormal 3 11 17 128 512 conv5 block10 1 conv 0 0 conv5 block10 1 relu activatio 3 11 17 128 0 conv5 block10 1 bn 0 0 conv5 block10 2 conv conv2d 3 11 17 32 36864 conv5 block10 1 relu 0 0 conv5 block10 concat concatena 3 11 17 832 0 conv5 block9 concat 0 0 conv5 block10 2 conv 0 0 conv5 block11 0 bn batchnormal 3 11 17 832 3328 conv5 block10 concat 0 0 conv5 block11 0 relu activatio 3 11 17 832 0 conv5 block11 0 bn 0 0 conv5 block11 1 conv conv2d 3 11 17 128 106496 conv5 block11 0 relu 0 0 conv5 block11 1 bn batchnormal 3 11 17 128 512 conv5 block11 1 conv 0 0 conv5 block11 1 relu activatio 3 11 17 128 0 conv5 block11 1 bn 0 0 conv5 block11 2 conv conv2d 3 11 17 32 36864 conv5 block11 1 relu 0 0 conv5 block11 concat concatena 3 11 17 864 0 conv5 block10 concat 0 0 conv5 block11 2 conv 0 0 conv5 block12 0 bn batchnormal 3 11 17 864 3456 conv5 block11 concat 0 0 conv5 block12 0 relu activatio 3 11 17 864 0 conv5 block12 0 bn 0 0 conv5 block12 1 conv conv2d 3 11 17 128 110592 conv5 block12 0 relu 0 0 conv5 block12 1 bn batchnormal 3 11 17 128 512 conv5 block12 1 conv 0 0 conv5 block12 1 relu activatio 3 11 17 128 0 conv5 
block12 1 bn 0 0 conv5 block12 2 conv conv2d 3 11 17 32 36864 conv5 block12 1 relu 0 0 conv5 block12 concat concatena 3 11 17 896 0 conv5 block11 concat 0 0 conv5 block12 2 conv 0 0 conv5 block13 0 bn batchnormal 3 11 17 896 3584 conv5 block12 concat 0 0 conv5 block13 0 relu activatio 3 11 17 896 0 conv5 block13 0 bn 0 0 conv5 block13 1 conv conv2d 3 11 17 128 114688 conv5 block13 0 relu 0 0 conv5 block13 1 bn batchnormal 3 11 17 128 512 conv5 block13 1 conv 0 0 conv5 block13 1 relu activatio 3 11 17 128 0 conv5 block13 1 bn 0 0 conv5 block13 2 conv conv2d 3 11 17 32 36864 conv5 block13 1 relu 0 0 conv5 block13 concat concatena 3 11 17 928 0 conv5 block12 concat 0 0 conv5 block13 2 conv 0 0 conv5 block14 0 bn batchnormal 3 11 17 928 3712 conv5 block13 concat 0 0 conv5 block14 0 relu activatio 3 11 17 928 0 conv5 block14 0 bn 0 0 conv5 block14 1 conv conv2d 3 11 17 128 118784 conv5 block14 0 relu 0 0 conv5 block14 1 bn batchnormal 3 11 17 128 512 conv5 block14 1 conv 0 0 conv5 block14 1 relu activatio 3 11 17 128 0 conv5 block14 1 bn 0 0 conv5 block14 2 conv conv2d 3 11 17 32 36864 conv5 block14 1 relu 0 0 conv5 block14 concat concatena 3 11 17 960 0 conv5 block13 concat 0 0 conv5 block14 2 conv 0 0 conv5 block15 0 bn batchnormal 3 11 17 960 3840 conv5 block14 concat 0 0 conv5 block15 0 relu activatio 3 11 17 960 0 conv5 block15 0 bn 0 0 conv5 block15 1 conv conv2d 3 11 17 128 122880 conv5 block15 0 relu 0 0 conv5 block15 1 bn batchnormal 3 11 17 128 512 conv5 block15 1 conv 0 0 conv5 block15 1 relu activatio 3 11 17 128 0 conv5 block15 1 bn 0 0 conv5 block15 2 conv conv2d 3 11 17 32 36864 conv5 block15 1 relu 0 0 conv5 block15 concat concatena 3 11 17 992 0 conv5 block14 concat 0 0 conv5 block15 2 conv 0 0 conv5 block16 0 bn batchnormal 3 11 17 992 3968 conv5 block15 concat 0 0 conv5 block16 0 relu activatio 3 11 17 992 0 conv5 block16 0 bn 0 0 conv5 block16 1 conv conv2d 3 11 17 128 126976 conv5 block16 0 relu 0 0 conv5 block16 1 bn batchnormal 3 11 17 128 512 
conv5_block16_concat (Concatenate, (3, 11, 17, 1024)) feeds the head: bn (BatchNormalization, (3, 11, 17, 1024), 4,096 params) → relu (Activation) → avg_pool (GlobalAveragePooling2D, (3, 1024)) → fc1000 (Dense, (3, 10), 10,250 params)
Total params: 7,047,754
Trainable params: 6,964,106
Non-trainable params: 83,648

Train for 100 steps, validate for 10 steps
2019-11-06 11:12:15.235702: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
shapes: (TensorShape([12, 372, 558, 3]), (12, 372, 558, 3))
2019-11-06 11:12:15.481528: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at strided_slice_op.cc:108 : Invalid argument: slice index 10 of dimension 0 out of bounds.
(the same shapes print and OP_REQUIRES warning repeat, followed by)
2019-11-06 11:12:15.489183: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: {{function_node __inference_Dataset_map_batch_print_35}} slice index 10 of dimension 0 out of bounds.
	 [[{{node strided_slice}}]]
	 [[MultiDeviceIteratorGetNextFromShard/RemoteCall/IteratorGetNext_2/identity_4_188]]
(more of the same OP_REQUIRES warnings and StartAbort errors follow, the last one ending with)
	 [[replica_2/metrics/accuracy/AssignAddVariableOp_1]]
 39/100 [...] - ETA: 3:57
Traceback (most recent call last):
  File "/user/vmarkovtsev/image/efficientoffice/efficientoffice/shape_bug.py", line 45, in <module>
    sys.exit(main())
  File "/user/vmarkovtsev/image/efficientoffice/efficientoffice/shape_bug.py", line 41, in main
    model.fit(ds_train, validation_data=ds_val, epochs=1, steps_per_epoch=100)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 728, in fit
    use_multiprocessing=use_multiprocessing)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py", line 324, in fit
    total_epochs=epochs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py", line 123, in run_one_epoch
    batch_outs = execution_function(iterator)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 86, in execution_function
    distributed_function(input_fn))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py", line 457, in __call__
    result = self._call(*args, **kwds)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py", line 520, in _call
    return self._stateless_fn(*args, **kwds)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 1823, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 1141, in _filtered_call
    self.captured_inputs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 1224, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 511, in call
    ctx=ctx)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: 4 root error(s) found.
  (0) Invalid argument: slice index 10 of dimension 0 out of bounds.
	 [[node strided_slice (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1751) ]]
	 [[MultiDeviceIteratorGetNextFromShard/RemoteCall/IteratorGetNext_2/identity_4_188]]
  (1) Invalid argument: slice index 10 of dimension 0 out of bounds.
	 [[node strided_slice (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1751) ]]
	 [[MultiDeviceIteratorGetNextFromShard/RemoteCall/IteratorGetNext_2]]
  (2) Cancelled
  (3) Cancelled
0 successful operations. 1 derived errors ignored. [[Op:__inference_distributed_function_166689]]
Function call stack:
distributed_function (x6)

This is how the log ends at the crash. This is why the bug is so spicy: both the static and the dynamic shape are 12, but if you try to access an element at index 3 (3 = 12 / 4), you crash. I am really interested in why the code works if you remove drop_remainder=True. |
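The arithmetic behind that crash can be sketched in plain Python (no TensorFlow; `shard_batch` is a hypothetical stand-in for the per-replica split a mirrored strategy performs on each global batch):

```python
def shard_batch(batch, num_replicas):
    """Split a global batch evenly across replicas, roughly mirroring what a
    mirrored strategy does before each replica sees its slice (sketch only)."""
    per_replica = len(batch) // num_replicas
    return [batch[i * per_replica:(i + 1) * per_replica]
            for i in range(num_replicas)]

global_batch = list(range(12))         # static and dynamic shape are both 12
shards = shard_batch(global_batch, 4)  # 4 replicas -> 3 elements per shard

assert global_batch[10] == 10          # index 10 is valid for the global batch
try:
    shards[0][3]                       # ...but not inside a replica shard
except IndexError:
    print("index 3 is out of bounds in a shard of length", len(shards[0]))
```

So an index that is in bounds for the global shape [12, ...] can still be out of bounds inside a replica, which is consistent with the "slice index 10 of dimension 0 out of bounds" errors above.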
tensorflowtensorflow | tf 2.0 distribution strategy throws invalid argument error | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): somewhat
- OS platform and distribution (e.g., Linux Ubuntu 16.04): DGX
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): Docker image
- TensorFlow version (use command below): 2.0
- Python version: 3.6
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: 10.1
- GPU model and memory: Tesla V100-SXM2

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior
From the docs (Keras with tf.distribute.Strategy):

```python
tf.debugging.set_log_device_placement(True)
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    inputs = tf.keras.layers.Input(shape=(1,))
    predictions = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
    model.compile(loss='mse',
                  optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))
```

I adapted this (I hope correctly) for multiple GPUs:

```python
gpus = tf.config.experimental.list_physical_devices('GPU')
gpus_to_use = gpus[:3]
if gpus:
    # Restrict TensorFlow to only allocate 1GB of memory on the first GPU
    try:
        tf.config.experimental.set_visible_devices(gpus_to_use, 'GPU')
        for gpu in gpus_to_use:
            tf.config.experimental.set_memory_growth(gpu, True)
            gb = 1024
            tf.config.experimental.set_virtual_device_configuration(
                gpu,
                [tf.config.experimental.VirtualDeviceConfiguration(
                    memory_limit=12 * gb)])
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Virtual devices must be set before GPUs have been initialized
        print(e)
```

which prints "8 Physical GPUs, 3 Logical GPUs" as expected. Then calling just this line:

```python
strategy = tf.distribute.MirroredStrategy()
```

throws InvalidArgumentError:

Traceback (most recent call last)
----> 1 strategy = tf.distribute.MirroredStrategy()
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/distribute/mirrored_strategy.py in __init__(self, devices, cross_device_ops)
    354   def __init__(self, devices=None, cross_device_ops=None):
    355     extended = MirroredExtended(
--> 356         self, devices=devices, cross_device_ops=cross_device_ops)
    357     super(MirroredStrategy, self).__init__(extended)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/distribute/mirrored_strategy.py in __init__(self, container_strategy, devices, cross_device_ops)
    395     self._cross_device_ops = cross_device_ops
--> 396     self._initialize_strategy(devices)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/distribute/mirrored_strategy.py in _initialize_strategy(self, devices)
    408         "No duplicates allowed in `devices` argument: %s" % devices
    409     if _is_device_list_local(devices):
--> 410       self._initialize_local(devices)
    412       self._initialize_multi_worker(devices)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/distribute/mirrored_strategy.py in _initialize_local(self, devices)
    419     self._inferred_cross_device_ops = None if self._cross_device_ops else (
--> 420         cross_device_ops_lib.choose_the_best(devices))
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/distribute/cross_device_ops.py in choose_the_best(devices, session_config)
   1195   requested_devices = set([device_util.canonicalize(d) for d in devices])
-> 1196   machine_devices = device_lib.list_local_devices(session_config=session_config)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/device_lib.py in list_local_devices(session_config)
     40       _convert(s)
---> 41       for s in pywrap_tensorflow.list_devices(session_config=session_config)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/pywrap_tensorflow_internal.py in list_devices(session_config)
   2247     return ListDevicesWithSessionConfig(session_config.SerializeToString())
   2248   else:
-> 2249     return ListDevices()
InvalidArgumentError: device CUDA:0 not supported by XLA service while setting up XLA_GPU_JIT device number 0

Describe the expected behavior
It just works, as in the docs.

Code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem: see above. Docker image tensorflow/tensorflow:2.0.0-gpu-py3-jupyter with nvidia-docker.

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
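The "8 Physical GPUs, 3 Logical GPUs" print can be sketched in plain Python (no TensorFlow; the device names are illustrative of what `set_visible_devices` leaves visible):

```python
# 8 physical GPUs are present on the DGX; only the first 3 are made visible,
# so only 3 logical devices are created from them.
physical_gpus = [f"/physical_device:GPU:{i}" for i in range(8)]
gpus_to_use = physical_gpus[:3]   # set_visible_devices(gpus[:3], 'GPU')
logical_gpus = [f"/device:GPU:{i}" for i in range(len(gpus_to_use))]
print(len(physical_gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
```

The point is that MirroredStrategy later enumerates local devices itself, and it is during that enumeration (list_local_devices in the traceback) that the XLA device error surfaces.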
tensorflowtensorflow | tensorflow lite currently doesn't support control flow ops: Merge, Switch | Bug | System information
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.3 LTS
- TensorFlow installed from (source or binary): source
- TensorFlow version (or github SHA if from source): 1.15.0

Provide the text output from tflite_convert:

2019-11-06 14:51:12.818885: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
WARNING:tensorflow:From /home/mannam-pc/.local/lib/python3.6/site-packages/tensorflow_core/lite/python/convert_saved_model.py:60: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in TensorFlow 2.0.
2019-11-06 14:51:14.238827: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2019-11-06 14:51:14.239039: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2019-11-06 14:51:14.342213: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:786] Optimization results for grappler item: graph_to_optimize
2019-11-06 14:51:14.342263: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: function_optimizer did nothing. time = 0.003ms.
2019-11-06 14:51:14.342276: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: function_optimizer did nothing. time = 0ms.
2019-11-06 14:51:17.338864: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2019-11-06 14:51:17.338999: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2019-11-06 14:51:17.758703: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:786] Optimization results for grappler item: graph_to_optimize
2019-11-06 14:51:17.758752: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   constant_folding: Graph size after: 1632 nodes (-1290), 2168 edges (-1454), time = 256.269ms.
2019-11-06 14:51:17.758768: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   constant_folding: Graph size after: 1632 nodes (0), 2168 edges (0), time = 73.402ms.
Traceback (most recent call last):
  File "tflite_converter.py", line 40, in <module>
    tflite_model_quant = converter.convert()
  File "/home/mannam-pc/.local/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 983, in convert
    **converter_kwargs)
  File "/home/mannam-pc/.local/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 449, in toco_convert_impl
    enable_mlir_converter=enable_mlir_converter)
  File "/home/mannam-pc/.local/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py", line 200, in toco_convert_protos
    raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: See console for info.
2019-11-06 14:51:20.478994: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Enter
(the "Converting unsupported operation: Enter" line repeats several times)
2019-11-06 14:51:20.479196: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: TensorArrayV3
2019-11-06 14:51:20.479212: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
(the converter log continues with many repetitions of the same two message kinds; the distinct unsupported operations reported by tensorflow/lite/toco/import_tensorflow.cc:659 are)
Enter, Exit, LoopCond, TensorArrayV3, TensorArrayScatterV3, TensorArrayReadV3, TensorArrayWriteV3, TensorArrayGatherV3, TensorArraySizeV3, ParseSingleExample, DecodeRaw, DecodeJpeg, DecodePng, DecodeGif, DecodeBmp, Substr
(interleaved with repeated "Unsupported data type in placeholder op: 20" messages from import_tensorflow.cc:193)
...
51 20 481546 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481557 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayv3 2019 11 06 14 51 20 481566 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481577 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayv3 2019 11 06 14 51 20 481587 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481598 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayv3 2019 11 06 14 51 20 481607 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481618 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481633 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481644 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481653 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayscatterv3 2019 11 06 14 51 20 481665 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481675 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481685 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayscatterv3 2019 11 06 14 51 20 481696 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481706 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481716 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayscatterv3 
2019 11 06 14 51 20 481727 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481737 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481746 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481756 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481766 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481777 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481787 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481797 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481808 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481823 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481833 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481844 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481855 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481865 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481877 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481886 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481896 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481907 I tensorflow lite toco import 
tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481917 I tensorflow lite toco import tensorflow cc 193 unsupported datum type in placeholder op 20 2019 11 06 14 51 20 481929 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayscatterv3 2019 11 06 14 51 20 481943 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481955 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481967 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 481998 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 482011 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation loopcond 2019 11 06 14 51 20 482039 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation exit 2019 11 06 14 51 20 482053 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation exit 2019 11 06 14 51 20 482065 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation exit 2019 11 06 14 51 20 482077 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation exit 2019 11 06 14 51 20 482089 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation exit 2019 11 06 14 51 20 482462 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayreadv3 2019 11 06 14 51 20 482478 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayreadv3 2019 11 06 14 51 20 482490 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayreadv3 2019 11 06 14 51 20 482502 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayreadv3 2019 11 06 14 51 20 482513 I tensorflow lite toco import tensorflow cc 659 convert unsupported 
operation tensorarraysizev3 2019 11 06 14 51 20 482524 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraysizev3 2019 11 06 14 51 20 482534 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraysizev3 2019 11 06 14 51 20 482544 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraysizev3 2019 11 06 14 51 20 482555 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraysizev3 2019 11 06 14 51 20 482565 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayscatterv3 2019 11 06 14 51 20 482612 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation enter 2019 11 06 14 51 20 482629 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraygatherv3 2019 11 06 14 51 20 482642 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraygatherv3 2019 11 06 14 51 20 482655 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraygatherv3 2019 11 06 14 51 20 482667 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraygatherv3 2019 11 06 14 51 20 482680 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraygatherv3 2019 11 06 14 51 20 482691 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarrayreadv3 2019 11 06 14 51 20 482749 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation nonmaxsuppressionv5 2019 11 06 14 51 20 482798 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation size 2019 11 06 14 51 20 482893 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation size 2019 11 06 14 51 20 482945 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraywritev3 2019 11 06 14 51 20 
483080 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraywritev3 2019 11 06 14 51 20 483096 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraywritev3 2019 11 06 14 51 20 483113 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraywritev3 2019 11 06 14 51 20 483125 I tensorflow lite toco import tensorflow cc 659 convert unsupported operation tensorarraywritev3 2019 11 06 14 51 20 509188 I tensorflow lite toco graph transformation graph transformation cc 39 before remove unused op 1122 operator 2035 array 0 quantize 2019 11 06 14 51 20 575870 I tensorflow lite toco graph transformation graph transformation cc 39 after remove unused op pass 1 1085 operator 1966 array 0 quantize 2019 11 06 14 51 20 648615 I tensorflow lite toco graph transformation graph transformation cc 39 before general graph transformation 1085 operator 1966 array 0 quantize 2019 11 06 14 51 20 729953 I tensorflow lite toco graph transformation graph transformation cc 39 after general graph transformation pass 1 583 operator 1191 array 0 quantize 2019 11 06 14 51 20 751767 I tensorflow lite toco graph transformation graph transformation cc 39 before group bidirectional sequence lstm rnn 583 operator 1191 array 0 quantize 2019 11 06 14 51 20 769313 I tensorflow lite toco graph transformation graph transformation cc 39 before dequantization graph transformation 583 operator 1191 array 0 quantize 2019 11 06 14 51 20 801282 I tensorflow lite toco allocate transient array cc 345 total transient array allocate size 896 byte theoretical optimal value 896 byte 2019 11 06 14 51 20 804680 I tensorflow lite toco toco tooling cc 454 number of parameter 4600257 2019 11 06 14 51 20 806064 w tensorflow lite toco tflite operator cc 2706 op decoderaw be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806081 w tensorflow lite toco tflite operator 
cc 2706 op substr be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806102 w tensorflow lite toco tflite operator cc 2706 op decodejpeg be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806111 w tensorflow lite toco tflite operator cc 2706 op substr be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806120 w tensorflow lite toco tflite operator cc 2706 op substr be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806131 w tensorflow lite toco tflite operator cc 2706 op substr be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806141 w tensorflow lite toco tflite operator cc 2706 op decodejpeg be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806150 w tensorflow lite toco tflite operator cc 2706 op decodepng be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806161 w tensorflow lite toco tflite operator cc 2706 op decodegif be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806343 w tensorflow lite toco tflite operator cc 2706 op nonmaxsuppressionv5 be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806600 w tensorflow lite toco tflite operator cc 2706 op decoderaw be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806614 w tensorflow lite toco tflite operator cc 2706 op substr be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806635 w tensorflow lite toco tflite operator cc 2706 op decodejpeg be a valid tensorflow op but have not be whiteliste for 
the tensorflow lite flex op set 2019 11 06 14 51 20 806644 w tensorflow lite toco tflite operator cc 2706 op substr be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806653 w tensorflow lite toco tflite operator cc 2706 op substr be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806664 w tensorflow lite toco tflite operator cc 2706 op substr be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806674 w tensorflow lite toco tflite operator cc 2706 op decodejpeg be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806684 w tensorflow lite toco tflite operator cc 2706 op decodepng be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806696 w tensorflow lite toco tflite operator cc 2706 op decodegif be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 806892 w tensorflow lite toco tflite operator cc 2706 op nonmaxsuppressionv5 be a valid tensorflow op but have not be whiteliste for the tensorflow lite flex op set 2019 11 06 14 51 20 807143 e tensorflow lite toco toco tooling cc 481 we be continually in the process of add support to tensorflow lite for more op it would be helpful if you could inform we of how this conversion go by open a github issue at and paste the following tensorflow lite currently doesn t support control flow op merge switch we be work on support control flow op please see github issue at some of the operator in the model be not support by the standard tensorflow lite runtime and be not recognize by tensorflow if you have a custom implementation for they you can disable this error with allow custom op or by set allow custom op true when call tf lite tfliteconverter here be a list of builtin operator you be use add cast 
concatenation conv 2d depthwise conv 2d equal exp expand dim fill gather great great equal less logical and logical or logistic maximum minimum mul pack pad range reshape resize bilinear select shape slice split squeeze stride slice sub sum tile topk v2 transpose unpack where zero like here be a list of operator for which you will need custom implementation decodegif decodejpeg decodepng decoderaw nonmaxsuppressionv5 substr traceback most recent call last file home mannam pc local bin toco from protos line 8 in sys exit main file home mannam pc local lib python3 6 site package tensorflow core lite toco python toco from protos py line 89 in main app run main execute argv sys argv 0 unparse file home mannam pc local lib python3 6 site package tensorflow core python platform app py line 40 in run run main main argv argv flag parser parse flag tolerate undef file home mannam pc local lib python3 6 site package absl app py line 299 in run run main main args file home mannam pc local lib python3 6 site package absl app py line 250 in run main sys exit main argv file home mannam pc local lib python3 6 site package tensorflow core lite toco python toco from protos py line 52 in execute enable mlir converter exception we be continually in the process of add support to tensorflow lite for more op it would be helpful if you could inform we of how this conversion go by open a github issue at and paste the following tensorflow lite currently doesn t support control flow op merge switch we be work on support control flow op please see github issue at some of the operator in the model be not support by the standard tensorflow lite runtime and be not recognize by tensorflow if you have a custom implementation for they you can disable this error with allow custom op or by set allow custom op true when call tf lite tfliteconverter here be a list of builtin operator you be use add cast concatenation conv 2d depthwise conv 2d equal exp expand dim fill gather great great equal less 
logical and logical or logistic maximum minimum mul pack pad range reshape resize bilinear select shape slice split squeeze stride slice sub sum tile topk v2 transpose unpack where zero like here be a list of operator for which you will need custom implementation decodegif decodejpeg decodepng decoderaw nonmaxsuppressionv5 substr also please include a link to a graphdef or the model if possible include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach here be my source code |
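The converter's own suggestion above (allow_custom_ops / the Flex op set) can be sketched with the TF 2.x converter API. This is a hedged sketch, not the reporter's code: `saved_model_dir` is a placeholder path, and TensorFlow must be installed for the function body to actually run.

```python
def convert_with_flex(saved_model_dir):
    """Convert a SavedModel, letting ops without TFLite kernels (DecodeJpeg,
    Substr, NonMaxSuppressionV5, ...) fall back to the Flex (Select TF ops)
    runtime instead of aborting the conversion."""
    import tensorflow as tf  # imported here so the sketch parses without TF

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # ops with native TFLite kernels
        tf.lite.OpsSet.SELECT_TF_OPS,    # run the rest with TF kernels (Flex)
    ]
    # For truly custom ops, keep them in the flatbuffer as opaque nodes.
    converter.allow_custom_ops = True
    return converter.convert()
```

Note that this only addresses the unsupported-op part of the error; the control flow ops (Merge, Switch) the message mentions were still unsupported by the TFLite runtime of that era regardless of these flags.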
tensorflowtensorflow | error on saving an RNN layer with the recurrent_dropout parameter as a SavedModel | Bug |
layer_input = Input(shape=(10, 100))
layer_bi_rnn = Bidirectional(GRU(units=10, recurrent_dropout=0.2, return_sequences=True))(layer_input)
layer_dense = TimeDistributed(Dense(5))(layer_bi_rnn)
layer_act = Activation('softmax')(layer_dense)
model = Model(layer_input, layer_act)
model.compile(loss='categorical_crossentropy')
Saving this model as a SavedModel on tf-nightly 2.1.0.dev20191104 raises a bug: "Attempted to save a function b'__inference_forward_lstm_1_layer_call_fn_19037' which references a symbolic Tensor Tensor("dropout/mul_1:0", shape=(None, 256), dtype=float32) that is not a simple constant. This is not supported." After trying to change some parameters, I reached the conclusion that this issue happens because of the recurrent_dropout parameter. Any suggestions? |
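One workaround for save failures like the one above, while the underlying bug is open, is to skip SavedModel tracing entirely and persist the architecture and the weights separately. This is a hedged sketch of that pattern, not an official fix; the function names and paths are illustrative.

```python
def save_config_and_weights(model, json_path, weights_path):
    """Persist a Keras model without tracing its call graph: the JSON holds
    only the layer configuration, the weights file holds only parameters."""
    with open(json_path, "w") as f:
        f.write(model.to_json())       # architecture only, no traced functions
    model.save_weights(weights_path)   # parameters only


def load_config_and_weights(json_path, weights_path):
    """Rebuild the model from its JSON config, then restore the weights."""
    import tensorflow as tf  # imported here so the sketch parses without TF

    with open(json_path) as f:
        model = tf.keras.models.model_from_json(f.read())
    model.load_weights(weights_path)
    return model
```

The trade-off is that the exported artifact is no longer a self-contained SavedModel (no serving signatures), so this only helps when the goal is checkpointing rather than deployment.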
tensorflowtensorflow | understanding the warning "5 out of the last 5 calls triggered tf.function retracing" | Bug | System information: Have I written custom code: yes (below). OS: Linux Ubuntu 16.04. TensorFlow 2.1.0.dev20191103 (binary install), Python 3.6, CUDA 10.0, cuDNN 7.6.4.4, NVIDIA Titan X 12 GB. I defined a very simple training script with a custom loss function and fit(), as below. The loss_fn is very simple and I think it takes tensors of the same shape and type every time, but I am getting the following warning message. Interestingly, I get the message only when training with multiple GPUs. Is it a bug? Is it harmful? Does it really affect the computational cost?
Warning message: WARNING:tensorflow: 5 out of the last 5 calls triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing Python objects instead of tensors. Also, tf.function has an experimental_relax_shapes=True option that relaxes argument shapes and can avoid unnecessary retracing. Please refer to the documentation for more details.
Code:
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, Conv3D, Dense

@tf.function
def loss_fn(y_pred, y_true):
    return tf.reduce_mean(tf.math.square(y_pred - y_true))

if __name__ == '__main__':
    batch_size_per_sync = 4
    strategy = tf.distribute.MirroredStrategy()
    num_gpus = strategy.num_replicas_in_sync
    global_batch_size = batch_size_per_sync * num_gpus
    print('num_gpus: {}, global_batch_size: {}'.format(num_gpus, global_batch_size))
    # fake data
    fakeA = tf.constant(np.random.rand(global_batch_size, 10, 200, 200, 128).astype(np.float32))
    target = tf.constant(np.random.rand(global_batch_size, 200, 200, 14).astype(np.float32))
    # tf dataset
    def gen():
        while True:
            yield fakeA, target
    dataset = tf.data.Dataset.from_generator(
        gen, (tf.float32, tf.float32),
        (tf.TensorShape(fakeA.shape), tf.TensorShape(target.shape)))
    dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
    # training
    callbacks = [tf.keras.callbacks.TensorBoard(log_dir='./log_training')]
    with strategy.scope():
        va = keras.Input(shape=(10, 200, 200, 128), dtype=tf.float32, name='va')
        x = Conv3D(64, kernel_size=3, strides=1, padding='same')(va)
        x = Conv3D(64, kernel_size=3, strides=1, padding='same')(x)
        x = Conv3D(64, kernel_size=3, strides=1, padding='same')(x)
        x = tf.reduce_max(x, axis=1, name='maxpool')
        b = Conv2D(14, kernel_size=3, padding='same')(x)
        model = keras.Model(inputs=va, outputs=b, name='net')
        optimizer = keras.optimizers.RMSprop()
        model.compile(optimizer=optimizer, loss=loss_fn)
    model.fit(x=dataset, epochs=10, steps_per_epoch=100, callbacks=callbacks)
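For background on the warning above: tf.function keeps a cache of traced concrete functions, keyed by the call "signature" of the arguments. The mechanism can be illustrated in plain Python (no TensorFlow required); the cache-key scheme below is a simplified stand-in for the real one.

```python
# Simplified model of tf.function's trace cache: tensor-like arguments are
# keyed by (shape, dtype), so a call with a new shape misses the cache and
# triggers an expensive new trace ("retracing"). Passing raw Python objects
# is worse: they key by value, so every new object forces a new trace.
trace_cache = {}

def traced_call(shape, dtype):
    key = (tuple(shape), dtype)          # tensor-like signature
    if key not in trace_cache:           # cache miss -> retrace
        trace_cache[key] = "trace#%d" % len(trace_cache)
    return trace_cache[key]

traced_call((4, 200, 200, 14), "float32")   # first call: traces
traced_call((4, 200, 200, 14), "float32")   # same signature: cache hit
traced_call((8, 200, 200, 14), "float32")   # new shape: retraces
```

This also suggests (as a guess, not a confirmed diagnosis) why the warning appears only with multiple GPUs: under MirroredStrategy each replica can invoke the function in a distinct context, multiplying the number of distinct trace keys.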
tensorflowtensorflow | group convolution not work in tf 2 0 0 | Bug | the merge pr 25818 enable group convolution by allow the input s depth to be multiple of the filter s in depth parameter rather than exactly equal however as you can see below whenever I attempt to perform a group convolution it result in an ambiguous error no algorithm find same issue if I make the filter s in depth 1 make the convolution filter in depth parameter equal to the channel of the input 16 it work regular convolution be I miss something or be this not support for eager execution tensorflow instal via pip3 install upgrade tensorflow gpu os ubuntu 18 04 1 kernel 5 0 0 15 lowlatency gtx 1080ti cuda 10 1 cudnn 7 6 0 I leave tensorflow initialization log in case it provide relevant information python 3 7 4 default aug 13 2019 20 35 49 gcc 7 3 0 anaconda inc on linux type help copyright credit or license for more information import tensorflow as tf print tf version git version tf version version v2 0 0 rc2 26 g64c3d38 2 0 0 tf nn conv2d tf random normal 11 13 17 16 tf random normal 3 5 16 2 7 padding same stride 1 1 1 1 shape 2019 11 05 18 00 39 800225 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcuda so 1 2019 11 05 18 00 39 841613 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 05 18 00 39 842211 I tensorflow core common runtime gpu gpu device cc 1618 find device 0 with property name geforce gtx 1080 ti major 6 minor 1 memoryclockrate ghz 1 6705 pcibusid 0000 1d 00 0 2019 11 05 18 00 39 842279 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 05 18 00 39 843083 I tensorflow core common runtime gpu gpu device cc 1618 find device 1 with property name 
geforce gtx 1080 ti major 6 minor 1 memoryclockrate ghz 1 645 pcibusid 0000 1e 00 0 2019 11 05 18 00 39 843414 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 0 2019 11 05 18 00 39 844684 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 0 2019 11 05 18 00 39 846034 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 0 2019 11 05 18 00 39 847021 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 0 2019 11 05 18 00 39 849473 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 0 2019 11 05 18 00 39 850970 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 0 2019 11 05 18 00 39 855126 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2019 11 05 18 00 39 855289 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 05 18 00 39 856005 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 05 18 00 39 856903 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 05 18 00 39 857518 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 05 18 00 39 858304 I tensorflow core common runtime gpu gpu device 
cc 1746 add visible gpu device 0 1 2019 11 05 18 00 39 858525 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2019 11 05 18 00 39 877459 I tensorflow core platform profile util cpu util cc 94 cpu frequency 3699425000 hz 2019 11 05 18 00 39 878231 I tensorflow compiler xla service service cc 168 xla service 0x563f2440c8b0 execute computation on platform host device 2019 11 05 18 00 39 878267 I tensorflow compiler xla service service cc 175 streamexecutor device 0 host default version 2019 11 05 18 00 40 042173 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 05 18 00 40 068620 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 05 18 00 40 069774 I tensorflow compiler xla service service cc 168 xla service 0x563f244af3e0 execute computation on platform cuda device 2019 11 05 18 00 40 069799 I tensorflow compiler xla service service cc 175 streamexecutor device 0 geforce gtx 1080 ti compute capability 6 1 2019 11 05 18 00 40 069819 I tensorflow compiler xla service service cc 175 streamexecutor device 1 geforce gtx 1080 ti compute capability 6 1 2019 11 05 18 00 40 071588 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2019 11 05 18 00 40 072539 I tensorflow core common runtime gpu gpu device cc 1618 find device 0 with property name geforce gtx 1080 ti major 6 minor 1 memoryclockrate ghz 1 6705 pcibusid 0000 1d 00 0 2019 11 05 18 00 40 072670 I tensorflow stream executor cuda cuda gpu executor cc 1006 successful numa node read from sysfs have negative value 1 but 
...but there must be at least one NUMA node, so returning NUMA node zero
2019-11-05 18:00:40.073607: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 1 with properties: name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.645 pciBusID: 0000:1e:00.0
2019-11-05 18:00:40.073652: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-11-05 18:00:40.073666: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-11-05 18:00:40.073678: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-11-05 18:00:40.073688: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-11-05 18:00:40.073699: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-11-05 18:00:40.073710: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-11-05 18:00:40.073720: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-11-05 18:00:40.073771: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-05 18:00:40.074334: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-05 18:00:40.074891: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-05 18:00:40.075465: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-05 18:00:40.075960: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0, 1
2019-11-05 18:00:40.075991: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-11-05 18:00:40.077335: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-05 18:00:40.077350: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 1
2019-11-05 18:00:40.077358: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N Y
2019-11-05 18:00:40.077364: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 1:   Y N
2019-11-05 18:00:40.077488: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-05 18:00:40.078794: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-05 18:00:40.080681: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-05 18:00:40.081210: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10478 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:1d:00.0, compute capability: 6.1)
2019-11-05 18:00:40.081733: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-05 18:00:40.082722: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10479 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:1e:00.0, compute capability: 6.1)
2019-11-05 18:00:40.562956: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-11-05 18:00:41.155546: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at conv_ops.cc:1069 : Not found: No algorithm worked!
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/bahaa/miniconda3.7/lib/python3.7/site-packages/tensorflow_core/python/ops/nn_ops.py", line 1913, in conv2d_v2
    name=name)
  File "/home/bahaa/miniconda3.7/lib/python3.7/site-packages/tensorflow_core/python/ops/nn_ops.py", line 2010, in conv2d
    name=name)
  File "/home/bahaa/miniconda3.7/lib/python3.7/site-packages/tensorflow_core/python/ops/gen_nn_ops.py", line 1039, in conv2d
    _six.raise_from(_core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.NotFoundError: No algorithm worked! [Op:Conv2D]
The failing op was tf.nn.conv2d(tf.random.normal((11, 13, 17, 16)), tf.random.normal((3, 5, 16, 7)), padding="SAME", strides=(1, 1, 1, 1)), whose expected output shape is TensorShape([11, 13, 17, 7]).
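As a side note on the traceback above: the reported output shape, TensorShape([11, 13, 17, 7]), follows from the SAME-padding shape rule (output spatial dim = ceil(input dim / stride), output channels = filter count). A minimal pure-Python sketch of that arithmetic — `conv2d_same_output_shape` is a hypothetical helper for illustration, not part of TensorFlow:

```python
import math

def conv2d_same_output_shape(input_shape, filter_shape, strides):
    """Output shape of a NHWC conv2d with SAME padding and HWIO filters.

    SAME padding pads so that each spatial dim shrinks only by the stride:
    out_dim = ceil(in_dim / stride).
    """
    n, h, w, _ = input_shape
    out_h = math.ceil(h / strides[1])  # strides are given as (N, H, W, C)
    out_w = math.ceil(w / strides[2])
    out_c = filter_shape[3]            # number of output filters
    return (n, out_h, out_w, out_c)

# The shapes from the failing op in the traceback above:
print(conv2d_same_output_shape((11, 13, 17, 16), (3, 5, 16, 7), (1, 1, 1, 1)))
# → (11, 13, 17, 7)
```

This confirms the shapes in the repro are self-consistent; the "No algorithm worked!" failure is a cuDNN algorithm-selection problem, not a shape mismatch.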
tensorflowtensorflow | Comment on the "Custom model_fn with TF 2.0 symbols" section of the "Migrate your TensorFlow 1 code to TensorFlow 2" guide | Bug | URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): my comment is about this piece of code:

def my_model_fn(features, labels, mode):
  model = make_model()
  training = (mode == tf.estimator.ModeKeys.TRAIN)
  loss_obj = tf.keras.losses.SparseCategoricalCrossentropy()
  predictions = model(features, training=training)

  # Get both the unconditional losses (the None part)
  # and the input-conditional losses (the features part).
  reg_losses = model.get_losses_for(None) + model.get_losses_for(features)
  total_loss = loss_obj(labels, predictions) + tf.math.add_n(reg_losses)

  # Upgrade to tf.keras.metrics.
  accuracy_obj = tf.keras.metrics.Accuracy(name='acc_obj')
  accuracy = accuracy_obj.update_state(
      y_true=labels, y_pred=tf.math.argmax(predictions, axis=1))

  train_op = None
  if training:
    # Upgrade to tf.keras.optimizers.
    optimizer = tf.keras.optimizers.Adam()
    # Manually assign the tf.compat.v1 global_step variable to
    # optimizer.iterations to make tf.compat.v1.train.global_step increase
    # correctly. This assignment is a must for any tf.train.SessionRunHook
    # specified in the Estimator, as SessionRunHooks rely on global step.
    optimizer.iterations = tf.compat.v1.train.get_or_create_global_step()
    # Get both the unconditional updates (the None part)
    # and the input-conditional updates (the features part).
    update_ops = model.get_updates_for(None) + model.get_updates_for(features)
    # Compute the minimize_op.
    minimize_op = optimizer.get_updates(total_loss,
                                        model.trainable_variables)[0]
    train_op = tf.group(minimize_op, *update_ops)

  return tf.estimator.EstimatorSpec(
      mode=mode,
      predictions=predictions,
      loss=total_loss,
      train_op=train_op,
      eval_metric_ops={'Accuracy': accuracy_obj})

# Create the Estimator & train.
estimator = tf.estimator.Estimator(model_fn=my_model_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

My first comment: why write

accuracy = accuracy_obj.update_state(y_true=labels, y_pred=tf.math.argmax(predictions, axis=1))

instead of just

accuracy_obj.update_state(y_true=labels, y_pred=tf.math.argmax(predictions, axis=1))

First, `accuracy` is never used. Second, this may lead the reader to believe that tf.keras.metrics.Metric.update_state outputs the accuracy value, just like the tf.keras.metrics.Metric.result method, whereas the output of update_state is accuracy_obj.count, as seen in the code below:

import tensorflow as tf
accuracy_obj = tf.keras.metrics.Accuracy(name='acc_obj')
accuracy = accuracy_obj.update_state(
    y_true=[0, 1], y_pred=tf.math.argmax([[0.3, 0.7], [0.3, 0.7]], axis=1))
tf.print(accuracy)               # 2
tf.print(accuracy_obj.result())  # 0.5
tf.print(accuracy_obj.count)     # 2

My other comment: here we have the line `train_op = tf.group(minimize_op, *update_ops)`, whereas in the "Custom model_fn with minimal changes" section the corresponding line is `train_op = tf.group(minimize_op, update_ops)`, without the `*`. Why is that? Is this a mistake?
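The distinction the reporter draws — update_state returning a running sample count rather than the metric value — can be illustrated without TensorFlow. Below, `ToyAccuracy` is a hypothetical stand-in (not a real tf.keras class) whose update_state deliberately mirrors the confusing return value described above:

```python
class ToyAccuracy:
    """Minimal accuracy-metric sketch: update_state accumulates state and
    returns the running sample count; result() returns the actual metric."""

    def __init__(self):
        self.total = 0.0  # correct predictions seen so far
        self.count = 0.0  # samples seen so far

    def update_state(self, y_true, y_pred):
        self.total += sum(1.0 for t, p in zip(y_true, y_pred) if t == p)
        self.count += len(y_true)
        return self.count  # a count, NOT the accuracy value

    def result(self):
        return self.total / self.count

m = ToyAccuracy()
returned = m.update_state([0, 1], [1, 1])  # one of two predictions correct
print(returned)    # → 2.0 (running count, easy to mistake for the metric)
print(m.result())  # → 0.5 (the actual accuracy)
```

Reading only the `accuracy = ...update_state(...)` line from the guide, one could easily assume `accuracy` holds 0.5 here, which is exactly the confusion the issue reports.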
tensorflowtensorflow | Documentation unclear: tf.batch_to_space | Bug | URL(s) with the issue: (not given). Description of issue (what needs changing): 1. Examples need to be added. 2. The "crops" part in the Args section is very difficult to understand; it needs to be well formatted, put into code, with a clear description. The documentation is unclear and no examples are shown; there is also one extremely long paragraph, with code in between, in the "crops" section under Args. Parameters defined (are all parameters defined and formatted correctly?): not well formatted. Returns defined (are return values defined?): yes. Raises listed and defined (are the errors defined?): no. Usage example (is there a usage example?): no. Submit a pull request?: no.
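The confusing "crops" argument the issue complains about governs only shape arithmetic: batch_to_space divides the batch by the product of block_shape, multiplies each spatial dim by its block factor, then crops each end. A hypothetical pure-Python helper (an illustration of the documented shape rule, not TensorFlow's implementation) makes this concrete:

```python
from math import prod

def batch_to_space_shape(input_shape, block_shape, crops):
    """Shape rule for batch_to_space:
    - output batch = input batch / prod(block_shape)
    - spatial dim i -> input_shape[1 + i] * block_shape[i]
                       - crops[i][0] - crops[i][1]
    - remaining (channel) dims are unchanged."""
    batch = input_shape[0]
    assert batch % prod(block_shape) == 0, "batch must divide evenly"
    batch //= prod(block_shape)
    m = len(block_shape)
    spatial = [d * b - c0 - c1
               for d, b, (c0, c1) in zip(input_shape[1:1 + m],
                                         block_shape, crops)]
    return [batch] + spatial + list(input_shape[1 + m:])

# No cropping: 4 batch elements of shape [1,1,1] fold into one [2,2,1] image.
print(batch_to_space_shape([4, 1, 1, 1], [2, 2], [[0, 0], [0, 0]]))
# → [1, 2, 2, 1]
# With crops=[[0, 0], [2, 0]], two columns are trimmed from the width.
print(batch_to_space_shape([8, 1, 3, 1], [2, 2], [[0, 0], [2, 0]]))
# → [2, 2, 4, 1]
```

A worked helper like this (or the equivalent prose) is roughly the kind of example the issue is asking the documentation to add.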