repository | issue title | labels | body |
|---|---|---|---|
tensorflow/tensorflow | TF2: TensorArray with multi-dimensional tensor writes in it is stacked into a None-shaped tensor in autograph mode | Bug |

**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04 (linux-lz 5.4.0-48-generic #52~18.04.1)
- TensorFlow installed from (source or binary): in a conda env, `pip install tensorflow-gpu`
- TensorFlow version (use command below): v2.3.0-rc2-23-gb36436b087 2.3.0
- Python version: 3.8
- CUDA/cuDNN version: `conda install cudatoolkit=10.1 cudnn=7.6.5`
- GPU model and memory: 4 x Titan Xp, 12 GB

**Describe the current behavior**
I use tf.TensorArray to dynamically store a specified number of multi-dimensional tensors and then try to convert the TensorArray into a tensor, which will be fed into the next network layer. I use the @tf.function decorator to convert the pythonic code into TF's autograph mode to accelerate the training process. Specifically, take the following code as an example:

```python
import tensorflow as tf

class TestModel(tf.keras.Model):
    def __init__(self, n):
        super(TestModel, self).__init__()
        self.conv_first = tf.keras.layers.Conv2D(4, 3, 3)
        self.nframes = n

    def call(self, x):
        align_fea = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)

        def cond(i, n, fea_col):
            return i < n

        def body(i, n, fea_col):
            fea_col = fea_col.write(i, x)
            i = tf.add(i, 1)
            return i, n, fea_col

        _, _, align_fea = tf.while_loop(cond, body, [0, self.nframes, align_fea])
        tf.print('align_fea size:', align_fea.size())
        t = align_fea.stack()
        tf.print('t shape:', t.shape)
        # Without these two lines coercively reshaping the stacked tensor,
        # t will have no first dimension.
        tt = tf.reshape(t, [self.nframes, 8, 4, 6, 3])
        tf.print('tt shape:', tt.shape)
        return t

@tf.function
def foo(tm):
    x = tf.ones([8, 4, 6, 3], dtype=tf.float32)
    output = tm(x)

nframes = 10
tm = TestModel(nframes)
foo(tm)
```

The output of the above code is:

```
align_fea size: 10
t shape: TensorShape([None, 8, 4, 6, 3])
tt shape: TensorShape([10, 8, 4, 6, 3])
```

**Describe the expected behavior**
I have tested the code in eager mode (that is, with the @tf.function decorator removed) and everything goes well, but an error is thrown when autograph mode starts, since the output tensor does not get the first dimension of its shape and cannot be fed into the next layer of the network. Moreover, if I write scalars into the TensorArray, the output shape of the tensor is fine with a concrete number whether I use eager or autograph mode. My point is that the TensorArray, whether it contains scalars or multi-dimensional tensors, should automatically detect its own size and stack into a tensor with a fully defined shape in both eager and graph mode, and the output should be as follows:

```
align_fea size: 10
t shape: TensorShape([10, 8, 4, 6, 3])
tt shape: TensorShape([10, 8, 4, 6, 3])
```

One workaround for this problem is simply to reshape the stacked tensor after TensorArray.stack(), but that is not robust for my code. Thus I wonder whether this is a bug or a feature meant to fit something unknown to me. I have been struggling with this for a long time; I appreciate your reply in advance. |
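For reference, the stacking behavior the reporter expects matches numpy's `np.stack`, where the leading dimension of the result is always the (known) number of stacked elements. This is a plain-numpy sketch of the expected shape, not TensorFlow's actual TensorArray machinery:

```python
import numpy as np

# Ten frames, each of shape (8, 4, 6, 3), mirroring the report's example.
frames = [np.ones((8, 4, 6, 3), dtype=np.float32) for _ in range(10)]

# Stacking a known number of equally-shaped arrays always yields a fully
# defined leading dimension -- the behavior the reporter expects from
# TensorArray.stack() in graph mode as well.
t = np.stack(frames)
print(t.shape)  # (10, 8, 4, 6, 3)
```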
tensorflow/tensorflow | Wrong maximum value passed in the positional encoding for the transformer model | Bug |

**URL(s) with the issue:** (transformer tutorial)

**Description of issue (what needs changing):** When creating the positional encoding vectors, we specify the maximum position encoding, which should be equal to the maximum possible length of the sequences fed into the model. So it must be equal to `MAX_LENGTH`, which is 40 in your sample; instead it is set equal to the vocabulary length, or 10000.

**Clear description:** When the maximum length is so big, the differences between neighboring elements of the positional encoding are very subtle, and it is hard for the model to grasp them; and logically the value is incorrect.

**Correct links:** not applicable. **Parameters defined:** not applicable. **Returns defined:** not applicable. **Raises listed and defined:** no errors, just a typo in the sample code. **Usage example:** no. **Request visuals, if applicable:** no. **Submit a pull request?:** no. |
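For context, the table the tutorial builds follows the sinusoidal positional-encoding formula from "Attention Is All You Need". A minimal numpy sketch (the function name and shapes here are illustrative, not the tutorial's exact code) shows that the table only needs as many rows as the longest sequence, which is the reporter's point — building 10000 rows when only `MAX_LENGTH = 40` are ever indexed wastes the rest:

```python
import numpy as np

def positional_encoding(max_position, d_model):
    # Standard sinusoidal positional encoding from "Attention Is All You Need":
    # PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    # PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    pos = np.arange(max_position)[:, np.newaxis]        # (max_position, 1)
    i = np.arange(d_model)[np.newaxis, :]               # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (i // 2)) / np.float64(d_model))
    angle_rads = pos * angle_rates
    angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])   # even indices: sin
    angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])   # odd indices: cos
    return angle_rads

# Only MAX_LENGTH (= 40) rows are ever looked up, regardless of vocab size.
pe = positional_encoding(40, 128)
print(pe.shape)  # (40, 128)
```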
tensorflow/tensorflow | Compiling client code with -mavx2 leads to an Eigen error | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.14
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): binary, compiled in CI and transferred to the client system
- TensorFlow version (use command below): v2.3.0
- Python version: n/a
- Bazel version (if compiling from source): 3.1.0
- GCC/Compiler version (if compiling from source): Apple clang 11.0.0 for the client code; not sure which compiler did the TF build, but likely the same, or 10.0.0
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a

**Describe the current behavior**
We have compiled the C++ library from 2.3.0 and are now using it from our code. An `#include` of a TensorFlow header (the chain below goes through `tensorflow/cc/saved_model/loader.h`) leads to a compilation failure when `-mavx2` is used:

```
In file included from build/...hpp:20:
In file included from build/libs/tensorflow/include/tensorflow/cc/saved_model/loader.h:27:
In file included from build/libs/tensorflow/include/tensorflow/core/public/session.h:24:
In file included from build/libs/tensorflow/include/tensorflow/core/framework/tensor.h:23:
In file included from build/libs/tensorflow/include/tensorflow/core/framework/allocator.h:26:
In file included from build/libs/tensorflow/include/tensorflow/core/framework/numeric_types.h:24:
In file included from build/libs/tensorflow/include/third_party/eigen3/unsupported/Eigen/CXX11/FixedPoint:41:
build/libs/tensorflow/include/third_party/eigen3/unsupported/Eigen/CXX11/src/FixedPoint/PacketMathAVX2.h:30:9: error: no template named 'eigen_packet_wrapper'
typedef eigen_packet_wrapper<__m256i, 20> Packet32q8i;
```

Inspecting that file shows that the template is indeed not defined there. Other Eigen headers for NEON or SSE (consider `/usr/local/include/eigen3/Eigen/src/Core/arch/SSE/PacketMath.h` in Eigen 3.3.7) do include this template.

**Describe the expected behavior**
Client C++ programs compile against TF 2.3.0 when `-mavx2` or `-march=native` is enabled.

**Standalone code to reproduce the issue**

```cpp
// test.cpp
#include <tensorflow/cc/saved_model/loader.h>  // the header named in the error chain above

int main(int argc, char** argv) {
  return 0;
}
```

```shell
# This works:
clang++ -std=c++14 -L libs/tensorflow/lib \
    -isystem /usr/local/include/eigen3 \
    -isystem libs/tensorflow/include/third_party/eigen3 \
    -isystem libs/tensorflow/include \
    -ltensorflow_cc -lstdc++ test.cpp

# This does not work:
clang++ -mavx2 -std=c++14 -L libs/tensorflow/lib \
    -isystem /usr/local/include/eigen3 \
    -isystem libs/tensorflow/include/third_party/eigen3 \
    -isystem libs/tensorflow/include \
    -ltensorflow_cc -lstdc++ test.cpp
```

TensorFlow directory tree:

```
$ tree -L 2 libs/tensorflow
libs/tensorflow
├── include
│   ├── absl
│   ├── external
│   ├── tensorflow
│   └── third_party
└── lib
    ├── libtensorflow_cc.so -> libtensorflow_cc.so.2
    └── libtensorflow_cc.so.2
```

This problem, for instance, prevents me from using some libraries whose CMake configuration brings the `-march=native` or `-mavx2` flags into the build. |
tensorflow/tensorflow | tensorflow.keras.backend.dot does not work as expected when the second argument is 3-dimensional or higher | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, but reproducing this bug requires very minimal code
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Manjaro Linux (rolling release)
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.3.0-rc2-23-gb36436b087 2.3.0
- Python version: Python 3.8

**Describe the current behavior**
`tensorflow.keras.backend.dot` does not compute the dot product along the expected axes. It computes the dot product along the last axis of the first argument and along the next-to-last axis of the second argument.

**Describe the expected behavior**
I expected the dot product to be computed along the last axis of the first argument and the first axis of the second argument. Note that this only differs from the current behavior if the second argument is 3-dimensional or higher. Note also that this behavior is not incorrect per se; rather it is unconventional and unexpected, and therefore it can cause extremely mysterious errors. At the very least, this behavior should be included in the documentation (it is not currently documented), with a few more examples to illustrate. Perhaps that is the best solution, to avoid breaking the code of people who have figured out this subtlety and use it extensively.

**Standalone code to reproduce the issue**

```python
from tensorflow.keras import backend as K

# Example 1
x = K.placeholder(shape=(1, 2))
y = K.placeholder(shape=(2, 3, 4))
print(K.dot(x, y))
# I expected this to print a placeholder tensor of shape (1, 3, 4), but
# instead I get a long traceback, the last line of which is:
# ValueError: Dimensions must be equal, but are 2 and 3 for
#   '{{node MatMul_3}} = MatMul[T=DT_FLOAT, transpose_a=false,
#   transpose_b=false](Reshape_7, Reshape_8)' with input shapes: [1,2], [3,8].
# (See below for the full traceback.)

# Example 2
x = K.placeholder(shape=(1, 2))
y = K.placeholder(shape=(3, 2, 4))
print(K.dot(x, y))
# I would expect this to raise an error, because the last dimension of x does
# not match the first dimension of y. Instead this prints a placeholder tensor
# of shape (1, 3, 4), indicating that the dot product is computed along the
# last axis of x and the next-to-last axis of y.

# Example 3
x = K.placeholder(shape=(1, 2))
y = K.placeholder(shape=(3, 4, 5, 6, 7, 2, 8))
print(K.dot(x, y))
# Again, I expected this to raise an error, but the dot product is computed
# without complaint.
```

**Other info / logs**
Traceback from Example 1 above:

```
InvalidArgumentError                      Traceback (most recent call last)
.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1811   try:
-> 1812     c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
   1813   except errors.InvalidArgumentError as e:

InvalidArgumentError: Dimensions must be equal, but are 2 and 3 for '{{node MatMul_5}} = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false](Reshape_12, Reshape_13)' with input shapes: [1,2], [3,8].

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 K.dot(x, y)

.local/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
    199     # Call target and fall back on dispatchers if there is a TypeError.
    200     try:
--> 201       return target(*args, **kwargs)
    202     except (TypeError, ValueError):
    203       # Note: convert_to_eager_tensor currently raises a ValueError, not a

.local/lib/python3.8/site-packages/tensorflow/python/keras/backend.py in dot(x, y)
   1825         array_ops.transpose(y, perm=y_permute_dim), [y_shape[-2], -1])
-> 1826     return array_ops.reshape(
   1827         math_ops.matmul(xt, yt), x_shape[:-1] + y_shape[:-2] + [y_shape[-1]])
   1828   if is_sparse(x):
   1829     out = sparse_ops.sparse_tensor_dense_matmul(x, y)

...
.local/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py in matmul(...)
.local/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py in mat_mul(a, b, transpose_a, transpose_b, name)
.local/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(...)
.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in _create_op_internal(...)
.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in _create_op_internal(...)
.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in __init__(...)
.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs, op_def)
   1813   except errors.InvalidArgumentError as e:
   1814     # Convert to ValueError for backwards compatibility.
-> 1815     raise ValueError(str(e))

ValueError: Dimensions must be equal, but are 2 and 3 for '{{node MatMul_5}} = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false](Reshape_12, Reshape_13)' with input shapes: [1,2], [3,8].
``` |
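The two contraction conventions the report contrasts can be reproduced with plain numpy, no TensorFlow required. `np.tensordot(x, y, axes=1)` implements the conventional behavior the reporter expected (last axis of `x` against the first axis of `y`), while an einsum over the next-to-last axis of `y` mimics what `K.dot` actually does. This is a numpy stand-in for illustration, not the Keras implementation itself:

```python
import numpy as np

# Shapes from the report's Example 2.
x = np.ones((1, 2))
y = np.ones((3, 2, 4))

# Conventional contraction (what the reporter expected): last axis of x
# against the FIRST axis of y. It rejects this pair, because
# x.shape[-1] == 2 does not match y.shape[0] == 3.
try:
    np.tensordot(x, y, axes=1)
except ValueError:
    print("conventional contraction rejects (1,2) . (3,2,4)")

# What K.dot actually computes: contract the last axis of x with the
# NEXT-TO-LAST axis of y, producing shape (1, 3, 4).
out = np.einsum('ab,cbd->acd', x, y)
print(out.shape)  # (1, 3, 4)
```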
tensorflow/tensorflow | Bug in tf | Bug |

Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

**Describe the current behavior**

**Describe the expected behavior**

**Standalone code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter or any other notebook.

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
tensorflow/tensorflow | tf.feature_column.embedding_column: wrong dtype in lookup variable | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
- TensorFlow installed from (source or binary): `pip install -U --force-reinstall tf-nightly`
- TensorFlow version (use command below): 2.4.0-dev20200712
- Python version: Python 3.7.7
- Bazel version (if compiling from source): no
- GCC/Compiler version (if compiling from source): no
- CUDA/cuDNN version: reproducible on CPU
- GPU model and memory: reproducible on CPU
- Exact command to reproduce: see below

**Describe the problem**
The `tf.feature_column.embedding_column` of my TensorFlow version internally works with int32 instead of int64, and thus there are errors when exporting the model to SavedModel and serving it with, e.g., a Triton server image. The StridedSlice operation outputs an int64, but the following operation accepts int32 only. The error is reproducible with the Jupyter notebook above.

**Source code / logs**

```
InferenceServerException: {{function_node __inference_signature_wrapper_3345}} {{function_node __inference_signature_wrapper_3345}}: input segment_ids(0) expected type int32 != int64, the type of sequential_1/dense_features_1/test_col_embedding/test_col_embedding_weights/embedding_lookup_sparse/strided_slice:output:0(0)
	 [[node sequential_1/dense_features_1/test_col_embedding/test_col_embedding_weights/embedding_lookup_sparse/StatefulPartitionedCall]] [[StatefulPartitionedCall/StatefulPartitionedCall_1]]
``` |
tensorflow/tensorflow | Post-quantized Keras model with UpSampling2D gets a false output shape after post-training quantization | Bug |

**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary): pip
- TensorFlow version (or GitHub SHA if from source): 2.3.0

**Command used to run the converter or code if you're using the Python API**
If possible, please share a link to Colab/Jupyter or any other notebook:

```python
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
```

**The output from the converter invocation**
I get a false tflite model. PS: the upper layer is 1x10x10x64; after an UpSampling2D, in the tflite I get a 1x1x1x64 output.

**Also, please include a link to the saved model or GraphDef**
(Put the link here, or attach to the issue.)

**Failure details**
If the conversion is successful but the generated model is wrong, state what is wrong: produces wrong results and/or decreased accuracy; produces correct results, but the model is slower than expected (model generated from old converter).

**RNN conversion support**
If converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title.

**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
tensorflow/tensorflow | TF 2.3 compiled with MKL on OSX does not progress beyond first epoch | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): OSX 10.15.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 2.3.0
- Python version: 3.8.5
- Bazel version (if compiling from source): 3.1.0
- GCC/Compiler version (if compiling from source): clang 10.0.1
- CUDA/cuDNN version: no
- GPU model and memory: no

**Describe the current behavior**
When executing `vae.fit(mnist_digits, epochs=30, batch_size=128)`, it does not progress beyond epoch 1 and seems to hang, even after waiting several hours to see if training progresses beyond epoch 1. I noticed that the CPU load shows 2 threads executing at 100%.

**Describe the expected behavior**
Need to be able to progress beyond the 1st epoch and start training.

**Standalone code to reproduce the issue**
Please find the code used to run the test here: vae_0_1.py

**Other info / logs**
Bazel build info:

```
bazel build --action_env CC=/usr/local/opt/llvm/bin/clang --config=noaws --config=nogcp --config=mkl --config=opt -c opt //tensorflow/tools/pip_package:build_pip_package
```

Console output after `export MKLDNN_VERBOSE=1`:

```
2020-09-26 08:52:10.680539: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ffc4db8dd80 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-26 08:52:10.680577: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-09-26 08:52:10.680685: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.

Model: "encoder"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            [(None, 28, 28, 1)]  0
conv2d (Conv2D)                 (None, 14, 14, 32)   320         input_1[0][0]
conv2d_1 (Conv2D)               (None, 7, 7, 64)     18496       conv2d[0][0]
flatten (Flatten)               (None, 3136)         0           conv2d_1[0][0]
dense (Dense)                   (None, 16)           50192       flatten[0][0]
z_mean (Dense)                  (None, 2)            34          dense[0][0]
z_log_var (Dense)               (None, 2)            34          dense[0][0]
sampling (Sampling)             (None, 2)            0           z_mean[0][0]
                                                                 z_log_var[0][0]
==================================================================================================
Total params: 69,076
Trainable params: 69,076
Non-trainable params: 0

Model: "decoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 2)]               0
dense_1 (Dense)              (None, 3136)              9408
reshape (Reshape)            (None, 7, 7, 64)          0
conv2d_transpose (Conv2DTran (None, 14, 14, 64)        36928
conv2d_transpose_1 (Conv2DTr (None, 28, 28, 32)        18464
conv2d_transpose_2 (Conv2DTr (None, 28, 28, 1)         289
=================================================================
Total params: 65,089
Trainable params: 65,089
Non-trainable params: 0

Epoch 1/30
dnnl_verbose,info,oneDNN v1.4.0 (commit N/A)
dnnl_verbose,info,cpu,runtime:OpenMP
dnnl_verbose,info,cpu,isa:Intel AVX2
dnnl_verbose,info,gpu,runtime:none
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:cdba:f0 dst_f32:blocked:acdb8a:f0,32x1x3x3,0.00488281
dnnl_verbose,exec,cpu,convolution,jit:avx2,forward_training,src_f32:blocked:abcd:f0 wei_f32:blocked:acdb8a:f0 bia_f32:blocked:a:f0 dst_f32:blocked:abcd8b:f0,post_ops:eltwise_relu,alg:convolution_direct,mb128_ic1oc32_ih28oh14kh3sh2dh0ph0_iw28ow14kw3sw2dw0pw0,1.90991
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:cdba:f0 dst_f32:blocked:abcd8b8a:f0,64x32x3x3,0.0571289
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:abcd8b:f0 dst_f32:blocked:acdb:f0,128x32x14x14,1.68311
dnnl_verbose,exec,cpu,convolution,jit:avx2,forward_training,src_f32:blocked:abcd8b:f0 wei_f32:blocked:abcd8b8a:f0 bia_f32:blocked:a:f0 dst_f32:blocked:abcd8b:f0,post_ops:eltwise_relu,alg:convolution_direct,mb128_ic32oc64_ih14oh7kh3sh2dh0ph0_iw14ow7kw3sw2dw0pw0,3.87109
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:abcd8b:f0 dst_f32:blocked:acdb:f0,128x64x7x7,0.25
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:abcd8b:f0 dst_f32:blocked:acdb:f0,128x64x7x7,0.225098
dnnl_verbose,exec,cpu,inner_product,gemm:jit,forward_inference,src_f32:blocked:ab:f0 wei_f32:blocked:ba:f0 bia_f32:blocked:a:f0 dst_f32:blocked:ab:f0,post_ops:eltwise_relu,mb128ic3136oc16,15.105
dnnl_verbose,exec,cpu,inner_product,gemm:jit,forward_inference,src_f32:blocked:ab:f0 wei_f32:blocked:ba:f0 bia_f32:blocked:a:f0 dst_f32:blocked:ab:f0,mb128ic16oc2,0.0161133
dnnl_verbose,exec,cpu,inner_product,gemm:jit,forward_inference,src_f32:blocked:ab:f0 wei_f32:blocked:ba:f0 bia_f32:blocked:a:f0 dst_f32:blocked:ab:f0,mb128ic16oc2,0.00512695
dnnl_verbose,exec,cpu,inner_product,gemm:jit,forward_inference,src_f32:blocked:ab:f0 wei_f32:blocked:ba:f0 bia_f32:blocked:a:f0 dst_f32:blocked:ab:f0,post_ops:eltwise_relu,mb128ic2oc3136,2.96802
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:cdba:f0 dst_f32:blocked:abcd8a8b:f0,64x64x3x3,0.0251465
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:acdb:f0 dst_f32:blocked:abcd8b:f0,128x64x7x7,0.787109
dnnl_verbose,exec,cpu,convolution,jit:avx2,backward_data,src_f32:blocked:abcd8b:f0 wei_f32:blocked:abcd8a8b:f0 bia_undef:undef:f0 dst_f32:blocked:abcd8b:f0,alg:convolution_direct,mb128_ic64oc64_ih14oh7kh3sh2dh0ph0_iw14ow7kw3sw2dw0pw0,9.69995
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:abcd8b:f0 dst_f32:blocked:acdb:f0,128x64x14x14,2.89893
dnnl_verbose,exec,cpu,eltwise,jit:avx2,forward_training,data_f32:blocked:abcd:f0 diff_undef:undef:f0,alg:eltwise_relu alpha:0 beta:0,128x14x14x64,0.614014
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:cdba:f0 dst_f32:blocked:abcd8a8b:f0,64x32x3x3,0.0151367
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:acdb:f0 dst_f32:blocked:abcd8b:f0,128x64x14x14,3.52783
dnnl_verbose,exec,cpu,convolution,jit:avx2,backward_data,src_f32:blocked:abcd8b:f0 wei_f32:blocked:abcd8a8b:f0 bia_undef:undef:f0 dst_f32:blocked:abcd8b:f0,alg:convolution_direct,mb128_ic32oc64_ih28oh14kh3sh2dh0ph0_iw28ow14kw3sw2dw0pw0,12.1011
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:abcd8b:f0 dst_f32:blocked:acdb:f0,128x32x28x28,2.14697
dnnl_verbose,exec,cpu,eltwise,jit:avx2,forward_training,data_f32:blocked:abcd:f0 diff_undef:undef:f0,alg:eltwise_relu alpha:0 beta:0,128x28x28x32,1.00903
dnnl_verbose,exec,cpu,reorder,simple:any,undef,src_f32:blocked:cdba:f0 dst_f32:p:blocked:abcd8a8b:f0,32x1x3x3,0.00488281
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:acdb:f0 dst_f32:blocked:abcd8b:f0,128x32x28x28,6.56982
dnnl_verbose,exec,cpu,convolution,jit:avx2,backward_data,src_f32:p:blocked:abcd8b:f0 wei_f32:p:blocked:abcd8a8b:f0 bia_undef:undef:f0 dst_f32:blocked:abcd8b:f0,alg:convolution_direct,mb128_ic1oc32_ih28oh28kh3sh1dh0ph1_iw28ow28kw3sw1dw0pw1,6.87988
dnnl_verbose,exec,cpu,reorder,simple:any,undef,src_f32:p:blocked:abcd8b:f0 dst_f32:blocked:acdb:f0,128x1x28x28,0.666992
dnnl_verbose,exec,cpu,sum,simple:any,undef,src_f32:blocked:abcd:f0 src_f32:blocked:abcd:f0 src_f32:blocked:abcd:f0 src_f32:blocked:abcd:f0 dst_f32:blocked:abcd:f0,128x28x28x1,0.0817871
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:cdba:f0 dst_f32:blocked:acdb8a:f0,32x1x3x3,0.00292969
dnnl_verbose,exec,cpu,convolution,jit:avx2,forward_training,src_f32:blocked:abcd:f0 wei_f32:blocked:acdb8a:f0 bia_undef:undef:f0 dst_f32:blocked:abcd8b:f0,alg:convolution_direct,mb128_ic1oc32_ih28oh28kh3sh1dh0ph1_iw28ow28kw3sw1dw0pw1,1.34204
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:acdb:f0 dst_f32:blocked:abcd8b:f0,128x32x28x28,3.01611
dnnl_verbose,exec,cpu,reorder,simple:any,undef,src_f32:blocked:acdb:f0 dst_f32:p:blocked:abcd8b:f0,128x1x28x28,0.464111
dnnl_verbose,exec,cpu,eltwise,jit:avx2,backward_data,data_f32:blocked:abcd8b:f0 diff_f32:blocked:abcd8b:f0,alg:eltwise_relu alpha:0 beta:0,128x32x28x28,2.26904
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:acdb:f0 dst_f32:blocked:abcd8b:f0,128x64x14x14,1.04614
dnnl_verbose,exec,cpu,convolution,jit:avx2,backward_weights,src_f32:blocked:abcd8b:f0 wei_f32:blocked:abcd8b8a:f0 bia_undef:undef:f0 dst_f32:blocked:abcd8b:f0,alg:convolution_direct,mb128_ic32oc64_ih28oh14kh3sh2dh0ph0_iw28ow14kw3sw2dw0pw0,2.33911
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:abcd8b8a:f0 dst_f32:blocked:cdba:f0,64x32x3x3,0.0158691
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:cdba:f0 dst_f32:blocked:abcd8b8a:f0,64x32x3x3,0.013916
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:acdb:f0 dst_f32:blocked:abcd8b:f0,128x32x28x28,7.34912
dnnl_verbose,exec,cpu,convolution,jit:avx2,forward_training,src_f32:blocked:abcd8b:f0 wei_f32:blocked:abcd8b8a:f0 bia_undef:undef:f0 dst_f32:blocked:abcd8b:f0,alg:convolution_direct,mb128_ic32oc64_ih28oh14kh3sh2dh0ph0_iw28ow14kw3sw2dw0pw0,11.5352
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:acdb:f0 dst_f32:blocked:abcd8b:f0,128x64x14x14,0.933105
dnnl_verbose,exec,cpu,eltwise,jit:avx2,backward_data,data_f32:blocked:abcd8b:f0 diff_f32:blocked:abcd8b:f0,alg:eltwise_relu alpha:0 beta:0,128x64x14x14,0.945068
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:acdb:f0 dst_f32:blocked:abcd8b:f0,128x64x7x7,0.780029
dnnl_verbose,exec,cpu,convolution,jit:avx2,backward_weights,src_f32:blocked:abcd8b:f0 wei_f32:blocked:abcd8b8a:f0 bia_undef:undef:f0 dst_f32:blocked:abcd8b:f0,alg:convolution_direct,mb128_ic64oc64_ih14oh7kh3sh2dh0ph0_iw14ow7kw3sw2dw0pw0,1.15698
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:abcd8b8a:f0 dst_f32:blocked:cdba:f0,64x64x3x3,0.0258789
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:cdba:f0 dst_f32:blocked:abcd8b8a:f0,64x64x3x3,0.0200195
dnnl_verbose,exec,cpu,convolution,jit:avx2,forward_training,src_f32:blocked:abcd8b:f0 wei_f32:blocked:abcd8b8a:f0 bia_undef:undef:f0 dst_f32:blocked:abcd8b:f0,alg:convolution_direct,mb128_ic64oc64_ih14oh7kh3sh2dh0ph0_iw14ow7kw3sw2dw0pw0,5.6731
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:abcd8b:f0 dst_f32:blocked:acdb:f0,128x64x7x7,0.27002
dnnl_verbose,exec,cpu,eltwise,jit:avx2,backward_data,data_f32:blocked:ab:f0 diff_f32:blocked:ab:f0,alg:eltwise_relu alpha:0 beta:0,128x3136,0.307129
dnnl_verbose,exec,cpu,sum,simple:any,undef,src_f32:blocked:ab:f0 src_f32:blocked:ab:f0 src_f32:blocked:ab:f0 dst_f32:blocked:ab:f0,128x2,0.00195312
dnnl_verbose,exec,cpu,sum,simple:any,undef,src_f32:blocked:ab:f0 src_f32:blocked:ab:f0 dst_f32:blocked:ab:f0,128x2,0.000976562
dnnl_verbose,exec,cpu,sum,simple:any,undef,src_f32:blocked:ab:f0 src_f32:blocked:ab:f0 dst_f32:blocked:ab:f0,128x16,0.000976562
dnnl_verbose,exec,cpu,eltwise,jit:avx2,backward_data,data_f32:blocked:ab:f0 diff_f32:blocked:ab:f0,alg:eltwise_relu alpha:0 beta:0,128x16,0.00390625
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:acdb:f0 dst_f32:blocked:abcd8b:f0,128x64x7x7,0.231201
dnnl_verbose,exec,cpu,eltwise,jit:avx2,backward_data,data_f32:blocked:abcd8b:f0 diff_f32:blocked:abcd8b:f0,alg:eltwise_relu alpha:0 beta:0,128x64x7x7,0.28418
dnnl_verbose,exec,cpu,convolution,jit:avx2,backward_weights,src_f32:blocked:abcd8b:f0 wei_f32:blocked:abcd8b8a:f0 bia_f32:blocked:a:f0 dst_f32:blocked:abcd8b:f0,alg:convolution_direct,mb128_ic32oc64_ih14oh7kh3sh2dh0ph0_iw14ow7kw3sw2dw0pw0,0.559082
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:abcd8b8a:f0 dst_f32:blocked:cdba:f0,64x32x3x3,0.0100098
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32:blocked:abcd8b8a:f0 dst_f32:blocked:abcd8a8b:f0,64x32x3x3,0.012207
dnnl_verbose,exec,cpu,convolution,jit:avx2,backward_data,src_f32:blocked:abcd8b:f0 wei_f32:blocked:abcd8a8b:f0 bia_undef:undef:f0 dst_f32:blocked:abcd8b:f0,alg:convolution_direct,mb128_ic32oc64_ih14oh7kh3sh2dh0ph0_iw14ow7kw3sw2dw0pw0,3.57495
dnnl_verbose,exec,cpu,eltwise,jit:avx2,backward_data,data_f32:blocked:abcd8b:f0 diff_f32:blocked:abcd8b:f0,alg:eltwise_relu alpha:0 beta:0,128x32x14x14,0.571045
dnnl_verbose,exec,cpu,reorder,simple:any,undef,src_f32:blocked:acdb:f0 dst_f32:p:blocked:abcd8b:f0,128x1x28x28,0.368164
``` |
tensorflow/tensorflow | DETECTION_POSTPROCESS operator missing | Bug |

**TensorFlow Micro system information**
- Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary): source
- TensorFlow version (commit SHA if source): 9aff666a8db689168e2d3aaef17b8a252791ada7
- Target platform (e.g. Arm, Mbed OS, Arduino Nano 33, etc.): Arduino Nano 33

**Describe the problem**
The detection postprocess operator is missing from TFLu. It is needed for, e.g., SSD MobileNet V2.

**Please provide the exact sequence of commands/steps when you ran into the problem** |
tensorflow/tensorflow | TF1 Keras model errors on loading using TF2: IndexError: list index out of range | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.3.1
- Python version: 3.7.3

**Describe the current behavior**
When trying to load the Sequential model (here) using `tf.keras.models.load_model` in TF 2.3.1, an error is thrown at the following location:

```bash
.local/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py in _should_skip_first_node(layer)
   1031   return (isinstance(layer, Functional) and
   1032           # Filter out Sequential models without an input shape.
-> 1033           isinstance(layer._layers[0], input_layer_module.InputLayer))
   1034
   1035

IndexError: list index out of range
```

The model is believed to have been trained using Keras under TF 1.9; the structure of the model can be found here, and here's the code for training (here). You can find the full stack trace and run code under TF 2.3.1 (link). Then I downgraded to TF 2.2 and 2.1; with the same code as above, it throws the error just as in #35934. Then I downgraded to TF 2.0; the code executed indefinitely, and finally I had to manually stop it:

```bash
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py in IsMapping(o)
   2569
   2570
-> 2571     return _pywrap_tensorflow_internal.IsMapping(o)
   2572
   2573 def IsMappingView(o):

KeyboardInterrupt
```

Here you can find the full stack trace from when I stopped the code under TF 2.0. Then I had no choice but to use TF 1.15.4 and Keras 2.3.1, and finally it worked out fine: input/output summaries etc. were all parsed correctly, and I was able to run data through the model.

**Describe the expected behavior**
I hope the bug can be resolved so that TF2 can enhance its support for older models. |
tensorflow/tensorflow | Spurious tf.function retracing warning when developing Keras layers in Colab | Bug |

Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

**Describe the current behavior**
Developing in Colab with tf.keras, I always get these random, seemingly unrelated warnings.

**Describe the expected behavior**
No warnings.

**Standalone code to reproduce the issue**
Code that generates the issue:

```python
import tensorflow as tf

for ii in range(10):
    # Notice: class definition inside the loop
    class MyLayer(tf.keras.layers.Layer):
        def __init__(self, **kwargs):
            super(MyLayer, self).__init__(**kwargs)

        def call(self, inputs, training=None):
            return 2 * inputs

    i1 = tf.keras.Input(shape=(1,))
    logits = MyLayer()(i1)
    model = tf.keras.Model(inputs=i1, outputs=logits)
    model.predict(x=[0.0, 1.0, 3.0], batch_size=1)
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
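The mechanism behind the warning: tf.function keys its trace cache on, among other things, the types of the arguments, and a class defined inside a loop is a brand-new type on every iteration, so the cache never hits and every iteration retraces. A toy pure-Python sketch of that cache behavior (the `trace_cache`/`traced_call` names are hypothetical stand-ins for tf.function's internal trace cache, not real TensorFlow APIs):

```python
trace_cache = {}
trace_count = 0

def traced_call(obj):
    # Stand-in for tf.function: re-"trace" whenever a new type is seen.
    global trace_count
    if type(obj) not in trace_cache:
        trace_count += 1
        trace_cache[type(obj)] = trace_count
    return trace_cache[type(obj)]

classes = []
for _ in range(3):
    class MyLayer:      # redefined each iteration -> a distinct type object
        pass
    classes.append(MyLayer)
    traced_call(MyLayer())

print(len(set(classes)))  # 3 distinct classes despite identical source
print(trace_count)        # 3 "retraces", one per loop iteration
```

Moving the class definition above the loop makes every iteration share one type, which is why hoisting it silences the warning.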
tensorflowtensorflow | Add int16 support to the CMSIS-NN version of softmax | Bug | TensorFlow Micro: the softmax reference kernel has been updated with int16 support, and the CMSIS-NN version should be updated with int16 support as well.
tensorflowtensorflow | tf.numpy_function does not support symbolic tensors | Bug | System information. I have written custom code (see below). OS: Red Hat Enterprise Linux Server release 7.7 (Maipo). TensorFlow installed from binary (pip). TensorFlow version: v2.3.0-rc2-23-gb36436b087 2.3.0. Python version: 3.6.9. Describe the current behavior: calling tf.numpy_function with a symbolic tensor crashes (see below). This prevents tf.numpy_function from being used directly when building a Keras model with the functional API. Describe the expected behavior: calling tf.numpy_function with a symbolic tensor should return the symbolic tensor which is the result of the operation. Standalone code to reproduce the issue:

```python
import tensorflow as tf, numpy as np
tf.__version__, np.__version__  # ('2.3.0', '1.18.5')

# Create a symbolic tensor
x = tf.keras.layers.Input(shape=(2,))
# Calling tf.sin on it gives a symbolic tensor as output
tf.sin(x)
# Calling tf.numpy_function with np.sin fails
tf.numpy_function(np.sin, [x], tf.float32)
```

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/loic/venv/tf/lib64/python3.6/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "/home/loic/venv/tf/lib64/python3.6/site-packages/tensorflow/python/ops/script_ops.py", line 632, in numpy_function
    return py_func_common(func, inp, Tout, stateful=True, name=name)
  File "/home/loic/venv/tf/lib64/python3.6/site-packages/tensorflow/python/ops/script_ops.py", line 519, in py_func_common
    result = func(*[np.array(x) for x in inp])
  File "/home/loic/venv/tf/lib64/python3.6/site-packages/tensorflow/python/framework/ops.py", line 848, in __array__
    " a NumPy call, which is not supported".format(self.name))
NotImplementedError: Cannot convert a symbolic Tensor (input_2:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
```

Other info / logs: tf.numpy_function works as expected under other circumstances:

```python
# With an eager tensor
x = tf.constant([0., 1.])
tf.numpy_function(np.sin, [x], tf.float32)

# When eager execution is disabled
import tensorflow as tf, numpy as np
tf.python.framework.ops.disable_eager_execution()
x = tf.keras.layers.Input(shape=(2,))
tf.numpy_function(np.sin, [x], tf.float32)  # <tf.Tensor ..., dtype=float32>

# When wrapping tf.numpy_function inside a Keras Lambda layer
import tensorflow as tf, numpy as np
x = tf.keras.layers.Input(shape=(2,))
tf.keras.layers.Lambda(lambda t: tf.numpy_function(np.sin, [t], tf.float32))(x)  # <tf.Tensor ..., dtype=float32>
```
tensorflowtensorflow | NNAPI delegate bug | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. Mobile device: Xiaomi 9 Pro, Snapdragon 855, Android Q. TensorFlow installed from (source or binary): binary. TensorFlow version: 1.15.0. Python version: 3.7.9. Describe the current behavior: I have several questions. 1. When I run the model mobilenet_v1_1.0_224_quant.tflite from the TFLite examples and select the NNAPI device, there is log output like the following in logcat. All layers of the model are supported by NNAPI, so what do these outputs mean?

```
2020-09-24 20:54:57.028 12125-12127 E hta_unnhal: convolution_2d tensor_quant8 is not supported
2020-09-24 20:54:57.028 12125-12127 E hta_unnhal: depthwise_convolution_2d tensor_quant8 is not supported
...
2020-09-24 20:54:57.029 12125-12127 E hta_unnhal: average_pooling_2d tensor_quant8 is not supported
2020-09-24 20:54:57.029 12125-12127 E hta_unnhal: reshape tensor_quant8 is not supported
2020-09-24 20:54:57.029 12125-12127 E hta_unnhal: softmax tensor_quant8 is not supported
```

(The convolution/depthwise-convolution pair of lines repeats for every such layer in the model, followed by the same message for average pooling, reshape, and softmax; the whole block is printed three times over.)

2. When I run the model efficientnet_lite0_int8.tflite from the examples, I find that performance using NNAPI is much slower than using the CPU. However, all the layers are supported by NNAPI, so why? 3. About NNAPI: the following two APIs cannot both be set at the same time, otherwise the error below occurs. So which API is valid, or are both valid but they just cannot be set together?

```
tfliteOptions.addDelegate(nnApiDelegate);
tfliteOptions.setUseNNAPI(true);
// Internal error: Failed to apply delegate: ModifyGraphWithDelegate is disallowed when graph is immutable.
```

4. How can I set GPU inference to use fp16 or fp32? I find the following API has been deprecated: setAllowFp16PrecisionForFp32(boolean allow). 5. I have tried to quantize my model to 8-bit using the following code, but I find that some conv ops' weights are still fp32 while other conv ops' weights are int8. How can I solve it?

```python
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_quant_model = converter.convert()
```

Describe the expected behavior: 1. resolve the "not supported" log output in logcat; 2. NNAPI's performance is not slower than CPU; 3. use the proper API to enable the NNAPI delegate; 4. I can set fp16 or fp32 in the GPU delegate; 5. quantize the model to 8-bit successfully. Thanks very much, looking forward to your reply.
tensorflowtensorflow | Ethos-U scratch tensor not allocated | Bug | TensorFlow Micro. System information. TensorFlow version (commit SHA if source): b023b033299a2e88a8f25980e234144657df5fc2. Target platform: Ethos-U. Describe the problem: Ethos-U relies on differentiating between operator input tensors that are subgraph inputs and those that are not. Depending on the offline planner, one or two additional operator input tensors are used, similar to scratch tensors but part of the tflite file. We want these to remain input tensors to the operator, but not inputs to the subgraph. There has been a workaround for this (see PR); however, it does not work if there are any CPU operators before the Ethos-U operator. A more general solution is also wanted.
tensorflowtensorflow | Duplicate node name in graph: 'ones' | Bug | Hi, I have the following error testing the YOLOv3 example:

```
2020-09-24 11:28:23.492471: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Traceback (most recent call last):
  File ".../tensorflow_core/python/framework/ops.py", line 1610, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Duplicate node name in graph: 'ones'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".../TensorFlow-2.x-YOLOv3-master/detection_demo.py", line 22, in <module>
    yolo = Load_Yolo_model()
  File ".../TensorFlow-2.x-YOLOv3-master/yolov3/utils.py", line 84, in Load_Yolo_model
    yolo = Create_Yolo(input_size=YOLO_INPUT_SIZE, CLASSES=YOLO_COCO_CLASSES)
  File ".../TensorFlow-2.x-YOLOv3-master/yolov3/yolov4.py", line 398, in Create_Yolo
    pred_tensor = decode(conv_tensor, NUM_CLASS, i)
  File ".../TensorFlow-2.x-YOLOv3-master/yolov3/yolov4.py", line 427, in decode
    xy_grid = tf.meshgrid(tf.range(output_size), tf.range(output_size))
  File ".../tensorflow_core/python/ops/array_ops.py", line 2954, in meshgrid
    mult_fact = ones(shapes, output_dtype)
  File ".../tensorflow_core/python/ops/array_ops.py", line 2583, in ones
    output = fill(shape, constant(one, dtype=dtype), name=name)
  File ".../tensorflow_core/python/ops/array_ops.py", line 171, in fill
    result = gen_array_ops.fill(dims, value, name=name)
  (... through gen_array_ops.fill, op_def_library.apply_op_helper, func_graph.create_op and ops.py ...)
  File ".../tensorflow_core/python/framework/ops.py", line 1613, in _create_c_op
    raise ValueError(str(e))
ValueError: Duplicate node name in graph: 'ones'
```

System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Windows 7, 64 bits. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.0.0 (I have tested with different TF/Python versions and I get the same error). Python version: 3.6.9. CUDA/cuDNN version: N/A. GPU model and memory: N/A.
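As a rough illustration of the error mechanism (a toy sketch, not the real TensorFlow graph implementation): a graph keeps a registry of node names, and attempting to register the same name a second time in one graph is rejected with exactly this kind of message:

```python
# Hypothetical miniature graph: node names must be unique per graph.
class TinyGraph:
    def __init__(self):
        self.nodes = {}

    def create_op(self, name):
        if name in self.nodes:
            raise ValueError("Duplicate node name in graph: '%s'" % name)
        self.nodes[name] = object()

g = TinyGraph()
g.create_op("ones")
try:
    g.create_op("ones")  # second node with the same name
    duplicated = False
except ValueError:
    duplicated = True
# duplicated == True
```

In the real failure, two ops both ended up asking for the literal name "ones" in the same graph instead of being uniquified (e.g. to "ones_1"), which is why building the YOLO decode head twice triggers it.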
tensorflowtensorflow | tf.TensorScatterUpdate conversion to TFLite | Bug | System information. OS platform and distribution: Windows 10. TensorFlow installed from (source or binary): Python binary. TensorFlow version (or github SHA if from source): 2.3. Provide the text output from tflite_convert: Error: 'tf.TensorScatterUpdate' op is neither a custom op nor a flex op. I'm already using TF ops only.
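For reference, the semantics of the op failing to convert can be sketched in NumPy (the helper name below is hypothetical; `tf.tensor_scatter_nd_update` is the real API the report concerns):

```python
import numpy as np

def tensor_scatter_update(tensor, indices, updates):
    """NumPy sketch of tf.tensor_scatter_nd_update semantics:
    copy `tensor`, then write updates[i] at position indices[i]."""
    out = np.array(tensor, copy=True)
    for idx, upd in zip(indices, updates):
        out[tuple(idx)] = upd
    return out

result = tensor_scatter_update(np.zeros(4, dtype=np.int64),
                               indices=[[1], [3]], updates=[9, 10])
# result == [0, 9, 0, 10]
```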
tensorflowtensorflow | Keras saved model gives different accuracy compared to the original model | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 20.04. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.3.0. Python version: 3.8.2. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: NVIDIA driver v450.66, CUDA 11.0. GPU model and memory: 2080 Ti, 11 GB. Describe the expected behavior: the following code is supposed to output the same accuracy from model and loaded_model, but somehow they're different. If I run model.predict instead, they're consistent, though.

```python
import tensorflow as tf
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(10000, 10)
y = np.random.choice([0, 1], 10000)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10)
model.save('test_model')

loaded_model = tf.keras.models.load_model('test_model')
print(model.evaluate(X_test, y_test))
print(loaded_model.evaluate(X_test, y_test))
```
tensorflowtensorflow | ValueError: Unknown layer, when loading a model using tf.keras.models.load_model | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): True. OS platform and distribution: Colab. TensorFlow version: latest. Hi, I am facing a model-loading issue with tf.keras.models.load_model. I saved a custom Keras model named CustomModel with model.save('model.h5'), and then I tried to load it. 1) new_model = keras.models.load_model('model.h5', custom_objects={'CustomModel': CustomModel}) — error:

```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/generic_utils.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name)
    294   cls = get_registered_object(class_name, custom_objects, module_objects)
    295   if cls is None:
--> 296     raise ValueError('Unknown ' + printable_module_name + ': ' + class_name)
    297
    298   cls_config = config['config']
ValueError: Unknown layer: Mean
```

2) new_model = keras.models.load_model('model.h5', custom_objects={'CustomModel': CustomModel, 'Mean': keras.metrics.Mean}) — error:

```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py in load_weights_from_hdf5_group(f, layers)
    684       'containing ' + str(len(layer_names)) +
    685       ' layers into a model with ' + str(len(filtered_layers)) +
--> 686       ' layers.')
    687
    688   # We batch weight value assignments in a single backend call
ValueError: You are trying to load a weight file containing 2 layers into a model with 3 layers.
```

I made a dummy notebook; it looks like a bug in tf.keras.models.load_model.
tensorflowtensorflow | Cannot convert CenterNet keypoint model-zoo TF2 models to TFLite | Bug | System information. OS platform and distribution: Ubuntu 18.04.4 LTS. TensorFlow installed from (source or binary): binary (tf-nightly). TensorFlow version: 2.4.0-dev20200907. Python version: 3.6.9. CUDA/cuDNN version: 11.0 / 8.0. Describe the current behavior: when I try to convert a model from the TensorFlow 2 Detection Model Zoo into a TFLite version, I get some errors. Edit: this is only the case for models using CenterNet and keypoints; all other models are OK. Describe the expected behavior: conversion is successful. Standalone code to reproduce the issue: download centernet_resnet50_v1_fpn_keypoints_512x512 from the model zoo, uncompress the archive, and run: tflite_convert --saved_model_dir=centernet_resnet50_v1_fpn_512x512_kpts_coco17_tpu-8/saved_model --output_file=model.tflite. I get the following error:

```
loc(callsite(callsite("map/TensorArrayV2_2@__inference_call_13919" at "StatefulPartitionedCall@__inference_signature_wrapper_15800") at "StatefulPartitionedCall")): error: requires element_shape to be 1D tensor during TF Lite transformation pass
loc(...same location...): error: failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal
note: see current operation: %456 = "tf.TensorListReserve"(%67, %77) {device = ""} : (tensor<...>, tensor<...>) -> tensor<...>
```

```
Traceback (most recent call last):
  File "/home/biroute/.local/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 199, in toco_convert_protos
    enable_mlir_converter)
  File "/home/biroute/.local/lib/python3.6/site-packages/tensorflow/lite/python/wrap_toco.py", line 38, in wrapped_toco_convert
    enable_mlir_converter)
Exception: <the two MLIR errors above>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/biroute/.local/bin/tflite_convert", line 8, in <module>
    sys.exit(main())
  File "/home/biroute/.local/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 640, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/home/biroute/.local/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 623, in run_main
    _convert_tf2_model(tflite_flags)
  File "/home/biroute/.local/lib/python3.6/site-packages/tensorflow/lite/python/tflite_convert.py", line 239, in _convert_tf2_model
    tflite_model = converter.convert()
  File "/home/biroute/.local/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 726, in convert
    ... (through lite.py line 643 and convert.py line 573, toco_convert_impl) ...
  File "/home/biroute/.local/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 202, in toco_convert_protos
    raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <the same two MLIR errors as above: "requires element_shape to be 1D tensor during TF Lite transformation pass" and "failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal">
```
tensorflowtensorflow | Bug in tf.math.rsqrt | Bug | Hello. On TF 2.3, about the function tf.math.rsqrt(x, name=None): the documentation says x is a tf.Tensor that must be one of the following types: bfloat16, half, float32, float64, int32. When run with the int32 type:

```python
import tensorflow as tf
x = tf.constant([2, 0, 2])
tf.math.rsqrt(x)
```

I get the following error:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Value for attr 'T' of int32 is not in the list of allowed values: bfloat16, half, float, double, complex64, complex128
  ; NodeDef: {{node Rsqrt}}; Op<name=Rsqrt; signature=x:T -> y:T; attr=T:type,allowed=[DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128]> [Op:Rsqrt]
```

The list raised by the error and the list given in the documentation are not the same. So it is either an error in the documentation or a bug in the implementation of the function. Thanks.
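For reference, what the op computes on its actually-allowed floating-point types is just an element-wise reciprocal square root; a minimal NumPy sketch (the helper name is illustrative):

```python
import numpy as np

def rsqrt(x):
    """Reference behavior of an rsqrt op on floating-point input:
    element-wise 1 / sqrt(x)."""
    return 1.0 / np.sqrt(np.asarray(x, dtype=np.float64))

vals = rsqrt([4.0, 16.0, 64.0])
# vals == [0.5, 0.25, 0.125]
```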
tensorflowtensorflow | tf.dynamic_stitch gives wrong shape and outputs raw memory | Bug | System information: Linux Ubuntu 20.04; TensorFlow 2.3.0 installed from binary; Python version 3.8.2 (default, Jul 16 2020, 14:00:26) [GCC 9.3.0]; no GPU available. Describe the current behavior: when stitching empty partitions, the output shape is wrong, and the result contains values like 2.4e-44 that come from uninitialized memory and will typically vary per execution. There is nothing special about [1, 2]. Describe the expected behavior: from the documentation, merged[indices[m][i]] = data[m][i], so we would expect only the explicitly written elements. Extra: I don't know if it's a security concern, but the amount of data read can be arbitrarily large:

```python
size = 8
tf.dynamic_stitch([1], [tf.zeros([1, size])])
```

Edit: simplified the code.
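A NumPy sketch of the documented behavior the report appeals to (merged[indices[m][i]] = data[m][i]); the helper name is hypothetical, and this is only scalar-per-index for simplicity:

```python
import numpy as np

def dynamic_stitch(indices, data):
    """NumPy sketch of documented dynamic_stitch semantics: the
    output length is max index + 1, so an empty partition
    contributes nothing and should not inflate the shape or leave
    uninitialized slots."""
    pairs = [(i, d) for idx, dat in zip(indices, data)
             for i, d in zip(idx, dat)]
    out = np.zeros(max(i for i, _ in pairs) + 1)
    for i, d in pairs:
        out[i] = d
    return out

merged = dynamic_stitch([[0, 2], [1]], [[10.0, 30.0], [20.0]])
# merged == [10.0, 20.0, 30.0]
```

Under these semantics, passing an empty partition alongside a non-empty one simply yields the non-empty elements; it never exposes raw memory.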
tensorflowtensorflow | CMSIS-NN conv unit test cases fail for non-unity dilation cases | Bug | TensorFlow Micro. System information. Host OS platform and distribution: Linux Ubuntu 16.04. TensorFlow installed from (source or binary): source. TensorFlow version (commit SHA if source): 712fd5cfe4a5de420a4226c874664423eba4cb1a. Target platform: STM32F4. Describe the problem: the unit test cases that have a dilation parameter not equal to 1 fail for CMSIS-NN. The exact sequence of commands/steps to run into the problem: 1. Remove conv_test from the exclusion filter list in the target makefile for STM32F4. 2. make -f tensorflow/lite/micro/tools/make/Makefile TAGS=cmsis-nn TARGET=stm32f4 test_kernel_fully_connected_test
tensorflowtensorflow | Unexpected accuracy output from tf.keras model.evaluate call on a saved and loaded model | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 18.04. Mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.3.0. Python version: 3.6.9. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: 10.1. GPU model and memory: N/A (run on an x86_64 CPU). Describe the current behavior: a model is trained for a multiclass classification task (labels are a class vector). The evaluate method called on the trained model outputs an accuracy of 0.9579. After saving the model and loading the saved model, calling evaluate on the loaded model outputs an accuracy of 0.0997. Describe the expected behavior: I would have expected the same accuracy output (0.9579) on the loaded model.
tensorflowtensorflow | A commit message should say uint64 | Bug | In the commit message/diff f0675f2568ff9470bcfc1f2bc79c5386 I found that it says int64, but when I checked the API documentation it seems that it should be uint64; please look at the parameter description. (tf2.3 mod / tf2.2 mod)
tensorflowtensorflow | TF 2.3 concrete function spec issue: list index out of range | Bug | In TF 2.3, if we define a model in a class, it errors out with the following error when running predict:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _structured_signature_check_arg_types(self, args, kwargs)
   1778     arg_specs, kwarg_specs = self.structured_input_signature
   1779     for i, (arg, spec) in enumerate(zip(args, arg_specs)):
-> 1780       name = self._function_spec.arg_names[i]
   1781       self._structured_signature_check_arg_type(arg, spec, name)
   1782     for (name, arg) in kwargs.items():

IndexError: list index out of range

Here's a small code sample to reproduce this issue (colab, scrollTo=uerkkqrPnTbG, uniqifier=1). Note: this issue is only in TF 2.3; if you change the TF version in the first line to pip install tensorflow==2.2, this code snippet runs without any issue.
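A minimal pure-Python sketch of the failure pattern visible in that traceback (hypothetical names; this only mirrors the shape of the bug, not TensorFlow's actual implementation): zip() bounds the loop by args and arg_specs, but arg_names is indexed separately with the loop counter, so an arg_names list that is shorter than the zipped inputs raises IndexError.

```python
def check_arg_types(args, arg_specs, arg_names):
    # Mirrors the loop in function.py: the iteration count comes from
    # zip(args, arg_specs), but arg_names is indexed independently.
    checked = []
    for i, (arg, spec) in enumerate(zip(args, arg_specs)):
        name = arg_names[i]  # IndexError when arg_names is shorter
        checked.append((name, arg, spec))
    return checked


# Well-formed bookkeeping works fine.
print(check_arg_types([1, 2], ["int", "int"], ["a", "b"]))

# Mismatched bookkeeping reproduces the crash pattern.
try:
    check_arg_types([1, 2], ["int", "int"], ["a"])
except IndexError as exc:
    print("IndexError:", exc)
```

This suggests why the bug is sensitive to how the concrete function's signature is recorded: the crash is not in the user code, but in the consistency between the spec lists and the name list.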
tensorflowtensorflow | GpuDelegate produces incorrect results for REDUCE_SUM | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub (tag: bug_template). System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Arch Linux, kernel version 5.8.10. Mobile device (if the issue happens on a mobile device): Samsung S10e, Android 10. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): v1.12.1-41975-g46b6537110 2.4.0. Python version: 3.8.5. Bazel version (if compiled from source): 3.5.0. GCC/compiler version (if compiled from source): 10.2.0. CUDA/cuDNN version: CUDA 11.0.3, cuDNN 8.0.2.39. GPU model and memory: 2080 Ti, 11 GB. You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with: 1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)" 2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior: when using the GPU delegate on mobile with REDUCE_SUM, the results are incorrect; the absolute difference is larger than 1e-1. By contrast, if the GPU is not used, the results are correct. Describe the expected behavior: the result difference should be relatively very small.

Standalone code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to a colab/jupyter notebook). The code to generate the graph:

import tensorflow as tf

def generate_buggy_graph(batch_size):
    with tf.Graph().as_default() as graph:
        with tf.compat.v1.Session(graph=graph) as session:
            source = tf.compat.v1.placeholder(tf.float32, shape=(batch_size, 100))
            target = tf.transpose(tf.reduce_sum(tf.transpose(source), axis=1))
            # target = tf.reduce_sum(source, axis=0)
            converter = tf.compat.v1.lite.TFLiteConverter.from_session(session, [source], [target])
            with open('buggy_graph.tflite', 'wb') as writer:
                writer.write(converter.convert())

generate_buggy_graph(256)

To push the generated model to the mobile device I used the following shell command:

adb push buggy_graph.tflite /storage/emulated/0/Android/data/com.example.mobilenn/files/models/BuggyGraph/buggy_graph.tflite

The code to run the graph on mobile, BuggyModel.java:

package com.example.mobilenn.lite.models;

import android.util.Log;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;
import org.tensorflow.lite.nnapi.NnApiDelegate;
import java.io.File;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Locale;

public class BuggyModel {
    public static void run(File basePath) {
        Interpreter.Options interpreterOptions = new Interpreter.Options();
        interpreterOptions.addDelegate(new NnApiDelegate());
        interpreterOptions.addDelegate(new GpuDelegate());
        Interpreter interpreter = new Interpreter(new File(basePath, "buggy_graph.tflite"), interpreterOptions);
        ByteBuffer input = ByteBuffer.allocateDirect(interpreter.getInputTensor(0).numBytes()).order(ByteOrder.nativeOrder());
        ByteBuffer output = ByteBuffer.allocateDirect(interpreter.getOutputTensor(0).numBytes()).order(ByteOrder.nativeOrder());
        float[] stdAnswer = new float[100];
        for (int batchId = 0; batchId < 256; batchId++) {
            for (int channelId = 0; channelId < 100; channelId++) {
                float value = (float) Math.random();
                stdAnswer[channelId] += value;
                input.putFloat(value);
            }
        }
        interpreter.run(input, output);
        for (int channelId = 0; channelId < 100; channelId++) {
            Log.d("BuggyModel", String.format(Locale.getDefault(),
                    "channel %d real %.3f correct %.3f diff %.3f",
                    channelId,
                    output.getFloat(channelId * Float.BYTES),
                    stdAnswer[channelId],
                    output.getFloat(channelId * Float.BYTES) - stdAnswer[channelId]));
        }
    }
}

MainActivity.java:

package com.example.mobilenn;

import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;
import com.example.mobilenn.lite.models.BuggyModel;
import java.io.File;

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        BuggyModel.run(new File(getExternalFilesDir("models"), "BuggyGraph"));
    }
}

Other info / logs (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached). When used with the GPU delegate:

2020-09-20 18:32:39.361 24389-24389 com.example.mobilenn D/BuggyModel: channel 0 real 135.750 correct 135.681 diff 0.069
2020-09-20 18:32:39.361 24389-24389 com.example.mobilenn D/BuggyModel: channel 1 real 131.625 correct 131.705 diff 0.080
2020-09-20 18:32:39.361 24389-24389 com.example.mobilenn D/BuggyModel: channel 2 real 128.500 correct 128.509 diff 0.009
2020-09-20 18:32:39.361 24389-24389 com.example.mobilenn D/BuggyModel: channel 3 real 127.500 correct 127.556 diff 0.056
2020-09-20 18:32:39.362 24389-24389 com.example.mobilenn D/BuggyModel: channel 4 real 126.438 correct 126.508 diff 0.071
2020-09-20 18:32:39.362 24389-24389 com.example.mobilenn D/BuggyModel: channel 5 real 130.125 correct 130.037 diff 0.088
2020-09-20 18:32:39.362 24389-24389 com.example.mobilenn D/BuggyModel: channel 6 real 123.938 correct 123.990 diff 0.052
2020-09-20 18:32:39.362 24389-24389 com.example.mobilenn D/BuggyModel: channel 7 real 128.625 correct 128.717 diff 0.092
2020-09-20 18:32:39.363 24389-24389 com.example.mobilenn D/BuggyModel: channel 8 real 127.063 correct 127.131 diff 0.069
2020-09-20 18:32:39.363 24389-24389 com.example.mobilenn D/BuggyModel: channel 9 real 122.500 correct 122.633 diff 0.133

When used without any delegate, or with the NNAPI delegate:

2020-09-20 18:39:55.368 24973-24973 com.example.mobilenn D/BuggyModel: channel 0 real 135.449 correct 135.449 diff 0.000
2020-09-20 18:39:55.368 24973-24973 com.example.mobilenn D/BuggyModel: channel 1 real 130.922 correct 130.922 diff 0.000
2020-09-20 18:39:55.369 24973-24973 com.example.mobilenn D/BuggyModel: channel 2 real 128.056 correct 128.056 diff 0.000
2020-09-20 18:39:55.369 24973-24973 com.example.mobilenn D/BuggyModel: channel 3 real 133.586 correct 133.586 diff 0.000
2020-09-20 18:39:55.369 24973-24973 com.example.mobilenn D/BuggyModel: channel 4 real 139.664 correct 139.664 diff 0.000
2020-09-20 18:39:55.370 24973-24973 com.example.mobilenn D/BuggyModel: channel 5 real 130.863 correct 130.863 diff 0.000
2020-09-20 18:39:55.370 24973-24973 com.example.mobilenn D/BuggyModel: channel 6 real 127.332 correct 127.332 diff 0.000
2020-09-20 18:39:55.370 24973-24973 com.example.mobilenn D/BuggyModel: channel 7 real 130.422 correct 130.422 diff 0.000
2020-09-20 18:39:55.371 24973-24973 com.example.mobilenn D/BuggyModel: channel 8 real 130.867 correct 130.867 diff 0.000
2020-09-20 18:39:55.371 24973-24973 com.example.mobilenn D/BuggyModel: channel 9 real 122.770 correct 122.770 diff 0.000
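For reference, the "correct" values in the report above are just a per-channel sum over the batch axis, which the Java harness accumulates into stdAnswer. A minimal pure-Python sketch of that reference computation (plain lists instead of tensors):

```python
def reduce_sum_axis0(matrix):
    """Sum a (batch, channels) matrix over the batch axis.

    Returns one running total per channel, exactly as the Java test
    harness accumulates its stdAnswer array.
    """
    channels = len(matrix[0])
    totals = [0.0] * channels
    for row in matrix:
        for c, value in enumerate(row):
            totals[c] += value
    return totals


print(reduce_sum_axis0([[1.0, 2.0], [3.0, 4.0]]))  # -> [4.0, 6.0]
```

With 256 random values in [0, 1) per channel, each total should be around 128, which matches the magnitudes in the logs; any delegate that computes this op correctly should agree with this reference to within floating-point accumulation error, far below the 1e-1 differences reported.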
tensorflowtensorflow | tf.lite.Optimize.DEFAULT: hybrid models are not supported on TFLite Micro | Bug | TensorFlow Micro system information. Host OS platform and distribution (e.g. Linux Ubuntu 16.04): macOS Catalina 10.15.6. TensorFlow installed from (source or binary): 2.3.0. TensorFlow version (commit SHA if source): -. Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): ESP32. Describe the problem: I'm trying to convert a model for TFLite but keep hitting "Hybrid models are not supported on TFLite Micro." Please provide the exact sequence of commands/steps that reproduce the problem. Here is my model:

model = Sequential([
    Conv2D(4, 3, padding='same', activation='relu', input_shape=(img_width, img_height, 1), name='conv_layer1'),
    MaxPooling2D(name='max_pooling1'),
    Conv2D(4, 3, padding='same', activation='relu', name='conv_layer2'),
    MaxPooling2D(name='max_pooling2', pool_size=(2, 2)),
    Flatten(),
    Dense(20, activation='relu', name='hidden_layer'),
    Dense(1, activation='sigmoid', name='output'),
])

And here is the code I am using to convert the model:

converter = tf.lite.TFLiteConverter.from_saved_model(checkpoint_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
model = converter.convert()
open('converted_model.tflite', 'wb').write(model)

I have also tried tf.lite.Optimize.OPTIMIZE_FOR_SIZE, which has the same issue. Removing all optimisations lets it convert. Is there any way to avoid triggering this error with my model? Ideally I would like to optimize my model to make it smaller.
tensorflowtensorflow | Different gradients in TF2 when eager mode is enabled compared to graph mode | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Ubuntu 18.04. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v2.3.0. Python version: v3.8.5. CUDA/cuDNN version: CUDA v11.0, cuDNN v9.1.85. GPU model and memory: GPU NVIDIA Quadro P1000, Intel UHD Graphics 630; memory 2x Samsung M471A4G43MB1-CTD. You can collect some of this information using our environment capture script: tf_env.txt.

Describe the current behaviour: I'm currently porting several TensorFlow v1.x legacy repositories over to TF 2.3 with eager execution enabled. I used the steps in the documentation to do this. Unfortunately, one of the RL agents, which is based on the Lyapunov Actor-Critic architecture of Han et al. 2019, does not train when eager execution is enabled. I did some debugging and it looks like there is a problem with computing the gradients of the squashed Gaussian actor network:

class SquashedGaussianActor(tf.keras.Model):
    def __init__(self, obs_dim, act_dim, hidden_sizes, name, seeds=None, **kwargs):
        """Squashed Gaussian actor network.

        Args:
            obs_dim (int): The dimension of the observation space.
            act_dim (int): The dimension of the action space.
            hidden_sizes (list/array): Array containing the sizes of the hidden layers.
            name (str): The keras module name.
            seeds (list, optional): The random seeds used for the weight
                initialization and the sampling ([weight_seed, sample_seed]).
                Defaults to [None, None].
        """
        super().__init__(name=name, **kwargs)

        # Get class parameters
        self.s_dim = obs_dim
        self.a_dim = act_dim
        self._seed = seeds[0]
        self._initializer = tf.keras.initializers.GlorotUniform(seed=self._seed)  # Seed the weight initializer
        self._tfp_seed = seeds[1]

        # Create fully connected layers
        self.net = tf.keras.Sequential([
            tf.keras.layers.InputLayer(dtype=tf.float32, input_shape=(self.s_dim,), name=name + "/input"),
        ])
        for i, hidden_size_i in enumerate(hidden_sizes):
            self.net.add(tf.keras.layers.Dense(
                hidden_size_i, activation="relu",
                name=name + "/l{}".format(i + 1),
                kernel_initializer=self._initializer))

        # Create mu and log_sigma output layers
        self.mu = tf.keras.Sequential([
            tf.keras.layers.InputLayer(dtype=tf.float32, input_shape=(hidden_sizes[-1],)),
            tf.keras.layers.Dense(act_dim, activation=None, name=name + "/mu", kernel_initializer=self._initializer),
        ])
        self.log_sigma = tf.keras.Sequential([
            tf.keras.layers.InputLayer(dtype=tf.float32, input_shape=(hidden_sizes[-1],)),
            tf.keras.layers.Dense(act_dim, activation=None, name=name + "/log_sigma", kernel_initializer=self._initializer),
        ])

    @tf.function
    def call(self, inputs):
        # Perform forward pass: retrieve inputs
        obs = inputs

        # Perform forward pass through fully connected layers
        net_out = self.net(obs)

        # Calculate mu and log_sigma
        mu = self.mu(net_out)
        log_sigma = self.log_sigma(net_out)
        log_sigma = tf.clip_by_value(log_sigma, log_sigma_min_max[0], log_sigma_min_max[1])

        # Perform re-parameterization trick
        sigma = tf.exp(log_sigma)

        # Create bijectors used in the re-parameterization trick
        squash_bijector = SquashBijector()
        affine_bijector = tfp.bijectors.Shift(mu)(tfp.bijectors.Scale(sigma))

        # Sample from the normal distribution and calculate the action
        batch_size = tf.shape(input=obs)[0]
        base_distribution = tfp.distributions.MultivariateNormalDiag(
            loc=tf.zeros(self.a_dim), scale_diag=tf.ones(self.a_dim))
        epsilon = base_distribution.sample(batch_size, seed=self._tfp_seed)
        raw_action = affine_bijector.forward(epsilon)
        clipped_a = squash_bijector.forward(raw_action)

        # Transform distribution back to the original policy distribution
        reparm_trick_bijector = tfp.bijectors.Chain((squash_bijector, affine_bijector))
        distribution = tfp.distributions.TransformedDistribution(
            distribution=base_distribution, bijector=reparm_trick_bijector)
        clipped_mu = squash_bijector.forward(mu)

        # Return network output and noise sample
        return clipped_a, clipped_mu, distribution.log_prob(clipped_a), epsilon

Although gradients are computed for this network, they are very different in eager mode compared to when the tf.compat.v1.disable_eager_execution flag is used. This is strange since the loss function, the random seeds, the weights/biases and the inputs are equal. Furthermore, the output of a forward pass through the network is also identical in both eager and legacy graph mode. This problem does not seem to exist for the accompanying Lyapunov critic (a modified version of a deep Q-network). I wanted to post it here before posting on StackOverflow, as I was unsure whether this is a translation issue or a bug. I tried searching for possible causes but did not find a solution to my problem.

Describe the expected behaviour: I expected the gradients to be equal in both the script in which eager mode is enabled and the one in which eager mode is disabled. Instead, although the actor losses are equal (1084.2743), the gradients are different. (Decimal points and signs below are reconstructed from the dump.)

Results, eager mode:

grads l1/weights:
[[13.408794, 2.178104, 11.570398, 108.1129, 0., 46.277092],
 [30.369755, 0.23067771, 16.505516, 87.53128, 0., 22.498222]]
grads l1/bias:
[47.802944, 0.4118477, 26.826544, 185.92018, 0., 60.14121]
grads l2/weights:
[[12.058103, 0., 0.33448434, 14.862488, 0., 1.8673693],
 [0.30301327, 0., 0., 18.134262, 0., 0.],
 [58.740616, 0., 1.113349, 119.78081, 0., 2.120421],
 [24.779123, 0., 0.4045168, 66.25852, 0., 0.24545981],
 [0., 0., 0., 0., 0., 0.],
 [3.3655543, 0., 0., 41.697414, 0., 0.]]
grads l2/bias:
[98.14556, 0., 1.5422362, 206.91672, 0., 4.5077815]
grads mu/weights:
[[4.81568050e+00, 2.37027216e+00],
 [0.00000000e+00, 0.00000000e+00],
 [1.84061453e+01, 1.09151885e+01],
 [3.29189644e+01, 2.93549042e+01],
 [0.00000000e+00, 0.00000000e+00],
 [4.64032125e-03, 6.64837938e-03]]
grads mu/bias:
[110.85972, 87.11501]
grads log_sigma/weights:
[[5.8719816e+00, 5.4135699e+00],
 [0.0000000e+00, 0.0000000e+00],
 [2.2099029e+01, 3.2953596e+01],
 [7.5656143e+01, 6.9250507e+00],
 [0.0000000e+00, 0.0000000e+00],
 [6.7091000e-04, 9.1812750e-03]]
grads log_sigma/bias:
[238.21657, 47.906483]

Results, graph mode:

grads l1/weights:
[[1.1570635, 2.2042375, 11.598429, 88.867676, 0., 48.9553],
 [11.854488, 0.2453581, 16.115313, 49.29876, 0., 24.385675]]
grads l1/bias:
[29.577682, 0.16548407, 29.669703, 144.31357, 0., 64.56108]
grads l2/weights:
[[9.905338, 0., 4.7838783, 3.8737621, 0., 3.4614413],
 [0.30002874, 0., 0., 18.411116, 0., 0.],
 [53.221275, 0., 5.242466, 79.01278, 0., 3.936808],
 [23.196867, 0., 0.53559065, 51.048943, 0., 0.45809162],
 [0., 0., 0., 0., 0., 0.],
 [4.0506964, 0., 0., 42.10633, 0., 0.]]
grads l2/bias:
[98.10234, 0., 16.95784, 155.67645, 0., 8.705088]
grads mu/weights:
[[1.62482891e+01, 1.15455017e+01],
 [0.00000000e+00, 0.00000000e+00],
 [2.01459303e+01, 6.31435871e+01],
 [1.06661575e+02, 4.60808792e+01],
 [0.00000000e+00, 0.00000000e+00],
 [3.92461056e-03, 9.14150383e-03]]
grads mu/bias:
[372.89713, 178.99115]
grads log_sigma/weights:
[[7.4654112e+00, 1.1838960e+01],
 [0.0000000e+00, 0.0000000e+00],
 [3.2749507e+01, 6.5140498e+01],
 [8.7392433e+01, 4.8770226e+01],
 [0.0000000e+00, 0.0000000e+00],
 [1.4014873e-03, 9.9075940e-03]]
grads log_sigma/bias:
[289.77673, 198.32103]

Code to reproduce the problem: I placed two small standalone example scripts (tf2_val_grad.py and tf2_val_grad_eager.py) in my repository that can be used to reproduce the problem. The repository also contains a small README.md which explains how to run the code examples.

Other info / logs: tf2_val_grad_eager_terminal_output.txt, tf2_val_grad_terminal_output.txt. Possibly related issue: #27827. Debugging steps so far: trimmed the call function down to see where the gradients converge.
tensorflowtensorflow | Undocumented NotFoundError occurs when using tf.io.gfile.glob to retrieve files in a non-existent directory | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Chrome OS, Linux 5.4.40-04224-g891a6cce2d44. Mobile device (if the issue happens on a mobile device): n/a. TensorFlow installed from: binary (pip3 install tensorflow). TensorFlow version (use command below): v2.3.0-rc2-23-gb36436b087 2.3.0. Python version: 3.7. Bazel version (if compiled from source): n/a. GCC/compiler version (if compiled from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: when globbing files in a directory that does not exist, an undocumented tf.errors.NotFoundError is thrown if we try to wildcard-glob files in a non-existent directory. This is not currently documented in the tf.io.gfile.glob documentation. Describe the expected behavior: we expect that either a ValueError should be thrown, or an empty list returned, since no files are matched. Standalone code to reproduce the issue:

import tensorflow as tf
files = tf.io.gfile.glob('non_existent_dir/*')

Other info / logs: the issue was first discovered through this PR discussion (r490576170) for TFX, where we were expecting a ValueError instead of a NotFoundError.
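For comparison, Python's standard-library glob, which tf.io.gfile.glob is often assumed to mirror, quietly returns an empty list when the pattern's directory does not exist rather than raising; this is the "empty list" behaviour the report suggests as one of the expected outcomes:

```python
import glob

# A pattern rooted in a directory that does not exist simply matches
# nothing; stdlib glob raises no exception here.
matches = glob.glob("definitely_non_existent_dir_12345/*")
print(matches)  # -> []
```

Code that treats "no matches" and "directory missing" as the same condition works unchanged against stdlib glob, which is why the undocumented NotFoundError is surprising when switching to the gfile API.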
tensorflowtensorflow | Model and layers set to non-trainable but weights still adjust | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom code. OS platform and distribution (e.g. Linux Ubuntu 16.04): Google Colab. TensorFlow installed from (source or binary): Google Colab. TensorFlow version (use command below): Google Colab, 2.3.0. Python version: 3.6.9 (default, Jul 17 2020, 12:50:27), GCC 8.4.0. GPU model and memory: Google Colab GPU.

I want to freeze a model, that is, set it to non-trainable. I do so by setting model.trainable and the individual layers to non-trainable. However, when I then fit this model with just 1 epoch, I can see afterwards that the model weights changed. I can see this by checking the model.evaluate output and by comparing the model weights with print(model.trainable_variables) prior to and after model.fit.

Standalone code to reproduce the issue:

import tensorflow as tf
import tensorflow_datasets as tfds
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

(train_x, train_labels), (test_x, test_labels) = tf.keras.datasets.imdb.load_data(num_words=10000)
x_train_padded = pad_sequences(train_x, maxlen=500)
x_test_padded = pad_sequences(test_x, maxlen=500)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(10000, 128, input_length=500),
    tf.keras.layers.Conv1D(128, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              optimizer="adam",
              metrics=[tf.metrics.BinaryAccuracy(threshold=0.0, name="accuracy")])
history = model.fit(x=x_train_padded, y=train_labels,
                    validation_data=(x_test_padded, test_labels),
                    epochs=4, batch_size=128)

This gives the output:

Epoch 4/4
196/196 - 9s 48ms/step - loss: 0.1202 - accuracy: 0.9578 - val_loss: 0.3663 - val_accuracy: 0.8669

I can confirm this with model.evaluate(x_test_padded, test_labels):

782/782 - 3s 4ms/step - loss: 0.3663 - accuracy: 0.8669
[0.36633849143981934, 0.866919994354248]

Now I set the model and the individual layers to non-trainable:

model.trainable = False
for layer in model.layers:
    layer.trainable = False

I checked with model.summary() that trainable params: 0, and fit the model again:

history = model.fit(x=x_train_padded, y=train_labels,
                    validation_data=(x_test_padded, test_labels),
                    epochs=1, batch_size=128)

This gives:

196/196 - 10s 52ms/step - loss: 0.0911 - accuracy: 0.9677 - val_loss: 0.4298 - val_accuracy: 0.8648

I can again check this with model.evaluate(x_test_padded, test_labels):

782/782 - 4s 5ms/step - loss: 0.4298 - accuracy: 0.8648
[0.4298333525657654, 0.8648399710655212]

which clearly shows that the model was adjusted; the val_loss and val_accuracy are not equal to loss: 0.3663, accuracy: 0.8669. Moreover, when I check the model weights with print(model.trainable_variables) before and afterwards, I can see that the model weights got adjusted, even though all parameters should be set to non-trainable.
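One likely explanation (an assumption here, not confirmed in the report) is that Keras only honours changes to `trainable` made before `compile()`: the compiled training function snapshots which variables to update, so flipping the flag afterwards without recompiling has no effect on `fit()`. A pure-Python sketch of that caching behaviour, with an entirely hypothetical `TinyModel` standing in for a Keras model:

```python
class TinyModel:
    """Toy analogue of a model whose train step is built at compile time."""

    def __init__(self):
        self.weight = 1.0
        self.trainable = True
        self._train_step = None  # built lazily, like a compiled train function

    def compile(self):
        # Snapshot the trainable configuration *now*, the way a compiled
        # training function captures its list of variables to update.
        update_weight = self.trainable

        def step():
            if update_weight:
                self.weight += 0.5  # stand-in for a gradient update

        self._train_step = step

    def fit(self):
        if self._train_step is None:
            self.compile()
        self._train_step()


m = TinyModel()
m.compile()
m.fit()
before = m.weight               # weight moved to 1.5

m.trainable = False             # flag flipped, but no re-compile...
m.fit()
after_no_recompile = m.weight   # ...so the weight still moves, to 2.0

m.compile()                     # re-compiling picks up trainable=False
m.fit()
after_recompile = m.weight      # now the weight stays at 2.0
```

If this is indeed the cause, calling `model.compile(...)` again after setting `trainable = False` (as the Keras transfer-learning guide recommends) should make the frozen weights actually stay frozen during `fit()`.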
tensorflowtensorflow | Should we remove the deprecated and ignored field "version" in graph.proto? | Bug | We noticed that the version field in graph.proto has been deprecated since 5 years ago. When we tried to generate GraphDef Scala code with sbt-protoc from this proto file, it produced annoying deprecation warnings during the compilation stage. Should we remove it from TensorFlow's codebase (graph.proto, L24-L27)? According to the description in the codebase: "Deprecated single version field; use versions above instead. Since all GraphDef changes before 'versions' was introduced were forward compatible, this field is entirely ignored." This field seems to be completely unused, no matter whether an old or new version is concerned. We appreciate any feedback. Thanks.
tensorflowtensorflow | Crash when attempting to use XLA conv with complex inputs | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Debian 10.4. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v1.12.1-41557-gae0a324182 2.4.0-dev20200915. Python version: Python 3.8.3. Bazel version (if compiled from source): n/a. GCC/compiler version (if compiled from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a.

Describe the current behavior: TF throws an exception when trying to use tensorflow.compiler.tf2xla.python.xla.conv with complex inputs, and the stack trace indicates it may be a bug in the HLO -> LLVM IR lowering.

Standalone code to reproduce the issue:

import numpy as np
from tensorflow.compiler.tf2xla.python import xla as tfxla
from tensorflow.compiler.xla import xla_data_pb2

proto = xla_data_pb2.ConvolutionDimensionNumbers()
proto.input_batch_dimension = 0
proto.input_feature_dimension = 1
proto.output_batch_dimension = 0
proto.output_feature_dimension = 1
proto.kernel_output_feature_dimension = 0
proto.kernel_input_feature_dimension = 1
proto.input_spatial_dimensions.extend([2, 3])
proto.kernel_spatial_dimensions.extend([2, 3])
proto.output_spatial_dimensions.extend([2, 3])

lhs = np.ones((2, 3, 9, 10), dtype=np.complex64)
rhs = np.ones((3, 3, 4, 5), dtype=np.complex64)
padding = ((0, 0), (0, 0))
window_strides, lhs_dilation, rhs_dilation = (1, 1), (1, 1), (1, 1)
feature_group_count = 1
tfxla.conv(lhs, rhs, window_strides, padding, lhs_dilation, rhs_dilation,
           proto, feature_group_count=feature_group_count, precision_config=None)

Other info / logs (stack trace, with template and argument details elided as "..."):

2020-09-17 16:13:20.385677: E tensorflow/compiler/xla/status_macros.cc:56] INTERNAL: RET_CHECK failure (tensorflow/compiler/xla/service/cpu/cpu_compiler.cc:501) !llvm::verifyModule(*llvm_module, &err_stream) Invalid LLVM IR before optimizations:
Floating-point arithmetic operators only work with floating-point types!
  %60 = fmul reassoc nsz contract %complex64 %57, %59
Floating-point arithmetic operators only work with floating-point types!
  %62 = fadd reassoc nsz contract %complex64 %61, %60
This probably indicates a bug in the HLO -> LLVM IR lowering. Rerun with --xla_dump_to to get the IR.
*** Begin stack trace ***
  xla::status_macros::MakeErrorStream::Impl::GetStatus()
  xla::cpu::CpuCompiler::RunBackend(std::unique_ptr<...>, stream_executor::StreamExecutor*, stream_executor::DeviceMemoryAllocator*)
  xla::Service::BuildExecutable(xla::HloModuleProto const&, std::unique_ptr<...>, xla::Backend*, stream_executor::StreamExecutor*, stream_executor::DeviceMemoryAllocator*)
  xla::LocalService::CompileExecutable(xla::XlaComputation const&, absl::Span<...>, xla::ExecutableBuildOptions const&)
  xla::LocalClient::Compile(xla::XlaComputation const&, absl::Span<...>, xla::ExecutableBuildOptions const&)
  tensorflow::XlaCompilationCache::BuildExecutable(tensorflow::XlaCompiler::Options const&, tensorflow::XlaCompilationResult const&, std::unique_ptr<...>*)
  tensorflow::XlaCompilationCache::CompileImpl(...)
  tensorflow::XlaCompilationCache::CompileSingleOp(tensorflow::XlaCompiler::Options const&, absl::Span<...>, tensorflow::OpKernelContext*, tensorflow::XlaCompiler::CompileOptions const&, tensorflow::XlaCompilationResult const**, xla::LocalExecutable**)
  tensorflow::XlaCompileOnDemandOp::Compile(tensorflow::OpKernelContext*, tensorflow::XlaCompilationResult const**, tensorflow::XlaCompilationCache**, ..., xla::LocalExecutable**)
  tensorflow::XlaCompileOnDemandOp::Compute(tensorflow::OpKernelContext*)
  tensorflow::KernelAndDeviceOp::Run(tensorflow::ScopedStepContainer*, tensorflow::EagerKernelArgs const&, std::vector<...>*, tensorflow::CancellationManager*, absl::optional<...> const&)
  tensorflow::EagerKernelExecute(tensorflow::EagerContext*, absl::InlinedVector<...> const&, absl::optional<...> const&, std::unique_ptr<...> const&, tensorflow::GraphCollector*, tensorflow::CancellationManager*, absl::Span<...>)
  tensorflow::ExecuteNode::Run()
  tensorflow::EagerExecutor::SyncExecute(tensorflow::EagerNode*)
  tensorflow::EagerExecute(tensorflow::EagerOperation*, tensorflow::TensorHandle**, int*)
  tensorflow::EagerOperation::Execute(absl::Span<...>, int*)
  TFE_Execute
  TFE_Py_ExecuteCancelable(TFE_Context*, char const*, char const*, absl::InlinedVector<_object*, ...>*, TFE_CancellationManager*, absl::InlinedVector<...>*, TF_Status*)
  _PyObject_MakeTpCall
  _PyEval_EvalFrameDefault
  _PyEval_EvalCodeWithName
  _PyFunction_Vectorcall
  _PyEval_EvalFrameDefault
  _PyEval_EvalCodeWithName
  _PyFunction_Vectorcall
  _PyEval_EvalFrameDefault
  _PyEval_EvalCodeWithName
  _PyFunction_Vectorcall
  _PyEval_EvalFrameDefault
  _PyEval_EvalCodeWithName
  _PyFunction_Vectorcall
  _PyEval_EvalFrameDefault
  _PyEval_EvalCodeWithName
  PyEval_EvalCode
  PyRun_FileExFlags
  PyRun_SimpleFileExFlags
  Py_BytesMain
  __libc_start_main
  _start
*** End stack trace ***
tensorflowtensorflow | Usage of intermediate layer causes an exception | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Windows 10. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.3. Python version: 3.7.5. CUDA/cuDNN version: 10.1. GPU model and memory: RTX 2080 Ti.

Describe the current behavior: the toy example provided below shows usage of an intermediate NN layer in the loss function and in the accuracy-validation logic. On TF 2.3 it causes the following exception:

Traceback (most recent call last):
  File "C:\pyproject\tf_issue\main.py", line 53, in <module>
    hits += tf.reduce_sum(tf.where(output - target_batch + x_slice < 0.01, 1.0, 0.0)).numpy()
AttributeError: 'Tensor' object has no attribute 'numpy'

I marked the line which causes the error. Without usage of the intermediate layer it works. Please note these two lines in the script below:

hits += tf.reduce_sum(tf.where(output - target_batch + x_slice < 0.01, 1.0, 0.0)).numpy()  # this line causes the error
hits += tf.reduce_sum(tf.where(output - target_batch < 0.01, 1.0, 0.0)).numpy()  # if x_slice is omitted, the script works fine

Comment out the first one and uncomment the second to get the script to work.

Standalone code to reproduce the issue:

import tensorflow as tf

batch_size = 128

inputs = tf.keras.Input(shape=(None, 1))
x = tf.keras.layers.Dense(1)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# A toy dataset of points around 3 * x + 2
num_examples = 2000
inputs_data = tf.random.normal([num_examples])
noise = tf.random.normal([num_examples])
outputs_data = inputs_data * 3 + 2 + noise

training_inputs = tf.reshape(inputs_data[:1500], (1500, 1))
training_outputs = tf.reshape(outputs_data[:1500], (1500, 1))
training_inputs = tf.data.Dataset.from_tensor_slices(training_inputs).batch(batch_size)
training_outputs = tf.data.Dataset.from_tensor_slices(training_outputs).batch(batch_size)

test_inputs = tf.reshape(inputs_data[1500:], (500, 1))
test_outputs = tf.reshape(outputs_data[1500:], (500, 1))
test_inputs = tf.data.Dataset.from_tensor_slices(test_inputs).batch(batch_size)
test_outputs = tf.data.Dataset.from_tensor_slices(test_outputs).batch(batch_size)

def loss(model, inputs, targets):
    output = model(inputs)
    output = output[0]  # take the first output; in general the model can have several outputs
    global x
    x_slice = x[0]
    error = output - targets + x_slice
    return tf.reduce_mean(tf.square(error))

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
epochs = 3
for i in range(epochs):
    for input_batch, target_batch in zip(training_inputs, training_outputs):
        with tf.GradientTape() as tape:
            loss_value = loss(model, input_batch, target_batch)
        grads = tape.gradient(loss_value, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print("Epoch:", i)

hits = 0
total = 0
for input_batch, target_batch in zip(test_inputs, test_outputs):
    output = model(input_batch)
    output = output[0]  # take the first output; in general the model can have several outputs
    x_slice = x[0]
    hits += tf.reduce_sum(tf.where(output - target_batch + x_slice < 0.01, 1.0, 0.0)).numpy()  # this line causes the error
    # hits += tf.reduce_sum(tf.where(output - target_batch < 0.01, 1.0, 0.0)).numpy()  # if x_slice is omitted, the script works fine
    total += input_batch.shape[0]

print(hits)
print("accuracy:", hits / total)

Any idea how to fix the issue? Thanks in advance.
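Separate from the symbolic-tensor problem above, the hit-counting logic itself is simple; a minimal pure-Python equivalent of the tolerance test, assuming scalar predictions and targets (the `count_hits` name is hypothetical, not from the report):

```python
def count_hits(predictions, targets, tol=0.01):
    """Count predictions that land within `tol` of their targets."""
    return sum(1 for p, t in zip(predictions, targets) if abs(p - t) < tol)


print(count_hits([2.001, 3.5, 5.0], [2.0, 3.0, 5.005]))  # -> 2
```

Note this uses a symmetric absolute-difference test, whereas the report's `output - target_batch + x_slice < 0.01` is one-sided; which of the two the author intended is not clear from the flattened source.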
tensorflowtensorflow | Failed to run on the given Interpreter: Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. Node number 62011 (FlexSize) failed to prepare. | Bug | System information: OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10. TensorFlow installed from (source or binary): pip install tf-nightly. TensorFlow version (or GitHub SHA if from source): 2.4.0-dev20200916.

Provide the text output from tflite_convert: for an Android Studio project that gives the following error:

java.lang.IllegalArgumentException: Internal error: Failed to run on the given Interpreter: Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. Node number 62011 (FlexSize) failed to prepare.
    at org.tensorflow.lite.NativeInterpreterWrapper.run(Native Method)
    at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:163)
    at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:360)
    at edu.ilab.covid_id.localize.tflite.TFLiteObjectDetectionAPIModel.recognizeImage(TFLiteObjectDetectionAPIModel.java:202)
    at edu.ilab.covid_id.ir.ConnectFlirActivity$7.run(ConnectFlirActivity.java:673)
    at android.os.Handler.handleCallback(Handler.java:789)
    at android.os.Handler.dispatchMessage(Handler.java:98)
    at android.os.Looper.loop(Looper.java:164)
    at android.os.HandlerThread.run(HandlerThread.java:65)

Standalone code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to a colab/jupyter notebook, and also please include a link to a GraphDef or the model if possible). Any other info / logs (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached): here is the colab used to convert the saved model to TFLite; here is the drive with the saved model and the TFLite; here is the Java code which uses the TFLite model: ConnectFlirActivity.zip
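A commonly suggested conversion configuration for this class of error (an assumption here, not verified against this particular model) is to enable Select TF ops so that ops like FlexSize are kept as Flex ops the app can execute; a sketch of the converter side, where `saved_model_dir` is a placeholder path:

```python
import tensorflow as tf

# saved_model_dir is a hypothetical path to the SavedModel being converted
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,   # regular TFLite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,     # fall back to TF ops such as FlexSize
]
tflite_model = converter.convert()
```

On the app side, the Flex delegate must also be linked (for example via the org.tensorflow:tensorflow-lite-select-tf-ops Android dependency); converting with SELECT_TF_OPS alone does not remove the runtime requirement that the error message describes.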
tensorflowtensorflow | TFLite converter aborts (dumps core) in BroadcastAdd4DSlow | Bug | System information: OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Pop!_OS 20.04 LTS; TensorFlow installed from (source or binary): binary; TensorFlow version (or GitHub SHA if from source): 2.3.0. Command used to run the converter or code if you're using the Python API: if possible, please share a link to Colab/Jupyter/any notebook. When run on my Pop!_OS system under gdb I get the backtrace:

(gdb) run tflite.py
(gdb) where
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007ffff7ddb859 in __GI_abort () at abort.c:79
#2  0x00007ffbf82ccb1f in tflite::reference_ops::BroadcastAdd4DSlow(tflite::ArithmeticParams const&, tflite::RuntimeShape const&, float const*, tflite::RuntimeShape const&, float const*, tflite::RuntimeShape const&, float*) () from /home/juan/.local/lib/python3.8/site-packages/tensorflow/lite/python/optimize/_pywrap_tensorflow_lite_calibration_wrapper.so
#3  0x00007ffbf82d54ec in void tflite::ops::builtin::add::EvalAdd<(tflite::ops::builtin::add::KernelType)2>(TfLiteContext*, TfLiteNode*, TfLiteAddParams*, tflite::ops::builtin::add::OpData const*, TfLiteTensor const*, TfLiteTensor const*, TfLiteTensor*) () from /home/juan/.local/lib/python3.8/site-packages/tensorflow/lite/python/optimize/_pywrap_tensorflow_lite_calibration_wrapper.so
#4  0x00007ffbf82d846d in TfLiteStatus tflite::ops::builtin::add::Eval<(tflite::ops::builtin::add::KernelType)2>(TfLiteContext*, TfLiteNode*) () from /home/juan/.local/lib/python3.8/site-packages/tensorflow/lite/python/optimize/_pywrap_tensorflow_lite_calibration_wrapper.so
#5  0x00007ffbf82ab1bf in tflite::optimize::calibration::(anonymous namespace)::LoggingEval(TfLiteContext*, TfLiteNode*) () from /home/juan/.local/lib/python3.8/site-packages/tensorflow/lite/python/optimize/_pywrap_tensorflow_lite_calibration_wrapper.so
#6  0x00007ffbf850f20b in tflite::impl::Subgraph::Invoke() () from /home/juan/.local/lib/python3.8/site-packages/tensorflow/lite/python/optimize/_pywrap_tensorflow_lite_calibration_wrapper.so
#7  0x00007ffbf85121c0 in tflite::impl::Interpreter::Invoke() () from /home/juan/.local/lib/python3.8/site-packages/tensorflow/lite/python/optimize/_pywrap_tensorflow_lite_calibration_wrapper.so
#8  0x00007ffbf827a581 in tflite::calibration_wrapper::CalibrationWrapper::FeedTensor(_object*) () from /home/juan/.local/lib/python3.8/site-packages/tensorflow/lite/python/optimize/_pywrap_tensorflow_lite_calibration_wrapper.so
#9  0x00007ffbf8274285 in pybind11::cpp_function::initialize<pybind11_init__pywrap_tensorflow_lite_calibration_wrapper(pybind11::module&)::{lambda(tflite::calibration_wrapper::CalibrationWrapper&, pybind11::handle&)#4}, pybind11::object, tflite::calibration_wrapper::CalibrationWrapper&, pybind11::handle&, pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&>(...)::{lambda(pybind11::detail::function_call&)#3}::_FUN(pybind11::detail::function_call&) () from /home/juan/.local/lib/python3.8/site-packages/tensorflow/lite/python/optimize/_pywrap_tensorflow_lite_calibration_wrapper.so
#10 0x00007ffbf8271fe7 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) () from /home/juan/.local/lib/python3.8/site-packages/tensorflow/lite/python/optimize/_pywrap_tensorflow_lite_calibration_wrapper.so
#11 0x00000000005f17e5 in PyCFunction_Call ()
#12 0x00000000005f2406 in _PyObject_MakeTpCall ()
#13 0x000000000050795f in ?? ()
#14 0x000000000056c475 in _PyEval_EvalFrameDefault ()
#15 0x0000000000565972 in _PyEval_EvalCodeWithName ()
#16 0x00000000005f1d85 in _PyFunction_Vectorcall ()
#17 0x00000000005677c7 in _PyEval_EvalFrameDefault ()
#18 0x0000000000565972 in _PyEval_EvalCodeWithName ()
#19 0x00000000005f1d85 in _PyFunction_Vectorcall ()
#20 0x0000000000507729 in ?? ()
#21 0x00000000005f1107 in PyObject_Call ()
#22 0x0000000000568e1f in _PyEval_EvalFrameDefault ()
#23 0x000000000050712e in ?? ()
#24 0x000000000056c475 in _PyEval_EvalFrameDefault ()
#25 0x0000000000565972 in _PyEval_EvalCodeWithName ()
#26 0x00000000005f1d85 in _PyFunction_Vectorcall ()
#27 0x00000000005677c7 in _PyEval_EvalFrameDefault ()
#28 0x0000000000565972 in _PyEval_EvalCodeWithName ()
#29 0x0000000000686053 in PyEval_EvalCode ()
#30 0x00000000006753d1 in ?? ()
#31 0x000000000067544f in ?? ()
#32 0x0000000000675507 in PyRun_FileExFlags ()
#33 0x000000000067758a in PyRun_SimpleFileExFlags ()
#34 0x00000000006ae99e in Py_RunMain ()
#35 0x00000000006aed29 in Py_BytesMain ()
#36 0x00007ffff7ddd0b3 in __libc_start_main (main=0x4ebd20, argc=2, argv=0x7fffffffe2c8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffe2b8) at ../csu/libc-start.c:308
#37 0x00000000005f62ee in _start () |
tensorflowtensorflow | RNN: cannot get saved tflite model | Bug | System information: OS platform and distribution: Ubuntu 16.04; TensorFlow installed from: binary; TensorFlow version: tf-nightly 2.2.0. Command used to run the converter or code if you're using the Python API: convert_pb2_tflite.txt. The output log from the converter invocation: log.txt. Failure details: my pb model was trained and saved in TensorFlow 1.15.0, and I want to convert it to tflite to deploy on CPU for better performance. When I run the uploaded file convert_pb2_tflite.txt (actually a .py), the log attached above seems to include no warnings or errors, but there is no tflite file in my save path. What could be going wrong? Thank you. |
tensorflowtensorflow | Unexpected behaviour for model.evaluate inside Keras callback | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; TensorFlow installed from (source or binary): Colab; TensorFlow version (use command below): 2.3.0; Python version: 3.6.9. Describe the current behavior: inside a Keras callback (Callback), self.model.evaluate returns results for the validation data regardless of what is passed in. The issue seems not to be present if validation data is None. Describe the expected behavior: self.model.evaluate should evaluate what is passed in. Standalone code to reproduce the issue: Colab notebook which illustrates the issue. |
tensorflowtensorflow | Self-attention on word embeddings using half precision with mask_zero set to True crashes training | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution (e.g. Linux Ubuntu 16.04): Ubuntu 18.04.5 LTS; TensorFlow installed from (source or binary): binary; TensorFlow version (use command below): v1.12.1-41557-gae0a324182 2.4.0-dev20200915; Python version: 3.6.9; CUDA/cuDNN version: bug reproducible without GPU, but the same bug occurs on GPU; GPU model and memory: bug reproducible without GPU, but the same bug occurs on GPU. Describe the current behavior: when applying self-attention to word embeddings with half precision, if mask_zero is set to True then training crashes; if it is set to False then training completes without crashing. Describe the expected behavior: training with mask_zero set to True should not crash. Standalone code to reproduce the issue: reproduces the bug in tf-nightly (see here). Other info / logs: when mask_zero is set to True the error is:

ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py in binary_op_wrapper(x, y)
   1135
   1136 def binary_op_wrapper(x, y):
-> 1137   with ops.name_scope(None, op_name, [x, y]) as name:

(13 frames)

ValueError: Tensor conversion requested dtype float32 for Tensor with dtype float16

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(op_type_name, name, **keywords)
    504
    505       # If we did not match an allowed dtype, try again with the default
    506       # dtype. This could be because we have an empty tensor and thus we
    507       # picked the wrong type.
    508       if inferred is not None and inferred_dtype in allowed_list:

TypeError: Input 'y' of 'Sub' Op has type float32 that does not match type float16 of argument 'x'. |
tensorflowtensorflow | MobileNetV3 (small/large) not fully quantized after full-integer post-training quantization (runs with no error) | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution (e.g. Linux Ubuntu 16.04): n/a, using Google Colaboratory; TensorFlow version (use the command in the template): v1.12.1-39569-gcb34190201 2.4.0-dev20200818. Describe the current behavior: the model is not fully quantized after full-integer quantization, even though no error occurs during the quantization. Describe the expected behavior: I tried using the quantized version in the example Android app, but it crashes every time, so I wanted to see if the quantized version works on a Coral device; but then it fails to compile with the error "Model is not fully quantized". Standalone code to reproduce the issue: |
tensorflowtensorflow | TextVectorization not working on TPU with tf-nightly | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution (e.g. Linux Ubuntu 16.04): Colab; TensorFlow version (use command below): tf-nightly 2.4.0-dev20200915; using TPU: yes. I was trying to run TextVectorization on TPU:

def get_vectorize_layer(texts, vocab_size, max_seq, special_tokens=["[mask]"]):
    vectorize_layer = preprocessing.TextVectorization(
        max_tokens=vocab_size,
        output_mode="int",
        standardize=None,
        output_sequence_length=max_seq)
    vectorize_layer.adapt(texts)
    vocab = vectorize_layer.get_vocabulary()
    vocab = vocab[2:vocab_size - len(special_tokens)] + special_tokens
    vectorize_layer.set_vocabulary(vocab)
    return vectorize_layer

vectorize_layer = get_vectorize_layer(
    data.text.values.tolist(), 20000, 196, special_tokens=["[mask]"])

Error: NotFoundError: 'OptimizeDatasetV2' is neither a type of a primitive operation nor a name of a function registered in the binary running on n-86c78cbc-w-0. Make sure the operation or function is registered in the binary running in this process. tensorflow 2.4.0-dev20200915. Reference notebook. |
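The vocabulary trimming step in get_vectorize_layer above (dropping the two reserved entries TextVectorization places at the front, leaving room for special tokens, then appending them) can be sketched in plain Python, independent of TensorFlow; the token strings here are illustrative, not from the report:

```python
def trim_vocab(full_vocab, vocab_size, special_tokens):
    # TextVectorization reserves index 0 for padding and index 1 for OOV,
    # so drop the first two entries, then leave room for the special tokens.
    kept = full_vocab[2:vocab_size - len(special_tokens)]
    return kept + special_tokens

full_vocab = ['', '[UNK]', 'the', 'cat', 'sat', 'on', 'mat']
vocab = trim_vocab(full_vocab, vocab_size=6, special_tokens=['[mask]'])
print(vocab)  # ['the', 'cat', 'sat', '[mask]']
```

The resulting list is what the report then passes back to set_vocabulary.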
tensorflowtensorflow | Feature request: MLIR-based TFLite converter support for 16-bit Conv2D | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 18.04.1; TensorFlow installed from (source or binary): source; TensorFlow version (use command below): 2.4.0, with git hash 4a896124e344adcf94e30ed335b59900b578e53e; Python version: 3.6.9; Bazel version (if compiling from source): 3.1.0. Describe the problem: I have tf.quantization.fake_quant_with_min_max_args + tf.nn.conv2d and use tf.lite.TFLiteConverter to convert it into a tflite quantized operator. This works for me if I set both inputs and weights to 8 bit, but the same process doesn't work for 16-bit activations. I realize I could optionally use the old (and soon to be deprecated) TOCO converter by setting converter.experimental_new_converter = False and converter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]. This does generate a tflite, but it's not well legalized into one tfl.conv_2d with native TFL qint16 input and output tensors. |
tensorflowtensorflow | Broadcasting not working for divide and divide_no_nan | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution (e.g. Linux Ubuntu 16.04): macOS Catalina; mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a; TensorFlow installed from (source or binary): pip (binary); TensorFlow version (use command below): 2.3.0; Python version: 3.6; Bazel version (if compiling from source): -; GCC/compiler version (if compiling from source): -; CUDA/cuDNN version: no GPU; GPU model and memory: no GPU. Describe the current behavior: broadcasting does not work for the divide and divide_no_nan ops. I have 2 tensors of shapes (5, 140, 280, 3, 2) and (5, 140, 280, 3). Attempting to divide with tf.math.divide_no_nan(a, b) yields the following error: tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [5,140,280,3,2] vs. [5,140,280,3] [Op:DivNoNan]. I have also tested the regular tf.math.divide op and it yields the same error. Describe the expected behavior: with broadcasting behavior, I would expect to be able to easily divide a by b, with b being broadcast to fit the innermost dimension of a. Standalone code to reproduce the issue:

a = tf.ones((5, 140, 280, 3, 2))
b = tf.ones((5, 140, 280, 3))
c = tf.math.divide_no_nan(a, b)

Other info / logs: none. |
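For context, TensorFlow's element-wise ops follow NumPy-style broadcasting, which aligns shapes from the trailing dimension; for the shapes in the report above that compares 2 against 3 and fails, and an explicit trailing length-1 axis on b is needed. A numpy sketch of both the failure and the expand-dims workaround (numpy raises ValueError where TF raises InvalidArgumentError):

```python
import numpy as np

a = np.ones((5, 140, 280, 3, 2), dtype=np.float32)
b = np.ones((5, 140, 280, 3), dtype=np.float32)

# Trailing dimensions 2 and 3 do not match, so plain division raises.
try:
    _ = a / b
except ValueError as e:
    print('broadcast failed:', e)

# Adding a trailing length-1 axis makes the shapes compatible:
# (5,140,280,3,2) vs (5,140,280,3,1) broadcasts over the last axis.
c = a / b[..., np.newaxis]
print(c.shape)  # (5, 140, 280, 3, 2)
```

The same b[..., tf.newaxis] (or tf.expand_dims(b, -1)) would make the reported TF snippet broadcast as intended.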
tensorflowtensorflow | Keras Layers API documentation not sorted properly | Bug | URL(s) with the issue: (see above). Description of issue (what needs changing): very minor issue, but the overview list of all layer classes is not consistently sorted alphabetically, though it very much feels like it was intended to be; e.g. the GRU layer is listed above the GaussianDropout layer, while in the sidebar it is sorted correctly alphabetically. I personally nearly overlooked the presence of the GRU layer because the sidebar and the overview are sorted differently. Unfortunately, I could not find the API documentation in the TensorFlow docs repository, which is why I am opening this issue instead of offering a pull request. If you point me to the API documentation source, I can gladly take care of that. |
tensorflowtensorflow | Converter error for converting saved model to tflite | Bug | System information: OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10; TensorFlow installed from: pip install tensorflow; TensorFlow version (or GitHub SHA if from source): 2.3.0 (also installed tf-nightly). Command used to run the converter or code if you're using the Python API (if possible, please share a link to Colab/Jupyter/any notebook; copy and paste here the exact command):

import tensorflow as tf

saved_model_dir = "C:/Users/diksh/Desktop/fine_tuned_model/saved_model"
model = tf.saved_model.load(saved_model_dir)
model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY].inputs[0].set_shape([1, 256, 256, 3])
tf.saved_model.save(model, "saved_model_updated",
                    signatures=model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY])

# Convert
converter = tf.lite.TFLiteConverter.from_saved_model(
    saved_model_dir="saved_model_updated", signature_keys=["serving_default"])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

# tflite interpreter to check input shape
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
# Get input and output tensors
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test the model on random input data
input_shape = input_details[0]['shape']
print(input_shape)

The output from the converter invocation:

Exception                                 Traceback (most recent call last)
AppData\Roaming\Python\Python37\site-packages\tensorflow\lite\python\convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    198             debug_info_str,
    199             enable_mlir_converter)
--> 200     return model_str

AppData\Roaming\Python\Python37\site-packages\tensorflow\lite\python\wrap_toco.py in wrapped_toco_convert(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
     37             debug_info_str,
     38             enable_mlir_converter)
---> 39

Exception: <unknown>:0: error: loc("func/StatefulPartitionedCall/input_0"): requires all operands and results to have compatible element types; <unknown>:0: note: loc("func/StatefulPartitionedCall/input_0"): see current operation: %1 = "tf.Identity"(%arg0) {_class = [loc("func/StatefulPartitionedCall/StatefulPartitionedCall/input_702")], device = ""} : (tensor<1x256x256x3x!tf.quint8>) -> tensor<1x256x256x3xui8>

During handling of the above exception, another exception occurred:

ConverterError                            Traceback (most recent call last)
<ipython-input> in <module>
     11 converter.optimizations = [tf.lite.Optimize.DEFAULT]
     12 converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
---> 13 tflite_model = converter.convert()

AppData\Roaming\Python\Python37\site-packages\tensorflow\lite\python\lite.py in convert(self)
   1074         # invalid quantization parameters
   1075
-> 1076         return super(TFLiteConverterV2, self).convert()

AppData\Roaming\Python\Python37\site-packages\tensorflow\lite\python\lite.py in convert(self)
    898
    899     return super(TFLiteFrozenGraphConverterV2,
--> 900                  self).convert(graph_def, input_tensors, output_tensors)

AppData\Roaming\Python\Python37\site-packages\tensorflow\lite\python\lite.py in convert(self, graph_def, input_tensors, output_tensors)
    631         input_tensors=input_tensors,
    632         output_tensors=output_tensors,
--> 633         **converter_kwargs)
    634
    635     calibrate_and_quantize, flags = quant_mode.quantizer_flags()

AppData\Roaming\Python\Python37\site-packages\tensorflow\lite\python\convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, enable_mlir_converter, *args, **kwargs)
    572       input_data.SerializeToString(),
    573       debug_info_str=debug_info_str,
--> 574       enable_mlir_converter=enable_mlir_converter)
    575   return data
    576

AppData\Roaming\Python\Python37\site-packages\tensorflow\lite\python\convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    200     return model_str
    201   except Exception as e:
--> 202     raise ConverterError(str(e))
    203
    204   if distutils.spawn.find_executable(_toco_from_proto_bin) is None:

ConverterError: <unknown>:0: error: loc("func/StatefulPartitionedCall/input_0"): requires all operands and results to have compatible element types; <unknown>:0: note: loc("func/StatefulPartitionedCall/input_0"): see current operation: %1 = "tf.Identity"(%arg0) {_class = [loc("func/StatefulPartitionedCall/StatefulPartitionedCall/input_702")], device = ""} : (tensor<1x256x256x3x!tf.quint8>) -> tensor<1x256x256x3xui8>

Also, please include a link to the saved model or GraphDef (put link here or attach to the issue). Failure details: if the conversion is successful but the generated model is wrong, state what is wrong: produces wrong results and/or decreased accuracy; produces correct results but the model is slower than expected (model generated from the old converter). RNN conversion support: if converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title. Any other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached. |
tensorflowtensorflow | RuntimeError: Quantization not yet supported for op | Bug | System information: OS platform and distribution: Linux Ubuntu 18.04; TensorFlow installed from (source or binary): binary; TensorFlow version (or GitHub SHA if from source): tf 2.3.0. Command used to run the converter or code if you're using the Python API:

def representative_data_gen():
    dataset_list = os.listdir(data_dir)
    num_calibration_images = 100
    norm_factor = 255.0
    for i in range(num_calibration_images):
        image_name = next(iter(dataset_list))
        image = cv2.imread(os.path.join(data_dir, image_name), 1)
        image = image.astype(np.float32)
        image = image / norm_factor
        image = tf.expand_dims(image, 0)
        yield [image]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.target_spec.supported_types = [tf.int8]
# These set the input and output tensors to uint8 (added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.allow_custom_ops = True
tflite_model = converter.convert()

Model definition:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def conv2d_block_3layers(input_tensor, n_filters, kernel_size=3, dropout=0.2, batchnorm=True, activation=True):
    # first layer
    x = layers.Conv2D(filters=n_filters, kernel_size=(kernel_size, kernel_size), padding='same')(input_tensor)
    if batchnorm:
        x = layers.BatchNormalization()(x)
    if activation:
        x = layers.Activation('relu')(x)
    x = layers.Dropout(dropout)(x)
    # second layer
    x = layers.Conv2D(filters=n_filters, kernel_size=(kernel_size, kernel_size), padding='same')(x)
    if batchnorm:
        x = layers.BatchNormalization()(x)
    if activation:
        x = layers.Activation('relu')(x)
    x = layers.Dropout(dropout)(x)
    # third layer
    x = layers.Conv2D(filters=n_filters, kernel_size=(kernel_size, kernel_size), padding='same')(x)
    if batchnorm:
        x = layers.BatchNormalization()(x)
    if activation:
        x = layers.Activation('relu')(x)
    return x

def unet_v2(nClasses=25, input_height=288, input_width=224, n_filters=64, dropout=0.2, batchnorm=True, activation=True):
    img_input = layers.Input(shape=(input_height, input_width, 3))
    c1 = conv2d_block_3layers(img_input, n_filters * 1, kernel_size=3, batchnorm=batchnorm, activation=activation)
    p1 = layers.MaxPooling2D((2, 2))(c1)
    c2 = conv2d_block_3layers(p1, n_filters * 2, kernel_size=3, batchnorm=batchnorm, activation=activation)
    p2 = layers.MaxPooling2D((2, 2))(c2)
    c3 = conv2d_block_3layers(p2, n_filters * 4, kernel_size=3, batchnorm=batchnorm, activation=activation)
    p3 = layers.MaxPooling2D((2, 2))(c3)
    c4 = conv2d_block_3layers(p3, n_filters * 4, kernel_size=3, batchnorm=batchnorm, activation=activation)
    p4 = layers.MaxPooling2D((2, 2))(c4)
    c5 = conv2d_block_3layers(p4, n_filters * 8, kernel_size=3, batchnorm=batchnorm, activation=activation)
    p5 = layers.Dropout(dropout)(c5)
    up6 = layers.Conv2DTranspose(n_filters * 4, kernel_size=(3, 3), strides=(2, 2), padding='same')(p5)
    # up6 = layers.UpSampling2D()(p5)
    m6 = layers.concatenate([up6, c4], axis=3)
    c6 = conv2d_block_3layers(m6, n_filters * 4, kernel_size=3, batchnorm=batchnorm, activation=activation)
    up7 = layers.Conv2DTranspose(n_filters * 4, kernel_size=(3, 3), strides=(2, 2), padding='same')(c6)
    # up7 = layers.UpSampling2D()(c6)
    m7 = layers.concatenate([up7, c3], axis=3)
    c7 = conv2d_block_3layers(m7, n_filters * 4, kernel_size=3, batchnorm=batchnorm, activation=activation)
    up8 = layers.Conv2DTranspose(n_filters * 2, kernel_size=(3, 3), strides=(2, 2), padding='same')(c7)
    # up8 = layers.UpSampling2D()(c7)
    m8 = layers.concatenate([up8, c2], axis=3)
    c8 = conv2d_block_3layers(m8, n_filters * 2, kernel_size=3, batchnorm=batchnorm, activation=activation)
    up9 = layers.Conv2DTranspose(n_filters * 1, kernel_size=(3, 3), strides=(2, 2), padding='same')(c8)
    # up9 = layers.UpSampling2D()(c8)
    m9 = layers.concatenate([up9, c1], axis=3)
    c9 = conv2d_block_3layers(m9, n_filters * 1, kernel_size=3, batchnorm=batchnorm, activation=activation)
    outputLayer = tf.keras.layers.Conv2D(filters=nClasses, kernel_size=1, activation='softmax')(c9)
    model = tf.keras.Model(inputs=img_input, outputs=outputLayer)
    model.summary(line_length=124)
    return model

(The commented-out UpSampling2D lines appear as alternatives in the original report.) The output from the converter invocation:

Traceback (most recent call last):
  File "keras_to_tflite.py", line 105, in <module>
    tflite_model = converter.convert()
  File "/home/ths/anaconda3/envs/py37_tf23/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 831, in convert
    self).convert(graph_def, input_tensors, output_tensors)
  File "/home/ths/anaconda3/envs/py37_tf23/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 638, in convert
    result = self._calibrate_quantize_model(result, **flags)
  File "/home/ths/anaconda3/envs/py37_tf23/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 452, in _calibrate_quantize_model
    inference_output_type, allow_float, activations_type)
  File "/home/ths/anaconda3/envs/py37_tf23/lib/python3.7/site-packages/tensorflow/lite/python/optimize/calibrator.py", line 98, in calibrate_and_quantize
    np.dtype(activations_type.as_numpy_dtype()).num)
RuntimeError: Quantization not yet supported for op:

The model architecture uses Conv2D, BatchNormalization, Dropout, MaxPooling2D, Conv2DTranspose and Concatenate layers. Is the operation used in any of the mentioned layers? The operation is not used at all. |
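The representative_data_gen pattern above (read an image, scale it to float32 in [0, 1], add a batch axis, yield it in a list) can be sketched with synthetic numpy data instead of cv2 files; the shapes and calibration count below are illustrative assumptions, not taken from the report:

```python
import numpy as np

def representative_data_gen(num_calibration_images=100, height=288, width=224):
    # Stand-in for cv2.imread: random uint8 images of the model's input size.
    for _ in range(num_calibration_images):
        image = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
        image = image.astype(np.float32) / 255.0   # normalise to [0, 1]
        image = np.expand_dims(image, axis=0)      # add batch dimension
        yield [image]

sample = next(representative_data_gen(num_calibration_images=1))
print(sample[0].shape, sample[0].dtype)  # (1, 288, 224, 3) float32
```

Each yielded list holds one calibration batch; the converter iterates the generator to collect activation ranges.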
tensorflowtensorflow | Sparse tensor error message when applying constraint to dense variable | Bug | System information: OS platform and distribution: Ubuntu 18.04; TensorFlow installed from (source or binary): conda; TensorFlow version: I tried versions 2.2.0, 2.3.0, 1.15 and 1.14; Python version: 3.6 and 3.8.3; CUDA/cuDNN version: CUDA 10.1 and cuDNN 7.6 for TF 2.3.0 and 2.2.0, CUDA 10.0 and cuDNN 7.4 for TF 1.15 and 1.14; GPU model and memory: GeForce GTX Titan X, 12 GB. Describe the current behavior: I am trying to impose a constraint on a trainable variable; however, at some point in my pipeline I apply a tf.gather operation to the variable that I am constraining, which causes a runtime error. The error message (see below) says that a constraint function cannot be used on a sparse variable; however, my variable is not sparse. When I exclude the tf.gather operation, it works without any error. Describe the expected behavior: the variable should be constrained to a specific range without any error. Standalone code to reproduce the issue (Colab): this is NOT working:

import tensorflow as tf
import numpy as np
tf.compat.v1.disable_v2_behavior()

y = tf.Variable(name='y', initial_value=np.random.rand(5, 2, 2), dtype=tf.float32,
                constraint=lambda x: tf.clip_by_value(x, clip_value_min=-1.0, clip_value_max=1.0))
yy = tf.gather(y, axis=0, indices=[0, 1, 2])
loss = tf.reduce_sum(yy)
step = tf.compat.v1.train.AdamOptimizer(learning_rate=0.1).minimize(loss=loss, var_list=[y])
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    y_, _ = sess.run(fetches=[y, step])
    print(y_)

This IS working (just remove the tf.gather operation):

import tensorflow as tf
import numpy as np
tf.compat.v1.disable_v2_behavior()

y = tf.Variable(name='y', initial_value=np.random.rand(5, 2, 2), dtype=tf.float32,
                constraint=lambda x: tf.clip_by_value(x, clip_value_min=-1.0, clip_value_max=1.0))
loss = tf.reduce_sum(y)
step = tf.compat.v1.train.AdamOptimizer(learning_rate=0.1).minimize(loss=loss, var_list=[y])
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    y_, _ = sess.run(fetches=[y, step])
    print(y_)

Other info / logs:

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
    loss = tf.reduce_sum(yy)
    step = tf.compat.v1.train.AdamOptimizer(learning_rate=0.1).minimize(loss=loss, var_list=[y])

/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/optimizer.py in update_op(self, optimizer, g)
    if self._v.constraint is not None:
        raise RuntimeError("Cannot use a constraint function on a sparse variable.")
    return optimizer._resource_apply_sparse_duplicate_indices(g.values, self._v, g.indices)

RuntimeError: Cannot use a constraint function on a sparse variable. |
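A note on why the message appears: the gradient of tf.gather with respect to its source is delivered as sparse IndexedSlices (values plus row indices), and the optimizer's sparse-update path refuses constraint functions outright. A hedged numpy sketch of what that sparse update plus the intended clip constraint would compute, applied manually (a workaround illustration, not TensorFlow's actual code path):

```python
import numpy as np

y = np.random.rand(5, 2, 2).astype(np.float32) * 4 - 2   # values in [-2, 2)
indices = [0, 1, 2]                                      # rows touched by tf.gather
grad = np.ones((3, 2, 2), dtype=np.float32)              # gradient only for gathered rows
lr = 0.1

# Sparse update: only the gathered rows change -- this is exactly
# what an IndexedSlices gradient encodes.
y[indices] -= lr * grad

# Manually re-impose the constraint the optimizer refused to apply.
y = np.clip(y, -1.0, 1.0)
print(y.min() >= -1.0 and y.max() <= 1.0)  # True
```

In the reported pipeline, the equivalent would be clipping the variable explicitly after the optimizer step instead of via the constraint argument.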
tensorflowtensorflow | Subclass of tf.keras.Model throws NotImplementedError when fit with custom data | Bug | TF 2.3.0, tested on Windows and in Colab. There is an error that comes up only when certain types of models (subclasses of tf.keras.Model) are combined with certain types of custom data loaders (Python generators and subclasses of tf.keras.utils.Sequence), yet the same loaders work OK with a simple functional Keras model. The error is: NotImplementedError: When subclassing the Model class, you should implement a call method. See Colab reproduction here. |
tensorflowtensorflow | SyncBatchNormalization has NaN loss with channels_first format | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 14.04 running in a Docker container (host is 18.04); TensorFlow installed from (source or binary): source; TensorFlow version (use command below): 2.0 and 2.3.0; Python version: 3.6.8; Bazel version (if compiling from source): 0.29.1 / 1.0; GCC/compiler version (if compiling from source): 4.8.5; CUDA/cuDNN version: 10.0; GPU model and memory: various, e.g. NVIDIA Titan X. Describe the current behavior: when using the experimental SyncBatchNormalization layer with the channels_first (NCHW, i.e. batch, channels, height, width) format, the output of the layer is NaN. Describe the expected behavior: the output of the layer should be valid, as occurs with the default channels_last format or with regular BatchNormalization. Standalone code to reproduce the issue: this Colab notebook contains several models that demonstrate the issue. A Keras model with NCHW format, and a Keras model with SyncBatchNorm, both train correctly. However, a Keras model with both NCHW format and SyncBatchNorm immediately fails with NaN loss. Similarly, an Estimator model with NCHW and SyncBatchNorm fails with NaN loss. Other info / logs: I think the input tensor size seems to be related to the error. I tried a small tensor (10x10x10) before and it didn't cause the issue, and a medium-size tensor (I think 30x30x30) trained for a few steps and then encountered this issue. I'd have to double check on the current code, but I think there's something there. In the examples the loss is NaN; I believe that the output of the layer is already NaN. I'm not sure if every single value is NaN or just some of the values. I have some tests where I printed out the sum of the layer outputs etc., and if I recall correctly, immediately after the first SyncBatchNorm layer they were already NaN. This happens on the second and later iterations, but not the first iteration, so I suspect there may be some issue with backpropagation rather than with the forward step. For example, hypothetically, if backpropagation in the first step caused the moving variance to go to 0 or infinity, I think we would see behavior like this. You should be able to replicate all these tests fairly easily with some simple print functions etc., but if you need any more information, please let me know and I can provide it. The loss function doesn't seem to matter; I can use various loss functions and still trigger the NaN. But the Conv2D at the beginning may matter: it seems like it works fine if I remove that layer, or if I change it to channels_last. If I run on Colab without a GPU, I get the following error: InvalidArgumentError: Conv2DCustomBackpropFilterOp only supports NHWC. [[node gradient_tape/sequential/conv2d/Conv2D/Conv2DBackpropFilter (defined at <ipython-input-18>)]] [Op:__inference_train_function_654]. I wonder if this might be related to the problem. |
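For reference when reasoning about where the NaN could enter: batch normalization over channels_first (NCHW) data must compute its statistics per channel, i.e. reduce over axes (0, 2, 3) and keep axis 1. A plain (non-synchronized, inference-style) numpy sketch of that reduction, for illustration only:

```python
import numpy as np

def batchnorm_nchw(x, eps=1e-5):
    # x has shape (N, C, H, W); statistics are per channel (axis 1).
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.rand(4, 3, 8, 8).astype(np.float32)
out = batchnorm_nchw(x)
# Each channel of the output is (approximately) zero-mean, unit-variance.
print(np.allclose(out.mean(axis=(0, 2, 3)), 0, atol=1e-4))  # True
```

If the synchronized variant aggregated statistics over the wrong axes for NCHW, a near-zero (or corrupted) variance in the denominator would be one way to produce exactly the NaN outputs described above; this sketch only shows the intended math, not the bug.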
tensorflowtensorflow | Custom Xception input size broadcast incorrectly | Bug | System information: Red Hat Enterprise Linux Server release 7.7 (Maipo); tensorflow / tensorflow-gpu 2.1; GPU: NVIDIA V100; Python 3.6.10; GCC 7.3.0; CUDA 10.1 (I think); run through Slurm. The error log: ValueError: could not broadcast input array from shape (850,550,3) into shape (850,550,3,3). Model summary: this model is the stock Xception with a custom top (global pooling, dense layers and softmax) used for image classification:

Layer (type)                 Output Shape              Param #    Connected to
input_2 (InputLayer)         (None, 850, 550, 3)       0
block1_conv1 (Conv2D)        (None, 424, 274, 32)      864        input_2[0][0]
block1_conv1_bn (BatchNormalization) (None, 424, 274, 32) 128     block1_conv1[0][0]
block1_conv1_act (Activation) (None, 424, 274, 32)     0          block1_conv1_bn[0][0]

Relevant code (a runnable Colab gist can be found here):

import numpy as np
import pandas as pd
import tensorflow
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import load_model, Model, Sequential
from tensorflow.keras.applications import Xception
from tensorflow.keras.preprocessing import image
from tensorflow.keras.layers import Flatten, Dense, Input, GlobalAvgPool2D
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
import matplotlib.pyplot as plt
import random
import os
import time
from datetime import datetime
from IPython.display import SVG
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

# verify GPU
print(tensorflow.config.list_physical_devices('GPU'))

FAST_RUN = False
IMAGE_HEIGHT = 850
IMAGE_WIDTH = 550
IMAGE_CHANNELS = 3
IMAGE_SIZE = (IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANNELS)
# input_tensor definition: input_shape = (IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_CHANNELS)  # unused for now
NAME = f"{datetime.today().strftime('%Y%m%d')}-xception-850-flowering"

# Keras Xception
model_core = Xception(weights=None, include_top=False, input_shape=IMAGE_SIZE)
model_head = model_core.output
model_head = GlobalAvgPool2D()(model_head)
model_head = Flatten()(model_head)
model_head = Dense(512, activation='relu')(model_head)
model_head = Dense(256, activation='relu')(model_head)
model_head = Dense(2, activation='softmax')(model_head)
model = Model(inputs=model_core.input, outputs=model_head)
model.compile(Adam(lr=.00005), loss='categorical_crossentropy', metrics=['accuracy'])
print(model.summary())

earlystop = EarlyStopping(patience=20)
filepath = f"/content/models/model.hdf5"
if not os.path.isdir(f"/content/models"):
    os.makedirs(f"/content/models")
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
callbacks = [earlystop, checkpoint]

# hard coded for now, replace later
nb_train_samples = 20
nb_validation_samples = ...
batch_size = 4
train_path = '/content/flower_toy_dataset/images'
valid_path = '/content/nevp_phenology_unscored_20191206/images'

train_datagen = ImageDataGenerator(rotation_range=15, rescale=1./255, shear_range=0.1,
                                   zoom_range=0.2, horizontal_flip=True,
                                   width_shift_range=0.1, height_shift_range=0.1)
train_generator = train_datagen.flow_from_directory(train_path, target_size=IMAGE_SIZE,
                                                    class_mode='categorical',
                                                    classes=['flower', 'not_flower'],
                                                    batch_size=batch_size)
validation_datagen = ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_directory(valid_path, target_size=IMAGE_SIZE,
                                                              class_mode='categorical',
                                                              classes=['flower', 'not_flower'],
                                                              batch_size=batch_size)
print(nb_validation_samples / batch_size)

epochs = 3 if FAST_RUN else 500
history = model.fit_generator(train_generator, epochs=epochs,
                              steps_per_epoch=nb_train_samples // batch_size,
                              callbacks=callbacks)

My thoughts: I feel like normally with broadcasting errors it's generally related to the size of the image, or it fails to broadcast from a 3-dim input tensor to the 4-dim (batch, height, width, channels) tensor. However, here it just seems like the code has forgotten about the existence of the batch dimension and confused height for batch, width for height, and channels for both width and channels. I have double checked my code, but to my admittedly very limited knowledge everything looks okay. |
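A likely cause, consistent with the reported error shape (850, 550, 3, 3): flow_from_directory expects target_size=(height, width) without the channel count, and passing the full (height, width, channels) tuple as above plausibly makes the generator allocate a batch buffer with an extra channel axis appended. A small-scale numpy sketch of the resulting assignment failure (sizes shrunk for illustration; the buffer-allocation step mirrors the generator's behavior under that assumption):

```python
import numpy as np

target_size = (8, 5, 3)            # wrong: channels included in target_size
batch_shape = target_size + (3,)   # generator appends channels -> (8, 5, 3, 3)
batch = np.zeros((1,) + batch_shape, dtype=np.float32)

img = np.zeros((8, 5, 3), dtype=np.float32)  # an actually loaded image
try:
    batch[0] = img                 # mirrors the reported failure
except ValueError as e:
    print(e)  # e.g. "could not broadcast input array from shape (8,5,3) into shape (8,5,3,3)"
```

Passing target_size=(IMAGE_HEIGHT, IMAGE_WIDTH) instead of the 3-tuple IMAGE_SIZE would avoid the extra axis.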
tensorflowtensorflow | Just a question: in Android (Java), is a Keras model OK? | Bug | Hi, I'm developing an Android app, and my NN in Python is Keras (tf.keras) with LSTM, Conv2D, ResNet and attention — very deep. I want to use this NN in Java (Android). 1. Can I use TF Lite for my Python Keras model? I heard TF Lite only supports some TF models, so I'm worried. 2. And can I convert Keras to TensorFlow and just use TensorFlow in Java? 3. And can I convert Keras to TensorFlow and just use TensorFlow Lite or TensorFlow Mobile in Java? Thx. |
tensorflowtensorflow | int16 softmax is using std::vector | Bug | TensorFlow Micro. System information: host OS platform and distribution (e.g. Linux Ubuntu 16.04): Debian; TensorFlow installed from (source or binary): source; TensorFlow version (commit SHA if source): 35d9474383c9befd99f457031ada977d681742c4; target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): STM32F4. Describe the problem: this command: make -f tensorflow/lite/micro/tools/make/Makefile TARGET=stm32f4 kernel_softmax_test fails with the following error message:

/home/advaitjain/tensorflow/github/tensorflow/tensorflow/lite/micro/tools/make/downloads/gcc_embedded/bin/../lib/gcc/arm-none-eabi/7.3.1/thumb/v7e-m/libgcc.a(unwind-arm.o): In function `get_eit_entry':
unwind-arm.c:(.text+0x138): undefined reference to `__exidx_end'
unwind-arm.c:(.text+0x13c): undefined reference to `__exidx_start'
collect2: error: ld returned 1 exit status

The underlying issue is that the reference implementation of int16 softmax uses std::vector, which is incompatible with Micro (see line 168). The near-term fix will be to revert PR 38873; long term, we would have to fix up the reference implementation to be friendly for embedded platforms. This issue was not caught by continuous integration when the PR was merged because we do not build for TARGET=stm32f4 with only the reference kernels; we only do so with TAGS=cmsis-nn. Two ways to safeguard against this in the future would be: 1. build stm32f4 without any additional tags; 2. fix up the bluepill target and add more tests to that target (this is likely preferable). |
tensorflowtensorflow | tf.data experimental service throws error when used with TPUs | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, I put an example to reproduce the issue below. OS platform and distribution (e.g., Linux Ubuntu 16.04): uname -a: Linux james-tpu 4.19.0-10-cloud-amd64 #1 SMP Debian 4.19.132-1 (2020-07-24) x86_64 GNU/Linux. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v2.3.0-rc2-23-gb36436b087 2.3.0. Python version: 3.7.3. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A. Describe the current behavior: errors out with "2020-09-10 22:59:13.294374: E tensorflow/core/common_runtime/eager/context.cc:678] Failed to register function remotely due to Failed to register dataset: failed to connect to all addresses" (see full log below). Describe the expected behavior: I wouldn't expect an error for this code. Standalone code to reproduce the issue:

import logging
import tensorflow as tf
import numpy as np

logging.getLogger("tensorflow").setLevel(logging.DEBUG)

def get_dataset(dispatcher, batch_size):
    fake_data = np.zeros((128, 256, 256), dtype=np.float32)
    ds = tf.data.Dataset.from_tensor_slices(fake_data)
    ds = ds.repeat()
    ds = ds.batch(batch_size)
    ds = ds.apply(tf.data.experimental.service.distribute(
        "parallel_epochs", dispatcher.target, job_name="data_job"))
    return ds

def run(strategy, batch_size):
    data_dispatcher = tf.data.experimental.service.DispatchServer(port=0)
    dispatch_address = data_dispatcher.target.split("://")[1]
    worker = tf.data.experimental.service.WorkerServer(
        port=0, dispatcher_address=dispatch_address)
    per_replica_batch_size = batch_size // strategy.num_replicas_in_sync
    dataset = strategy.experimental_distribute_datasets_from_function(
        lambda _: get_dataset(data_dispatcher, per_replica_batch_size))
    ds_iterator = iter(dataset)

    @tf.function(input_signature=[tf.TensorSpec([256, 256], dtype=tf.float32)])
    def step_fn(inputs):
        return

    @tf.function
    def train_step(iterator):
        strategy.run(step_fn, args=(next(iterator),))

    for _ in range(10):
        train_step(ds_iterator)

def setup_strategy(tpu=False):
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="james-tpu")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    return tf.distribute.TPUStrategy(resolver)

if __name__ == "__main__":
    strategy = setup_strategy()
    batch_size = 64
    run(strategy, batch_size)

Other info/logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Logs:

2020-09-10 22:59:07.877313: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-09-10 22:59:07.882757: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2299995000 Hz
2020-09-10 22:59:07.883041: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3aa2460 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-10 22:59:07.883156: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-09-10 22:59:07.889580: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -> {0 -> 10.184.89.74:8470}
2020-09-10 22:59:07.889626: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:31299}
2020-09-10 22:59:07.904477: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache
for job worker -> {0 -> 10.184.89.74:8470}
2020-09-10 22:59:07.904533: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:31299}
2020-09-10 22:59:07.905018: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:405] Started server with target: grpc://localhost:31299
INFO:tensorflow:Initializing the TPU system: james-tpu
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Finished initializing TPU system.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0)
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0)
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0)
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0)
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0)
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0)
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0)
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0)
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0)
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0)
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0)
INFO:tensorflow:*** Available Devices: DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
2020-09-10 22:59:13.294374: E tensorflow/core/common_runtime/eager/context.cc:678] Failed to register function remotely due to Failed to register dataset: failed to connect to all addresses. This shouldn't happen. Please file a bug to tensorflow team.
Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/jamesbartlett/train_pixieml/pixieml/models/transformer/minimal_example.py", line 54, in <module>
    run(strategy, batch_size)
  File "/home/jamesbartlett/train_pixieml/pixieml/models/transformer/minimal_example.py", line 30, in run
    ds_iterator = iter(dataset)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/input_lib.py", line 1199, in __iter__
    enable_legacy_iterators)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/input_lib.py", line 1752, in _create_iterators_per_worker
    worker_devices)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/input_lib.py", line 1609, in __init__
    devices)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/input_lib.py", line 1448, in __init__
    self._make_iterator()
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/input_lib.py", line 1619, in _make_iterator
    self._dataset, self._devices, source_device=host_device)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py", line 547, in __init__
    dataset.element_spec)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py", line 54, in __init__
    init_func_concrete = init_func.get_concrete_function()
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2939, in get_concrete_function
2020-09-10 22:59:13.297630: W tensorflow/core/distributed_runtime/eager/remote_tensor_handle_data.cc:76] Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph gets an error: Failed to register dataset: failed to connect to all addresses
    *args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2906, in _get_concrete_function_garbage_collected
    graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3213, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3075, in _create_graph_function
    capture_by_value=self._capture_by_value)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 991, in func_graph_from_py_func
    expand_composites=True)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py", line 635, in map_structure
    structure[0], [func(*x) for x in entries])
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py", line 635, in <listcomp>
    structure[0], [func(*x) for x in entries])
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 942, in convert
    x = ops.convert_to_tensor_or_composite(x)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 1622, in convert_to_tensor_or_composite
    value, dtype=dtype, name=name, as_ref=False)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 1661, in internal_convert_to_tensor_or_composite
    accepted_result_types=(Tensor, composite_tensor.CompositeTensor))
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 1467, in convert_to_tensor
    return graph.capture(value, name=name)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 624, in capture
    return self._capture_eager_tensor(tensor, name)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 721, in _capture_eager_tensor
    graph_const = constant_op.constant(tensor.numpy(), dtype=tensor.dtype)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 1063, in numpy
    maybe_arr = self._numpy()  # pylint: disable=protected-access
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 1031, in _numpy
    six.raise_from(core._status_to_exception(e.code, e.message), None)  # pylint: disable=protected-access
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.UnavailableError: Failed to register dataset: failed to connect to all addresses
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/tpu_strategy.py", line 540, in async_wait
    context.async_wait()
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/context.py", line 2319, in async_wait
    context().sync_executors()
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/context.py", line 658, in sync_executors
    pywrap_tfe.TFE_ContextSyncExecutors(self._context_handle)
tensorflow.python.framework.errors_impl.UnavailableError: Failed to register dataset: failed to connect to all addresses
2020-09-10 22:59:13.375822: W tensorflow/core/distributed_runtime/eager/destroy_tensor_handle_node.h:57] Ignoring an error encountered when deleting remote tensors handles: Invalid argument: Unable to find the relevant tensor remote_handle: Op ID: 30, Output num: 0
Additional GRPC error information from remote target /job:worker/replica:0/task:0: {"created":"@1599778753.375752373","description":"Error received from peer ipv4:10.184.89.74:8470","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Unable to find the relevant tensor remote_handle: Op ID: 30, Output num: 0","grpc_status":3} |
tensorflowtensorflow | After several iterations, the error "input and filter must have the same depth" arises | Bug | I was trying to fit my model using a generator working on my dataset (audio files). I use a cluster (Slurm) to run my program, which includes a generator for creating my input features and corresponding targets (using the yield method, with thread safety) and fit_generator for training my model. Unfortunately, after 388 iterations of batches with size 128, the error

tensorflow.python.framework.errors_impl.InvalidArgumentError: input and filter must have the same depth: 6 vs 18 [[node model_4/conv2d_80/BiasAdd (defined at project1/training_prog/preandtrain_generator_model2.py:187)]] [Op:__inference_train_function_9095] Function call stack: train_function
2020-09-10 17:09:40.560724: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated. [[node PyFunc]]

is shown and my program fails. The depth (channels) of the input features is 6, and the first layer is a convolutional layer with an 18-channel filter. I am confused why the error is shown only after 388 iterations, and why the depth of the input features and of the first layer must be the same. I will be very grateful if someone can help me solve the problem. (Attachment: New Text Document.txt) |
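The "same depth" error means the incoming batch's channel count stopped matching the input depth the Conv2D kernel was built with, which usually points at a generator that occasionally yields differently shaped batches. A framework-free sketch (hypothetical helper name; not TensorFlow's actual kernel code) of the check a 2-D convolution performs:

```python
import numpy as np

def check_conv_depth(inputs, filters):
    """Mimic the depth check a 2-D convolution performs.

    inputs:  (batch, height, width, in_channels)
    filters: (kh, kw, in_channels, out_channels)
    """
    in_depth = inputs.shape[-1]
    filter_depth = filters.shape[2]
    if in_depth != filter_depth:
        raise ValueError(
            f"input and filter must have the same depth: {in_depth} vs {filter_depth}")
    return True

w = np.zeros((3, 3, 6, 18), dtype=np.float32)          # maps 6 -> 18 channels
x_ok = np.zeros((128, 32, 32, 6), dtype=np.float32)    # a well-formed batch
x_bad = np.zeros((128, 32, 32, 18), dtype=np.float32)  # a malformed batch

check_conv_depth(x_ok, w)       # passes
try:
    check_conv_depth(x_bad, w)  # raises, like the error reported after batch 388
except ValueError as e:
    print(e)
```

If the generator can emit such a batch only occasionally (e.g. for one specific file), the failure surfacing at iteration 388 rather than immediately is expected.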
tensorflowtensorflow | tf.signal.ifft returns results with different dtypes for subsequent calls with the same parameters | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Debian 10.4. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v1.12.1-40510-ga32c74ae8f 2.4.0-dev20200829. Python version: Python 3.8.3. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A. Describe the current behavior: when calling tf.signal.ifft several times with an operand of complex128 dtype, the first call returns a complex128 array as expected, but all subsequent calls return a complex64 array. Describe the expected behavior: tf.signal.ifft should always return an array with the same type as its input, per the documentation; more importantly, it should be consistent across calls. Standalone code to reproduce the issue:

import numpy as np
import tensorflow as tf

operand = np.ones((2, 2), dtype=np.complex128)
print(tf.signal.ifft(operand).dtype)
print(tf.signal.ifft(operand).dtype) |
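For comparison, NumPy's inverse FFT keeps the 128-bit complex dtype across repeated calls, which is the behavior the report expects from tf.signal.ifft. A quick check of that baseline:

```python
import numpy as np

operand = np.ones((2, 2), dtype=np.complex128)

# np.fft.ifft transforms over the last axis and preserves a complex128
# input's precision, no matter how many times it is invoked.
first = np.fft.ifft(operand)
second = np.fft.ifft(operand)

print(first.dtype, second.dtype)  # complex128 complex128
```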
tensorflowtensorflow | tf.function gives different output than standard function | Bug | No custom ops; TF 2.3. I was trying to make a backtest differentiable, so I decided to use tf.function's auto-differentiation capability. Despite taking a very long time to compile, the function finally got converted into TF ops. However, the losses I get when running the tf.function version and the plain Python version are completely different.

@tf.function
def get_loss(h_action, nh_action, price):
    entry_point = 0.0
    exit_point = 0.0
    hold = False
    profit = 0.0
    index = 0
    fee = 0.001
    keep_perc = 1 - fee
    final_index = len(price) - 1
    while index < final_index:
        curr_price = price[index]
        if hold:
            act_value = h_action[index]
        else:
            act_value = nh_action[index]
        if act_value > 0.5:
            if hold is False:
                # buy
                hold = True
                entry_point = curr_price
            else:
                # sell
                hold = False
                exit_point = curr_price
                profit += pow(keep_perc, 2) * (exit_point / entry_point - 1)
        index += 1
    return profit

The inputs for this function are precomputed predictions, and the price history is supplied. I don't believe the specific values matter much; I was able to reproduce the same behavior with a dummy set:

samples = np.random.normal(size=(1000, 10))
price = [random.uniform(1, 1000) for _ in range(1000)]
samples[:, -1] = 1
hold_preds = model.predict(samples)
samples[:, -1] = -1
not_hold_preds = model.predict(samples)

The only thing I see is that a warning is thrown saying that there's a large unrolled loop. Other than that, why would I be getting a different output? |
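Since the numbers diverge only under @tf.function, a useful first step is a pure-Python reference of the same loop, so both modes can be compared against a fixed, hand-checkable input. A sketch of the trading loop above with the same buy/sell logic (no TF, hypothetical name get_loss_py):

```python
def get_loss_py(h_action, nh_action, prices, fee=0.001):
    """Pure-Python reference of the backtest loop: buy when the relevant
    prediction exceeds 0.5 while flat, sell on the next signal while holding.
    Each round trip contributes keep_perc**2 * (exit/entry - 1) to profit."""
    keep_perc = 1.0 - fee
    entry_point = 0.0
    profit = 0.0
    hold = False
    for i in range(len(prices) - 1):
        act_value = h_action[i] if hold else nh_action[i]
        if act_value > 0.5:
            if not hold:            # buy
                hold = True
                entry_point = prices[i]
            else:                   # sell
                hold = False
                profit += keep_perc ** 2 * (prices[i] / entry_point - 1.0)
    return profit

# One round trip: buy at 100, sell at 110, fees applied on both legs,
# so the result is 0.999**2 * 0.1, about 0.0998.
p = get_loss_py(h_action=[0.0, 0.9, 0.0], nh_action=[0.9, 0.0, 0.0],
                prices=[100.0, 110.0, 120.0])
```

Feeding the same small arrays to the eager function and the traced tf.function and comparing against this reference isolates whether the divergence comes from tracing (e.g. Python-side state like hold being captured at trace time) or from the data pipeline.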
tensorflowtensorflow | No gradient defined for operation RaggedTensorFromVariant (or no gradient at all) | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.3.0. Python version: 3.7. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: 10.2. GPU model and memory: RTX 2070 Super, 8 GB. Describe the current behavior: using RaggedTensor, I want to minimize a KL loss for different segments of a weight matrix, assuming their coefficients are drawn from a multivariate normal distribution in each segment. The RaggedTensor is used inside tf.map_fn. First I met the issue "LookupError: gradient registry has no entry for: RaggedTensorFromVariant". I fixed it with the code provided in issuecomment-673529072. However, optimization is not working: weights are not updated during training. For simplicity I want to minimize only the trace of the covariance matrix, but nothing happens. I think that the provided code doesn't allow gradients to pass through. Describe the expected behavior: I expect the trace to be minimized to zero. Standalone code to reproduce the issue:

import tensorflow as tf
import numpy as np

class DenseCov(tf.keras.layers.Dense):
    def __init__(self, units, segments, **kwargs):
        self.segment_ids = tf.keras.backend.constant(segments, dtype=tf.int64, name='segment_ids')
        self.w_ragged = None
        self.embed_dim = None
        super().__init__(units=units, use_bias=False,
                         kernel_initializer=tf.keras.initializers.Ones(), **kwargs)

    def call(self, inputs):
        logits = super().call(inputs)
        # want to minimize the Kullback-Leibler divergence between two
        # multivariate normal distributions, but for simplicity minimize only the trace
        w_ragged = tf.RaggedTensor.from_value_rowids(
            tf.transpose(self.kernel), self.segment_ids, name='init_ragged')
        means = tf.reduce_mean(w_ragged, axis=1, name='means')
        w_centered = w_ragged - tf.expand_dims(means, 1)
        cov_matrices = tf.map_fn(
            lambda x: tf.matmul(x, x, transpose_a=True) / (tf.cast(tf.shape(x)[0], tf.float32) - 1),
            w_centered, name='calc_covars')
        cov_matrices = cov_matrices.to_tensor(name='convert_dense')
        traces = tf.map_fn(lambda x: tf.linalg.trace(x), cov_matrices, name='calc_traces')
        # traces = 0
        # logdets = tf.map_fn(lambda x: tf.linalg.logdet(x), cov_matrices, name='calc_logdets')
        logdets = 0  # disabled for simplicity
        # loss = traces - logdets + tf.reduce_sum((centroids - means) ** 2, axis=1)  # disabled for simplicity
        loss = traces
        loss = tf.reduce_mean(loss, name='total_loss')
        self.add_loss(loss)
        return logits

    def build(self, input_shape):
        super().build(input_shape)
        # distort the matrix a bit
        w = self.kernel
        w.assign_add(np.random.randn(n_dim, n_classes).astype(np.float32))

    def get_config(self):
        config = {'segments': self.segment_ids.numpy()}
        base_config = super().get_config()
        base_config.update(config)
        return base_config

n_classes, n_domains, n_dim, n_samples = 50000, 1, 10, 100
segments = np.random.randint(0, n_domains, n_classes)
segments = np.sort(segments)
data = np.random.randn(n_samples, n_dim).astype(np.float32)
y_true = np.random.randn(n_samples).astype(np.float32)

tf.keras.backend.clear_session()
model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(n_dim,)))
model.add(DenseCov(n_classes, segments))
model.add(tf.keras.layers.Dense(1))

def dummy_loss(y_true, y_pred):
    return 0 * tf.reduce_sum(y_pred)

model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss=dummy_loss)

from tensorflow.raw_ops import RaggedTensorToVariant

@tf.RegisterGradient('RaggedTensorFromVariant')
def RaggedTensorFromVariantGrad(*args):
    if len(args) == 2:
        op, grad = args
        res = RaggedTensorToVariant(rt_nested_splits=[], rt_dense_values=grad, batched_input=False)
    else:
        op, _, grad = args
        res = RaggedTensorToVariant(rt_nested_splits=[op.outputs[0]], rt_dense_values=grad, batched_input=True)
    return res

initial_trace = np.trace(np.cov(model.layers[-2].weights[0].numpy()))
losses = []
for i in range(10):
    res = model.train_on_batch(data[:10], y_true[:10])
    print(f"iter: {i}, loss: {res}, delta: {initial_trace - res}")

Output:

/home/data/venv/lib/python3.7/site-packages/tensorflow/python/framework/indexed_slices.py:432: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory. "Converting sparse IndexedSlices to a dense Tensor of unknown shape."
iter: 0, loss: 9.99978256225586, delta: 1.1581866488086234e-07
iter: 1, loss: 9.99978256225586, delta: 1.1581866488086234e-07
iter: 2, loss: 9.99978256225586, delta: 1.1581866488086234e-07
iter: 3, loss: 9.99978256225586, delta: 1.1581866488086234e-07
iter: 4, loss: 9.99978256225586, delta: 1.1581866488086234e-07
iter: 5, loss: 9.99978256225586, delta: 1.1581866488086234e-07
iter: 6, loss: 9.99978256225586, delta: 1.1581866488086234e-07
iter: 7, loss: 9.99978256225586, delta: 1.1581866488086234e-07
iter: 8, loss: 9.99978256225586, delta: 1.1581866488086234e-07
iter: 9, loss: 9.99978256225586, delta: 1.1581866488086234e-07 |
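Independent of the gradient-registration problem, the per-segment covariance trace the layer tries to minimize can be checked against a small NumPy reference (hypothetical helper; uses the same matmul/(n-1) normalization as the layer's map_fn):

```python
import numpy as np

def segment_cov_traces(kernel, segment_ids):
    """kernel: (n_dim, n_classes) weight matrix; segment_ids: (n_classes,)
    sorted segment label per output unit. Returns the trace of each
    segment's (n-1)-normalized covariance matrix over its weight columns."""
    w = kernel.T                                  # (n_classes, n_dim)
    traces = []
    for seg in np.unique(segment_ids):
        rows = w[segment_ids == seg]              # (n_i, n_dim)
        centered = rows - rows.mean(axis=0, keepdims=True)
        cov = centered.T @ centered / (len(rows) - 1)
        traces.append(np.trace(cov))
    return np.array(traces)

# If every unit in a segment shares the same weight vector, the trace is 0,
# which is the optimum the training loop above is expected to reach.
kernel = np.ones((4, 6), dtype=np.float32)
seg = np.array([0, 0, 0, 1, 1, 1])
print(segment_cov_traces(kernel, seg))  # [0. 0.]
```

A reference like this makes it easy to confirm that the loss value itself is computed correctly, so a constant loss across iterations points at the gradient path (the registered RaggedTensorFromVariant gradient) rather than the forward computation.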
tensorflowtensorflow | TF Lite fails conversion with post-training int quantization; converts fine without. RuntimeError: Mismatch between number of weights maxs and channels | Bug | Hey there, I've managed to convert StyleGAN2 to a TF Lite model with your recent advice; however, now upon trying to quantize the model I'm getting an error. The model converts with no issues if quantization is not used; it also converts fine with dynamic-range quantization, only failing if a representative dataset is used. I have replicated the issue on a small part of the full model here. System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04. TensorFlow installed from (source or binary): source. TensorFlow version (or github SHA if from source): tried on TF 2.3 and tf-nightly. If possible, please share a link to Colab/Jupyter/any notebook: link to Colab notebook with all commands, and download links to a SavedModel (tar) and to a successfully converted TF Lite model without post-training quantization. Command used to run the converter, or code if you're using the Python API: the model takes in a (1, 18, 512) tensor of random inputs.

samples = []
for i in range(10):
    sample = np.random.randn(1, 18, 512)
    sample = sample.astype(np.float32)
    samples.append(sample)

def representative_data_gen():
    for sample in samples:
        yield sample

Converter code:

converter = tf.lite.TFLiteConverter.from_saved_model(content_synth_const)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
with tf.io.gfile.GFile('synth_const_opt.tflite', 'wb') as f:
    f.write(tflite_model)

The output from the converter invocation:

2020-09-10 11:03:29.887482: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-10 11:03:30.014402: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-09-10 11:03:32.260056: I tensorflow/stream_executor/cuda/cuda
gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 09 10 11 03 32 260451 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 09 10 11 03 32 260987 I tensorflow core grappler device cc 69 number of eligible gpu core count 8 compute capability 0 0 1 2020 09 10 11 03 32 261050 I tensorflow core grappler cluster single machine cc 356 start new session 2020 09 10 11 03 32 261693 I tensorflow compiler jit xla gpu device cc 161 ignore visible xla gpu jit device device number be 1 reason invalid argument invalid device ordinal value 1 valid range be 0 0 2020 09 10 11 03 32 319520 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 09 10 11 03 32 319815 I tensorflow core common runtime gpu gpu device cc 1716 find device 0 with property pcibusid 0000 01 00 0 name geforce gtx 1080 ti computecapability 6 1 coreclock 1 6325ghz corecount 28 devicememorysize 10 91gib devicememorybandwidth 451 17gib s 2020 09 10 11 03 32 319865 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 09 10 11 03 32 320143 I tensorflow core common runtime gpu gpu device cc 1716 find device 1 with property pcibusid 0000 05 00 0 name geforce gtx 1050 ti computecapability 6 1 coreclock 1 392ghz corecount 6 devicememorysize 3 95gib devicememorybandwidth 104 43gib s 2020 09 10 11 03 32 320163 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudart so 10 1 2020 09 10 11 03 32 320177 I tensorflow stream executor platform default dso loader cc 48 successfully 
open dynamic library libcubla so 10 2020 09 10 11 03 32 320188 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcufft so 10 2020 09 10 11 03 32 320198 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcurand so 10 2020 09 10 11 03 32 320208 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcusolver so 10 2020 09 10 11 03 32 320218 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcusparse so 10 2020 09 10 11 03 32 320228 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudnn so 7 2020 09 10 11 03 32 320263 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 09 10 11 03 32 320548 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 09 10 11 03 32 320844 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 09 10 11 03 32 321124 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 09 10 11 03 32 321394 I tensorflow core common runtime gpu gpu device cc 1843 ignore visible gpu device device 1 name geforce gtx 1050 ti pci bus i d 0000 05 00 0 compute capability 6 1 with core count 6 the minimum require count be 8 you can adjust this requirement with the env var tf min gpu multiprocessor count 2020 09 10 11 03 32 321401 I tensorflow core common runtime gpu gpu device cc 1858 add visible 
gpu device 0 2020 09 10 11 03 32 321419 I tensorflow core common runtime gpu gpu device cc 1257 device interconnect streamexecutor with strength 1 edge matrix 2020 09 10 11 03 32 321424 I tensorflow core common runtime gpu gpu device cc 1263 0 1 2020 09 10 11 03 32 321428 I tensorflow core common runtime gpu gpu device cc 1276 0 n n 2020 09 10 11 03 32 321432 I tensorflow core common runtime gpu gpu device cc 1276 1 n n 2020 09 10 11 03 32 321489 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 09 10 11 03 32 321776 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 09 10 11 03 32 322041 I tensorflow core common runtime gpu gpu device cc 1402 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 9842 mb memory physical gpu device 0 name geforce gtx 1080 ti pci bus i d 0000 01 00 0 compute capability 6 1 2020 09 10 11 03 32 366684 I tensorflow core grappler optimizer meta optimizer cc 816 optimization result for grappler item graph to optimize 2020 09 10 11 03 32 366717 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer graph size after 88 node 84 89 edge 86 time 31 342ms 2020 09 10 11 03 32 366721 I tensorflow core grappler optimizer meta optimizer cc 818 function optimizer function optimizer do nothing time 0 045ms 2020 09 10 11 03 32 479552 w tensorflow compiler mlir lite python tf tfl flatbuffer helper cc 313 ignore output format 2020 09 10 11 03 32 479576 w tensorflow compiler mlir lite python tf tfl flatbuffer helper cc 316 ignore drop control dependency 2020 09 10 11 03 32 498762 I tensorflow compiler jit xla gpu device cc 161 ignore visible xla gpu jit device device number be 1 reason invalid argument invalid device ordinal value 1 valid 
range be 0 0 2020 09 10 11 03 32 498945 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 09 10 11 03 32 499265 I tensorflow core common runtime gpu gpu device cc 1716 find device 0 with property pcibusid 0000 01 00 0 name geforce gtx 1080 ti computecapability 6 1 coreclock 1 6325ghz corecount 28 devicememorysize 10 91gib devicememorybandwidth 451 17gib s 2020 09 10 11 03 32 499316 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 09 10 11 03 32 499602 I tensorflow core common runtime gpu gpu device cc 1716 find device 1 with property pcibusid 0000 05 00 0 name geforce gtx 1050 ti computecapability 6 1 coreclock 1 392ghz corecount 6 devicememorysize 3 95gib devicememorybandwidth 104 43gib s 2020 09 10 11 03 32 499622 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudart so 10 1 2020 09 10 11 03 32 499636 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcubla so 10 2020 09 10 11 03 32 499645 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcufft so 10 2020 09 10 11 03 32 499654 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcurand so 10 2020 09 10 11 03 32 499663 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcusolver so 10 2020 09 10 11 03 32 499672 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcusparse so 10 2020 09 10 11 03 32 499681 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudnn so 7 2020 09 10 11 03 32 499718 I tensorflow 
stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero [the same NUMA-node message is repeated several more times, once per device query]
2020-09-10 11:03:32.500966: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1843] Ignoring visible gpu device (device: 1, name: GeForce GTX 1050 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1) with core count: 6. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
2020-09-10 11:03:32.500978: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-09-10 11:03:32.501002: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-10 11:03:32.501007: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0 1
2020-09-10 11:03:32.501011: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N N
2020-09-10 11:03:32.501015: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 1:   N N
2020-09-10 11:03:32.501696: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9842 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
Traceback (most recent call last):
  File "/home/y4tsu/pycharmproject/sg2_nhwc_reduce_cleanup/test_piece.py", line 187, in <module>
    main()
  File "/home/y4tsu/pycharmproject/sg2_nhwc_reduce_cleanup/test_piece.py", line 174, in main
    write_synth_const(g, params, converter)
  File "/home/y4tsu/pycharmproject/sg2_nhwc_reduce_cleanup/test_piece.py", line 81, in write_synth_const
    tflite_model = converter.convert()
  File "/home/y4tsu/anaconda3/envs/tf2.3/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 1076, in convert
    return super(TFLiteConverterV2, self).convert()
  File "/home/y4tsu/anaconda3/envs/tf2.3/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 899, in convert
    return super(TFLiteFrozenGraphConverterV2, ...)
  File "/home/y4tsu/anaconda3/envs/tf2.3/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 638, in convert
    result = self._calibrate_quantize_model(result, *flags)
  File "/home/y4tsu/anaconda3/envs/tf2.3/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 450, in _calibrate_quantize_model
    return calibrate_quantize.calibrate_and_quantize(...)
  File "/home/y4tsu/anaconda3/envs/tf2.3/lib/python3.8/site-packages/tensorflow/lite/python/optimize/calibrator.py", line 95, in calibrate_and_quantize
    return self._calibrator.QuantizeModel(...)
RuntimeError: Mismatch between number of weight maxs and channels: 1 vs 512
I think the quantize op is expecting a shape that it is not receiving, but I have tried changing the dimensionality of tensors at various points and I have not been able to identify the source of the issue. If you have any ideas, please let me know.
tensorflowtensorflow | cuDNN error when passing an all-masked sequence to an RNN layer on GPU | Bug | System information: Have I written custom code: yes. OS platform and distribution: CentOS 8. TensorFlow installed from: binary (pip). TensorFlow version: 2.3.0 (v2.3.0-rc2-23-gb36436b087). Python version: 3.6.8. CUDA/cuDNN version: 10.1 / 7. GPU model and memory: RTX 6000, 24 GB. Describe the current behavior: when passing a boolean mask together with data to an RNN layer running on GPU, a cuDNN error is raised if any row of the mask is made entirely of False values, i.e. if one of the batched input sequences is made entirely of padding values. The raised error is the following: UnknownError: CUDNN_STATUS_BAD_PARAM in tensorflow/stream_executor/cuda/cuda_dnn.cc(1521): 'cudnnSetRNNDataDescriptor(data_desc.get(), data_type, layout, max_seq_length, batch_size, data_size, seq_lengths_array, (void*)&padding_fill)' [Op:CudnnRNNV3]. This may seem like an edge case, but it can be encountered when reshaping a (batch, blocks, sequence, dim) tensor with variable sizes (hence zero-padded blocks of sequences, representing e.g. a long text) into a (batch * blocks, sequence, dim) tensor to be processed at once by an RNN, then reshaping back to (batch, blocks, rnn_dim). I was able to implement a workaround using a custom wrapper class, but this does not feel like a righteous solution. Describe the expected behavior: I would expect the RNN to output some default value (e.g. a vector of zeros) representing the all-padding sequence(s), as it does on CPU. Standalone code to reproduce the issue — minimal code to reproduce the error:

```python
import tensorflow as tf

gru = tf.keras.layers.GRU(128)
rng = tf.random.get_global_generator()
inp = rng.normal((8, 64, 256))
msk = tf.concat(
    [tf.ones((4, 64), dtype=tf.bool), tf.zeros((4, 64), dtype=tf.bool)], axis=0)

# Works.
with tf.device('CPU:0'):
    gru(inp, mask=msk)
# Works.
with tf.device('GPU:0'):
    gru(inp, mask=tf.ones_like(msk))
# Fails.
with tf.device('GPU:0'):
    gru(inp, mask=msk)
```

Workaround I implemented, which unmasks all-padding sequences, thus triggering unrequired computation:

```python
import tensorflow as tf


class SafeRNN(tf.keras.layers.Wrapper):
    """Wrapper for Keras RNN layers avoiding masks causing CUDA errors."""

    def call(self, inputs, mask=None, **kwargs):
        # Run inputs through the wrapped layer.
        if mask is not None:
            valid = tf.reduce_any(mask, axis=1, keepdims=True)
            mask = tf.where(valid, mask, tf.ones_like(mask))
        return self.layer(inputs, mask=mask, **kwargs)

    def compute_mask(self, inputs, mask=None):
        """Return an output mask tensor."""
        if mask is None:
            return None
        return tf.reduce_any(mask, axis=1)


gru = SafeRNN(tf.keras.layers.GRU(128))
rng = tf.random.get_global_generator()
inp = rng.normal((8, 64, 256))
msk = tf.concat(
    [tf.ones((4, 64), dtype=tf.bool), tf.zeros((4, 64), dtype=tf.bool)], axis=0)

# Works.
with tf.device('GPU:0'):
    gru(inp, mask=msk)
```

Other info: this issue is not consistent from one system to the other; it does not trigger on my Linux Mint 19.1 system with the same Python, TensorFlow and CUDA/cuDNN versions but a distinct GPU (Quadro P1000).
tensorflowtensorflow | Graph-mode failure with a mask-fed RNN layer within a train_step loop on GPU | Bug | System information: Have I written custom code: yes. OS platform and distribution: Linux Mint 19.1 / CentOS 8. TensorFlow installed from: binary (pip). TensorFlow version: 2.2.0 (v2.2.0-rc4-8-g2b96f3662b). Python version: 3.6.9. CUDA/cuDNN version: 10.1 / 7. GPU model and memory: Quadro P1000 (4 GB), RTX 6000 (24 GB). Describe the current behavior: model training fails in graph mode when the custom tf.keras.Model.train_step involves all of the following: a loop (except if iterating over an int and not a tensor); a tf.keras.layers.RNN-inheriting layer; and the passing of a mask to said layer, whether implicitly (based on the previous layer's compute_mask output) or by manually passing a deterministic mask (even an all-True one). The raised error has the following format:

```
InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: InstantiateOptions.input_devices must have the same length as the number of arguments: input_devices length = 49 number of arguments = 50
     [[{{node while/body/_1/StatefulPartitionedCall}}]]
     [[while/exit/_59/_46]]
  (1) Invalid argument: InstantiateOptions.input_devices must have the same length as the number of arguments: input_devices length = 49 number of arguments = 50
     [[{{node while/body/_1/StatefulPartitionedCall}}]]
0 successful operations. 0 derived errors ignored. [Op:__inference_train_function]
Function call stack: train_function -> train_function
```

It appears to be raised on the second call to self.get_gradients, i.e. when entering the loop within the custom train_step and passing inputs to the model for the second time. Unresolved, automatically closed issue #39827 appears to be similar to mine. I hereby provide a minimal example to reproduce this issue, which is arguably an edge case but has caused me quite some trouble. In this example, I implement some gradients stacking as part of the custom training step, in a non-general way for the sake of simplicity. Note that in this example, since the number of substeps is fixed, the loop could be rewritten as a Python for loop over an int; then the issue does not arise. In real life, though, I am using a generalized gradients-stacking implementation that requires iterating over a tensor, hence my providing a tf.while_loop-based example here. Describe the expected behavior: I would expect all four test cases to run without triggering an exception, as they do in 2.3 or on CPU, so that proper masking may be used with RNNs within gradients-stacking training loops in graph mode, which is the most desirable setting in terms of both model correctness and code optimization. Standalone code to reproduce the issue: the following script will raise the issue if run on GPU with TF 2.2. Using a GRU layer and/or a Bidirectional wrapper does not alter the issue. Note that it will not raise the issue if run on CPU or with TF 2.3.

```python
# coding: utf-8
"""Minimal example script for an RNN issue within a custom train loop."""

import numpy as np
import tensorflow as tf


class StackedModel(tf.keras.Model):
    """Minimal gradients-stacking example Model subclass."""

    def train_step(self, data):
        # Note: here we assume data is a single (x, y) tuple,
        # in order to provide a minimal example.
        inputs, y_true = data
        size = tf.shape(inputs)[0] // 4
        # Compute gradients on the batch's first quarter.
        gradients = self.get_gradients(inputs[:size], y_true[:size])

        # Define a process to compute and stack gradients.
        def process_quarter(idx, gradients):
            # Compute gradients on a data sub-batch and stack the gradients.
            grads_loc = self.get_gradients(
                inputs[idx * size:(idx + 1) * size],
                y_true[idx * size:(idx + 1) * size])
            gradients = [
                self.add_gradients(a, b) for a, b in zip(gradients, grads_loc)]
            return tf.add(idx, 1), gradients

        # Iteratively process the remaining data quarters,
        # using the former gradients.
        _, gradients = tf.while_loop(
            cond=lambda idx, *_: tf.math.less(idx, 4),
            body=process_quarter,
            loop_vars=[tf.constant(1), gradients],
            parallel_iterations=1)
        # Apply the aggregated gradients.
        grads_and_vars = zip(gradients, self.trainable_variables)
        self.optimizer.apply_gradients(grads_and_vars)
        # Return the current values of the losses and metrics.
        return {m.name: m.result() for m in self.metrics}

    def get_gradients(self, inputs, y_true):
        """Compute gradients for given (x, y) data."""
        with tf.GradientTape() as tape:
            y_pred = self(inputs, training=True)
            loss = self.compiled_loss(y_true, y_pred)
        return tape.gradient(loss, self.trainable_variables)

    @staticmethod
    def add_gradients(grad_a, grad_b):
        """Return the sum of two gradients objects (Tensor or IndexedSlices)."""
        if not isinstance(grad_b, type(grad_a)):
            raise TypeError("Trying to add objects of distinct types.")
        if isinstance(grad_a, tf.Tensor):
            return tf.add(grad_a, grad_b)
        if isinstance(grad_a, tf.IndexedSlices):
            values = tf.concat([grad_a.values, grad_b.values], axis=0)
            indices = tf.concat([grad_a.indices, grad_b.indices], axis=0)
            return tf.IndexedSlices(values, indices, grad_a.dense_shape)


def build_example_model(run_eagerly, avoid_mask):
    """Return a keras Model for binary classification of tokens sequences.

    This model expects input batches of tokens, with zero values being
    treated as padding (and thus masked). An Embedding layer encodes
    the tokens into vectors in R**128, then a LSTM layer produces
    sequence-wise vectors in R**128, which are finally transformed
    into binary probabilities by a Dense layer.
    """
    inputs = tf.keras.Input((None,), dtype=tf.int32)
    emb = tf.keras.layers.Embedding(
        input_dim=200, output_dim=128, mask_zero=True)
    rnn = tf.keras.layers.LSTM(128)
    out = tf.keras.layers.Dense(2, 'softmax')
    embedded = emb(inputs)
    if avoid_mask:
        embedded = rnn(embedded, mask=None)
    else:
        embedded = rnn(embedded)  # mask is passed implicitly
    model = StackedModel(inputs, out(embedded))
    model.compile(loss='binary_crossentropy', run_eagerly=run_eagerly)
    return model


def build_example_dataset():
    """Return a tf.data Dataset of batched, right-padded tokens sequences."""
    # Define a random token sequences generator.
    def generator():
        """Yield sequences of 8 to 32 random ints in [1, 200[, plus a label."""
        sizes = 8 + np.random.choice(24, size=640, replace=True)
        for i in range(640):
            seq = 1 + np.random.choice(199, size=sizes[i], replace=True)
            lab = tf.one_hot(np.random.choice(2), depth=2)
            yield seq, lab
    # Set up and return a Dataset made of batches of 32 padded sequences.
    dst = tf.data.Dataset.from_generator(
        generator, output_shapes=([None], [2]),
        output_types=(tf.int32, tf.float32))
    return dst.padded_batch(32, padded_shapes=([None], [2]))


def main():
    """Minimal demonstration script."""
    dst = build_example_dataset().repeat()
    print('Running eagerly, without mask at LSTM.')
    model = build_example_model(run_eagerly=True, avoid_mask=True)
    model.fit(dst, steps_per_epoch=20, epochs=3)
    print('Running eagerly, with mask at LSTM.')
    model = build_example_model(run_eagerly=True, avoid_mask=False)
    model.fit(dst, steps_per_epoch=20, epochs=3)
    print('Running in graph mode, without mask at LSTM.')
    model = build_example_model(run_eagerly=False, avoid_mask=True)
    model.fit(dst, steps_per_epoch=20, epochs=3)
    print('Running in graph mode, with mask at LSTM. Prepare for failure.')
    model = build_example_model(run_eagerly=False, avoid_mask=False)
    model.fit(dst, steps_per_epoch=20, epochs=3)


if __name__ == '__main__':
    main()
```

Other info: the issue I raise here appears to have been fixed in TensorFlow 2.3.0. Since I only realized that after having spent a few hours tracking down the bug, revising my code and eventually implementing this test case, I still think it is worth reporting. I would like someone to confirm whether this has properly been fixed in 2.3, and if possible I would be interested in some insight as to the issue's initial cause and its solving. I am also wondering whether the fix would be worth backporting to TF 2.2 (e.g. as a version 2.2.1), as updating custom code from 2.2 to 2.3 can require a limited yet non-anecdotal effort.
tensorflowtensorflow | dense_to_ragged_batch fails with a map function implemented with py_function | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 20.04. Mobile device: N/A. TensorFlow installed from: binary. TensorFlow version: 2.3.0 (v2.3.0-rc2-23-gb36436b087). Python version: 3.8.2. Bazel version: N/A. GCC/compiler version: N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A. Describe the current behavior: dense_to_ragged_batch fails when I use a map function implemented with py_function. Describe the expected behavior: dense_to_ragged_batch should work no matter which map function I use. Standalone code to reproduce the issue:

```python
#!/usr/bin/python3
import numpy as np
import tensorflow as tf


def map_function(x):
    image, bbox, label = tf.py_function(
        map_function_impl, inp=[x], Tout=[tf.float32, tf.float32, tf.int32])
    return image, bbox, label


def map_function_impl(x):
    image = np.random.normal(size=(416, 416, 3))
    num_targets = np.random.randint(low=0, high=x)
    bbox = np.random.normal(size=(num_targets, 4))
    label = np.random.randint(low=0, high=10, size=(num_targets,))
    return image, bbox, label


def main():
    dataset = tf.data.Dataset.from_tensor_slices(
        np.random.randint(low=3, high=10, size=(6,)))
    dataset = dataset.map(map_function)
    print(dataset.element_spec[0].shape)
    print(dataset.element_spec[1].shape)
    print(dataset.element_spec[2].shape)
    dataset = dataset.apply(
        tf.data.experimental.dense_to_ragged_batch(batch_size=2))
    for batch in dataset:
        print(batch)


if __name__ == '__main__':
    main()
```
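A workaround reported for similar cases (my suggestion, not a confirmed fix from the TensorFlow team) is to restore the static shape information that tf.py_function discards, so that dense_to_ragged_batch can tell which dimensions are ragged. A sketch using a smaller image size for brevity:

```python
import numpy as np
import tensorflow as tf


def map_function_impl(x):
    # Same random-data generator as above, with a smaller image for brevity.
    image = np.random.normal(size=(4, 4, 3)).astype(np.float32)
    num_targets = int(np.random.randint(low=0, high=int(x)))
    bbox = np.random.normal(size=(num_targets, 4)).astype(np.float32)
    label = np.random.randint(low=0, high=10, size=(num_targets,)).astype(np.int32)
    return image, bbox, label


def map_function(x):
    image, bbox, label = tf.py_function(
        map_function_impl, inp=[x], Tout=[tf.float32, tf.float32, tf.int32])
    # tf.py_function erases static shapes (<unknown>); declaring the known
    # ranks and dimensions lets dense_to_ragged_batch infer the ragged axes.
    image.set_shape([4, 4, 3])
    bbox.set_shape([None, 4])
    label.set_shape([None])
    return image, bbox, label


dataset = tf.data.Dataset.from_tensor_slices(
    np.random.randint(low=3, high=10, size=(6,)))
dataset = dataset.map(map_function)
dataset = dataset.apply(
    tf.data.experimental.dense_to_ragged_batch(batch_size=2))
batches = list(dataset)
```

With the shapes declared, the fully-defined image tensor is batched densely while the variable-length bbox and label components come out as ragged tensors.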
tensorflowtensorflow | Batch training with model.fit does not work for all batch sizes | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Windows. TensorFlow installed from: binary, via Anaconda. TensorFlow version: 2.1.0. Python version: 3.7.9. Describe the current behavior: when the number of samples in the training set is not equal to a factor of the batch size, model.fit will throw an error. On 2.1.0, I get an error saying that the shapes of two operators are incompatible (not sure if this is the expected behavior or not): tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [32,1] vs. [4,1] [[node mul_5 (defined at classify.py:31)]] [Op:__inference_distributed_function_435] Function call stack: distributed_function. It seems like when the function creates the last batch in the epoch, it doesn't have enough samples remaining to create a full batch; as a result, it builds what it can, then errors out. The size of my dataset is 1763, not exactly a neat number. The only other method I have to implement minibatch training is to split the dataset into batches myself and train manually without model.fit. Like I said, if this is the expected behavior, ignore this, probably — but it seems like a hassle, especially if the size of someone's dataset is a prime number, in which case they would be restricted to training either with a batch size of 1 or the size of their full dataset, which seems inconvenient. Describe the expected behavior: the expected behavior for me would be for model.fit to split the data into batches without running out of room on the last batch. Standalone code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential, optimizers, losses
from tensorflow.keras.layers import Input, Dense, Flatten


def make_model(input_shape, output_shape, batch_size=32):
    model = Sequential()
    model.add(Input(shape=input_shape, batch_size=batch_size, name='input'))
    model.add(Flatten())
    model.add(Dense(output_shape))
    model.build((batch_size, *input_shape))
    return model


if __name__ == '__main__':
    batch_size = 32
    input_size = (5, 5)
    output_size = 1
    num_samples = 100
    model = make_model(input_size, output_size, batch_size=batch_size)
    model.compile(
        optimizer=optimizers.Adam(),
        loss=losses.SparseCategoricalCrossentropy(from_logits=True))
    x = np.ones((num_samples, *input_size))
    y = np.zeros((num_samples,))
    model.fit(x, y, batch_size=batch_size, epochs=10)
```

Here's a link to a Colab notebook as well. It seems like in more recent versions of TensorFlow the error is still there, but the exact error that comes up is slightly different.
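For what it's worth, the error goes away when the batch size is not baked into the model's Input: with the batch dimension left as None, model.fit accepts a final partial batch. A minimal sketch of that variant (assuming nothing else in the training setup depends on a static batch dimension; two output classes are used here purely for illustration):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Dense, Flatten


def make_model(input_shape, output_shape):
    model = Sequential()
    # No batch_size argument: the batch dimension stays None, so the
    # final, smaller batch (100 % 32 == 4 samples) is accepted.
    model.add(Input(shape=input_shape, name='input'))
    model.add(Flatten())
    model.add(Dense(output_shape))
    return model


model = make_model((5, 5), 2)
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
x = np.ones((100, 5, 5))
y = np.zeros((100,))
history = model.fit(x, y, batch_size=32, epochs=1, verbose=0)
```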
tensorflowtensorflow | Link doesn't work | Bug | The link for "TensorFlow graph optimization with Grappler" from the URL's "performance comparison" section does not exist: the link points to a page which gives a "page not found" error.
tensorflowtensorflow | Reproduce internal CI error via the TF Lite Micro makefile | Bug | tensorflow/micro: see this comment (issuecomment-689199971) for an internal error due to -Wswitch that we could not reproduce via the TFLM makefile. The underlying issues there are a missing -Wswitch in the makefile and an inability to build without -DTF_LITE_STATIC_MEMORY.
tensorflowtensorflow | model.predict problem when using one data point at a time | Bug | System information: Have I written custom code: yes. OS platform and distribution: docker tensorflow/tensorflow:2.2.0-gpu. TensorFlow version: 2.2.0. Python version: 3.6.9. CUDA/cuDNN version: V10.1.243. GPU model and memory: Titan X Pascal, 12 GB. Describe the current behavior: I am currently trying to learn the sum of 2 digits thanks to Keras. My input is an array of 2 ints, and the output is one value. The problem is in the predict function: when using model.predict with a batch, it works perfectly. However, when trying to predict one data point at a time, model.predict doesn't work as it should and returns a warning: WARNING:tensorflow: Model was constructed with shape (None, 2) for input Tensor("input_1:0", shape=(None, 2), dtype=float32), but it was called on an input with incompatible shape (None, 1). Describe the expected behavior: using the exact same code in docker tensorflow/tensorflow:1.9.0-py3, both methods (batch, or one data point at a time) give the same result. Standalone code to reproduce the issue |
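The warning suggests the single sample is being passed without its batch dimension. A minimal sketch (the two-digit model below is a stand-in of my own, not the reporter's code) of the shape that predict expects:

```python
import numpy as np
import tensorflow as tf

# Stand-in for the reporter's model: input is an array of 2 digits.
inputs = tf.keras.Input(shape=(2,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs, outputs)

batch = np.array([[1.0, 2.0], [3.0, 4.0]])  # shape (2, 2): two samples
single = np.array([[3.0, 4.0]])             # shape (1, 2): one sample, batch dim kept
# Passing a (2,) vector or a (2, 1) column instead is what triggers the
# "called on an input with incompatible shape (None, 1)" warning.
batch_pred = model.predict(batch, verbose=0)
single_pred = model.predict(single, verbose=0)
```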
tensorflowtensorflow | pylint incorrectly identifies TensorFlow public API functions in TensorFlow 2.2 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): apart from the example below, no. OS platform and distribution: Linux Ubuntu 16.04, OSX 10.15.6. Mobile device: x. TensorFlow installed from: binary. TensorFlow version: 2.2. Python version: 3.7.4. Bazel version: x. GCC/compiler version: x. CUDA/cuDNN version: x. GPU model and memory: x. Describe the current behavior: when using pylint, a lot of functions from the public API are misidentified. For example, the public API function tf.split, which is publicly defined as tensorflow.python.ops.array_ops.split, is misidentified (I think) as tensorflow.python.ops.gen_array_ops.split. Other examples include tf.random.uniform, tf.concat, and the list goes on. Let's take the code snippet below (example.py):

```python
import tensorflow as tf

tensor = tf.random.uniform((2, 4), minval=0, maxval=256)
tf.split(tensor, num_or_size_splits=2, axis=1)
```

This is perfectly fine code; it runs as expected. However, when running pylint, both functions are misidentified and a lot of linter errors are raised. Running pylint on the module (pylint -E example.py) gives:

```
example.py:3:9: E1123: Unexpected keyword argument 'minval' in function call (unexpected-keyword-arg)
example.py:3:9: E1123: Unexpected keyword argument 'maxval' in function call (unexpected-keyword-arg)
example.py:3:9: E1120: No value for argument 'dtype' in function call (no-value-for-parameter)
example.py:4:0: E1123: Unexpected keyword argument 'num_or_size_splits' in function call (unexpected-keyword-arg)
example.py:4:0: E1124: Argument 'axis' passed by position and keyword in function call (redundant-keyword-arg)
example.py:4:0: E1120: No value for argument 'value' in function call (no-value-for-parameter)
example.py:4:0: E1120: No value for argument 'num_split' in function call (no-value-for-parameter)
```

Describe the expected behavior: running pylint -E example.py should not give any errors. Standalone code to reproduce the issue:

```python
import tensorflow as tf

tensor = tf.random.uniform((2, 4), minval=0, maxval=256)
tf.split(tensor, num_or_size_splits=2, axis=1)
```

Other info/logs: this occurs with TensorFlow 2.2.0; previous versions of TensorFlow 2.x (e.g. tensorflow 2.1.x and tensorflow 2.0.x) do not have these problems. I have used pylint 2.6.0 for the example, but previous versions have the same behaviour. One of the things that might have caused this (just a guess) is the upgrade to a new version of gast that occurred in TensorFlow 2.2, where they went from gast 0.2.2 to gast 0.3.3. Now, I know that this issue is not a code-breaking issue, but it is a workflow-breaking issue when using TensorFlow in a professional setting. For example, one of the requirements for passing all steps in CI may be running pylint, which now fails. Pylint allows disabling errors for specific third-party packages, so really the only solutions are to add a pylint disable comment every time you use a TensorFlow function that is misidentified, or to disable pylint for the project altogether; both options aren't desirable. This issue was also raised in the pylint repo (and is probably also related to it), but I don't think these issues belong in the pylint repo (or astroid, for that matter) but here in the TensorFlow repo, as it's probably caused by the import structure of TensorFlow. One lead might be that a wildcard import overwrites functions; an example might be the wildcard import in L40, which overwrites tensorflow.python.ops.array_ops.split with the star-imported tensorflow.python.ops.gen_array_ops.split. I'm not sure, but not performing wildcard imports might solve this linter problem.
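As a stopgap until the root cause is fixed, pylint's type checker can be told not to resolve tensorflow members at all. The option below is a standard pylint setting; applying it to tensorflow is my suggestion, not an official recommendation, and it trades the false positives for losing genuine checks on tf.* calls:

```ini
# .pylintrc
[TYPECHECK]
# Skip member/signature resolution for tensorflow, avoiding the
# misidentified gen_*_ops signatures (loses real checks on tf.* calls).
ignored-modules=tensorflow
```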
tensorflowtensorflow | TensorFlow 2 tutorials: actor-critic method | Bug | URL(s) with the issue: [the tutorial URL]. Description of issue (what needs changing): clear description: in the second cell, where we want to install additional packages for visualization, `sudo apt-get update` should perhaps be added before the line `sudo apt-get install -y xvfb python-opengl > /dev/null 2>&1`, so that the xvfb package can be installed normally. Usage example: correct the second cell as below:

```bash
sudo apt-get update
sudo apt-get install -y xvfb python-opengl > /dev/null 2>&1
pip install pyvirtualdisplay > /dev/null 2>&1
pip install git > /dev/null 2>&1
```
tensorflowtensorflow | Same variable name for tf.feature_column.embedding_column embedding tensors in TF 2.3.0 | Bug | System information: Have I written custom code: yes. OS platform and distribution: 64-bit Linux. TensorFlow installed from: source. TensorFlow version: v2.3.0-rc2-23-gb36436b087 (2.3.0). Python version: 3.6.10. CUDA compilation tools: release 10.0, V10.0.130. Describe the current behavior: in TF 2.1.0, when I use tf.feature_column.embedding_column, the variable name of the embedding variable is derived from the categorical column name. Example — if I create embedding columns using categorical columns:

```python
feature_alpha = tf.feature_column.categorical_column_with_hash_bucket(
    'feature_alpha', 100, dtype=tf.dtypes.string)
alpha_emb = tf.feature_column.embedding_column(feature_alpha, dimension=10)
feature_beta = tf.feature_column.categorical_column_with_hash_bucket(
    'feature_beta', 200, dtype=tf.dtypes.string)
beta_emb = tf.feature_column.embedding_column(feature_beta, dimension=20)
```

then the embedding variable for feature_alpha is named dense_features/feature_alpha_embedding/embedding_weights:0 and the embedding variable for feature_beta is named dense_features/feature_beta_embedding/embedding_weights:0; the variable names contain the categorical column names feature_alpha and feature_beta. In TF 2.3.0, the embedding variable for feature_alpha is named dense_features/embedding_weights:0, and the embedding variable for feature_beta is also named dense_features/embedding_weights:0. This works fine if there is only a single feature column, but when there are multiple feature columns, the embedding variables are assigned the same name, which creates problems while saving the model: I get the error RuntimeError: Unable to create link (name already exists) while saving the model, since the embedding variables for feature_alpha and feature_beta have the same variable name. Describe the expected behavior: the expected behaviour is what we observe in TF 2.1.0. Standalone code to reproduce the issue — this code works in TF 2.1.0 but fails in TF 2.3.0 with the error RuntimeError: Unable to create link (name already exists), due to the same variable name:

```python
import tensorflow as tf

inputs = {
    'feature_alpha': tf.keras.layers.Input(
        name='feature_alpha', shape=(None,), sparse=True,
        dtype=tf.dtypes.string),
    'feature_beta': tf.keras.layers.Input(
        name='feature_beta', shape=(None,), sparse=True,
        dtype=tf.dtypes.string),
}


def gen_model(inputs):
    feature_alpha = tf.feature_column.categorical_column_with_hash_bucket(
        'feature_alpha', 100, dtype=tf.dtypes.string)
    feature_beta = tf.feature_column.categorical_column_with_hash_bucket(
        'feature_beta', 200, dtype=tf.dtypes.string)
    alpha_emb = tf.feature_column.embedding_column(feature_alpha, dimension=10)
    beta_emb = tf.feature_column.embedding_column(feature_beta, dimension=20)
    out = tf.keras.layers.DenseFeatures([alpha_emb, beta_emb])(inputs)
    model = tf.keras.Model(inputs, out)
    return model


model = gen_model(inputs)
print(model.trainable_variables)
model.save('mdl.h5')
```
tensorflowtensorflow | ValueError: Tensor-typed variable initializers must either be wrapped in an init_scope or callable, under TPUStrategy | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 16.04. TensorFlow installed from: binary. TensorFlow version: 2.3.0. Python version: 3.7. Bazel version: N/A. GCC/compiler version: N/A. CUDA/cuDNN version: no. GPU model and memory: no. Describe the current behavior: I use the add_weight function to add variables to a model under TPUStrategy, but it shows the following:

```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
    256   except Exception as e:  # pylint:disable=broad-except
    257     if hasattr(e, 'ag_error_metadata'):
--> 258       raise e.ag_error_metadata.to_exception(e)
    259     else:
    260       raise

ValueError: in user code:

    :38 call
        target_feats = self.build_parents(...)
    :26 build_parents
        weights = [tf.nn.relu(tf.cast(self.add_weight(
            initializer=tf.ones_initializer,
            name='block{}_fusion{}'.format(1, j)), dtype=dtype))
            for j in range(len(parents))]
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:614 add_weight
        caching_device=caching_device)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py:750 _add_variable_with_custom_getter
        **kwargs_for_getter)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py:145 make_variable
        shape=variable_shape if variable_shape else None)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:260 __call__
        return cls._variable_v1_call(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:221 _variable_v1_call
        shape=shape)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:67 getter
        return captured_getter(captured_previous, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2024 creator_with_resource_vars
        created = self._create_variable(next_creator, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/tpu_strategy.py:870 _create_variable
        **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_utils.py:291 create_mirrored_variable
        value_list = real_mirrored_creator(**kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/tpu_strategy.py:861 _real_mirrored_creator
        v = next_creator(**kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:199 <lambda>
        previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variable_scope.py:2597 default_variable_creator
        shape=shape)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:264 __call__
        return super(VariableMetaclass, cls).__call__(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1518 __init__
        distribute_strategy=distribute_strategy)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/resource_variable_ops.py:1601 _init_from_args
        raise ValueError("Tensor-typed variable initializers must either be "

    ValueError: Tensor-typed variable initializers must either be wrapped in an init_scope or callable (e.g., `tf.Variable(lambda : tf.truncated_normal([10, 40]))`) when building functions. Please file a feature request if this restriction inconveniences you.
```

Describe the expected behavior: the new variable can be successfully added to the model. Standalone code to reproduce the issue: the standalone code can be seen here [link]. This code can successfully run on GPU/CPU but gets an error on TPU. Other info/logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback.
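The ValueError's own message points at a fix: under a distribution strategy, a variable initializer must be a callable rather than a concrete tensor. A minimal sketch of that change (the layer and weight names here are illustrative stand-ins, not the issue's actual model):

```python
import tensorflow as tf


class FusionLayer(tf.keras.layers.Layer):
    """Illustrative layer that adds a scalar weight in build()."""

    def build(self, input_shape):
        # Passing a callable (here a lambda; an initializer instance such
        # as tf.ones_initializer() also qualifies) rather than a concrete
        # tensor like tf.ones(()) avoids the "Tensor-typed variable
        # initializers must either be wrapped in an init_scope or
        # callable" error raised under TPUStrategy.
        self.block_fusion = self.add_weight(
            name='block_fusion_0',
            shape=(),
            initializer=lambda shape, dtype: tf.ones(shape, dtype))

    def call(self, x):
        return x * tf.nn.relu(self.block_fusion)


layer = FusionLayer()
out = layer(tf.ones((2, 3)))  # weight initializes to 1.0, relu(1.0) == 1.0
```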
tensorflow/tensorflow | Conv1D (and probably all other conv layers) with dilation_rate > 1 does not reliably handle changes in input size | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub (tag: bug_template). System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): the bug is present with tensorflow 2.1 and tensorflow 2.2 from Anaconda. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04 and Debian 4.9.189-3+deb9u2 (2019-11-11). Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: not mobile. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): unknown 2.1.0 and unknown 2.2.0. Python version: 3.7. Bazel version (if compiling from source): -. GCC/compiler version (if compiling from source): -. CUDA/cuDNN version: 10.1.243 / 7.6.5. GPU model and memory: GeForce GTX 1050 Ti. You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with: 1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)" 2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)". Describe the current behavior: this extremely simple sequence of calls to a Conv1D layer fails on the third call:

    import numpy as np
    import tensorflow as tf

    cc = tf.keras.layers.Conv1D(1, 3, padding="same", dilation_rate=3)
    res1 = cc(np.zeros((1, 100, 1), dtype=np.float32))
    res2 = cc(np.zeros((1, 101, 1), dtype=np.float32))
    res3 = cc(np.zeros((1, 100, 1), dtype=np.float32))

Describe the expected behavior: the third call should run similarly to the others. Standalone code to reproduce the issue: see above. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback (large logs and files should be attached). Error message (res1 and res2 are computed, then):

    2020-09-06 00:05:58.405706: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at spacetobatch_op.cc:219 : Invalid argument: padded_shape[0]=107 is not divisible by block_shape[0]=3
    Traceback (most recent call last):
      File "test_conv_dila.py", line 12, in <module>
        res3 = cc(np.zeros((1, 100, 1), dtype=np.float32))
      File "/data/anasynth/anaconda3/envs/tf2.2/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 968, in __call__
        outputs = self.call(cast_inputs, *args, **kwargs)
      File "/data/anasynth/anaconda3/envs/tf2.2/lib/python3.7/site-packages/tensorflow/python/keras/layers/convolutional.py", line 207, in call
        outputs = self._convolution_op(inputs, self.kernel)
      File "/data/anasynth/anaconda3/envs/tf2.2/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1106, in __call__
        return self.conv_op(inp, filter)
      File "/data/anasynth/anaconda3/envs/tf2.2/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 638, in __call__
        return self.call(inp, filter)
      File "/data/anasynth/anaconda3/envs/tf2.2/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 621, in _with_space_to_batch_call
        input=inp, block_shape=dilation_rate, paddings=paddings)
      File "/data/anasynth/anaconda3/envs/tf2.2/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 9491, in space_to_batch_nd
        _ops.raise_from_not_ok_status(e, name)
      File "/data/anasynth/anaconda3/envs/tf2.2/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 6653, in raise_from_not_ok_status
        six.raise_from(core._status_to_exception(e.code, message), None)
      File "<string>", line 3, in raise_from
    tensorflow.python.framework.errors_impl.InvalidArgumentError: padded_shape[0]=107 is not divisible by block_shape[0]=3 [Op:SpaceToBatchND]
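A rough sketch of the divisibility constraint behind the error message (with an assumption about the mechanism: the layer appears to reuse the SpaceToBatchND paddings computed for the previous input length). The "same" padding for a dilated kernel plus the round-up to a multiple of the block shape (the dilation rate) can be computed in plain Python:

```python
def required_padding(length, dilation_rate, kernel_size):
    """Total padding so that a 'same'-padded input length is divisible
    by the dilation rate (SpaceToBatchND's block shape)."""
    effective_kernel = (kernel_size - 1) * dilation_rate + 1
    pad = effective_kernel - 1                 # 'same' padding for the dilated kernel
    pad += (-(length + pad)) % dilation_rate   # round up to a multiple of the block shape
    return pad

# Paddings computed for length 101: 101 + 7 = 108, divisible by 3 -- fine.
print(101 + required_padding(101, 3, 3))  # 108
# Reusing those paddings on a length-100 input gives 100 + 7 = 107,
# which is exactly the "padded_shape[0]=107 is not divisible by
# block_shape[0]=3" in the traceback above.
print(100 + required_padding(101, 3, 3))  # 107
```

Recomputing the padding per call (100 + required_padding(100, 3, 3) = 108) satisfies the constraint, which is consistent with the first call on a length-100 input succeeding.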
tensorflow/tensorflow | Large number is multiplied with the embedding output | Bug | URL(s) with the issue: the URL with the issue is the Transformer tutorial in the TensorFlow documentation. Description of issue (what needs changing, clear description): while I was checking the Transformer code from this TensorFlow documentation, in the Encoder subsection (under the "Encoder and decoder" section) I found that, before adding the positional encoding, the square root of d_model is multiplied with the embedding output. I wonder what the specific reason is behind multiplying the embedding output with such a large number. Attached is a screenshot of the specific code snippet from the link.
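A common rationale for this scaling (an assumption here, not stated in the tutorial itself): embedding vectors are typically initialized with variance around 1/d_model, so their entries are much smaller than the positional-encoding entries, which lie in [-1, 1]. Multiplying by sqrt(d_model) restores the embedding to roughly unit variance so the positional signal does not drown it out. A NumPy sketch of the magnitudes involved:

```python
import numpy as np

d_model = 512
rng = np.random.default_rng(0)

# Embedding entries drawn with variance ~1/d_model (a typical initializer).
emb = rng.normal(0.0, 1.0 / np.sqrt(d_model), size=(1000, d_model))

# After the sqrt(d_model) scaling the entries have ~unit variance,
# comparable to the positional encoding's [-1, 1] range.
scaled = emb * np.sqrt(d_model)
```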
tensorflow/tensorflow | MicroInterpreter tensors_size always returns zero | Bug | TensorFlow Micro. System information: Host OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Fedora 32. TensorFlow installed from (source or binary): source. TensorFlow version (commit SHA if source): 238981a91a9b780ab4449829469a71c5d668a273. Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): x86. Describe the problem: the tensors_size method of the MicroInterpreter class always returns zero. Please provide the exact sequence of commands/steps when you ran into the problem: to demonstrate, insert the following line

    TF_LITE_MICRO_EXPECT_GT(static_cast<size_t>(0), interpreter.tensors_size());

here (micro_interpreter_test.cc, L87) and run

    make -f tensorflow/lite/micro/tools/make/Makefile TARGET=x86 test_micro_interpreter_test
tensorflow/tensorflow | CMSIS-NN: incorrect flags for non-DSP/MVE processors (convolution op only) | Bug | TensorFlow Micro. System information: Host OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04. TensorFlow installed from (source or binary): source. TensorFlow version (commit SHA if source): 5a16264ba6f12883726d12d484d4cd61405ddab7. Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): ARM internal testing using a model for Cortex-M3. Describe the problem: CalculateOpData in cmsis-nn/conv.cc is inside a #if defined(__ARM_FEATURE_DSP) / defined(__ARM_FEATURE_MVE) guard, and those flags are not set for a Cortex-M3 or similar cores that do not have the DSP or MVE extensions. This results in uninitialized quantization parameters being passed on to the convolution API in the Eval procedure. This could result in an assert in debug builds, or at least incorrect output from the convolution. Please provide the exact sequence of commands/steps when you ran into the problem: just a visual code check.
tensorflow/tensorflow | More details on the reduce_codesize tag used for ARC | Bug | TensorFlow Micro. This came up during the review for PR #42020, in this comment: discussion_r483160984. The TFLM team would like to better understand what this reduce_codesize tag is doing in micro/examples/micro_speech/arc_emsdp/Makefile (topic of discussion). In the current design, tags are mostly meant as a way to allow for multiple optimized kernel implementations, and while not enforced, the expectation is that each tag has a corresponding directory in micro/kernels. We are planning on making some changes in the interest of being able to register different kernel variants that might be useful for this, instead of what appears to be a find-and-replace.
tensorflow/tensorflow | TensorFlow Lite Micro: make it possible to specify optimization level when building with the Makefile | Bug | TensorFlow Micro. It would be useful to be able to manually set the optimization level for the different BUILD_TYPEs from the make command. My idea is that the default optimization flags should stay as they are now (none for BUILD_TYPE=debug, -O3 for BUILD_TYPE=release), but if the user wants to use a different optimization level, that should be configurable. My proposed fix for this is to change tensorflow/lite/micro/tools/make/Makefile as can be seen in makefile.txt. If it looks OK, I can submit a pull request.
tensorflow/tensorflow | Aborted (core dumped): TF 2.3 TFLite model converts but crashes on invoke | Bug | Hey there, I'm trying to convert StyleGAN2 to TFLite. Currently there seems to be an issue with invoking the synthesis block of the generator, so I've included a small model here (just one synthesis block) to display the issue. It converts fine in TF 2.3 with no issues; then, on attempting to invoke, I get the exit code "Process finished with exit code 134 (interrupted by signal 6: SIGABRT)" in PyCharm, and "Aborted (core dumped)" if run from the terminal. Judging from the graph in Netron, there seem to be 2 flex ops (Conv2D on 4-dim tensors in NCHW format), and everything else is built-in ops. System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04. TensorFlow installed from (source or binary): source (conda). TensorFlow version (or github SHA if from source): converted with TF 2.3; tried to invoke on TF 2.3 and tf-nightly. Command used to run the converter, or code if you're using the Python API: here's a Colab notebook which downloads the tflite model and attempts to invoke it (link). Link to tflite model (link). Link to SavedModel (link). Logs from conversion: conversion_log.txt. Here is the code used to convert the model:

    converter = tf.lite.TFLiteConverter.from_keras_model(synth_const)
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                           tf.lite.OpsSet.SELECT_TF_OPS]
    tflite_model = converter.convert()
    with tf.io.gfile.GFile('synth_const.tflite', 'wb') as f:
        f.write(tflite_model)

Attempted on both the SavedModel and the tf.keras functional model. The output from the converter invocation, without GPU:

    2020-09-03 13:00:19.623097: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
    INFO: Created TensorFlow Lite delegate for select TF ops.
    2020-09-03 13:00:20.461562: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2020-09-03 13:00:20.490338: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3199980000 Hz
    2020-09-03 13:00:20.490824: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55c893461c80 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2020-09-03 13:00:20.490859: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
    2020-09-03 13:00:20.495591: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
    2020-09-03 13:00:20.508318: E tensorflow/stream_executor/cuda/cuda_driver.cc:314] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
    2020-09-03 13:00:20.508384: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: y4tsu-pc
    2020-09-03 13:00:20.508399: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: y4tsu-pc
    2020-09-03 13:00:20.508513: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 450.66.0
    2020-09-03 13:00:20.508567: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 450.66.0
    2020-09-03 13:00:20.508580: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 450.66.0
    INFO: TfLiteFlexDelegate delegate: 2 nodes delegated out of 25 nodes with 2 partitions.
    Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

With GPU:

    2020-09-03 13:03:30.866202: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
    INFO: Created TensorFlow Lite delegate for select TF ops.
    2020-09-03 13:03:31.687743: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2020-09-03 13:03:31.710426: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3199980000 Hz
    2020-09-03 13:03:31.711119: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x564ecd17eb30 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2020-09-03 13:03:31.711173: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
    2020-09-03 13:03:31.873839: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2020-09-03 13:03:31.887313: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x564ecd21f560 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
    2020-09-03 13:03:31.887326: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
    2020-09-03 13:03:31.887331: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (1): GeForce GTX 1050 Ti, Compute Capability 6.1
    2020-09-03 13:03:31.887995: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.6325GHz coreCount: 28 deviceMemorySize: 10.91GiB deviceMemoryBandwidth: 451.17GiB/s
    2020-09-03 13:03:31.888337: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 1 with properties: pciBusID: 0000:05:00.0 name: GeForce GTX 1050 Ti computeCapability: 6.1 coreClock: 1.392GHz coreCount: 6 deviceMemorySize: 3.95GiB deviceMemoryBandwidth: 104.43GiB/s
    2020-09-03 13:03:31.889727: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
    2020-09-03 13:03:31.891078: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
    2020-09-03 13:03:31.891344: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
    2020-09-03 13:03:31.892774: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
    2020-09-03 13:03:31.893593: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
    2020-09-03 13:03:31.896654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
    2020-09-03 13:03:31.898550: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1843] Ignoring visible gpu device (device: 1, name: GeForce GTX 1050 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1) with core count: 6. The minimum required count is 8. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
    2020-09-03 13:03:31.898559: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
    2020-09-03 13:03:32.301209: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
    2020-09-03 13:03:32.301234: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0 1
    2020-09-03 13:03:32.301241: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N N
    2020-09-03 13:03:32.301244: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 1:   N N
    2020-09-03 13:03:32.302626: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9647 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
    INFO: TfLiteFlexDelegate delegate: 2 nodes delegated out of 25 nodes with 2 partitions.
    Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

Any ideas what's going on, or a possible fix for this? Should I be changing some of the ops to fit TFLite better in some way?
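One possible direction (an assumption: that the two flex ops are the NCHW Conv2Ds mentioned above): TFLite's builtin CONV_2D only supports the NHWC layout, so transposing the tensors feeding those convolutions to NHWC may let the converter emit builtin ops instead of flex ops. The layout change itself is a plain transpose:

```python
import numpy as np

# NCHW activation as StyleGAN2's synthesis block uses internally.
x_nchw = np.zeros((1, 3, 64, 64), dtype=np.float32)   # N, C, H, W

# NHWC layout expected by TFLite's builtin CONV_2D.
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))            # N, H, W, C
```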
tensorflow/tensorflow | Metrics do not consider sample_weight | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): provided below. OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -. TensorFlow installed from (source or binary): Docker. TensorFlow version (use command below): v2.3.0-rc2-23-gb36436b087 2.3.0. Python version: 3.7. Describe the current behavior: when sample_weight is provided, metrics do not take it into account. Describe the expected behavior: when sample_weight is provided, metrics should take it into account. Standalone code to reproduce the issue:

    import tensorflow as tf
    import numpy as np
    from sklearn.metrics import log_loss

    data_size = 100
    input_size = 3
    classes = 2
    x_train = np.random.rand(data_size, input_size)
    y_train = np.random.randint(0, classes, data_size)
    x_val = np.random.rand(data_size, input_size)
    y_val = np.random.randint(0, classes, data_size)
    inputs = tf.keras.layers.Input(shape=input_size)
    preds = tf.keras.layers.Dense(1, activation='sigmoid')(inputs)
    model = tf.keras.models.Model(inputs=inputs, outputs=preds)
    loss = tf.keras.losses.binary_crossentropy
    metrics = [tf.keras.metrics.BinaryCrossentropy()]
    model.compile(loss=loss, metrics=metrics, optimizer='adam')
    for layer in model.layers:
        layer.trainable = False
    sample_weight_train = np.random.uniform(0, 1, 100)
    sample_weight_val = np.random.uniform(0, 1, 100)
    model.fit(x=x_train, y=y_train, sample_weight=sample_weight_train,
              validation_data=(x_val, y_val, sample_weight_val))
    # 29ms/step - loss: 0.3799 - binary_crossentropy: 0.7369 - val_loss: 0.3454 - val_binary_crossentropy: 0.7502

    pred1 = model.predict(x_val)
    log_loss(y_val, pred1, sample_weight=sample_weight_val)
    log_loss(y_val, pred1)
    log_loss(y_val, pred1, sample_weight=sample_weight_val) * np.sum(sample_weight_val) / len(sample_weight_val)
    # 0.7352043427636902
    # 0.7502077868580819
    # 0.34536829505647026

Is it expected behaviour, or am I missing something?
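The numbers in the report are consistent with the metric ignoring the weights (val_binary_crossentropy matches the unweighted log loss) while the loss does use them. A NumPy sketch of the weighted-mean convention the report expects the metric to follow (sum(w*l) / sum(w), matching sklearn's log_loss with sample_weight):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, sample_weight=None):
    """Binary cross-entropy with an optional weighted mean over samples."""
    eps = 1e-7
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    per_sample = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    if sample_weight is None:
        return float(per_sample.mean())
    w = np.asarray(sample_weight, dtype=float)
    # Weighted mean: sum(w_i * l_i) / sum(w_i)
    return float(np.sum(w * per_sample) / np.sum(w))
```

By contrast, a quantity of the form sum(w*l) / N would match the reported val_loss of 0.3454 (the third print above), which is the normalization Keras appears to apply to the loss.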
tensorflow/tensorflow | Non-responsive model when building micro_speech with CMSIS-NN | Bug | TensorFlow Micro. System information: Host OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10. TensorFlow installed from (source or binary): source. TensorFlow version (commit SHA if source): d3cdadd. Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): Mbed OS, STM32F746. Describe the problem: trying to build the micro_speech application for the STM32F746 (also seen with the NXP FRDM-K66F) with the CMSIS-NN optimized kernels results in an application which is non-responsive to input: each iteration of the model results in equal average scores for each category (64, 64, 64, 64), such that no predicted command is ever displayed. The same network response is seen when supplying feature data from yes_micro_features_data.h and no_micro_features_data.h into the model. This behavior is only seen when compiling for release mode; when compiling with debug mode, the application is responsive to input data and seems to be somewhat able to detect the spoken words "yes" and "no". When generating the application without the cmsis-nn tag, the application runs fine in both release and debug mode. Please provide the exact sequence of commands/steps when you ran into the problem:

    make -f tensorflow/lite/micro/tools/make/Makefile TARGET=mbed TAGS="cmsis-nn disco_f746ng" generate_micro_speech_mbed_project
    # in the location of the generated project:
    mbed config root .
    mbed deploy
    mbed compile -m DISCO_F746NG -t GCC_ARM --profile release
    # flash
tensorflow/tensorflow | Failed to convert SparseTensor to Tensor | Bug | I'm using TensorFlow 2.3.0 and Keras 2.4.3 on Ubuntu 20.04. The code worked OK on TensorFlow 2.1.0 and Keras 2.3.1.

    class Args:
        def __init__(self, dataset='dataset/train', mymodel='output/my_model.h5',
                     le='output/le.pickle', embedding='output/embedding.pickle',
                     image_out='dataset/test_img/test.jpg', image_in='dataset/test/001.jpg',
                     video_out='dataset/video_output/test.mp4',
                     video_in='dataset/video_input/ok_arya_stark.mp4',
                     image_size='112,112', model='model/arcface_r100_v1/model,0',
                     ga_model='', detector='', gpu=0, det=0, flip=0, threshold=1.24):
            self.dataset = dataset
            self.mymodel = mymodel
            self.le = le
            self.embedding = embedding
            self.image_out = image_out
            self.image_in = image_in
            self.video_out = video_out
            self.video_in = video_in
            self.image_size = image_size
            self.model = model
            self.ga_model = ga_model
            self.detector = detector
            self.gpu = gpu
            self.det = det
            self.flip = flip
            self.threshold = threshold

        def init_parsearge(self):
            ap = argparse.ArgumentParser('Arguments of InsightFace')
            ap.add_argument('--dataset', default=self.dataset, help='Path to training dataset')
            ap.add_argument('--mymodel', default=self.mymodel, help='Path to recognizer model')
            ap.add_argument('--le', default=self.le, help='Path to label encoder')
            ap.add_argument('--embedding', default=self.embedding, help='Path to embeddings')
            ap.add_argument('--image-out', default=self.image_out, help='Path to output image')
            ap.add_argument('--image-in', default=self.image_in, help='Path to output image')
            ap.add_argument('--video-out', default=self.video_out, help='Path to output video')
            ap.add_argument('--video-in', default=self.video_in)
            ap.add_argument('--image-size', default=self.image_size, help='')
            ap.add_argument('--model', default=self.model, help='Path to load model')
            ap.add_argument('--ga-model', default=self.ga_model, help='Path to load model')
            ap.add_argument('--detector', default=self.detector, type=str, help='Face detector name')
            ap.add_argument('--gpu', default=self.gpu, type=int, help='GPU id')
            ap.add_argument('--det', default=self.det, type=int, help='MTCNN option, 1 means using R+O, 0 means detect from beginning')
            ap.add_argument('--flip', default=self.flip, type=int, help='Whether to do LR flip aug')
            ap.add_argument('--threshold', default=self.threshold, type=float, help='ver dist threshold')
            args = ap.parse_args()
            return args

    class SoftMax:
        def __init__(self, input_shape, num_classes):
            self.input_shape = input_shape
            self.num_classes = num_classes

        def build(self):
            from keras.losses import categorical_crossentropy
            from keras.models import Sequential
            from keras.optimizers import Adam
            from keras.layers import Dense, Dropout

            # Create model
            model = Sequential()
            # Add model layers
            model.add(Dense(1024, activation='relu', input_shape=self.input_shape))
            model.add(Dropout(0.5))
            model.add(Dense(1024, activation='relu'))
            model.add(Dropout(0.5))
            model.add(Dense(self.num_classes, activation='softmax'))
            # Loss and optimizer
            optimizer = Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, amsgrad=False)
            model.compile(loss=categorical_crossentropy, optimizer=optimizer, metrics=['accuracy'])
            return model

    def make_model(args):
        # Classifier: softmax. Load the face embedding data.
        data = pickle.loads(open(args.embedding, 'rb').read())
        num_classes = len(np.unique(data['names']))
        ct = ColumnTransformer([('myname', OneHotEncoder(), [0])])
        labels = np.array(data['names']).reshape(-1, 1)
        labels = ct.fit_transform(labels)
        embeddings = np.array(data['embeddings'])
        # Initialize softmax training model arguments
        BATCH_SIZE = 32
        EPOCHS = 32
        input_shape = embeddings.shape[1]
        # Build classifier
        init_classifier = SoftMax(input_shape=(input_shape,), num_classes=num_classes)
        model = init_classifier.build()
        # Create KFold CV
        cv = KFold(n_splits=5, random_state=None, shuffle=True)
        history = {'acc': [], 'val_acc': [], 'loss': [], 'val_loss': []}
        # Train
        for train_idx, valid_idx in cv.split(embeddings):
            X_train, X_val, y_train, y_val = embeddings[train_idx], embeddings[valid_idx], labels[train_idx], labels[valid_idx]
            his = model.fit(X_train, y_train, batch_size=BATCH_SIZE, epochs=EPOCHS,
                            verbose=1, validation_data=(X_val, y_val))
        # Write the face recognition model to output
        model.save(args.mymodel)
        f = open(args.le, 'wb')
        f.write(pickle.dumps(LabelEncoder()))
        f.close()

I get the error:

    File "...", line 28, in <module>
      his = model.fit(X_train, y_train, batch_size=BATCH_SIZE, epochs=EPOCHS, verbose=1, validation_data=(X_val, y_val))
    TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. Contents: SparseTensor(indices=Tensor("DeserializeSparse:0", shape=(None, 2), dtype=int64), values=Tensor("DeserializeSparse:1", shape=(None,), dtype=float32), dense_shape=Tensor("stack:0", shape=(2,), dtype=int64)). Consider casting elements to a supported type.

I have used several issues' solutions, but none have worked.
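A likely cause (an assumption from the traceback, not confirmed in the report): the OneHotEncoder inside the ColumnTransformer returns a SciPy sparse matrix, which model.fit ends up wrapping as a SparseTensor it cannot cast. Densifying the labels, e.g. labels = ct.fit_transform(labels).toarray(), is the usual fix. A minimal dense one-hot without scikit-learn:

```python
import numpy as np

def one_hot(labels, num_classes):
    """Dense one-hot encoding: row i has a 1.0 in column labels[i]."""
    labels = np.asarray(labels)
    out = np.zeros((labels.size, num_classes), dtype=np.float32)
    out[np.arange(labels.size), labels] = 1.0
    return out
```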
tensorflow/tensorflow | Bug when using num_parallel_calls when mapping dataset to TFA function | Bug | As mentioned in the issue over here, and advised by other contributors, I'm creating this issue because using num_parallel_calls=tf.data.experimental.AUTOTUNE inside the .map() call on my dataset appears to generate a deadlock. I've tested with TensorFlow versions 2.2 and 2.3 and TensorFlow Addons 0.11.1 and 0.10.0, on a Google Colab Pro GPU env (Python 3.8). Link to example TFRecord: (link). Code to reproduce the issue:

    test_dataset = (tf.data.TFRecordDataset(filenames=['drive/<dir>/tf_issue_test/0.tfrecord'],
                                            num_parallel_reads=tf.data.experimental.AUTOTUNE)
                    .map(parsing_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE))
    test_dataset = (test_dataset
                    .map(translate, num_parallel_calls=tf.data.experimental.AUTOTUNE)
                    .prefetch(tf.data.experimental.AUTOTUNE))
    test_dataset = test_dataset.unbatch()
    iterator = tf.compat.v1.data.make_one_shot_iterator(test_dataset)
    for i in range(5):
        image, label = iterator.get_next()

Auxiliary functions:

    def parsing_fn(serialized):
        features = {'image': tf.io.FixedLenFeature([], tf.string),
                    'label': tf.io.FixedLenFeature([], tf.int64)}
        parsed_example = tf.io.parse_single_example(serialized=serialized, features=features)
        image_raw = parsed_example['image']
        image = tf.io.decode_jpeg(image_raw)
        image = tf.image.resize(image, size=(224, 224))
        label = parsed_example['label']
        return image, label

    translate = lambda image, label: tf.py_function(func=translate_pipeline,
                                                    inp=[image, label],
                                                    Tout=[tf.float32, tf.int64])

    def translate_pipeline(original_image, label):
        print(1)
        height = tf.shape(original_image)[0].numpy()
        width = tf.shape(original_image)[1].numpy()
        y_fraction = tf.convert_to_tensor(height * 0.2, dtype=tf.float32)
        x_fraction = tf.convert_to_tensor(width * 0.2, dtype=tf.float32)
        print(2)
        # Create 4 copies of the original image and add them to a batch
        batched_images = tf.tile(tf.expand_dims(original_image, axis=0), [4, 1, 1, 1])
        translated_images = tfa.image.translate_ops.translate(
            images=batched_images,
            translations=[[-x_fraction, -y_fraction], [x_fraction, -y_fraction],
                          [-x_fraction, y_fraction], [x_fraction, y_fraction]])
        augmented_images = tf.concat([tf.expand_dims(original_image, axis=0), translated_images], axis=0)
        print(3)
        label = tf.reshape(label, [1, 1])
        labels = tf.tile(label, [5, 1])
        print(4)
        return augmented_images, labels

Output (it then hangs):

    1
    1
    2
    2
tensorflow/tensorflow | Deprecated function setdiff1d still used in the TF source code | Bug | System information: I have written custom code. Linux Ubuntu 20.04. TensorFlow 2.3, installed using pip. Python 3.8.2. Current/unexpected behavior: calculating the gradient of the reduce_prod function raises this warning:

    WARNING:tensorflow:From /home/prasanth/.local/pythonuserbase/lib/python3.8/site-packages/tensorflow/python/ops/math_grad.py:297: setdiff1d (from tensorflow.python.ops.array_ops) is deprecated and will be removed after 2018-11-30.
    Instructions for updating: This op will be removed after the deprecation date. Please switch to tf.sets.difference().

Standalone code to reproduce the issue (the warning will only be displayed once in a session):

    import tensorflow as tf

    x = tf.ones(5)
    with tf.GradientTape() as g:
        g.watch(x)
        y = tf.math.reduce_prod(x)
    grad = g.gradient(y, x)
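For context on why the gradient of reduce_prod hits this op at all (a sketch under the assumption that math_grad.py uses it for axis bookkeeping): the gradient needs the set of axes that were not reduced, i.e. all input axes minus the reduction axes, which is exactly the setdiff1d operation. The same set semantics in NumPy:

```python
import numpy as np

all_axes = np.arange(4)        # a rank-4 input
reduced = np.array([0, 2])     # axes passed to reduce_prod
# Elements of all_axes not present in reduced -- the kept axes.
kept = np.setdiff1d(all_axes, reduced)
```

The suggested replacement, tf.sets.difference, computes the same set difference, just with a different (sparse) output representation.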
tensorflow/tensorflow | experimental_steps_per_execution option discarded when loading a model from a checkpoint | Bug | System information: Have I written custom code: yes. OS platform and distribution: Google Colab (TPU). TensorFlow version: 2.3. Python version: 3. Describe the current behavior: I have created and trained a model compiled with the experimental_steps_per_execution option. When I load the model through tf.keras.models.load_model to resume training, the experimental_steps_per_execution option is not taken into consideration, and Keras performs per-step actions after every step instead of after every n steps, as specified in experimental_steps_per_execution when the model was first compiled. Describe the expected behavior: the model should load the experimental_steps_per_execution option, or there should be an option to specify experimental_steps_per_execution when loading the model. If necessary, I can modify an MNIST example on Colab, but to reproduce this bug you just need to: create a model; compile it with the experimental_steps_per_execution option enabled; train it for a few epochs to verify the correct behaviour of experimental_steps_per_execution; save the model; load the model through tf.keras.models.load_model; train the model for a few epochs to verify that experimental_steps_per_execution is being disregarded. Other info / logs: one possible workaround, which I am testing right now, is to set the steps per execution manually with model._configure_steps_per_execution(experimental_steps_per_execution) once the model is loaded. Thank you for your help.
tensorflow/tensorflow | Documentation is becoming too undecipherable for meaningful practical work | Bug | The documentation is becoming too undecipherable for any practical application using TensorFlow. Look at this one: how would anyone learn anything about streaming data from disk? There are not many how-tos for data that doesn't fit into memory, and that page reads just like a man page for an API. We end up wasting many man-hours without good documentation.
tensorflow/tensorflow | TFLite C++/Java: experimental kernel ctc_beam_search_decoder always returns buffer length (length + 1) | Bug | System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: Pixel 2. TensorFlow installed from (source or binary): source. TensorFlow version: master. Python version: 3.8, installed using virtualenv/pip/conda: pip. Bazel version (if compiling from source): 3.1.0. GCC/compiler version (if compiling from source): 9.3.0. CUDA/cuDNN version: -. GPU model and memory: -. NDK: android-ndk-r20. Describe the current behavior: all IntBuffers returned to Java (TFLite Android) from the concrete function decoder.tflite have an extra added byte (0) at the end of the returned dense decode, e.g. [11, 11, 4, 7, 8, 0]. This happens only in TFLite with the experimental kernel ctc_beam_search_decoder.cc, when returning to Java. Describe the expected behavior: the returned dense decode from the concrete function decoder.tflite should be e.g. [11, 11, 4, 7, 8]. If I use the concrete function directly in Python, it works as expected. If I use the concrete function exported as decoder.tflite and load it directly in Python, it works as expected. Standalone code to reproduce the issue:

    git clone <...>
    cd tensorflow
    git checkout -b master 6e9d916229b5aefbdcfd33cbc4b34c9f48b5e6e1
    nano .tf_configure.bazelrc

.tf_configure.bazelrc config:

    build --action_env PYTHON_BIN_PATH="/usr/bin/python"
    build --action_env PYTHON_LIB_PATH="/usr/lib/python3/dist-packages"
    build --python_path="/usr/bin/python"
    build --define with_xla_support=true
    build:opt --copt=-march=native
    build:opt --copt=-Wno-sign-compare
    build:opt --host_copt=-march=native
    build:opt --define with_default_optimizations=true
    build --action_env ANDROID_NDK_HOME="<change to your Android NDK home>"
    build --action_env ANDROID_NDK_API_LEVEL="21"
    build --action_env ANDROID_BUILD_TOOLS_VERSION="28.0.0"
    build --action_env ANDROID_SDK_API_LEVEL="23"
    build --action_env ANDROID_SDK_HOME="<change to your Android SDK home>"
    test --flaky_test_attempts=3
    test --test_size_filters=small,medium
    test --test_tag_filters=-benchmark-test,-no_oss,-oss_serial
    test --build_tag_filters=-benchmark-test,-no_oss
    test --test_tag_filters=-gpu
    test --build_tag_filters=-gpu
    build --action_env TF_CONFIGURE_IOS="0"

and compiled with:

    bazel build --cxxopt='--std=c++14' -c opt --fat_apk_cpu=arm64-v8a,armeabi-v7a --config=monolithic --host_crosstool_top=@bazel_tools//tools/cpp:toolchain //tensorflow/lite/java:tensorflow-lite //tensorflow/lite/java:tensorflow-lite-gpu //tensorflow/lite/delegates/flex:delegate //tensorflow/lite/experimental/kernels:ctc_beam_search_decoder_op /tmp/tensorflow-lite-select-tf-ops

Concrete function exported to decoder.tflite:

    @tf.function
    def decode(logits, top_paths=3, beam_width=3):
        batch_size = tf.shape(input=logits)[0]
        current_timesteps = tf.shape(input=logits)[1]
        seq_len = tf.fill([batch_size], current_timesteps)
        logits = tf.transpose(a=logits, perm=[1, 0, 2])
        decoded, log_probabilities = tf.nn.ctc_beam_search_decoder(
            inputs=logits, top_paths=top_paths, beam_width=beam_width,
            sequence_length=seq_len)
        dense_decoded = tf.sparse.to_dense(decoded[0], default_value=-1)
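One plausible reading (an assumption, not confirmed in the report): tf.sparse.to_dense pads each row of the dense decode out to the allocated output buffer length, so the Java IntBuffer carries one trailing pad element that the Python path happens to hide. If so, stripping trailing pads on the consumer side is a workable interim fix; a minimal sketch:

```python
def strip_trailing_pad(seq, pad_value):
    """Drop only the trailing pad elements, keeping any valid labels
    that happen to equal pad_value in the middle of the sequence."""
    end = len(seq)
    while end > 0 and seq[end - 1] == pad_value:
        end -= 1
    return seq[:end]
```

For example, strip_trailing_pad([11, 11, 4, 7, 8, 0], 0) recovers the expected [11, 11, 4, 7, 8] from the buffer observed above.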
tensorflow/tensorflow | Unexpected behaviour of tf.keras.activations.relu | Bug | System information: TensorFlow version: v2.3.0-0-gb36436b087 2.3.0. Describe the current behavior: when passing np.nan to tf.keras.activations.relu, it returns 0.0. This only happens when using the GPU; the relu activation behaves as expected on the CPU (it returns nan). Describe the expected behavior: when passing np.nan to tf.keras.activations.relu, it should return nan. Standalone code to reproduce the issue: (attached as an image).
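A sketch of the likely mechanism (an assumption, not confirmed in the report): a relu implemented as max(x, 0) where the comparison with NaN evaluates to false yields 0, while a NaN-propagating max yields NaN. NumPy exposes both conventions, which reproduce the two observed results:

```python
import numpy as np

# np.maximum propagates NaN -- matching the CPU relu result.
cpu_like = np.maximum(np.nan, 0.0)

# np.fmax ignores NaN -- matching the 0.0 seen on the GPU.
gpu_like = np.fmax(np.nan, 0.0)
```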
tensorflow/tensorflow | Uninitialized memory access of per-channel params | Bug | TensorFlow Micro. System information: Host OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04. TensorFlow installed from (source or binary): source. TensorFlow version (commit SHA if source): 5a16264ba6f12883726d12d484d4cd61405ddab7. Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): host. Describe the problem: function in question: tflite::PopulateConvolutionQuantizationParams in tensorflow/lite/kernels/kernel_util.cc (L138). Operators in question: conv and depthwise conv. The pointer arguments per_channel_multiplier and per_channel_shift are accessed and written to in all cases; in the non-int8/int16 case, these arguments can be null or uninitialized pointers. The reason it doesn't crash now for the reference kernels is that memory is allocated for the per-channel quantization parameters irrespective of the quantization type. This ticket is for protecting access of the per-channel params in PopulateConvolutionQuantizationParams; once that is done, memory usage for the non-per-channel case can be reduced for TFLu and TFL as an improvement. Please provide the exact sequence of commands/steps when you ran into the problem (simple steps): a simple way is to run the unit tests for conv or depthwise conv and see that the per-channel arguments are accessed and updated in the non-per-channel case: tensorflow/lite/micro/tools/make/gen/linux_x86_64/bin/kernel_depthwise_conv_test. How it was discovered: since it is now possible to dynamically allocate per-channel params in cmsis-nn, conv.cc and depthwise_conv.cc in the cmsis-nn folder were updated based on a PR, with some additional corrections to not allocate per-channel params for uint8 operators. This led to a crash.
tensorflow/tensorflow | converting keras h5 model to tflite fails with error: Windows fatal exception: access violation | Bug | system information: os platform and distribution (e.g. linux ubuntu 16.04): - tensorflow installed from source or binary: - tensorflow version (or github sha if from source): - command used to run the converter, or code if you're using the python api (if possible please share a link to colab/jupyter or any notebook): copy and paste here the exact command. the output from the converter invocation: copy and paste the output here. also please include a link to the saved model or graphdef: put link here or attach to the issue. failure details: if the conversion is successful but the generated model is wrong, state what is wrong: produces wrong results and/or decreased accuracy; produces correct results but the model is slower than expected (model generated from old converter). rnn conversion support: if converting tf rnn to tflite fused rnn ops, please prefix [RNN] in the title. any other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks please include the full traceback. large logs and files should be attached |
tensorflow/tensorflow | conv3d | Bug | system information: os platform and distribution (e.g. linux ubuntu 16.04): - tensorflow installed from source or binary: - tensorflow version (or github sha if from source): - provided the text output from tflite_convert: "Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: CONCATENATION, CONV_2D, PACK, SPLIT, SUM. Here is a list of operators for which you will need custom implementations: Conv3D." standalone code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible please share a link to colab/jupyter or any notebook): tensorflow.keras.layers.Conv3D(3, 3, padding="same")(x). also please include a link to a graphdef or the model if possible. any other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks please include the full traceback. large logs and files should be attached |
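The select-TF-ops fallback named in the converter message can be sketched as follows; the tiny Conv3D model here is a stand-in, since the report does not include the actual model:

```python
# Hedged sketch of the SELECT_TF_OPS fallback the converter message
# suggests; the one-layer Conv3D model is a stand-in for the real one.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv3D(3, 3, padding="same", input_shape=(4, 4, 4, 1)),
])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # use builtins where they exist
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to the TF (flex) runtime
]
tflite_model = converter.convert()
print(type(tflite_model))  # serialized flatbuffer bytes
```

Note that models converted this way need the flex delegate linked into the runtime that executes them.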
tensorflow/tensorflow | tflite micro: resize nearest neighbour test fails for BUILD_TYPE=debug | Bug | system information: os platform and distribution (e.g. linux ubuntu 16.04): linux ubuntu 18.04. mobile device (e.g. iphone 8, pixel 2, samsung galaxy) if the issue happens on a mobile device: - tensorflow installed from source or binary: source. tensorflow version: sha-1 source 44dbace063796ea99c5b3d27a3a5810048d5096c. describe the current behavior: the test fails an assertion in the kernel implementation for BUILD_TYPE=debug builds because the size tensor is provided as a 2D non-constant tensor. describe the expected behavior: the test should pass. standalone code to reproduce the issue: make -f tensorflow/lite/micro/tools/make/Makefile BUILD_TYPE=debug test_kernel_resize_nearest_neighbor_test. a small unified diff patch that correctly sets up the size input tensor as a 1D constant to correct the issue is attached: resize_nearest_neighbor_test.patch.txt |
tensorflow/tensorflow | tf2: cannot convert efficientdet-d0 to saved_model / tflite | Bug | please make sure that this is a bug; as per our github policy we only address code/doc bugs, performance issues, feature requests and build/installation issues on github (tag: bug_template). system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): no. os platform and distribution (e.g. linux ubuntu 16.04): linux arch, but I also get the same issue on colab. tensorflow installed from source or binary: binary (conda). tensorflow version (use command below): 2.2 / 2.3. python version: 3.7. cuda/cudnn version: run on cpu. describe the current behavior: I trained an efficientdet-d0 on a custom dataset using the object detection api. I am able to use the trained model for inference without an issue, and I am able to export this model in the saved_model.pb format, but I get the following error when I try to load the saved model: FailedPreconditionError: Error while reading resource variable efficientdet-d0/bifpn/node_15/2_up_lvl_5/combine/bifpn_combine_weights_81607 from container localhost. This could mean that the variable is uninitialized. Not found: Resource localhost/efficientdet-d0/bifpn/node_15/2_up_lvl_5/combine/bifpn_combine_weights_81607/N10tensorflow3VarE does not exist. [[node StatefulPartitionedCall/efficientdet-d0/bifpn/node_15/2_up_lvl_5/combine_relu/ReadVariableOp]] [Op:__inference_signature_wrapper_35854] Function call stack: signature_wrapper. the error changes even if I run the same code again, but it remains a FailedPreconditionError. I have also tried saving the model I use during inference following the official documentation ("save a custom model"), but that also does not work and gives me this error: KeyError: Failed to add concrete function b'__inference_call_31522' to object-based SavedModel as it captures tensor (tf.Tensor, shape=(), dtype=resource) which is unsupported or non-reachable from root. One reason could be that a stateful object or a variable that the function depends on is not assigned to an attribute of the serialized trackable object (see SaveTest.test_captures_unreachable_variable). finally, I am also unable to convert this model to tflite as outlined here ("convert a concrete function"), as I get this error when I try to make an interpreter using interpreter = tf.lite.Interpreter(model_path=tflite_output_path): ValueError: Model provided has model identifier 'tent', should be 'tfl3'. I am also unable to use ... to export to tflite, as it requires a ckpt file whereas I only have ckpt.index and ckpt.data files. any advice on how I can convert my model to tflite, with any sort of quantization/pruning, would be appreciated |
tensorflow/tensorflow | cmsis-nn conv kernels write outside of allocated memory when num_channels > 256 | Bug | tensorflow micro. system information: host os platform and distribution (e.g. linux ubuntu 16.04): n/a. tensorflow installed from source or binary: source. tensorflow version (commit sha if source): b36436b087bd8e8701ef51718179037cccdfc26e. target platform (e.g. arm, mbed os, arduino nano 33 etc.): st iot discovery kit. describe the problem: the cmsis-nn convolutional and depthwise convolutional layers write into uninitialized memory in CalculateOpData when num_channels is higher than 256 (kMaxChannels, L60). there is no boundary check for this function, and thus no error is thrown when this happens; the non-cmsis-nn kernels have no problem. I can get around this by switching the structures to use dynamic memory:

```cpp
void* per_channel_output_multiplier;
context->AllocatePersistentBuffer(
    context, sizeof(int32_t) * num_channels, &per_channel_output_multiplier);
data->per_channel_output_multiplier = (int32_t*)per_channel_output_multiplier;
```

but this is a temporary object, so I assume I should use the scratch buffer instead; however, I have no idea how to clear the object from the scratch buffer again. please provide the exact sequence of commands/steps when you ran into the problem: the problem shows in the following tflite model; in one of the final layers it has 1280 channels: ei-plant-vs-lamp-transfer-learning-tensorflow-lite-int8-quantized-model.lite.zip. cc @kwagyeman — this could be the same issue as you are facing with cmsis-nn; it shows on our image model |
tensorflow/tensorflow | autograph unable to process lambda statement (tf 2.3.0) | Bug | tf 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)" → v2.3.0-rc2-23-gb36436b087 2.3.0. problem description:

```python
import tensorflow as tf
assert tf.__version__ == "2.3.0"
dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3]).map(lambda x: x)
```

throws an unexpected warning (emitted twice): WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x7f4ac022e710> and will run it as-is. Cause: could not parse the source code: .map(lambda x: x). This error may be avoided by creating the lambda in a standalone statement. To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert. expected behaviour: autograph converts the lambda statement as usual. additional info: this bug does not seem to occur in earlier versions (2.2.0), as per:

```python
import tensorflow as tf
assert tf.__version__ == "2.2.0"
ds = tf.data.Dataset.from_tensor_slices(tf.ones(10)).map(lambda x: x)
```

which runs without any erroneous output. all installations were made using the pip package manager |
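The workaround the warning itself suggests, creating the lambda in a standalone statement, can be sketched like this:

```python
# Sketch of the workaround named in the warning: bind the lambda in a
# standalone statement so AutoGraph can locate and parse its source.
import tensorflow as tf

square = lambda x: x * x  # standalone statement, parsed cleanly

ds = tf.data.Dataset.from_tensor_slices([1, 2, 3]).map(square)
print(list(ds.as_numpy_iterator()))  # [1, 4, 9]
```

Alternatively, the warning can be silenced by decorating a named function with @tf.autograph.experimental.do_not_convert.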
tensorflow/tensorflow | OP_REQUIRES failed at conv_ops.cc:1115 | Bug | system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): no, using code from the cnn tutorial. os platform and distribution (e.g. linux ubuntu 16.04): arch linux, kernel 5.8.3-arch1-1. tensorflow installed from source or binary: python-tensorflow-opt-cuda package. tensorflow version (use command below): unknown 2.3.0. python version: python 3.8.5. cuda/cudnn version: cuda 11.0.2-1, cudnn 8.0.2.39-2. gpu model and memory: nvidia geforce rtx 2070, 8 gb. describe the current behavior: any model fitting using conv2d layers fails with an "OP_REQUIRES failed at conv_ops.cc:1115 : Not found: No algorithm worked!" error. full trace:

```
2020-08-28 12:42:25.012311: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at conv_ops.cc:1115 : Not found: No algorithm worked!
Traceback (most recent call last):
  File "test.py", line 26, in <module>
    history = model.fit(train_images, train_labels, epochs=10, batch_size=64)
  File "/usr/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
    tmp_logs = train_function(iterator)
  File "/usr/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
    result = self._call(*args, **kwds)
  File "/usr/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 840, in _call
    return self._stateless_fn(*args, **kwds)
  File "/usr/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2829, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "/usr/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1843, in _filtered_call
    return self._call_flat(
  File "/usr/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1923, in _call_flat
    return self._build_call_outputs(self._inference_function.call(
  File "/usr/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 545, in call
    outputs = execute.execute(
  File "/usr/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.NotFoundError: No algorithm worked!
	 [[node sequential/conv2d/Conv2D (defined at test.py:26) ]] [Op:__inference_train_function_853]

Function call stack:
train_function
```

describe the expected behavior: model fitting works correctly. standalone code to reproduce the issue: I used the convolutional neural network (cnn) tutorial; full code below:

```python
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))
```

I'm attaching a full log file with TF_CPP_MIN_LOG_LEVEL=0: log.txt. how can I help debug this problem? |
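Not a confirmed fix for this report, but a commonly suggested mitigation for "No algorithm worked" cuDNN failures on RTX cards is enabling GPU memory growth before any op runs; sketched here as an assumption, not as the resolution:

```python
# Common mitigation (assumption, not a confirmed fix for this issue):
# let TensorFlow grow GPU memory on demand instead of reserving it all
# up front, which often avoids cuDNN initialization failures that
# surface as "No algorithm worked" on RTX GPUs.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
print(len(gpus))  # number of GPUs configured (0 on a CPU-only machine)
```

This must run before the first tensor is placed on the GPU, i.e. at the top of the script.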
tensorflow/tensorflow | memory leak when using tensorflow c++ api | Bug | please make sure that this is a bug; as per our github policy we only address code/doc bugs, performance issues, feature requests and build/installation issues on github (tag: bug_template). system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes. os platform and distribution (e.g. linux ubuntu 16.04): ubuntu 18.04.3 lts. mobile device: - tensorflow installed from source or binary: source. tensorflow version (use command below): 1.15.0. python version: - bazel version (if compiling from source): 0.24.1. gcc/compiler version (if compiling from source): 5.5.0, c++11. cuda/cudnn version: - gpu model and memory: - describe the current behavior: our project uses bazel and the tensorflow c++ api for inference (in our workspace we use ...). when inferring on features repeatedly, memory keeps rising. our model is an old-style pb file, not a saved model; it was generated from the onnx-to-tensorflow project. our graph has two inputs and one output. here is some key code:

```cpp
class TFModel {
 public:
  // @brief Construct
  // TFModel(pb_path = "xxx", input_names = {"input1:0", "input2:0"},
  //         output_names = {"output:0"})
  TFModel(...) {
    sess_options.config.set_use_per_session_threads(true);
    sess_options.config.set_intra_op_parallelism_threads(0);
    sess_options.config.set_inter_op_parallelism_threads(0);
    sess.reset(tensorflow::NewSession(sess_options));
    tensorflow::GraphDef graph_def;
    auto* default_env = tensorflow::Env::Default();
    auto status = tensorflow::ReadBinaryProto(default_env, pb_path, &graph_def);
    if (!status.ok()) {
      throw std::invalid_argument("error");
    }
    sess->Create(graph_def);
  }

  // @brief Compute
  void Compute(const tensorflow::Tensor& tensor1,
               const tensorflow::Tensor& tensor2) {
    std::vector<std::pair<std::string, tensorflow::Tensor>> tf_inputs;
    tf_inputs.emplace_back(input_names[0], tensor1);
    tf_inputs.emplace_back(input_names[1], tensor2);
    std::vector<tensorflow::Tensor> outputs;
    auto status = sess->Run(tf_inputs, output_names, {}, &outputs);
  }

 private:
  const std::string pb_path;
  const std::vector<std::string> input_names;
  const std::vector<std::string> output_names;
  std::unique_ptr<tensorflow::Session> sess;
  tensorflow::SessionOptions sess_options;
};

int main() {
  TFModel model(/* ... */);
  while (true) {
    model.Compute(tensor1, tensor2);
    PrintRssMemory();
  }
}
```

I have searched for some solutions and tried tcmalloc; with it the memory leak seems negligible, but it is 25% slower than before, which is not acceptable. the memory leak problem seems very severe if I set inter threads and intra threads to 0; if set to 1, the problem seems to disappear, but we must use larger inter/intra thread counts for multi-threading. I don't attach the model since it's a little large. hope anyone can help, thanks. describe the expected behavior: no memory leak. standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible please share a link to colab/jupyter or any notebook. other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks please include the full traceback. large logs and files should be attached |
tensorflow/tensorflow | clarify that tf.image.adjust_brightness accepts negative values | Bug | url(s) with the issue: - description of issue (what needs changing): the documentation states that delta should be in the range 0 to 1. however, the delta is added to the image, so negative values in the range -1 to 0 are also valid and useful if the image should be made darker. clear description: it is useful to be able to darken images, and the documentation should make it clear how to do so |
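A small demonstration of the behavior the docs should state; the pixel values here are illustrative:

```python
# Illustrative values: a negative delta is accepted and darkens the image.
import tensorflow as tf

img = tf.constant(0.5, shape=(2, 2, 3))         # mid-gray float image
darker = tf.image.adjust_brightness(img, -0.2)  # negative delta darkens
print(float(darker[0, 0, 0]))  # ~0.3, darker than the original 0.5
```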
tensorflow/tensorflow | DataHandler: intermittent InvalidArgumentError for large input dimension | Bug | please make sure that this is a bug; as per our github policy we only address code/doc bugs, performance issues, feature requests and build/installation issues on github (tag: bug_template). system information: have I written custom code (as opposed to using a stock example script provided in tensorflow): yes. os platform and distribution (e.g. linux ubuntu 16.04): ubuntu 18.04. mobile device: n/a. tensorflow installed from source or binary: binary. tensorflow version (use command below): v2.2.0-rc2-77-gaad398b5e9 2.2.0-rc3. python version: 3.6.9. bazel version (if compiling from source): n/a. gcc/compiler version (if compiling from source): n/a. cuda/cudnn version: n/a. gpu model and memory: v100. describe the current behavior: based on tracing an error from a call to the fit method of a model, I determined that an InvalidArgumentError is contingent on both the number of inputs N in a dataset batch (in this case the data is of the form BxNxC) and whether class_weight is set. describe the expected behavior: setting class_weight as shown in the online docs should not break calls to fit depending on the number of input data points. I determined that the DataHandler is the object that throws the exception during the model.fit call, so the minimal example below is based on a DataHandler object. this is related to this issue, but I don't see how that issue is resolved for arbitrary models; in particular, it has nothing to do with nlp, as was mentioned in a comment (issuecomment-673667914). standalone code to reproduce the issue: colab link. same code posted here:

```python
import tensorflow as tf
import numpy as np
from tensorflow.python.keras.engine import data_adapter

def test_weighted_model(n_samples, use_weights=True):
    def test_data():
        def test_data_gen(n_classes=5):
            x = np.random.randn(n_samples, 3)
            y = np.random.randint(0, n_classes, n_samples)
            yield x.astype(np.float32), y.astype(np.int32)
        gen_func = test_data_gen
        gen_types = (tf.float32, tf.int32)
        gen_shapes = ((None, 3), (None,))
        return gen_func, gen_types, gen_shapes

    gen_fn, gen_tp, gen_sh = test_data()
    ds_tst = tf.data.Dataset.from_generator(gen_fn, gen_tp, gen_sh)
    ds_tst = ds_tst.batch(2)
    ds_tst = ds_tst.prefetch(2)

    cw = {0: 0.0, 1: 1.5, 2: 0.5, 3: 0.5, 4: 0.5}
    data_handler = data_adapter.DataHandler(
        x=ds_tst, y=None, sample_weight=None, batch_size=None,
        steps_per_epoch=10, initial_epoch=0, epochs=1, shuffle=True,
        class_weight=cw if use_weights else None,
        max_queue_size=10, workers=1, use_multiprocessing=False,
        model=model)
    print(next(iter(data_handler._dataset)))

model = tf.keras.Model()
test_weighted_model(5)                          # always succeeds
test_weighted_model(50000)                      # sometimes fails
test_weighted_model(50000, use_weights=False)   # always succeeds
```

other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks please include the full traceback. large logs and files should be attached.

```bash
InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py in execution_mode(mode)
   2101       ctx.executor = executor_new
   2102       yield
-> 2103     finally:

... 11 frames ...

InvalidArgumentError: indices[0] = 5 is not in [0, 5)
	 [[{{node GatherV2}}]] [Op:IteratorGetNext]

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/executor.py in wait(self)
     65   def wait(self):
     66     """Waits for ops dispatched in this executor to finish."""
---> 67     pywrap_tfe.TFE_ExecutorWaitForAllPendingNodes(self._handle)
     68
     69   def clear_error(self):

InvalidArgumentError: indices[0] = 5 is not in [0, 5)
	 [[{{node GatherV2}}]]
```
|
tensorflow/tensorflow | tflite c++ api docs seem broken | Bug | url(s) with the issue: - description of issue (what needs changing): the c++ api reference for tflite is missing many important components on tensorflow.org. for example, the docs for tflite::FlatBufferModel and tflite::InterpreterBuilder are completely gone; they used to be there the last time I checked, about two months ago |
tensorflow/tensorflow | add optimized svdf kernel from cmsis-nn | Bug | tensorflow micro. system information: host os platform and distribution (e.g. linux ubuntu 16.04): n/a. tensorflow installed from source or binary: n/a. tensorflow version (commit sha if source): n/a. target platform (e.g. arm, mbed os, arduino nano 33 etc.): arm cortex-m. an optimized version of svdf will soon be present in cmsis-nn, and the glue code for it should be added to tensorflow lite micro |
tensorflow/tensorflow | calculation of effective scale differs from tflite implementation in quantize kernel | Bug | tensorflow micro. describe the problem: in the quantize kernel, the calculation of the effective scale differs slightly between tflite and tflite micro. we found that in some cases this results in single-bit differences in output. is this difference in implementation intentional? (L129-L133 vs. L75-L79) |
tensorflow/tensorflow | cannot convert predict function of LinearRegressor | Bug | system information: os platform and distribution (e.g. linux ubuntu 16.04): windows 10. tensorflow installed from source or binary: binary (pip). tensorflow version (or github sha if from source): 2.4.0-dev20200824. command used to run the converter, or code if you're using the python api (if possible please share a link to colab/jupyter or any notebook): I have a tf.estimator.LinearRegressor and save it first with the function export_saved_model from LinearRegressor. then I load it and save only the predict function: imported = tf.saved_model.load(modelbefore); tf.saved_model.save(imported, model, imported.signatures["predict"]) (saved model attached below). when I then try to load it as follows, I get the error below: converter = tf.lite.TFLiteConverter.from_saved_model(model); tflite_model = converter.convert(). the output from the converter invocation: exception traceback most recent call last appdata local program python python38 lib site package tensorflow lite python convert py in toco convert protos model flags str toco flags str input data str debug info str enable mlir converter 195 try 196 model str wrap toco wrap toco convert model flags str 197 toco flags str input data str appdata local program python python38 lib site package tensorflow lite python wrap toco py in wrap toco convert model flags str toco flags str input data str debug info str enable mlir converter 31 wrap tococonvert with lazy loader 32 return pywrap toco api tococonvert 33 model flags str exception 0 error loc callsite callsite parseexample parseexamplev2 inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 tf parseexamplev2 op be neither a custom op nor a flex op 0 note loc statefulpartitionedcall 1 call from 0 note loc callsite callsite parseexample parseexamplev2 inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 see current operation sparse indice 5 sparse
value 5 sparse shape 5 dense value 5 tf parseexamplev2 arg0 cst 6 cst 7 cst 5 cst 6 cst 4 cst 4 cst 4 cst 4 cst 4 dense shape tf shape 1 tf shape 1 tf shape 1 tf shape 1 tf shape 1 device num sparse 5 i64 result segment size dense 5 5 5 5 0 0 vector 6xi32 tensor tensor 0x tf string tensor 5x tf string tensor 5x tf string tensor 0x tf string tensor 0xf32 tensor 0xf32 tensor 0xf32 tensor 0xf32 tensor 0xf32 tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor tensor 2xi64 tensor 2xi64 tensor 2xi64 tensor 2xi64 tensor 2xi64 tensor tensor tensor tensor tensor 0 error loc callsite callsite linear linear model linear linear model linear linear model category i d category i d lookup hash table hash table inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 tf hashtablev2 op be neither a custom op nor a flex op 0 note loc statefulpartitionedcall 1 call from 0 note loc callsite callsite linear linear model linear linear model linear linear model category i d category i d lookup hash table hash table inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 see current operation 4 tf hashtablev2 container device key dtype i64 share name hash table 8d6f1b8e 423d 4fff 8a54 69f4ddbecf04 load 0 197 use node name share true value dtype i64 tensor 0 error loc callsite callsite linear linear model linear linear model linear linear model category i d hash table lookup lookuptablefindv2 inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 tf lookuptablefindv2 op be neither a custom op nor a flex op 0 note loc statefulpartitionedcall 1 call from 0 note loc callsite callsite linear linear model linear linear model linear linear model category i d hash table lookup lookuptablefindv2 inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 see current 
operation 5 tf lookuptablefindv2 4 sparse value 0 cst 9 device tensor tensor tensor tensor xi64 0 error loc callsite callsite linear linear model linear linear model linear linear model category i d sparsereshape inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 tf sparsereshape op be neither a custom op nor a flex op 0 note loc statefulpartitionedcall 1 call from 0 note loc callsite callsite linear linear model linear linear model linear linear model category i d sparsereshape inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 see current operation output indice output shape tf sparsereshape sparse indice 0 sparse shape 0 8 device tensor tensor 2xi64 tensor 2xi64 tensor tensor 2xi64 0 error loc callsite callsite linear linear model linear linear model linear linear model category i d weight sum sparsereshape inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 tf sparsereshape op be neither a custom op nor a flex op 0 note loc statefulpartitionedcall 1 call from 0 note loc callsite callsite linear linear model linear linear model linear linear model category i d weight sum sparsereshape inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 see current operation output indice 22 output shape 23 tf sparsereshape output indice output shape 17 device tensor tensor 2xi64 tensor 2xi64 tensor tensor 2xi64 0 error loc callsite callsite linear linear model linear linear model linear linear model category i d weight sum sparsefillemptyrow sparsefillemptyrow inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 tf sparsefillemptyrow op be neither a custom op nor a flex op 0 note loc statefulpartitionedcall 1 call from 0 note loc callsite callsite linear linear model linear linear model linear 
linear model category i d weight sum sparsefillemptyrow sparsefillemptyrow inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 see current operation output indice 24 output value empty row indicator reverse index map tf sparsefillemptyrow 18 14 output shape 23 cst 13 device tensor tensor xi64 tensor 2xi64 tensor tensor tensor tensor tensor 0 error loc callsite callsite linear linear model linear linear model linear linear model description description lookup hash table hash table inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 tf hashtablev2 op be neither a custom op nor a flex op 0 note loc statefulpartitionedcall 1 call from 0 note loc callsite callsite linear linear model linear linear model linear linear model description description lookup hash table hash table inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 see current operation 22 tf hashtablev2 container device key dtype tf string share name hash table fc7c2e70 8a89 4115 84d4 2f713273e69c load 0 198 use node name share true value dtype i64 tensor 0 error loc callsite callsite linear linear model linear linear model linear linear model description hash table lookup lookuptablefindv2 inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 tf lookuptablefindv2 op be neither a custom op nor a flex op 0 note loc statefulpartitionedcall 1 call from 0 note loc callsite callsite linear linear model linear linear model linear linear model description hash table lookup lookuptablefindv2 inference prune 1133 at statefulpartitionedcall inference signature wrapper 1850 at statefulpartitionedcall 1 see current operation 23 tf lookuptablefindv2 22 sparse value 1 cst 9 device tensor tensor tensor tensor xi64 0 error loc callsite callsite linear linear model linear linear model linear linear 
Every failing op sits inside `__inference_pruned_1133`, reached via `StatefulPartitionedCall` from `__inference_signature_wrapper_1850`. The converter log, cleaned up and with the repeated `note: called from` / `see current operation` lines condensed:

error: 'tf.SparseReshape' op is neither a custom op nor a flex op
  at linear/linear_model/description/SparseReshape
  at linear/linear_model/description/weighted_sum/SparseReshape
  at linear/linear_model/host_id/SparseReshape
  at linear/linear_model/host_id/weighted_sum/SparseReshape
  at linear/linear_model/size_id/SparseReshape
  at linear/linear_model/size_id/weighted_sum/SparseReshape
error: 'tf.SparseFillEmptyRows' op is neither a custom op nor a flex op
  at linear/linear_model/description/weighted_sum/SparseFillEmptyRows
  at linear/linear_model/host_id/weighted_sum/SparseFillEmptyRows
  at linear/linear_model/size_id/weighted_sum/SparseFillEmptyRows
error: 'tf.HashTableV2' op is neither a custom op nor a flex op
  at linear/linear_model/host_id/host_id_lookup/hash_table (shared_name = "hash_table_b60d3bcd-14f8-4085-a3b2-85948ec09373")
  at linear/linear_model/size_id/size_id_lookup/hash_table (shared_name = "hash_table_cb0918fe-8c8e-41f5-9aad-3750ec00bdad")
error: 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
  at linear/linear_model/host_id/hash_table_Lookup/LookupTableFindV2
  at linear/linear_model/size_id/hash_table_Lookup/LookupTableFindV2
error: 'tf.SparseSegmentSum' op is neither a custom op nor a flex op
  at linear/linear_model/category_id/weighted_sum/embedding_lookup_sparse
  at linear/linear_model/description/weighted_sum/embedding_lookup_sparse
  at linear/linear_model/host_id/weighted_sum/embedding_lookup_sparse
  at linear/linear_model/size_id/weighted_sum/embedding_lookup_sparse

error: failed while converting: 'main'
Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
  tf.ParseExampleV2
  tf.SparseFillEmptyRows
  tf.SparseReshape
  tf.SparseSegmentSum
Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
  tf.HashTableV2 (four tables; key_dtype = string or i64, value_dtype = i64)
  tf.LookupTableFindV2

[The log then prints the full MLIR dump of func @main, including the dense
constant weight tensors in hex; that dump is omitted here.]
3e4e4cc5bec2f58c3ea00e87be96f74ebd00000000e134953ec0802cbefc90b53ebe738c3ebe738c3eff7506be4c766fbe4c766fbe000000006c87053f000000000000000091d442be073166be073166be4088a3bef4ea8a3dc532983e817c993e000000000000000024098b3d0000000037bbac3e6e7240be6e7240be21e2b7bd8c95b23e8d739cbc8d739cbc796e753ee6b403bee6b403be0a6304be0a6304bea7aa34be1c4a353e1c4a353e1c4a353e4dad8ebea05d5ebe00000000513d923e97a81ebe4bec94bd4bec94bdcc24f83dcc24f83dc9578a3ed121fe3e17227fbe6daaf43e8a4c9b3e00000000c59fd83e033ae43e4af1cf3e00000000dce0adbef6c3b4be60c2003e000000004a13d13ecc2f773ef564d73e00000000a10ee63ef47756be00000000a853f1bd968a89bcd17e1bbed17e1bbe0000000000000000e92781bbe92781bb0852fbbd0852fbbd0852fbbd00000000000000007f1143be8711b6bd6bf10bbe809d4fbe58a3eb3ef8596c3ede2a60be3919af3e804707be090d4fbe0be673bea3ee3dbe00000000758ae73dd52b0e3e4c34f43e4e8d6fbed98984be870b843e0000000071c228bedca452be63b921be63b921bec87719bef4dfa4bebd3174bd0000000053fce9bd815819be46aaa63e8db58a3df6779c3eaa99483eaa99483e4d3d213e8c9588be63ed6f3e63ed6f3e0000000000000000e9cc8dbed270bc3ec0588a3e15bc2b3e15bc2b3e15bc2b3e06fc90be00000000000000006184a73e98849b3e93b280bea4bf83be0de915be0de915be0000000000000000623c3d3d000000002a2229be119c8f3e119c8f3e119c8f3ec0a30ebec0a30ebeaa0a46beaa0a46be5d4521be00000000799c713e799c713e7ad1c93eec9a6cbe01a8f73d9f8b39be41cbfd3eedbd9b3ed49cafbdd21ae9bdd21ae9bd4ff7c63ec7d8e5bc190fecbd190fecbdd71bc03e000000000000000005df3bbedcb099be00000000894f29be0000000017423fbe0a6655be9c5872be6794d93e817b15be5e21b8bd5e21b8bd6d9b01be6807b53e76638cbe60861ebe095489be9b2834be9b2834be49a796bd49a796bdabc852bea821dbbda821dbbd000000004f78b53b4f78b53b0091dc3e598157be5b06e6bc5b06e6bc00000000a7e33abd08d71cbe00000000000000001ce041be6ad686be7e9eb8bc808fe53ee69a87bdf9568c3ef9568c3e9f30573eafd720bef54442beb6fff73e4afd72be85c676be893732bebc393abe6d0d32be6d0d32be1577b63eb253e2be3a07153e00000000168941be168941be00000000000000007500663e4b7c913e93abbc3e9b58edbcb07a0ebe60fea3be000000009450c43e9450c43eb8e34fbeb8e34fbe99c51cbe25dbdc3e6f7c18
bef671c33ef671c33e8635e9bdd328913e72a04dbc33ed743e33ed743e0ba3773e0ba3773ec54e42beecd8c73e00000000e6920dbce6920dbce6920dbc80866a3e80866a3ecb489b3e52782fbe03547ebe5b6ba1be000000003d8f383ef21c993ec9a88bbe1287b63ee35891be13c8b53eeb40c33e068f8c3e8fb5753e7b0a96bd7b0a96bd39b4363c6517c13d6517c13d000000004315babc4723a3bd81541cbd741ed2bb741ed2bb741ed2bb0000000000000000dd6c9f3ec602b1bd7b9381becb893abec808603e86a97fbe000000003090cb3e000000000000000054ff38be377845bed1ce16be58a07dbd82dcecbd00000000a69211be7053d9bd21caf93ea21ac93eeb4262bc2bae3fbe74468a3d00000000d17dfd3e0000000000000000000000007a21a63e7a21a63e0000000012bea63e552e4bbe00000000dd7eb3bdf1939dbdf1939dbd000000000000000005f8c73c0000000000000000b641d4bdecec00be64ac1d3e2cba0ebebd0d9dbec211a1bc4f120ebe7aa019bed7edc8bda208f2bd060bcebd0000000000000000cf59603b2ee49c3b0000000009c239bcdf3e2bbeeea20fbe7a9c823c0619103f65270bbed86f4abe00000000afded6bdafded6bd87ddb8bd27544bbd27544bbdfc99b7be00000000706846be00000000c53e2ebeb637b4bd573122be00000000be2ab8bda4f1b63e25bb46beb9520fbe6ef1d53ecbacbd3e970ab23e00000000e258033f89a5fd3d00000000069bab3eb0b89a3e000000000000000064ea2dbd64ea2dbd0575943e0575943e0000000000000000000000004cff033e4cff033e4cff033e4cff033e000000007640f13e892bef3ed130d93e2ac031bed4430fbe0047f43eee5521be00000000000000000fda43bedd2833be895525be0000000000000000453c0abea56152bea48c85bd1f2dd2bd00000000a0e9dbbd0000000000000000319eaabdd0e1c0bd51fcaf3ec510d83e6b8dd63e604638be000000004a5699bd87142ebec923d3bec55a6ebecb5daf3ea06e343ea06e343ea3d76bbd44d9fabd00000000000000000000000076461dbe8ec857be6c728a3ea9247cbe00000000000000005c7d4cbe847746be231c94be5e248fbe93a335be622ea23e0b53f83c00000000168666be397013be016cc23ea48d15bea802a9bd0bea42be6ed3ec3e9659973e9659973e39994ebe6cfa77bea3a0ec3e341f9cbe4c3a5bbd4c3a5bbd187f2fbe000000002b669f3e00000000000000001af3cf3ecb6c84be000000001492bf3e1d6e62be907365be00000000d7c7c33e5345cdbdbb3a98be00000000000000000000000000000000000000005b6f5a3c5b6f5a3c2b6d50beda5af1bdda5af1bdda5af1bdc5780abe239982be6a8d42
be00000000ccf34ebd82b520bea36b1cbea36b1cbe888ab2bec00232beee4029be94cc393e94cc393e00000000e9c6863e3af4b3bd3af4b3bd3af4b3bd3af4b3bd3af4b3bdd3cfa13bfab5e4bdbdad42be1cd99d3e01ccab3e7f5e45be6e7b9abd233305bd233305bd73f80d3e73f80d3e73f80d3eb9427abe7737c03e528ae73ee0ce37bdc8f9e1bd000000000f8fc93e4dea29beafdc873ed88d0bbe58979fbe89634abe18bc0abe18bc0abe000000000000000000000000000000009390453d16d4d7bd16d4d7bdf87b15bedcef86bd39165abc00000000000000000df604bdbd2fdebdbd2fdebd561471beb4be2dbe00000000c21baabe0000000001e5a3bea541af3ed55c833ddf10163e000000000000000000000000d6d198be1ee1923e4a2238be4a2238be6df2763e6df2763e2935ecbd2935ecbd9583a23e000000000eca303ef03c603ed83ccebec134463e54079d3efc39c03e563727be563727be563727be1ce9a6be000000003e789abd00000000000000001e8bc73e000000003e63823c2e1956bea525343cfb27d73e3ec3b73efb24b03ebc5fcd3e579e4abe941e5dbd000000002a4e31be2525e0bd6c9fcb3ea51e45bede4692bdde4692bdb99af53e00000000b2a83ebe00000000000000003b4d98be1ba787bb1ba787bbd57a5fbdd57a5fbd00000000509aa03e509aa03e680671be90d410be90d410bec60fdfbdc60fdfbdf49c3dbdb6ca56be74186d3ef0365ebe9cfde9bdd76ca13e20ec39be20ec39be000000002c2da33ea7e499bdf86dcc3e371b7c3ef777ab3e65d3c43e9a6612be9a6612be62b0dc3e933b2cbecb591bbe000000008bb674bedd5aa13e0000000000000000f7796bbef7796bbe00000000000000000000000000000000000000001356a33c1dda803e00000000a805e6bd00000000aae6fabd23a480be1ae68ebe85214abe6b6893bee724e6bd3d1c45be629ad53e1f373fbe5e7636be000000005c2bcc3e654403be654403bea59c4a3ca59c4a3c971e97bd08bac93e2a20b73e67e3a83e67e3a83eb4b064bdaf2493beaf2493be44bf0ebef573aabe0000000092340ebe112e0e3e000000006b27e23e7334a03efd9831be00000000000000003480f4bedfd17dbe64c61f3e9ad3153e6c24bd3eeb77e73e86fbe1bd86fbe1bd49150ebe0000000097b0023ea1b6c63e0c1e97be0318b43e54a4c13eb14847bef2701bbe6d739bbd245b48bc707590bd47e686befbf821befbf821be16b6c83eb4c865bdff6aec3a217eb23e60f61ebeb64b3dbeb64b3dbeff0e9b3eff0e9b3e00000000950c06be950c06becf7aafbdcf7aafbd08f8debd1d7d0bbe0000000000000000ca572f3d0000000000000000c9a3083f91aec2bd00000000c1f4ef
3e0000000000df09befd5971be0000000000000000000000000000000000000000360fe3bdc84f4abe00000000de130cbe442710bc442710bc80ae103c639c97bd89dec23ed98af33eb13214beb13214be714208be00000000bd84dc3e0000000030d4ba3d000000009928abbe565359be8fb052bef63189be6fc9a9be0000000044d512be115363be0000000000000000667305be9b800cbe224ecebdbed84cbe0000000000000000a0dc2abc6089f13efae22dbe00000000869a293e869a293e0000000000000000a220b8bd37c510be56e5bdbc000000009805dcbd000000007376d4bdcfa60abecfa60abe9245083f00000000320eb5be5c8ace3e0000000000000000a3ff4b3e0000000000000000b906ca3e884f80be7bb0d93edd9e86beee8f24bec3dcbabe69c7c03e00000000000000002c26ec3e8eae68be298daa3e000000005d2185beafbfcc3e82f8d13ed60022be17b228be0481ba3e0000000061a00abe28a26ebd28a26ebd1ce9b03eb2bec63ecfb1b73e00000000591f3d3ea553093df335a6bbcc62643d9c1f8dbdfd7b51be55bf2cbe487c2ebe65e200be9735b0bd52c5edbd787ff1bd383a91be7c1343be7c1343be4f854bbe4f854bbe8209a4be2a3ea1be38ca97be81166e3dd043843e5228a33e00000000f623843ef623843ede35973e000000004f508bbeb21915beb21915be35e3793e35e3793e0000000011045cbef51f723ef51f723ed32c353ed32c353ed32c353e29a6863e29a6863ea7c9743ea7c9743e000000004537f6bd664e9abd00000000816d90be548b4bbeb7ff38be5581f23e356c0fbe356c0fbe00000000521fb5bd521fb5bde40e2dbe00000000ba19623eba19623eba19623ef0ffd8bd11d630be99da1fbe27b0e03eaef2a73e0000000000000000733c6ebee33f2abd000000000000000000000000000000009bf500be3e6a593c47ba5cbedddfa8bde62cecbd66b47cbd00000000d20ec6bd540e3dbe540e3dbea12478be000000000000000005f0b0bd00000000247e57be22ed26bd031cdd3e32f9db3e000000003a9432bd3a9432bdfb9f1cbeed0f1fbe9170e8bd00000000ec72e13ef7ea67bd0000000000000000e2dfb53ee42a44bde42a44bdcd028b3ebecdf8bd00000000dcf0bbbe19a150beb50a1abe00000000e03fb63eb8ffb4bd0000000009d6c43e00000000000000009c529fbccd730cbe2b6ed03e2f39fa3e93f687bec171563ec171563ef556a63d3dbc8bbe00000000a9b67e3ea9b67e3ebe6ed63eca978abeb87389be11b2e53e6110d53e0c6f8dbe2ea50dbe1ebbf0bd49792fbe369e01be00000000000000000000000073c552ba81fcb03e6fb596be171ddc3e776b1bbe776b1bbec0cc58befe270e3efe270e
3ee9c90dbe000000000000000041a10abe41a10abecd9750be00000000333b29bd7b1e81be86543dbe00000000000000004e50c43e000000003efc50be9518f0bdc25933be4f7cd2bd50ae07be000000000000000000000000332408be49e56d3c00000000000000006c19bd3e6d42b8bd34d339be000000004a5b4abefdc6d2bc677b24beb16f8dbdb16f8dbdb16f8dbd370c58becc0f963e34f0afbd34f0afbd6fdbdfbd9a08cc3e0000000000000000000000000000000071e0083fdec269bddec269bd6787e73e65ad1ebe00000000358e2bbe8821cd3e080e25be666f14bd666f14bd3dcddcbdddc27ebe0287b8bd000000000352893e0352893eb0bc61bece736dbeee35ccbdee35ccbdee35ccbdff35473eff35473e00000000eacfdfbd2b442fbe032bdc3e00000000a20f02bbdc392ebe000000003977afbe273828bee22cd43e3886b43e5a0d9b3e00000000fb30913cfb30913c2127b73ee20c96be00000000061ad7be0000000006adf6bd06adf6bd1d8933be000000001d20db3ebb86433ebb86433e21b5cb3e74db2fbd74db2fbd9b3a65bed2a7c53e00000000e80a7bbe1d56833e90d5a63e471bb23e98a5743e60ba413df001ac3e74e8e13d529077bd3339b43e8b03acbe283fcb3e98ec6f3e775ac4be8d21c8be00000000000000005dcca7bc1433f1bd0848cabdb88ac0bd2481ec3eeb7e47be769454be07ffa23eb43f1abe15079f3e43a132be7db9bebd6e8c833e1f4d38bee9c18bbe00000000861222be00000000c3437ebe00000000f862af3e4c11673e4c11673eaaca43be00000000e1830ebee1830ebe0000000051ee31be51ee31be88918ebe00000000874686be114b8cbe0000000000000000bdffbc3e00000000a53038bed7eb1dbec23f48befc286e3efc286e3e00000000048a44be21d430be88b5c73e3891ccbda6053dbebdf346be7d1e39be0000000000000000000000005ee88bbe9dd0febd9dd0febd9dd0febd004f14be004f14be2953603e7ba3efbd7ba3efbda8fd973ea8fd973e8801a03e8801a03e9ad4cd3eab0fd83e05e00dbeb2f594be0fb522bd6d36e1bde49e14beb96d74bd10bc08be00000000d9e20abe5c7ead3d7724d3bdfc36a23ea8d4a6be8e949d3e8342fe3eff226fbe694353be48fad33d02627d3d02627d3d6ef5c0bd45b623be8b9105bd00000000036b5c3e036b5c3e036b5c3e3c8c8d3c000000000a7280be00000000001594bed3da5abe0b67b53e6b639ebd3a455abe0a70cd3e4d351cbe4ef000be57c6ca3e000000003620f93e3731df3e604bcabd604bcabdbea412be000000003429ce3e90d5dcbbd9c793bd45c11cbe45c11cbe67fe58beabec6cbe855739beabc78c3e0000000000000000eda1773eeda177
3e506d593ee3804e3ee3804e3e6e82833e672f6b3e571bb23e335895becb5a9a3e992288bd992288bd992288bde88f35bd687a2ebe0000000000000000c4d7c93ed33613be696e443e696e443e21f7bebe4d798fbed36f20be26c73abeb413b7bdf51a9f3e714e8c3d327149bef7670abec25345becd8fd0bd24a9413ce440c53e1613bf3c619aa03e40afac3ec44829be26b537bebff996bc01d0b0bda99c6ebe099e53bea800ad3ecb1a51bec3bf84bec82b56beb30b8cbe808b61beb4a9ac3ec84c4b3e00000000a8952cbe5a0bcdbdacd783be00000000a4bc0cbee9ea63be98e3b9bd09f8933e00000000d3f0d33e8284b33e5ff6c93e1cbbac3ea969a93e34aece3eb62eb73eae4fd13e285fb03edb18ad3ef34b50be464ccd3e0000000096a3c73ee708cf3ebf544e3ebf544e3e3c7a8f3e6f5098be000000005bdea93e2b3ac2bdee2696bdc06ed0bd7b3597bdaf1c70becf41dc3e6e3ce6be1030763ed1ebaebc000000003c2363bd1576b0bd00000000e3cd49be0aaa3a3d7d8364be2e3e3bbeabaad6bda5bbc0bd7d6ad1bdfa8847be0b3bb73eaa07ac3e00000000425eb63e0000000000000000cdf987bdcdf987bdcd91003f7acf1fbe098282be24c62fbe1284f0bd0000000000000000d10796bc10e539bef2623dbdee0736bd7c3e6ebe2c2481be82e495be29f86dbe00000000ccf169be109006bebbd3cabd00000000f1f466be909996be00000000ee50bfbdee50bfbd407fe43e0842b33e8747c93e8747c93eba6952bec11308be000000000b6e08bedf0d5fbe817f61be3bded1b88ab2b1be00000000a3ae3cbead8c04be0000000000000000000000001baeaebdc93455bdc93455bd17132dbe8a93e5bd0000000000000000000000004c9a08bdbcff73bdc085cbbe33241abea0de8dbc59a5cfbd59a5cfbd59a5cfbd9fd5d9bd9fd5d9bdaf11e3bda82a703ea82a703ea82a703e674b58bdfad4babd2f20813e2f20813e3e3140bd00000000c90c54be6a4d0cbea2d9a83e00000000000000004e044cbe4e044cbe00000000000000008db6a9bd4b6d8c3e4b6d8c3e7353a23ec8e412be21ac8abd096aeb3e3ef1103f2451c23ec6edaf3eaf16af3e91599ebd9c7860bee58ca9be00000000000000003b8a97be9795afbe285546be00000000a41393be3ef12fbec594e7bdc594e7bd775cf43c55d3dbbd2000d8bd13d0523e7d2c723e753341bd8c46ab3ee661bb3ee661bb3ebaa33cbe4b7b81be8d6892be00000000ab0bc0be566a41beca5a973d2d9cf7bcf74700bef74700bec0a79abe8e54cc3e000000005745acbd5745acbd5745acbd000000000000000025929bbd25929bbd5fb9b63e000000008d2d33be8d2d33be73301cbd73301cbd00000000000000
009e5cdfbd9e5cdfbd9e5cdfbd037ce7bd2b2cdcbd2b2cdcbd9c606c3e9c606c3e0000000047b8e0bd9df5e4bd0b981d3d0000000041dd3ebe054fc6bda557d0bd162694beb8ef5abee06d08be00000000932a4cbe0000000000000000b8fd83bebbec5ebe3827d6ba5b781abe000000000000000000000000c37b04be0000000072bbf7bd6cc327bedc44efbd7a2c25be4bca45be4bca45be00757cbe272882bd272882bd71b1fa3e0000000047fdd73e411e90bd4ea66cbe24a38bbd24a38bbdaf4ba33ef18bb3bdf18bb3bdead79a3e000000000000000098f1eebd6007fe3e9d1a06be39e16cbe39e16cbed98be23e06ac42bedd5f3e3e48bad63e0000000027e449be30d54abe4435f3bd7f46cf3ea86b1ebe48a389be4cf2b4bdcd062abe00000000dd8537be000000001b11fabd0a77d9bdc7869dbd4172ae3e000000008b6d24bd8b6d24bd0000000014c118beee3d683ec78f24bec78f24bec78f24be58274bbe9fc601bd00000000148aa5be000000000000000008bf96be4822babd4822babd583422bd583422bd583422bd624b05bd624b05bd00000000000000003505c13e4edd4dbe4edd4dbe000000005d6f7ebd0000000000000000b1b48fbe6305093e6305093e229201bd52e214be55a93bbe3a99ebbd17af2cbe17af2cbe8912ec3e582e27be029bd83e00000000000000008d5407be225437be732ab43e0abbc43e158ab23e00000000646299be7147923e1a7e56be00000000407d1cbee036e3bde036e3bd50e2f73e9b7f9dbdc38533bddb6f803e9e703c3e3f7e363ea2f33a3ec628363e00000000000000000c5186bea2933bbe227d46bead58a6bec11489bdc11489bd5e08853e5e08853e5e08853ecbd70dbecbd70dbe0896983d0896983d00000000abd882befb59083f00000000248a46be5b9f9fbe80e4adbd277766bd000000004dcc46becfe638becfe638be121cb03e2451a2bd4589f0bd32adebbd0000000000000000788ec63e00000000000000008eb7303e8eb7303e0000000002dcb13e976551bd15eb0dbe5bb8af3e455288bc455288bc4d4dcbbd4d4dcbbdc80a50be00000000faea6f3e7d0554be7d0554bec0ecafbdb46a57beb46a57be3dfbb63edfaab2be65719fbee059b0bebe3dddbd9b0927be000000005d5710be5d5710be006c41bd006c41bd006c41bd485064be00000000936c05bef1e9f23e33cdbcbd33cdbcbddb553cbe00000000000000000f7bfc3ee33f0ebe925611be00000000d8af553e3f7ba6bee81797bde81797bd37e8e9bd37e8e9bd00000000000000000654873e000000000e3cb23e5c9e24be5c9e24be0000000046f1b33e0000000000000000ebaf1cbe756a82be6c6fe23eb6d1ee3e00000000fbbbcf3e4905cd
bb76354fbeff5f1dbeff5f1dbe201c13be649417be649417be00000000045b03be00000000013337be50dd45be464ec43e5a75e33e00000000000000005f8c00be86996ebea2521ebedabc6e3e84d019bef299a33eb7d490be8cdb14be0000000086a5aa3e0000000040a492becf235e3e139776bd139776bd9fcdb4bd78e526be00000000000000000000000014d3c63e441d1abe691abfbe00000000a5f5a6be5c21ec3e674419be0343e13e0000000000000000b39ec23eb39ec23ecb3fe7bb907306be00000000e70856be260a98bcff1fdd3e00000000000000000546d2bd977812be563d63be9e50173ed1d336bed1d336bed1d336be9870993e395f823e849049bec6074ebe43404abe0000000016e8b23e00000000d5b0383ed5b0383ed5b0383e000000007d50d43eed5721beed5721be355ea73e16fb36be16fb36be2877763e75841e3e75841e3ee7a4ddbdcd5e3fbe00000000000000000000000048da1a3e48da1a3e462b7b3e462b7b3e00000000931fde3e1da544be000000000000000036010f3e36010f3e36010f3e00000000000000009168c4bd9168c4bd9168c4bdc0e2ad3e6284e5bd6284e5bddc2314be7ee975bd7ee975bdefe94dbe916815be13d8ccbd13d8ccbd66dc14beab44f93ee75c38bee671e7bd6e1404befea3b4be00000000576d05becc30633dcc30633d6ad6bbbe2abe683e28e4ccbe00000000000000006175a6be469143beb35ec5bd00000000000000000000000077a72e3c00000000f086483df086483d0ab04d3e0ab04d3e0ab04d3ea490a0be9b3e5abed6a1e4bd00000000a52787be55f14dbe123828bec09595bd0b15933e0b15933ef647863ef647863e07a3db3d07a3db3d00000000d205c73ec44ea43ec44ea43e00000000d091e83c0000000000000000a8350f3e23a443be23a443be5115e5bd990680becf00393ed31435bed31435be165f7b3e165f7b3e435b9bbec286173ec286173e22a880be45dff4bd45dff4bdf21e89bed1bdc1bd39805fbe00000000c601933ec601933e6a9280be56ee3ebe00000000000000000000000026c94fbe4ed53bbd4ed53bbdf37841be23390bbe23390bbe17f783be00000000d17ec9bd231b2cbe231b2cbe0000000054b53ebcf8d4e3bdf8d4e3bd93766cbe5dc456becd27c2bd9a3d46be00000000c56324bef8f02ebe00000000330b34bef98ac53eca03f73ed60ebabd9a9d2fbee33759be2d33e2bd31afd3bd31afd3bd805eb13e702a8e3edd8836bedd8836be817ad83d99ae5dbe490d953e9cfb92bea6ae57becb2c9cbe5e4135be1fb5dc3e00000000ed7413bef0999f3ef0999f3ea985e83e3871ffbd13a7f9bdaff9543dd00d363dd00d363dd00d363d826f5dbe4b7e77be7f5058
bed699dd3d3a955ebe45fb8fbc2cb2e93e78b1d83e43196abea48f17be3adb413e98f4d63e8c6328be8c6328be00000000d8ed4ebe2955003e00000000a5e25ebe694221be00000000000000009127e7bd786468be0000000038ef653e0000000000000000c736c7bdbcc127bec7c69bbdc7c69bbdc7c69bbd000000008f3203be00000000391add3e0000000000000000ed0f66beed0f66be61b070bd61b070bd4415f23e0671a63e85d6c4bd85d6c4bd7865e8bd7865e8bd4ab65bbe52100abe52100abe55b8e9baeffea93eb02bab3e5ff75dbe1dfd77be97d263bd8e67d63ef45f20bef45f20bef5a380beecdf9e3ebe2c953e00000000dd89c8beb95296bdab51eabdab51eabd102464be000000008231a3bedd1d9bbec55787bea83c87bec9a569be0000000000000000000000008d148a3ef4795bbe792175beaa37bdbd9ac073be58fab3bee34b6abe1f2f5abdc4f9973e37f3e8bd37f3e8bd37f3e8bdda0424be9b46853e46c4843e6f8becbd6f8becbd000000006ccee2bd126468bea0333f3ea0333f3ea0333f3e0000000034389fbdac23e9be5298653eeeee893dbb7f223ebb7f223ee94e82bcea4e81be639ac33b6eaf9e3e6eaf9e3e6eaf9e3e8bf411be58d18d3e38972dbdc24bc23ee6929abe55dbc63ed7886fbef402c43e4cedab3e00000000000000006bd1dabd00000000766259befa71aabddfe6b5bd00000000d6493abe09938fbd341ae2bddb1dbdbc00000000db596dbe101f52be9e7ea4bea6a213beac7739bdac7739bdfc6a4fbebaa867be7f5ab33e77a4a53ecdf8a43e5f3bff3e000000002645b4bd0000000000000000000000000bf80f3e000000006409cdbd3bcdefbd3bcdefbdc5318b3ecb0fdf3ef2fb44be00000000de7acdbd9d3c3d3e7d3b01be7ee2133c0000000000000000000000004a4a08beff7ebe3e00000000f79b51be0000000000000000e80d7dbde80d7dbd0000000000000000c40db23d0000000052224fbd26ac9bbd26ac9bbdc112aabdc3c3363dc3c3363dfbc41bbefbc41bbe3d41df3ef9e9f2bdeb2c2abe00000000acac1bbe000000009325463e00000000000000002efa9bbe00000000cb5e933e13bca33e6bfe553e6bfe553e00000000e4056ebd0ecf11be281f40be60ebc7bdb3cb75be4af91ebe4af91ebe0000000067c4393e8f5dab3e83857e3e00000000686948bd00000000e7e81cbe000000001452013f00000000000000004a8acbbd07ae53be0c533c3d404b903e404b903e4d2cd43ea22906be000000004464e33ee66744be6866a7bc6866a7bc1f0bde3e00000000094ef3bdf43a9838074f573ce05d6cbe9acdd9bd37a69dbefc1b25be84b6ae3e40be86be57704c3e6691a13e6691a13e0000000003825d
be4af731be0643003fc8ec1abeb6991bbea3b536be00000000000000009c72413c9c72413ce85b52be4f65b7bd9aea4fbd520989bd00000000b4d6cabd0000000000000000eb7bf53e21fe163e21fe163ea0f427be092bc63eaaf8a7be00000000099b0bbe4d6976beb3520dbe000000000000000000000000000000003c4ac83e0000000000000000eea8e33e00000000c364c63e000000001c6d51bd32564bbd32564bbd727729bee8bd84be000000004ada36be5a1fe1bdc59122bec7d4b4bd2f4578be5f196fbd5f196fbd5f196fbd000000004549fdbd000000000000000048f9f1bd48f9f1bd000000000000000000000000279bfb3e0318cfbd0318cfbd0000000022cf0dbe10fc13bc10fc13bc000000000000000000000000eae7713e03fd63be75edb93e75edb93ee1f682bee1f682bec7c5f3bdc7c5f3bdc0ca623e21ed9c3e3cde55be391e96beb7053dbd000000009ca5623e00000000e822f5bd5d0719be00000000761387be00000000e6bc40be366ea4bdcefefb3e3d4cde3b000000004cc5efbd6166aebc31775f3e31775f3eb23b7abdb23b7abd00000000000000007b0cb83ef254943ecf626e3e718810be0000000024d7abbd24d7abbd24d7abbd24d7abbdb2006bbe84eb5ebe0319b03e7edc4cbe9384b23ea2be7dbe5c4a963e00000000a95f93bd05e4a4bd7bffebbdb2ddf8bdb2ddf8bd59845c3e59845c3e59845c3eb4bdd8bdb4bdd8bdb531033d919febbd4b6da93c4b6da93c4b6da93c18d8a63e0000000040a6e83eb01063be2009a2be937451be2ef408be2ef408bed798e1bcb016d2bdb016d2bd8613edbd8613edbdf6ff24be4f3b9bbe0000000000000000000000000000000025b196bd000000000000000000e963be00000000000000000000000010dcd6be702bdabd702bdabd702bdabd0149723c8867a0bd8867a0bd0000000044eeb93e9b51863eae54c73e83e85dbe365544be99a343bed75c8cbec432c7bebd8d6e3b0000000000000000fe297dbecb4a26bee209d8bd7ed939be002030bed5c289beb5ed63be000000006476cebd6476cebdf70013be3fe5a43e3fe5a43e000000000000000000000000c0c6a5be66305ebe66305ebe000000005fba05bd0000000015ca4b3e15e272be86b15ebe47582abeebcede3e5a33a7bd53158cbeb352a9be84b12dbefaaf3fbefaaf3fbef2561abedea96ebe7d28debd716b45be2eb9bf3ed41aa93e82acac3e000000006eaf82be18a4a9bea33198bd3239d43e00000000460551be0c6677be8d0c32be44ee61be6f68f4bc0000000000000000000000000000000068c3f2bd68c3f2bda136dd3e3d852ebeeb4f38bd0000000085ec83bef9813dbe94f304be7daf51be3d2358be0665b2bdb1d388
3eb1d3883ea0eb36be000000000000000075238fbe00000000885261be5c3e4fbe000000003dd04ebe6fe091bed9a427bec4944bbef611b93e23a0ba3e000000004ae5cd3e4ae5cd3e4d1815bec37c4cbead5522be3fe3e93e3209bd3e4f321ebe86a59abd0bd8aabe1fe3b0bd1dbb17becbc89abd000000000000000000000000640789be026c33be000000001b8a8dbe1226b53e65668dbe5abb87bedee2b13ef12c40bdf12c40bd965f2cbe87b82dbed7d6dbbd504908be504908be504908be5254ffbd4fa912be470d833e470d833e000000000743c63e184079be5ae00bbe747cc53e86cf2fbe00000000fb03adbe236226be7da7c63eef1087be0000000000000000c690873ec690873e87ef12beae6b8dbd4c32803e46430f3fb450743d000000000000000023bf8e3e22f9cf3ebcc70e3ebcc70e3ebcc70e3e00000000dea37bbd5654803e59305dbec2159d3ec2159d3ea5ad8b3ea5ad8b3ecfcbd63e5899f0bd5bfba63e37ef9ebd37ef9ebdb1b2b93efa10db3ec7e78a3e37f9f7bd37f9f7bdca841cbe0000000038bbbd3e00000000d6dd9abb446dd73e6be705bedbda883edbda883ee0c6f1bde0c6f1bde11b15be565fa7bdd519edbdd519edbd6160d0bdade17d3eade17d3ec24de93ac24de93a33f7abbd00000000cbf434be74ad27be74ad27be483b4b3e483b4b3e92735cbe00000000a85681bec9975a3ec9975a3ec9975a3e000000007389a4bd2e9e0dbe2e9e0dbe0000000000000000e4ff6fbee4ff6fbe03d025be4ffb88be15b0af3e000000003c7d39bdd931693ed931693ed931693eca1cea3e00000000e8b1b4bd833d1bbefbcf50be00000000b07fc0bed2e727be79a60fbe653c87bd0546f3bd0546f3bd0000000000000000484b12bec36928bec36928be46972abedd2558be1128bdbd1128bdbd774a93be8ac2db3e2bf68dbe832b74be9b9ad53e000000003398adbe0000000025c2e23e28682dbdcf254cbe000000005fd430be1e201ebe00000000e71bd23e00000000000000000000000037fe723eff5f22bdff5f22bde151a23ee151a23ed95367bdd95367bd427e51bd427e51bd427e51bd427e51bdf1ce563e757c36be09e2fbbd09e2fbbd8978d53ec64fa93e9287b1bd9287b1bd7373683e7373683e7373683e8d9f28be1027963e1027963ef79f36be9108c5bcdec91fbe00000000000000000000000000000000f0025d3ef0025d3e90c0d53e0000000000000000fc8430be05b597be036036be415242be00000000daa67ebe1bd7983e6bd077be315148be048409be7e001abe2113e73ea91025be06be3dbe000000000c4ded3e0000000000000000d249b4bedcc36c3edcc36c3e453f5bbe189dca3e14c863be2b7b0abe2b7b0abe2b7b0a
bedb056bbe000000000000000032fe21be82ff5abe6a2c5dbe614fd8bdd7d9e93c0000000000000000000000005598043fe0ebeebd6436063fd82ce53eeb873ebedf8b3bbddf8b3bbd2e6ef13e069955bd9d968e3d223dc5bd047bddbd812e963d00000000f5e9833b2f1808be00000000a07d0f3f11e474bd472335be40c082bd00000000000000000000000000000000000000000000000000000000fed79d3e36eb3abeada7e7bd02bd53be000000000000000023fed53e00f45abed9276e3e4700e83e0000000022821cbe6814663e6814663e00000000ec18a6bdec18a6bdec18a6bdd9dfe1bdd9dfe1bdc8db25bea1bf87beaa8c10bcff95693eff95693e4b9e73bea284b53e000000008b5e75becb427dbe33b8c6bd4bfad63e8cda983e000000004ea43bbe0000000000000000000000000000000094ca5abe376e43bd49553abea4b9a43eb4f80ebefe0181bd6f6365be533d98397e1734be22e683be586f4cbefa7a2cbe7147f5bd357c133f243c643ee8c6febd26bab83e00000000000000000000000001898bbdf65d993e83916e3e00000000d38418be292168bebde2e2bdd1e280bd032288be2f07df3ec5d40fbec5d40fbec5d40fbef7c43b3ef7c43b3ea125833d1b363cbe572a6dbd3d4c0a3f4727bd3e000000006b8dc73e3fea7f3eed55b23e02dab83edad18cbe27ed303e27ed303e2148843e8df0b83ec48e883ec48e883edadc25bedadc25be83bab4bd83bab4bd696bf7bd18ff0fbe1d99f03ec9dd8fbdc9dd8fbd9905a73e6e3629be6e3629be0000000038b51cbe38b51cbe00000000995e55bd1e2a523e6ac1b2bd626231bed966a3ba2dda98be00000000a716e1be00000000fd1dac3ec00557be8883ae3e00000000875c05be875c05be875c05beea6202be0000000000000000303bd33d5cd8b23eebc9bbbdebc9bbbdebc9bbbdd984d03e23c7d13e5479c73e0000000000000000b942a03e538774bd9b3112bd9b3112bd9b3112bd794b9b3e794b9b3e000000008d087abe00000000116eb3bd2544eabdbecfa5bd090f723e84769b3eedc754be85c353be029837be00000000000000000000000033b474bde4f1a93e000000000a39993e000000009a0d82be00000000d3f3f6bdd3f3f6bdd3f3f6bd0000000035c724be0948d93ec2a0823ef12855ba3269a03eacd56d3eacd56d3e3f009e3e8e2c97be28db63bd28db63bd4d0232be9974ec3e16820bbea69580bd094f3fbe314d27beb4a72ebe4e61a43ea7ddca3e31cfdd3e021a8ebe2fa9c53eb1b7db3e42ad19bd42ad19bdaaa3bc3ee3b970bb6656c83e040c6a3e00000000bb2c833ebb2c833ee79097be22eeb3be950f92be00000000666d73beec3dddbd12bcdf3ede69d33ee180b83e000000
0000000000f0f1843e5849af3e909ff7bdf8f46fbe67f2de3e832222be7922da3eae717ebec8e3953ec1e843be000000007bea9ebe57b4963e57b4963e8d2be43ef918e83e7ff6b23e7ff6b23e1484823e000000000000000000000000bb2185be0986a8be86db8fbd1feea6bd00000000fe9e88bec832843c29a213be5c2308be5c2308be01d6acbcf03347beb718c63eb718c63e00000000bad2eb3e36de95bd36de95bd36de95bd36de95bdeeac883eeeac883e8f3ca23e73005abef9e938bd55b4ddbd706f6c3c0f7203be0f7203be00000000000000001edcadbd7e246ebeecf534beecf534be00000000000000008ae68f3e8ae68f3ea66d993e4386803e87268abeb3eb663e69e3aebe42aaae3e476221be476221be000000000845943eafdb5abe5c09853e5c09853ee4d69a3e1d4c21be633eb03ea2908cbec172573e00000000a6bf713ea6bf713e00000000000000001665913e1665913e163cc4bdb65816beb65816be97f3aa3e9f0a89be24c278befa82993e540ebfbe2ab6303eae64a83e08edc03ec9bccd3e3c5564be0c531a3e0000000000000000000000007740cb3cab760bbc2996e33e1cedde3e1e5d8e3e65b46bbebb22e8bd91afb93e30f9ba3eef80d83eae9c11be4594833ebba692bec72aff3dc72aff3d2bcf77beb93a83bea48a64bd89376ebeb34081be354dbe3d000000000000000000000000a34608be8ac2e93eb75109be109822be0000000000000000bbad7bbd53d6bc3eb2968dbe3e2a00be10b27cbe10a8abbea9164cbe000000000000000000000000108c64be1117a33e3b53da3ef77b37bef77b37be32eca4be2098c3bd59bb90be1f6348be923e2bbe0505dc3e21b67dbdd2540bbe00000000296768be000000000000000097ae8fbe1ab650bea054f93eb96f59bd1cabb13a1cabb13a1cabb13afe912fbe168ca2be59c64fbe2b4990bd6531be3c6531be3c6531be3c515b60bd34e886bebf2837be886299bec3001cbe0000000095fa2bbe7d19b8bd411b00be0000000010463fbe45128bbe0b4a41be0000000084eea7be491397be16556a3e0000000014bc28be14bc28be32dfacbe4efd38bee2fc37be0000000000000000c4b329bedf0267bd69de8bbe tensor 6203x1xf32 tensor 6203x1xf32 cst 2 std constant value dense 0 137156427 0 0727723241 0 0427678488 2 75064172e 4 0 0233619846 0 0394954272 0 0791109725 tensor 7x1xf32 tensor 7x1xf32 cst 3 std constant value dense 0 131277829 tensor 1xf32 tensor 1xf32 cst 4 std constant value dense tensor 0xf32 tensor 0xf32 cst 5 std constant value dense lat long month price year 
```
  ...
  %cst_7 = "std.constant"() {value = dense<["category_id", "description", "gender", "host_id", "size_id"]> : tensor<5x!tf.string>} : () -> tensor<5x!tf.string>
  ...
  %sparse_indices:5, %sparse_values:5, %sparse_shapes:5, %dense_values:5 =
      "tf.ParseExampleV2"(%arg0, %cst_6, %cst_7, %cst_5, %cst_6, %cst_4, %cst_4, %cst_4, %cst_4, %cst_4)
      {dense_shapes = [#tf.shape<1>, #tf.shape<1>, #tf.shape<1>, #tf.shape<1>, #tf.shape<1>],
       device = "", num_sparse = 5 : i64,
       result_segment_sizes = dense<[5, 5, 5, 5, 0, 0]> : vector<6xi32>} : (...) -> (...)
  %4 = "tf.HashTableV2"() {container = "", device = "", key_dtype = i64,
       shared_name = "hash_table_8d6f1b8e-423d-4fff-8a54-69f4ddbecf04_load_0_197",
       use_node_name_sharing = true, value_dtype = i64} : () -> tensor<*x!tf.resource>
  %5 = "tf.LookupTableFindV2"(%4, %sparse_values#0, %cst_9) {device = ""} : (...) -> tensor<?xi64>
  ...
  %output_indices, %output_shape = "tf.SparseReshape"(%sparse_indices#0, %sparse_shapes#0, %8)
      {device = ""} : (...) -> (tensor<?x2xi64>, tensor<2xi64>)
  ...
  %output_indices_24, %output_values, %empty_row_indicator, %reverse_index_map =
      "tf.SparseFillEmptyRows"(%18, %14, %output_shape_23, %cst_13) {device = ""} : (...) -> (...)
  ...
  %127 = "tf.SparseSegmentSum"(%76, %idx_21, %12)
      {T = f32, Tidx = i32, Tsegmentids = i64, device = ""} : (...) -> (...)
  ... (the same lookup / SparseReshape / SparseFillEmptyRows / SparseSegmentSum pattern is
      repeated for the description, host_id and size_id feature columns) ...
  %125 = "tfl.add_n"(%87, %99, %111, %112, %124) : (...) -> tensor<?x1xf32>
  %126 = tfl.add(%125, %cst_3) {fused_activation_function = "NONE"}
      : (tensor<?x1xf32>, tensor<1xf32>) -> tensor<?x1xf32>
  "std.return"(%126) : (tensor<?x1xf32>) -> ()
```

During handling of the above exception, another exception occurred:

```
ConverterError                            Traceback (most recent call last)
<ipython-input-...> in <module>
      1 converter = tf.lite.TFLiteConverter.from_saved_model("test2")
----> 2 tflite_model = converter.convert()

~\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\lite\python\lite.py in convert(self)
    722     for key in signature_def.outputs:
    723       ...
--> 724     return super(TFLiteSavedModelConverterV2,
    725                  self)._convert(meta_graph.graph_def, input_tensors,
    726                                 output_tensors)

~\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\lite\python\lite.py in _convert(self, graph_def, input_tensors, output_tensors)
    637
    638     # Convert model.
--> 639     result = _toco_convert_impl(
    640         input_data=graph_def,
    641         input_tensors=input_tensors,

~\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\lite\python\convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, enable_mlir_converter, *args, **kwargs)
    567       input_tensors, output_tensors, *args, **kwargs)
    568   debug_info_str = debug_info.SerializeToString() if debug_info else None
--> 569   data = toco_convert_protos(
    570       model_flags.SerializeToString(),
    571       toco_flags.SerializeToString(),

~\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\lite\python\convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    200       return model_str
    201     except Exception as e:
--> 202       raise ConverterError(str(e))
    203
    204   if distutils.spawn.find_executable(_toco_from_proto_bin) is None:

ConverterError: <unknown>:0: error: loc(callsite(callsite("ParseExample/ParseExampleV2@__inference__prune_1133") at "StatefulPartitionedCall@__inference_signature_wrapper_1850") at "StatefulPartitionedCall_1")): 'tf.ParseExampleV2' op is neither a custom op nor a flex op
<unknown>:0: note: loc("StatefulPartitionedCall_1"): called from
<unknown>:0: error: loc(.../linear/linear_model/.../category_id_lookup/hash_table/...): 'tf.HashTableV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(.../category_id_lookup/LookupTableFindV2/...): 'tf.LookupTableFindV2' op is neither a custom op nor a flex op
<unknown>:0: error: loc(.../category_id/SparseReshape/...): 'tf.SparseReshape' op is neither a custom op nor a flex op
<unknown>:0: error: loc(.../category_id_weighted_sum/SparseFillEmptyRows/...): 'tf.SparseFillEmptyRows' op is neither a custom op nor a flex op
<unknown>:0: error: loc(.../category_id_weighted_sum/embedding_lookup_sparse/...): 'tf.SparseSegmentSum' op is neither a custom op nor a flex op
... (the same errors and "see current operation" notes are reported again for the
    description, host_id and size_id feature columns) ...
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
    tf.ParseExampleV2 {dense_shapes = [#tf.shape<1>, #tf.shape<1>, #tf.shape<1>, #tf.shape<1>, #tf.shape<1>], device = "", num_sparse = 5 : i64, result_segment_sizes = dense<[5, 5, 5, 5, 0, 0]> : vector<6xi32>}
    tf.SparseFillEmptyRows {device = ""}
    tf.SparseReshape {device = ""}
    tf.SparseSegmentSum {T = f32, Tidx = i32, Tsegmentids = i64, device = ""}
Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
    tf.HashTableV2 {container = "", device = "", key_dtype = !tf.string, shared_name = "hash_table_fc7c2e70-8a89-4115-84d4-2f713273e69c_load_0_198", use_node_name_sharing = true, value_dtype = i64}
    tf.HashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "hash_table_8d6f1b8e-423d-4fff-8a54-69f4ddbecf04_load_0_197", use_node_name_sharing = true, value_dtype = i64}
    tf.HashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "hash_table_b60d3bcd-14f8-4085-a3b2-85948ec09373_load_0_199", use_node_name_sharing = true, value_dtype = i64}
    tf.HashTableV2 {container = "", device = "", key_dtype = i64, shared_name = "hash_table_cb0918fe-8c8e-41f5-9aad-3750ec00bdad_load_0_200", use_node_name_sharing = true, value_dtype = i64}
    tf.LookupTableFindV2 {device = ""}
<unknown>:0: note: see current operation:
func @main(%arg0: tensor<?x!tf.string>) -> tensor<?x1xf32>
    attributes {sym_name = "main", tf.entry_function = {...}, tf_saved_model.exported_names = ["serving_default"]} {
  ... (weight constants elided; the dump is truncated here) ...
}
```
bdbdc11c3e385694be03cb06be9310d53b92787b3df0f3103c57a7733dfbf6afbdfff1053e0b869ebd9b7250bc2c9cfe3c3916e53d44bab43d3423c3bd24d580bdb5a15ebe0800b23ed3731bbd115846bb5ea2163df7c3cabc1fb95fbd12be04bea389e33cdcab263e58d4f83c95c3b03b50f7823e4b022abe9925a53e71ec833e8059b73d7eb3cebc35d411be09f7903d0cf00e3d3d644c3d860a58be959b463d4f1b1a3e435412bd6e470e3ed933363cee77cbbd0b76c2bd2180243d4355883ec60be6bdde31d43cbb228f3dcb462abef71204bd478ab53c6904913dec8128bd318707bea4db343c1733633c26ac463d350727befa94383e4b5ca1bc5497cd3d39314f3ef7eb573ead4903be8c554abeefeda9be2ace073e61ec353eb0272c3ef19512becbad0abd2866673e952b763ca37dabbc8a9f853ca876cebd667136bd8a0a133e81aa963c732f943c2d0b8ebccc2f4d3dfd9d13bdc6c2a0bdcd6a79bd88b4553edeccb2bccf4243beb2f0b43df4888abd40c402beda7c67bd3f0a8a3dc24d78399419bfbcc436903df70863be1095123e0bf0663e7c80473ea50d69bc5fd7bf3d3bd615bc5c421b3e42d6913e6566863d950a583e268f20beacd34abe6db0613cad5f8d3d4647b0bd98e9213d6ee814be0439943e304f78bd9cbc9bbe551a023c9524413d73af8dbe0572b23d0ff6dbbb53bc0fbdfcfef9bd19474c3d752e933dcd331abe0ad0f93d7d273cbd968483bdb20e55bdd0b801beed4a8dba8db9c0bed53a643e35d3473d18db4dbd941f063e0b5a65be9d982d3e01fcdb3c9b47433d4a4739bd69bf3f3e714814bca2dc80be385a023d73b86e3eee7a7f3d90fbb23d00da493ea28408be804cb03d99a748be435eb23d6ff407bd27d0b5bd8f8ad2bdc5ca463e1bf75d3b3571603e26e33f3e32f0ac3d42e3bf3dcebd0b3e77d82e3d657827be5e4baebbbe49ab3cddfa17bd3bb8903edf3d8c3d3ccd8cba3ef5643d553672be41e5c03d18507fbd2fa5413e841df63ef701373dfdf21b3e41f6c2be8b2d0bbec48287bd4f522e3ddd922bbe68da413e0521513e1e5f813e05346a3de3854cbd97d564bece4315be0a1b893e58804a3cf305853ce682303e26a939bee482b3bd3d6e453ed49836be3d3e59be1164ea3d1b25a0bdc4733cbe6ba3a83df30b09bed635a9bc31fdf23d8a5ad9bd89a6273dd914b93bbf0f9cbe8534823e2b84343c73606bbeca19de3db70f0abea46f1b3e4581b53e3962e03aacab9cbe0711c3bd0711c3bdcb3e9fbb1fa4f83d26e7e4bd1a84253dab0d14bd18e730be9b9b2d3ec3e6253d532df1bdc3a430be64ab16be3d2f1bbeb54c943e52e88e3d29a450befe5194bc067d413e096c30be55bb54be7bd5c4bbb10e243dadc9c23e771c29
bdf59bfdbdf54430be2fb8d8bd87c3a0bd989b223e3509163e0d22693be8c0b1bd604514bd593477be55328bbd03f52a3eb532d7bd9c9e11be9d0f2bbde176c6bd8463d73d94d5973bdd0e183ddd0e183d18fe003c518d6dbe7aa8753e9fcf9b3c1e702dbd8f4c91bd491ee8bd151dc2be53a4e33d75fa81bd6b619fbe8353153e024c65bdca92483c1fd55a3ef21c783e2288803da474dcbbfd8cd13bfc0f5abe95731cbd532ee2bc866b87be2dc3113ecbce94bc89f8403d52c1723c2a977d3e1299313e8f5b2abd789f763d7b62f03c31fbbebe4189fabcfe4539be27564f3e82e3f73c5ce7e73d33ce233e87f108bd7be5543e56cc433e0f76a13d0fb1bebda786063d30ea0c3da428c13edb23833db122f6bc4647ff3de1ace93d3526acbd6c818bbdaf7cee3d0fef03be3d7f073df66ddfbb6507463d0f64173dc0c57b3eeb7d4fbeb0aa9cbeda2895bee73d5c3e660c50be86c21ebe4b3b82bb8be67bbec27c89bd92f8bfbc6ef252bc42d7a1be9648613ed629cc3b40f090bd9233803b615f5cbe8c270e3ec17bdd3dfc2540be327e5fbe25bc74bef52d3b3dbcaf4d3ebf3d5ebe23ec83bd23ec83bd942248bded3bf93c2b891dbd0175bb3d7b187d3eba2bd13db32d243e81eed83c33aa4ebe282f843ac02d973e24221b3da4a807be8fcbb1bcb3972dbea24a333c8463003d4016c5bde1ed313ec571653e6ff649be7a7b853e469df23d408f63be5b894abe5040423e8e24ccbdf205bdbd1e7691bebdfb9cbeb6a4c1bd20b795be25d1743e591615bd04fd823ec50aafbdf5a1003df2e7c73d1e761c3e4edb3f3ed955893ee0083c3d2cba723eabd3a3bd1161a93ee85afdbdf51eba3da9ce1a3eee3780bdaa6c57be04538cbe18d003be3b271a3e92d043be66cd00be3ff047bd61200f3cceaf453d9d39f13bb63b2d3e12349abd9862753d2662c23db5e778be5058923e081852beafcc41be306754becd5396bd4b4c04be6405c1bd65306b3ec897823dcfe3c83dc494b43ee139b8be0d9e0f3dbf4085be9241c1bd0b94c83de2eb1e3e2226b33da778ba3e75a52e3e53a42ebd53a42ebd0a1328bea32689bee865b23ea6c72cbe415afebc357630be2763d6bccddc3b3ecca30bbee92796be0f09933dff93b33d96234d3e2dad733e818276be956686bd599da2bcc8d950be9d85903dfecaf73e87c3b1bd65af9b3dda81d23d4820063ec9370dbe8e84993de480303d8f4c553e33d29c3d37fe99bed0c5edbd983e9a3b36aef9bdff6fae3e171976be582f06be9f8f0a3d82a646bd28ae9bbde3d3e4bde3d3e4bd9d351f3ea7abb6be450f89bdd1cf0abdfe2f9ebe5ef2d5bc0c063a3d3883aabdea21afbc7ef85e3d3194dc3e0f0999bdf1e74cbe6fdc7dbef4eb343ecc8f12
3e6787d83d47328cbe768f0dbec63421be1c685a3ecfbe1a3df175993d75563dbe33478ebe5abb10be2cf8bfbde879b7bdc2a654bddee84cbcbc7c363eb39cb53cf9e091becf34febc739356be5fe067bd11ab01be11ab01bef50d2dbe0c020f3ec39b473cf4502b3ee9409abe7127f43de20e9d3c4152d93eb8c906bee185403eb9f29bbe472d5a3e817917be2b7951be5705a13da6f56bbed759023d0ad6803c2a7c05be73c6153e683f1d3ed593b5bec8e4c23da60b95bec01934be2c6f363df8d480be36cd743c695749bed075d33c5f4e78be495c54bd1e2a803d2536e8bdff7d64be17b60cbe669476bee20f6bbe8c495d3c8ef990be86139c3c1222983e7b920cbec71d083f78a74fbe035fafbec1ce853e4366563eda2d9d3eb8b5a23e369493beb592eb3ea23f52be3a51323e9989e4bde938bdbcf5a642bb058b3e3de0a0f1bc6e0d683e66de42be6cb67f3efdc80a3d8d4391bc636d833e1ad5b8bd242591bd4f2097be987d253d2a7e22bcc6f193be429915bd5511d0bd29c1eabdc81913bd74a29dbdee4e95bde57a85be89469bbdc2170cbecffafc3cec8512bddc1644be7487753cd1fd853cdb0fc4be4ff7b63e9db30d3ebdd36b3d728dc2bdb91a2c3ecbbda73e8c298f3e2b55013e5f5bbbbdaf7168be6beeaa3c74315fbc78252f3ca3c036bec69d38bc83964abeac7a82bee85a93bdaf15543d81e93cbe5aad4abef58755bcf0d247beedc4b03d49f9bf3ea09394be4734bdbdd62b57be269ebc3c1d4b3bbd980221bc1444a03d06ed1abeba58323e5bd730be70eb9a3eee2defbdd075b43db960d9bded9898be7b7e9ebe9c3d963eea6f103e7c7b0f3c4ee36b3e50d1b7beb4b589bedfc0cfbebd5d1a3eccbf0fbe107f223e9acbcebeeeb83ebc7111223e498241bdfe55acbe6ebd233eb36fa83d57c378bd397a883e37ebfabd799b923edf615d3ee8fad3bc953deebd5c8fa0be02ad87bdb4658dbe2353b6bd63a8473ea07b56bec22b7e3e32dd473e8409043fbd5d1a3e2ddaac3ea698313ddddb3dbe6c0c94be32aaaf3ec018123f454e6c3e107dc2bdc1fc96bd137b6b3eeaa97b3ed965a73d7733b4bc8195f3bdab4646bed806123ed4a78bbe4b73633eb63dc83d9eb46a3eb0daab3e8c5fa6bd3a8fcbbd0d39e43def25983cc6d650beeead5cbd62929f3d25f1073db99415be0b8b183e7a1e963c37d7e0bd5346e83dfc1308bed7f94abee7133d3ead6293be56d2bebe96b02e3d22877bbe0a99143ea94dd73ecf8687be55a739be04c1fdbe45b9683e8817a8bc2b1c19be77429bbe58b736bca958ff3d737ab6bd7e153a3da1f25f3df2e97a3da7da96be9b2ad13ebf400ebe9afc133c2252a8bdaf27803e7051cb3efe9fda3cee4e95bdb01514bd05e68c
3edf2f76bef10e82bdf5d350be8c32773ec34720bdafa7a33e9b9610be04b9a83eec69163ea84094bd705ba0bd0248073f2cdb5c3e15b70a3d5f5d92bddd5491be8b0ca9be7c01643e9b5871bef6ac97be46f36b3d6f0a1737c627afbe16c26e3e099e333edf509cbd99aaac3d8f347b3d39a6223d07f250befcce68bec8b9073d54388c3e1f153c3ed94192bdb36b0e3d264d4d3e75d1e83db6e7a43d2eaebbbe1e5bf53e200469be93ada73d76ee33be696b3fbd14d6db3d824f433ee99d3d3e8dcfa2becf298c3ec7ec313d3fe4b2be8d8db73e6c8e25be4cf5b03eb2a956be57821dbdcd4780be572f67ba87dec1bdc91e5a3ee5a9103da6a47a3e2571adbea97791bcf36d5fbe8fe0b53e5fd2bc3e727dbbbec00a673d86db40beeb41eebc2c7f4fbe4d8d823ed2b5ebbe971980bef3a1f7bdfbce4d3ed03c6e3e3d9eaabda0bcb23e1b62683ec4ce30bdce1bbf3e5c679b3cb00e5bbdfc0509be696c133ed177d5be546f05be24feb9bccc99a53e4c0321bea5ce2abe5eeea4bee3ea153e55f0243d11034ebe91c89abe38dfa73d918c063f628087be897bde3e38105bbe5cf395bd1c020cbeb31e2abe6df1cabd44ad6cbe5e03c8bdb057033daae7523e466117be5835a8bd4cb3a93c9c504fbe1aaa8dbe7510acbe8662143f5350193d5350193d219b92bcb5aae83b4f5613befa1f183e18a7853c5e47f83e1e7463be2841e7bd70b429be16e913be86aa083e86aa083e8342173cbb7f50be9241333e9241333ece9028be57b0713dc852903dc21c1ebe6c2c9a3ea7f9153d4f4f2ebe61dd2fbd5dedba3e03a4a3be9c16013d1bbe2dbe9eae88bdcf804a3d5a7bd33d5dc0303e1f16cdbd99e652bc2120df3e48d7973eea8a173e81dde83eb154e3bd2d281b3e356def3a279784becfd7d93e44e0963ec9e39d3eab0603bef6958dbec74c12bed753d03dd2c1ba3e7dca0fbeacf9293d35a44bbe97df823dd0aa9dbed17881bd0563e8bdedb5f03d882a083ef16686be2f028cbd77032a3ea6b9ae3c7f4a3e3e2d2307be9b0d023d273c9e3e6eb0c2be56cb4cbef06623be1d37013ed335863dceb0063ee642233e2a1598beab0f32be70a94b3d0501b4beb79c693ef382c0beaff6bebecd041f3efae0d73d4412abbd886d443ef58985bbf2268cbe9705cd3e001b0a3fb6e481bd4710263ea187e5be123d8fbe051b6a3d479480beed24fd3da1752fbe0e629cbee0fae6bd2fabb03e9e00fabdcb94263ee92781bb7d6f69be1279c93b59f222bef2e541be8de0b33c67511cbe8a3b713dbafe493edeb4a43c4ab808bc8ccdb7bdc436c33e970a2a3e3ad08f3ee6526bbd3810103e3810103e03aae83d7d54b53d2a00903dc417973df91524be3aff08bd7fdbc03b5d30b83e100fbd
bd05ddcf3e4573073e11ca413e77b704bd5c40f0bd74293abe802a9a3e955363be2ecf89be94299bbd79431bbe79431bbe204719be8bb191be44f1a7bdfe4538be111f403d9bd8023cfe5e28bebf6595bd7b6171be0059533e7ec2753d3ab6cebe126d44bea1dc10bed4a40d3eb0359b3e0a4b99bed95a24bef44c0f3edb0b813d420bb83efeb9083df4bcb83e054c89bd688c0abeca8a10bd1047f2bdc5780abed38beb3ef7bb4e3e6eff0bbe793d5ebdf433b4bef51af2be1aaf0d3ef1af15be4430563ea37eaf3e5ffa2abe3a0c34be997be73dd9b6743d0507cabe62b99a3e4ecb0cbfafa924bcd6573cbdf790973e6ad899be08e9df3dc772c3bc46e9ea3b4df8103c6de81c3e9618dcbd5dc59a3db8f9d6bd39f31ebe6c42a53ddcfe003e1af6f4bd4c8b44be1a93f73b9f570b3ea422013ec45f26be13cf5abe03babf3d88031b3e537efcbd403b62bd8f405bbe0000000029a213be4305ba3e75ec96be32758cbe910fd7bdb2d6cabd81bf1a3ebd62513e99f9493e8559453e2f85c5bd83919bbd2c36aabdb84b1dbe30932bbe2e437abeb2a1423dfb3f3fbec38d0a3f045acc3d239153be0627023ebb13c5bef15055bed3440cbed3440cbeabab1d3fc2c35e3d707a3cbee037c5bd90e0adbe373c793e1fac08bda1937fbc11dc433e3623053ec284ea3ef1db23beaf4c25beb1679abe01302e3d5f1f6a3ee8e1b93ecb0d403ebfe821bdf13da7be513d323ffd1b89be5da988be933a1dbe4fed7abe59c4b9bee7139bbdbadc6d3e4597ae3dd19fa4be981f6cbee18c4fbd61f9c7beeaf9483e7ddbff3df77e51be684ce6bd4af7df3e43e044bea2cb93bc2a0d0e3f29ae7c3edd73fc3d521d20be0bb81c3efc8791be937d933e83a7f2bd6ab1a33d7e7d5c3e480503be0de915beaf499cbec5f029bdd0951e3ee41a0bbd4aa69f3c1fe58cbd6b6d91bd265ada3ec8c523be8ed1aa3bde710f3d0df4543c88ae7c3db16bdbbeb516183e0a1f5f3e64435b3d7f7f3a3edd39a0bd986f7c3ef4c02b3e416ea93d0840fdbce09ba4bea05251bdffa491becdca99bd3c92fc3ddd2e05bea1fb703d7483723d4791123e68e43c3e57d5bcbefe8381be45ba833eedc6183e5ddef73e985b6ebef7e26cbe00000000e7304a3e0cadc43ec085763e1f90213e80d74fbe80d74fbe9bc8f73d1337a43e8922753e1c618a3e3c96323e0df039be8a398d3e7ee6f13d0bdd17bd41a10abeb3067bbd96219cbee41d9c3ce514b0be108d19bd9785d73edb200ebed908043e5ee1243e51adec3e5f8f433efd73823ddc2ee13e1a28923e04799ebd3757c13d8bbdb0be6c8102be97f9b73de8379cbed3a6b4bda99f0cbe2652c1be0880843d13d0523e1cee803e0f0f6ebe30c8c1bdad45803e389048
bec1dfeb3decf463be000981bd496419bd08ab513eef1d04bea8a2a73e16294fbe49e0033ef5297c3eae4b463e0ee611be5e8131bedf98aa3e79775dbeb6377abe78484ebe1c60cdbd4332a3bdfbd4b43d2052b7be2646993eeaf210bd0735963ef856a8beb787173d899a5a3e39734ebe233a87be117d153e802a9a3e031e7abe52bf91be3d12f73e7fd920be6621cdbe3517d93e6ea493be990720bdf3c2443e03fb1fbdee4db43ea6d619be74843f3d56439fbd3f0198be91bd42bd74616bbe1f9dafbebd2ce83c6fcc96bd9e66a5bd1241073e4de1a33e5e33043f1b1987beba7683be9d99c8bd685439bcc042cdbe8e4a243e748f09be9347d3be9e3a12be3b36b03efc635cbebfd6b53ccae31e3c0d2b153e0d2b153ebe083bbe0da486bd73f80d3ed1d2b13d8d9a5f3e279ded3c3fad17bedc4300bedaf49abea101c6be00000000c4a86e3d517573bc517573bc517573bc517573bc1b9c53bd1b9c53bd1b9c53bd8f0bd13cd92c043f159096bc3a72de3e0823a3bdf36158be6b8e40be221c383e197537be5a75d2beb760abbe05f1ffbdc6566a3d5807953e7698df3e8dfd8ebc25952ebe0b17f1bd3e2b90bd43e0683d5344853d388ce43d787e08be7e4be93e6ce36ebef2a5143fa5c886bebdcb9d3ed18c873edd17b33dd05558be4e9323be8bfa80be069c83bd3c3885be2a30b7be0000000043cba9be000000008b057a3d3e70603cba4aa5bd237395bed72b7ebe1d25f0be6f5e1e3e76749fbd25d2a33e8271a43c8271a43cc38b2abeda9837beeb219bbe584b0b3f846a853e3f27f23d517a073ea0ac333e3b4304bec01b97be5c5d10be5a9a02bf4a21b0bdc186893e2dc30dbd000000008b3a87be5f575ebdfa5511bd4aab063edee8cf3e746facbea26755bda3e9c73d06191dbe86851d3eec16903dec512bbce7bfbb3e4f4da73e403141be361b65be23fa3dbe1544da3ec789fb3d5dad543e97668cbe50c24e3e177952bc04d900bd179c943d293f9abefb1bb13e9fb9ddbd30f7043e2b873e3e62a63ebe9c70433e6cfe1c3d2b724bbe1206b73e4bd59f3efd3de7be25739ebee829febde829febddad469be737b5bbe085f8d3dba4d453e5a4fa5be77fc6c3e6f42473eada629becb0d403e1b687f3e5bc724bdd1349abe22c748be98cdebbd0e0708be3f2073bedeca67be911a81be996c683d47a9cd3ea60d8abe5ac7dd3cf3a3ae3ede7acdbda6dd6fbe9621dbbd9f6d923c80c320be0a9984be0f3068bed19e68bece7a48be9532f9bcd81d8abc23d10fbe1e02b63dbffc41bdadf3493e2d6b8bbc6c33563e6c33563e21ef29be59b7c53d6d9243be0dd9033e39302ebe51bcdbbdf3a3ae3ea400113de1a003bee1a003bedc0367be9ce3d83d596211bd459717
3fd24b0a3e17f2e4bd3b6ac9bd883909be46221a3ed01c35be0f7c8dbea2ecc33e84d38b3eec33aabed6558cbe3eab47be3bcdefbdfb44303e2f6b37be25639f3e05e1de3d540c9a3eacb0de3eda01f73d2121f1bd4a3781be00000000b9ee4cbe5f0fe43df4207a3e4aaa65bde14987beb0d8a9bea192b23ee7d3c73eeae7713e96e9a6be179950bd9e231cbec94cdcbdcd9b79be3378b13ea7da4f3ef7dd473e4205f4bd24f5873c93e29d3e50933d3edb6bc93c5bc27abdc2f928bef2aae93c64503c3cbb98b6be6eb2a13e34a0b23eced6bf3ec7166dbe7df8cc3e2e2ebf3e8ed1783e7948313e0ec069be23f32a3e83b1373d22d6fa3ef3a08dbea70926bebe3b48be9acbc0bd8517dbbdce2c0fbd55681cbe676cc6beaabe4c3ed8b9e1bee515e93db885ee3eac736f3eba6f8c3e71647c3d2b5cadbe5d5b043ec8cf9f3e4b37663ef90658bebbe313bea83e8e3ec3228f3ee21110be1cf789be765065bd4e56c6bc8c4fc83dd60d6c3e95092c3e0da85abeb8e9ab3ebbc58c3d47acd83e2f3f83bed6319ebd0e043cbee322053ed1121c3eeb1b8f3eeb1b8f3ebab78abd70bd14bedc7a38becb5e933e84c7d7be515f64beb0ed6a3e98fd30be258b9ebe00a28ebe4a63c03e4a63c03e1abb353e1abb353efac7d4bd8337a33e6e90523ec7e8cbbeacd14d3eacd14d3eab668f3ea9cf5f3e8e6151be796603be928d05be9bdc063edbc8313efbcc13bcc4cc18bec4cc18be3fc0b2bd2d0fb2be4b67b7bd04f66dbe75aa36be3c17d5bdea97bcbd2ded343e435f77bed1e694bd1b440f3f4aece93e29bd24be5a52c93e0222a7be02a8853eaaba61beaaba61be3ce9c9bd08c927bebc8fad3d616475bee1de91be544c68bee27f6a3ea61982be00000000c8a24ebefa50aabbe15eb73e945e0d3f23a9583e23a9583e005684bd9accadbd660c81be21e2b7bd46b1193e2d57e43e574ef03df6df9a3ede6ffe3de41f16be6e61203e0eed663e06de85bedec269bd4bec94bdc4e6abbd93b8a93eb413723ede710f3d91df1b3ff094753ef34b50bea99f033e06c6383e00000000000000009807573d391e3d3e01764b3ee7bce6bd71e93d3d9b0e003f804707be804707be804707bec16d71be3b80cc3e9aec7cbe02bf7d3ee4012b3e5a63db3e12a626bebf7c173f7c64793d12be59be932724be8f1d343e8c39e2bcf8dde63d389048be5ef03c3e7423523e0000000062c8493e4d23113e59eb6e3d59eb6e3d59eb6e3dba92863e8e8eb33dfc74be3ea088953d45a56c3d19dc283e202079bef792143e5b9f67bcc4b1ee3ceffe26be647b51be02f475be54efd4bdb3c2d3bd525e40be54c610be5b296ebeaf14c13e3f5077bd8dfd2dbd1889c3be0aa70a3e0aa70a3eac9e533e88b728
bec4d06b3d4b3ed73c1268203ec651c5bd99ac613e064aadbe0457993e700e8bbdc0239a3edcdf253e31cf92becbc5b1bd045933bd045933bdb0b9a83e74375ebe000000007d24e5be49203f3ed0ab55be0c9eef3ec92a923ec92a923e4ee5bf3edaf5db3ec6c7db3e0000000099db9a3e76d3d93e8bc27cbef3d17fbe8ae359be81ee82be38b889be74293abe9cc21a3e32824ebe162c113e2f3191beef3a85be7b48133e407a73be2ea57ebe8da6cbbdabf685bec1b036bec1b036bece3374be2ed43abe2203883c2203883c9fbad73e9656e93e5a14dbbda781d4bd8fb5753e44cca2bc23390bbe98012abe000000001e53433d086976be471480be1423e43e4f9c4cbeaf39a7be14558abc14558abca3009abe9836fd3b96e7623d2bae80be421708bd00000000c19c96bd46342dbe51a8063e08971abec0452fbec21f3bbe9b46853eee335a3eae29a1bdae29a1bd8d1058be6a15b4beccc4ad3e37c0ccbd25b529be452c873e4555ddbce3823cbeed5eaebc3d8397be472e89be4fe271bef64857bddb5473be11bb323e67e9c23e895525bedc24df3def85253e46d303bee291ab3edf10703db596413e36f891be0c6839be0b53f83ca00d0e3f9c57c0bd8b5838bea12cb43e9cf9f83e19d4bfbcbde076bea6c7a73e619c6a3e619a8b3efb32a2bd0a0e9bbe8922753eed2a9fbd7cc93abef516cbbc4ed53bbdc99b82bd1d6e62be7e1a92beb40b4ebe94cc393e32aa5dbc000000005f1a0b3f383c8bbcc54533be71724fbe71724fbe0da486bdc3d12cbec3d12cbe043b76bdd46c2fbe8b7e1bbe8ecb55beb12a09be4065023f6cc9043e395482be000000001df46fbe2adf61bedcef86bd4fa1bfbc051ea1be518584bee1a71e3e591afa3ddabc6e3e43f8bebed4da9bbc39fd3abe4429b03ed113043dcf626e3e83036ebedc9f25be0823a3bd61dcff3eca51603d641553be579e4abeac112fbec7d03b3d9011b1bd40e3f3bd0000000000000000c9aa923d10fd92bea1ef1fbe8740a2bd5347033e7f70efbd0000000000000000db553cbe9c79833e1cedd2bd05f1ffbd13630ebe022ebebeb6ca56be1db391bdd76ca13e135261bede062bbe8402b2be868c1a3d6dec593e57f340bef14e3c3e84c1b33de1b48a3e749125bedee9ec3e87a393bee51684beebad5bbe9fcd7c3e49e5c03e494e143e0ed0acbe8bdf41be7e419cbe8c4895beb1d491bed27bec3ee3e02ebe0000000062fab33e043be13ea968873b709ed4bd354460bea05fb53e6af6c8bd0ec98dbe9c6742be473c75bd000000003a70a83de09269befefd2b3efefd2b3e8773703e16f9c0be37d9923e7354413e0b228bbe28f7febd510788be5e0a87bead148b3e971068be0000000000000000941bc73e1576b0
bd00000000fd403ebe37c510beb54cad3e1616b3bd41e822be528d09be528d09beb3100cbe5f78a9bc5f78a9bc13cb4c3ed83c1dbeab4c0f3fe2450abefe3a8c3d05bf1dbe62ee25bcea136ebeb7f9df3edc2f9abe8f2942bec157debe07f14c3e238f18be000000006d42b8bd6e4f973ed5b54cbe383a91be70caa2be39659bbc34c3bd3c1aa2c5bef86a8b3d11045cbe2a47eabd2a47eabd628983bd1362e63e4bb76fbef72714be38463dbe3bf8ddbd8ae23cbe8ae23cbe88ce2cbdec09bdbd715f4d3e1ba9bcbd1ba9bcbd2d25973e00000000f2cf023e0a80f4bde31627be0000000045866e3de47194bc88b2183f8a5934bec6f52fbee574583e9f459f3e04b2e8bdd20ec6bd228c4fbec60884be08c6bc3c7c5b43be79fbccbd50de723e021ab8bd021ab8bdf4fd25beb4370d3e3a7284be29c0a2bc5aef023d00000000ba07993edb6f803eda3fd8bb9d2e39be1ed1853e361c5a3e54c795beb3de27be7645eabdfa4757be00000000b8b856be733da13cf051a33d78b613bd775990be333b29bd4d1e913ed07c8a3c678d123fb7492dbe2b1a45be9a82abbd870817be8f7537be464b033f347a74bd69b79dbdcfd928be417d93be000000002c6631becf8d01befbcf78befc4ea8bc0c5ccd3d4f9c69be076032bec9ba873efd70413e000000006a34453e6831ef3efc1c21bd7ad750be3b55263cb36c053e3e7fe7bd2c259e3db9dc4e3e7901aabe5dcca7bcc26accbd39c55e3ea34561be86ac95bea1e29bbe1a6b26be7d8583be6e8c833e18a033be18a033beea436ebe3368a53d00000000e07869bef8bfa63ec19a93be24930fbe24930fbefc286e3e653281be07360ebee5b6b1bec1b7a7be19d13ebe13188d3edcce763eccc6ac3dd15988bd5bea93beb48d92bea32d1ebe00000000ebe7cabd0845943e9c2990be82169a3d9b353cbe1ae401bead5486bec3dc62be00000000bff15dbe0000000000000000ae06323ef8b071be3801b83efb37c8bd87fdbfbeefbf823e4c280bbcc94a9c3e50a2de3d2a1282bee76a67be7324c8bd000000007636c73e0f08b13e4d798fbe12129dbe4399bdbef816f3bd7a2c44be2c5860be234955bef7670abeed1c6c3edabb08be9ee163bea0e98dbe70e410bdc61526be219945be43a30cbee86510be0b47c5bddff45b3c66406bbe067d3f3e1800b9be1acb483ef3274dbefa298bbe13b0f93d079a5dbd13ae97bed47988be1a5826be77898dbdc2f721be6efba0be03df06be6b6627beabaad6bd99272abe99272abe8d2b2cbe1f8354be1d799bbef24ebf3efad230bde4063e3eaa99a1bdc3d0173f00000000789db03eb471fd3db471fd3d5ce6c2bd37e2e2bd2a88c0bd63b73dbecf890fbe547d9dbe0a5cd23e3dfe52be91a701
3f00000000c054c3becf875f3e0000000000000000e69b16bd170e813e9f3ccdbd8c2e153d09938fbd0b981d3d97c4853e95629e3e95629e3e6bbe523efdfbc3be94918a3e7d2c723e84845fbeb74c78be8b72f83d6b3c9abdbcc127be9c606c3eb18a26beac7739bdcaa133be3db24b3ea3df18bd826a9abd826a9abd00000000a4785ebe5b805cbd2a01213e05e4a4bd0395fb3da2235e3e000000002ff22f3eee8fe9bda52cf23c492bfebd947bcc3eb4f2643d3a008c3dae2d65bed13a553e7c0d8bbee8c930be00000000d5a563be9fc601bdc7f0a33ebd4c81be66fafe3e79582b3d4e69b33d65cc90bedaeb61be3dfa95be39d989be90b956be8f90b1be1f27cebe746b2cbe4fd948beb0aef53e6087aebd0fbd18beffdfd93d02bd03bee847773e0c195bbeaa91a3be1cca8abefa71aabdd0b89bbd580c203de584813d7385d43ed7b005be5ab6ca3e72964cbdcdd52ebe42dd7cbe8cdb14be616f743e21b07abd21b07abdaf49c7bc304afbbd46ef2bbee944a53d4932d0bde11b15be9cbf733e872b113ee883c63ee7a4ddbd6f414bbed02e213e7dbaee3e73d12fbe7ee975bdba1324be71a3013f8ba253bef3ec02beb409cebd572133bea8b51d3e83f2113eac922f3ee804b6bd99fa9abe4932bcbe84b3033e23bc573ecd1cb93ee99a73be744ad53c2a778bbee274663e46250bbe46250bbe46250bbe711ea73eaaa2a0be7b20133f83ad0c3ffc88e63e0000000011e474bdbcb17d3e0974abbe2c7353be8c79b8bd8c79b8bd298b8fbe11525dbe2ec360bef116a5bd5d79dd3e09fba63e042509be1b4982be5d4186bcce0e58be677caebec53ec1bc1b74373ef50a9fbdacbe4b3e4e4c93bee8131dbee8131dbee8131dbe53e0d13dd9038c3e9040403e9040403e9e22543e0000000093288fbef09e67beab6117be01e2963e3ec7b33dfb07283f42c6b23d6d8a083e9325463e2aef2f3de4056ebdccfa8fbe632c643e43aba4be3d5f453e2c0a6e3e3bab7dbd202298be697a71bd3ceaae3e1466af3e28b2583e542d373e2d499c3e4428d83e50b455be3d954abe595838be0eba363e33f7abbdf7685dbe79a9afbe001e4cbe43d6b9bdcf727ebef13959bd625f113d7d248fbe7e5bbd3d7e5bbd3d49b5a7be8d623e3e03d025be00000000d3a225be0000000000000000c469b93ec469b93e59ce5b3d000000007daf51be30db42be44d6b1bd8bfc673d5ae00bbec690873e8319b5bd5689173f6aa2c33ee891093fbdb0dc3e7bba833e997e3a3e0b56f23ed7164fbdf3ec02be9ad67dbe332059be57cea3be74d1b63dbaf02a3e330b13be80a35bbe80a804be883db23e79f58dbea83aeebd10308bbe3ee1d5bc2abec8bd7b9f1b3fbdb4dbbdfed79d3e000000004056c9
be5533f83d7a57c1bdfea61fbd3b3e5cbe7fc26cbd7fc26cbd35dc463e46cc46be769aadbec2e3743e079c503ea69580bda37958be16815cbe352381be280149bed743123ed90784be4d8d61be5156e83db1d69a3e03262abec13174be3953063e70170abefd76a4be0000000033acc33ecbc215be934c9b3efef2c73e0aafcc3e45b7343e00000000a781dbbd178f783ecb20913ea6bab6be00000000aac89fbed1c3bb3e0000000054907fbe4ce910be4ce910be4ce910be9f14b43ec4c963bec4c963bef6a989be98f96cbe000000002b5eacbe203ba83ead080a3fad080a3f057b9fbd637984be722fca3ed6d40abe970834beaa5ed43e528396bebccb17bd48fea3beb3144e3dcfd0673dc409f03e8d11b2bef1fdacbea28739be0e6b8ebea28f8dbdd276e8bed611ab3e52d823be7c0da53e0809afbe0000000050cea03eb9c03f3e0568acbd6aa4c33ee6b9993ee6b9993efa946a3efa946a3efa946a3edc72e23ddc72e23ddc72e23d000000002887acbd2887acbd00000000d46de43ed300843ee8cacf3dc24f36be0ffd003f7c6730be00000000bf49ac3e7b8fc4bd00000000dacfb0bddacfb0bddacfb0bd30680ebe0559f33c1de3eb3e867d9bbd867d9bbd867d9bbd02dfdd3e000000000000000000000000c1d71fbec1d71fbee21110bee21110be00000000765065bd765065bdb6d46dbeb6d46dbeef8d4fbeef8d4fbed15a673ed15a673e250384bd250384bd250384bd250384bd8b44d73ec99921bd000000000000000000000000f7e89c3efd97bb3e57c71cbe00000000871d13be46587cbe00000000687f66be9dce68be000000007fc2ba3e00000000000000004ee974be00000000fa449e3ed9ac24bed9ac24beae2f753eae2f753e0c0bcbbd0c0bcbbdd0e217be65b3993ef48cb63e2b3a8cbd38e1e3bdb21d99be4dde4ebe000000000000000000a28ebe00000000cf53a83eb4750d3eb4750d3e0000000000000000e7257d3e4993a4beab668f3efa85c7bdfa85c7bda9cf5f3ec56586bee78b9cbee78b9cbe00000000b6aab13eb6aab13ec0f3f3bdc0f3f3bd48e08dbe00000000000000001349a5bd9df009be00000000a40d19be3e65ab3ee6a1f03cda0f103f847163bea8d5b33e4539d93e83c199bedbc8313e6d5f15becd92203ecd92203eb041b83ee506c5bd7e2c8abed321b43e00000000c06ff5bdac7feebd00000000606060be606060be29761ebe00000000158fe8bc0293c73e000000005ee710be25e401bef655cd3e1cf2083fd6e4e7bd0946b93e0000000000000000f8598d3e0000000000000000e005493e63b982be7a3ea13e0000000098c48d3e0000000000000000000000000df039be0df039be000000001a27bbbd59bacabedd1ef0
3e4e4cc5bec2f58c3ea00e87be96f74ebd00000000e134953ec0802cbefc90b53ebe738c3ebe738c3eff7506be4c766fbe4c766fbe000000006c87053f000000000000000091d442be073166be073166be4088a3bef4ea8a3dc532983e817c993e000000000000000024098b3d0000000037bbac3e6e7240be6e7240be21e2b7bd8c95b23e8d739cbc8d739cbc796e753ee6b403bee6b403be0a6304be0a6304bea7aa34be1c4a353e1c4a353e1c4a353e4dad8ebea05d5ebe00000000513d923e97a81ebe4bec94bd4bec94bdcc24f83dcc24f83dc9578a3ed121fe3e17227fbe6daaf43e8a4c9b3e00000000c59fd83e033ae43e4af1cf3e00000000dce0adbef6c3b4be60c2003e000000004a13d13ecc2f773ef564d73e00000000a10ee63ef47756be00000000a853f1bd968a89bcd17e1bbed17e1bbe0000000000000000e92781bbe92781bb0852fbbd0852fbbd0852fbbd00000000000000007f1143be8711b6bd6bf10bbe809d4fbe58a3eb3ef8596c3ede2a60be3919af3e804707be090d4fbe0be673bea3ee3dbe00000000758ae73dd52b0e3e4c34f43e4e8d6fbed98984be870b843e0000000071c228bedca452be63b921be63b921bec87719bef4dfa4bebd3174bd0000000053fce9bd815819be46aaa63e8db58a3df6779c3eaa99483eaa99483e4d3d213e8c9588be63ed6f3e63ed6f3e0000000000000000e9cc8dbed270bc3ec0588a3e15bc2b3e15bc2b3e15bc2b3e06fc90be00000000000000006184a73e98849b3e93b280bea4bf83be0de915be0de915be0000000000000000623c3d3d000000002a2229be119c8f3e119c8f3e119c8f3ec0a30ebec0a30ebeaa0a46beaa0a46be5d4521be00000000799c713e799c713e7ad1c93eec9a6cbe01a8f73d9f8b39be41cbfd3eedbd9b3ed49cafbdd21ae9bdd21ae9bd4ff7c63ec7d8e5bc190fecbd190fecbdd71bc03e000000000000000005df3bbedcb099be00000000894f29be0000000017423fbe0a6655be9c5872be6794d93e817b15be5e21b8bd5e21b8bd6d9b01be6807b53e76638cbe60861ebe095489be9b2834be9b2834be49a796bd49a796bdabc852bea821dbbda821dbbd000000004f78b53b4f78b53b0091dc3e598157be5b06e6bc5b06e6bc00000000a7e33abd08d71cbe00000000000000001ce041be6ad686be7e9eb8bc808fe53ee69a87bdf9568c3ef9568c3e9f30573eafd720bef54442beb6fff73e4afd72be85c676be893732bebc393abe6d0d32be6d0d32be1577b63eb253e2be3a07153e00000000168941be168941be00000000000000007500663e4b7c913e93abbc3e9b58edbcb07a0ebe60fea3be000000009450c43e9450c43eb8e34fbeb8e34fbe99c51cbe25dbdc3e6f7c18
bef671c33ef671c33e8635e9bdd328913e72a04dbc33ed743e33ed743e0ba3773e0ba3773ec54e42beecd8c73e00000000e6920dbce6920dbce6920dbc80866a3e80866a3ecb489b3e52782fbe03547ebe5b6ba1be000000003d8f383ef21c993ec9a88bbe1287b63ee35891be13c8b53eeb40c33e068f8c3e8fb5753e7b0a96bd7b0a96bd39b4363c6517c13d6517c13d000000004315babc4723a3bd81541cbd741ed2bb741ed2bb741ed2bb0000000000000000dd6c9f3ec602b1bd7b9381becb893abec808603e86a97fbe000000003090cb3e000000000000000054ff38be377845bed1ce16be58a07dbd82dcecbd00000000a69211be7053d9bd21caf93ea21ac93eeb4262bc2bae3fbe74468a3d00000000d17dfd3e0000000000000000000000007a21a63e7a21a63e0000000012bea63e552e4bbe00000000dd7eb3bdf1939dbdf1939dbd000000000000000005f8c73c0000000000000000b641d4bdecec00be64ac1d3e2cba0ebebd0d9dbec211a1bc4f120ebe7aa019bed7edc8bda208f2bd060bcebd0000000000000000cf59603b2ee49c3b0000000009c239bcdf3e2bbeeea20fbe7a9c823c0619103f65270bbed86f4abe00000000afded6bdafded6bd87ddb8bd27544bbd27544bbdfc99b7be00000000706846be00000000c53e2ebeb637b4bd573122be00000000be2ab8bda4f1b63e25bb46beb9520fbe6ef1d53ecbacbd3e970ab23e00000000e258033f89a5fd3d00000000069bab3eb0b89a3e000000000000000064ea2dbd64ea2dbd0575943e0575943e0000000000000000000000004cff033e4cff033e4cff033e4cff033e000000007640f13e892bef3ed130d93e2ac031bed4430fbe0047f43eee5521be00000000000000000fda43bedd2833be895525be0000000000000000453c0abea56152bea48c85bd1f2dd2bd00000000a0e9dbbd0000000000000000319eaabdd0e1c0bd51fcaf3ec510d83e6b8dd63e604638be000000004a5699bd87142ebec923d3bec55a6ebecb5daf3ea06e343ea06e343ea3d76bbd44d9fabd00000000000000000000000076461dbe8ec857be6c728a3ea9247cbe00000000000000005c7d4cbe847746be231c94be5e248fbe93a335be622ea23e0b53f83c00000000168666be397013be016cc23ea48d15bea802a9bd0bea42be6ed3ec3e9659973e9659973e39994ebe6cfa77bea3a0ec3e341f9cbe4c3a5bbd4c3a5bbd187f2fbe000000002b669f3e00000000000000001af3cf3ecb6c84be000000001492bf3e1d6e62be907365be00000000d7c7c33e5345cdbdbb3a98be00000000000000000000000000000000000000005b6f5a3c5b6f5a3c2b6d50beda5af1bdda5af1bdda5af1bdc5780abe239982be6a8d42
be00000000ccf34ebd82b520bea36b1cbea36b1cbe888ab2bec00232beee4029be94cc393e94cc393e00000000e9c6863e3af4b3bd3af4b3bd3af4b3bd3af4b3bd3af4b3bdd3cfa13bfab5e4bdbdad42be1cd99d3e01ccab3e7f5e45be6e7b9abd233305bd233305bd73f80d3e73f80d3e73f80d3eb9427abe7737c03e528ae73ee0ce37bdc8f9e1bd000000000f8fc93e4dea29beafdc873ed88d0bbe58979fbe89634abe18bc0abe18bc0abe000000000000000000000000000000009390453d16d4d7bd16d4d7bdf87b15bedcef86bd39165abc00000000000000000df604bdbd2fdebdbd2fdebd561471beb4be2dbe00000000c21baabe0000000001e5a3bea541af3ed55c833ddf10163e000000000000000000000000d6d198be1ee1923e4a2238be4a2238be6df2763e6df2763e2935ecbd2935ecbd9583a23e000000000eca303ef03c603ed83ccebec134463e54079d3efc39c03e563727be563727be563727be1ce9a6be000000003e789abd00000000000000001e8bc73e000000003e63823c2e1956bea525343cfb27d73e3ec3b73efb24b03ebc5fcd3e579e4abe941e5dbd000000002a4e31be2525e0bd6c9fcb3ea51e45bede4692bdde4692bdb99af53e00000000b2a83ebe00000000000000003b4d98be1ba787bb1ba787bbd57a5fbdd57a5fbd00000000509aa03e509aa03e680671be90d410be90d410bec60fdfbdc60fdfbdf49c3dbdb6ca56be74186d3ef0365ebe9cfde9bdd76ca13e20ec39be20ec39be000000002c2da33ea7e499bdf86dcc3e371b7c3ef777ab3e65d3c43e9a6612be9a6612be62b0dc3e933b2cbecb591bbe000000008bb674bedd5aa13e0000000000000000f7796bbef7796bbe00000000000000000000000000000000000000001356a33c1dda803e00000000a805e6bd00000000aae6fabd23a480be1ae68ebe85214abe6b6893bee724e6bd3d1c45be629ad53e1f373fbe5e7636be000000005c2bcc3e654403be654403bea59c4a3ca59c4a3c971e97bd08bac93e2a20b73e67e3a83e67e3a83eb4b064bdaf2493beaf2493be44bf0ebef573aabe0000000092340ebe112e0e3e000000006b27e23e7334a03efd9831be00000000000000003480f4bedfd17dbe64c61f3e9ad3153e6c24bd3eeb77e73e86fbe1bd86fbe1bd49150ebe0000000097b0023ea1b6c63e0c1e97be0318b43e54a4c13eb14847bef2701bbe6d739bbd245b48bc707590bd47e686befbf821befbf821be16b6c83eb4c865bdff6aec3a217eb23e60f61ebeb64b3dbeb64b3dbeff0e9b3eff0e9b3e00000000950c06be950c06becf7aafbdcf7aafbd08f8debd1d7d0bbe0000000000000000ca572f3d0000000000000000c9a3083f91aec2bd00000000c1f4ef
(dump continued; condensed below — the multi-kilobyte hex payloads of the large dense weight constants are elided, and minus signs/punctuation were lost in extraction, so this is a best-effort transcription)

```mlir
// Large dense weight constants (payloads elided):
%cst   = std.constant dense<...> : tensor<80x1xf32>
%cst_0 = std.constant dense<...> : tensor<2x1xf32>
%cst_1 = std.constant dense<...> : tensor<6203x1xf32>
%cst_2 = std.constant dense<[0.137156427, 0.0727723241, 0.0427678488, 2.75064172e-4,
                             0.0233619846, 0.0394954272, 0.0791109725]> : tensor<7x1xf32>  // signs lost
%cst_3 = std.constant dense<0.131277829> : tensor<1xf32>  // sign lost
%cst_4 = std.constant dense<[]> : tensor<0xf32>
%cst_5 = std.constant dense<["lat", "long", "month", "price", "year"]> : tensor<5x!tf.string>
%cst_6 = std.constant dense<[]> : tensor<0x!tf.string>
%cst_7 = std.constant dense<["category_id", "description", "gender", "host_id", "size_id"]>
         : tensor<5x!tf.string>
// ... %cst_8 through %cst_21: small scalar/index constants (values 0, 1, 2, [1, 1], etc.;
// signs lost in extraction), plus %cst_14 : tensor<1x1xf32> and %cst_19 = constant unit (none).

// Parse the serialized tf.Example input into 5 sparse + 5 dense features:
%sparse_indices:5, %sparse_values:5, %sparse_shapes:5, %dense_values:5 =
    "tf.ParseExampleV2"(%arg0, %cst_6, %cst_7, %cst_5, %cst_6, %cst_4, ...)
    {num_sparse = 5 : i64, result_segment_sizes = dense<[5, 5, 5, 5, 0, 0]> : vector<6xi32>}

// Per sparse feature (this block is repeated four times, with hash tables
// hash_table_8d6f1b8e-..., hash_table_fc7c2e70-..., hash_table_b60d3bcd-...,
// hash_table_cb0918fe-...; one instance shown):
%4 = "tf.HashTableV2"() {key_dtype = i64, value_dtype = i64, shared_name = "hash_table_8d6f1b8e-..."}
%5 = "tf.LookupTableFindV2"(%4, %sparse_values#0, %cst_9)
// ... tfl.cast / tfl.strided_slice / tfl.pack / tf.SparseReshape / tfl.greater_equal /
//     tfl.where / tfl.gather / tfl.reduce_prod / tf.SparseFillEmptyRows / tfl.unique ...

// Embedding lookup + segment sum per feature; SparseSegmentSum has no TFLite
// builtin and is lowered as a custom op wrapping the TF kernel:
%76 = "tfl.gather"(%cst_2, %output) {axis = 0 : i32} : (tensor<7x1xf32>, tensor<?xi32>) -> tensor<?x1xf32>
%77 = "tfl.custom_tf"(%76, %idx, %21) ( {
        "tf.SparseSegmentSum"(...) {T = f32, Tidx = i32, Tsegmentids = i64}
      })
// ... tfl.shape / tfl.tile / tfl.zeros_like / tfl.select / tfl.reshape per feature ...

// Dense feature + bias, then combine everything:
%112 = "tfl.fully_connected"(%dense_values#3, %cst_14, %cst_19) {fused_activation_function = "NONE"}
%125 = "tfl.add_n"(%87, %99, %111, %112, %124)
%126 = tfl.add(%125, %cst_3) {fused_activation_function = "NONE"}
std.return %126 : tensor<?x1xf32>
// func attrs: sym_name = "main", exported_names = ["serving_default"],
//   arg0 {tf_saved_model.index_path = ["example"]}, result {tf_saved_model.index_path = ["prediction"]},
//   tf.entry_function {inputs = "serving_default_example:0", outputs = "StatefulPartitionedCall_1:0"}
```

Also, please include a link to the saved model or GraphDef/model zip.

**Failure details**
If the conversion is successful, but the generated model is wrong, state what is wrong:
- Produces wrong results and/or has lesser accuracy
- Produces correct results, but the model is slower than expected (model generated from old converter)

**RNN conversion support**
If converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title.

**Any other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
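For reference, the op the dump repeatedly lowers as a TFLite custom op is `tf.sparse.segment_sum`: rows of the embedding table selected by `indices` are summed into output rows given by sorted `segment_ids`. A minimal NumPy sketch of that semantics (the function name and sample data are illustrative, not from the model):

```python
import numpy as np

def sparse_segment_sum(data, indices, segment_ids):
    # Rows of `data` selected by `indices` are summed into the output row
    # named by the matching entry of `segment_ids` (assumed sorted, as
    # tf.sparse.segment_sum requires).
    num_segments = int(segment_ids[-1]) + 1 if len(segment_ids) else 0
    out = np.zeros((num_segments,) + data.shape[1:], dtype=data.dtype)
    for idx, seg in zip(indices, segment_ids):
        out[seg] += data[idx]
    return out

data = np.array([[1.0], [2.0], [3.0], [4.0]])  # toy "embedding table"
result = sparse_segment_sum(data, [0, 1, 3], [0, 0, 1])
print(result)  # rows 0 and 1 sum into segment 0; row 3 into segment 1
```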