Columns: repository (stringclasses, 156 values) · issue_title (stringlengths, 1–1.01k) · labels (stringclasses, 8 values) · body (stringlengths, 1–270k)
tensorflow/tensorflow
tensorflow-gpu 2.0 pip package name in the docs is outdated and wrong
Bug
URL(s) with the issue:

Description of issue (what needs changing): the current pip package that exists on PyPI is `tensorflow-gpu==2.0.0rc2`, but the documentation says `tensorflow-gpu==2.0.0-rc1`, which contains two typos (wrong separator and wrong RC number). The command to install it should be `pip install tensorflow-gpu==2.0.0rc2`; the document says `pip install tensorflow-gpu==2.0.0-rc1`, with the same two typos. This bug has existed for all release candidates.

Following the documented command produces a Python error:

```
Could not find a version that satisfies the requirement tensorflow-gpu (from versions: 0.12.1, 1.0.0, 1.0.1, 1.1.0rc1, 1.1.0rc2, 1.1.0, 1.2.0rc0, 1.2.0rc1, 1.2.0rc2, 1.2.0, 1.2.1, 1.3.0rc0, 1.3.0rc1, 1.3.0rc2, 1.3.0, 1.4.0rc0, 1.4.0rc1, 1.4.0, 1.4.1, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.5.1, 1.6.0rc0, 1.6.0rc1, 1.6.0, 1.7.0rc0, 1.7.0rc1, 1.7.0, 1.7.1, 1.8.0rc0, 1.8.0rc1, 1.8.0, 1.9.0rc0, 1.9.0rc1, 1.9.0rc2, 1.9.0, 1.10.0rc0, 1.10.0rc1, 1.10.0, 1.10.1, 1.11.0rc0, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.12.0rc0, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.12.2, 1.12.3, 1.13.0rc0, 1.13.0rc1, 1.13.0rc2, 1.13.1, 1.13.2, 1.14.0rc0, 1.14.0rc1, 1.14.0, 1.15.0rc0, 1.15.0rc1, 2.0.0a0, 2.0.0b0, 2.0.0b1, 2.0.0rc0, 2.0.0rc1, 2.0.0rc2)
```
tensorflow/tensorflow
tf.keras.layers.Input has undefined shape when setting sparse=True, making it impossible to use in a model
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.15.0rc1
- Python version: 3.5.2
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 10.1
- GPU model and memory: NVIDIA GeForce GTX 1050, 4 GB

**Describe the current behavior**
The tensor created by `tensorflow.keras.layers.Input(specified_shape, sparse=True)` has shape `(None,) * (1 + len(specified_shape))` and cannot be used as input to e.g. a `Dense` layer.

**Describe the expected behavior**
The tensor created by `tensorflow.keras.layers.Input(specified_shape, sparse=True)` has shape `(None,) + specified_shape` and can be used as input to e.g. a `Dense` layer. This is the default behaviour in Keras.

**Code to reproduce the issue**

```python
from tensorflow.keras.layers import Input, Dense

i = Input(shape=(1,), sparse=True)
o = Dense(1)(i)
```

**Other info / logs**

```
Traceback (most recent call last):
  File "test.py", line 7, in <module>
    o = Dense(1)(i)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 824, in __call__
    self._maybe_build(inputs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 2146, in _maybe_build
    self.build(input_shapes)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/layers/core.py", line 1009, in build
    raise ValueError('The last dimension of the inputs to `Dense` '
ValueError: The last dimension of the inputs to `Dense` should be defined. Found `None`.
```
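Why the error fires can be sketched without TensorFlow: a `Dense` layer must know the last input dimension to allocate its kernel. A minimal stand-in (the `dense_kernel_shape` helper is hypothetical, not a TF API — it only mimics the shape check in `Dense.build`):

```python
def dense_kernel_shape(input_shape, units):
    """Hypothetical stand-in for Dense.build(): the kernel is (last_dim, units)."""
    last_dim = input_shape[-1]
    if last_dim is None:
        # mirrors the ValueError raised in the traceback above
        raise ValueError(
            "The last dimension of the inputs to `Dense` should be defined. Found `None`.")
    return (last_dim, units)

# a dense Input(shape=(1,)) has static shape (None, 1): last dim known, build succeeds
print(dense_kernel_shape((None, 1), 4))   # (1, 4)

# the sparse Input in this bug has static shape (None, None): build raises ValueError
```

This is why losing the static shape on the sparse tensor makes every downstream layer that sizes its weights from the last dimension unusable.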
tensorflow/tensorflow
tf.nn.relu on NaN input returns zero on GPU
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, see the minimal example code section below
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13.1 and 1.14.0
- Python version: 3.6.5
- CUDA/cuDNN version: 10.0 / 7.6.3
- GPU model and memory: GeForce GTX 1080, 8 GB

**Describe the current behavior**
The behavior of `tf.nn.relu` when fed with NaN-valued input is inconsistent. If the input is a constant tensor, relu returns NaN. If the input is a "variable" tensor (like NaN wrapped into a `tf.Variable`, or multiplied by a random tensor), it returns zero. This behavior can only be observed on the GPU; on the CPU, relu consistently returns NaN. The behavior on the CPU is also consistent with other activation functions.

**Describe the expected behavior**
`tf.nn.relu` should handle NaN input from all sources consistently. To keep consistency with other activation functions, it should return NaN in all cases.

**Code to reproduce the issue**
The assertions below all pass; the second and third assertions (the `not` assertions) would be expected to fail:

```python
x1 = tf.nn.relu(np.nan)
x2 = tf.nn.relu(np.nan * tf.random_normal(shape))
x3 = tf.nn.relu(tf.Variable(np.nan))

with tf.device('/cpu:0'):
    x1_cpu = tf.nn.relu(np.nan)
    x2_cpu = tf.nn.relu(np.nan * tf.random_normal(shape))
    x3_cpu = tf.nn.relu(tf.Variable(np.nan))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    assert np.all(np.isnan(sess.run(x1)))
    assert not np.any(np.isnan(sess.run(x2)))  # should fail, but does not
    assert not np.any(np.isnan(sess.run(x3)))  # should fail, but does not
    assert np.all(np.allclose(sess.run(x2), 0.0))
    assert np.all(np.allclose(sess.run(x3), 0.0))
    assert np.all(np.isnan(sess.run(x1_cpu)))
    assert np.all(np.isnan(sess.run(x2_cpu)))
    assert np.all(np.isnan(sess.run(x3_cpu)))
```
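For reference, numpy's elementwise maximum propagates NaN, which is the behaviour the report expects relu (`max(x, 0)`) to follow on every device — a quick sketch of that expectation, independent of TensorFlow:

```python
import numpy as np

# NaN propagates through np.maximum, so a relu built on it keeps the NaN
x = np.array([np.nan, -1.0, 2.0], dtype=np.float32)
relu = np.maximum(x, 0.0)

print(np.isnan(relu[0]), relu[1], relu[2])   # True 0.0 2.0
```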
tensorflow/tensorflow
How to avoid TensorFlow GPU copying outputs to CPU
Bug
I use TensorFlow 1.7.0 on GPU. I call the `Session::Run` function; the input tensors are already GPU tensors, so the input and the forward computation are fine. But before output, I see that every output has been copied from GPU to CPU, and this is not what I hoped for. I just want the outputs to all be GPU tensors. How can I do that?

```cpp
// tensorflow/core/public/session.h
class Session {
  virtual Status Run(const std::vector<std::pair<string, Tensor> >& inputs,
                     const std::vector<string>& output_tensor_names,
                     const std::vector<string>& target_node_names,
                     std::vector<Tensor>* outputs) = 0;
};
```
tensorflow/tensorflow
About the FGSM implementation in the tutorial
Bug
URL(s) with the issue:

Description of issue (what needs changing): the FGSM implementation in the documentation seems to be incorrect.

Clear description: in the doc, `image_probs`, which is equal to the value of `model.predict(image)`, is used to calculate the perturbation:

```python
perturbations = create_adversarial_pattern(image, image_probs)
```

The `create_adversarial_pattern` function takes `input_image` and `input_label`, so the above code is the same as the code below:

```python
perturbations = create_adversarial_pattern(input_image=image,
                                           input_label=model.predict(image))
```

However, `input_label` must not be the predicted probabilities of the model but the one-hot-encoded correct label of `input_image`. I think that, in fact, the loss is calculated as

```python
prediction = pretrained_model(input_image)
loss = loss_object(input_label, prediction)
```

in the `create_adversarial_pattern` function. Sorry if I have misunderstood. Ref: "Explaining and Harnessing Adversarial Examples", p. 3.

Submit a pull request: if my understanding is correct I will submit a PR, but I do not have confidence that my understanding is correct yet.
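A minimal numpy sketch of the distinction the report makes (all names here are hypothetical stand-ins, not the tutorial's code; for a softmax + cross-entropy head, the gradient of the loss with respect to the logits is `probs - one_hot_label`):

```python
import numpy as np

num_classes = 3
true_class = 1

# FGSM must be driven by the one-hot *correct* label, not model.predict(image)
one_hot_label = np.eye(num_classes)[true_class]     # [0., 1., 0.]
predicted_probs = np.array([0.2, 0.7, 0.1])         # stand-in for model.predict(image)

# gradient of cross-entropy loss w.r.t. the logits of a softmax head
grad = predicted_probs - one_hot_label

eps = 0.1
perturbation = eps * np.sign(grad)                  # FGSM: x_adv = x + perturbation
print(perturbation)                                 # [ 0.1 -0.1  0.1]
```

Feeding `predicted_probs` in place of `one_hot_label` would zero out the very signal (the mismatch between prediction and truth) that the sign of the gradient is supposed to carry.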
tensorflow/tensorflow
TPUStrategy make_dataset_iterator does not work for a dataset created by tf.data.Dataset.from_generator
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab default environment
- TensorFlow installed from (source or binary): Colab default
- TensorFlow version (use command below): 1.14
- Python version: Python 3

**Describe the current behavior**
I first create a dataset with an iterator, then create a TPU iterator with the dataset. After that, the iterator is used as the second input of `TPUStrategy.experimental_run`. As I run the output of `experimental_run`, an error occurs:

```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1340       return self._call_tf_sessionrun(
-> 1341           options, feed_dict, fetch_list, target_list, run_metadata)
   1342

/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1428         self._session, options, feed_dict, fetch_list, target_list,
-> 1429         run_metadata)
   1430

AbortedError: Session 5c40387f44056164 is not found.

During handling of the above exception, another exception occurred:

AbortedError                              Traceback (most recent call last)
<ipython-input> in <module>()
     18 # custom training loop
     19 session.run(train_iterator_init)
---> 20 session.run(dist_train)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    948     try:
--> 949       result = self._run(None, fetches, feed_dict, options_ptr,
    950                          run_metadata_ptr)
    951       if run_metadata:
    952         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1171     if final_fetches or final_targets or (handle and feed_dict_tensor):
-> 1172       results = self._do_run(handle, final_targets, final_fetches,
   1173                              feed_dict_tensor, options, run_metadata)
   1174     else:
   1175       results = []

/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1348     if handle is None:
-> 1349       return self._do_call(_run_fn, feeds, fetches, targets, options,
   1350                            run_metadata)
   1351     else:
   1352       return self._do_call(_prun_fn, handle, feeds, fetches)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1368         pass
   1369       message = error_interpolation.interpolate(message, self._graph)
-> 1370       raise type(e)(node_def, op, message)
   1371
   1372   def _extend_graph(self):

AbortedError: Session 5c40387f44056164 is not found.
```

**Describe the expected behavior**
Run without error.

**Code to reproduce the issue**

```python
import os
import pprint
import tensorflow as tf
import numpy as np

resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
    'grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.contrib.distribute.initialize_tpu_system(resolver)
strategy = tf.contrib.distribute.TPUStrategy(resolver)

def my_generator():
    for i in range(100):
        x = np.random.rand(28, 28).astype(np.float32)
        y = np.zeros(1, dtype=np.int32)
        yield x, y

def train_fn(inputs):
    return inputs[0]

with strategy.scope():
    config = tf.ConfigProto()
    config.allow_soft_placement = True
    cluster_spec = resolver.cluster_spec()
    if cluster_spec:
        config.cluster_def.CopyFrom(cluster_spec.as_cluster_def())
    print('Starting training...')
    # Do all the computation inside a session (as opposed to doing eager mode)
    with tf.Session(target=resolver.master(), config=config) as session:
        # train_dataset = tf.data.Dataset.from_tensor_slices(
        #     (x_train, y_train)).batch(8, drop_remainder=True)  # works fine
        train_dataset = tf.data.Dataset.from_generator(
            my_generator, (tf.float32, tf.int32),
            ((28, 28), (1,))).batch(8, drop_remainder=True)      # fails
        train_iterator = strategy.make_dataset_iterator(train_dataset)
        train_iterator_init = train_iterator.initialize()
        dist_train = strategy.experimental_run(train_fn, train_iterator).values

        # custom training loop
        session.run(train_iterator_init)
        session.run(dist_train)
```

This snippet of code is a simplified version of this Colab notebook. The original `tf.data.Dataset.from_tensor_slices` works fine, but `tf.data.Dataset.from_generator` fails. I also provide a Colab notebook for reproducing this issue.
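The generator itself, pulled out so it can be sanity-checked without a TPU (a sketch; the `(1,)` label shape is my assumption from the report, not confirmed by it):

```python
import numpy as np

def my_generator():
    # yields (features, label) pairs, as in the report
    for _ in range(100):
        x = np.random.rand(28, 28).astype(np.float32)
        y = np.zeros(1, dtype=np.int32)
        yield x, y

x, y = next(my_generator())
print(x.shape, x.dtype, y.dtype)   # (28, 28) float32 int32
```

The dtypes and shapes printed here are what the `output_types`/`output_shapes` arguments of `Dataset.from_generator` have to declare, so the generator side of the repro is well-formed; the failure is specific to how `TPUStrategy.make_dataset_iterator` handles a generator-backed dataset.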
tensorflow/tensorflow
GPU never in use with the Java API
Bug
Hi, I followed the instructions in this and added my implementation with a BERT pb model, but the GPU is never used (image). I have a Tesla K80 and CUDA 9.0 (image) (image). Any suggestions?
tensorflow/tensorflow
nn
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

**URL(s) with the issue:**
Please provide a link to the documentation entry, for example: ...

**Description of issue (what needs changing):**

Clear description — for example, why should someone use this method? How is it useful?

Correct links — is the link to the source code correct?

Parameters defined — are all parameters defined and formatted correctly?

Returns defined — are return values defined?

Raises listed and defined — are the errors defined? For example, raises: ...

Usage example — is there a usage example?

Request visuals, if applicable — are there currently visuals? If not, will it clarify the content?

Submit a pull request? — are you planning to also submit a pull request to fix the issue? See the docs contributor guide and the docs style guide.
tensorflow/tensorflow
TF 1.14: network compiles locally but not on TPU, with error "InvalidArgumentError: Undeclared output of TPU computation"
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Debian
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.14
- Python version: 3
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory: TPU

I created a DenseNet which I successfully compiled locally with a GPU, but when I try to compile on a TPU device the following error appears:

```
Traceback (most recent call last):
  File "attention_dense.py", line 258, in <module>
    model.fit(get_training_dataset(), validation_data=get_validation_dataset(),
              initial_epoch=0, steps_per_epoch=steps_per_epoch,
              validation_steps=val_steps, epochs=epochs, verbose=1, callbacks=clbks)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/training.py", line 649, in fit
    validation_freq=validation_freq)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/training_distributed.py", line 128, in fit_distributed
    validation_freq=validation_freq)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/training_distributed.py", line 395, in _experimental_tpu_fit_loop
    callbacks._call_begin_hook(mode)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/callbacks.py", line 262, in _call_begin_hook
    self.on_train_begin()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/callbacks.py", line 378, in on_train_begin
    callback.on_train_begin(logs)
  File "/home/frank/lab/clr.py", line 122, in on_train_begin
    K.set_value(self.model.optimizer.lr, self.base_lr)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/backend.py", line 3038, in set_value
    get_session().run(assign_op, feed_dict={assign_placeholder: value})
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/backend.py", line 462, in get_session
    _initialize_variables(session)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/backend.py", line 879, in _initialize_variables
    [variables_module.is_variable_initialized(v) for v in candidate_vars])
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1173, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1350, in _do_run
    run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1370, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Undeclared output of TPU computation. A common cause of this error is variable initializers that depend on the TPU computation. Edge: {{node dense_1_2/re_lu_1/Relu}} (defined at attention_dense.py:258):0 -> {{node tf_op_layer_add/add}} (defined at attention_dense.py:258):0
```

The code is the following:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


def conv(inputs, kernel, filt, stride, dilation, pad='same'):
    x = layers.Conv2D(filters=filt, kernel_size=kernel, strides=stride,
                      dilation_rate=dilation, padding=pad,
                      kernel_regularizer=tf.keras.regularizers.l2(l=0.01))(inputs)
    return x


def conv_down(inputs, filt):
    x = conv(inputs, 3, filt, 2, 1)
    x = layers.BatchNormalization(axis=-1, fused=True)(x)
    x = layers.LeakyReLU(alpha=0.1)(x)
    return x


def conv_block(inputs, filt, stride=1, dilation=2, pad='same', bottleneck=True):
    x = layers.BatchNormalization(axis=-1, fused=True)(inputs)
    x = layers.LeakyReLU(alpha=0.1)(x)
    if bottleneck:
        x = conv(x, kernel=1, filt=filt * 4, dilation=dilation, stride=1, pad=pad)
        x = layers.BatchNormalization(axis=-1, fused=True)(x)
        x = layers.LeakyReLU(alpha=0.1)(x)
    x = conv(x, kernel=3, filt=filt, stride=stride, dilation=dilation, pad=pad)
    return x


def dense_block(x, filt, num_layers, bottleneck=True):
    x_list = [x]
    for i in range(num_layers):
        cb = conv_block(x, filt, dilation=2, bottleneck=True)
        x_list.append(cb)
        x = tf.keras.layers.concatenate([x, cb], axis=-1)
    x = attention(x)
    return x


def transition_block(inputs, filt, att=True):
    x = layers.BatchNormalization(axis=-1, fused=True)(inputs)
    x = layers.LeakyReLU(alpha=0.1)(x)
    x = conv(x, kernel=1, filt=filt, stride=1, dilation=2, pad='same')
    x = layers.AveragePooling2D((2, 2), strides=(2, 2))(x)
    if att:
        x = attention(x)
    return x


def attention(inputs):
    x = channel_att(inputs)
    x = spatial_att(x)
    return x


def channel_att(inputs, ratio=8):
    channel = inputs.get_shape()[-1]
    avg_pool = tf.keras.layers.GlobalAveragePooling2D()(inputs)
    avg_pool = tf.keras.layers.Reshape((1, 1, channel))(avg_pool)
    max_pool = tf.keras.layers.GlobalMaxPooling2D()(inputs)
    max_pool = tf.keras.layers.Reshape((1, 1, channel))(max_pool)
    mlp_0 = layers.Dense(units=channel // ratio, activation=layers.ReLU())
    mlp_1 = layers.Dense(units=channel, activation=layers.ReLU())
    avg = mlp_1(mlp_0(avg_pool))
    mx = mlp_1(mlp_0(max_pool))
    scale = keras.activations.sigmoid(avg + mx)
    return inputs * scale


def spatial_att(inputs, kernel=7):
    avg_pool = tf.math.reduce_mean(inputs, axis=3, keepdims=True)
    max_pool = tf.math.reduce_max(inputs, axis=3, keepdims=True)
    concat = tf.concat([avg_pool, max_pool], axis=3)
    concat = layers.Conv2D(filters=1, kernel_size=kernel, padding='same',
                           use_bias=False)(concat)
    concat = keras.activations.sigmoid(concat)
    return inputs * concat


def create_model():
    inputs = layers.Input(shape=(540, 540, 3))
    x = conv(inputs, kernel=3, filt=64, stride=1, dilation=2)
    for i in range(8):
        x = dense_block(x, filt=128, num_layers=3, bottleneck=True)
        x = transition_block(x, filt=128, att=True)
    x = dense_block(x, filt=128, num_layers=4, bottleneck=True)
    x = layers.BatchNormalization(axis=-1, fused=True)(x)
    x = conv(x, kernel=1, filt=45, stride=1, dilation=1)
    model = tf.keras.Model(inputs=inputs, outputs=x)
    model.compile(optimizer=keras.optimizers.SGD(), loss=custom_loss)
    return model


with tpu_strategy.scope():
    model = create_model()

model.fit(get_training_dataset(), validation_data=get_validation_dataset(),
          initial_epoch=0, steps_per_epoch=steps_per_epoch,
          validation_steps=val_steps, epochs=epochs, verbose=1, callbacks=clbks)
```
tensorflow/tensorflow
TF 1.14.0 training crashes with "Unimplemented: Conv2D" error; works fine in TF 1.13.2
Bug
**Environment**
- Ubuntu 16.04 (Docker base: tensorflow/tensorflow:1.14.0-gpu)
- tensor2tensor 1.14.0 (pip installed in the container)
- Python 2.7
- CUDA/cuDNN version: 10 / 7 (default from the Docker image)
- GPUs: tested on many, from 1080 to RTX Titan

**Issue**
Changes in TensorFlow have broken tensor2tensor Librispeech training runs. Librispeech training crashes with an Unimplemented Conv2D error:

```
(0) Unimplemented: The Conv2D op currently only supports the NHWC tensor format on the CPU. The op was given the format: NCHW
	 [[node conv2d_1]]
(1) Unimplemented: The Conv2D op currently only supports the NHWC tensor format on the CPU. The op was given the format: NCHW
	 [[node conv2d]]
```

**Expected behavior**
This works fine in earlier versions of TensorFlow, e.g. 1.13.2.

**Code to reproduce the issue**

```shell
# run tensorflow/tensorflow:1.14.0-gpu from Docker Hub via nvidia-docker
pip install tensorflow-hub
pip install tensor2tensor
apt-get update
apt-get install sox
t2t-trainer --problem=librispeech_clean_small --model=transformer \
  --output_dir=model_junk --data_dir=data --save_checkpoints_secs=1800 \
  --schedule=train --hparams_set=transformer_librispeech
```

Note: sox and the data generation are only needed once to prep the dataset.

**Other info / logs**
Related to closed issue #32017.
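For context, the two layouts named in the error differ only in axis order; a numpy sketch of the conversion (not a fix for the tensor2tensor pipeline, just an illustration of the format distinction):

```python
import numpy as np

# NCHW: batch, channels, height, width  (what the op was given)
x_nchw = np.arange(8 * 3 * 4 * 5, dtype=np.float32).reshape(8, 3, 4, 5)

# NHWC: batch, height, width, channels  (what the CPU Conv2D kernel supports)
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))

print(x_nhwc.shape)   # (8, 4, 5, 3)
```

The data is untouched; only the strides change, which is why the same model can run in either format as long as every op agrees on which one is in use.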
tensorflow/tensorflow
tf.function problem when slicing a tensor with a Variable
Bug
**System information**
- Have I written custom code: yes
- OS platform and distribution: Windows 10
- TensorFlow installed from: binary
- TensorFlow version: v2.0.0-rc0-101-gd2d2566eef 2.0.0-rc1
- Python version: 3.7.4
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the behavior**
Slicing a tensor using a slice index given by a `tf.Variable` does not work with `tf.function`. The problem does not occur when executing eagerly, or when slicing with a tensor which is not a variable.

**Code to reproduce the issue**
Executing the code

```python
import tensorflow as tf

pos = tf.Variable(0, dtype=tf.int32)

def ok():
    return tf.zeros((5,))[pos:3]

@tf.function
def also_ok():
    return tf.zeros((5,))[pos + 0:3]

@tf.function
def not_ok():
    return tf.zeros((5,))[pos:3]

tf.print(ok())
tf.print(also_ok())
tf.print(not_ok())
```

produces the output

```
[0 0 0]
[0 0 0]
```

followed by a `StagingError`. Details:

```
StagingError                              Traceback (most recent call last)
<ipython-input> in <module>
     12
     13 tf.print(also_ok())
---> 14 tf.print(not_ok())

C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\eager\def_function.py in __call__(self, *args, **kwds)
    455
    456       tracing_count = self._get_tracing_count()
--> 457       result = self._call(*args, **kwds)
    458       if tracing_count == self._get_tracing_count():
    459         self._call_counter.called_without_tracing()

C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\eager\def_function.py in _call(self, *args, **kwds)
    501       # This is the first call of __call__, so we have to initialize.
    502       initializer_map = object_identity.ObjectIdentityDictionary()
--> 503       self._initialize(args, kwds, add_initializers_to=initializer_map)
    504     finally:
    505       # At this point we know that the initialization is (1) complete or less

C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\eager\def_function.py in _initialize(self, args, kwds, add_initializers_to)
    406     self._concrete_stateful_fn = (
--> 407         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
    408             *args, **kwds))
    409
    410     def invalid_creator_scope(*unused_args, **unused_kwds):

C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\eager\function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   1846     if self.input_signature:
   1847       args, kwargs = None, None
-> 1848     graph_function, _, _ = self._maybe_define_function(args, kwargs)
   1849     return graph_function
   1850

C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\eager\function.py in _maybe_define_function(self, args, kwargs)
   2148       graph_function = self._function_cache.primary.get(cache_key, None)
   2149       if graph_function is None:
-> 2150         graph_function = self._create_graph_function(args, kwargs)
   2151         self._function_cache.primary[cache_key] = graph_function
   2152       return graph_function, args, kwargs

C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   2039             arg_names=arg_names,
   2040             override_flat_arg_shapes=override_flat_arg_shapes,
-> 2041             capture_by_value=self._capture_by_value),
   2042         self._function_attributes,
   2043         # Tell the ConcreteFunction to clean up its graph once it goes out of

C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    913         converted_func)
    914
--> 915       func_outputs = python_func(*func_args, **func_kwargs)
    916
    917       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\eager\def_function.py in wrapped_fn(*args, **kwds)
    356           # Wrapped allows AutoGraph to swap in a converted function. We give
    357           # the function a weak reference to itself to avoid a reference cycle.
--> 358           return weak_wrapped_fn().__wrapped__(*args, **kwds)
    359         weak_wrapped_fn = weakref.ref(wrapped_fn)
    360

C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\framework\func_graph.py in wrapper(*args, **kwargs)
    903           except Exception as e:  # pylint:disable=broad-except
    904             if hasattr(e, "ag_error_metadata"):
--> 905               raise e.ag_error_metadata.to_exception(e)
    906             else:
    907               raise

StagingError: in converted code:

    <ipython-input>:11 not_ok
        return tf.zeros((5,))[pos:3]
    C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\ops\array_ops.py:748 _slice_helper
        s.start != sys.maxsize
    C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\ops\variables.py:1111 __ne__
        return gen_math_ops.not_equal(self, other)
    C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py:7012 not_equal
        name=name)
    C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\framework\op_def_library.py:527 _apply_op_helper
        preferred_dtype=default_dtype)
    C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\framework\ops.py:1296 internal_convert_to_tensor
        ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
    C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\framework\tensor_conversion_registry.py:52 _default_conversion_function
        return constant_op.constant(value, dtype, name=name)
    C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\framework\constant_op.py:227 constant
        allow_broadcast=True)
    C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\framework\constant_op.py:265 _constant_impl
        allow_broadcast=allow_broadcast)
    C:\Users\Daniel\conda\envs\tf2\lib\site-packages\tensorflow_core\python\framework\tensor_util.py:450 make_tensor_proto
        nparray = np.array(values, dtype=np_dt)

    OverflowError: Python int too large to convert to C long
```
tensorflow/tensorflow
Argument formatting error in 1.15 keras Model.save docs; incorrect model format info; docstring mismatch
Bug
URL(s) with the issue: the `save` docs.

Description of issue (what needs changing): on the 1.15 branch, it appears that the docstring of `keras.Model.save` does not match the docs (the 1.15 content is different). I think the bullet points for the input arguments of the `save` method are not formatted, making them hard to read, although this may be poor formatting in the current source for the documentation, as the docstring appears to be correctly formatted. A side effect of this is that the docs also appear to conflict with the release notes, which say the default format is a TensorFlow SavedModel ('tf'); the docs, however, say that the 'tf' option is disabled, implying that only the h5 format can be saved, which is contradictory. An update of the docs from the seemingly correct docstring may fix this.

Correct links: the source code link appears to be correct, despite the docstring not matching that of the website docs.

Parameters defined: formatting issues, and also not updated.

Returns defined: depends on the model format.

Raises listed and defined: depends on the model format.

Submit a pull request: I don't quite understand why this has happened, so no.
tensorflow/tensorflow
TensorFlow Java API and tf.function signatures
Bug
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): CentOS 7
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.14
- Python version: 3.7
- Bazel version (if compiling from source): 25.1
- GCC/compiler version (if compiling from source): 7.3

**Describe the current behavior**
The current TensorFlow Java API (1.14 or 2.0) doesn't seem to be able to do inference on TensorFlow nodes created within a `tf.function` signature. On a related note, it was mentioned that Java is planned to stay session-centric. Ideally, a signature/inference-centric API would be a good way to solve this issue and would be a good addition to the existing Java API.

Concretely, here is the Python snippet defining a request and response (full Python code attached at the bottom of this ticket):

```python
@tf.function(input_signature=[tf.TensorSpec(shape=(None, None), name='serve')])
def serve(self, request):
    features = tf.identity(self.input_receiver(request), name='request')
    output = self.call(features)
    response = tf.identity(self.response_receiver(output), name='response')
    return response
```

After exporting my model to a SavedModelBundle and trying to do some predictions in Java via the `session().runner()` API, I am getting:

```
java.lang.IllegalArgumentException: No Operation named [request] in the Graph
  at org.tensorflow.Session$Runner.operationByName(Session.java:380)
  at org.tensorflow.Session$Runner.parseOutput(Session.java:394)
  at org.tensorflow.Session$Runner.feed(Session.java:131)
  ... 34 elided
```

where `request` is defined within a `tf.function`.

**Describe the expected behavior**
I would expect not to have a runtime failure at inference time on the JVM: either `operationByName` would recognize the `request:0` node, or another Java API would exist to fulfill my requirement.

**Code to reproduce the issue**

Python code (model creation):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


class MNISTModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.first_dense = layers.Dense(64, input_shape=(784,),
                                        activation='relu', name='dense_1')
        self.out = layers.Dense(10, activation='softmax', name='predictions')

    def call(self, inp):
        f_dense = self.first_dense(inp)
        s_dense = self.out(f_dense)
        return s_dense

    def input_receiver(self, inp):
        return inp

    def response_receiver(self, output):
        return output

    @tf.function(input_signature=[tf.TensorSpec(shape=(None, None), name='serve')])
    def serve(self, request):
        features = tf.identity(self.input_receiver(request), name='request')
        output = self.call(features)
        response = tf.identity(self.response_receiver(output), name='response')
        return response


model = MNISTModel()
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=keras.optimizers.RMSprop())
history = model.fit(x_train, y_train, batch_size=64, epochs=1)
keras.experimental.export_saved_model(model, local_path, serving_only=True)
```

JVM (Scala) inference code:

```scala
import org.tensorflow.{SavedModelBundle, Session, Tensor}
import java.nio.ByteBuffer
import java.lang.{Float => JFloat}

val savedModelBundle = SavedModelBundle.load(local_path, "serve")
val session = savedModelBundle.session()
val byteBuffer = ByteBuffer.allocate(784 * 4)
val tensor = Tensor.create(classOf[JFloat], Array(784L), byteBuffer)

session.runner()
  .feed("request:0", tensor)
  .fetch("output:0")
  .run()

tensor.close()
```

I am getting the following error:

```
java.lang.IllegalArgumentException: No Operation named [request] in the Graph
  at org.tensorflow.Session$Runner.operationByName(Session.java:380)
  at org.tensorflow.Session$Runner.parseOutput(Session.java:394)
  at org.tensorflow.Session$Runner.feed(Session.java:131)
  ... 34 elided
```

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
Broken link at top of page
Bug
The top of the page says: "For a collection of examples running in eager execution, see tensorflow/contrib/eager/python/examples." The link to it returns a 404 error.
tensorflow/tensorflow
Documentation about XLA backends is incomplete and inaccurate
Bug
Looking at scenario 3, it has a link to the class `StreamExecutor`, but the link leads to an empty class, with a comment suggesting that it has been removed.
tensorflow/tensorflow
Broken link
Bug
In the Swift early release, the link points to a page which shows a 404.
tensorflow/tensorflow
tf.distribute.MirroredStrategy leads to an infinite polling cycle with 4 GPUs
Bug
**System information**
- A physical tower with 4 GPUs running Ubuntu 18.04 over Kubernetes
- 256 GB of RAM
- TensorFlow: tested on tf-nightly-gpu-2.0-preview from 2.0.0.dev20190902 to 2.0.0.dev20190918
- Python 3.6.8
- CUDA 10.0
- cuDNN 7.6.3.30 (also tested with cuDNN 7.5.0.56)
- NVIDIA GTX 1080

`nvidia-smi`:

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.78       Driver Version: 410.78       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:02:00.0 Off |                  N/A |
| 53%   70C    P2    79W / 250W |  10889MiB / 11178MiB |      100%    Default |
|   1  GeForce GTX 108...  Off  | 00000000:03:00.0 Off |                  N/A |
| 52%   69C    P2    76W / 250W |  10893MiB / 11178MiB |      100%    Default |
|   2  GeForce GTX 108...  Off  | 00000000:82:00.0 Off |                  N/A |
| 48%   65C    P2    78W / 250W |  10889MiB / 11178MiB |      100%    Default |
|   3  GeForce GTX 108...  Off  | 00000000:83:00.0 Off |                  N/A |
| 45%   62C    P2    76W / 250W |  10893MiB / 11178MiB |      100%    Default |
+-------------------------------+----------------------+----------------------+
```

**Problem**
I run the following sample code:

```python
#!/usr/bin/env python3
import sys

import tensorflow as tf


def main():
    batch_size = 12
    feature_shape = (372, 558, 3)
    labels = 10
    sample = tf.random.uniform(feature_shape)

    def with_shape(t, shape):
        t = tf.squeeze(t)
        t.set_shape(shape)
        return t

    ds_train = (tf.data.Dataset.from_tensors(sample)
                .map(lambda s: (s, tf.ones(labels)))
                .repeat()
                .batch(batch_size)
                .map(lambda s, l: (with_shape(s, (batch_size,) + feature_shape),
                                   with_shape(l, (batch_size, labels)))))
    ds_val = (tf.data.Dataset.from_tensors(sample)
              .map(lambda s: (s, tf.ones(labels)))
              .repeat()
              .batch(batch_size)
              .take(10)
              .map(lambda s, l: (with_shape(s, (batch_size,) + feature_shape),
                                 with_shape(l, (batch_size, labels)))))

    with tf.distribute.MirroredStrategy().scope():
        model = tf.keras.applications.DenseNet121(
            weights=None, input_shape=feature_shape, classes=labels)
        model.build((batch_size,) + feature_shape)
        model.summary()

        optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001)
        cross_entropy = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)
        model.compile(optimizer=optimizer, loss=cross_entropy, metrics=['accuracy'])
        model.fit(ds_train, validation_data=ds_val, epochs=1, steps_per_epoch=100)


if __name__ == '__main__':
    sys.exit(main())
```

It outputs the following log and hangs for at least 9 hours (I killed it after that).

Log:

```
2019-09-19 11:22:16.548532: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (3): GeForce GTX 1080 Ti, Compute Capability 6.1
2019-09-19 11:22:16.553080: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1632] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:02:00.0
2019-09-19 11:22:16.554064: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1632] Found device 1 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:03:00.0
2019-09-19 11:22:16.555051: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1632] Found device 2 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:82:00.0
2019-09-19 11:22:16.555890: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1632] Found device 3 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:83:00.0
2019-09-19 11:22:16.556021: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-09-19 11:22:16.556046: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-09-19 11:22:16.556062: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-09-19 11:22:16.556079: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-09-19 11:22:16.556095: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-09-19 11:22:16.556111: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-09-19 11:22:16.556127: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-09-19 11:22:16.562745: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1760] Adding visible gpu devices: 0, 1, 2, 3
2019-09-19 11:22:16.562815: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-09-19 11:22:16.566634: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1173] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-19 11:22:16.566650: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1179]      0 1 2 3
2019-09-19 11:22:16.566657: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1192] 0:   N Y N N
2019-09-19 11:22:16.566661: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1192] 1:   Y N N N
2019-09-19 11:22:16.566666: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1192] 2:   N N N Y
2019-09-19 11:22:16.566670: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1192] 3:   N N Y N
2019-09-19 11:22:16.571630: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1318] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10470 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0, compute capability: 6.1)
2019-09-19 11:22:16.573706: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1318] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10470 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
2019-09-19 11:22:16.575382: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1318] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 10470 MB memory) -> physical GPU (device: 2, name: GeForce GTX 1080 Ti, pci bus id: 0000:82:00.0, compute capability: 6.1)
2019-09-19 11:22:16.576566: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1318] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 10470 MB memory) -> physical GPU (device: 3, name: GeForce GTX 1080 Ti, pci bus id: 0000:83:00.0, compute capability: 6.1)
WARNING:tensorflow:Entity <... at 0x7fe776f021e0> could not be transformed and will be executed as-is. Please report
```
this to the autograph team when file the bug set the verbosity to 10 on linux export autograph verbosity 10 and attach the full output cause expect exactly one node node find 2019 09 19 11 22 17 393146 I tensorflow core common runtime gpu gpu device cc 1632 find device 0 with property name geforce gtx 1080 ti major 6 minor 1 memoryclockrate ghz 1 62 pcibusid 0000 02 00 0 2019 09 19 11 22 17 394380 I tensorflow core common runtime gpu gpu device cc 1632 find device 1 with property name geforce gtx 1080 ti major 6 minor 1 memoryclockrate ghz 1 62 pcibusid 0000 03 00 0 2019 09 19 11 22 17 395221 I tensorflow core common runtime gpu gpu device cc 1632 find device 2 with property name geforce gtx 1080 ti major 6 minor 1 memoryclockrate ghz 1 62 pcibusid 0000 82 00 0 2019 09 19 11 22 17 396088 I tensorflow core common runtime gpu gpu device cc 1632 find device 3 with property name geforce gtx 1080 ti major 6 minor 1 memoryclockrate ghz 1 62 pcibusid 0000 83 00 0 2019 09 19 11 22 17 396168 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamiclibrary libcudart so 10 0 2019 09 19 11 22 17 396202 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamiclibrary libcubla so 10 0 2019 09 19 11 22 17 396218 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamiclibrary libcufft so 10 0 2019 09 19 11 22 17 396233 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamiclibrary libcurand so 10 0 2019 09 19 11 22 17 396263 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamiclibrary libcusolver so 10 0 2019 09 19 11 22 17 396278 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamiclibrary libcusparse so 10 0 2019 09 19 11 22 17 396293 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamiclibrary libcudnn so 7 2019 09 19 11 22 17 402450 I tensorflow 
core common runtime gpu gpu device cc 1760 add visible gpu device 0 1 2 3 2019 09 19 11 22 17 402599 I tensorflow core common runtime gpu gpu device cc 1173 device interconnect streamexecutorwith strength 1 edge matrix 2019 09 19 11 22 17 402611 I tensorflow core common runtime gpu gpu device cc 1179 0 1 2 3 2019 09 19 11 22 17 402619 I tensorflow core common runtime gpu gpu device cc 1192 0 n y n n 2019 09 19 11 22 17 402625 I tensorflow core common runtime gpu gpu device cc 1192 1 y n n n 2019 09 19 11 22 17 402631 I tensorflow core common runtime gpu gpu device cc 1192 2 n n n y 2019 09 19 11 22 17 402637 I tensorflow core common runtime gpu gpu device cc 1192 3 n n y n 2019 09 19 11 22 17 407338 I tensorflow core common runtime gpu gpu device cc 1318 create tensorflow device device gpu 0 with 10470 mb memory physical gpu device 0 name geforce gtx 1080 ti pci bus i d 0000 02 00 0 compute capability 6 1 2019 09 19 11 22 17 408425 I tensorflow core common runtime gpu gpu device cc 1318 create tensorflow device device gpu 1 with 10470 mb memory physical gpu device 1 name geforce gtx 1080 ti pci bus i d 0000 03 00 0 compute capability 6 1 2019 09 19 11 22 17 409430 I tensorflow core common runtime gpu gpu device cc 1318 create tensorflow device device gpu 2 with 10470 mb memory physical gpu device 2 name geforce gtx 1080 ti pci bus i d 0000 82 00 0 compute capability 6 1 2019 09 19 11 22 17 410293 I tensorflow core common runtime gpu gpu device cc 1318 create tensorflow device device gpu 3 with 10470 mb memory physical gpu device 3 name geforce gtx 1080 ti pci bus i d 0000 83 00 0 compute capability 6 1 model densenet121 layer type output shape param connect to input 1 inputlayer none 372 558 3 0 zero padding2d zeropadding2d none 378 564 3 0 input 1 0 0 conv1 conv conv2d none 186 279 64 9408 zero padding2d 0 0 conv1 bn batchnormalization none 186 279 64 256 conv1 conv 0 0 conv1 relu activation none 186 279 64 0 conv1 bn 0 0 zero padding2d 1 zeropadding2d none 188 
281 64 0 conv1 relu 0 0 pool1 maxpooling2d none 93 140 64 0 zero padding2d 1 0 0 conv2 block1 0 bn batchnormali none 93 140 64 256 pool1 0 0 conv2 block1 0 relu activation none 93 140 64 0 conv2 block1 0 bn 0 0 conv2 block1 1 conv conv2d none 93 140 128 8192 conv2 block1 0 relu 0 0 conv2 block1 1 bn batchnormali none 93 140 128 512 conv2 block1 1 conv 0 0 conv2 block1 1 relu activation none 93 140 128 0 conv2 block1 1 bn 0 0 conv2 block1 2 conv conv2d none 93 140 32 36864 conv2 block1 1 relu 0 0 conv2 block1 concat concatenat none 93 140 96 0 pool1 0 0 conv2 block1 2 conv 0 0 conv2 block2 0 bn batchnormali none 93 140 96 384 conv2 block1 concat 0 0 conv2 block2 0 relu activation none 93 140 96 0 conv2 block2 0 bn 0 0 conv2 block2 1 conv conv2d none 93 140 128 12288 conv2 block2 0 relu 0 0 conv2 block2 1 bn batchnormali none 93 140 128 512 conv2 block2 1 conv 0 0 conv2 block2 1 relu activation none 93 140 128 0 conv2 block2 1 bn 0 0 conv2 block2 2 conv conv2d none 93 140 32 36864 conv2 block2 1 relu 0 0 conv2 block2 concat concatenat none 93 140 128 0 conv2 block1 concat 0 0 conv2 block2 2 conv 0 0 conv2 block3 0 bn batchnormali none 93 140 128 512 conv2 block2 concat 0 0 conv2 block3 0 relu activation none 93 140 128 0 conv2 block3 0 bn 0 0 conv2 block3 1 conv conv2d none 93 140 128 16384 conv2 block3 0 relu 0 0 conv2 block3 1 bn batchnormali none 93 140 128 512 conv2 block3 1 conv 0 0 conv2 block3 1 relu activation none 93 140 128 0 conv2 block3 1 bn 0 0 conv2 block3 2 conv conv2d none 93 140 32 36864 conv2 block3 1 relu 0 0 conv2 block3 concat concatenat none 93 140 160 0 conv2 block2 concat 0 0 conv2 block3 2 conv 0 0 conv2 block4 0 bn batchnormali none 93 140 160 640 conv2 block3 concat 0 0 conv2 block4 0 relu activation none 93 140 160 0 conv2 block4 0 bn 0 0 conv2 block4 1 conv conv2d none 93 140 128 20480 conv2 block4 0 relu 0 0 conv2 block4 1 bn batchnormali none 93 140 128 512 conv2 block4 1 conv 0 0 conv2 block4 1 relu activation none 93 140 128 0 conv2 
block4 1 bn 0 0 conv2 block4 2 conv conv2d none 93 140 32 36864 conv2 block4 1 relu 0 0 conv2 block4 concat concatenat none 93 140 192 0 conv2 block3 concat 0 0 conv2 block4 2 conv 0 0 conv2 block5 0 bn batchnormali none 93 140 192 768 conv2 block4 concat 0 0 conv2 block5 0 relu activation none 93 140 192 0 conv2 block5 0 bn 0 0 conv2 block5 1 conv conv2d none 93 140 128 24576 conv2 block5 0 relu 0 0 conv2 block5 1 bn batchnormali none 93 140 128 512 conv2 block5 1 conv 0 0 conv2 block5 1 relu activation none 93 140 128 0 conv2 block5 1 bn 0 0 conv2 block5 2 conv conv2d none 93 140 32 36864 conv2 block5 1 relu 0 0 conv2 block5 concat concatenat none 93 140 224 0 conv2 block4 concat 0 0 conv2 block5 2 conv 0 0 conv2 block6 0 bn batchnormali none 93 140 224 896 conv2 block5 concat 0 0 conv2 block6 0 relu activation none 93 140 224 0 conv2 block6 0 bn 0 0 conv2 block6 1 conv conv2d none 93 140 128 28672 conv2 block6 0 relu 0 0 conv2 block6 1 bn batchnormali none 93 140 128 512 conv2 block6 1 conv 0 0 conv2 block6 1 relu activation none 93 140 128 0 conv2 block6 1 bn 0 0 conv2 block6 2 conv conv2d none 93 140 32 36864 conv2 block6 1 relu 0 0 conv2 block6 concat concatenat none 93 140 256 0 conv2 block5 concat 0 0 conv2 block6 2 conv 0 0 pool2 bn batchnormalization none 93 140 256 1024 conv2 block6 concat 0 0 pool2 relu activation none 93 140 256 0 pool2 bn 0 0 pool2 conv conv2d none 93 140 128 32768 pool2 relu 0 0 pool2 pool averagepooling2d none 46 70 128 0 pool2 conv 0 0 conv3 block1 0 bn batchnormali none 46 70 128 512 pool2 pool 0 0 conv3 block1 0 relu activation none 46 70 128 0 conv3 block1 0 bn 0 0 conv3 block1 1 conv conv2d none 46 70 128 16384 conv3 block1 0 relu 0 0 conv3 block1 1 bn batchnormali none 46 70 128 512 conv3 block1 1 conv 0 0 conv3 block1 1 relu activation none 46 70 128 0 conv3 block1 1 bn 0 0 conv3 block1 2 conv conv2d none 46 70 32 36864 conv3 block1 1 relu 0 0 conv3 block1 concat concatenat none 46 70 160 0 pool2 pool 0 0 conv3 block1 2 conv 
0 0 conv3 block2 0 bn batchnormali none 46 70 160 640 conv3 block1 concat 0 0 conv3 block2 0 relu activation none 46 70 160 0 conv3 block2 0 bn 0 0 conv3 block2 1 conv conv2d none 46 70 128 20480 conv3 block2 0 relu 0 0 conv3 block2 1 bn batchnormali none 46 70 128 512 conv3 block2 1 conv 0 0 conv3 block2 1 relu activation none 46 70 128 0 conv3 block2 1 bn 0 0 conv3 block2 2 conv conv2d none 46 70 32 36864 conv3 block2 1 relu 0 0 conv3 block2 concat concatenat none 46 70 192 0 conv3 block1 concat 0 0 conv3 block2 2 conv 0 0 conv3 block3 0 bn batchnormali none 46 70 192 768 conv3 block2 concat 0 0 conv3 block3 0 relu activation none 46 70 192 0 conv3 block3 0 bn 0 0 conv3 block3 1 conv conv2d none 46 70 128 24576 conv3 block3 0 relu 0 0 conv3 block3 1 bn batchnormali none 46 70 128 512 conv3 block3 1 conv 0 0 conv3 block3 1 relu activation none 46 70 128 0 conv3 block3 1 bn 0 0 conv3 block3 2 conv conv2d none 46 70 32 36864 conv3 block3 1 relu 0 0 conv3 block3 concat concatenat none 46 70 224 0 conv3 block2 concat 0 0 conv3 block3 2 conv 0 0 conv3 block4 0 bn batchnormali none 46 70 224 896 conv3 block3 concat 0 0 conv3 block4 0 relu activation none 46 70 224 0 conv3 block4 0 bn 0 0 conv3 block4 1 conv conv2d none 46 70 128 28672 conv3 block4 0 relu 0 0 conv3 block4 1 bn batchnormali none 46 70 128 512 conv3 block4 1 conv 0 0 conv3 block4 1 relu activation none 46 70 128 0 conv3 block4 1 bn 0 0 conv3 block4 2 conv conv2d none 46 70 32 36864 conv3 block4 1 relu 0 0 conv3 block4 concat concatenat none 46 70 256 0 conv3 block3 concat 0 0 conv3 block4 2 conv 0 0 conv3 block5 0 bn batchnormali none 46 70 256 1024 conv3 block4 concat 0 0 conv3 block5 0 relu activation none 46 70 256 0 conv3 block5 0 bn 0 0 conv3 block5 1 conv conv2d none 46 70 128 32768 conv3 block5 0 relu 0 0 conv3 block5 1 bn batchnormali none 46 70 128 512 conv3 block5 1 conv 0 0 conv3 block5 1 relu activation none 46 70 128 0 conv3 block5 1 bn 0 0 conv3 block5 2 conv conv2d none 46 70 32 36864 conv3 
block5 1 relu 0 0 conv3 block5 concat concatenat none 46 70 288 0 conv3 block4 concat 0 0 conv3 block5 2 conv 0 0 conv3 block6 0 bn batchnormali none 46 70 288 1152 conv3 block5 concat 0 0 conv3 block6 0 relu activation none 46 70 288 0 conv3 block6 0 bn 0 0 conv3 block6 1 conv conv2d none 46 70 128 36864 conv3 block6 0 relu 0 0 conv3 block6 1 bn batchnormali none 46 70 128 512 conv3 block6 1 conv 0 0 conv3 block6 1 relu activation none 46 70 128 0 conv3 block6 1 bn 0 0 conv3 block6 2 conv conv2d none 46 70 32 36864 conv3 block6 1 relu 0 0 conv3 block6 concat concatenat none 46 70 320 0 conv3 block5 concat 0 0 conv3 block6 2 conv 0 0 conv3 block7 0 bn batchnormali none 46 70 320 1280 conv3 block6 concat 0 0 conv3 block7 0 relu activation none 46 70 320 0 conv3 block7 0 bn 0 0 conv3 block7 1 conv conv2d none 46 70 128 40960 conv3 block7 0 relu 0 0 conv3 block7 1 bn batchnormali none 46 70 128 512 conv3 block7 1 conv 0 0 conv3 block7 1 relu activation none 46 70 128 0 conv3 block7 1 bn 0 0 conv3 block7 2 conv conv2d none 46 70 32 36864 conv3 block7 1 relu 0 0 conv3 block7 concat concatenat none 46 70 352 0 conv3 block6 concat 0 0 conv3 block7 2 conv 0 0 conv3 block8 0 bn batchnormali none 46 70 352 1408 conv3 block7 concat 0 0 conv3 block8 0 relu activation none 46 70 352 0 conv3 block8 0 bn 0 0 conv3 block8 1 conv conv2d none 46 70 128 45056 conv3 block8 0 relu 0 0 conv3 block8 1 bn batchnormali none 46 70 128 512 conv3 block8 1 conv 0 0 conv3 block8 1 relu activation none 46 70 128 0 conv3 block8 1 bn 0 0 conv3 block8 2 conv conv2d none 46 70 32 36864 conv3 block8 1 relu 0 0 conv3 block8 concat concatenat none 46 70 384 0 conv3 block7 concat 0 0 conv3 block8 2 conv 0 0 conv3 block9 0 bn batchnormali none 46 70 384 1536 conv3 block8 concat 0 0 conv3 block9 0 relu activation none 46 70 384 0 conv3 block9 0 bn 0 0 conv3 block9 1 conv conv2d none 46 70 128 49152 conv3 block9 0 relu 0 0 conv3 block9 1 bn batchnormali none 46 70 128 512 conv3 block9 1 conv 0 0 conv3 
block9 1 relu activation none 46 70 128 0 conv3 block9 1 bn 0 0 conv3 block9 2 conv conv2d none 46 70 32 36864 conv3 block9 1 relu 0 0 conv3 block9 concat concatenat none 46 70 416 0 conv3 block8 concat 0 0 conv3 block9 2 conv 0 0 conv3 block10 0 bn batchnormal none 46 70 416 1664 conv3 block9 concat 0 0 conv3 block10 0 relu activatio none 46 70 416 0 conv3 block10 0 bn 0 0 conv3 block10 1 conv conv2d none 46 70 128 53248 conv3 block10 0 relu 0 0 conv3 block10 1 bn batchnormal none 46 70 128 512 conv3 block10 1 conv 0 0 conv3 block10 1 relu activatio none 46 70 128 0 conv3 block10 1 bn 0 0 conv3 block10 2 conv conv2d none 46 70 32 36864 conv3 block10 1 relu 0 0 conv3 block10 concat concatena none 46 70 448 0 conv3 block9 concat 0 0 conv3 block10 2 conv 0 0 conv3 block11 0 bn batchnormal none 46 70 448 1792 conv3 block10 concat 0 0 conv3 block11 0 relu activatio none 46 70 448 0 conv3 block11 0 bn 0 0 conv3 block11 1 conv conv2d none 46 70 128 57344 conv3 block11 0 relu 0 0 conv3 block11 1 bn batchnormal none 46 70 128 512 conv3 block11 1 conv 0 0 conv3 block11 1 relu activatio none 46 70 128 0 conv3 block11 1 bn 0 0 conv3 block11 2 conv conv2d none 46 70 32 36864 conv3 block11 1 relu 0 0 conv3 block11 concat concatena none 46 70 480 0 conv3 block10 concat 0 0 conv3 block11 2 conv 0 0 conv3 block12 0 bn batchnormal none 46 70 480 1920 conv3 block11 concat 0 0 conv3 block12 0 relu activatio none 46 70 480 0 conv3 block12 0 bn 0 0 conv3 block12 1 conv conv2d none 46 70 128 61440 conv3 block12 0 relu 0 0 conv3 block12 1 bn batchnormal none 46 70 128 512 conv3 block12 1 conv 0 0 conv3 block12 1 relu activatio none 46 70 128 0 conv3 block12 1 bn 0 0 conv3 block12 2 conv conv2d none 46 70 32 36864 conv3 block12 1 relu 0 0 conv3 block12 concat concatena none 46 70 512 0 conv3 block11 concat 0 0 conv3 block12 2 conv 0 0 pool3 bn batchnormalization none 46 70 512 2048 conv3 block12 concat 0 0 pool3 relu activation none 46 70 512 0 pool3 bn 0 0 pool3 conv conv2d none 46 70 
256 131072 pool3 relu 0 0 pool3 pool averagepooling2d none 23 35 256 0 pool3 conv 0 0 conv4 block1 0 bn batchnormali none 23 35 256 1024 pool3 pool 0 0 conv4 block1 0 relu activation none 23 35 256 0 conv4 block1 0 bn 0 0 conv4 block1 1 conv conv2d none 23 35 128 32768 conv4 block1 0 relu 0 0 conv4 block1 1 bn batchnormali none 23 35 128 512 conv4 block1 1 conv 0 0 conv4 block1 1 relu activation none 23 35 128 0 conv4 block1 1 bn 0 0 conv4 block1 2 conv conv2d none 23 35 32 36864 conv4 block1 1 relu 0 0 conv4 block1 concat concatenat none 23 35 288 0 pool3 pool 0 0 conv4 block1 2 conv 0 0 conv4 block2 0 bn batchnormali none 23 35 288 1152 conv4 block1 concat 0 0 conv4 block2 0 relu activation none 23 35 288 0 conv4 block2 0 bn 0 0 conv4 block2 1 conv conv2d none 23 35 128 36864 conv4 block2 0 relu 0 0 conv4 block2 1 bn batchnormali none 23 35 128 512 conv4 block2 1 conv 0 0 conv4 block2 1 relu activation none 23 35 128 0 conv4 block2 1 bn 0 0 conv4 block2 2 conv conv2d none 23 35 32 36864 conv4 block2 1 relu 0 0 conv4 block2 concat concatenat none 23 35 320 0 conv4 block1 concat 0 0 conv4 block2 2 conv 0 0 conv4 block3 0 bn batchnormali none 23 35 320 1280 conv4 block2 concat 0 0 conv4 block3 0 relu activation none 23 35 320 0 conv4 block3 0 bn 0 0 conv4 block3 1 conv conv2d none 23 35 128 40960 conv4 block3 0 relu 0 0 conv4 block3 1 bn batchnormali none 23 35 128 512 conv4 block3 1 conv 0 0 conv4 block3 1 relu activation none 23 35 128 0 conv4 block3 1 bn 0 0 conv4 block3 2 conv conv2d none 23 35 32 36864 conv4 block3 1 relu 0 0 conv4 block3 concat concatenat none 23 35 352 0 conv4 block2 concat 0 0 conv4 block3 2 conv 0 0 conv4 block4 0 bn batchnormali none 23 35 352 1408 conv4 block3 concat 0 0 conv4 block4 0 relu activation none 23 35 352 0 conv4 block4 0 bn 0 0 conv4 block4 1 conv conv2d none 23 35 128 45056 conv4 block4 0 relu 0 0 conv4 block4 1 bn batchnormali none 23 35 128 512 conv4 block4 1 conv 0 0 conv4 block4 1 relu activation none 23 35 128 0 conv4 
block4 1 bn 0 0 conv4 block4 2 conv conv2d none 23 35 32 36864 conv4 block4 1 relu 0 0 conv4 block4 concat concatenat none 23 35 384 0 conv4 block3 concat 0 0 conv4 block4 2 conv 0 0 conv4 block5 0 bn batchnormali none 23 35 384 1536 conv4 block4 concat 0 0 conv4 block5 0 relu activation none 23 35 384 0 conv4 block5 0 bn 0 0 conv4 block5 1 conv conv2d none 23 35 128 49152 conv4 block5 0 relu 0 0 conv4 block5 1 bn batchnormali none 23 35 128 512 conv4 block5 1 conv 0 0 conv4 block5 1 relu activation none 23 35 128 0 conv4 block5 1 bn 0 0 conv4 block5 2 conv conv2d none 23 35 32 36864 conv4 block5 1 relu 0 0 conv4 block5 concat concatenat none 23 35 416 0 conv4 block4 concat 0 0 conv4 block5 2 conv 0 0 conv4 block6 0 bn batchnormali none 23 35 416 1664 conv4 block5 concat 0 0 conv4 block6 0 relu activation none 23 35 416 0 conv4 block6 0 bn 0 0 conv4 block6 1 conv conv2d none 23 35 128 53248 conv4 block6 0 relu 0 0 conv4 block6 1 bn batchnormali none 23 35 128 512 conv4 block6 1 conv 0 0 conv4 block6 1 relu activation none 23 35 128 0 conv4 block6 1 bn 0 0 conv4 block6 2 conv conv2d none 23 35 32 36864 conv4 block6 1 relu 0 0 conv4 block6 concat concatenat none 23 35 448 0 conv4 block5 concat 0 0 conv4 block6 2 conv 0 0 conv4 block7 0 bn batchnormali none 23 35 448 1792 conv4 block6 concat 0 0 conv4 block7 0 relu activation none 23 35 448 0 conv4 block7 0 bn 0 0 conv4 block7 1 conv conv2d none 23 35 128 57344 conv4 block7 0 relu 0 0 conv4 block7 1 bn batchnormali none 23 35 128 512 conv4 block7 1 conv 0 0 conv4 block7 1 relu activation none 23 35 128 0 conv4 block7 1 bn 0 0 conv4 block7 2 conv conv2d none 23 35 32 36864 conv4 block7 1 relu 0 0 conv4 block7 concat concatenat none 23 35 480 0 conv4 block6 concat 0 0 conv4 block7 2 conv 0 0 conv4 block8 0 bn batchnormali none 23 35 480 1920 conv4 block7 concat 0 0 conv4 block8 0 relu activation none 23 35 480 0 conv4 block8 0 bn 0 0 conv4 block8 1 conv conv2d none 23 35 128 61440 conv4 block8 0 relu 0 0 conv4 block8 1 
bn batchnormali none 23 35 128 512 conv4 block8 1 conv 0 0 conv4 block8 1 relu activation none 23 35 128 0 conv4 block8 1 bn 0 0 conv4 block8 2 conv conv2d none 23 35 32 36864 conv4 block8 1 relu 0 0 conv4 block8 concat concatenat none 23 35 512 0 conv4 block7 concat 0 0 conv4 block8 2 conv 0 0 conv4 block9 0 bn batchnormali none 23 35 512 2048 conv4 block8 concat 0 0 conv4 block9 0 relu activation none 23 35 512 0 conv4 block9 0 bn 0 0 conv4 block9 1 conv conv2d none 23 35 128 65536 conv4 block9 0 relu 0 0 conv4 block9 1 bn batchnormali none 23 35 128 512 conv4 block9 1 conv 0 0 conv4 block9 1 relu activation none 23 35 128 0 conv4 block9 1 bn 0 0 conv4 block9 2 conv conv2d none 23 35 32 36864 conv4 block9 1 relu 0 0 conv4 block9 concat concatenat none 23 35 544 0 conv4 block8 concat 0 0 conv4 block9 2 conv 0 0 conv4 block10 0 bn batchnormal none 23 35 544 2176 conv4 block9 concat 0 0 conv4 block10 0 relu activatio none 23 35 544 0 conv4 block10 0 bn 0 0 conv4 block10 1 conv conv2d none 23 35 128 69632 conv4 block10 0 relu 0 0 conv4 block10 1 bn batchnormal none 23 35 128 512 conv4 block10 1 conv 0 0 conv4 block10 1 relu activatio none 23 35 128 0 conv4 block10 1 bn 0 0 conv4 block10 2 conv conv2d none 23 35 32 36864 conv4 block10 1 relu 0 0 conv4 block10 concat concatena none 23 35 576 0 conv4 block9 concat 0 0 conv4 block10 2 conv 0 0 conv4 block11 0 bn batchnormal none 23 35 576 2304 conv4 block10 concat 0 0 conv4 block11 0 relu activatio none 23 35 576 0 conv4 block11 0 bn 0 0 conv4 block11 1 conv conv2d none 23 35 128 73728 conv4 block11 0 relu 0 0 conv4 block11 1 bn batchnormal none 23 35 128 512 conv4 block11 1 conv 0 0 conv4 block11 1 relu activatio none 23 35 128 0 conv4 block11 1 bn 0 0 conv4 block11 2 conv conv2d none 23 35 32 36864 conv4 block11 1 relu 0 0 conv4 block11 concat concatena none 23 35 608 0 conv4 block10 concat 0 0 conv4 block11 2 conv 0 0 conv4 block12 0 bn batchnormal none 23 35 608 2432 conv4 block11 concat 0 0 conv4 block12 0 relu 
activatio none 23 35 608 0 conv4 block12 0 bn 0 0 conv4 block12 1 conv conv2d none 23 35 128 77824 conv4 block12 0 relu 0 0 conv4 block12 1 bn batchnormal none 23 35 128 512 conv4 block12 1 conv 0 0 conv4 block12 1 relu activatio none 23 35 128 0 conv4 block12 1 bn 0 0 conv4 block12 2 conv conv2d none 23 35 32 36864 conv4 block12 1 relu 0 0 conv4 block12 concat concatena none 23 35 640 0 conv4 block11 concat 0 0 conv4 block12 2 conv 0 0 conv4 block13 0 bn batchnormal none 23 35 640 2560 conv4 block12 concat 0 0 conv4 block13 0 relu activatio none 23 35 640 0 conv4 block13 0 bn 0 0 conv4 block13 1 conv conv2d none 23 35 128 81920 conv4 block13 0 relu 0 0 conv4 block13 1 bn batchnormal none 23 35 128 512 conv4 block13 1 conv 0 0 conv4 block13 1 relu activatio none 23 35 128 0 conv4 block13 1 bn 0 0 conv4 block13 2 conv conv2d none 23 35 32 36864 conv4 block13 1 relu 0 0 conv4 block13 concat concatena none 23 35 672 0 conv4 block12 concat 0 0 conv4 block13 2 conv 0 0 conv4 block14 0 bn batchnormal none 23 35 672 2688 conv4 block13 concat 0 0 conv4 block14 0 relu activatio none 23 35 672 0 conv4 block14 0 bn 0 0 conv4 block14 1 conv conv2d none 23 35 128 86016 conv4 block14 0 relu 0 0 conv4 block14 1 bn batchnormal none 23 35 128 512 conv4 block14 1 conv 0 0 conv4 block14 1 relu activatio none 23 35 128 0 conv4 block14 1 bn 0 0 conv4 block14 2 conv conv2d none 23 35 32 36864 conv4 block14 1 relu 0 0 conv4 block14 concat concatena none 23 35 704 0 conv4 block13 concat 0 0 conv4 block14 2 conv 0 0 conv4 block15 0 bn batchnormal none 23 35 704 2816 conv4 block14 concat 0 0 conv4 block15 0 relu activatio none 23 35 704 0 conv4 block15 0 bn 0 0 conv4 block15 1 conv conv2d none 23 35 128 90112 conv4 block15 0 relu 0 0 conv4 block15 1 bn batchnormal none 23 35 128 512 conv4 block15 1 conv 0 0 conv4 block15 1 relu activatio none 23 35 128 0 conv4 block15 1 bn 0 0 conv4 block15 2 conv conv2d none 23 35 32 36864 conv4 block15 1 relu 0 0 conv4 block15 concat concatena none 23 35 
736 0 conv4 block14 concat 0 0 conv4 block15 2 conv 0 0 conv4 block16 0 bn batchnormal none 23 35 736 2944 conv4 block15 concat 0 0 conv4 block16 0 relu activatio none 23 35 736 0 conv4 block16 0 bn 0 0 conv4 block16 1 conv conv2d none 23 35 128 94208 conv4 block16 0 relu 0 0 conv4 block16 1 bn batchnormal none 23 35 128 512 conv4 block16 1 conv 0 0 conv4 block16 1 relu activatio none 23 35 128 0 conv4 block16 1 bn 0 0 conv4 block16 2 conv conv2d none 23 35 32 36864 conv4 block16 1 relu 0 0 conv4 block16 concat concatena none 23 35 768 0 conv4 block15 concat 0 0 conv4 block16 2 conv 0 0 conv4 block17 0 bn batchnormal none 23 35 768 3072 conv4 block16 concat 0 0 conv4 block17 0 relu activatio none 23 35 768 0 conv4 block17 0 bn 0 0 conv4 block17 1 conv conv2d none 23 35 128 98304 conv4 block17 0 relu 0 0 conv4 block17 1 bn batchnormal none 23 35 128 512 conv4 block17 1 conv 0 0 conv4 block17 1 relu activatio none 23 35 128 0 conv4 block17 1 bn 0 0 conv4 block17 2 conv conv2d none 23 35 32 36864 conv4 block17 1 relu 0 0 conv4 block17 concat concatena none 23 35 800 0 conv4 block16 concat 0 0 conv4 block17 2 conv 0 0 conv4 block18 0 bn batchnormal none 23 35 800 3200 conv4 block17 concat 0 0 conv4 block18 0 relu activatio none 23 35 800 0 conv4 block18 0 bn 0 0 conv4 block18 1 conv conv2d none 23 35 128 102400 conv4 block18 0 relu 0 0 conv4 block18 1 bn batchnormal none 23 35 128 512 conv4 block18 1 conv 0 0 conv4 block18 1 relu activatio none 23 35 128 0 conv4 block18 1 bn 0 0 conv4 block18 2 conv conv2d none 23 35 32 36864 conv4 block18 1 relu 0 0 conv4 block18 concat concatena none 23 35 832 0 conv4 block17 concat 0 0 conv4 block18 2 conv 0 0 conv4 block19 0 bn batchnormal none 23 35 832 3328 conv4 block18 concat 0 0 conv4 block19 0 relu activatio none 23 35 832 0 conv4 block19 0 bn 0 0 conv4 block19 1 conv conv2d none 23 35 128 106496 conv4 block19 0 relu 0 0 conv4 block19 1 bn batchnormal none 23 35 128 512 conv4 block19 1 conv 0 0 conv4 block19 1 relu activatio 
none 23 35 128 0 conv4 block19 1 bn 0 0 conv4 block19 2 conv conv2d none 23 35 32 36864 conv4 block19 1 relu 0 0 conv4 block19 concat concatena none 23 35 864 0 conv4 block18 concat 0 0 conv4 block19 2 conv 0 0 conv4 block20 0 bn batchnormal none 23 35 864 3456 conv4 block19 concat 0 0 conv4 block20 0 relu activatio none 23 35 864 0 conv4 block20 0 bn 0 0 conv4 block20 1 conv conv2d none 23 35 128 110592 conv4 block20 0 relu 0 0 conv4 block20 1 bn batchnormal none 23 35 128 512 conv4 block20 1 conv 0 0 conv4 block20 1 relu activatio none 23 35 128 0 conv4 block20 1 bn 0 0 conv4 block20 2 conv conv2d none 23 35 32 36864 conv4 block20 1 relu 0 0 conv4 block20 concat concatena none 23 35 896 0 conv4 block19 concat 0 0 conv4 block20 2 conv 0 0 conv4 block21 0 bn batchnormal none 23 35 896 3584 conv4 block20 concat 0 0 conv4 block21 0 relu activatio none 23 35 896 0 conv4 block21 0 bn 0 0 conv4 block21 1 conv conv2d none 23 35 128 114688 conv4 block21 0 relu 0 0 conv4 block21 1 bn batchnormal none 23 35 128 512 conv4 block21 1 conv 0 0 conv4 block21 1 relu activatio none 23 35 128 0 conv4 block21 1 bn 0 0 conv4 block21 2 conv conv2d none 23 35 32 36864 conv4 block21 1 relu 0 0 conv4 block21 concat concatena none 23 35 928 0 conv4 block20 concat 0 0 conv4 block21 2 conv 0 0 conv4 block22 0 bn batchnormal none 23 35 928 3712 conv4 block21 concat 0 0 conv4 block22 0 relu activatio none 23 35 928 0 conv4 block22 0 bn 0 0 conv4 block22 1 conv conv2d none 23 35 128 118784 conv4 block22 0 relu 0 0 conv4 block22 1 bn batchnormal none 23 35 128 512 conv4 block22 1 conv 0 0 conv4 block22 1 relu activatio none 23 35 128 0 conv4 block22 1 bn 0 0 conv4 block22 2 conv conv2d none 23 35 32 36864 conv4 block22 1 relu 0 0 conv4 block22 concat concatena none 23 35 960 0 conv4 block21 concat 0 0 conv4 block22 2 conv 0 0 conv4 block23 0 bn batchnormal none 23 35 960 3840 conv4 block22 concat 0 0 conv4 block23 0 relu activatio none 23 35 960 0 conv4 block23 0 bn 0 0 conv4 block23 1 conv 
[Keras model.summary() table truncated here: a DenseNet-style summary listing Conv2D / BatchNormalization / Activation / Concatenate / AveragePooling2D layers from conv4_block23 through conv5_block16, ending with bn, relu, avg_pool (GlobalAveragePooling2D) and fc1000 (Dense, 10 units).]
Total params: 7,047,754
Trainable params: 6,964,106
Non-trainable params: 83,648
Train for 100 steps, validate for 10 steps
WARNING:tensorflow:From
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/layers/normalization.py:477: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where.
2019-09-19 11:25:34.482086: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-09-19 11:25:34.711640: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-09-19 11:25:35.685779: W tensorflow/stream_executor/gpu/redzone_allocator.cc:312] Not found: ./bin/ptxas not found. Relying on driver to perform ptx compilation. This message will be only logged once.

If I remove the MirroredStrategy scope, the code runs successfully and does not hang (doing meaningless training, of course).

Investigation:
top shows the python3 process (PID 3161, root) at roughly 24.0% CPU and 5.3% memory (TIME+ 181:17.23). nvidia-smi's output is the same as the one quoted in the system information above: all the GPUs are constantly 100% busy.
top -H -p 3161 (threads of the running process): Threads: 155 total, 0 running, 155 sleeping, 0 stopped, 0 zombie. Four python3 threads (PIDs 3261, 3255, 3259, 3257) each consume roughly 6% CPU; the main thread (3161) and the remaining threads are essentially idle.

bt in gdb (PID 3161), trace of the main thread:
#0  0x00007f26924c5839 in syscall () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007f264b30e53b in nsync::nsync_mu_semaphore_p_with_deadline(
nsync nsync semaphore s timespec from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 2 0x00007f264b30db59 in nsync nsync sem wait with cancel nsync waiter timespec nsync nsync note s from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 3 0x00007f264b30b11b in nsync nsync cv wait with deadline generic nsync nsync cv s void void void void void timespec nsync nsync note s from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 4 0x00007f264b30b5f3 in nsync nsync cv wait with deadline nsync nsync cv s nsync nsync mu s timespec nsync nsync note s from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 5 0x00007f264344f60c in tensorflow kernelanddevicefunc run tensorflow scopedstepcontainer absl inlinedvector const std vector tensorflow nodeexecstat tensorflow stepstat tensorflow graphcollector tensorflow cancellationmanager from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 6 0x00007f264344fa06 in tensorflow kernelanddevicefunc run absl inlinedvector const std vector tensorflow nodeexecstat tensorflow stepstat tensorflow graphcollector tensorflow cancellationmanager from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 7 0x00007f26434313f6 in tensorflow eagerkernelexecute tensorflow eagercontext absl inlinedvector const std unique ptr const tensorflow nodeexecstat tensorflow stepstat tensorflow graphcollector tensorflow cancellationmanager absl span from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 8 0x00007f2643431aed in tensorflow executenode run from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 9 0x00007f264346ca85 in tensorflow eagerexecutor runitem std unique ptr from usr local lib python3 6 dist package tensorflow core python pywrap 
tensorflow internal so 10 0x00007f264346d18d in tensorflow eagerexecutor addorexecute std unique ptr from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 11 0x00007f264342cd86 in tensorflow anonymous namespace eagerlocalexecute tensorflow eageroperation tensorflow tensorhandle int from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 12 0x00007f264342ed00 in tensorflow eagerexecute tensorflow eageroperation tensorflow tensorhandle int type to continue or q to quit from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 13 0x00007f26432bc05d in tfe execute from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 14 0x00007f264324640c in tfe py executecancelable tfe context char const char const absl inlinedvector object tfe cancellationmanager absl inlinedvector tf status from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 15 0x00007f2643246941 in tfe py execute tfe context char const char const absl inlinedvector object absl inlinedvector tf status from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 16 0x00007f2642ddeb34 in wrap tfe py execute from usr local lib python3 6 dist package tensorflow core python pywrap tensorflow internal so 17 0x00000000005097cf in pycfunction fastcalldict kwargs nargs args func obj at object methodobject c 234 18 pycfunction fastcallkeyword kwname nargs stack func at object methodobject c 294 19 call function lto priv at python ceval c 4851 20 0x000000000050b4a9 in pyeval evalframedefault at python ceval c 3335 21 0x0000000000507125 in pyeval evalframeex throwflag 0 f frame 0x62d109a8 for file usr local lib python3 6 dist package tensorflow core python eager execute py line 61 in quick execute op name inference distribute function 164755 num output 3 input to continue or q to quit 95 in call 
self eagerdefinedfunction name b inference distribute function 164755 function deleter eagerdefinedfunctiondeleter name b inference distribute function 164755 at remote 0x7f1e0e0df438 register on context true definition signature num output 3 output type 9 1 1 output shape control capture set func graph output group lock acquire release group lock acquire release waiter at remote0x7f2384537f60 num group 2 group member count 0 0 at remote 0x7f2384537c88 node by i d 1 input val i d value 1 original op none traceback device code location name distribute function autograph false autograph option none experimental relax shape false function cache primary group lock acquire at object abstract c 2310 45 pyobject call prepend kwargs args obj func at object abstract c 2373 46 method call lto priv at object classobject c 314 47 0x0000000000549f41 in pyobject call kwargs args tensor variant tensor attr self setattr track true self unconditional checkpoint dependency self unconditional dependency name variant tracker varianttracker resource handle resource device cpu resource deleter create resource sel truncate func at object abstract c 2261 48 slot tp call at object typeobject c 6207 49 0x000000000059f50e in pyobject call at object abstract c 2261 50 0x000000000050c854 in do call core kwdict type to continue or q to quit callargs tensor variant tensor attr self setattr track true self unconditional checkpoint dependency self unconditional dependency name variant tracker varianttracker resource handle resource device cpu resource deleter create resource sel truncate func function spec be method false default value none args to indice input iterator 0 arg name input iterator vararg name none arg index to default value input signature none at remote 0x7f25642e3630 name distribute function autograph false autograph option none experimental relax shape false function cache primary group lock acquire release waiter tensor variant tensor attr self setattr track true self 
unconditional checkpoint dependency self unconditional dependency name variant tracker varianttracker resource handle resource device cpu resource deleter truncate at python ceval c 754 53 pyeval evalcodewithname lto priv 1821 at python ceval c 4166 54 0x0000000000508794 in pyfunction fastcalldict at python ceval c 5084 55 0x00000000005940d1 in pyobject fastcalldict kwargs nargs 2 args 0x7ffcaa451e10 func at object abstract c 2310 56 pyobject call prepend kwargs args obj func at object abstract c 2373 57 method call lto priv at object classobject c 314 type to continue or q to quit 58 0x000000000059f50e in pyobject call at object abstract c 2261 59 0x000000000050c854 in do call core kwdict callargs tensor variant tensor attr self setattr track true self unconditional checkpoint dependency self unconditional dependency name variant tracker varianttracker resource handle resource device cpu resource deleter create resource sel truncate func at python ceval c 5120 60 pyeval evalframedefault at python ceval c 3404 61 0x0000000000507125 in pyeval evalframeex throwflag 0 f frame 0x7f2564359dd8 for file usr local lib python3 6 dist package tensorflow core python eager def function py line 480 in call self python function function spec be method false default value none args to indice input iterator 0 arg name input iterator vararg name none arg index to default value input signature none at remote 0x7f256435b400 autograph false experimental autograph option none experimental relax shape false experimental compile none create variable at object abstract c 2310 65 pyobject call prepend kwargs 0x0 args obj func at object abstract c 2373 66 method call lto priv at object classobject c 314 67 0x0000000000549f41 in pyobject call kwargs 0x0 args tensor variant tensor attr to continue or q to quit tensor at remote 0x7f263d5148d0 self setattr track true self unconditional checkpoint dependency self unconditional dependency name variant tracker varianttracker resource handle 
resource device cpu resource deleter create resource sel truncate func at object abstract c 2261 68 slot tp call at object typeobject c 6207 69 0x00000000005a95fc in pyobject fastcalldict kwargs nargs 1 args 0x7f25642fdc98 func python function function spec be method false default value none args to indice input iterator 0 arg name input iterator vararg name none arg index to default value input signature none at remote 0x7f256435b400 autograph false experimental autograph option none experimental relax shape false experimental compile none create variable tensor variant tensor attr self setattr track true self unconditional checkpoint dependency self unconditional dependency name variant tracker varianttracker resource handle resource truncate at python ceval c 754 74 pyeval evalcodewithname lto priv 1821 at python ceval c 4166 75 0x0000000000508fa0 in fast function lto priv at python ceval c 4992 76 0x000000000050999d in call function lto priv at python ceval c 4872 77 0x000000000050b4a9 in pyeval evalframedefault at python ceval c 3335 78 0x0000000000507125 in pyeval evalframeex throwflag 0 f frame 0x689353d8 for file usr local lib python3 6 dist package tensorflow core python keras engine training v2 type to continue or q to quit py line 123 in run one epoch model group lock acquire release waiter at remote 0x7f260c5101d0 num group 2 group member count 0 0 at remote 0x7f260c510160 node by i d 1 input val none i d value 1 original op none traceback device code location model group lock acquire release waiter at remote 0x7f260c5101d0 num group 2 group member count 0 0 at remote 0x7f260c510160 node by i d 1 input val none i d value 1 original op none traceback device code location group lock acquire release waiter at remote 0x7f260c5101d0 num group 2 group member count 0 0 at remote 0x7f260c510160 node by i d 1 input val none i d value 1 original op none traceback device code location to continue or q to quit 89 pyeval evalcodewithname lto priv 1821 at python 
ceval c 4166 90 0x0000000000508fa0 in fast function lto priv at python ceval c 4992 91 0x000000000050999d in call function lto priv at python ceval c 4872 92 0x000000000050c36e in pyeval evalframedefault at python ceval c 3351 93 0x0000000000507125 in pyeval evalframeex throwflag 0 f frame 0x52a7658 for file user vmarkovtsev image hang py line 31 in main sample ds train tensor variant tensor attr self setattr track true self unconditional checkpoint dependency self unconditional dependency name variant tracker varianttracker resource handle resource device cpu resource deleter create resource self setattr track true self unconditional checkpoint dependency self unconditional dependency n truncate at python ceval c 754 94 pyeval evalcodewithname lto priv 1821 at python ceval c 4166 95 0x0000000000508fa0 in fast function lto priv at python ceval c 4992 96 0x000000000050999d in call function lto priv at python ceval c 4872 97 0x000000000050b4a9 in pyeval evalframedefault at python ceval c 3335 98 0x0000000000507125 in pyeval evalframeex throwflag 0 f frame 0x20509a8 for file user vmarkovtsev image hang py line 35 in at python ceval c 754 99 pyeval evalcodewithname lto priv 1821 at python ceval c 4166 100 0x000000000050a3b3 in pyeval evalcodeex closure 0x0 kwdef 0x0 defcount 0 def 0x0 kwcount 0 kws 0x0 argcount 0 args 0x0 local global co at python ceval c 4187 101 pyeval evalcode co global local at python ceval c 731 102 0x00000000006349e2 in run mod at python pythonrun c 1025 103 0x0000000000634a97 in pyrun fileexflag at python pythonrun c 978 104 0x000000000063824f in pyrun simplefileexflag at python pythonrun c 419 105 0x0000000000638425 in pyrun anyfileexflag at python pythonrun c 81 106 0x0000000000638df1 in run file p cf 0x7ffcaa45361c filename fp at module main c 340 107 py main at module main c 810 108 0x00000000004b0de0 in main argc 2 argv 0x7ffcaa453818 at program python c 69 bt of each of the 4 running thread 0 0x00007fa23e7989d0 in nanosleep from lib x86 64 
-linux-gnu/libc.so.6
#1  0x00007fa1ec03cffd in tensorflow::(anonymous namespace)::PosixEnv::SleepForMicroseconds(long long) from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/libtensorflow_framework.so.2
#2  0x00007fa1f5d2dcd5 in tensorflow::EventMgr::PollLoop() from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#3  0x00007fa1ec0528d1 in Eigen::ThreadPoolTempl::WorkerLoop(int) from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/libtensorflow_framework.so.2
#4  0x00007fa1ec04feb8 in std::_Function_handler<{lambda()#1}>::_M_invoke(std::_Any_data const&) from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/libtensorflow_framework.so.2
#5  0x00007fa1ec6a58df in std::execute_native_thread_routine (__p=0x6360ed0) at dt7/src/libstdc++-v3/src/nonshared11/c++11/thread.cc:83
#6  0x00007fa23e49c6db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#7  0x00007fa23e7d588f in clone () from /lib/x86_64-linux-gnu/libc.so.6

Speculation: as we see, there are 4 such threads (I guess one for each of my GPUs) which are polling something; together they make the 25-30% CPU load. There are more than a hundred other threads, so I don't know which ones I should backtrace. Additionally, I tried different batch sizes, which of course influences the memory consumption, but it does not change anything about the hang. I can provide access to the hardware or execute arbitrary commands if needed.
tensorflow/tensorflow
tf1.14 TPU cannot use custom TFRecord dataset on Colab using TPU
Bug
I have created a TFRecord dataset file consisting of elements and their corresponding labels. I want to use it for training a model on Colab using the free TPU. I can load the TFRecord file and even run an iterator just to see the contents. However, before the beginning of the epoch it throws the following error:

UnimplementedError: From /job:worker/replica:0/task:0: File system scheme '[local]' not implemented (file: '/content/gdrive/My Drive/data/encodeddata_ingzip.tfrecord') [[node MultiDeviceIteratorGetNextFromShard]] [[RemoteCall]] [[IteratorGetNextAsOptional_1]]

In my understanding, it wants the TFRecord file in the TPU bucket. I don't know how to do that on Colab. How can one use a TFRecord file directly on a Colab TPU?
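The error above is the key: a Cloud TPU runs on a remote worker that only implements remote file systems (chiefly GCS, gs://), so a Drive-mounted /content/gdrive path is invisible to it. The usual fix is to copy the TFRecord into a GCS bucket and point the input pipeline at the gs:// URL. A minimal, TensorFlow-free sketch of a guard for this (the bucket name and helper are hypothetical, not from the issue):

```python
# TPU workers cannot read local or Drive-mounted files; the data must be
# re-hosted on a GCS bucket first, e.g. from a Colab cell:
#   !gsutil cp "/content/gdrive/My Drive/data/encoded.tfrecord" gs://my-bucket/data/
# (bucket name is hypothetical)

def assert_tpu_readable(path: str) -> str:
    """Fail fast with a clear message instead of deep inside the input pipeline."""
    if not path.startswith("gs://"):
        raise ValueError(
            f"TPU workers cannot read {path!r}: file system scheme [local] "
            "not implemented. Copy the file to a GCS bucket and pass the "
            "gs:// URL to TFRecordDataset instead."
        )
    return path

# assert_tpu_readable("gs://my-bucket/data/encoded.tfrecord")  # passes
# assert_tpu_readable("/content/gdrive/My Drive/x.tfrecord")   # raises ValueError
```

Calling the guard before building the tf.data pipeline surfaces the problem immediately rather than at the first epoch.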
tensorflow/tensorflow
convert unsupported operation
Bug
OS platform and distribution: Linux Ubuntu 18.04
TensorFlow installed from: source
TensorFlow version: tensorflow master

bazel-bin/tensorflow/lite/toco/toco --input_file=my_freeze12xshell.pb --output_file=my12.tflite --output_format=TFLITE --input_shapes=2,4,513 --input_arrays=x_mixed --output_arrays=y_out1,y_out2 --inference_type=QUANTIZED_UINT8 --inference_input_type=FLOAT --std_dev_values=1 --mean_values=0

My pb file is here: my_freeze12xshell.zip. Could anyone help?
tensorflow/tensorflow
missing output shape for inverse_stft
Bug
Link: (description of issue)

I am missing what the output dimensions are.

Clear description: I do not know what the output dimensions of inverse_stft are, and I do not know how they come to be.

Example code:

audionp = np.random.random((1, 120800)).astype(np.float32)
frame_length = 2048
frame_step = 2048 // 4
stft = tf.contrib.signal.stft(audionp[0], frame_length, frame_step)
invstft = tf.contrib.signal.inverse_stft(stft, frame_length, frame_step, window_fn=tf.contrib.signal.inverse_stft_window_fn(frame_step))
sess = tf.Session()
stft_, invstft_ = sess.run([stft, invstft])
print(invstft_.shape)

The output shape of invstft is then (120320,), and I don't know how it gets there. In my opinion, it should again be (120800,).
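For what it's worth, the reported shape can be derived without TensorFlow: stft without padding extracts 1 + (samples - frame_length) // frame_step complete frames, and inverse_stft reconstructs only the span those frames cover, frame_step * (frames - 1) + frame_length samples; the trailing 480 samples of the 120800 never fit into a complete frame. A quick pure-Python check (the helper name is mine) reproduces 120320:

```python
def istft_output_length(samples: int, frame_length: int, frame_step: int) -> int:
    # Number of complete frames stft can extract without padding.
    frames = 1 + (samples - frame_length) // frame_step
    # inverse_stft overlap-adds those frames back together; samples past the
    # last complete frame are simply dropped.
    return frame_step * (frames - 1) + frame_length

print(istft_output_length(120800, 2048, 2048 // 4))  # → 120320
```

Padding the signal (or using pad_end=True in stft) is the usual way to get the full length back.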
tensorflow/tensorflow
Failed to convert an RNN built with tf.keras by TFLiteConverter
Bug
System information:
OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 7
TensorFlow installed from (source or binary): binary
TensorFlow version (or github SHA if from source): tensorflow-gpu 2.0.0rc1

Provide the text output from tflite_convert:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: RESHAPE, TRANSPOSE. Here is a list of operators for which you will need custom implementations: TensorListFromTensor, TensorListReserve, TensorListStack, While.

Also, please include a link to a GraphDef or the model if possible.

Code which would reproduce the error:

from tensorflow.keras import layers, models
import tensorflow as tf
ipt = layers.Input((10, 5))
x = layers.SimpleRNN(1, return_sequences=True)(ipt)
model = models.Model(ipt, x)
cvt = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = cvt.convert()

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
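The converter message itself points at the usual workaround: the TensorList* and While ops that the RNN loop lowers to are not TFLite builtins, so the select-TF-ops (flex) runtime has to be enabled on the converter. A hedged sketch of that setting (the helper name is mine; it requires the tensorflow package at call time, and whether the result is acceptable depends on whether the flex delegate can ship with the app):

```python
def convert_with_flex_ops(keras_model):
    """Convert a Keras model, falling back to TF (flex) kernels for ops such
    as TensorListFromTensor / While that TFLite builtins do not cover."""
    import tensorflow as tf  # deferred so the sketch stays importable without TF

    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # keep builtin kernels where possible
        tf.lite.OpsSet.SELECT_TF_OPS,    # allow the remaining TF ops
    ]
    return converter.convert()
```

Unrolling the RNN (unroll=True on SimpleRNN) is the usual alternative when flex ops are not an option.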
tensorflow/tensorflow
Is there any documentation of TF core?
Bug
Such as: 1. a global flow chart of TensorFlow; 2. graph modification for model optimization (inserting an op in the backend); 3. basic data structures and data management; 4. details of communication for distribute strategies.
tensorflow/tensorflow
pix2pix tutorial is broken with TF2 RC
Bug
The tutorial given here no longer works on a clean install of the current TF2 RC. It breaks around the 16th block. I wasn't sure where to report this problem.

WARNING:tensorflow:Entity could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4
WARNING: Entity could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4

OperatorNotAllowedInGraphError                Traceback (most recent call last)
<ipython-input> in <module>
      2 train_dataset = train_dataset.shuffle(BUFFER_SIZE)
      3 train_dataset = train_dataset.map(load_image_train,
      4                                   num_parallel_calls=tf.data.experimental.AUTOTUNE)
      5 train_dataset = train_dataset.batch(1)

~/anaconda3/envs/playground/lib/python3.7/site-packages/tensorflow_core/python/data/ops/dataset_ops.py in map(self, map_func, num_parallel_calls)
   1902     return DatasetV1Adapter(
   1903         ParallelMapDataset(
   1904             self, map_func, num_parallel_calls, preserve_cardinality=False))
   1905
   1906   @deprecation.deprecated(None, "Use `tf.data.Dataset.map()")

~/anaconda3/envs/playground/lib/python3.7/site-packages/tensorflow_core/python/data/ops/dataset_ops.py in __init__(self, input_dataset, map_func, num_parallel_calls, use_inter_op_parallelism, preserve_cardinality, use_legacy_function)
   3452         self._transformation_name(),
   3453         dataset=input_dataset,
   3454         use_legacy_function=use_legacy_function)
   3455     self._num_parallel_calls = ops.convert_to_tensor(
   3456         num_parallel_calls, dtype=dtypes.int32, name="num_parallel_calls")

~/anaconda3/envs/playground/lib/python3.7/site-packages/tensorflow_core/python/data/ops/dataset_ops.py in __init__(self, func, transformation_name, dataset, input_classes, input_shapes, input_types, input_structure, add_to_graph, use_legacy_function, defun_kwargs)
   2693     resource_tracker = tracking.ResourceTracker()
   2694     with tracking.resource_
tracker scope resource tracker 2695 self function wrapper fn get concrete function internal 2696 if add to graph 2697 self function add to graph op get default graph anaconda3 envs playground lib python3 7 site package tensorflow core python eager function py in get concrete function internal self args kwargs 1852 bypass error check when get a graph function 1853 graph function self get concrete function internal garbage collect 1854 args kwargs 1855 we re return this concrete function to someone and they may keep a 1856 reference to the funcgraph without keep a reference to the anaconda3 envs playground lib python3 7 site package tensorflow core python eager function py in get concrete function internal garbage collect self args kwargs 1846 if self input signature 1847 args kwargs none none 1848 graph function self maybe define function args kwargs 1849 return graph function 1850 anaconda3 envs playground lib python3 7 site package tensorflow core python eager function py in maybe define function self args kwargs 2148 graph function self function cache primary get cache key none 2149 if graph function be none 2150 graph function self create graph function args kwargs 2151 self function cache primary cache key graph function 2152 return graph function args kwargs anaconda3 envs playground lib python3 7 site package tensorflow core python eager function py in create graph function self args kwargs override flat arg shape 2039 arg name arg name 2040 override flat arg shape override flat arg shape 2041 capture by value self capture by value 2042 self function attribute 2043 tell the concretefunction to clean up its graph once it go out of anaconda3 envs playground lib python3 7 site package tensorflow core python framework func graph py in func graph from py func name python func args kwargs signature func graph autograph autograph option add control dependency arg name op return value collection capture by value override flat arg shape 913 convert func 914 915 func 
_outputs = python_func(*func_args, **func_kwargs)
    916
    917     # invariant: `func_outputs` contains only Tensors, CompositeTensors,

~/anaconda3/envs/playground/lib/python3.7/site-packages/tensorflow_core/python/data/ops/dataset_ops.py in wrapper_fn(*args)
   2687       attributes=defun_kwargs)
   2688   def wrapper_fn(*args):  # pylint: disable=missing-docstring
   2689     ret = _wrapper_helper(*args)
   2690     ret = structure.to_tensor_list(self._output_structure, ret)
   2691     return [ops.convert_to_tensor(t) for t in ret]

~/anaconda3/envs/playground/lib/python3.7/site-packages/tensorflow_core/python/data/ops/dataset_ops.py in _wrapper_helper(*args)
   2632       nested_args = (nested_args,)
   2633
   2634       ret = autograph.tf_convert(func, ag_ctx)(*nested_args)
   2635       # If `func` returns a list of tensors, `nest.flatten()` and
   2636       # `ops.convert_to_tensor()` would conspire to attempt to stack

~/anaconda3/envs/playground/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
    235     except Exception as e:  # pylint: disable=broad-except
    236       if hasattr(e, 'ag_error_metadata'):
    237         raise e.ag_error_metadata.to_exception(e)
    238       else:
    239         raise

OperatorNotAllowedInGraphError: in converted code:

    <ipython-input>:3 load_image_train
        input_image, real_image = random_jitter(input_image, real_image)
    <ipython-input>:8 random_jitter
        if tf.random.uniform(()) > 0.5:
    /Users/nathan/anaconda3/envs/playground/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:765 __bool__
        self._disallow_bool_casting()
    /Users/nathan/anaconda3/envs/playground/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:528 _disallow_bool_casting
        "using a `tf.Tensor` as a Python `bool`")
    /Users/nathan/anaconda3/envs/playground/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:513 _disallow_when_autograph_disabled
        " Try decorating it directly with @tf.function.".format(task))

    OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph is disabled in this function. Try decorating it directly with @tf.function.
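The root cause in the trace is the plain Python `if` on tf.random.uniform(()) inside random_jitter: once AutoGraph fails to convert the function (the "Bad argument number for Name: 3, expecting 4" cause is a known symptom of an incompatible gast version), dataset.map traces it in graph mode, where a tensor has no truth value. A hedged sketch of the usual graph-safe rewrite (the function body is a stand-in for the tutorial's, not a copy of it):

```python
def random_jitter_graph_safe(input_image, real_image):
    """Stand-in for the tutorial's random_jitter: the coin flip is expressed
    as tf.cond instead of a Python `if`, so tracing never calls bool(tensor)."""
    import tensorflow as tf  # deferred import; sketch assumes TF 2.x

    flip = tf.random.uniform(()) > 0.5
    return tf.cond(
        flip,
        lambda: (tf.image.flip_left_right(input_image),
                 tf.image.flip_left_right(real_image)),
        lambda: (input_image, real_image),
    )
```

Upgrading gast so AutoGraph converts the original `if` is the other fix that was commonly reported for this tutorial.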
tensorflow/tensorflow
In tf.estimator, training metrics are not divided by the batch size
Bug
System information:
TensorFlow version: TF 1.12
Python version: 3.6

Describe the current behavior: I am training a model with tf.estimator, and it seems that the training metric (root mean squared error) is not divided by the batch size, for both training and validation. The batch size is 100. Indeed, for comparison, this is the mean squared error loss for training and validation.

Code to reproduce the issue: I am using the standard structure for model_fn in tf.estimator. Here's the main part of the code:

def model_fn(features, labels, mode, params):
    # model layers ...
    rmse = tf.metrics.root_mean_squared_error(labels, predictions)
    if mode == tf.estimator.ModeKeys.PREDICT:
        predictions = {"new_state": predictions}
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)
    loss = tf.losses.mean_squared_error(labels, predictions)
    metrics = {"rmse": rmse}
    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss, eval_metric_ops=metrics)
    optimizer = tf.train.AdamOptimizer(learning_rate=params.learning_rate)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
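One likely explanation (my reading, not confirmed in the issue): tf.metrics.root_mean_squared_error is a streaming metric that accumulates a running total of squared errors and a running count across all batches seen so far, so the value it reports is the RMSE over everything seen so far rather than a per-batch value like the loss. A pure-Python sketch of that accumulation:

```python
import math

class StreamingRMSE:
    """Mimics a streaming RMSE metric: state accumulates across update calls."""

    def __init__(self):
        self.total_sq_err = 0.0
        self.count = 0

    def update(self, labels, predictions):
        for y, p in zip(labels, predictions):
            self.total_sq_err += (y - p) ** 2
            self.count += 1
        return self.result()

    def result(self):
        return math.sqrt(self.total_sq_err / self.count)

m = StreamingRMSE()
m.update([1.0, 2.0], [1.0, 4.0])   # batch 1: squared errors 0 and 4
m.update([3.0, 3.0], [3.0, 3.0])   # batch 2 is perfect, but the running
print(m.result())                  # value is sqrt(4/4) = 1.0, not 0.0
```

This is why the metric curve smooths out and diverges from the per-batch loss curve; it is averaging over examples, not being "not divided by the batch size".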
tensorflow/tensorflow
horrible energy impact when opening tensorflow.org with Safari
Bug
Not sure it is appropriate to file this kind of problem here, but I got nothing with Google. Every time I open tensorflow.org in Safari, the battery consumption rate is unreasonably high. Why?
tensorflow/tensorflow
sparse_softmax_cross_entropy_with_logits fails for symbolic tensors in eager mode
Bug
system information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS. TensorFlow installed from (source or binary): pip. TensorFlow version (use command below): 2.0.0-rc0. Python version: 3.6. This code:

tf.nn.sparse_softmax_cross_entropy_with_logits(
    tf.keras.layers.Input((None,), dtype=tf.int32),
    tf.keras.layers.Input((None, None)))

fails with this exception:

Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/nn_ops.py", line 3477, in sparse_softmax_cross_entropy_with_logits_v2
    labels=labels, logits=logits, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/nn_ops.py", line 3410, in sparse_softmax_cross_entropy_with_logits
    array_ops.shape(logits)[-1], 1)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/check_ops.py", line 506, in assert_equal
    if not condition:
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 765, in __bool__
    self._disallow_bool_casting()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 534, in _disallow_bool_casting
    self._disallow_in_graph_mode("using a `tf.Tensor` as a Python `bool`")
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 523, in _disallow_in_graph_mode
    " this function with @tf.function.".format(task))
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.

But this works:

tf.compat.v1.disable_eager_execution()
tf.nn.sparse_softmax_cross_entropy_with_logits(
    tf.keras.layers.Input((None,), dtype=tf.int32),
    tf.keras.layers.Input((None, None)))

Seems similar to #31848, although the underlying issue here is assert_equal not faring well with symbolic tensors in eager mode. The issue seems to be from here: #L330-L334. Here a tensor is cast to a boolean in eager mode, but that is not possible in the case of a symbolic Keras tensor, even though TensorFlow is executing eagerly.
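The failing assert_equal ultimately compares shape components, where a dimension of a symbolic tensor may be statically unknown. A tiny pure-Python sketch of a static shape check that treats `None` as "unknown, hence compatible" (`shapes_compatible` is a hypothetical helper for illustration, not TensorFlow's actual implementation):

```python
def shapes_compatible(a, b):
    # None marks a statically unknown dimension; it is compatible with
    # anything, so only concrete dimensions are actually compared.
    if len(a) != len(b):
        return False
    return all(x is None or y is None or x == y for x, y in zip(a, b))
```

Under this convention `[None, 3]` is compatible with `[2, 3]`, whereas a Python `bool()` cast of the symbolic comparison has no answer to give, which is exactly what the error above reports.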
tensorflow/tensorflow
TensorFlow 2.0rc1 import bug
Bug
Hi, I am trying to convert one of my training templates to TensorFlow 2 but encountered several issues. I cannot reach a point where my code is even executed, because many imports already terminate the application at start:

from tensorflow_core.python.client.session import InteractiveSession

def main():
    print('hello world')

if __name__ == '__main__':
    main()

This code results instantly in an error, and the print is never reached:

2019-09-17 09:18:51.362815: E tensorflow/core/lib/monitoring/collection_registry.cc:77] Cannot register 2 metrics with the same name: /tensorflow/api/python/session_created (Counter)
Traceback (most recent call last):
  File "/home/xxx/dev/tf2xxx/bug.py", line 1, in <module>
    from tensorflow_core.python.client.session import InteractiveSession
  File "/home/xxx/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 49, in <module>
    'Counter for number of sessions created in Python.')
  File "/home/xxx/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow_core/python/eager/monitoring.py", line 183, in __init__
    name, description, *labels)
  File "/home/xxx/virtualenvs/tf2/lib/python3.6/site-packages/tensorflow_core/python/eager/monitoring.py", line 121, in __init__
    self._metric = self._metric_methods[self._label_length].create(*args)
tensorflow.python.framework.errors_impl.AlreadyExistsError: Another metric with the same name already exists.

Next to InteractiveSession there are multiple other imports that cause problems with monitoring.py. Changing to TF2's eager mode and not worrying about sessions does not do the trick. Keras, however, runs fine and does not cause any problems, except the Conv2D layer (CUDA), which has caused problems for a while already.

system information: OS platform and distribution: Ubuntu 18.04. TensorFlow installed from: pip3. TensorFlow version: 2.0.0rc1. Python version: 3.6.8.
tensorflow/tensorflow
AssertionError when using masks with an unrolled stacked LSTM
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag: bug_template. system information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): 1.14. Python version: 3.6. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: no GPU, 32 GB RAM. You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)" 2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)". Describe the current behavior: I receive an AssertionError when creating a forward pass for an unrolled multi-layer LSTM while using a mask. Describe the expected behavior: no AssertionError, or at least a good explanation as to the cause. Code to reproduce the issue:

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [3, 4, 5])
mask = tf.placeholder(tf.bool, [3, 4])
single_cells = [tf.keras.layers.LSTMCell(10) for _ in range(3)]
multi_cell = tf.keras.layers.StackedRNNCells(cells=single_cells)
lstm = tf.keras.layers.RNN(cell=multi_cell, unroll=True)
outputs = lstm(inputs=inputs, mask=mask)  # AssertionError occurs here

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
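For reference, the mask an RNN layer consumes is a boolean [batch, time] array where False marks padded timesteps. A pure-Python sketch of the intended semantics for one sequence (`last_valid_output` is a hypothetical helper, unrelated to the failing Keras internals):

```python
def last_valid_output(step_outputs, mask_row):
    # step_outputs: one sequence's per-timestep outputs
    # mask_row: booleans, True where the timestep is real (not padding)
    last = None
    for out, valid in zip(step_outputs, mask_row):
        if valid:
            last = out  # masked (False) steps leave the result unchanged
    return last
```

So for a sequence whose last timestep is padding, the layer is expected to carry forward the state/output from the last unmasked step, which is what the masked forward pass in the snippet above should compute.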
tensorflow/tensorflow
loading weights from a checkpoint file fails (Python / C API)
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag: bug_template. system information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): stock example and custom code. OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a. TensorFlow installed from (source or binary): model created with the pip package; training performed with the C API from a compiled libtensorflow. TensorFlow version (use command below): 2.0rc. Python version: 3.6. Bazel version (if compiling from source): 0.26.1. GCC/compiler version (if compiling from source): 7.4. CUDA/cuDNN version: -. GPU model and memory: -. You can collect some of this information using our environment capture script. My environment: tf_env.txt (attached).

Describe the current behavior: I am using TensorFlow's C API to train models created with tf.keras in Python. After training in my C program, a checkpoint file is created from which a model's weights can be restored for more training or prediction within my C program. When trying to restore my model in Python, trained weights cannot be loaded, neither by

model.load_weights('path/to/checkpoint')

nor with

checkpoint = tf.train.Checkpoint(model=model)
checkpoint.restore('path/to/checkpoint')

The model loads successfully with

model = keras.experimental.load_from_saved_model('path/to/model_folder')

but this is an untrained model; to be precise, this model has the same weights as when the model was first defined.

Describe the expected behavior: in my C program I can load/save/restore my model with no problem. I was expecting to be able to load the trained model and make predictions in Python as well, from the model trained in my C program.

Code to reproduce the issue: there are several steps that I will describe to accurately reproduce my problem.

1. Create the model with keras_model.py:

python keras_model.py

This will create a Keras model and export it in keras_model (this folder name is hard-coded in the C program).

2. Get data with getData.py:

python getData.py

This will download the Fashion-MNIST dataset and store the training examples and labels as text files in data.txt and labels.txt (these names are hard-coded in the C program).

3. Compile the C program. The only requirements for my C program are the TensorFlow shared object files (libtensorflow.so and libtensorflow_framework.so) and the C API header file (c_api.h). These are compiled with bazel via:

./configure
bazel build -c opt //tensorflow/tools/lib_package:libtensorflow

Extract the shared object files from libtensorflow.tar.gz. My C program can then be compiled with:

gcc -Wall -I<path-to-libtensorflow>/include -L<path-to-libtensorflow>/lib -o bellytf bellytf.c -ltensorflow

Assuming data.txt, labels.txt and keras_model are in the same directory as the executable, run:

LD_LIBRARY_PATH=<path-to-libtensorflow>/lib ./bellytf

This will load the saved model and training data, print predictions from the untrained model, train for a number of epochs, print predictions from the trained model, and save the weights as a checkpoint file in the variables folder inside keras_model.

4. Load the trained model in Python from the checkpoint file. This is where I encounter problems:

import numpy as np
import tensorflow as tf
from tensorflow import keras

# get data
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# scale and reshape
train_images = train_images / 255
train_images = np.reshape(train_images, (60000, 784))

# load model
model = keras.experimental.load_from_saved_model('keras_model')

# predictions are wrong
model.predict(train_images)[0]

# trying to load weights; this produces an AssertionError (traceback at end of issue)
model.load_weights('keras_model/variables/belly')

# trying to load the model via tf.train.Checkpoint
checkpoint = tf.train.Checkpoint(model=model)
status = checkpoint.restore('keras_model/variables/belly')

# predictions are wrong
model.predict(train_images)[0]

Other info / logs: traceback from

model.load_weights('keras_model/variables/belly')

AssertionError
Traceback (most recent call last):
  <ipython-input> in <module>
----> 1 model.load_weights('keras_model/variables/belly')

~/project/tensorflow/env/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in load_weights(self, filepath, by_name)
    160       raise ValueError('load_weights is not yet supported with TPUStrategy '
    161                        'with steps_per_run greater than 1.')
--> 162     return super(Model, self).load_weights(filepath, by_name)
    163
    164   @trackable.no_automatic_dependency_tracking

~/project/tensorflow/env/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py in load_weights(self, filepath, by_name)
   1396         # streaming restore for any variables created in the future.
   1397         trackable_utils.streaming_restore(status=status, session=session)
-> 1398       status.assert_nontrivial_match()
   1399       return status
   1400     if h5py is None:

~/project/tensorflow/env/lib/python3.6/site-packages/tensorflow/python/training/tracking/util.py in assert_nontrivial_match(self)
    915     `assert_nontrivial_match` and `assert_consumed` (both are less
    916     useful since we don't touch Python objects or Python state).
--> 917     return self.assert_consumed()
    918
    919   def _gather_saveable_objects(self):

~/project/tensorflow/env/lib/python3.6/site-packages/tensorflow/python/training/tracking/util.py in assert_consumed(self)
    892       raise AssertionError(
    893           'Some objects had attributes which were not restored: %s'
--> 894           % (unused_attributes,))
    895     for trackable in self._graph_view.list_objects():
    896       # pylint: disable=protected-access

AssertionError: Some objects had attributes which were not restored: dense_2/kernel, dense_2/bias, dense_1_1/kernel, dense_1_1/bias
tensorflow/tensorflow
Python kernel restarts when training an XGBoost estimator
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information I be use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 debian linux intel x86 64 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device na tensorflow instal from source or binary binary via pip install of tf2 0 0 rc1 tensorflow version use command below tf2 0 0 rc1 python version 3 7 4 cuda cudnn version how to determine gpu model and memory geforce rtx 2070 8 gb v2 0 0 rc0 101 gd2d2566 2 0 0 rc1 check python python version 3 7 4 python branch python build version default aug 13 2019 20 35 49 python compiler version gcc 7 3 0 python implementation cpython check os platform os linux os kernel version 1 smp debian 4 19 37 5 deb10u2 2019 08 08 os release version 4 19 0 5 amd64 os platform linux 4 19 0 5 amd64 x86 64 with debian 10 1 linux distribution debian 10 1 linux os distribution debian 10 1 mac version uname uname result system linux node gsd release 4 19 0 5 amd64 version 1 smp debian 4 19 37 5 deb10u2 2019 08 08 machine x86 64 processor architecture 64bit machine x86 64 be we in docker no compiler c debian 8 3 0 6 8 3 0 copyright c 2018 free software foundation inc this be free software see the source for copy condition there be no warranty not even for merchantability or fitness for a particular purpose check pip numpy 1 17 2 protobuf 3 9 1 tensorflow 2 0 0rc1 check for virtualenv false tensorflow import tf version version 2 0 0 rc1 tf version git version v2 0 0 rc0 101 gd2d2566 tf version compiler version 7 3 1 20180303 22191 find library libpthread so 0 0 search 22191 search path home davis anaconda3 envs py3tf2 bin lib tls haswell x86 64 home davis anaconda3 envs py3tf2 bin lib tls haswell home davis anaconda3 envs py3tf2 bin lib tls x86 64 home davis anaconda3 envs py3tf2 bin 
lib tls home davis anaconda3 envs py3tf2 bin lib haswell x86 64 home davis anaconda3 envs py3tf2 bin lib haswell home davis anaconda3 envs py3tf2 bin lib x86 64 home davis anaconda3 envs py3tf2 bin lib rpath from file home davis anaconda3 envs py3tf2 bin python 22191 try file home davis anaconda3 envs py3tf2 bin lib tls haswell x86 64 libpthread so 0 22191 try file home davis anaconda3 envs py3tf2 bin lib tls haswell libpthread so 0 22191 try file home davis anaconda3 envs py3tf2 bin lib tls x86 64 libpthread so 0 22191 try file home davis anaconda3 envs py3tf2 bin lib tls libpthread so 0 22191 try file home davis anaconda3 envs py3tf2 bin lib haswell x86 64 libpthread so 0 22191 try file home davis anaconda3 envs py3tf2 bin lib haswell libpthread so 0 22191 try file home davis anaconda3 envs py3tf2 bin lib x86 64 libpthread so 0 22191 try file home davis anaconda3 envs py3tf2 bin lib libpthread so 0 22191 search cache etc ld so cache 22191 try file lib x86 64 linux gnu libpthread so 0 22191 22191 find library libc so 6 0 search 22191 search path home davis anaconda3 envs py3tf2 bin lib rpath from file home davis anaconda3 envs py3tf2 bin python 22191 try file home davis anaconda3 envs py3tf2 bin lib libc so 6 22191 search cache etc ld so cache 22191 try file lib x86 64 linux gnu libc so 6 22191 22191 find library libdl so 2 0 search 22191 search path home davis anaconda3 envs py3tf2 bin lib rpath from file home davis anaconda3 envs py3tf2 bin python 22191 try file home davis anaconda3 envs py3tf2 bin lib libdl so 2 22191 search cache etc ld so cache 22191 try file lib x86 64 linux gnu libdl so 2 22191 22191 find library libutil so 1 0 search 22191 search path home davis anaconda3 envs py3tf2 bin lib rpath from file home davis anaconda3 envs py3tf2 bin python 22191 try file home davis anaconda3 envs py3tf2 bin lib libutil so 1 22191 search cache etc ld so cache 22191 try file lib x86 64 linux gnu libutil so 1 22191 22191 find library librt so 1 0 search 22191 
search path home davis anaconda3 envs py3tf2 bin lib rpath from file home davis anaconda3 envs py3tf2 bin python 22191 try file home davis anaconda3 envs py3tf2 bin lib librt so 1 22191 search cache etc ld so cache 22191 try file lib x86 64 linux gnu librt so 1 22191 22191 find library libm so 6 0 search 22191 search path home davis anaconda3 envs py3tf2 bin lib rpath from file home davis anaconda3 envs py3tf2 bin python 22191 try file home davis anaconda3 envs py3tf2 bin lib libm so 6 22191 search cache etc ld so cache 22191 try file lib x86 64 linux gnu libm so 6 22191 22191 22191 call init lib x86 64 linux gnu libpthread so 0 22191 22191 22191 call init lib x86 64 linux gnu libc so 6 22191 22191 22191 call init lib x86 64 linux gnu libm so 6 22191 22191 22191 call init lib x86 64 linux gnu librt so 1 22191 22191 22191 call init lib x86 64 linux gnu libutil so 1 22191 22191 22191 call init lib x86 64 linux gnu libdl so 2 22191 22191 22191 initialize program home davis anaconda3 envs py3tf2 bin python 22191 22191 22191 transfer control home davis anaconda3 envs py3tf2 bin python 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload heapq cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload opcode cpython 37 m x86 64 linux gnu so 22191 22191 find library libffi so 6 0 search 22191 search path home davis anaconda3 envs py3tf2 lib python3 7 lib dynload tls haswell x86 64 home davis anaconda3 envs py3tf2 lib python3 7 lib dynload tls haswell home davis anaconda3 envs py3tf2 lib python3 7 lib dynload tls x86 64 home davis anaconda3 envs py3tf2 lib python3 7 lib dynload tls home davis anaconda3 envs py3tf2 lib python3 7 lib dynload haswell x86 64 home davis anaconda3 envs py3tf2 lib python3 7 lib dynload haswell home davis anaconda3 envs py3tf2 lib python3 7 lib dynload x86 64 home davis anaconda3 envs py3tf2 lib python3 7 lib dynload rpath from file home davis anaconda3 
envs py3tf2 lib python3 7 lib dynload ctype cpython 37 m x86 64 linux gnu so 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload tls haswell x86 64 libffi so 6 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload tls haswell libffi so 6 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload tls x86 64 libffi so 6 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload tls libffi so 6 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload haswell x86 64 libffi so 6 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload haswell libffi so 6 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload x86 64 libffi so 6 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload libffi so 6 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload libffi so 6 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload ctype cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload struct cpython 37 m x86 64 linux gnu so 22191 22191 find library libmkl rt so 0 search 22191 search path home davis anaconda3 envs py3tf2 lib python3 7 site package numpy tls haswell x86 64 home davis anaconda3 envs py3tf2 lib python3 7 site package numpy tls haswell home davis anaconda3 envs py3tf2 lib python3 7 site package numpy tls x86 64 home davis anaconda3 envs py3tf2 lib python3 7 site package numpy tls home davis anaconda3 envs py3tf2 lib python3 7 site package numpy haswell x86 64 home davis anaconda3 envs py3tf2 lib python3 7 site package numpy haswell home davis anaconda3 envs py3tf2 lib python3 7 site package numpy x86 64 home davis anaconda3 envs py3tf2 lib python3 7 site package numpy rpath from file home davis anaconda3 envs py3tf2 lib python3 7 site package numpy mklinit cpython 37 m x86 64 linux gnu so 22191 try file 
home davis anaconda3 envs py3tf2 lib python3 7 site package numpy tls haswell x86 64 libmkl rt so 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 site package numpy tls haswell libmkl rt so 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 site package numpy tls x86 64 libmkl rt so 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 site package numpy tls libmkl rt so 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 site package numpy haswell x86 64 libmkl rt so 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 site package numpy haswell libmkl rt so 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 site package numpy x86 64 libmkl rt so 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 site package numpy libmkl rt so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package numpy libmkl rt so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package numpy mklinit cpython 37 m x86 64 linux gnu so 22191 22191 home davis anaconda3 envs py3tf2 lib python3 7 site package numpy mklinit cpython 37 m x86 64 linux gnu so error symbol lookup error undefined symbol omp get num thread fatal 22191 find library libiomp5 so 0 search 22191 search path home davis anaconda3 envs py3tf2 lib python3 7 site package numpy rpath from file home davis anaconda3 envs py3tf2 lib python3 7 site package numpy mklinit cpython 37 m x86 64 linux gnu so 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 site package numpy libiomp5 so 22191 22191 find library libgcc s so 1 0 search 22191 search path home davis anaconda3 envs py3tf2 bin lib rpath from file home davis anaconda3 envs py3tf2 bin python 22191 try file home davis anaconda3 envs py3tf2 bin lib libgcc s so 1 22191 22191 22191 call init home davis anaconda3 envs py3tf2 bin lib libgcc s so 1 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package numpy 
libiomp5 so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package numpy core multiarray umath cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload math cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload datetime cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload pickle cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package numpy core multiarray test cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package numpy linalg lapack lite cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package numpy linalg umath linalg cpython 37 m x86 64 linux gnu so 22191 22191 find library libz so 1 0 search 22191 search path home davis anaconda3 envs py3tf2 lib python3 7 lib dynload rpath from file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload ctype cpython 37 m x86 64 linux gnu so 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload libz so 1 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload libz so 1 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload zlib cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload bz2 cpython 37 m x86 64 linux gnu so 22191 22191 find library liblzma so 5 0 search 22191 search path home davis anaconda3 envs py3tf2 lib python3 7 lib dynload rpath from file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload ctype cpython 37 m x86 64 linux gnu so 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload liblzma so 5 22191 22191 22191 call 
init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload liblzma so 5 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload lzma cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload grp cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload decimal cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package numpy fft fftpack lite cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package mkl fft pydfti cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package numpy random mtrand cpython 37 m x86 64 linux gnu so 22191 22191 find library libcrypto so 1 1 0 search 22191 search path home davis anaconda3 envs py3tf2 lib python3 7 lib dynload rpath from file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload ctype cpython 37 m x86 64 linux gnu so 22191 try file home davis anaconda3 envs py3tf2 lib python3 7 lib dynload libcrypto so 1 1 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload libcrypto so 1 1 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload hashlib cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload blake2 cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload sha3 cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload bisect cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 lib dynload random cpython 37 m x86 64 linux gnu so 22191 22191 22191 call init home davis anaconda3 envs 
py3tf2 lib python3 7 site package numpy libmkl core so 22191 22191 find library libittnotify so 0 search 22191 search path home davis anaconda3 envs py3tf2 bin lib rpath from file home davis anaconda3 envs py3tf2 bin python 22191 try file home davis anaconda3 envs py3tf2 bin lib libittnotify so 22191 search cache etc ld so cache 22191 search path lib x86 64 linux gnu tls haswell x86 64 lib x86 64 linux gnu tls haswell lib x86 64 linux gnu tls x86 64 lib x86 64 linux gnu tls lib x86 64 linux gnu haswell x86 64 lib x86 64 linux gnu haswell lib x86 64 linux gnu x86 64 lib x86 64 linux gnu usr lib x86 64 linux gnu tls haswell x86 64 usr lib x86 64 linux gnu tls haswell usr lib x86 64 linux gnu tls x86 64 usr lib x86 64 linux gnu tls usr lib x86 64 linux gnu haswell x86 64 usr lib x86 64 linux gnu haswell usr lib x86 64 linux gnu x86 64 usr lib x86 64 linux gnu lib tls haswell x86 64 lib tls haswell lib tls x86 64 lib tls lib haswell x86 64 lib haswell lib x86 64 lib usr lib tls haswell x86 64 usr lib tls haswell usr lib tls x86 64 usr lib tls usr lib haswell x86 64 usr lib haswell usr lib x86 64 usr lib system search path 22191 try file lib x86 64 linux gnu tls haswell x86 64 libittnotify so 22191 try file lib x86 64 linux gnu tls haswell libittnotify so 22191 try file lib x86 64 linux gnu tls x86 64 libittnotify so 22191 try file lib x86 64 linux gnu tls libittnotify so 22191 try file lib x86 64 linux gnu haswell x86 64 libittnotify so 22191 try file lib x86 64 linux gnu haswell libittnotify so 22191 try file lib x86 64 linux gnu x86 64 libittnotify so 22191 try file lib x86 64 linux gnu libittnotify so 22191 try file usr lib x86 64 linux gnu tls haswell x86 64 libittnotify so 22191 try file usr lib x86 64 linux gnu tls haswell libittnotify so 22191 try file usr lib x86 64 linux gnu tls x86 64 libittnotify so 22191 try file usr lib x86 64 linux gnu tls libittnotify so 22191 try file usr lib x86 64 linux gnu haswell x86 64 libittnotify so 22191 try file usr lib x86 64 
linux gnu haswell libittnotify so 22191 try file usr lib x86 64 linux gnu x86 64 libittnotify so 22191 try file usr lib x86 64 linux gnu libittnotify so 22191 try file lib tls haswell x86 64 libittnotify so 22191 try file lib tls haswell libittnotify so 22191 try file lib tls x86 64 libittnotify so 22191 try file lib tls libittnotify so 22191 try file lib haswell x86 64 libittnotify so 22191 try file lib haswell libittnotify so 22191 try file lib x86 64 libittnotify so 22191 try file lib libittnotify so 22191 try file usr lib tls haswell x86 64 libittnotify so 22191 try file usr lib tls haswell libittnotify so 22191 try file usr lib tls x86 64 libittnotify so 22191 try file usr lib tls libittnotify so 22191 try file usr lib haswell x86 64 libittnotify so 22191 try file usr lib haswell libittnotify so 22191 try file usr lib x86 64 libittnotify so 22191 try file usr lib libittnotify so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package numpy libmkl intel thread so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package numpy libmkl intel lp64 so 22191 22191 22191 call init home davis anaconda3 envs py3tf2 lib python3 7 site package numpy libmkl avx2 so 22191 22191 find library libtensorflow framework so 2 0 search 22191 search path home davis anaconda3 envs py3tf2 lib python3 7 site package tensorflow core python solib local u s stensorflow spython c upywrap utensorflow uinternal so utensorflow tls haswell x86 64 home davis anaconda3 envs py3tf2 lib python3 7 site package tensorflow core python solib local u s stensorflow spython c upywrap utensorflow uinternal so utensorflow tls haswell home davis anaconda3 envs py3tf2 lib python3 7 site package tensorflow core python solib local u s stensorflow spython c upywrap utensorflow uinternal so utensorflow tls x86 64 home davis anaconda3 envs py3tf2 lib python3 7 site package tensorflow core python solib local u s stensorflow spython c upywrap utensorflow 
     22191:	 search path=/home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/python/../../_solib_local/_U_S_Stensorflow_Spython_C_Upywrap_Utensorflow_Uinternal.so___Utensorflow/tls/haswell/x86_64: ... :/home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/python		(RPATH from file /home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so)
     22191:	  trying file= ... (tls/haswell/x86_64 hardware-capability variants of each RPATH directory, trimmed)
     22191:	  trying file=/home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/python/libtensorflow_framework.so.2
     22191:
     22191:	find library=libstdc++.so.6 [0]; searching
     22191:	 search path=/home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/python		(RPATH from file /home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so)
     22191:	  trying file=/home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/python/libstdc++.so.6
     22191:	 search path=/home/davis/anaconda3/envs/py3tf2/bin/../lib		(RPATH from file /home/davis/anaconda3/envs/py3tf2/bin/python)
     22191:	  trying file=/home/davis/anaconda3/envs/py3tf2/bin/../lib/libstdc++.so.6
     22191:
     22191:	calling init: /home/davis/anaconda3/envs/py3tf2/bin/../lib/libstdc++.so.6
     22191:
     22191:	calling init: /home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/python/libtensorflow_framework.so.2
     22191:
     22191:	find library=libhdfs.so [0]; searching
     22191:	 search path= ... (RPATH directories from _pywrap_tensorflow_internal.so and from /home/davis/anaconda3/envs/py3tf2/bin/python, trimmed)
     22191:	 search cache=/etc/ld.so.cache
     22191:	 search path=/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/lib:/usr/lib		(system search path)
     22191:	  trying file=/lib/x86_64-linux-gnu/libhdfs.so
     22191:	  trying file=/usr/lib/x86_64-linux-gnu/libhdfs.so
     22191:	  trying file=/lib/libhdfs.so
     22191:	  trying file=/usr/lib/libhdfs.so
     22191:
     22191:	calling init: /home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
     22191:
     22191:	calling init: ... (CPython extension modules: binascii, _posixsubprocess, select, pyexpat, _socket, _csv, fcntl, termios, _queue, _json, _ssl, array; libssl.so.1.1; tensorflow_core/python/framework/fast_tensor_util.so; wrapt/_wrappers)
     22191:
     22191:	find library=libhdf5-978d01a9.so.103.0.0 [0]; searching
     22191:	 search path=/home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/h5py/.libs		(RPATH from file /home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/h5py/_errors.cpython-37m-x86_64-linux-gnu.so)
     22191:	  trying file=/home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/h5py/.libs/libhdf5-978d01a9.so.103.0.0
     22191:	 ... (same pattern for libhdf5_hl-db841637.so.100.1.1, libsz-1c7dd0cf.so.2.0.1, libaec-2147abcd.so.0.0.4 and libz-a147dcb0.so.1.2.3, all found in h5py/.libs)
     22191:	calling init: ... (h5py extension modules: _errors, h5, defs, _objects, _conv, h5r, h5t, utils, h5z, h5a, h5s, h5p, h5ac, _proxy, h5d, h5ds, h5f, h5g, h5i, h5fd, h5pl, h5o, h5l)
     22191:	calling init: /home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/lite/experimental/microfrontend/python/ops/_audio_microfrontend_op.so
     22191:	calling init: ... (pandas extension modules: _libs.tslibs conversion, c_timestamp, nattype, np_datetime, timezones, tzconversion, timedeltas, offsets, ccalendar, strptime, fields, parsing, period, frequencies, timestamps, resolution; _libs hashtable, missing, lib, tslib, algos, interval, properties, hashing, ops, index, join, sparse, indexing, internals, reshape, window, skiplist, groupby, reduction, parsers, json, writers, testing; util._move; io.msgpack _packer and _unpacker; plus unicodedata and mmap)
     22192:	find library=libc.so.6 [0]; searching
     22192:	 search cache=/etc/ld.so.cache
     22192:	  trying file=/lib/x86_64-linux-gnu/libc.so.6
     22192:
     22192:	calling init: /lib/x86_64-linux-gnu/libc.so.6
     22192:
     22192:	initialize program: /bin/sh
     22192:
     22192:	transferring control: /bin/sh
     22192:
     22191:	calling fini: /home/davis/anaconda3/envs/py3tf2/bin/python [0]
     22191:	calling fini: ... (the shared objects above in reverse order: libutil.so.1, the CPython and numpy/mkl extension modules, _pywrap_tensorflow_internal.so, the h5py modules with their bundled libhdf5/libhdf5_hl/libsz/libaec/libz, _audio_microfrontend_op.so, libtensorflow_framework.so.2, librt.so.1, libdl.so.2, the pandas modules, libstdc++.so.6, libgcc_s.so.1, libm.so.6, each with [0])
fini home davis anaconda3 envs py3tf2 lib python3 7 site package panda libs skiplist cpython 37 m x86 64 linux gnu so 0 22191 22191 22191 call fini home davis anaconda3 envs py3tf2 lib python3 7 site package panda libs groupby cpython 37 m x86 64 linux gnu so 0 22191 22191 22191 call fini home davis anaconda3 envs py3tf2 lib python3 7 site package panda lib reduction cpython 37 m x86 64 linux gnu so 0 22191 22191 22191 call fini home davis anaconda3 envs py3tf2 lib python3 7 site package panda lib parser cpython 37 m x86 64 linux gnu so 0 22191 22191 22191 call fini home davis anaconda3 envs py3tf2 lib python3 7 site package panda lib json cpython 37 m x86 64 linux gnu so 0 22191 22191 22191 call fini home davis anaconda3 envs py3tf2 lib python3 7 site package panda lib writer cpython 37 m x86 64 linux gnu so 0 22191 22191 22191 call fini home davis anaconda3 envs py3tf2 lib python3 7 site package panda util move cpython 37 m x86 64 linux gnu so 0 22191 22191 22191 call fini home davis anaconda3 envs py3tf2 lib python3 7 site package panda io msgpack packer cpython 37 m x86 64 linux gnu so 0 22191 22191 22191 call fini home davis anaconda3 envs py3tf2 lib python3 7 site package panda io msgpack unpacker cpython 37 m x86 64 linux gnu so 0 22191 22191 22191 call fini home davis anaconda3 envs py3tf2 lib python3 7 site package panda lib testing cpython 37 m x86 64 linux gnu so 0 22191 22191 22191 call fini lib x86 64 linux gnu libpthread so 0 0 22191 env ld library path be unset dyld library path be unset nvidia smi mon sep 16 13 02 01 2019 nvidia smi 418 74 driver version 418 74 cuda version 10 1 gpu name persistence m bus i d disp a volatile uncorr ecc fan temp perf pwr usage cap memory usage gpu util compute m 0 geforce rtx 2070 on 00000000 01 00 0 on n a 38 30c p8 14w 175w 1249mib 7949mib 6 default process gpu memory gpu pid type process name usage 0 814 g usr lib xorg xorg 34mib 0 948 g usr bin gnome shell 24mib 0 1171 g usr lib xorg xorg 217mib 0 1250 g usr bin 
gnome shell 176mib 0 1551 g u channel token 12821291265308298198 8mib 0 2551 g equ channel token 447657113308850378 786mib cuda lib tensorflow instal from info name tensorflow version 2 0 0rc1 summary tensorflow be an open source machine learning framework for everyone home page author email license apache 2 0 location home davis anaconda3 envs py3tf2 lib python3 7 site package require by python version major minor micro releaselevel serial 3 7 4 final 0 bazel version describe the current behavior during the train of the xgboosted tree the kernel will abort and halt describe the expect behavior that the python kernel operate normally code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem import numpy as np import panda as pd import tensorflow as tf import ssl dftrain pd read csv dfeval pd read csv y train dftrain pop survive y eval dfeval pop survive try tensorflow version 2 x except exception pass import tensorflow as tf tf random set seed 123 print tensorflow version format tf version import platform print platform python version vocabulary dftrain sex unique fc tf feature column categorical column sex n sibling spouse parch class deck embark town alone numeric column age fare def one hot cat column feature name vocab return tf feature column indicator column tf feature column categorical column with vocabulary list feature name vocab feature column for feature name in categorical column need to one hot encode categorical feature he find the unique value in each column and use that array as a vocabulary vocabulary dftrain feature name unique feature column append one hot cat column feature name vocabulary for feature name in numeric column feature column append tf feature column numeric column feature name dtype tf float32 example dict dftrain head 1 class fc tf feature column indicator column tf feature column categorical column with vocabulary list class first second third print feature value 
format example class iloc 0 print one hot encode tf keras layer densefeature class fc example numpy use entire batch since this be such a small dataset num example len y train def make input fn x y n epoch none shuffle true def input fn dataset tf datum dataset from tensor slice dict x y if shuffle dataset dataset shuffle num example for training cycle thru dataset as many time as need n epoch none dataset dataset repeat n epoch in memory training doesn t use batch dataset dataset batch num example return dataset return input fn training and evaluation input function train input fn make input fn dftrain y train eval input fn make input fn dfeval y eval shuffle false n epoch 1 linear est tf estimator linearclassifi feature column train model linear est train train input fn max step 100 evaluation result linear est evaluate eval input fn since datum fit into memory use entire dataset per layer it will be fast above one batch be define as the entire dataset n batch 1 e tf estimator boostedtreesclassifier feature column n batch per layer n batch the model will stop training once the specify number of tree be build not base on the number of step est train train input fn max step 100 this kill notebook kernel other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach output of the e train call at end of sample script this output be generate and then the jupyter notebook will pop up a dialog which say kernel have stop info tensorflow call model fn warn tensorflow from home davis anaconda3 envs py3tf2 lib python3 7 site package tensorflow core python feature column feature column py 2158 numericcolumn transform feature from tensorflow python feature column feature column v2 be deprecate and will be remove in a future version instruction for update the old featurecolumn apis be be deprecate please use the new featurecolumn apis instead warn tensorflow 
From /home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column.py:2158: IndicatorColumn._transform_feature (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4302: VocabularyListCategoricalColumn._get_sparse_tensors (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column.py:2158: VocabularyListCategoricalColumn._transform_feature (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /home/davis/anaconda3/envs/py3tf2/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/canned/boosted_trees.py:161: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
WARNING:tensorflow:Issue encountered when serializing resources. Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore. '_Resource' object has no attribute 'name'
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
WARNING:tensorflow:Issue encountered when serializing resources. (as above)
INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpr2q3edmx/model.ckpt.
WARNING:tensorflow:Issue encountered when serializing resources. (as above)
INFO:tensorflow:loss = 0.6931468, step = 0
WARNING:tensorflow:It seems that global step (tf.train.get_global_step) has not been increased. Current value (could be stable): 0 vs previous value: 0. You could increase the global step by passing tf.train.get_global_step() to Optimizer.apply_gradients or Optimizer.minimize.
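For reference, the one_hot_cat_column helper in the repro script wraps a categorical vocabulary column in an indicator column, i.e. a one-hot encoding over the vocabulary. That encoding can be sketched in plain numpy (the helper name and shapes here are illustrative, not part of the estimator API):

```python
import numpy as np

def one_hot_encode(values, vocab):
    """One-hot each value over the vocabulary, like an indicator_column
    wrapped around a categorical_column_with_vocabulary_list."""
    index = {v: i for i, v in enumerate(vocab)}
    out = np.zeros((len(values), len(vocab)))
    for row, v in enumerate(values):
        out[row, index[v]] = 1.0
    return out

print(one_hot_encode(['Third', 'First'], ['First', 'Second', 'Third']))
# [[0. 0. 1.]
#  [1. 0. 0.]]
```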
tensorflowtensorflow
Timeline reports incorrect GPU memory usage
Bug
Please make sure that this is a bug. (As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu
- TensorFlow installed from (source or binary): conda binary
- TensorFlow version (use command below): 1.14 GPU
- Python version: 3.6.7
- CUDA/cuDNN version: 10.0

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior:
My objective is to measure the amount of GPU memory needed by BERT during inference. I have already set TF_FORCE_GPU_ALLOW_GROWTH (or a paraphrase thereof; I don't remember the exact flag) to true. nvidia-smi shows a usage of 2500 MB for a batch size of 227.
1. Even if allow-growth is true, the TensorFlow allocation takes place in steps: I see the same peak usage in nvidia-smi for batch sizes 16, 32, 64, and then it jumps up suddenly. Usage for batch sizes 128 through 227 is as above, while for batch size 228 it is suddenly 4495 MB. Hence this measurement does not serve my purpose; it clearly depends more on the allocator behaviour than on the network size / activation size.
2. Timeline visualisation in Chrome trace format shows a peak allocation on GPU_0_bfc of 418 MB for a batch size of 227. This is way too low if I compare it with the nvidia-smi trace peaking at 2500 MB. This makes no sense.
3. TensorBoard visualisation somehow shows the BERT node as using 175 MB. Again, it makes no sense.

Describe the expected behavior:
Please help me get an accurate measurement of how much memory my current model and batch size really take to run on a GPU with TF. The only trustworthy metric I see in this labyrinth is nvidia-smi, but only for the batch sizes at which it suddenly steps up.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):
Can use the google-research BERT code and profile for memory using ProfilerHook.

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
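One workaround for the allocator-dependent readings described above is to poll nvidia-smi during inference and keep the running maximum of its memory column. A minimal parsing sketch (the helper name and the sample line are illustrative, not part of any TensorFlow API):

```python
import re

def parse_mem_mib(nvidia_smi_line):
    """Extract (used, total) MiB from a memory field like '1249MiB / 7949MiB'."""
    m = re.search(r"(\d+)MiB\s*/\s*(\d+)MiB", nvidia_smi_line)
    if m is None:
        raise ValueError("no MiB memory field found")
    return int(m.group(1)), int(m.group(2))

# In a real measurement one would poll at a fixed interval, e.g.
#   subprocess.check_output(["nvidia-smi",
#                            "--query-gpu=memory.used", "--format=csv,noheader"])
# and keep the running maximum as the peak usage.
sample = "| 38%   30C    P8    14W / 175W |   1249MiB /  7949MiB |      6%      Default |"
print(parse_mem_mib(sample))  # (1249, 7949)
```

Note the regex skips the power field ("14W / 175W") because it requires the MiB suffix on both numbers.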
tensorflowtensorflow
tensorflow.contrib.integrate.odeint does not work with the default __call__ method of subclasses of tf.keras.layers.Layer
Bug
Please make sure that this is a bug. (As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub.)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 19.04
- Mobile device: n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): both 1.14 and 1.15-rc0
- Python version: 3.7
- CUDA/cuDNN version: 10.1 / 7.0
- GPU model and memory: TITAN RTX, 24 GB

Describe the current behavior:
For TensorFlow 1.14, when defining the ODE function using the default __call__ method of any subclass of tf.keras.layers.Layer, a ValueError is raised: the state's VarIsInitializedOp "has been marked as not fetchable". To avoid this issue, the ODE function has to be defined using either pure TensorFlow functions or the call method of the subclass instead of __call__. For TensorFlow 1.15-rc0, the same code will run forever without an error message; the above solution (using call instead of __call__) also applies there.

Describe the expected behavior:
tensorflow.contrib.integrate.odeint should work with the default __call__ method of any subclass of tf.keras.layers.Layer under both TensorFlow 1.14 and 1.15-rc0.

Code to reproduce the issue:

import tensorflow as tf
from tensorflow.python.keras import backend, layers, Input
backend.clear_session()
from tensorflow.contrib.integrate import odeint

x = Input(shape=(10,))
l0 = layers.Dense(units=10)
l0.build(x.shape)  # this line is necessary only to make `call` work

def ode_func(h, t, *ode_params):
    return l0(h)  # l0.call(h) avoids the error, which is hacky

ts = tf.constant([0, 1], dtype=tf.float64)
h_ts = odeint(ode_func, x, ts)

Other info / logs:
For TensorFlow 1.14: ValueError, VarIsInitializedOp not fetchable. Click here for detailed logs:

ValueError traceback (most recent call last)
<ipython-input> in <module>()
     13 print('code start')
     14
---> 15 h_ts = odeint(ode_func, x, ts)
     16
     17 print('code finished')

23 frames (abridged):
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/integrate/python/ops/odes.py in odeint(func, y0, t, rtol, atol, method, options, full_output, name)
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/integrate/python/ops/odes.py in _dopri5(func, y0, t, rtol, atol, full_output, first_step, safety, ifactor, dfactor, max_num_steps, name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/control_flow_ops.py in while_loop(...) / BuildLoop(...) / _BuildLoop(...)
/usr/local/lib/python3.6/dist-packages/tensorflow/contrib/integrate/python/ops/odes.py in _interpolate_solution(...), _adaptive_runge_kutta_step(...), _runge_kutta_step(func, y0, f0, t0, dt, tableau, name)
--> 123         k.append(func(yi, ti))

<ipython-input> in ode_func(h, t)
---> 10     return l0(h)  # l0.call(h) avoids the error, which is hacky

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
--> 561       base_layer_utils.create_keras_history(inputs)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py in create_keras_history / _create_keras_history_helper(tensors, processed_ops, created_layers)
--> 244           constants[i] = backend.function(op_inputs, [])([])

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py in __call__ / get_session(op_input_list) / _initialize_variables(session)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in run / _run / _FetchHandler.__init__ / _assert_fetchable
--> 499       raise ValueError('Operation %r has been marked as not fetchable.' % op.name)

ValueError: Operation 'odeint/interpolate_loop/interpolate/integrate_loop/runge_kutta_step/VarIsInitializedOp' has been marked as not fetchable.
tensorflowtensorflow
tf.keras batch normalization is batch-dependent at test time
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): both
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 7
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.13
- Python version: 3.7
- CUDA/cuDNN version: 10
- GPU model and memory: RTX

Describe the current behavior:
When doing inference using tf.keras BatchNormalization, for a given input sample x the output is dependent on the other samples in the batch: I can modify the other samples in the batch to influence the output of x. This should not be the case, as all samples should be processed independently. Whether I run a sample x with a batch size of 1 or with other samples, its output should be exactly the same. I am seeing differences of 0.3 at times; see my output below. It appears Keras is also having this issue.

Describe the expected behavior:
Upon using tf.keras batch normalization at test time, a given input sample x should have the same output regardless of the other samples in the batch.

Code to reproduce the issue (note: I put the code into Colab and was able to verify the bug):

import tensorflow as tf
import numpy as np

input1 = tf.keras.layers.Input(shape=(128, 128, 1))
x = tf.keras.layers.BatchNormalization()(input1)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(5, activation='softmax')(x)
model = tf.keras.models.Model(input1, x)

batchsize = 64
X = np.random.rand(1, 128, 128, 1)
X = np.repeat(X, batchsize, axis=0)
print('The following rows should all be equal:')
for k in range(1, batchsize):
    y = model.predict(X[0:k], batch_size=8)
    print(y[0])

I get the following output:

The following rows should all be equal:
2019-09-15 16:46:29.463998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-09-15 16:46:29.463998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-15 16:46:29.463998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-09-15 16:46:29.463998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:
1003]      0:   N
2019-09-15 16:46:29.464999: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 11334 MB memory) -> physical GPU (device: 0, name: TITAN X (Pascal), pci bus id: 0000:09:00.0, compute capability: 6.1)
2019-09-15 16:46:30.597112: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library cublas64_100.dll locally

[0.38215435, 0.66578805, 0.45068434, 0.85698235, 0.7698223]
[0.38215438, 0.66578805, 0.45068428, 0.85698223, 0.76982236]   (identical row repeated for the next ~15 predictions)
[0.38215443, 0.66578794, 0.4506843, 0.85698223, 0.76982224]    (identical row repeated for the remaining ~14 predictions shown)

When I run on the CPU, the problem goes away. When I run on my own data, I get the following code/output, which shows the problem more severely:

X, Y = testGenerator.next()
# scores
for kk in range(1, 16):
    s = model.predict(X[0:kk])
    print(s[0])

gives output (identical consecutive rows collapsed):

[9.3195647e-01, 4.5844557e-04, 6.6464258e-05, 6.7509413e-02, 2.0588086e-06, 1.3121526e-06, 1.7270798e-06, 2.0374323e-06, 2.1472629e-06]
[9.3195647e-01, 4.5844595e-04, 6.6464192e-05, 6.7509346e-02, 2.0587986e-06, 1.3121462e-06, 1.7270781e-06, 2.0374266e-06, 2.1472567e-06]
[9.3195623e-01, 4.5845023e-04, 6.6464687e-05, 6.7509525e-02, 2.0588097e-06, 1.3121534e-06, 1.7270876e-06, 2.0374339e-06, 2.1472645e-06]
[9.1518945e-01, 5.7900354e-04, 8.6429020e-05, 8.4134214e-02, 2.4391386e-06, 1.5495950e-06, 2.0541604e-06, 2.3649241e-06, 2.5034813e-06]
[9.1518945e-01, 5.7900301e-04, 8.6429180e-05, 8.4134296e-02, 2.4391431e-06, 1.5496009e-06, 2.0541527e-06, 2.3649197e-06, 2.5034813e-06]
[9.1442782e-01, 5.8447709e-04, 8.7354209e-05, 8.4889315e-02, 2.4558769e-06, 1.5600473e-06, 2.0685882e-06, 2.3791872e-06, 2.5190409e-06]   (×4 identical rows)
[2.7099210e-01, 5.2615767e-03, 1.4831917e-03, 7.2222257e-01, 9.9957060e-06, 6.1737474e-06, 8.7498120e-06, 7.3227852e-06, 8.3933583e-06]   (×5 identical rows)
[1.9735371e-01, 5.5795610e-03, 1.7620743e-03, 7.9526496e-01, 9.8536966e-06, 6.0714729e-06, 8.6973323e-06, 7.0008568e-06, 8.0944292e-06]

Edit 1: added code to replicate the issue. Also, if you remove the batch norm, the problem still persists.
Edit 2: displayed results from running on my own data.
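For reference, inference-mode batch normalization is supposed to use only the stored moving statistics, which makes each sample's output independent of its batch mates. A plain numpy sketch of the expected computation (function name, statistics, and epsilon here are illustrative):

```python
import numpy as np

def batchnorm_infer(x, moving_mean, moving_var, gamma, beta, eps=1e-3):
    # Inference-mode batch norm: only the stored moving statistics are used,
    # so each sample is normalized independently of its batch mates.
    return gamma * (x - moving_mean) / np.sqrt(moving_var + eps) + beta

rng = np.random.default_rng(0)
moving_mean = rng.normal(size=4)
moving_var = rng.uniform(0.5, 2.0, size=4)
gamma, beta = np.ones(4), np.zeros(4)

x = rng.normal(size=(1, 4))
alone = batchnorm_infer(x, moving_mean, moving_var, gamma, beta)[0]
batched = batchnorm_infer(np.repeat(x, 64, axis=0), moving_mean, moving_var, gamma, beta)[0]
assert np.allclose(alone, batched)  # batch composition must not matter
```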
tensorflowtensorflow
RNN layer does not reset dropout masks of RNNCell
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS Mojave 10.14.6
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v1.12.1-9392-gf3c7314d83, 1.15.0-rc0
- Python version: 3.7.4

Describe the current behavior:
The RNN layer with an RNNCell does not reset the states of the dropout masks, in contrast to the layer implementation of the cell. Thus the behavior of tf.keras.layers.GRU(10) differs from tf.keras.layers.RNN(tf.keras.layers.GRUCell(10)). This is especially problematic because the Keras RNN API tutorial states that both approaches (RNN layer, and RNN cell) are mathematically equivalent.

Describe the expected behavior:
The RNN layer should check the type of the RNNCell, and if it is a subclass of DropoutRNNCellMixin, reset the dropout masks after each call by calling cell.reset_recurrent_dropout_mask() and cell.reset_dropout_mask().

Code to reproduce the issue, partially copied from:

from __future__ import absolute_import, division, print_function
import numpy as np
import tensorflow as tf

tf.enable_eager_execution()
print(tf.__version__)

data = np.random.normal(0, 1, (1, 10, 2)).astype(np.float32)

rnn = tf.keras.layers.GRU(units=10, dropout=0.5, recurrent_dropout=0.5)
print(set(rnn(data, training=True).numpy()[0, 0] for _ in range(5)))

rnn_cell = tf.keras.layers.GRUCell(units=10, dropout=0.5, recurrent_dropout=0.5)
rnn = tf.keras.layers.RNN(rnn_cell)
print(set(rnn(data, training=True).numpy()[0, 0] for _ in range(5)))

Output:

WARNING:tensorflow:From check_dropout.py:5: The name tf.enable_eager_execution is deprecated. Please use tf.compat.v1.enable_eager_execution instead.
1.15.0-rc0
{0.04537238, 0.15487108, 0.0, 0.08881481, 0.055508718}   # different dropout masks are used for each call
{0.34464198}                                             # the same dropout mask is used for each call
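The difference between the two behaviors can be illustrated without TensorFlow: caching a dropout mask between calls versus resampling it. A toy sketch in plain numpy (class and method names here are made up, not the Keras implementation):

```python
import numpy as np

class CachedMaskDropout:
    """Illustrative only: a dropout whose mask is cached between calls,
    like a DropoutRNNCellMixin cell whose reset_dropout_mask() is never
    invoked by the wrapping RNN layer."""
    def __init__(self, rate, rng):
        self.rate, self.rng = rate, rng
        self._mask = None
    def __call__(self, x, reset=False):
        if reset or self._mask is None:
            self._mask = self.rng.random(x.shape) >= self.rate
        return x * self._mask / (1.0 - self.rate)

rng = np.random.default_rng(42)
drop = CachedMaskDropout(0.5, rng)
x = np.ones(1000)

y1, y2 = drop(x), drop(x)
assert np.array_equal(y1, y2)        # mask reused: the reported RNN(GRUCell(...)) behavior
y3 = drop(x, reset=True)
assert not np.array_equal(y1, y3)    # mask resampled: the GRU(...) behavior
```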
tensorflow/tensorflow
Importing packages of tf2.0.0rc in PyCharm
Bug
I'm using tf2.0.0rc on Win10.

1. When I write the code below in PyCharm, `Dataset` is underlined with a red line showing the message "Cannot find reference 'Dataset' in '__init__.py'", and PyCharm can't auto-complete Dataset's functions, but the script runs well:

```python
from tensorflow_core.python.data import Dataset
dataset = Dataset.from_tensors([2])
```

2. When I write the code below, there is no red line and PyCharm can auto-complete Dataset's functions, but the script raises an error: KeyError: "Registering two gradients with name 'ReduceDataset'! (Previous registration was in register /home/xiefangyuan/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/registry.py:66)"

```python
from tensorflow_core.python.data.ops.dataset_ops import Dataset
dataset = Dataset.from_tensors([2])
```
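The KeyError comes from TensorFlow's gradient registry, which refuses duplicate names: importing the implementation module directly can execute its registration code a second time under a second module path, whereas the public `tf.data.Dataset` alias does not. A toy registry, with illustrative names, shows the failure mode:

```python
class GradientRegistry:
    """Minimal registry that, like TensorFlow's, rejects duplicate names."""
    def __init__(self):
        self._entries = {}

    def register(self, name, value):
        if name in self._entries:
            raise KeyError("Registering two gradients with name %r!" % name)
        self._entries[name] = value


registry = GradientRegistry()

def module_level_registration():
    """Stands in for the registration code run at import time by the ops module."""
    registry.register('ReduceDataset', lambda op, grad: None)

module_level_registration()        # first import path (the public package)
error = None
try:
    module_level_registration()    # second import path (direct internal import)
except KeyError as err:
    error = str(err)
print(error)
```

This is why sticking to the public `tf.data.Dataset` entry point avoids the error even though the direct internal import happens to satisfy PyCharm's resolver.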
tensorflow/tensorflow
EstimatorSpec documentation removed from 1.14/1.15
Bug
Looks like the documentation for tf.estimator.EstimatorSpec got lost along the way from 1.13 to 1.14. Can we please add it back in?

URL(s) with the issue:
Description of issue (what needs changing): reinstate the documentation.
Submit a pull request? No.
tensorflow/tensorflow
Undeclared inclusion(s) in rule '//tensorflow/contrib/lite/kernels:eigen_support'
Bug
Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template

System information
OS platform and distribution: Linux Ubuntu 16.04
TensorFlow installed from (source or binary): source
TensorFlow version: r1.12
Python version: 2.7
Installed using virtualenv? pip? conda?: no
Bazel version (if compiling from source): 0.15.0
GCC/compiler version (if compiling from source): 5.4.0
CUDA/cuDNN version: CUDA 9.0, cuDNN 7
GPU model and memory: mobile GTX 1070

Describe the problem

```
sudo bazel --output_base=/opt/tensorflow/build build --config=cuda --config=opt \
    --define framework_shared_object=false \
    //tensorflow:libtensorflow_cc.so \
    //tensorflow/tools/pip_package:build_pip_package
```

Any other info / logs

.tf_configure.bazelrc:

```
build --action_env PYTHON_BIN_PATH="/usr/bin/python"
build --action_env PYTHON_LIB_PATH="/home/ryan/catkin_ws/devel/lib/python2.7/dist-packages"
build --python_path="/usr/bin/python"
build:ignite --define with_ignite_support=true
build:xla --define with_xla_support=true
build --action_env TF_NEED_OPENCL_SYCL="0"
build --action_env TF_NEED_ROCM="0"
build --action_env TF_NEED_CUDA="1"
build --action_env CUDA_TOOLKIT_PATH="/usr/local/cuda"
build --action_env TF_CUDA_VERSION="9.0"
build --action_env CUDNN_INSTALL_PATH="/usr/lib/x86_64-linux-gnu"
build --action_env TF_CUDNN_VERSION="7"
build --action_env TF_NCCL_VERSION="1"
build --action_env TF_CUDA_COMPUTE_CAPABILITIES="6.1"
build --action_env LD_LIBRARY_PATH="/usr/local/cuda-9.0/lib64:/home/ryan/catkin_ws/devel/lib:/opt/ros/kinetic/lib:/opt/ros/kinetic/lib/x86_64-linux-gnu"
build --action_env TF_CUDA_CLANG="0"
build --action_env GCC_HOST_COMPILER_PATH="/usr/bin/gcc"
build --config=cuda
test --config=cuda
build:opt --copt=-march=native
build:opt --host_copt=-march=native
build:opt --define with_default_optimizations=true
build:v2 --define=tf_api_version=2
```

Build log:

```
WARNING: The following configs were expanded more than once: [cuda]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior.
DEBUG: /opt/tensorflow/external/bazel_tools/tools/cpp/lib_cc_configure.bzl:115:5: Auto-Configuration Warning: 'TMP' environment variable is not set, using 'C:\Windows\Temp' as default
WARNING: /opt/tensorflow/external/grpc/BUILD:1992:1: in srcs attribute of cc_library rule @grpc//:grpc_nanopb: please do not import '@grpc//third_party/nanopb:pb_common.c' directly. You should either move the file to this package or depend on an appropriate rule there. Since this rule was created by the macro 'grpc_generate_one_off_targets', the error might have been caused by the macro implementation in /opt/tensorflow/external/grpc/bazel/grpc_build_system.bzl:172:12
(two more identical warnings for pb_decode.c and pb_encode.c)
WARNING: /home/ryan/tensorflow/tensorflow/contrib/learn/BUILD:17:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': No longer supported. Switch to SavedModel immediately.
WARNING: /home/ryan/tensorflow/tensorflow/contrib/learn/BUILD:17:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': No longer supported. Switch to SavedModel immediately.
(several more deprecation warnings for //tensorflow/contrib/timeseries, //tensorflow/contrib/seq2seq, //tensorflow/contrib/bayesflow and //tensorflow/contrib targets depending on '//tensorflow/contrib/distributions:distributions_py': TensorFlow Distributions has migrated to TensorFlow Probability; deprecated copies remaining in tf.contrib.distributions are unmaintained, unsupported, and will be removed by late 2018.)
INFO: Analysed 2 targets (0 packages loaded).
INFO: Found 2 targets...
ERROR: /home/ryan/tensorflow/tensorflow/contrib/lite/kernels/BUILD:57:1: undeclared inclusion(s) in rule '//tensorflow/contrib/lite/kernels:eigen_support':
this rule is missing dependency declarations for the following files included by 'tensorflow/contrib/lite/kernels/eigen_support.cc':
  '/opt/tensorflow/external/eigen_archive/Eigen/Core'
INFO: Elapsed time: 3.144s, Critical Path: 2.54s
INFO: 18 processes: 18 local.
FAILED: Build did NOT complete successfully
```
tensorflow/tensorflow
Plain English explanation of the CLA
Bug
URL(s) with the issue: Contributor License Agreement
Description of issue (what needs changing): I have no idea what any of this legal mumbo-jumbo actually entails. Grant of patent license? Grant of copyright license? Do I own my contributions? Does Google own my contributions? What does all of this mess mean? If I create something and give it away, I want to make it free (as in gratis and as in libre) widely, and not just to Google. Is that happening here? Does Google charge/restrict people (e.g. corporations) using TensorFlow? Can Google charge/restrict people using TensorFlow and/or my contributions?
Submit a pull request? No.
tensorflow/tensorflow
Error when using stateful RNN with multiple inputs
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10
Mobile device: N/A
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): 2.0.0rc0
Python version: 3.6.8
Bazel version (if compiling from source): N/A
GCC/compiler version (if compiling from source): N/A
CUDA/cuDNN version: 10.0.130 / 7.6.0
GPU model and memory: GTX 980 Ti

Describe the current behavior
The stock example of RNNs with multiple inputs from the guide ("RNNs with list/dict inputs, or nested inputs") produces an error if you set stateful=True. This seems to be a problem with any multi-input RNN with stateful=True.

Describe the expected behavior
There should be no error; multi-input RNNs with stateful=True should work the same as with stateful=False, other than preserving state.

Code to reproduce the issue
Note: this code is copied from "RNNs with list/dict inputs, or nested inputs", with the exception that I changed the line `rnn = tf.keras.layers.RNN(cell)` to `rnn = tf.keras.layers.RNN(cell, stateful=True)`.

```python
import collections

import tensorflow as tf

NestedInput = collections.namedtuple('NestedInput', ['feature1', 'feature2'])
NestedState = collections.namedtuple('NestedState', ['state1', 'state2'])


class NestedCell(tf.keras.layers.Layer):

    def __init__(self, unit_1, unit_2, unit_3, **kwargs):
        self.unit_1 = unit_1
        self.unit_2 = unit_2
        self.unit_3 = unit_3
        self.state_size = NestedState(state1=unit_1,
                                      state2=tf.TensorShape([unit_2, unit_3]))
        self.output_size = (unit_1, tf.TensorShape([unit_2, unit_3]))
        super(NestedCell, self).__init__(**kwargs)

    def build(self, input_shapes):
        # expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]
        input_1 = input_shapes.feature1[1]
        input_2, input_3 = input_shapes.feature2[1:]
        self.kernel_1 = self.add_weight(
            shape=(input_1, self.unit_1), initializer='uniform', name='kernel_1')
        self.kernel_2_3 = self.add_weight(
            shape=(input_2, input_3, self.unit_2, self.unit_3),
            initializer='uniform', name='kernel_2_3')

    def call(self, inputs, states):
        # inputs should be in [(batch, input_1), (batch, input_2, input_3)]
        # state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]
        input_1, input_2 = tf.nest.flatten(inputs)
        s1, s2 = states

        output_1 = tf.matmul(input_1, self.kernel_1)
        output_2_3 = tf.einsum('bij,ijkl->bkl', input_2, self.kernel_2_3)
        state_1 = s1 + output_1
        state_2_3 = s2 + output_2_3

        output = [output_1, output_2_3]
        new_states = NestedState(state1=state_1, state2=state_2_3)
        return output, new_states


unit_1 = 10
unit_2 = 20
unit_3 = 30

input_1 = 32
input_2 = 64
input_3 = 32
batch_size = 64
num_batch = 100
timestep = 50

cell = NestedCell(unit_1, unit_2, unit_3)
rnn = tf.keras.layers.RNN(cell, stateful=True)

inp_1 = tf.keras.Input((None, input_1))
inp_2 = tf.keras.Input((None, input_2, input_3))
outputs = rnn(NestedInput(feature1=inp_1, feature2=inp_2))
model = tf.keras.models.Model([inp_1, inp_2], outputs)
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
```

Other info / logs

```
Traceback (most recent call last):
  File "tmp2.py", line 70, in <module>
    outputs = rnn(NestedInput(feature1=inp_1, feature2=inp_2))
  File "site-packages/tensorflow_core/python/keras/layers/recurrent.py", line 623, in __call__
    return super(RNN, self).__call__(inputs, **kwargs)
  File "site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 777, in __call__
    self._maybe_build(inputs)
  File "site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 2099, in _maybe_build
    self.build(input_shapes)
  File "site-packages/tensorflow_core/python/keras/layers/recurrent.py", line 561, in build
    self.reset_states()
  File "site-packages/tensorflow_core/python/keras/layers/recurrent.py", line 809, in reset_states
    spec_shape = None if self.input_spec is None else self.input_spec[0].shape
AttributeError: 'NestedInput' object has no attribute 'shape'
```
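The traceback suggests that `reset_states` assumes `input_spec[0]` is a plain spec with a `.shape`, while nested inputs can place a namedtuple there. A framework-free sketch of the difference, with illustrative names, shows why flattening the nested structure first (as `tf.nest.flatten` does) would sidestep the AttributeError:

```python
import collections

NestedInput = collections.namedtuple('NestedInput', ['feature1', 'feature2'])

def flatten(structure):
    """Tiny stand-in for tf.nest.flatten: unpacks tuples/lists/namedtuples."""
    if isinstance(structure, (tuple, list)):
        out = []
        for item in structure:
            out.extend(flatten(item))
        return out
    return [structure]

class Spec:
    """Stand-in for an InputSpec that carries a shape."""
    def __init__(self, shape):
        self.shape = shape

nested_spec = NestedInput(feature1=Spec((64, None, 32)),
                          feature2=Spec((64, None, 64, 32)))

wrapped = [nested_spec]
first = wrapped[0]
# The buggy access pattern: the first element is a namedtuple, not a spec,
# so it has no .shape attribute.
has_shape_directly = hasattr(first, 'shape')
# Flattening first always yields leaf specs, each of which has a shape.
first_flat = flatten(wrapped)[0]
print(has_shape_directly, first_flat.shape)
```

In real code the fix would live inside Keras, but the sketch captures the shape-lookup problem the stateful path runs into with nested input structures.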
tensorflow/tensorflow
Memory continues to grow after repeated calls to model.predict(tf.one_hot(state, dtype=tf.float32, depth=3))
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS Mojave 10.14.6
Mobile device: N/A
TensorFlow installed from (source or binary): tensorflow==2.0.0-rc1
TensorFlow version (use command below): v2.0.0-rc0-101-gd2d2566 2.0.0-rc1
Python version: 3.7.4
Bazel version (if compiling from source): N/A
GCC/compiler version (if compiling from source): N/A
CUDA/cuDNN version: N/A
GPU model and memory: Intel Iris Plus Graphics 640, 1536 MB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
Running the code below in Docker (version 19.03.2) causes the memory to grow without limit. This is visible in docker stats, and it eventually crashes Docker.

Describe the expected behavior
The memory should not grow indefinitely.

Code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.

```python
import tensorflow as tf

rows = 6
columns = 7

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(rows, columns, 3)),
    tf.keras.layers.Dense(7, input_shape=(rows, columns, 3)),
])
model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.01),
              loss='mean_squared_error',
              metrics=['accuracy'])

state = [[1] * columns] * rows  # reconstructed; the report's formatting was lost
for i in range(20):
    for iteration in range(1000000):
        print('iteration', iteration)
        model.predict(tf.one_hot(state, dtype=tf.float32, depth=3))
```

The aforementioned code runs in an image generated by the following Dockerfile:

```dockerfile
FROM centos:7
ENV SOURCE_DIRECTORY=/tmp/tf_connect4
ENV PYTHON_VERSION=3.7.4
RUN yum -y groupinstall -y "Development Tools" && \
    yum -y update && \
    yum -y install openssl-devel zlib-devel libffi libffi-devel wget && \
    wget <Python source URL> && \
    tar xJf Python-$PYTHON_VERSION.tar.xz && \
    cd Python-$PYTHON_VERSION && \
    ./configure && make && make install && \
    pip3 install --upgrade pip && \
    pip3 install tensorflow==2.0.0-rc1 tensorflow-probability==0.8.0-rc0 numpy falcon jsonschema
```
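A workaround often suggested for loops like the one above is to build the one-hot input once, outside the loop, instead of creating a fresh `tf.one_hot` tensor on every iteration. The encoding itself is simple; here is a framework-free sketch of what `tf.one_hot` computes for a 2-D integer matrix (names are illustrative):

```python
def one_hot(matrix, depth):
    """Pure-Python stand-in for tf.one_hot applied to a 2-D integer matrix.

    Each integer v becomes a length-`depth` vector with a 1.0 at index v.
    """
    return [[[1.0 if k == value else 0.0 for k in range(depth)]
             for value in row]
            for row in matrix]

rows, columns = 6, 7
state = [[1] * columns for _ in range(rows)]
encoded = one_hot(state, depth=3)   # computed once, outside any loop
print(len(encoded), len(encoded[0]), encoded[0][0])
```

Precomputing the encoded input (with NumPy in practice) and reusing it across `model.predict` calls avoids repeatedly constructing new TF ops in the loop, which is one common source of unbounded growth in loops like the reproduction above.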
tensorflow/tensorflow
ModuleNotFoundError: No module named 'official.wide_deep'
Bug
URL(s) with the issue:
Description of issue (what needs changing): wide_deep now lives in official/r1/wide_deep, but the documentation says it's in official/wide_deep.
tensorflow/tensorflow
tensorflow/contrib/distributions/python/kernel_tests/independent_test.py tests fail with assertion error
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04 (s390x)
Mobile device: N/A
TensorFlow installed from (source or binary): source
TensorFlow version (use command below): 1.14.0
Python version: 2.7.15
Bazel version (if compiling from source): 0.24.1
GCC/compiler version (if compiling from source): gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CUDA/cuDNN version: N/A
GPU model and memory: N/A

Describe the current behavior
testMnistLikeDynamicShape and testMnistLikeStaticShape in __main__.ProductDistributionTest both fail with the same assertion error:

```
FAIL: testMnistLikeDynamicShape (__main__.ProductDistributionTest)
Traceback (most recent call last):
  File "absl/third_party/unittest3_backport/case.py", line 37, in testPartExecutor
    yield
  File "absl/third_party/unittest3_backport/case.py", line 162, in run
    testMethod()
  File "tensorflow/contrib/distributions/python/kernel_tests/independent_test.py", line 275, in testMnistLikeDynamicShape
    self._testMnistLike(static_shape=False)
  File "tensorflow/contrib/distributions/python/kernel_tests/independent_test.py", line 269, in _testMnistLike
    rtol=1e-6, atol=0.
  File "tensorflow/python/framework/test_util.py", line 2303, in assertAllClose
    self._assertAllCloseRecursive(a, b, rtol=rtol, atol=atol, msg=msg)
  File "numpy/testing/_private/utils.py", line 827, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-06, atol=0
Mismatched value: a is different from b.
not close where = (array([1, 2, 2, 2, 3]), array([3, 1, 2, 4, 4]), array([6, 1, 1, 8, 6]))
not close lhs = [463.41407785 467.87059292 444.45405599 469.33429804 457.64799854]
not close rhs = [463.41458 467.87006 444.45358 469.3348 457.6486]
not close dif = [0.00050345 0.00053677 0.00047323 0.00051031 0.00059155]
not close tol = [0.00046341 0.00046787 0.00044445 0.00046933 0.00045765]
dtype = float64, shape = (4, 5, 10)
Mismatch: 2.5%
Max absolute difference: 0.00059155
Max relative difference: 1.29257756e-06
```

testMnistLikeStaticShape fails with an identical mismatch via self._testMnistLike(static_shape=True).

```
Ran 10 tests in 1.019s
FAILED (failures=2)
```

Describe the expected behavior
The tests should pass on s390x.

Code to reproduce the issue

```
python tensorflow/contrib/distributions/python/kernel_tests/independent_test.py
```
tensorflow/tensorflow
ODR violation between cuDNN and TensorRT
Bug
This reports an issue with an NVIDIA library and not a bug in TensorFlow; this issue serves as a public description and permalink.

There is a One Definition Rule (ODR) violation between TensorRT and cuDNN that can cause binaries that link in both of these libraries to crash or misbehave. We observed this with the combination of TensorRT 5.1.5 and cuDNN 7.6.2, linked statically. We don't know if builds that use shared libraries are affected.

For instance, both TensorRT 5.1.5 and cuDNN 7.6.2 define `_Z22first_layer_fwd_kernelILi4ELi7ELi7ELi64EEv19FirstLayerFwdParams` (demangled: `void first_layer_fwd_kernel<4, 7, 7, 64>(FirstLayerFwdParams)`), but the SASS blobs corresponding to these kernels are not the same in TensorRT and cuDNN. This means that in an application that links in both TensorRT and cuDNN, one of the two libraries will launch the incorrect kernel. Observed failures manifest as GPU-side crashes due to misaligned memory accesses, or data corruption.

We have informed NVIDIA and they are investigating the issue.
tensorflow/tensorflow
Unexpected output shape on custom Keras dynamic layer
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS Mojave 10.14.6
TensorFlow installed from (source or binary): pip install tensorflow==2.0.0rc0
TensorFlow version (use command below): 2.0.0-rc0
Python version: 3.7.4

Describe the current behavior
Upon attempting to create a custom dynamic Keras layer, Keras seems to incorrectly interpret the output of compute_output_shape.

Describe the expected behavior
In the example code below, model.summary() outputs [(None, 2), (None, 2)] for the output shape; according to the docs example, I would expect that to be (None, 2). When attempting to place layers after this, it returns two placeholders despite the output shape only defining one.

Code to reproduce the issue

```python
import tensorflow as tf
import numpy as np


class Example(tf.keras.layers.Layer):

    def __init__(self, **kwargs):
        kwargs['dynamic'] = True
        super(Example, self).__init__(**kwargs)

    def call(self, inputs):
        return inputs

    def compute_output_shape(self, input_shape):
        return (None, 2)


inp = tf.keras.layers.Input(batch_shape=(None, 1))
comp = Example()(inp)
model = tf.keras.models.Model(inputs=inp, outputs=comp)
model.summary()
```

In my code, the Input layer's batch_shape and the contents of call() are arbitrary. If I remove dynamic=True then it gives the expected shape based on the contents of call(). There seems to be no semantic difference in output whether compute_output_shape returns [(None, 2), (None, 2)] or (None, 2).

Other info / logs
Here's what I'm seeing from model.summary():

```
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 1)]               0
_________________________________________________________________
example (Example)            [(None, 2), (None, 2)]    0
=================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
```
tensorflow/tensorflow
Custom optimizer keeps throwing "no attribute '_create_slots'" error
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
Mobile device: N/A
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): 2.0rc1
Python version: 3.6
Bazel version (if compiling from source): N/A
GCC/compiler version (if compiling from source): N/A
CUDA/cuDNN version: 10
GPU model and memory: P100

Describe the current behavior
When creating a custom optimizer, the optimizer keeps throwing "object has no attribute '_create_slots'" errors.

Code to reproduce the issue

```python
# pip install tensorflow-gpu==2.0.0-rc1
import math

import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K


class CustomOptimizer(tf.keras.optimizers.Optimizer):

    def __init__(self, optimizer, cool_period=10, **kwargs):
        super(CustomOptimizer, self).__init__('CustomOptimizer', **kwargs)
        self.optimizer = tf.keras.optimizers.get(optimizer)
        self.cool_period = K.variable(cool_period, name='cool_period',
                                      dtype=K.floatx())
        self.cool_period_slot = self.add_slot(self.cool_period, 'cool_period')

    def get_updates(self, loss, params):
        # comparison reconstructed; the operator was lost in the report
        if self.optimizer.iterations < self.get_slot(self.cool_period, 'cool_period'):
            self.updates = self.optimizer.get_updates(loss, params)

    def get_config(self):
        config = {'cool_period': K.get_value(self.cool_period),
                  'optimizer': tf.keras.optimizers.serialize(self.optimizer)}
        base_config = super(CustomOptimizer, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))


X = np.random.rand(100, 100)
Y = np.random.randint(2, size=100)

opt = CustomOptimizer('adam')
input_layer = tf.keras.layers.Input(shape=(X.shape[1],))
fc = tf.keras.layers.Dense(1)(input_layer)
model = tf.keras.models.Model(input_layer, fc)
model.compile(optimizer=opt, loss='mse')
model.fit(X, Y, epochs=5)
```
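As a sketch of an alternative that avoids the slot machinery entirely, the cool-down logic can live in a plain wrapper object that delegates to the inner optimizer and keeps its counter as ordinary state. Everything below is illustrative (toy classes, not the Keras optimizer API):

```python
class CoolDownWrapper:
    """Delegates to an inner optimizer; skips updates during a cool-down period."""
    def __init__(self, optimizer, cool_period=10):
        self.optimizer = optimizer
        self.cool_period = cool_period
        self.iterations = 0          # ordinary state, no slot variables needed

    def apply_gradients(self, grads_and_vars):
        self.iterations += 1
        if self.iterations <= self.cool_period:
            return None              # still cooling down: no update applied
        return self.optimizer.apply_gradients(grads_and_vars)


class CountingOptimizer:
    """Toy inner optimizer that just counts how many updates were applied."""
    def __init__(self):
        self.applied = 0

    def apply_gradients(self, grads_and_vars):
        self.applied += 1
        return self.applied


inner = CountingOptimizer()
opt = CoolDownWrapper(inner, cool_period=3)
for _ in range(5):
    opt.apply_gradients([])
print(inner.applied)
```

Composition like this keeps the wrapped optimizer's own slot handling intact, which is exactly what subclassing `tf.keras.optimizers.Optimizer` without implementing its internal hooks fails to do.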
tensorflow/tensorflow
Executor error message in GradientTape.jacobian
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux 4.14.141-1-MANJARO x86_64, with Arch/Manjaro Linux
Mobile device: N/A
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): v2.0.0-beta1-5101-gc75bb66 2.0.0-rc0
Python version: 3.7.4
Bazel version (if compiling from source): N/A
GCC/compiler version (if compiling from source): N/A
CUDA/cuDNN version: N/A
GPU model and memory: N/A

Describe the current behavior
I was calculating the Jacobian of a model with respect to its parameters using a gradient tape. The method GradientTape.jacobian outputs an error message about a missing function library. The program does not abort, however, and the computed Jacobian is correct.

Describe the expected behavior
There should be no error message.

Code to reproduce the issue

```python
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(2, input_shape=(2,))])
inputs = tf.Variable([[1., 2.]], dtype=tf.float32)

with tf.GradientTape() as gtape:
    outputs = model(inputs)
gtape.jacobian(outputs, model.trainable_variables)
```

Other info / logs
The full output of the program shown above is:

```
2019-09-12 11:01:04.479226: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-09-12 11:01:04.498159: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2304500000 Hz
2019-09-12 11:01:04.498584: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x556260300c60 executing computations on platform Host. Devices:
2019-09-12 11:01:04.498609: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
2019-09-12 11:01:04.630899: E tensorflow/core/common_runtime/executor.cc:642] Executor failed to create kernel. Internal: No function library
         [[{{node loop_body/MatMul_1/pfor/cond}}]]
```

With additional layers, more error messages like the one quoted above appear, one per matrix multiplication. The problem persists when the Jacobian calculation is moved into a tf.function-decorated function.
tensorflow/tensorflow
Undefined reference to tensorflow::str_util::EndsWith
Bug
On Kubuntu 18.04, TensorFlow 1.13 installed from source, Python 3.6, Bazel 0.19.2, gcc 7.4.0.

In the past I had a test program on Ubuntu 16.04 and it ran fine. I made the migration to Ubuntu 18.04, recompiled TensorFlow for C++, and obtained the libraries libtensorflow_cc.so and libtensorflow_framework.so. When I try to compile my program using cmake/make, I see the following messages:

```
TensorFlowLabelImageClassification.cpp:(.text+0x2bff): undefined reference to `tensorflow::str_util::EndsWith(std::basic_string_view<char, std::char_traits<char> >, std::basic_string_view<char, std::char_traits<char> >)'
TensorFlowLabelImageClassification.cpp:(.text+0x2d3c): undefined reference to `tensorflow::str_util::EndsWith(std::basic_string_view<char, std::char_traits<char> >, std::basic_string_view<char, std::char_traits<char> >)'
TensorFlowLabelImageClassification.cpp:(.text+0x2ee3): undefined reference to `tensorflow::str_util::EndsWith(std::basic_string_view<char, std::char_traits<char> >, std::basic_string_view<char, std::char_traits<char> >)'
```

Any help? Thanks, Dibet.
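Undefined references to TensorFlow symbols that take `string_view` arguments are frequently a C++ standard/ABI mismatch between the prebuilt TensorFlow libraries and the client program (defaults changed between the Ubuntu 16.04 and 18.04 toolchains; with `-std=c++17` the client's `absl::string_view` becomes `std::string_view`, while a C++11-built TensorFlow used Abseil's own type). A build fragment worth trying, assuming TensorFlow was compiled as C++11 with the old libstdc++ ABI (flip the ABI value to 1 if it was not); the paths, variables, and output name are illustrative:

```shell
# Compile the client with the same C++ standard and libstdc++ ABI setting
# that the TensorFlow libraries were built with, so string/string_view
# symbols mangle identically.
g++ -std=c++11 -D_GLIBCXX_USE_CXX11_ABI=0 \
    TensorFlowLabelImageClassification.cpp \
    -I"$TF_SRC" -I"$TF_SRC/bazel-genfiles" \
    -L"$TF_LIB" -ltensorflow_cc -ltensorflow_framework \
    -o label_image
```

In a CMake build the equivalent would be adding the same `-std=` and `-D_GLIBCXX_USE_CXX11_ABI=` flags to the target's compile options.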
tensorflow/tensorflow
Fix a few typos in the annotations and docs
Bug
Fixed some typos: changed "a error" into "an error", "an unique" into "a unique", "a http" into "an http", and "a hdfs" into "an hdfs".
tensorflow/tensorflow
tf2.0: multiple calls to keras fit and evaluate make RAM explode and are 25x slower
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS Mojave
Mobile device: N/A
TensorFlow installed from (source or binary): pip
TensorFlow version (use command below): 2.0.0-dev20190909
Python version: 3.6.5
Bazel version (if compiling from source): N/A
GCC/compiler version (if compiling from source): N/A
CUDA/cuDNN version: N/A
GPU model and memory: N/A

Describe the current behavior
Everything is done on the CPU. Consecutive calls to either fit or evaluate increase the RAM used, even if called with the same data. The calls take approximately 10 times longer than with tf1.x.

Describe the expected behavior
I expect the RAM usage to remain constant, just like in tf1.x.

Code to reproduce the issue

```python
from memory_profiler import profile
from time import time

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(100, activation=tf.nn.softmax)])
model.compile(loss='mse', optimizer='sgd')

@profile
def eval(x, y):
    model.evaluate(x, y)

x = np.random.normal(size=(1, 100))
y = np.random.normal(size=(1, 100))
for i in range(100000):
    print('iteration', i)
    tic = time()
    eval(x, y)
    print('timeit', time() - tic)
```

Using tf2.0: 229 MB RAM used and evaluate completes in 99 ms (iteration 20):

```
Train on 1 samples
1/1 - 0s 10ms/sample - loss: 1.0597
Filename: reproduce_keras_oom.py
Line #    Mem usage    Increment   Line Contents
     9    228.8 MiB    228.8 MiB   @profile
    10                             def eval(x, y):
    11    229.1 MiB      0.2 MiB       model.evaluate(x, y)
timeit 0.09978580474853516
```

Using tf2.0: 1508 MB RAM used after calling evaluate 3312 times:

```
iteration 3312
1/1 - 0s 4ms/sample - loss: 1.0205
Filename: reproduce_keras_oom.py
Line #    Mem usage    Increment   Line Contents
     9   1508.3 MiB   1508.3 MiB   @profile
    10                             def eval(x, y):
    11   1508.7 MiB      0.4 MiB       model.evaluate(x, y)
timeit 0.09004998207092285
```

Using tf1.x, the RAM used does not increase over consecutive calls of evaluate; RAM stays at 176 MB indefinitely (iteration 5100 below). Also note that it is 25 times faster:

```
iteration 5100
1/1 - 0s 1ms/sample - loss: 1.2716
Filename: reproduce_keras_oom.py
Line #    Mem usage    Increment   Line Contents
     9    176.0 MiB    176.0 MiB   @profile
    10                             def eval(x, y):
    11    176.0 MiB      0.0 MiB       model.evaluate(x, y)
timeit 0.004405021667480469
```

I just discovered that wrapping x, y into a tf.data.Dataset does not have this issue. Modified code:

```python
from memory_profiler import profile
from time import time

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(100, activation=tf.nn.softmax)])
model.compile(loss='mse', optimizer='sgd')

@profile
def eval(dataset):
    model.evaluate(dataset)

x = np.random.normal(size=(1, 100))
y = np.random.normal(size=(1, 100))
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.batch(1)
for i in range(100000):
    print('iteration', i)
    tic = time()
    eval(dataset)
    print('timeit', time() - tic)
```

Using tf.data.Dataset there is no exploding RAM: 217 MB RAM used after calling evaluate 8154 times. It is also only 2.5 times slower than tf1.x:

```
iteration 8154
1/1 - 0s 3ms/step - loss: 0.9972
Filename: reproduce_keras_oom.py
Line #    Mem usage    Increment   Line Contents
     9    217.6 MiB    217.6 MiB   @profile
    10                             def eval(dataset):
    11    217.6 MiB      0.0 MiB       model.evaluate(dataset)
timeit 0.010456085205078125
```
tensorflow/tensorflow
Masking in BERT
Bug
In the original BERT paper it is said: "Note that the purpose of the masking strategies is to reduce the mismatch between pre-training and fine-tuning, as the [MASK] symbol never appears during the fine-tuning stage."

Let's consider the sentence "I am a Liverpool fan", which with 40% masking would be transformed into "I [MASK] a [MASK] fan". When predicting the first [MASK], will it be predicted from the phrase "I [MASK] a fan" (excluding the second [MASK]), or from "I [MASK] a [MASK] fan", i.e. the full sentence?

And what is the purpose of replacing 10% of the masked tokens with themselves? Does it mean they will not be predicted, or will we predict them with themselves in the context, like predicting the first masked token from "I am a [MASK] fan"?

I would be very grateful for any help.
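To make the question concrete, here is a small sketch of the selection rule described in the BERT paper (simplified: whole tokens, a hard-coded toy vocabulary, illustrative rates). The one detail it encodes deliberately: every selected position becomes a prediction target, including the ones that keep their original token.

```python
import random

MASK = "[MASK]"

def bert_mask(tokens, mask_rate=0.15, rng=None):
    """Sketch of BERT-style masking (simplified, not the reference code).

    For each selected position: 80% -> [MASK], 10% -> a random token,
    10% -> keep the original token. All selected positions are recorded as
    prediction targets, including the 'kept' ones.
    """
    rng = rng or random.Random(0)
    vocab = ["i", "am", "a", "liverpool", "fan"]  # toy vocabulary
    out, targets = list(tokens), {}
    for pos, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[pos] = tok  # every selected token is a prediction target
            r = rng.random()
            if r < 0.8:
                out[pos] = MASK
            elif r < 0.9:
                out[pos] = rng.choice(vocab)
            # else: keep tok unchanged, but it is still predicted
    return out, targets
```

With a fixed seed and a 40% rate, one position of the example sentence is selected and replaced by [MASK], while its original token is kept as the training target.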
tensorflow/tensorflow
lite/micro: missing delete in GreedyMemoryPlanner
Bug
Line 43 does not override `void operator delete(void* p)`, which results in a link-time error.
tensorflow/tensorflow
How to split a neural network across different GPU devices
Bug
Hi, I am working on a video segmentation project. Due to the limits of GPU memory (GTX 1080 Ti), an OOM error occurred. I know this error can be avoided if I squeeze my model or just use another GPU with more memory, but does TensorFlow support splitting one neural network across different GPU devices? I.e., I have 2 GPUs, A and B, and the model definition is as follows:

```python
inputs = keras.layers.Input((28, 28))
f = keras.layers.Flatten()(inputs)

# I want GPU A to finish this part
part1 = keras.layers.Dense(128, activation=tf.nn.relu)(f)

# after part1 is finished, GPU B will finish this part
part2 = keras.layers.Dense(10, activation=tf.nn.softmax)(part1)
```

I want GPU A to compute `part1 = keras.layers.Dense(128, activation=tf.nn.relu)(f)` and GPU B to compute `part2 = keras.layers.Dense(10, activation=tf.nn.softmax)(part1)`.
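TensorFlow does support manual placement: layer calls can be wrapped in `with tf.device("/gpu:0")` / `with tf.device("/gpu:1")` blocks. The placement decision itself can be sketched in pure Python (the device strings and the ceil-split policy below are illustrative, not a TensorFlow API):

```python
def assign_devices(layer_names, devices):
    """Split a sequential stack of layers across devices: the first
    ceil(n/len(devices)) layers go to devices[0], the next chunk to
    devices[1], and so on. A sketch of the placement plan only; in
    TensorFlow the mapping would be applied via `with tf.device(...)`.
    """
    placement = {}
    per_device = -(-len(layer_names) // len(devices))  # ceil division
    for i, name in enumerate(layer_names):
        placement[name] = devices[i // per_device]
    return placement

plan = assign_devices(["flatten", "part1_dense128", "part2_dense10"],
                      ["/gpu:0", "/gpu:1"])
```

Note that with this kind of pipeline split, GPU B still has to wait for GPU A's output, so it saves memory per device rather than computation time.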
tensorflow/tensorflow
TF 2.0.0-rc0: Keras model subclassing with dynamic=True throws error (possible regression bug in rc0)
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac OS X 10.13.6, Intel i7 (MacBook Pro Retina 13-inch, early 2015), Intel Iris Graphics 6100 1536 MB; Google Colab (both CPU and GPU)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: No
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version (use command below): 2.0.0-rc0 (`pip install tensorflow==2.0.0-rc0`)
- Python version: 3.7.3
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: N/A
- GPU model and memory: Intel Iris Graphics 6100 1536 MB, stock Intel i7 CPU (bundled)

**Describe the current behavior**
In 2.0.0-rc0, while using model subclassing of `tf.keras.Model` and passing `dynamic=True`, calling `model.fit` throws `AttributeError: 'NoneType' object has no attribute 'dtype'`.

**Describe the expected behavior**
`model.fit` should not throw any error. Note: this was working fine in 2.0.0-beta1.

**Code to reproduce the issue**
```python
import numpy as np
import tensorflow as tf
import os

print(tf.__version__)

class ConvBN2D(tf.keras.Model):
    def __init__(self, c_out, kernel_size=3):
        super().__init__()
        self.conv = tf.keras.layers.Conv2D(filters=c_out, kernel_size=kernel_size,
                                           strides=1, padding='same', use_bias=False)
        self.bn = tf.keras.layers.BatchNormalization(momentum=0.9, epsilon=1e-7)

    def call(self, inputs):
        res = tf.nn.relu(self.bn(self.conv(inputs)))
        return res

class FNet(tf.keras.Model):
    def __init__(self, start_kernels=64, weight=0.125, **kwargs):
        super().__init__(**kwargs)
        c = start_kernels
        self.max_pool = tf.keras.layers.MaxPooling2D()
        self.init_conv_bn = ConvBN2D(c, kernel_size=3)
        self.c0 = ConvBN2D(c, kernel_size=3)
        self.c1 = ConvBN2D(c * 2, kernel_size=3)
        self.c2 = ConvBN2D(c * 2, kernel_size=3)
        self.c3 = ConvBN2D(c * 2, kernel_size=3)
        self.c4 = ConvBN2D(c * 2, kernel_size=3)
        self.pool = tf.keras.layers.GlobalMaxPool2D()
        self.linear = tf.keras.layers.Dense(10, use_bias=False)
        self.weight = weight

    def call(self, x):
        h = self.max_pool(self.c0(self.init_conv_bn(x)))
        h = self.max_pool(self.c2(self.c1(h)))
        h = self.max_pool(self.c4(self.c3(h)))
        h = self.pool(h)
        h = self.linear(h) * self.weight
        return h

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
train = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train = (train.map(lambda x, y: (tf.cast(x, tf.float32), tf.cast(y, tf.int64)))
              .map(lambda x, y: (x / 255.0, y))
              .batch(512))

# model = FNet(start_kernels=8)
model = FNet(start_kernels=8, dynamic=True)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01), loss=loss)
callbacks = []
model.fit(train, epochs=2, callbacks=callbacks, verbose=1)
```

**Other info / logs**
Running the code in 2.0.0-beta1 works; in rc0 it fails. Colab notebook with the same issue:
tensorflow/tensorflow
Possible issue in tf.scatter_nd documentation
Bug
Thank you for explaining tf.scatter_nd using some wonderful visualizations. I have a doubt whether the cube used for the higher-dimensional explanation of tf.scatter_nd is right or not. Please check the visualization of the cube tagged as "output" in the tf.scatter_nd documentation. The second and fourth indices are shaded, whereas the given indices according to the example are `tf.constant([[0], [2]])`. So the first and third indices (0 and 2) should be shaded instead of the second and fourth. Please correct me if I'm wrong.
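For reference, the semantics under discussion can be mimicked in a few lines of pure Python (a sketch of the behaviour, not TensorFlow's implementation): with indices `[[0], [2]]`, it is exactly rows 0 and 2 (the first and third slices) that receive the update slices.

```python
def scatter_nd_outer(indices, updates, num_rows, row_fill=0):
    """Pure-Python sketch of tf.scatter_nd when indices address only the
    outer dimension: start from a zero-filled output and copy each update
    slice into the row named by its index."""
    out = [row_fill] * num_rows
    for (i,), upd in zip(indices, updates):
        out[i] = upd
    return out

# The documentation example's shape of access: indices [[0], [2]] mean
# rows 0 and 2 get the update slices; rows 1 and 3 stay zero.
out = scatter_nd_outer([[0], [2]],
                       [[[5, 5], [6, 6]], [[7, 7], [8, 8]]],
                       num_rows=4)
```

This supports the reporter's reading: the shading in the figure should be on the first and third slices.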
tensorflow/tensorflow
GPU NMS kernel in TF 1.15rc0 is not fixed
Bug
This commit is included in TF 1.15rc0. However, this commit has a bug, as pointed out in issuecomment-512949342. The bug was fixed later, in a commit which is not included in TF 1.15rc0. I expect TF 1.15rc0 to include the bugfix for the GPU version of the NMS kernel.
tensorflow/tensorflow
ValueError from invalid weights while loading old .h5 model
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 1.13.1 (v1.12.1-10753-g1c2ae57) -> 2.0.0-dev20190910
- Python version: 3.7.4
- CUDA/cuDNN version: 10.1
- GPU model and memory:

**Describe the current behavior**
I created a tf.keras model and saved it to .h5 format using TF version 1.13.1. The model can be loaded and used for inference just fine in 1.13.1. After upgrading to the TF 2.0 nightly build, loading the model results in a ValueError (see traceback below). I am using `tf.compat.v1.disable_v2_behavior()`, in case that might make a difference.

**Describe the expected behavior**
v1 models should load correctly, or present the user with a way to migrate the model to a more compatible format.

**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

Relevant traceback info:
```
File "miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py", line 146, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
File "miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 171, in load_model_from_hdf5
    load_weights_from_hdf5_group(f['model_weights'], model.layers)
File "miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 697, in load_weights_from_hdf5_group
    str(len(weight_values)) + ' elements.')
ValueError: Layer #1 (named "encoder_bn_0" in the current model) was found to correspond to layer encoder_bn_0 in the save file. However, the new layer encoder_bn_0 expects 7 weights, but the saved weights have 8 elements.
```

I added some print statements to get info on the weights for this batchnorm-with-renorm layer. The names of the weights in the original model are:
```
encoder_bn_0/gamma:0
encoder_bn_0/beta:0
encoder_bn_0/moving_mean:0
encoder_bn_0/moving_variance:0
encoder_bn_0/renorm_mean:0
encoder_bn_0/renorm_mean_weight:0
encoder_bn_0/renorm_stddev:0
encoder_bn_0/renorm_stddev_weight:0
```
The expected weight placeholders are:

I'm guessing the format for saving batchnorm-with-renorm parameters changed at some point. Is there a way to make this backwards compatible, or perhaps a way to migrate the saved file?
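The error itself comes from a simple length comparison between the saved and expected weight lists. A sketch of that check, using the eight saved names quoted above (the helper function is hypothetical, not the `hdf5_format.py` code):

```python
def check_weight_counts(layer_name, expected_n, saved_names):
    """Mimic the consistency check performed when loading HDF5 weights:
    the number of saved weight arrays must match what the rebuilt layer
    expects, or loading fails with a ValueError."""
    if len(saved_names) != expected_n:
        raise ValueError(
            "Layer %s expects %d weights, but the saved weights have %d elements."
            % (layer_name, expected_n, len(saved_names)))

# The 8 weight names reported in the issue.
saved = ["gamma", "beta", "moving_mean", "moving_variance",
         "renorm_mean", "renorm_mean_weight",
         "renorm_stddev", "renorm_stddev_weight"]

try:
    check_weight_counts("encoder_bn_0", 7, saved)  # rebuilt layer expects 7
    count_mismatch = False
except ValueError:
    count_mismatch = True
```

Any renaming or regrouping of the renorm statistics between versions would trip exactly this check, which is consistent with the "expects 7, saved 8" message.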
tensorflow/tensorflow
AssertionError: Unreachable when adding or subtracting Dataset.range elements and a tf.constant
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 Home, version 10.0.18362 build 18362
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version (use command below): 1.14
- Python version: 3.6.9

**Describe the current behavior**
I'm trying to calculate the minimum index to take for extracting windows out of the dataset. I've got a constant window size defined, whose dtype defaults to int32. After running the following code, it throws an assertion error.

**Code to reproduce the issue**
```python
import tensorflow as tf
tf.enable_eager_execution()

ds = tf.data.Dataset.range(10)
window_size = tf.constant(3)

def mapper(idx):
    test = idx - window_size
    return test

ds = ds.map(mapper)
```

The same issue happens when I add or multiply, but not when I divide. When dividing with this mapper:
```python
def mapper(idx):
    test = idx / window_size
    return test
```
I get `TypeError: x and y must have the same dtype, got tf.int64 != tf.int32`. Then I tried casting `window_size` to tf.int64 (`window_size = tf.constant(3, dtype=tf.int64)`) and that fixed the issue in all cases.

**Describe the expected behavior**
I'd expect the error for addition/subtraction/multiplication to be the same as for division, since it turns out to be a dtype issue.

**Other info / logs**
```
File "C:\Source\test-python-project\main.py", line 14, in <module>
    ds = ds.map(mapper)
File "C:\ProgramData\Anaconda3\envs\test-python-project\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1772, in map
    MapDataset(self, map_func, preserve_cardinality=False)
File "...\data\ops\dataset_ops.py", line 3190, in __init__
    use_legacy_function=use_legacy_function)
File "...\data\ops\dataset_ops.py", line 2555, in __init__
    self._function = wrapper_fn._get_concrete_function_internal()
File "...\eager\function.py", line 1355, in _get_concrete_function_internal
    *args, **kwargs)
File "...\eager\function.py", line 1349, in _get_concrete_function_internal_garbage_collected
    graph_function = self._maybe_define_function(args, kwargs)
File "...\eager\function.py", line 1652, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
File "...\eager\function.py", line 1545, in _create_graph_function
    capture_by_value=self._capture_by_value)
File "...\framework\func_graph.py", line 715, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
File "...\data\ops\dataset_ops.py", line 2549, in wrapper_fn
    ret = _wrapper_helper(*args)
File "...\data\ops\dataset_ops.py", line 2489, in _wrapper_helper
    ret = func(*nested_args)
File "C:\Source\test-python-project\main.py", line 10, in mapper
    test = idx - window_size
File "...\ops\math_ops.py", line 884, in binary_op_wrapper
    return func(x, y, name=name)
File "...\ops\gen_math_ops.py", line 11574, in sub
    "Sub", x=x, y=y, name=name)
File "...\framework\op_def_library.py", line 621, in _apply_op_helper
    assert False, "Unreachable"
AssertionError: Unreachable
```
tensorflow/tensorflow
tf.function could not be transformed into a graph
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

**Describe the current behavior**

**Describe the expected behavior**

**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.

**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
BatchNormalization virtual_batch_size does not work with None in input shape
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.0.0-beta1-5101-gc75bb66 (2.0.0-rc0)
- Python version: 3.6.8
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: CUDA 10.0, cuDNN 7.6.2.24-1
- GPU model and memory: NVIDIA RTX 2070, 8 GB

**Describe the current behavior**
A constructor of a tf.keras model that uses tf.keras.layers.BatchNormalization with `virtual_batch_size` set and unspecified input shape dimensions throws an exception.

**Describe the expected behavior**
Such a model should be usable.

**Code to reproduce the issue**
```python
import tensorflow as tf

inp = tf.keras.layers.Input(shape=(None, None, 3))
net = tf.keras.layers.BatchNormalization(virtual_batch_size=8)(inp)
model = tf.keras.Model(inputs=inp, outputs=net)
```

**Other info / logs**
Traceback of the exception:
```
Traceback (most recent call last):
  File "/home/ikret/tf2/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_util.py", line 541, in make_tensor_proto
    str_values = [compat.as_bytes(x) for x in proto_values]
  File "/home/ikret/tf2/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_util.py", line 541, in <listcomp>
    str_values = [compat.as_bytes(x) for x in proto_values]
  File "/home/ikret/tf2/lib/python3.6/site-packages/tensorflow_core/python/util/compat.py", line 71, in as_bytes
    (bytes_or_text,))
TypeError: Expected binary or unicode string, got 8

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test_virtual_batch.py", line 6, in <module>
    net = tf.keras.layers.BatchNormalization(virtual_batch_size=8)(inp)
  File ".../keras/engine/base_layer.py", line 802, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File ".../keras/layers/normalization.py", line 652, in call
    inputs = array_ops.reshape(inputs, expanded_shape)
  File ".../ops/array_ops.py", line 131, in reshape
    result = gen_array_ops.reshape(tensor, shape, name)
  File ".../ops/gen_array_ops.py", line 8117, in reshape
    "Reshape", tensor=tensor, shape=shape, name=name)
  File ".../framework/op_def_library.py", line 530, in _apply_op_helper
    raise err
  File ".../framework/op_def_library.py", line 527, in _apply_op_helper
    preferred_dtype=default_dtype)
  File ".../framework/ops.py", line 1296, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File ".../framework/constant_op.py", line 286, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File ".../framework/constant_op.py", line 227, in constant
    allow_broadcast=True)
  File ".../framework/constant_op.py", line 265, in _constant_impl
    allow_broadcast=allow_broadcast)
  File ".../framework/tensor_util.py", line 545, in make_tensor_proto
    "supported type." % (type(values), values))
TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [8, -1, None, None, 3]. Consider casting elements to a supported type.
```
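The traceback shows the layer reshaping to `[virtual_batch_size, -1, *spatial_dims]`. A reshape target may contain at most one inferred dimension, which is why fully unknown spatial shapes cannot simply be replaced with `-1`. A pure-Python sketch of that constraint (my reconstruction of the logic, not the Keras code):

```python
def ghost_batch_shape(input_shape, virtual_batch_size):
    """Compute the reshape target [virtual_batch_size, -1, *rest] used for
    ghost batch normalization. A reshape permits at most one unknown
    (-1 / None) dimension, so fully-unknown spatial shapes cannot pass
    through. Sketch under that assumption, not the actual implementation."""
    shape = [virtual_batch_size, -1] + list(input_shape[1:])
    unknown = sum(1 for d in shape if d in (-1, None))
    if unknown > 1:
        raise ValueError("cannot reshape with %d unknown dimensions: %r"
                         % (unknown, shape))
    return shape

# Known spatial dims: a single inferred dimension is fine.
shape_ok = ghost_batch_shape((None, 32, 32, 3), 8)

# Unknown spatial dims, as in the bug report: more than one unknown.
try:
    ghost_batch_shape((None, None, None, 3), 8)
    spatially_unknown_ok = True
except ValueError:
    spatially_unknown_ok = False
```

This suggests the failure is structural rather than a simple missing cast: with `shape=(None, None, 3)` the reshape target contains several unresolved dimensions.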
tensorflow/tensorflow
Inconsistent tf.print results
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below): 1.14
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA 10, cuDNN 7.1
- GPU model and memory:

**Describe the current behavior**
```python
tf.reset_default_graph()

use_resource = False
v = tf.Variable(0, trainable=False, use_resource=use_resource, name='v')
v_op1 = v.assign_add(1, name='v_op1')
v_op2 = v.assign_add(2, name='v_op2')

with tf.control_dependencies([v_op1]):
    w = tf.print(v, [v], 'msg1', name='v_w')
with tf.control_dependencies([v_op2]):
    w = tf.print(w, [v], 'msg2', name='w_w')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(w)
```
When I run the above code multiple times, the printed messages are inconsistent. Sometimes it prints:
```
msg1 1
msg2 3
```
and sometimes it prints:
```
msg1 3
msg2 3
```
I thought that when we execute `sess.run(w)`, `v_op2` would be executed first, but in `v_op2`'s block we need to get `w` first, so we actually depend on `v_op1`. That means `v_op1` is executed first, so `v` will be 1, and then `v` is printed for `w`. In the end `v` is increased by 2 via `v_op2`. So my expected output of tf.print is:
```
msg1 1
msg2 3
```
But why does it sometimes print the second result? Why are the printed results inconsistent? One possibility I could think of is that tf.print sometimes reads the value of `v` before `v_op2` is done and sometimes reads it after `v_op2` is done, but I'm not sure what's actually going on under the hood.
tensorflow/tensorflow
Keras fit_generator: zip object is not considered a generator or sequence
Bug
**Description**
`keras.Model.fit_generator` does not work with zip objects. I followed the image segmentation example of loading and transforming images and masks together, and then training a model with `fit_generator`. I'm getting `AttributeError: 'zip' object has no attribute 'shape'` (see stack trace below).

**System information**
- TensorFlow version: 1.14.0 and 2.0.0-rc0
- Python version: 3.7.3
- Occurs on both CPU and GPU execution

**Code to reproduce the issue** (code taken from the example)
```python
# we create two instances with the same arguments
data_gen_args = dict(featurewise_center=True,
                     featurewise_std_normalization=True,
                     rotation_range=90,
                     width_shift_range=0.1,
                     height_shift_range=0.1,
                     zoom_range=0.2)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)

# Provide the same seed and keyword arguments to the fit and flow methods
seed = 1
image_datagen.fit(images, augment=True, seed=seed)
mask_datagen.fit(masks, augment=True, seed=seed)

image_generator = image_datagen.flow_from_directory(
    'data/images',
    class_mode=None,
    seed=seed)

mask_generator = mask_datagen.flow_from_directory(
    'data/masks',
    class_mode=None,
    seed=seed)

# combine generators into one which yields images and masks
train_generator = zip(image_generator, mask_generator)

model.fit_generator(
    train_generator,
    steps_per_epoch=2000,
    epochs=50)
```

**Stacktrace**
```
File "/home/kevin/dev/code/image_gen/src/train.py", line 47, in main
    validation_steps=1)
File "/home/kevin/dev/venvs/image_gen/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 1303, in fit_generator
    steps_name='steps_per_epoch')
File "/home/kevin/dev/venvs/image_gen/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_generator.py", line 144, in model_iteration
    shuffle=shuffle)
File "/home/kevin/dev/venvs/image_gen/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_generator.py", line 477, in convert_to_generator_like
    num_samples = int(nest.flatten(data)[0].shape[0])
AttributeError: 'zip' object has no attribute 'shape'
```

**Workaround**
Create a generator from the zip object:
```python
train_generator = (pair for pair in zip(image_generator, mask_generator))
```
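The workaround works because a generator expression is a real generator object, while `zip` returns its own iterator type that the inspection logic does not recognise. A self-contained illustration of the distinction:

```python
import inspect

def pair_generator(images, masks):
    """Wrap zip in a true generator, so code that inspects the object
    (rather than just iterating it) recognises it as a generator."""
    for pair in zip(images, masks):
        yield pair

z = zip([1, 2], ["a", "b"])            # a zip iterator, not a generator
g = pair_generator([1, 2], ["a", "b"])  # a genuine generator, same pairs
```

Both objects yield identical pairs, so wrapping costs nothing except the type change that fit_generator-style checks depend on.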
tensorflow/tensorflow
Issue with basic classification tutorial
Bug
**URL(s) with the issue**: (tutorial link, `#scrollTo=9ODch-OFCaW4`, line 2)

**Description of issue (what needs changing)**
I was browsing the beginner tutorial for basic classification; it is at the above link. The section "Set up the layers" throws a warning when run:
```
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
```
tensorflow/tensorflow
Unable to load model with custom loss function with tf.keras.models.load_model
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: No
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tensorflow==2.0.0rc0
- Python version: 2.6
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: None
- GPU model and memory: None

**Describe the current behavior**
I have saved a simple feed-forward Keras model. When I try to load it with the following code, I get an error:
```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='3_layer_mlp')

# useless custom loss here
def custom_loss(y_true, y_pred):
    return keras.backend.mean(keras.backend.square(y_true - y_pred), axis=-1)

model.compile(loss=custom_loss, optimizer=keras.optimizers.RMSprop())
model.save('model', save_format='tf')

# here comes the bug
new_model = keras.models.load_model('model', custom_objects={'loss': custom_loss})
```

The error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "env/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/save.py", line 147, in load_model
    return saved_model_load.load(filepath, compile)
  File "env/lib/python3.6/site-packages/tensorflow_core/python/keras/saving/saved_model/load.py", line 93, in load
    model._training_config))  # pylint: disable=protected-access
  File "env/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 340, in compile
    self.loss, self.output_names)
  File "env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1350, in prepare_loss_functions
    loss_functions = [get_loss_function(loss) for _ in output_names]
  File "env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1350, in <listcomp>
    loss_functions = [get_loss_function(loss) for _ in output_names]
  File "env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1086, in get_loss_function
    loss_fn = losses.get(loss)
  File "env/lib/python3.6/site-packages/tensorflow_core/python/keras/losses.py", line 1166, in get
    return deserialize(identifier)
  File "env/lib/python3.6/site-packages/tensorflow_core/python/keras/losses.py", line 1157, in deserialize
    printable_module_name='loss function')
  File "env/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 210, in deserialize_keras_object
    raise ValueError('Unknown ' + printable_module_name + ':' + object_name)
ValueError: Unknown loss function:loss
```

My understanding of the issue: I checked the TF code and found the following:
- the function `deserialize_keras_object` (generic_utils.py, L169-L218) does indeed have a `custom_objects` argument,
- `deserialize` (losses.py, L1169-L1174) has this argument too,
- `get` (losses.py, L1177-L1190), the function that calls `deserialize`, does not fill in this argument.

So even though I give `custom_objects` to the `load_model` function, it is not passed on to `deserialize_keras_object` at the end. Could someone check this issue and implement the needed changes for this to work? Thank you for your help.
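The reported call chain reduces to a few lines of pure Python (a simplified stand-in for `losses.get`/`deserialize`, not the actual Keras code), showing both the bug and the fix being requested:

```python
_BUILTIN_LOSSES = {"mse": "<built-in mse fn>"}

def deserialize(name, custom_objects=None):
    """Stand-in for keras.losses.deserialize: it *does* accept custom_objects."""
    table = dict(_BUILTIN_LOSSES)
    table.update(custom_objects or {})
    if name not in table:
        raise ValueError("Unknown loss function:" + name)
    return table[name]

def get_buggy(identifier):
    # mirrors the reported losses.get: custom_objects is never forwarded
    return deserialize(identifier)

def get_fixed(identifier, custom_objects=None):
    # the change being asked for: thread custom_objects through
    return deserialize(identifier, custom_objects=custom_objects)

custom = {"loss": "<user custom_loss fn>"}
try:
    get_buggy("loss")  # the user's mapping never reaches the lookup table
    buggy_raised = False
except ValueError:
    buggy_raised = True
```

Once the argument is threaded through, the same identifier resolves through the user-supplied mapping.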
tensorflow/tensorflow
lite/micro: incorrect link to pre-generated files for STM32F746G Discovery board
Bug
The link on the page tensorflow/lite/experimental/micro/README.md for the STM32F746G Discovery board links to the Keil-project-based files instead of the Make/GCC-based files.
tensorflow/tensorflow
lite/micro: huge size of libtensorflow-microlite.a for ARM
Bug
The file tensorflow/lite/experimental/micro/README.md says: "The core runtime fits in 16 KB on a Cortex-M3, and with enough operators to run a speech keyword detection model, takes up a total of 22 KB." But after compiling libtensorflow-microlite.a I get:
```
$ ls -lh
total 512
-rw-r--r-- 1 xxx xxx 254K Sep  9 18:11 libtensorflow-microlite.a
```
which is an order of magnitude larger than 22 KB. The command I used to compile it was:
```
make -j2 -f tensorflow/lite/experimental/micro/tools/make/Makefile TARGET=foo microlite
```
The configuration file `lite/experimental/micro/tools/make/targets/foo_makefile.inc` is:
```make
ifeq ($(TARGET), foo)
  TARGET_TOOLCHAIN_PREFIX := arm-none-eabi-
  COMMON_FLAGS := -O2 -fno-unwind-tables -fstack-reuse=all \
    -ffunction-sections -fdata-sections -Wl,--gc-sections -nostartfiles \
    -mthumb -mcpu=cortex-m4 -march=armv7e-m -mfloat-abi=hard \
    -mfpu=fpv4-sp-d16 -mno-thumb-interwork -ffast-math \
    -Wall -Werror -Wlogical-op -Waddress -Wempty-body -Wpointer-arith \
    -Wenum-compare -fno-strict-aliasing -Wno-sign-compare -nostdlib \
    -fno-exceptions -fno-unwind-tables -fno-builtin
  CXXFLAGS += $(COMMON_FLAGS) -fno-rtti
  CCFLAGS += $(COMMON_FLAGS)
endif
```
The compiler toolchain information is as follows:
```
$ arm-none-eabi-gcc --version
arm-none-eabi-gcc (GNU Tools for ARM Embedded Processors) 5.3.1 20160307 (release) [ARM/embedded-5-branch revision 234589]
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```
tensorflow/tensorflow
Keras model with sequence feature columns fails to convert to Estimator
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: No
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.0.0-beta1-5101-gc75bb66 (2.0.0-rc0)
- Python version: 3
- Bazel version (if compiling from source): No
- GCC/Compiler version (if compiling from source): No
- CUDA/cuDNN version: unknown (from Colab)
- GPU model and memory: unknown (from Colab)

**Describe the current behavior**
When I rewrote my Estimator-based model (dataset, feature columns) to Keras, I was able to train it, but when I converted it to an Estimator as shown in the migration guide, it failed with this error:
```
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
      1 estimator = tf.keras.estimator.model_to_estimator(
----> 2     keras_model=model)

9 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
    235     except Exception as e:  # pylint: disable=broad-except
    236       if hasattr(e, 'ag_error_metadata'):
--> 237         raise e.ag_error_metadata.to_exception(e)
    238       else:
    239         raise

ValueError: in converted code:
    relative to /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/feature_column:

    sequence_feature_column.py:140 call
        transformation_cache, self._state_manager)
    feature_column_v2.py:3205 get_sequence_dense_tensor
        self.categorical_column)

    ValueError: In embedding_column: word_embedding. categorical_column must be of type SequenceCategoricalColumn to use SequenceFeatures. Suggested fix: Use one of sequence_categorical_column_with_*. Given (type HashedCategoricalColumn): HashedCategoricalColumn(key='word', hash_bucket_size=1000, dtype=tf.string)
```

**Describe the expected behavior**
A working Keras model should always be convertible to an Estimator.

**Code to reproduce the issue**
Here is an example. There are 2 cases shown: first I show that the Keras model with sequence features is working (see `model.fit_generator`), and then I show that conversion to an Estimator fails.
tensorflow/tensorflow
TensorFlow installation apt commands are wrong
Bug
**URL(s) with the issue**

**Description of issue (what needs changing)**
1. Change your language to Chinese (Simplified) from the upper-right corner.
2. In the apt installation commands for Ubuntu 18.04 (CUDA 10), the last step, shown below, is wrong:
```
# Install TensorRT. Requires that libcudnn7 is installed above.
sudo apt-get update && \
    sudo apt-get install -y --no-install-recommends libnvinfer-dev=5.1.5-1+cuda10.0
```
   1. There is a syntax error.
   2. `libnvinfer5=5.1.5-1+cuda10.0` should be installed first, before `libnvinfer-dev=5.1.5-1+cuda10.0`.
3. Just change the commands to the same as what is shown on the English-language website.
tensorflow/tensorflow
Keras model.evaluate progress bar way too long by default
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): just tested this
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10, fully updated
- TensorFlow installed from (source or binary): `pip3 install --user tensorflow-gpu==2.0.0-rc0`
- TensorFlow version (use command below): v2.0.0-beta1-5101-gc75bb66a99 (2.0.0-rc0)
- Python version: 3.7.4
- CUDA/cuDNN version: 10.0
- GPU model and memory: GeForce GTX 1660 Ti, 6 GB

**Describe the current behavior**
Just ran a basic image classifier with Keras: import data, create model, train, evaluate. I run `python script_name.py` in the command prompt. `model.evaluate` prints out an insanely long progress bar at the end; it's many, many pages long, with the command prompt already maximized (so one page is already a lot of characters). I have to scroll waaay up to see the previous output.

**Describe the expected behavior**
I know I could turn off verbosity, but I would expect sane defaults for the progress bar printed by tf.keras, and with `verbose=1` that thing is so huge it's useless.

**Code to reproduce the issue**
```python
from __future__ import absolute_import, division, print_function, unicode_literals

# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras

# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
from pprint import pprint

# CUDA vs CPU
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

print(tf.__version__)

# Load train/test data
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Show shapes/sizes for train/test data
print(train_images.shape, len(train_labels), test_images.shape, len(test_labels))

# Show first image
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()

# Normalize pixel values to 0-1
train_images = train_images / 255.0
test_images = test_images / 255.0

# Show first 25 images (sanity check)
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i]])
plt.show()

# Build the model: flatten layer, dense 128-node layer, dense softmax output layer
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(train_images, train_labels, epochs=5)

# Evaluate the model
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
```
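For comparison, a fixed-width progress line that never grows (a sketch of a saner default, not the tf.keras implementation) looks like this; combined with `print(..., end='\r')` it rewrites a single line in place instead of emitting pages of output:

```python
def progress_bar(current, total, width=30):
    """Render a fixed-width progress line. The bar portion is always exactly
    `width` characters, so the line length is bounded no matter how many
    steps there are."""
    filled = int(width * current / total)
    return "%d/%d [%s%s]" % (current, total,
                             "=" * filled, "." * (width - filled))
```

Printing `progress_bar(i, n)` with `end='\r'` after each batch keeps the whole evaluation output on one terminal line.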
tensorflow/tensorflow
LayerNormalization fails when given a tuple as axis input
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux 4.14.137 x86_64 with Ubuntu 18.04 (Bionic)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): Google Colab has it preinstalled
- TensorFlow version (use command below): 1.14.0
- Python version: sys.version_info(major=3, minor=6, micro=8, releaselevel='final', serial=0)
- Bazel version (if compiling from source): N/A
- GCC/Compiler version (if compiling from source): N/A
- CUDA/cuDNN version: CUDA version 10.1
- GPU model and memory: Tesla K80, 11441 MiB

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior:

    Sequential([Input(shape=(64, 64, 3), dtype=np.float32), LayerNormalization(axis=(-3, -2, -1))])

fails with the error:

    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>
    ----> 1 Sequential([Input(shape=(64, 64, 3), dtype=np.float32), LayerNormalization(axis=(-3, -2, -1))])

    6 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/normalization.py in build(self, input_shape)
        957     for idx, x in enumerate(self.axis):
        958       if x < 0:
    --> 959         self.axis[idx] = ndim + x
        960
        961     # Validate axes
    TypeError: 'tuple' object does not support item assignment

This is because lines L929-L930:

    if isinstance(axis, (list, tuple)):
      self.axis = axis[:]

make a copy of the tuple instead of converting it to a list, and later lines L954-L956:

    # Convert axis to list and resolve negatives
    if isinstance(self.axis, int):
      self.axis = [self.axis]

do not take care of the case where axis is a tuple.

Describe the expected behavior / fix:

Simply replace lines L954-L956 with:

    # Convert axis to list and resolve negatives
    if isinstance(self.axis, int):
      self.axis = [self.axis]
    elif isinstance(self.axis, tuple):
      self.axis = list(axis)

Code to reproduce the issue (the bare minimum necessary to generate the problem):

    Sequential([Input(shape=(64, 64, 3), dtype=np.float32), LayerNormalization(axis=(-3, -2, -1))])

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
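The proposed fix can be exercised in isolation. A minimal pure-Python sketch of the axis-resolution logic (`resolve_axis` is a hypothetical stand-in for the code in normalization.py, including the tuple-to-list fix):

```python
def resolve_axis(axis, ndim):
    """Normalize an int/tuple/list `axis` into a list of non-negative
    indices. Hypothetical stand-in for the logic in normalization.py,
    including the proposed tuple -> list fix."""
    if isinstance(axis, int):
        axis = [axis]
    elif isinstance(axis, tuple):
        # without this branch the later in-place resolution of negative
        # indices fails: 'tuple' object does not support item assignment
        axis = list(axis)
    return [ndim + x if x < 0 else x for x in axis]

# a (-3, -2, -1) tuple on a rank-4 input resolves to [1, 2, 3]
print(resolve_axis((-3, -2, -1), 4))  # -> [1, 2, 3]
```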
tensorflow/tensorflow
Stateful RNN is incompatible with initial_state
Bug
system information os platform and distribution window 10 tensorflow instal from source or binary binary via conda 1 14 0 and pip 2 0 0 rc0 tensorflow version use command below 1 14 0 and 2 0 0 rc0 python version 3 6 cuda cudnn version 10 0 gpu model and memory 1080 ti describe the current behavior when a stateful rnn be create and be give initial state it effectively reset the state to the initial state for every prediction see the not expect example in the code below describe the expect behavior the stateful rnn should use the initial state the first time and then update the state and use it for each follow time code to reproduce the issue import tensorflow as tf from tensorflow import kera from tensorflow keras import backend as k import numpy as np input size 4 batch size 1 output size 2 define a model without any state initialization input layer keras input shape none input size batch size batch size rnn layer keras layers gru unit output size return sequence true stateful true input layer model keras model input layer rnn layer model compile optimizer keras optimizer rmsprop loss mse test np full batch size 1 input size 0 5 this generate different prediction for the two time step as expect model reset states print model predict test print model predict test 0 15479106 0 3035699 0 22833839 0 4250579 this generate the same prediction after reset the state as expect model reset states print model predict test model reset states print model predict test 0 15479106 0 3035699 0 15479106 0 3035699 define a model with a constant state initialization input layer keras input shape none input size batch size batch size initial state layer k constant np one output size shape 1 output size rnn layer keras layers gru unit output size return sequence true stateful true input layer initial state initial state layer model keras model input layer rnn layer model compile optimizer keras optimizer rmsprop loss mse this generate the same prediction for the two time step not 
expect model reset states print model predict test print model predict test 0 23247536 0 8585994 0 23247536 0 8585994 this generate the same prediction after reset the state as expect model reset states print model predict test model reset states print model predict test 0 23247536 0 8585994 0 23247536 0 8585994
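The expected stateful semantics can be illustrated with a toy cell (pure Python, not the Keras GRU): the initial state should seed only the first prediction, after which the state persists across predict() calls until reset_states() is invoked.

```python
class ToyStatefulCell:
    """Toy stand-in for a stateful RNN (not the Keras GRU). The initial
    state seeds only the first prediction; afterwards the state must
    persist across predict() calls until reset_states() is called."""

    def __init__(self, initial_state=1.0):
        self.initial_state = initial_state
        self.state = initial_state

    def predict(self, x):
        # use the running state, then update it for the next call
        self.state = 0.5 * self.state + x
        return self.state

    def reset_states(self):
        # reset back to the *initial* state, not unconditionally to zero
        self.state = self.initial_state

cell = ToyStatefulCell(initial_state=1.0)
print(cell.predict(1.0))  # -> 1.5   (seeded by the initial state)
print(cell.predict(1.0))  # -> 1.75  (state carried over, so outputs differ)
cell.reset_states()
print(cell.predict(1.0))  # -> 1.5   (same as the first call after a reset)
```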
tensorflow/tensorflow
C API program fails during training when compiled with optimization flag -O2, when libtensorflow is built with optimized instructions (AVX/SSE), or when the Keras model is compiled with sparse_categorical_crossentropy
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow mixed setup keras model be build base on stock example training happen in custom c code that adapt an exist tf example os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 cento 7 6 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary both binary and source keras model be build use python package instal with pip training program use libtensorflow compile from source tensorflow version use command below 2 0 python version 3 6 ubuntu 3 7 cento bazel version if compile from source 0 26 1 gcc compiler version if compile from source 7 4 ubuntu and 9 1 cento cuda cudnn version gpu model and memory you can collect some of this information use our environment capture script my env tf env txt describe the current behavior I be use tf 2 0 c api to recreate the tutorial in here keras model be build use python package as indicate in the tutorial with slight modification categorical crossentropy be use as loss function instead of sparse categorical crossentropy as the latter produce the error reference in this issue model be save use keras experimental export save model model be save in a folder call save model name of this folder be hard code in c program I link against libtensorflow so 2 0 0 and libtensorflow framework so 2 0 0 as compile with the follow command configure bazel build c opt tensorflow tool lib package libtensorflow no cuda support describe the expect behavior if compile without optimization o program run normally and model can be train successfully as see can be observe in the output if compile with optimization o program fail at training step if libtensorflow be compile with avx sse 
support program fail at training step if keras model be compile use sparse categorical crossentropy program fail at training step all mode of failure end with the same error see below code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem model be build use the step I follow be 1 create keras model and save it to a file python createmodel py 2 download training datum and store in two file datum txt and label txt the name of the input datum be hard code in c program to download datum python getdata py 3 run c program the program will read the save model load training datum and label make prediction use the untrained model train the model run prediction again ld library path path to libtensorflow lib cfashionmnist the program will print predict value as well as loss value during training program be compile with for normal execution gcc wall I path to libtensorflow include l path to libtensorflow lib o cfashionmnist cfashionmnist c ltensorflow for fail execution gcc wall o2 I path to libtensorflow include l path to libtensorflow lib o cfashionmnist cfashionmnist c ltensorflow other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach when compile with optimization flag o I get 2019 09 06 14 16 51 617721 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use sse4 1 sse4 2 avx 2019 09 06 14 16 51 642262 I tensorflow core platform profile util cpu util cc 94 cpu frequency 1696290000 hz 2019 09 06 14 16 51 642842 I tensorflow compiler xla service service cc 168 xla service 0x557c6f2b89a0 execute computation on platform host device 2019 09 06 14 16 51 642893 I tensorflow compiler xla service service cc 175 streamexecutor device 0 host default version 2019 09 06 14 16 51 643335 I tensorflow cc save model reader cc 
31 reading savedmodel from keras model 2019 09 06 14 16 51 648699 I tensorflow cc save model reader cc 54 read meta graph with tag train 2019 09 06 14 16 51 662135 I tensorflow cc save model loader cc 202 restore savedmodel bundle 2019 09 06 14 16 51 693512 I tensorflow cc save model loader cc 151 running initialization op on savedmodel bundle at path keras model 2019 09 06 14 16 51 712088 I tensorflow cc save model loader cc 311 savedmodel load for tag train status success take 68754 microsecond error node dense 1 target type placeholder num of output 1 do not have output 2044864112 if I build libtensorflow with support for optimize instruction I get the same error minus the optimize instruction warn
tensorflow/tensorflow
Simple model.evaluate example floods output with characters
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 fedora 30 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary binary pip install tensorflow 2 0 0 rc0 tensorflow version use command below v2 0 0 beta1 5101 gc75bb66 2 0 0 rc0 python version python 3 7 4 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior run the code I get an output flood with hundred of thousand of character when call model evaluate train on 60000 sample epoch 1 5 60000 60000 3s 43us sample loss 0 5010 accuracy 0 8228 epoch 2 5 60000 60000 2s 38us sample loss 0 3766 accuracy 0 8639 epoch 3 5 60000 60000 2s 38us sample loss 0 3408 accuracy 0 8753 epoch 4 5 60000 60000 2s 37us sample loss 0 3155 accuracy 0 8839 epoch 5 5 60000 60000 2s 38us sample loss 0 2980 accuracy 0 8903 10000 1 literally hundred of thousand of 0s 26us sample loss 0 2803 accuracy 0 8673 describe the expect behavior train on 60000 sample epoch 1 5 60000 60000 3s 43us sample loss 0 5010 accuracy 0 8228 epoch 2 5 60000 60000 2s 38us sample loss 0 3766 accuracy 0 8639 epoch 3 5 60000 60000 2s 38us sample loss 0 3408 accuracy 0 8753 epoch 4 5 60000 60000 2s 37us sample loss 0 3155 accuracy 0 8839 epoch 5 5 60000 60000 2s 38us sample loss 0 2980 accuracy 0 8903 10000 10000 0s 26us sample loss 0 2803 
accuracy: 0.8673

Code to reproduce the issue:

    import tensorflow as tf

    mnist = tf.keras.datasets.fashion_mnist
    (training_images, training_labels), (test_images, test_labels) = mnist.load_data()
    training_images = training_images / 255.0
    test_images = test_images / 255.0
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    model.fit(training_images, training_labels, epochs=5)
    test_loss = model.evaluate(test_images, test_labels)

Other info / logs: Running this code in a Jupyter notebook results in a performance penalty for the huge unnecessary output.
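Independent of the verbosity setting, a well-behaved progress bar keeps terminal output bounded by overwriting a single line rather than appending characters. A minimal sketch (`print_progress` is illustrative, not the tf.keras implementation):

```python
import io
import sys

def print_progress(step, total, stream=sys.stdout):
    """Write a self-overwriting progress line: the carriage return moves
    the cursor back to the start of the line instead of appending
    thousands of characters; only the final update emits a newline.
    Illustrative sketch, not the tf.keras progress bar."""
    width = 30
    filled = width * step // total
    bar = "=" * filled + "." * (width - filled)
    stream.write(f"\r{step}/{total} [{bar}]")
    if step == total:
        stream.write("\n")

buf = io.StringIO()
for s in range(1, 11):
    print_progress(s, 10, stream=buf)
# many '\r'-separated updates, but a terminal renders them as one line
print(buf.getvalue().split("\r")[-1])
```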
tensorflow/tensorflow
The TF function for the TRT segment could not be empty
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary when I convert pb to tensorrt I use tf nightly gpu 1 15 from pip install when I do infer I use tf 1 14 0 from source tensorflow version use command below tf 1 14 0 python version 3 5 bazel version if compile from source 0 24 1 gcc compiler version if compile from source 5 4 cuda cudnn version cuda 10 cudnn 17 gpu model and memory 1060ti describe the current problem hi I successfully use trt convert to my pb model to tensorrt plan and I want to use c to infer my model so I use bazel complie tf with tensorrt together in my code pb model can run successfully I use the same code and tensorrt model can load successfully but as session run tf give I the follow error 2019 09 06 06 56 37 455586 e tensorflow core common runtime executor cc 642 executor fail to create kernel invalid argument the tf function for the trt segment could not be empty node fa layer4 c0 trtengineop 108 when I convert pb to tensorrt it be show as follow 2019 09 06 06 25 59 150320 w tensorflow compiler tf2tensorrt util trt logg cc 37 defaultlogger tensor datatype be determine at build time for tensor not mark as input or output 2019 09 06 06 25 59 183609 I tensorflow compiler tf2tensorrt convert convert graph cc 831 tensorrt node fa layer4 trtengineop 107 add for segment 107 consist of 3 node succeed 2019 09 06 06 25 59 189007 I tensorflow compiler tf2tensorrt convert convert graph cc 831 tensorrt node fa layer4 c0 trtengineop 108 add for segment 108 consist of 2 node succeed 2019 09 06 06 25 59 189378 w tensorflow compiler tf2tensorrt util trt logg cc 37 defaultlogger tensor datatype be determine at build time for tensor not mark as input or output 2019 09 06 06 25 59 189403 e tensorflow 
compiler tf2tensorrt util trt logg cc 41 defaultlogger network must have at least one output 2019 09 06 06 25 59 189426 w tensorflow compiler tf2tensorrt convert convert graph cc 834 tensorrt node fa layer4 c0 conv0 bn cond 1 trtengineop 109 add for segment 109 consist of 4 node fail internal fail to build tensorrt engine fallback to tf additionally I can use the tensorrt model in python code to infer python instal from pip anyone could provide some idea how to solve it
tensorflow/tensorflow
Clarification on the validation_data argument of fit in nsl.keras.AdversarialRegularization
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 google colab mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary source tensorflow version use command below 2 0 0 rc0 python version 3 7 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory describe the current behavior I be try the newly release neural structured learn api in order to do an experiment I think of start with fine tuning a vgg16 model and see the api s action in that case I be interested in use the validation data argument while call fit on a nsl keras adversarialregularization model I try to do two variant first one python adv model fit feature x train label y train validation datum x val y val batch size 128 epoch 2 verbose 1 it throw python indexerror traceback most recent call last in 2 h adv model fit feature x train label y train 3 validation datum feature x val label y val 4 batch size 128 epoch 2 verbose 1 5 print take 0 2f second format time time start 6 frame usr local lib python3 6 dist package tensorflow core python keras engine datum adapter py in 0 223 input x 224 225 num sample set int I shape 0 for I in nest flatten input 226 if len num sample 1 227 msg datum cardinality be ambiguous n indexerror tuple index out of range second one python adv model fit feature x train label y train validation datum feature x val label y val batch size 128 epoch 2 verbose 1 it throw python valueerror traceback most recent call last in 2 h adv model fit feature x train label y train 3 validation datum x val y val 4 batch size 128 epoch 2 verbose 1 5 print take 0 2f second format time time start 5 frame usr local lib python3 6 dist package tensorflow core python keras engine training py in fit self x y batch size epoch verbose callback validation split 
validation datum shuffle class weight sample weight initial epoch step per epoch validation step validation freq max queue size worker use multiprocesse kwargs 732 max queue size max queue size 733 worker worker 734 use multiprocesse use multiprocesse 735 736 def evaluate self usr local lib python3 6 dist package tensorflow core python keras engine training v2 py in fit self model x y batch size epoch verbose callback validation split validation datum shuffle class weight sample weight initial epoch step per epoch validation step validation freq kwargs 222 validation datum validation datum 223 validation step validation step 224 distribution strategy strategy 225 226 total sample get total number of sample training datum adapter usr local lib python3 6 dist package tensorflow core python keras engine training v2 py in process training input model x y batch size epoch sample weight class weight step per epoch validation split validation datum validation step shuffle distribution strategy max queue size worker use multiprocesse 561 class weight class weight 562 step validation step 563 distribution strategy distribution strategy 564 elif validation step 565 raise valueerror validation step should not be specify if usr local lib python3 6 dist package tensorflow core python keras engine training v2 py in process input model x y batch size epoch sample weight class weight shuffle step distribution strategy max queue size worker use multiprocesse 591 batch size batch size 592 check step false 593 step step 594 adapter adapter cls 595 x usr local lib python3 6 dist package tensorflow core python keras engine training py in standardize user datum self x y sample weight class weight batch size check step step name step validation split shuffle extract tensor from dataset 2435 feed input shape 2436 check batch axis false don t enforce the batch size 2437 exception prefix input 2438 2439 get typespec for the input datum and sanitize it if necessary usr local lib python3 6 
dist package tensorflow core python keras engine training util py in standardize input data datum name shape check batch axis exception prefix 528 expect to see str len name array s 529 but instead get the follow list of 530 str len data array str datum 200 531 elif len name 1 532 raise valueerror error when check model exception prefix valueerror error when check model input the list of numpy array that you be pass to your model be not the size the model expect expect to see 2 array s but instead get the follow list of 1 array array 132 140 64 135 142 64 140 145 64 145 141 70 146 134 86 148 128 100 but when I fit the model without validation datum it work fine describe the expect behavior either throw some light on the usage in the documentation or in the example code to reproduce the issue for experimentation I use colab so here s the colab notebook
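One pattern consistent with the training call above is to pass validation data in the same single-dict shape used for training. A sketch of that packing (`pack_nsl_batch` and its key names are assumptions based on this report's fit call, not a documented Neural Structured Learning API):

```python
def pack_nsl_batch(x, y, feature_key="feature", label_key="label"):
    """Pack features and labels into one dict, mirroring the training
    call in this report. The helper and its key names are assumptions,
    not a documented Neural Structured Learning API."""
    return {feature_key: x, label_key: y}

train_batch = pack_nsl_batch([[0.1, 0.2]], [1])
val_batch = pack_nsl_batch([[0.3, 0.4]], [0])
# the idea: adv_model.fit(train_batch, validation_data=val_batch, ...)
print(sorted(val_batch))  # -> ['feature', 'label']
```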
tensorflow/tensorflow
TF 2.0: TensorFlow Object Detection model zoo inference not working on GPU
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0rc0
- Python version: 3.6.8
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA 10.0
- GPU model and memory: GeForce RTX 2080 Ti

Describe the current behavior: Running any of the saved models from the TensorFlow model zoo leads to a segmentation fault.

Code to reproduce the issue:

    import os
    import tensorflow as tf
    import numpy as np

    os.system("wget")
    os.system("tar -xvf ssd_mobilenet_v2_coco_2018_03_29.tar.gz")

    with tf.device('/device:GPU:0'):
        loaded_model = tf.saved_model.load('ssd_mobilenet_v2_coco_2018_03_29/saved_model')
        infer = loaded_model.signatures['serving_default']
        sample_img = np.zeros([1, 128, 128, 3])
        predictions = infer(tf.constant(sample_img, tf.uint8))
        print(predictions)

GDB output:

    Thread 89 "python3" received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7ffbfbfff700 (LWP 7857)]
    0x00007fff4372b741 in tensorflow::NonMaxSuppressionV2GPUOp::Compute(tensorflow::OpKernelContext*) ()
       from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
tensorflow/tensorflow
tf.saved_model.save breaks for my subclassed model in TF 2.0.0rc0
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 ubuntu 18 04 02 lts mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary binary tensorflow version use command below tf2 0 0rc0 python version 3 6 7 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version n a gpu model and memory n a you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version n 4 tf version git version out 4 v2 0 0 beta1 5101 gc75bb66 in 5 tf version version out 5 2 0 0 rc0 describe the current behavior I have a subclasse model that I have be save and deploy since tf 2 0 0b0 and after upgrade to rc0 it errore out with this message save py 136 skip full serialization of object main mymodel object at 0x7f76815add30 because an error occur while trace layer function error message in convert code typeerror tf call take from 2 to 3 positional argument but 4 be give this be how the call method be define in my class can someone tell I what go wrong or what I need to change to make this work thank david class mymodel model def init self n layer h dim b dim activation fn kernel init batch norm super init skip tf function def call self input training false y input 0 h input 1 n var input 2 x layer concatenate y h if self n layer 1 x self d hide 1 x training training if self n layer 2 x self d hide 2 x training training if self n layer 3 x self d hide 3 x training training if self 
n layer 4 x self d hide 4 x training training if self n layer 5 x self d hide 5 x training training scale output by 1 n var return self d out x n var return self d out x training training describe the expect behavior it should just work code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem see above other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach
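One way to avoid tracing errors from extra positional arguments is to keep `call` at the `call(inputs, training=...)` signature that Keras tracing passes, and unpack inside. A hypothetical adapter sketch (not a TensorFlow API):

```python
def as_single_input_call(fn):
    """Hypothetical adapter (not a TensorFlow API): wrap a call body
    written with several positional tensors into the
    call(inputs, training=...) signature that Keras tracing passes."""
    def call(inputs, training=False):
        # unpack the single `inputs` list into the original arguments
        return fn(*inputs, training=training)
    return call

def my_call(y, h, n_var, training=False):
    # stand-in for the model body in the report above
    return (y + h) * n_var

call = as_single_input_call(my_call)
print(call([1.0, 2.0, 3.0]))  # -> 9.0
```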
tensorflow/tensorflow
SpatialDropout2D
Bug
System information:
- TensorFlow version (you are using): 1.11
- Are you willing to contribute it (Yes/No): No

Describe the feature and the current behavior/state:

SpatialDropout2D is often used in CNNs. At inference time the layer should be treated just like a regular Dropout layer, where it just scales the coefficients by the dropout term. However, when I take a Keras model with SpatialDropout2D and save it as a TF protobuf, TensorFlow converts SpatialDropout2D into a set of primitive operators to do the random dropout instead, even for a TensorFlow Serving graph where the model has been frozen and optimized for inference.

Will this change the current API? How?

I'd like to see the SpatialDropout layer behave the same as the regular Dropout layer during inference, so the TensorFlow Serving protobuf should not include SpatialDropout2D or any set of random/slice ops to reconstruct it, but instead should just scale the convolutional filters by the dropout rate, as explained in the original paper.

Who will benefit with this feature?

If the SpatialDropout layer behaves the same as the regular Dropout layer, it will reduce the number of operations during inference and speed up inference on these graphs.

Any other info?
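The requested inference-time behavior amounts to folding a constant scale into the following layer's weights. A sketch under the report's assumption of classic (non-inverted) dropout, where inference multiplies activations by the keep probability:

```python
def fold_dropout_scale(weights, rate):
    """Fold the inference-time dropout scaling into the next layer's
    weights. Assumes classic (non-inverted) dropout, as described in the
    report, where inference multiplies activations by the keep
    probability 1 - rate; scaling the weights once lets the dropout op
    be removed from the serving graph entirely."""
    keep_prob = 1.0 - rate
    return [w * keep_prob for w in weights]

print(fold_dropout_scale([2.0, -4.0], 0.5))  # -> [1.0, -2.0]
```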
tensorflow/tensorflow
ModelCheckpoint can't use val_acc even if the fit function receives the validation_data argument
Bug
System information:
- OS Platform and Distribution: Linux Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0rc0 (GPU)
- Python version: 3.6.8
- CUDA/cuDNN version: 10.0
- GPU model and memory: NVIDIA Tesla P100, 16 GB

Describe the current behavior:

Recently I switched from TF1 wrapped with native Keras to TF2, using the built-in tf.keras implementation. In my code I train my model like so:

    checkpoint = ModelCheckpoint(auto_save_path, monitor='val_acc', save_best_only=True)
    callbacks.append(checkpoint)
    keras_model.fit_generator(training_img_generator,
                              steps_per_epoch=100,
                              epochs=20,
                              validation_steps=100,
                              validation_data=validation_img_generator,
                              callbacks=callbacks)

However, with that code I do get the following warning message:

    WARNING:tensorflow:Can save best model only with val_acc available, skipping.

A full working example can be found as a GitHub gist here.

Describe the expected behavior:

I don't understand this error, because I do provide the validation_data argument to the fit_generator function. Also, I used exactly the same code with Keras on top of TF1, which worked fine; this warning comes now after switching to TF2 and the built-in Keras API.
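A likely cause (stated here as an assumption, not confirmed in this report) is that tf.keras in TF2 logs the metric under the key `val_accuracy` rather than the legacy `val_acc`. A small resolver sketch for mapping the legacy names:

```python
def resolve_monitor(monitor, logged_keys):
    """Map legacy Keras metric names onto the keys TF2's tf.keras
    actually logs ('val_acc' became 'val_accuracy'). Hypothetical
    helper; the alias table covers only the common names."""
    aliases = {"acc": "accuracy", "val_acc": "val_accuracy"}
    if monitor in logged_keys:
        return monitor
    alias = aliases.get(monitor)
    if alias in logged_keys:
        return alias
    raise KeyError(f"{monitor!r} not found in logs: {sorted(logged_keys)}")

print(resolve_monitor("val_acc", {"loss", "accuracy", "val_loss", "val_accuracy"}))
# -> val_accuracy
```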
tensorflow/tensorflow
iOS framework build of TF Lite has missing TFLGpuDelegateCreate object
Bug
Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template

System information:
- TF version: b50852ccac3d9e90af0568391ace90fc1da440e1 (master on Sep 5, 2019)
- Build type: built with sh tensorflow/lite/tools/make/build_ios_universal_lib.sh, trying to compile for iOS on Xcode 10.2

Describe the problem:

Added the TensorFlow Lite framework to the target links and added the header search paths (as no module headers are available). The following error occurs on build:

    Undefined symbols for architecture arm64:
      "_TFLGpuDelegateCreate", referenced from the .mm class calling TFLGpuDelegateCreate(options)
tensorflow/tensorflow
Bug in categorical_crossentropy loss label smoothing
Bug
Possible bug found in categorical_crossentropy's label_smoothing argument, at lines L940-L941 (for TF2 it is at lines L960-L961). The axis used to compute num_classes is 1, which works fine for a rank-2 tensor but fails for higher-rank tensors:

    def _smooth_labels():
      num_classes = math_ops.cast(array_ops.shape(y_true)[1], y_pred.dtype)  # [1] instead of [-1]
      return y_true * (1.0 - label_smoothing) + (label_smoothing / num_classes)

    y_true = smart_cond.smart_cond(label_smoothing, _smooth_labels, lambda: y_true)
    return K.categorical_crossentropy(y_true, y_pred, from_logits=from_logits)
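The corrected behavior of taking num_classes from the last axis can be sketched for a single one-hot row in pure Python (`smooth_one_hot` is illustrative, not the TF implementation):

```python
def smooth_one_hot(row, label_smoothing):
    """Label smoothing for a single one-hot row. num_classes is the size
    of the LAST axis (-1), matching the fix above, so the same formula
    also works for labels of rank > 2. Illustrative pure-Python sketch,
    not the TF implementation."""
    num_classes = len(row)  # axis -1, not axis 1
    return [y * (1.0 - label_smoothing) + label_smoothing / num_classes
            for y in row]

print(smooth_one_hot([1.0, 0.0], 0.2))  # approximately [0.9, 0.1]
```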
tensorflow/tensorflow
TFP 0.8rc0: AttributeError: 'MultivariateNormalTriL' object has no attribute 'type'
Bug
I be use tfp 0 8 rc0 with tf2 0 0 rc0 for a vae compose with keras functional api I be able to train the vae however when I want to retrieve the latent code by call the encoder only use predict it throw attributeerror multivariatenormaltril object have no attribute type which be not very informative in itself and I be not sure what it refer to I be a bit confused why the decoder input seem to accept the output of the tfp layer without problem but can t seem to call predict on the encoder alone note that call predict on encoder alone work when replace the tfp layer with an equivalent kera sample class I understand that the tfp layer when no convert to tensor fn be define should call sample by default so return its output with encoder predict sample should be possible here be a minimal reproduction code import numpy as np import os import tensorflow probability as tfp tfd tfp distribution tfpl tfp layer distribution layer import tensorflow as tf load model 1 latent dim 8 learning rate 1e 4 batch size 100 test batch size 10 color channel 1 train image test image tf keras datasets mnist load datum train image train image 5000 n trainsample np shape train image 0 imsize np shape train image 1 train image train image reshape 1 imsize imsize 1 astype float32 train image 255 train dataset tf datum dataset from tensor slice train image train image shuffle n trainsample repeat batch batch size image input tf keras input shape imsize imsize color channel name encoder input x tf keras layer flatten image input x tf keras layer dense 500 activation softplus name inference l1 dense x x tf keras layer dense tfpl multivariatenormaltril param size latent dim x z tfpl multivariatenormaltril latent dim x prior tfd independent tfd normal loc 0 0 scale 1 reinterpret batch ndim 1 tfpl kldivergenceaddloss prior weight 1 0 encoder tf keras model input image input output z name encoder latent input tf keras input shape latent dim name z sample x tf keras layer dense 500 activation softplus 
name generative l1 dense latent input x tf keras layer dense imsize 2 color channel activation sigmoid name generative l3 dense out x output prob tf keras layer reshape target shape imsize imsize color channel name generative output prob x decoder tf keras model input latent input output output prob name decoder output prob decoder z vae model tf keras model input image input output output prob name vae optimizer tf keras optimizer adam learning rate 1e 3 vae model compile optimizer tf keras loss binarycrossentropy vae model fit train dataset step per epoch n trainsample batch size epoch 4 latent encoder predict train image 4 print latent shape latent shape and the full stack with the error 2019 09 04 21 56 37 276275 I tensorflow core common runtime gpu gpu device cc 1746 add visible gpu device 0 2019 09 04 21 56 37 276332 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 0 2019 09 04 21 56 37 278632 I tensorflow core common runtime gpu gpu device cc 1159 device interconnect streamexecutor with strength 1 edge matrix 2019 09 04 21 56 37 278665 I tensorflow core common runtime gpu gpu device cc 1165 0 2019 09 04 21 56 37 278682 I tensorflow core common runtime gpu gpu device cc 1178 0 n 2019 09 04 21 56 37 280141 I tensorflow core common runtime gpu gpu device cc 1304 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 10805 mb memory physical gpu device 0 name tesla k80 pci bus i d eaaf 00 00 0 compute capability 3 7 train for 50 step epoch 1 4 warning log before flag parsing go to stderr w0904 21 56 38 184839 140359104337664 deprecation py 323 from usr local lib python3 5 dist package tensorflow core python ops math grad py 1394 where from tensorflow python op array op be deprecate and will be remove in a future version instruction for update use tf where in 2 0 which have the same broadcast rule as np where 2019 09 04 21 56 39 006765 I tensorflow stream executor platform default 
dso loader cc 44 successfully open dynamic library libcubla so 10 0 50 50 1s 29ms step loss 0 2975 epoch 2 4 50 50 0s 5ms step loss 0 2114 epoch 3 4 50 50 0s 5ms step loss 0 1730 epoch 4 4 50 50 0s 5ms step loss 0 1590 traceback most recent call last file home pycharm project vae save issue reproduction py line 55 in latent encoder predict train image 4 file usr local lib python3 5 dist package tensorflow core python keras engine training py line 915 in predict use multiprocesse use multiprocesse file usr local lib python3 5 dist package tensorflow core python keras engine training array py line 722 in predict callback callback file usr local lib python3 5 dist package tensorflow core python keras engine training array py line 189 in model iteration f make execution function model mode file usr local lib python3 5 dist package tensorflow core python keras engine training array py line 565 in make execution function return model make execution function mode file usr local lib python3 5 dist package tensorflow core python keras engine training py line 2155 in make execution function self make predict function file usr local lib python3 5 dist package tensorflow core python keras engine training py line 2145 in make predict function kwargs file usr local lib python3 5 dist package tensorflow core python keras backend py line 3658 in function return eagerexecutionfunction input output update update name name file usr local lib python3 5 dist package tensorflow core python keras backend py line 3555 in init base graph source graph file usr local lib python3 5 dist package tensorflow core python eager lift to graph py line 260 in lift to graph add source add source file usr local lib python3 5 dist package tensorflow core python op op selector py line 404 in map subgraph elif op type placeholder attributeerror multivariatenormaltril object have no attribute type
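The traceback bottoms out in the graph-lifting walk at `elif op.type == "Placeholder"`: the visited object is a MultivariateNormalTriL distribution rather than a real graph op, so `.type` does not exist. A framework-free sketch of the defensive lookup (the helper name is hypothetical, not TensorFlow API):

```python
from types import SimpleNamespace

def op_type(op):
    # Hypothetical helper, not TensorFlow API: return an op's type
    # string, or None for objects that are not real graph ops
    # (e.g. a distribution object leaking into the graph walk).
    return getattr(op, "type", None)

real_op = SimpleNamespace(type="Placeholder", name="x")
not_an_op = SimpleNamespace(name="MultivariateNormalTriL")  # no .type attribute

assert op_type(real_op) == "Placeholder"
assert op_type(not_an_op) is None
```

A walk written against `op_type(op) == "Placeholder"` would skip the distribution object instead of raising.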
tensorflowtensorflow
tf signal fft2d speed be slow and unstable in rtx2080ti
Bug
system information os platform linux ubuntu 18 04 server tensorflow instal from docker tensorflow tensorflow 1 14 0 gpu py3 jupyter tensorflow version 1 14 0 python version 3 6 8 cuda v10 0 gpu model rtx 2080ti describe the current behavior the speed of fft2d operation tf signal fft2d be very unstable at different iteration here be an example output of time every 100 iteration code be show below 2019 09 04 19 03 57 731947 2019 09 04 19 04 33 715335 2019 09 04 19 05 10 976109 2019 09 04 19 05 44 012072 2019 09 04 19 06 15 616308 2019 09 04 19 07 14 961716 2019 09 04 19 08 12 324199 2019 09 04 19 09 11 560423 2019 09 04 19 10 08 877960 2019 09 04 19 11 08 102977 during the training there be no other program run bur when run the same code in my local machine gtx 1080ti with the same tensorflow docker image the speed be fast and stable 2019 09 04 14 12 00 387114 2019 09 04 14 12 07 363174 2019 09 04 14 12 14 355784 2019 09 04 14 12 21 384377 2019 09 04 14 12 28 378524 describe the expect behavior the speed should be always very fast about 7s 100iterations code to reproduce the issue import os import datetime import tensorflow as tf import numpy as np os environ cuda device order pci bus i d os environ cuda visible device 1 def main dp train tf placeholder tf float32 shape 41 300 300 1 dp tf complex dp train 0 0 dp fft tf abs tf signal fft2d dp w tf get variable w conv 1 1 1 1 not important just want to make the training process to run cost train tf reduce mean tf nn conv2d dp fft w stride 1 1 1 1 padding same opt tf train adamoptimizer 1e 4 update op tf get collection tf graphkey update op with tf control dependency update op train op opt minimize cost train var list var digital with tf session as sess sess run tf global variable initializer sess run tf local variable initializer coord tf train coordinator thread tf train start queue runner sess sess coord coord for I in range 0 5000 datum np random rand 41 300 300 1 sess run train op feed dict dp train datum if I 100 
0 old time datetime datetime now print old time coord request stop coord join thread if name main tf app run
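The report's timing pattern, a timestamp printed every 100 iterations, can be reduced to a small framework-free helper for comparing the two machines; `work` stands in for the `sess.run(train_op, ...)` call:

```python
import time

def time_every(n_iters, work, report_every=100):
    # Run work() n_iters times and record a wall-clock mark every
    # report_every iterations, mimicking the benchmark loop above.
    # Returns the per-interval durations in seconds, so jitter between
    # intervals (the instability reported on the RTX 2080 Ti) is visible.
    marks = [time.perf_counter()]
    for i in range(1, n_iters + 1):
        work()
        if i % report_every == 0:
            marks.append(time.perf_counter())
    return [b - a for a, b in zip(marks, marks[1:])]

durations = time_every(300, lambda: sum(range(100)), report_every=100)
assert len(durations) == 3
assert all(d >= 0 for d in durations)
```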
tensorflowtensorflow
raise undefined symbol in tensorflow contrib fuse conv python op fuse conv2d bias activation op module
Bug
system information os platform and distribution e g linux ubuntu 16 04 cento linux release 7 6 1810 core mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary source tensorflow version 1 14 0 python version 3 6 5 instal use virtualenv pip conda conda bazel version if compile from source 0 25 0 gcc compiler version if compile from source 4 8 5 cuda cudnn version cuda 10 1 cudnn 7 gpu model and memory geforce gtx 1080 8119mib describe the problem from tensorflow contrib fuse conv python op fuse conv2d bias activation op import 2019 09 04 13 57 26 493744 I tensorflow stream executor platform default dso loader cc 42 successfully open dynamic library libcudart so 10 1 warning log before flag parsing go to stderr w0904 13 57 28 321362 140054143231808 init py 308 limit tf compat v2 summary api due to miss tensorboard installation traceback most recent call last file line 1 in file root conda lib python3 6 site package tensorflow contrib fuse conv init py line 22 in from tensorflow contrib fuse conv python op fuse conv2d bias activation op import file root conda lib python3 6 site package tensorflow contrib fuse conv python op fuse conv2d bias activation op py line 26 in resource loader get path to datafile fuse conv2d bias activation op so file root conda lib python3 6 site package tensorflow contrib util loader py line 56 in load op library ret load library load op library path file root conda lib python3 6 site package tensorflow python framework load library py line 61 in load op library lib handle py tf tf loadlibrary library filename tensorflow python framework error impl notfounderror root conda lib python3 6 site package tensorflow contrib fuse conv python op fuse conv2d bias activation op so undefined symbol zn10tensorflow6logger18singleton factory e I follow anaconda compile tensorflow conda package method conda build and some reference recipe to compile tensorflow 1 14 with tensorrt and 
nccl feature root skyaxe compute 1 work dir curl l o root skyaxe compute 1 work dir tar jxf tensorflow base 1 14 0 gpu py36he45bfe2 0 tar bz2 get recipe in info recipe root skyaxe compute 1 work dir ls d info recipe info recipe my build sh meta yaml bin bash tensorflow version echo tensorflow version number tr d compile tensorflow from source export python bin path python export python lib path sp dir export cc opt flag march native disable jemmloc need madv hugepage macro which be not in glib 2 12 export tf need jemalloc 1 export tf need gcp 0 export tf need hdfs 0 export tf need s3 0 export tf enable xla 0 export tf need opencl 0 export tf need opencl sycl 0 export tf need gdr 0 export tf need kafka 1 export tf download clang 0 export tf need ignite 1 export tf enable xla 1 export tf need rocm 0 if cudnn version le 6 then export tf need tensorrt 0 else export tf need tensorrt 1 if cuda version 9 2 then export tensorrt install path opt compute tensorrt 4 0 1 6 elif cuda version 10 1 then export tensorrt install path opt compute tensorrt 5 1 2 2 else export tensorrt install path opt compute tensorrt 3 0 4 modify it accord to your requirement fi fi export tf nccl version 2 4 modify it accord to your requirement defalut value be 1 3 from github if cuda version 8 0 then export nccl install path opt compute nccl 2 2 13 1 cuda8 0 x86 64 modify it accord to your requirement elif cuda version 9 0 then export nccl install path opt compute nccl 2 2 13 1 cuda9 0 x86 64 modify it accord to your requirement elif cuda version 9 2 then export nccl install path opt compute nccl 2 3 7 1 cuda9 2 x86 64 modify it accord to your requirement elif cuda version 10 1 then export nccl install path opt compute nccl 2 4 8 1 cuda10 1 x86 64 fi cuda detail these should be customize depend on the system detail export tf need cuda 1 export tf cuda version cuda version export tf cudnn version cudnn version export gcc host compiler path which gcc export cuda toolkit path usr local cuda export 
cudnn install path usr local cuda export tf cuda clang 0 export tf cuda compute capability 3 0 3 5 5 2 if cuda version 8 0 then export tf cuda compute capability 3 0 3 5 5 2 6 0 6 1 elif cuda version 9 0 then export tf cuda compute capability 3 0 3 5 5 2 6 0 6 1 7 0 elif cuda version 9 2 then export tf cuda compute capability 3 0 3 5 5 2 6 0 6 1 7 0 elif cuda version 10 1 then export tf cuda compute capability 3 0 3 5 5 2 6 0 6 1 7 0 7 5 fi export tf need mpi 0 enable intel mkl if tensorflow version ge 120 then export tf need mkl 1 export tf download mkl 1 fi export tf need verb 1 export tf set android workspace 0 configure bazel build c opt copt mavx copt mavx2 copt mfma copt msse4 2 copt mfpmath both config cuda job nproc config mkl verbose failure color yes tensorflow tool pip package build pip package build a whl file mkdir p src dir tensorflow pkg bazel bin tensorflow tool pip package build pip package src dir tensorflow pkg install use pip from the whl file pip install no dep src dir tensorflow pkg whl rm f prefix bin tensorboard package name tensorflow gpu version version source git url git rev v version build entry point tensorboard tensorflow tensorboard tensorboard main script env cuda version cudnn version tensorflow version number string py conda py string requirement build werkzeug bleach numpy 1 15 mkl six protobuf 3 6 python x x backport weakref html5lib markdown mock keras application kera preprocesse enum34 py27 run python werkzeug six protobuf numpy markdown html5lib bleach backport weakref any other info log use nvidia docker and nvidia cuda 10 1 cudnn7 devel centos7 docker image import tensorflow be ok and sess tensorflow session config tensorflow configproto log device placement true will display visible gpu device and other info similar behaviour can refer to and pre build wheel package will produce same error error see root skyaxe compute 1 log 2019 07 docker run ti rm network host nvidia cuda 10 1 cudnn7 devel centos7 bin bash root skyaxe 
compute 1 curl l o root skyaxe compute 1 root miniconda3 bin pip install I tensorflow gpu 1 14 0
tensorflowtensorflow
make adv reg config not found
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary tensorflow version use command below python version bazel version if compile from source gcc compiler version if compile from source cuda cudnn version gpu model and memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version while try to run the code give on I run into the follow error attributeerror module neural structured learn lib have no attribute make adv reg config I have already instal neural structured learning use pip install neural structured learning and I be work on tf 2 0 0 rc0 on colab also try it on 1 14 I have also check the code on github and couldn t find make adv reg config in neural structured learn lib describe the expect behavior code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach
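Since the symptom is an attribute missing from a particular release, a defensive probe across candidate locations can distinguish "wrong version" from "wrong import path". A sketch with a stand-in module (the attribute names and the `configs` location are illustrative assumptions, not confirmed API):

```python
import types

def resolve_api(module, *candidates):
    # Return the first attribute found among `candidates` -- a defensive
    # pattern when an API (here, hypothetically, make_adv_reg_config)
    # moves between library versions or submodules.
    for name in candidates:
        fn = getattr(module, name, None)
        if fn is not None:
            return fn
    raise AttributeError(f"none of {candidates} found on {module!r}")

# Stand-in for neural_structured_learning; the layout is an assumption.
fake_nsl = types.SimpleNamespace(
    configs=types.SimpleNamespace(make_adv_reg_config=lambda **kw: kw))

cfg_fn = resolve_api(fake_nsl.configs, "make_adv_reg_config", "AdvRegConfig")
assert cfg_fn(multiplier=0.2) == {"multiplier": 0.2}
```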
tensorflowtensorflow
bug in the tensorflow pb to tflite conversion use tfliteconverter
Bug
please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow no os platform and distribution e g linux ubuntu 16 04 ubuntu 16 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device no tensorflow instal from source or binary binary tensorflow version use command below 1 13 cpu version python version 3 6 9 describe the current behavior tfliteconverter convert the first convolutional layer of tensorflow model into depthwise convolutional layer if the input to the model have single channel in my case grey scale image with channel 1 describe the expect behavior convolutional layer in the tensorflow model should remain the same even after tfliteconverter code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem link to tensorflow model link to the generated tflite model code use to generate tflite file from tensorflow model import tensorflow as tf graph def file single layer pb input array s output array conv relu output file single layer tflite converter tf lite tfliteconverter from frozen graph graph def file input array output array tflite model converter convert open output file wb write tflite model I use netron to visualise the pb and tflite graph link other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach tensorflow graph 1 tflite graph after tfliteconverter image
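The rewrite the converter performs is at least functionally harmless for a single input channel: with C = 1, a regular convolution and a depthwise convolution compute the same sums, since there is only one channel to mix. A plain-Python check of that equivalence (illustrative, not the converter's code):

```python
def conv2d(img, filt):
    # Plain 'valid' 2-D convolution: img is H x W x C, filt is
    # kh x kw x C; every input channel contributes to one output map.
    H, W, C = len(img), len(img[0]), len(img[0][0])
    kh, kw = len(filt), len(filt[0])
    return [[sum(img[i + di][j + dj][c] * filt[di][dj][c]
                 for di in range(kh) for dj in range(kw) for c in range(C))
             for j in range(W - kw + 1)]
            for i in range(H - kh + 1)]

def depthwise_conv2d(img, filt, channel=0):
    # Depthwise 'valid' convolution for one channel: only input channel
    # `channel` feeds its output map.
    kh, kw = len(filt), len(filt[0])
    H, W = len(img), len(img[0])
    return [[sum(img[i + di][j + dj][channel] * filt[di][dj][channel]
                 for di in range(kh) for dj in range(kw))
             for j in range(W - kw + 1)]
            for i in range(H - kh + 1)]

# A 3x3 grey-scale image with a single channel (C = 1).
img = [[[float(3 * i + j)] for j in range(3)] for i in range(3)]
filt = [[[1.0], [0.0]], [[0.0], [-1.0]]]  # 2x2x1 filter

assert conv2d(img, filt) == depthwise_conv2d(img, filt)
```

So the surprise is in the graph visualisation, not the numerics; the outputs of the two layer types coincide when the input has one channel.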
tensorflowtensorflow
efficient band matrix multiplication
Bug
tensorflow appear to lack an efficient way to do a multiplication by a band matrix by this I mean calculate c[i,j] = sum_k a[i,j+k] * b[k,j] for example if b be [[b00 b01 b02 ... b07] [b10 b11 b12 ... b17] [b20 b21 b22 ... b27]] this be equivalent to multiply a by the expanded matrix [[b00 0 0 ...] [b10 b01 0 ...] [b20 b11 b02 ...] [0 b21 b12 ...] [0 0 b22 ...] ... [0 0 ... b27]] this arise as part of the implementation of keras layer locallyconnected1d which provide two possible implementation for it both implementation be suboptimal one create a tensorflow op for every column which mean a lot of framework overhead and the other expand b to the full form which be inefficient and also seem to incur some overhead if b be 512x16 it should take one tensorflow op with 512x16 fma implementation 1 would do 512 op with 16 fma each implementation 2 would perform two op one with 512x529 float multiplication and one with 512x529 fma less however much be save internally by sparse matmul I try to replace some dense layer with locallyconnected1d layer in tensor2tensor transformer it would not load at all with implementation 1 it be still try to build the graph after 10 minute and it run significantly slower than with dense despite do fewer fma with implementation 2 the convolution would be easy to implement if one have direct access to tensor stride you simply set the second to last stride of a to stride 2 stride 1 and do a regular matrix multiplication call and there be some internal support for it in the form of a method call stream thenblasgbmv in stream executor stream cc but I don t see how to change stride from python and thenblasgbmv be never use in 1 x master
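The banded product and its wasteful dense expansion can be checked against each other in plain Python; the indexing below follows the expanded-matrix example in the report (row k of b holds the k-th sub-diagonal) and is one interpretation of its formula:

```python
def band_matmul(a, b):
    # c[i][j] = sum_k a[i][j+k] * b[k][j]: only K multiply-adds per
    # output element, where b is K x N (K = bandwidth).
    M, N, K = len(a), len(b[0]), len(b)
    cols_a = len(a[0])
    return [[sum(a[i][j + k] * b[k][j] for k in range(K) if j + k < cols_a)
             for j in range(N)]
            for i in range(M)]

def expand_band(b, rows):
    # Materialise the dense (rows x N) matrix whose column j carries
    # b[0][j], b[1][j], ... starting at row j -- the wasteful alternative.
    N, K = len(b[0]), len(b)
    full = [[0.0] * N for _ in range(rows)]
    for j in range(N):
        for k in range(K):
            if j + k < rows:
                full[j + k][j] = b[k][j]
    return full

def matmul(a, m):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*m)]
            for row in a]

a = [[1.0, 2.0, 3.0, 4.0, 5.0]]                            # 1 x 5
b = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]    # K = 3, N = 3

assert band_matmul(a, b) == matmul(a, expand_band(b, rows=5))
```

The direct form touches each element of a at most K times, which is the cost profile the report argues a single fused op (or a gbmv-style BLAS call) should have.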
tensorflowtensorflow
xla compilation fail in replica context of distribution strategy
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 debian gnu linux 9 9 stretch mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary binary tensorflow version use command below v1 14 0 0 g87989f6 1 14 0 python version python 3 5 3 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version cuda version 10 0 130 gpu model and memory 2 x nvidia tesla k80 describe the current behavior invoke xla compile in the function pass to strategy experimental run v2 as in the code below raise the follow exception full traceback below valueerror xla compile computation can not be nest operator name replica 1 input 0 describe the expect behavior the expectation be to get one compile xla cluster for each replica code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem python import tensorflow as tf from tensorflow python compiler xla import xla strategy tf distribute experimental centralstoragestrategy compute device device gpu 0 device gpu 1 parameter device device cpu 0 with strategy scope def train step input def step fn x w tf get variable name w initializer 1 0 loss w x optimizer tf train gradientdescentoptimizer 0 1 train op optimizer minimize loss with tf control dependency train op return tf identity loss def compile step fn x out xla compile step fn input x return out per replica loss strategy experimental run v2 compile step fn args input sum loss strategy reduce tf distribute reduceop sum per replica loss axis none return sum loss loss train step 1 0 with tf session as sess sess run tf global variable initializer for in range 10 print sess run loss other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full 
traceback large log and file should be attach traceback most recent call last file multi gpu strategy xla py line 29 in loss train step 1 0 file multi gpu strategy xla py line 24 in train step compile step fn args input file usr local lib python3 5 dist package tensorflow python distribute distribute lib py line 511 in experimental run v2 return self extend call for each replica fn args args kwargs kwargs file usr local lib python3 5 dist package tensorflow python distribute distribute lib py line 1555 in call for each replica return self call for each replica fn args kwargs file usr local lib python3 5 dist package tensorflow python distribute parameter server strategy py line 407 in call for each replica self container strategy self device map fn args kwargs file usr local lib python3 5 dist package tensorflow python distribute mirror strategy py line 195 in call for each replica coord join thread file usr local lib python3 5 dist package tensorflow python training coordinator py line 389 in join six reraise self exc info to raise file usr local lib python3 5 dist package six py line 693 in reraise raise value file usr local lib python3 5 dist package tensorflow python training coordinator py line 297 in stop on exception yield file usr local lib python3 5 dist package tensorflow python distribute mirror strategy py line 911 in run self main result self main fn self main args self main kwargs file multi gpu strategy xla py line 20 in compile step fn out xla compile step fn input x file usr local lib python3 5 dist package tensorflow python compiler xla xla py line 110 in compile return compile internal computation input file usr local lib python3 5 dist package tensorflow python compiler xla xla py line 338 in compile internal for I x in enumerate flat input file usr local lib python3 5 dist package tensorflow python compiler xla xla py line 338 in for I x in enumerate flat input file usr local lib python3 5 dist package tensorflow python util dispatch py line 
180 in wrapper return target args kwargs file usr local lib python3 5 dist package tensorflow python op array op py line 86 in identity ret gen array op identity input name name file usr local lib python3 5 dist package tensorflow python ops gen array op py line 4253 in identity identity input input name name file usr local lib python3 5 dist package tensorflow python framework op def library py line 788 in apply op helper op def op def file usr local lib python3 5 dist package tensorflow python util deprecation py line 507 in new func return func args kwargs file usr local lib python3 5 dist package tensorflow python framework op py line 3616 in create op op def op def file usr local lib python3 5 dist package tensorflow python framework op py line 2043 in init self control flow post processing file usr local lib python3 5 dist package tensorflow python framework op py line 2054 in control flow post processing self control flow context addop self file usr local lib python3 5 dist package tensorflow python compiler xla xla py line 254 in addop self outer context addinnerop op file usr local lib python3 5 dist package tensorflow python compiler xla xla py line 274 in addinnerop self addop op file usr local lib python3 5 dist package tensorflow python compiler xla xla py line 202 in addop name s op name valueerror xla compile computation can not be nest operator name replica 1 input 0
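The ValueError comes from a re-entrancy check in the XLA compile context: when the distribution strategy runs step_fn per replica, the second replica's ops are added while a compile context is already active on the graph. The guard can be sketched generically (plain Python, not the TensorFlow implementation):

```python
class CompileScope:
    # Sketch of the nesting check behind "xla.compile cannot be nested":
    # a module-level flag that rejects re-entry. Generic Python only.
    _active = False

    def __enter__(self):
        if CompileScope._active:
            raise ValueError("compilation scope cannot be nested")
        CompileScope._active = True
        return self

    def __exit__(self, *exc):
        CompileScope._active = False
        return False

with CompileScope():
    pass  # a single scope per replica would be fine

try:
    with CompileScope():
        with CompileScope():  # second replica opening a scope inside the first
            pass
except ValueError as e:
    nested_error = str(e)

assert "nested" in nested_error
```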
tensorflowtensorflow
tensorflow lite build fail atomic in namespace std do not name a type
Bug
this issue happen while build tensorflow lite into aws lambda docker system information os platform and distribution docker image lambci lambda build python3 6 tensorflowlite pip package build from source tensorflow version master currently on 496acff python version 3 6 compiler version clang version 3 6 2 tag release 362 final gcc version 4 8 5 20150623 red hat 4 8 5 28 gcc ldd gnu libc 2 17 describe the problem use this dockerfile from lambci lambda build python3 6 workdir var task run yum update y yum y install swig libjpeg devel zlib devel wget run git clone workdir var task re2 run make make test make install make testinstall workdir var task run wget tar xf release 1 8 0 tar gz workdir var task googlet release 1 8 0 run cmake dbuild share lib on make make install workdir var task run pip install numpy run git clone workdir var task tensorflow run git checkout 496acff run bash tensorflow lite tool pip package build pip package sh it fail with tensorflow lite kernels acceleration test util internal h in function absl optional tflite getaccelerationtestparam std string tensorflow lite kernels acceleration test util internal h 69 10 error atomic in namespace std do not name a type static std atomic test config ptr
tensorflowtensorflow
lite doc break link in tensorflow lite and tensorflow operator compatibility
Bug
the link for tf transpose on the compatible operations page be broken
tensorflowtensorflow
tf 2 0 regression cloudpickle can not serialize tf keras sequential
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes code include below in the issue os platform and distribution e g linux ubuntu 16 04 macos 10 14 3 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary pip tensorflow version use command below v2 0 0 beta1 5101 gc75bb66a99 2 0 0 rc0 python version python 3 6 7 anaconda inc bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version n a gpu model and memory n a use cloudpickle to serialize a python function that use tf keras sequential fail with a recursion error note that this work with tensorflow 1 14 0 I imagine it also fail with other thing not just tf keras sequential python import cloudpickle cloudpickle version 1 2 1 import tensorflow as tf tf version 2 0 0 rc0 def f tf keras sequential cloudpickle loads cloudpickle dump f this fail the last line fail with recursionerror traceback most recent call last in 1 cloudpickle loads cloudpickle dump f anaconda3 lib python3 6 site package tensorflow init py in getattr self item 48 49 def getattr self item 50 module self load 51 return getattr module item 52 anaconda3 lib python3 6 site package tensorflow init py in load self 42 def load self 43 import the target module and insert it into the parent s namespace 44 module importlib import module self name 45 self parent module global self local name module 46 self dict update module dict last 2 frame repeat from the frame below anaconda3 lib python3 6 site package tensorflow init py in getattr self item 48 49 def getattr self item 50 module self load 51 return getattr module item 52 recursionerror maximum recursion depth exceed while call a python object see 57761034
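The recursion lives in TensorFlow's lazy-loading module shim visible in the traceback: its `__getattr__` triggers `_load`, and pickling-style introspection of dunder attributes re-enters `__getattr__`. A minimal lazy proxy that avoids the loop by refusing to lazily resolve dunder names (a sketch, not TensorFlow's actual loader):

```python
import types

class LazyLoader(types.ModuleType):
    # Minimal lazy module proxy. Refusing to lazily resolve dunder names
    # breaks the __getattr__ -> _load -> __getattr__ cycle that
    # pickling-style introspection can otherwise trigger.
    def __init__(self, name, loader):
        super().__init__(name)
        self._loader = loader
        self._module = None

    def __getattr__(self, item):
        if item.startswith("__") and item.endswith("__"):
            raise AttributeError(item)  # let introspection fail fast
        if self._module is None:
            self._module = self._loader()
        return getattr(self._module, item)

# Stand-in for the real module the proxy would import on first use.
real = types.SimpleNamespace(keras=types.SimpleNamespace(Sequential=object))
tf_proxy = LazyLoader("tf", lambda: real)

assert tf_proxy.keras.Sequential is object
assert not hasattr(tf_proxy, "__reduce_magic__")  # no recursion, just AttributeError
```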
tensorflowtensorflow
tf 2 0 0 rc0 tfp 0 7 broken combo tensor be unhashable if tensor equality be enable instead use tensor experimental ref as the key
Bug
error occur tf gpu 2 0 0 rc0 with tfp 0 7 code to reproduce import tensorflow probability as tfp tfp distribution multivariatenormaldiag 0 1 sample error return traceback most recent call last file home pycharm project vae save issue reproduction py line 3 in tfp distribution multivariatenormaldiag 0 1 sample file usr local lib python3 5 dist package tensorflow probability python distribution distribution py line 840 in sample return self call sample n sample shape seed name kwargs file usr local lib python3 5 dist package tensorflow probability python distribution transform distribution py line 391 in call sample n y self bijector forward x bijector kwargs file usr local lib python3 5 dist package tensorflow probability python bijectors bijector py line 933 in forward return self call forward x name kwargs file usr local lib python3 5 dist package tensorflow probability python bijectors bijector py line 904 in call forward mapping self lookup x x kwargs kwargs file usr local lib python3 5 dist package tensorflow probability python bijectors bijector py line 1343 in lookup mapping self from x x get subkey mapping merge x x file usr local lib python3 5 dist package tensorflow probability python bijectors bijector py line 151 in getitem return super weakkeydefaultdict self getitem weak key file usr local lib python3 5 dist package tensorflow probability python bijectors bijector py line 181 in hash return hash x file usr local lib python3 5 dist package tensorflow core python framework op py line 713 in hash raise typeerror tensor be unhashable if tensor equality be enable typeerror tensor be unhashable if tensor equality be enable instead use tensor experimental ref as the key
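The failing WeakKeyDefaultDict hashes tensors directly, which TF2 forbids once `==` becomes elementwise. The pattern the error message recommends, `tensor.experimental_ref()`, amounts to an identity-based key wrapper, sketched here without TensorFlow:

```python
class Ref:
    # Identity-based reference, analogous in spirit to
    # Tensor.experimental_ref(): lets an unhashable, equality-overloaded
    # object be used as a dict key. Sketch only, not the TF class.
    __slots__ = ("_obj",)
    def __init__(self, obj): self._obj = obj
    def deref(self): return self._obj
    def __hash__(self): return id(self._obj)
    def __eq__(self, other):
        return isinstance(other, Ref) and self._obj is other._obj

class FakeTensor:
    # Mimics TF2 tensors: __eq__ is elementwise, hashing is disabled.
    __hash__ = None
    def __init__(self, values): self.values = values
    def __eq__(self, other):
        return [a == b for a, b in zip(self.values, other.values)]

t = FakeTensor([1, 2, 3])
try:
    _ = {t: "x"}              # TypeError: unhashable, as in the report
    unhashable = False
except TypeError:
    unhashable = True
assert unhashable

cache = {Ref(t): "cached-result"}
assert cache[Ref(t)] == "cached-result"
```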
tensorflowtensorflow
g3doc
Bug
thank you for submit a tensorflow documentation issue per our github policy we only address code doc bug performance issue feature request and build installation issue on github the tensorflow doc be open source to get involve read the documentation contributor guide url s with the issue please provide a link to the documentation entry for example description of issue what need change clear description for example why should someone use this method how be it useful correct link be the link to the source code correct parameter define be all parameter define and format correctly return define be return value define raise list and define be the error define for example raise usage example be there a usage example request visual if applicable be there currently visual if not will it clarify the content submit a pull request be you plan to also submit a pull request to fix the issue see the docs contributor guide and the doc style guide
tensorflowtensorflow
tf 2 0 gradient of tf keras layer dense with bias produce non deterministic result
Bug
system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary pip install tf nightly gpu 2 0 preview 2 0 0 dev20190826 tensorflow version use command below v1 12 1 9705 g0fbc138 2 0 0 dev20190826 python version 3 6 9 cuda cudnn version 10 0 0 7 3 1 gpu model and memory titan xp 11 gb describe the current behavior 1 the follow code produce the same numpy data0 pkl initial params0 pkl loss0 pkl all the time which mean same data same parameter same loss but grad0 pkl change I check it with diff command between generate file 2 it seem only with tensorflow 2 0 gpu version this happen I check the code with tf nightly 2 0 preview 2 0 0 dev20190830 cpu version it be ok show deterministic result 3 use custom dense layer tf keras layers relu be ok also show deterministic result custom dense layer be class mydenselayer tf keras layers layer def init self num output super mydenselayer self init self num output num output def build self input shape self kernel self add variable kernel initializer tf keras initializers glorotuniform shape int input shape 1 self num output self bias self add variable bias initializer tf zeros initializer shape self num output def call self input return tf matmul input self kernel self bias and net with net tf keras sequential net add mydenselayer 100 net add tf keras layers relu net add mydenselayer 100 net add tf keras layers relu net add mydenselayer 1 net build none input dim when use bias false option apply on hide layer be be ok show deterministic result describe the expect behavior since cudnn force to behave determinisically os environ tf cudnn deterministic true and all the datum parameter loss be the same grad be expect to be same code to reproduce the issue import os import pickle import random 
import numpy as np import tensorflow as tf os environ tf cudnn deterministic true seed 1234 np random seed seed tf random set seed seed random seed seed nn model input dim 5 net tf keras sequential net add tf keras layer dense 100 activation tf nn relu kernel initializer none net add tf keras layer dense 100 activation tf nn relu kernel initializer none net add tf keras layer dense 1 activation none kernel initializer none net build none input dim initial v param initial v param net variable update nn model one step x np random normal loc 0 scale 1 size 1000 input dim y np random normal loc 0 scale 1 size 1000 with tf gradienttape as tape loss tf reduce mean tf square y net x grad tape gradient loss net trainable variable tag for compare file tag 1 with open numpy datum pkl format tag wb as f pickle dump x y f with open initial param pkl format tag wb as f pickle dump initial v param f with open loss pkl format tag wb as f pickle dump loss f with open grad pkl format tag wb as f pickle dump grad f
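The report's methodology, re-running one step under identical seeds and diffing the pickled results, generalizes to a small determinism check; `random.gauss` stands in for the GPU gradient computation:

```python
import random

def run_seeded(fn, seed):
    # Seed every RNG in scope (just `random` here), run fn, return output.
    random.seed(seed)
    return fn()

def is_deterministic(fn, seed, runs=3):
    # Re-run fn under the same seed and compare outputs -- the test the
    # report applies to its pickles, where only grads{tag}.pkl differed.
    first = run_seeded(fn, seed)
    return all(run_seeded(fn, seed) == first for _ in range(runs - 1))

step = lambda: [random.gauss(0.0, 1.0) for _ in range(5)]
assert is_deterministic(step, seed=1234)
```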
tensorflowtensorflow
tf 2 0 api docs tf ones like
Bug
url s with the issue description of the issue what need change change create a tensor with all element set to zero to create a tensor with all element set to one
tensorflowtensorflow
session that be closed and reset and all input and output be out of scope do not release gpu memory
Bug
Please make sure that this is a bug as per our GitHub policy: we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS 10.13
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.12.0
- Python version: n/a
- Bazel version (if compiling from source): 0.19.2
- GCC/compiler version (if compiling from source): Apple LLVM version 9.1.0 (clang-902.0.39.2)
- CUDA/cuDNN version: 10.0.130
- GPU model and memory: GTX 1060

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
Memory from a TensorFlow session is not freed when the session is closed and no variables are in scope; the memory is only freed on program exit.

Describe the expected behavior
Memory should be freed from the GPU on calling Close() and reset() in C++, or there should be some other way to release GPU resources without forking a process.

Code to reproduce the issue

```cpp
// (several #include directives; the header names were lost in the
//  original report, except for the following one)
#include <cuda_runtime_api.h>

int freeCudaMem() {
  int gpuCount;
  CUresult res;
  CUdevice dev;
  CUcontext ctx;
  int result = 0;
  cuInit(0);
  cuDeviceGetCount(&gpuCount);
  size_t curClockRate = 0;
  size_t curMaxMem = 0;
  for (int i = 0; i < gpuCount; i++) {
    cudaSetDevice(i);
    cudaDeviceProp curDeviceProp;
    cudaError_t err = cudaGetDeviceProperties(&curDeviceProp, i);
    if (err != cudaSuccess) continue;
    size_t free_mem, total_mem;
    cuDeviceGet(&dev, i);
    cuCtxCreate(&ctx, 0, dev);
    res = cuMemGetInfo(&free_mem, &total_mem);
    result += (int)free_mem;
  }
  return result;
}

int main(int argc, char* argv[]) {
  setenv("TF_CPP_MIN_VLOG_LEVEL", "3", 1);
  setenv("TF_CPP_MIN_LOG_LEVEL", "3", 1);
  std::cout << "Start of program: " << freeCudaMem() << std::endl;

  tensorflow::SessionOptions options;
  tensorflow::ConfigProto& config = options.config;
  config.mutable_gpu_options()->set_allow_growth(true);
  auto* device_count = options.config.mutable_device_count();
  device_count->insert({"GPU", 1});
  device_count->insert({"CPU", 1});
  std::unique_ptr<tensorflow::Session> session(tensorflow::NewSession(options));
  tensorflow::GraphDef graph_def;
  std::string graphFile = "path/to/large_frozen_model.pb";
  tensorflow::Status graphLoadedStatus =
      ReadBinaryProto(tensorflow::Env::Default(), graphFile.c_str(), &graph_def);
  auto inputTensor2 = tensorflow::Tensor(tensorflow::DT_UINT8, {1, 2048, 2048, 3});
  std::vector<tensorflow::Tensor> outputs;
  tensorflow::Status session_create_status = session->Create(graph_def);
  tensorflow::Status run_status =
      session->Run({{"inputTensor", inputTensor2}}, {"outputTensor"}, {}, &outputs);
  outputs.clear();
  int number;
  std::cout << "After session: " << freeCudaMem() << std::endl;
  tensorflow::Status closeStatus = session->Close();
  std::cout << "Close status: " << closeStatus << std::endl;
  std::cout << "After close: " << freeCudaMem() << std::endl;
  session.reset();
  std::cout << "After reset: " << freeCudaMem() << std::endl;

  std::cout << "Start of program2: " << freeCudaMem() << std::endl;
  tensorflow::SessionOptions options2;
  tensorflow::ConfigProto& config2 = options2.config;
  config2.mutable_gpu_options()->set_allow_growth(true);
  auto* device_count2 = options2.config.mutable_device_count();
  device_count2->insert({"GPU", 1});
  device_count2->insert({"CPU", 1});
  std::unique_ptr<tensorflow::Session> session2(tensorflow::NewSession(options2));
  tensorflow::GraphDef graph_def2;
  std::string graphFile2 = "path/to/large_frozen_model.pb";
  tensorflow::Status graphLoadedStatus2 =
      ReadBinaryProto(tensorflow::Env::Default(), graphFile2.c_str(), &graph_def2);
  auto inputTensor22 = tensorflow::Tensor(tensorflow::DT_UINT8, {1, 2048, 2048, 3});
  std::vector<tensorflow::Tensor> outputs2;
  tensorflow::Status session_create_status2 = session2->Create(graph_def2);
  tensorflow::Status run_status2 =
      session2->Run({{"inputTensor", inputTensor22}}, {"outputTensor"}, {}, &outputs2);
  outputs2.clear();
  int number2;
  std::cout << "After session2: " << freeCudaMem() << std::endl;
  tensorflow::Status closeStatus2 = session2->Close();
  std::cout << "Close status2: " << closeStatus2 << std::endl;
  std::cout << "After close2: " << freeCudaMem() << std::endl;
  session2.reset();
  std::cout << "After reset2: " << freeCudaMem() << std::endl;

  std::cout << "Start of program3: " << freeCudaMem() << std::endl;
  tensorflow::SessionOptions options3;
  tensorflow::ConfigProto& config3 = options3.config;
  config3.mutable_gpu_options()->set_allow_growth(true);
  auto* device_count3 = options3.config.mutable_device_count();
  device_count3->insert({"GPU", 1});
  device_count3->insert({"CPU", 1});
  std::unique_ptr<tensorflow::Session> session3(tensorflow::NewSession(options3));
  tensorflow::GraphDef graph_def3;
  std::string graphFile3 = "path/to/large_frozen_model.pb";
  tensorflow::Status graphLoadedStatus3 =
      ReadBinaryProto(tensorflow::Env::Default(), graphFile3.c_str(), &graph_def3);
  auto inputTensor32 = tensorflow::Tensor(tensorflow::DT_UINT8, {1, 2048, 2048, 3});
  std::vector<tensorflow::Tensor> outputs3;
  tensorflow::Status session_create_status3 = session3->Create(graph_def3);
  tensorflow::Status run_status3 =
      session3->Run({{"inputTensor", inputTensor32}}, {"outputTensor"}, {}, &outputs3);
  outputs3.clear();
  int number3;
  std::cout << "After session3: " << freeCudaMem() << std::endl;
  tensorflow::Status closeStatus3 = session3->Close();
  std::cout << "Close status3: " << closeStatus3 << std::endl;
  std::cout << "After close3: " << freeCudaMem() << std::endl;
  session3.reset();
  std::cout << "After reset3: " << freeCudaMem() << std::endl;
}
```

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Here is the output from the program above:

Start of program: 1031380992
After session: 700043264
Close status: OK
After close: 635219968
After reset: 570400768
Start of program2: 505581568
After session2: 440766464
Close status2: OK
After close2: 375943168
After reset2: 311111680
Start of program3: 246292480
After session3: 181473280
Close status3: OK
After close3: 116654080
After reset3: 1013313536

Note that the available GPU memory keeps going down, even though the sessions have been closed and reset and all variables are out of scope.
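The reproduction brackets every create/run/close/reset cycle with freeCudaMem() readings and compares them. The same measure-act-measure leak probe can be sketched generically in plain Python (this helper is an illustration written for this issue, not TensorFlow or CUDA API):

```python
import gc

def leaked_after(make_and_drop, measure):
    """Run a create/close/drop cycle and report how much of a resource
    was not returned (positive delta = leaked), mirroring the
    freeCudaMem() bracketing in the C++ reproduction above."""
    before = measure()        # free resource before the cycle
    make_and_drop()           # create, use, and drop the resource
    gc.collect()              # give the runtime a chance to clean up
    return before - measure() # free resource lost across the cycle
```

With a real session the `measure` callback would query free GPU memory; here any monotone resource counter works, which makes the pattern easy to unit-test.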
tensorflow/tensorflow
TensorFlow 2.0: combining model.add_loss and a Keras loss function in training doesn't work
Bug
I was reading the TensorFlow 2.0 tutorials, in the VAE section. I followed the tutorial, but the model doesn't work as expected, despite running the notebook directly from the given Google Colab. The results are actually the same as in the tutorial, i.e. the loss values are very similar, but if you look at the outputs you'll see that the model can't reconstruct the inputs at all, i.e. it outputs the same image for all inputs. This seems to be a mistake in the tutorial itself when combining model.add_loss and a Keras loss. Starting from the original code, I changed the MSE loss to BinaryCrossentropy, but the results are still the same. Later I tried computing the BinaryCrossentropy loss explicitly in my forward pass and then using model.add_loss in addition to the KL-divergence loss, i.e. using only model.add_loss to calculate the loss. This way the model can actually learn the data, and the outputs look good enough. So I have a question about model.add_loss versus a loss given as a function that takes (y_true, y_pred), i.e. a Keras loss: the updated code works only when the loss can be calculated in the forward pass, e.g. the KL divergence or the reconstruction loss. How can I combine model.add_loss and a Keras loss correctly in a case where the model needs the ground truth of the output, e.g. a denoising VAE?
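For reference, the quantity that ends up being computed in the forward pass and registered via model.add_loss is the binary cross-entropy reconstruction term plus the closed-form KL term. A minimal plain-Python sketch of that combined loss (no TensorFlow; the function name and signature here are mine, not from the tutorial):

```python
import math

def vae_total_loss(x, x_hat, z_mean, z_log_var, eps=1e-7):
    """Per-sample VAE loss: binary cross-entropy between the input x
    and the reconstruction x_hat, plus the analytic KL divergence
    KL(q(z|x) || N(0, I)) over the latent statistics."""
    bce = 0.0
    for xi, xh in zip(x, x_hat):
        xh = min(max(xh, eps), 1.0 - eps)  # clip to avoid log(0)
        bce -= xi * math.log(xh) + (1 - xi) * math.log(1 - xh)
    kl = 0.0
    for m, lv in zip(z_mean, z_log_var):
        kl += -0.5 * (1 + lv - m * m - math.exp(lv))
    return bce + kl
```

Because both terms depend only on tensors available during the forward pass, this sum can be handed to model.add_loss; a (y_true, y_pred) Keras loss, by contrast, has no access to the latent statistics.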
tensorflow/tensorflow
tf.keras fit() runs forever with 0 samples
Bug
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): v2.0.0-beta1-5101-gc75bb66a99 2.0.0-rc0
- Python version: 3.7.4
- CUDA/cuDNN version: n/a (CPU)

Describe the current behavior
The code runs forever, although there is nothing to be trained (no parameters, no data):

2019-08-30 15:50:26.413745: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Epoch 1/2
WARNING:tensorflow:The list of trainable weights is empty. Make sure that you are not setting model.trainable to False before compiling the model.

Describe the expected behavior
The code stops pretty soon.

Code to reproduce the issue

```python
import numpy as np
from tensorflow.keras import layers, models

shape = (10, 10, 1)
layer = layers.Input(shape)
model = models.Model(layer, layer)
model.compile(loss='mse')
data = np.zeros((0, *shape))
model.fit(x=data, y=data, epochs=2, verbose=2)
```

Other info / logs
It is somewhat annoying: if you set the number of training samples to 0 by mistake before starting to train, you might not notice it for a while.
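Until fit() itself rejects empty inputs, a small guard run before training catches the mistake described above early. This helper is a suggested workaround, not part of the tf.keras API:

```python
def check_nonempty(x, y):
    """Fail fast when the training set is empty (or mismatched)
    instead of letting fit() spin with nothing to train on."""
    n = len(x)
    if n == 0:
        raise ValueError("fit() would be called with 0 training samples")
    if len(y) != n:
        raise ValueError(f"x has {n} samples but y has {len(y)}")
    return n
```

Calling `check_nonempty(data, data)` right before `model.fit(...)` turns the silent hang into an immediate, descriptive error.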
tensorflow/tensorflow
ImportError: cannot import name 'dense_features' from 'tensorflow.python.feature_column'
Bug
```python
>>> from tensorflow.python.feature_column import dense_features
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'dense_features' from 'tensorflow.python.feature_column'
```

tensorflow: 1.14.0
python: 3.7.3

What is the issue in this case?
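One likely cause is version skew: `tensorflow.python.*` is a private, unstable namespace, so a symbol can exist in one release and not in another. A generic way to probe for such a symbol and fall back gracefully (this helper is written here for illustration; it is not TensorFlow API):

```python
import importlib

def try_import(module_name, attr):
    """Return module.attr if it can be imported, else None — a safe
    way to probe for symbols that move between library versions."""
    try:
        module = importlib.import_module(module_name)
        return getattr(module, attr)
    except (ImportError, AttributeError):
        return None
```

With this pattern, code can probe the private path and, when it is missing, fall back to a public entry point (e.g. the Keras-facing feature-column layer available in newer releases) instead of crashing at import time.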
tensorflow/tensorflow
How to break out of a tf.while_loop inside the body function
Bug
The issue is like this: I wrote Python code for segmenting Chinese sentences. It's easy, but when I tried to do the same thing with TF, I was unlucky. The specific issue is as follows.

Python code:

```python
x = list_  # the sentence characters (value elided in the original report)
y = [3, 3, 0, 2, 0, 1, 2]

def func(x, y):
    res = []
    for i in range(len(x)):
        if y[i] == 3:
            res.append(x[i])
        elif y[i] == 0:
            str_ = ''
            while i < len(x):
                str_ += x[i]
                if y[i] == 2:
                    break
                i += 1
            res.append(str_)
    return res

# call func
func(x, y)
```

TF code:

```python
x = tf.constant(list_)
y = tf.constant([3, 3, 0, 2, 0, 1, 2, 3, 3])
i = tf.constant(0)
label = tf.constant(0)
out = tf.constant([], dtype=tf.string)

def cond(i, label, out):
    return tf.not_equal(i, x.shape[0])

def body(i, label, out):
    str_ = tf.constant('')

    def cond2(i, label, str_):
        return tf.not_equal(i, x.shape[0])

    def body2(i, label, str_):
        str_ = str_ + x[i]

        def continue_(str_):
            def func_1():
                return str_
            return func_1

        def break_():
            def func_2():
                return str_
            return func_2

        str_ = tf.cond(tf.equal(y[i], 2), break_(), continue_(str_))
        return tf.add(i, 1), y[i], str_

    i, label, str_ = tf.while_loop(
        cond2, body2, [i, label, str_],
        shape_invariants=[i.get_shape(), label.get_shape(), tf.TensorShape(None)])

    return tf.add(i, 1), y[i], tf.cond(
        tf.equal(y[i], 3),
        lambda: tf.concat([out, x[i:i + 1]], axis=0),
        lambda: tf.cond(tf.equal(y[i], 0),
                        lambda: tf.concat([out, [str_]], axis=0),
                        lambda: out))

i, label, out = tf.while_loop(
    cond, body, [i, label, out],
    shape_invariants=[i.get_shape(), label.get_shape(), tf.TensorShape(None)])
```

Session run:

```python
with tf.Session() as sess:
    o_i, o_label, o_out = sess.run([i, label, out])
```

I want the TF version to produce the same result as the Python version, and I know what causes the wrong result, but I can't break out of the tf.while_loop inside the body function. I have no idea how to do this — could you give me some advice?

My env: tensorflow 1.13.1, OS: Win10, Python 3.5
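tf.while_loop has no break statement; the usual workaround is to carry the stopping condition as an extra boolean loop variable that cond() checks, so the body "breaks" by setting it to False. The pattern is sketched below with a tiny pure-Python stand-in for tf.while_loop (all names here are mine; this is not TensorFlow code):

```python
def while_loop(cond, body, loop_vars):
    """Minimal stand-in for tf.while_loop's calling convention, used
    only to demonstrate the break-via-flag pattern."""
    while cond(*loop_vars):
        loop_vars = body(*loop_vars)
    return loop_vars

def collect_until_tag2(chars, tags, start):
    """Accumulate characters from `start` until the end tag (2) is
    consumed — the inner loop of the segmenter, with the break
    condition folded into a keep_going loop variable."""
    def cond(i, word, keep_going):
        return keep_going and i < len(chars)

    def body(i, word, keep_going):
        word = word + chars[i]
        keep_going = tags[i] != 2  # "break" once the end tag is seen
        return i + 1, word, keep_going

    return while_loop(cond, body, (start, '', True))
```

In the real graph the same shape applies: add `keep_going` to loop_vars, return `tf.not_equal(y[i], 2)` for it in the body, and make cond() `tf.logical_and(keep_going, tf.not_equal(i, x.shape[0]))`.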
tensorflow/tensorflow
Fatal error when using TF_CUDNN_USE_AUTOTUNE=0
Bug
Please make sure that this is a bug as per our GitHub policy: we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): v1.12.1-10080-g6e0893c79c 1.14.0
- Python version: 3.7
- Bazel version (if compiling from source): 0.24.1
- GCC/compiler version (if compiling from source): 7.4
- CUDA/cuDNN version: 10.0 / 7.6.2
- GPU model and memory: V100, 16 GB

Describe the current behavior
I have a model which runs fine. However, after I use `export TF_CUDNN_USE_AUTOTUNE=0` in order to save some autotuning time, it starts to crash with this error in the first iteration:

2019-08-30 00:05:05.705707: F tensorflow/stream_executor/cuda/cuda_dnn.cc:262] Unsupported cudnn convolution backward algorithm for filter: 6
zsh: abort (core dumped)  python3 xx.py

Code to reproduce the issue
I can try to provide one, but I think the information above is probably already enough to pin down the problem.

Other info / logs
The algorithm 6 mentioned in the error message corresponds to CUDNN_CONVOLUTION_BWD_FILTER_ALGO_FFT_TILING, which is explicitly disabled in TensorFlow (cuda_dnn.cc, L258-L265). When I set autotune to 0, cuDNN chooses the best algorithm with its own heuristics, and I suspect that TensorFlow crashes because cuDNN picks this algorithm that TF has disabled. cc @chsigg
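The reporter's suspicion can be stated as a toy model (plain Python written for this summary, not the actual cuda_dnn.cc logic): with autotuning on, TF only ever selects from candidates it has already filtered, but with TF_CUDNN_USE_AUTOTUNE=0 it takes cuDNN's heuristic choice directly and then aborts when that choice is on TF's own disabled list.

```python
# Algorithm numbering follows the issue:
# 6 = CUDNN_CONVOLUTION_BWD_FILTER_ALGO_FFT_TILING.
DISABLED_BWD_FILTER_ALGOS = {6}

def pick_algorithm(heuristic_choice, autotune_enabled, autotuned_choice):
    """Toy model of the suspected interaction: autotuned choices come
    from a pre-filtered candidate set; the heuristic choice does not."""
    algo = autotuned_choice if autotune_enabled else heuristic_choice
    if algo in DISABLED_BWD_FILTER_ALGOS:
        # Mirrors the fatal check quoted in the report.
        raise RuntimeError(
            "Unsupported cudnn convolution backward algorithm for filter: %d" % algo)
    return algo
```

Under this model, any workload where cuDNN's heuristic prefers FFT_TILING for the backward-filter pass would crash exactly as described, while the autotuned path never hits the check.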