| repository | issue title | labels | body |
|---|---|---|---|
tensorflow/tensorflow | Connecting to invalid output x of source node y, which has z outputs (tf2.1 nightly) | Bug | Occurs in graph execution (not eager) with a `return_sequences=True` layer followed by a `return_sequences=False` layer, but not with either on its own. Custom RNN layer; the layer involves `nn.moments` and `nn.batch_normalization` ops in `call`, as described here. The error occurs upon model `train_on_batch` / `fit` / `save_weights` / `save`. I tried `tf.compat.v1.experimental.output_all_intermediates` with both `True` and `False`; it didn't help. It doesn't occur with the built-in LSTM layer. Any resolution? Error trace:

```python
  File "c:/dl code dev/bn indrnn/main2.py", line 29, in <module>
    model.train_on_batch(x, y)
  File "d:/anaconda/envs/tf2n_env/lib/site-packages/tensorflow/python/keras/engine/training_v1.py", line 1083, in train_on_batch
    outputs = self.train_function(ins)  # pylint: disable=not-callable
  File "d:/anaconda/envs/tf2n_env/lib/site-packages/tensorflow/python/keras/backend.py", line 3597, in __call__
    session = get_session(inputs)
  File "d:/anaconda/envs/tf2n_env/lib/site-packages/tensorflow/python/keras/backend.py", line 528, in get_session
    _initialize_variables(session)
  File "d:/anaconda/envs/tf2n_env/lib/site-packages/tensorflow/python/keras/backend.py", line 943, in _initialize_variables
    [variables_module.is_variable_initialized(v) for v in candidate_vars])
  File "d:/anaconda/envs/tf2n_env/lib/site-packages/tensorflow/python/client/session.py", line 958, in run
    run_metadata_ptr)
  File "d:/anaconda/envs/tf2n_env/lib/site-packages/tensorflow/python/client/session.py", line 1181, in _run
    feed_dict_tensor, options, run_metadata)
  File "d:/anaconda/envs/tf2n_env/lib/site-packages/tensorflow/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "d:/anaconda/envs/tf2n_env/lib/site-packages/tensorflow/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
InvalidArgumentError: Node 'training/Nadam/gradients/gradients/ind_rnn/while_grad/ind_rnn/while_grad': Connecting to invalid output 46 of source node ind_rnn/while which has 46 outputs. Try using tf.compat.v1.experimental.output_all_intermediates(True).
```

Update: while there are no errors in eager, the gradients are extremely small (1e-9 to 1e-19), whereas the same layer usually has 1e-6 to 1e-2; this may or may not be a design-rather-than-a-bug problem. |
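A minimal sketch of the reported setup (a custom cell applying `nn.moments` / `nn.batch_normalization` inside `call`, stacked `return_sequences=True` then `False`); the cell, its weights, and all names here are assumptions for illustration, not the reporter's actual layer:

```python
import tensorflow as tf

class BNRecurrentCell(tf.keras.layers.Layer):
    """Hypothetical minimal cell that batch-normalizes its pre-activation,
    sketching the structure described in the report."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = units

    def build(self, input_shape):
        self.kernel = self.add_weight(
            name="kernel", shape=(input_shape[-1], self.units),
            initializer="glorot_uniform")
        self.recurrent = self.add_weight(
            name="recurrent", shape=(self.units,), initializer="ones")
        super().build(input_shape)

    def call(self, inputs, states):
        h = tf.matmul(inputs, self.kernel) + states[0] * self.recurrent
        # normalize over the batch axis, as in the report
        mean, var = tf.nn.moments(h, axes=[0], keepdims=True)
        h = tf.nn.batch_normalization(h, mean, var, None, None, 1e-5)
        h = tf.nn.relu(h)
        return h, [h]

model = tf.keras.Sequential([
    tf.keras.Input((5, 3)),
    tf.keras.layers.RNN(BNRecurrentCell(8), return_sequences=True),
    tf.keras.layers.RNN(BNRecurrentCell(8)),
])
model.compile(optimizer="nadam", loss="mse")
```

Run eagerly, the forward pass works; the report's failure appears only under graph execution in the affected version.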
tensorflow/tensorflow | decode_wav_op.cc:55 : Invalid argument: Bad file size for WAV: Expected 16 or 18, but got 40 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Fedora 31. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.1. Python version: 3.7. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a.

Describe the current behavior:

    import tensorflow as tf
    from_file = tf.io.read_file("/tmp/glaucidium-minutissimum-22180.wav")
    tf.audio.decode_wav(from_file)

Error:

    2020-02-17 14:20:35.070542: W tensorflow/core/framework/op_kernel.cc:1675] OP_REQUIRES failed at decode_wav_op.cc:55 : Invalid argument: Bad file size for WAV: Expected 16 or 18, but got 40

Describe the expected behavior: no error.

Code to reproduce the issue: see above.

Other info / logs: the file is played without problems by e.g. sox:

    Input File     : 'glaucidium-minutissimum-22180.wav'
    Channels       : 1
    Sample Rate    : 16000
    Precision      : 16-bit
    Duration       : 00:00:46.29 = 740624 samples ~ 3471.68 CDDA sectors
    File Size      : 1.48M
    Bit Rate       : 256k
    Sample Encoding: 16-bit Signed Integer PCM

Also, the number of samples does not seem to be the problem, as after a simple resampling (`sox glaucidium-minutissimum-22180.wav -r 16000 o5.wav`) the new output is decoded without error. The relevant source location is L238, but it is not clear to me what exactly is going on, so it would be great if this could be investigated. Thanks. |
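A quick way to confirm whether a failing file carries a 40-byte `fmt ` chunk (the WAVE_FORMAT_EXTENSIBLE layout that `decode_wav` rejects, versus the 16- or 18-byte PCM layouts it accepts) is to walk the RIFF chunks directly. This is a sketch; `fmt_chunk_size` is a hypothetical helper, not a TensorFlow API:

```python
import struct

def fmt_chunk_size(path):
    """Return the declared size of the WAV file's 'fmt ' chunk.

    decode_wav accepts only 16 (plain PCM) or 18; 40 indicates
    WAVE_FORMAT_EXTENSIBLE, which triggers the reported error.
    """
    with open(path, "rb") as f:
        riff, _, wave_id = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave_id != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                raise ValueError("no 'fmt ' chunk found")
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"fmt ":
                return chunk_size
            # skip this chunk (chunks are word-aligned)
            f.seek(chunk_size + (chunk_size & 1), 1)
```

Re-encoding with sox, as the reporter did, rewrites the header with a 16-byte PCM `fmt ` chunk, which is why the resampled copy decodes cleanly.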
tensorflow/tensorflow | Exception in gradient computation for loss when combining Tensor and EagerTensor | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Mint 19. TensorFlow installed from (source or binary): binary (pip). TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de 2.1.0. Python version: 3.6.

Describe the current behavior: the following minimal example works when using the second (alternative) implementation, but raises an error for the first:

```python
ip = layers.Input((10,))
with tf.GradientTape() as tape:
    t = tf.zeros((10, 10))
    tape.watch(t)
    # this line throws an error:
    result = keras.losses.MeanSquaredError()(t, ip)
    # but this one works:
    result = (t - ip) ** 2 / tf.cast(tf.reduce_prod(tf.shape(t)), tf.float32)
tape.gradient(result, t)
```

The error is:

      File "wrapper.py", line 101, in <module>
        print(tape.gradient(result, t))
      File "venv/lib/python3.6/site-packages/tensorflow_core/python/eager/backprop.py", line 1029, in gradient
        unconnected_gradients=unconnected_gradients)
      File "venv/lib/python3.6/site-packages/tensorflow_core/python/eager/imperative_grad.py", line 77, in imperative_grad
        compat.as_str(unconnected_gradients.value))
      File "venv/lib/python3.6/site-packages/tensorflow_core/python/eager/backprop.py", line 141, in _gradient_function
        return grad_fn(mock_op, *out_grads)
      File "venv/lib/python3.6/site-packages/tensorflow_core/python/ops/math_grad.py", line 258, in _MeanGrad
        sum_grad = _SumGrad(op, grad)[0]
      File "venv/lib/python3.6/site-packages/tensorflow_core/python/ops/math_grad.py", line 213, in _SumGrad
        op.inputs[1])
      File "venv/lib/python3.6/site-packages/tensorflow_core/python/ops/math_ops.py", line 3502, in reduced_shape
        input_shape = input_shape.numpy()
    AttributeError: 'Tensor' object has no attribute 'numpy'

Describe the expected behavior: as the second implementation works, I would also expect the first to work. I have not done exhaustive testing of different loss functions, but the same problem also occurs when replacing `MeanSquaredError` with `CategoricalCrossentropy`. The problem seems to be caused by the fact that only one of the arguments to the loss function is an EagerTensor. |
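For contrast, when both loss arguments are eager tensors the same loss differentiates fine, which supports the diagnosis that the symbolic `Input` tensor is the trigger. A sketch under that assumption, with an eager stand-in for the symbolic tensor:

```python
import tensorflow as tf

t = tf.zeros((10, 10))
pred = tf.ones((10, 10))  # eager stand-in for the symbolic Input tensor
with tf.GradientTape() as tape:
    tape.watch(t)
    loss = tf.keras.losses.MeanSquaredError()(t, pred)

# d/dt mean((t - pred)**2) = 2 * (t - pred) / 100 = -0.02 everywhere here
grad = tape.gradient(loss, t)
```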
tensorflow/tensorflow | Gradient support for the unique operator | Bug | I was trying to run a custom graph with only a `unique` op, for debugging:

    with tf.GradientTape() as g:
        y = tf.unique(x)
    dy_dx = g.gradient(y, x)

The forward pass runs fine, but I get this error during the backward pass:

    LookupError: gradient registry has no entry for: Unique

Is the gradient yet to be supported for `unique`, or is there no gradient for `unique`? Please help me understand why, in case it's the second one. |
tensorflow/tensorflow | Wrong doc for categorical hinge loss | Bug | Description of issue (what needs changing): the documentation for `tensorflow.keras.losses.categorical_hinge` is wrong. Clear description:

```python
@keras_export('keras.losses.categorical_hinge')
def categorical_hinge(y_true, y_pred):
  """Computes the categorical hinge loss between `y_true` and `y_pred`.

  `loss = maximum(neg - pos + 1, 0)`
  where `neg = sum(y_true * y_pred)` and `pos = maximum(1 - y_true)`

  Args:
    y_true: The ground truth values. `y_true` values are expected to be -1
      or 1. If binary (0 or 1) labels are provided, they will be converted
      to -1 or 1.
    y_pred: The predicted values.

  Returns:
    Categorical hinge loss values.
  """
  y_pred = ops.convert_to_tensor_v2(y_pred)
  y_true = math_ops.cast(y_true, y_pred.dtype)
  pos = math_ops.reduce_sum(y_true * y_pred, axis=-1)
  neg = math_ops.reduce_max((1. - y_true) * y_pred, axis=-1)
  return math_ops.maximum(0., neg - pos + 1.)
```

It should be: `neg = maximum((1 - y_true) * y_pred)` and `pos = sum(y_true * y_pred)`. |
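A quick NumPy check of the corrected formulas, mirroring the implementation shown above (values are made up for illustration):

```python
import numpy as np

def categorical_hinge(y_true, y_pred):
    # pos = sum(y_true * y_pred), neg = max((1 - y_true) * y_pred),
    # loss = max(0, neg - pos + 1) -- matching the implementation, not the docstring
    pos = np.sum(y_true * y_pred, axis=-1)
    neg = np.max((1.0 - y_true) * y_pred, axis=-1)
    return np.maximum(0.0, neg - pos + 1.0)

y_true = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.1, 0.7, 0.2])
loss = categorical_hinge(y_true, y_pred)  # pos = 0.7, neg = 0.2 -> loss = 0.5
```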
tensorflow/tensorflow | TFLite gets wrong outputs when the Hexagon delegate is enabled | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): (unfilled). OS platform and distribution: Linux Ubuntu 16.04. Mobile device: Oppo Reno Ace. TensorFlow installed from (source or binary), TensorFlow version, Python version, Bazel version, GCC/compiler version, CUDA/cuDNN version, GPU model and memory: (unfilled).

You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`.

Describe the current behavior: I used this model for testing. I set all input values to 1 (just for test), then printed the first 50 values of output 0; see the picture below. On the left are the values when run on CPU; on the right are the values with the Hexagon delegate enabled (CPU + DSP). CPU and DSP give different outputs.

Describe the expected behavior: CPU and DSP should give the same output values.

Code to reproduce the issue / Other info / logs: (unfilled). |
tensorflow/tensorflow | Starting a thread in a hook results in failure to exit when a train step fails | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information: OS platform and distribution: ubuntu16.04 x86_64 GNU/Linux. (remaining template fields unfilled)

Describe the current behavior: I use the Estimator for training and implemented a hook (I'm sorry, but for some reason I can't show the source code). In the hook's `after_create_session` method I start a Python thread, a bit like TPUInfeedOutfeedSessionHook, which `sess.run`s a blocking-type op (e.g. dequeue). I also implemented the `end` method, which runs an op (e.g. enqueue) to prevent the thread from getting stuck. But Estimator explicitly documents the hook behavior: if the `sess.run` in a train step returns failure, it will not call the `end` and `after_run` methods. In the code, MonitoredSession calls the `_close_internal` method, which invokes `session.close()`, which waits for all worker threads to end; but obviously the thread executing `sess.run` in my hook's `after_create_session` will never be able to exit.

Describe the expected behavior: I feel my usage is not strange; the referenced implementation (TPUEstimator's TPUInfeedOutfeedSessionHook) is similar to my hook. Does TensorFlow not allow starting a thread in a hook? Once it's started, when something goes wrong in the training step the next call is `session.close()`, and eventually the TensorFlow process gets stuck.

Code to reproduce the issue:

```python
def _my_thread_task(self, session):
    while True:
        try:
            session.run(self._dequeue_op)
        except tf.errors.OutOfRangeError:
            # this signal will be sent by the end() method
            break
        except Exception as e:
            raise RuntimeError(e)

def after_create_session(self, session, coord):
    threading.Thread(target=self._my_thread_task, args=(session,)).start()
```

Other info / logs: (none). |
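A hedged sketch of one way to keep such a worker thread from blocking `session.close()`: make the thread daemonic and gate its loop on an event that any cleanup path can set. The class and method names below are illustrative (in real code the class would subclass `tf.estimator.SessionRunHook`), not the reporter's hook:

```python
import threading
import tensorflow as tf

class QueueFeedHook:  # illustrative; would subclass tf.estimator.SessionRunHook
    """Run a blocking op on a daemon side thread that can always be stopped,
    so a failed train step followed by session.close() cannot deadlock."""

    def __init__(self, dequeue_op, enqueue_op):
        self._dequeue_op = dequeue_op
        self._enqueue_op = enqueue_op
        self._stop = threading.Event()

    def after_create_session(self, session, coord):
        self._thread = threading.Thread(
            target=self._loop, args=(session,), daemon=True)
        self._thread.start()

    def _loop(self, session):
        while not self._stop.is_set():
            try:
                session.run(self._dequeue_op)
            except tf.errors.OutOfRangeError:
                break  # sent by end() closing the queue

    def end(self, session):
        self._stop.set()
        session.run(self._enqueue_op)  # unblocks a pending dequeue
        self._thread.join(timeout=5.0)
```

Because the thread is daemonic, even the failure path (where `end` is never called) cannot keep the process alive forever.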
tensorflow/tensorflow | Same input, different output | Bug | I have written a small piece of code and noticed that I get a different result depending on whether I use NumPy or not. This code should give the same result for both `df_dy` and `df_dy_p`, but it does not; it does not matter if I change the order of the calculations:

    from math import e
    import numpy as np

    def calc_sigma(y):
        return 1 / (1 + e ** -y)

    def get_derivatives(x, y):
        with tf.GradientTape(persistent=True) as tape:
            tape.watch(x)
            sigma = calc_sigma(y)
            f = x * sigma * np.power(x, y) ** 2
            p = x * sigma * (x ** y) ** 2
            print(x, y, f, p)
        df_dy = tape.gradient(f, x)
        df_dy_p = tape.gradient(p, x)
        print(df_dy)
        print(df_dy_p)

    x = tf.constant(1.0)
    y = tf.constant(3.0)
    get_derivatives(x, y)

The output is:

    tf.Tensor(1.0, shape=(), dtype=float32)
    tf.Tensor(3.0, shape=(), dtype=float32)
    tf.Tensor(0.12203588, shape=(), dtype=float32)
    tf.Tensor(0.12203588, shape=(), dtype=float32)
    tf.Tensor(0.0625, shape=(), dtype=float32)
    tf.Tensor(0.0014820583, shape=(), dtype=float32)

As one can see, the inputs are the same but the outputs differ. |
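The discrepancy is consistent with NumPy ops silently pulling values off the tape: `np.power` converts the eager tensor to a plain array, so that part of the computation becomes a constant as far as `GradientTape` is concerned, while the `**` version stays recorded. A minimal sketch of the effect (simplified, not the reporter's expressions):

```python
import numpy as np
import tensorflow as tf

x = tf.constant(2.0)
with tf.GradientTape(persistent=True) as tape:
    tape.watch(x)
    f_np = np.power(x, 3)  # NumPy converts x to an ndarray: NOT recorded
    f_tf = x ** 3          # TensorFlow op: recorded on the tape

grad_tf = tape.gradient(f_tf, x)  # 3 * x**2 = 12
```

Asking the tape for the gradient of the NumPy result would come back disconnected, which is why the two "identical" computations differentiate differently.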
tensorflow/tensorflow | Cannot convert a symbolic Tensor (Neg_1:0) to a numpy array | Bug | Hello, I was trying to take derivatives with TensorFlow and got an error telling me to report the behavior to the TensorFlow team, hence this post. My code:

```python
from math import e

def calc_sigma(y):
    return 1 / (1 + np.power(e, -y))

@tf.function
def get_derivative(x, y):
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(x)
        tape.watch(y)
        y_sigma = calc_sigma(y)
        f = x * y_sigma * np.power(x, y) ** 2
        df_dy = tape.gradient(f, y)
    return df_dy

x = tf.constant(1.0)
y = tf.constant(3.0)
print(get_derivative(x, y))
```

If I differentiate with respect to x (`df_dy = tape.gradient(f, x)` in the second-to-last line of the function), it works; however, with y I get the following error:

    WARNING:tensorflow: AutoGraph could not transform <function ...> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Cannot convert a symbolic Tensor (Neg_1:0) to a numpy array.
    WARNING: AutoGraph could not transform <function ...> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Cannot convert a symbolic Tensor (Neg_1:0) to a numpy array.

    NotImplementedError                       Traceback (most recent call last)
    <ipython-input> in <module>
          2 y = tf.constant(3.0)
    ----> 3 print(get_derivative(x, y))
          4 print(get_derivative2(x, y))

    /opt/anaconda3/envs/deeplearning/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
        566       xla_context.Exit()
        567     else:
    --> 568       result = self._call(*args, **kwds)
        569
        570     if tracing_count == self._get_tracing_count():

    /opt/anaconda3/envs/deeplearning/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
        604       # In this case we have not created variables on the first call. So we can
        605       # run the first trace but we should fail if variables are created.
    --> 606       results = self._stateful_fn(*args, **kwds)
        607       if self._created_variables:
        608         raise ValueError("Creating variables on a non-first call to a function")

    /opt/anaconda3/envs/deeplearning/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in __call__(self, *args, **kwargs)
       2360     """Calls a graph function specialized to the inputs."""
       2361     with self._lock:
    -> 2362       graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
       2363     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
       2364

    /opt/anaconda3/envs/deeplearning/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
       2701
       2702       self._function_cache.missed.add(call_context_key)
    -> 2703       graph_function = self._create_graph_function(args, kwargs)
       2704       self._function_cache.primary[cache_key] = graph_function
       2705       return graph_function, args, kwargs

    /opt/anaconda3/envs/deeplearning/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
       2591             arg_names=arg_names,
       2592             override_flat_arg_shapes=override_flat_arg_shapes,
    -> 2593             capture_by_value=self._capture_by_value),
       2594         self._function_attributes,
       2595         # Tell the ConcreteFunction to clean up its graph once it goes out of

    /opt/anaconda3/envs/deeplearning/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
        976                                           converted_func)
        977
    --> 978       func_outputs = python_func(*func_args, **func_kwargs)
        979
        980       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

    /opt/anaconda3/envs/deeplearning/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds)
        437         # Wrapped allows AutoGraph to swap in a converted function. We give
        438         # the function a weak reference to itself to avoid a reference cycle.
    --> 439         return weak_wrapped_fn().__wrapped__(*args, **kwds)
        440     weak_wrapped_fn = weakref.ref(wrapped_fn)
        441

    /opt/anaconda3/envs/deeplearning/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in wrapper(*args, **kwargs)
        966           except Exception as e:  # pylint: disable=broad-except
        967             if hasattr(e, "ag_error_metadata"):
    --> 968               raise e.ag_error_metadata.to_exception(e)
        969             else:
        970               raise

    NotImplementedError: in converted code:

        <ipython-input-20>: get_derivative2
            y_sigma = calc_sigma(y)
        <ipython-input-4>: calc_sigma
            return 1 / (1 + np.power(e, -y))
        /opt/anaconda3/envs/deeplearning/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:728 __array__
            " array.".format(self.name))

        NotImplementedError: Cannot convert a symbolic Tensor (Neg_1:0) to a numpy array. |
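Rewriting the sigmoid with TensorFlow ops avoids the symbolic-to-NumPy conversion inside `tf.function`. A simplified sketch (with `f` reduced to `x * sigma` rather than the reporter's full expression):

```python
import tensorflow as tf

def calc_sigma(y):
    # tf.exp works on symbolic tensors, unlike np.power(e, -y)
    return 1.0 / (1.0 + tf.exp(-y))

@tf.function
def get_derivative(x, y):
    with tf.GradientTape() as tape:
        tape.watch(y)
        f = x * calc_sigma(y)
    # analytically: x * sigma(y) * (1 - sigma(y))
    return tape.gradient(f, y)
```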
tensorflow/tensorflow | tf.keras.backend.set_floatx('float64') causes a dtype-conversion ValueError while computing tf.keras metrics | Bug | System information: Have I written custom code: on Google Colab. Code:

    tf.keras.backend.set_floatx('float64')
    model.compile(
        optimizer=Adam(learning_rate=0.001, clipnorm=1.0, clipvalue=0.5),
        loss={'class_output': BinaryCrossentropy(),
              'decoder_output': BinaryCrossentropy()},
        loss_weights=[0.5, 1.0],
        metrics={'class_output': [tf.metrics.Recall(), tf.metrics.Precision()],
                 'decoder_output': [tf.metrics.Recall(), tf.metrics.Precision()]})

Error:

    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
       1288       raise ValueError(
       1289           "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" %
    -> 1290           (dtype.name, value.dtype.name, value))
       1291     return value
       1292
    ValueError: Tensor conversion requested dtype float64 for Tensor with dtype float32

OS platform and distribution: `os.uname()` → posix.uname_result(sysname='Linux', nodename='ed841897617b', release='4.14.137+', version='#1 SMP Thu Aug 8 02:47:02 PDT 2019', machine='x86_64'). TensorFlow installed from: `pip install tensorflow==2.1.0`. TensorFlow version: `tf.__version__` → 2.1.0. Python version: `python -V` → Python 3.6.9. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: from psutil import virtual_memory; mem = virtual_memory(); print(mem.total / 1024**3, 'GB total physical memory available') → 12.717426300048828 GB.

Describe the current behavior: when using `tf.keras.backend.set_floatx('float64')`, the whole of TF should be set to float64, right? But tf.metrics is not getting set, as shown in the code above.

Describe the expected behavior: all of TF, including tf.metrics, should calculate on the basis of `tf.keras.backend.set_floatx('float64')`.

Code to reproduce the issue:

    import tensorflow as tf
    tf.keras.backend.set_floatx('float64')
    m = tf.keras.metrics.Recall()
    m.update_state([0, 1, 1, 1], [1, 0, 1, 1])
    print('Final result: ', m.result().numpy())

Other info / logs — stacktrace:

    ValueError                                Traceback (most recent call last)
    <ipython-input> in <module>
          6     loss_weights=[0.5, 1.0],
          7     metrics={
    ----> 8         'class_output': [tf.metrics.Recall(), tf.metrics.Precision()],
          9         'decoder_output': [tf.metrics.Recall(), tf.metrics.Precision()]})
         10

    (13 frames)
    .../tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    .../tensorflow_core/python/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, distribute, **kwargs)
    .../tensorflow_core/python/keras/engine/training.py in _handle_metrics(self, outputs, targets, skip_target_masks, sample_weights, masks, return_weighted_metrics, return_weighted_and_unweighted_metrics)
    .../tensorflow_core/python/keras/engine/training.py in _handle_per_output_metrics(self, metrics_dict, y_true, y_pred, mask, weights)
    .../tensorflow_core/python/keras/engine/training_utils.py in call_metric_function(metric_fn, y_true, y_pred, weights, mask)
    .../tensorflow_core/python/keras/metrics.py in __call__(self, *args, **kwargs)
    .../tensorflow_core/python/keras/distribute/distributed_training_utils.py in call_replica_local_fn(fn, *args, **kwargs)
    .../tensorflow_core/python/keras/metrics.py in replica_local_fn(*args, **kwargs)
    .../tensorflow_core/python/keras/utils/metrics_utils.py in decorated(metric_obj, *args, **kwargs)
    .../tensorflow_core/python/keras/metrics.py in update_state(self, y_true, y_pred, sample_weight)
    .../tensorflow_core/python/keras/utils/metrics_utils.py in update_confusion_matrix_variables(variables_to_update, y_true, y_pred, thresholds, top_k, class_id, sample_weight, multi_label, label_weights)
    .../tensorflow_core/python/keras/utils/metrics_utils.py in weighted_assign_add(label, pred, weights, var)
    .../tensorflow_core/python/ops/resource_variable_ops.py in assign_add(self, delta, use_locking, name, read_value)
    .../tensorflow_core/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
       1288       raise ValueError(
       1289           "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" %
    -> 1290           (dtype.name, value.dtype.name, value))
       1291     return value
       1292
    ValueError: Tensor conversion requested dtype float64 for Tensor with dtype float32 |
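A workaround sketch for the repro above: construct the metric with an explicit `dtype` rather than relying on `set_floatx`, so the metric's state variables match the model's float64 tensors (the `dtype` argument is part of the Keras metric constructor):

```python
import tensorflow as tf

tf.keras.backend.set_floatx("float64")

# pass the dtype explicitly; in the reported version set_floatx alone
# does not change the dtype of the metric's state variables
m = tf.keras.metrics.Recall(dtype="float64")
m.update_state([0, 1, 1, 1], [1, 0, 1, 1])  # TP = 2, FN = 1 -> recall = 2/3
```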
tensorflow/tensorflow | Add "see also" references | Bug | This is sort of a follow-up to #33756. The TF docs have undergone huge improvements over the last couple of months. However, one thing I really like about the PyTorch docs which is still mostly missing in the TF docs is "see also" references. A lot of functions have similar or related functionality. Examples include:

1. tf.split / tf.unstack
2. tf.size / tf.shape
3. tf.repeat / tf.concat / tf.tile / tf.stack
4. tf.exp / tf.math.log
5. tf.keras.layers.MaxPool2D / tf.nn.max_pool2d
6. tf.ones / tf.ones_like, and tf.zeros / tf.zeros_like

just to name a few. Often people happen to find one and start using it regularly but remain unaware of the others for quite a while. Even when I do know about all of them, I often find myself wanting to compare the signatures of similar functions to find the one most suitable for the current use case, and in those cases it takes way too many clicks to get from one to the other(s). In short: it would be great if the docs referenced related content; that should help guide people to the right tool for the job right from the start. |
tensorflow/tensorflow | Allow overriding state-spec validation in RNN (feature request) | Bug | We are allowed to define `get_initial_state` for initializing custom states, but not to validate them. I have a scalar hidden state, which raises an exception (per `flat_state_spec[i].shape[1]`, it assumes a 2D tensor). The modification below resolves it, but isn't really a workaround, since it modifies L586 of the parent class for all RNNs. So TF could either include such scalar handling or enable overriding the method:

```python
for i in range(len(flat_cell_state_sizes)):
    state_spec_shape = flat_state_specs[i].shape
    if len(state_spec_shape) == 1 and not tensor_shape.TensorShape(
            # check the scalar case first
            state_spec_shape[0]).is_compatible_with(
                tensor_shape.TensorShape(flat_cell_state_sizes[i])):
        raise validation_error
    elif not tensor_shape.TensorShape(
            # ignore the first axis for init_state, which is for batch
            state_spec_shape[1:]).is_compatible_with(
                tensor_shape.TensorShape(flat_cell_state_sizes[i])):
        raise validation_error
``` |
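For context, a cell whose state is declared as `state_size = 1` (a length-1 vector rather than a true 0-d scalar) satisfies the existing 2-D assumption; the cell below is a hypothetical minimal example, not the reporter's layer:

```python
import tensorflow as tf

class ScalarStateCell(tf.keras.layers.Layer):
    """Hypothetical cell whose hidden state is one value per batch entry.

    Declaring state_size = 1 keeps the state shaped (batch, 1), which the
    validation accepts; a genuinely scalar spec of shape (batch,) is what
    trips the check described above.
    """
    state_size = 1

    def call(self, inputs, states):
        # accumulate the per-timestep feature mean into the scalar state
        h = states[0] + tf.reduce_mean(inputs, axis=-1, keepdims=True)
        return h, [h]

layer = tf.keras.layers.RNN(ScalarStateCell())
out = layer(tf.ones((2, 4, 3)))  # 4 timesteps of all-ones input -> 4.0
```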
tensorflow/tensorflow | First example on the Tensor page results in an error | Bug | URL(s) with the issue: the page has the breadcrumb TensorFlow > API > TensorFlow Core v2.1.0 > Python.

Description of issue (what needs changing): running the first example on the Tensor page results in an error. Clear description — here is the outcome of running the first example:

```py
# Build a dataflow graph.
c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
e = tf.matmul(c, d)
# Construct a `Session` to execute the graph.
sess = tf.compat.v1.Session()
# Execute the graph and store the value that `e` represents in `result`.
result = sess.run(e)
```

    Traceback (most recent call last):
      File "<stdin>", line 2, in <module>
      File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 960, in run
        run_metadata_ptr)
      File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1108, in _run
        raise RuntimeError('The Session graph is empty. Add operations to the '
    RuntimeError: The Session graph is empty. Add operations to the graph before calling run().

I understand from here that a session is no longer required in TF v2, but the Tensor documentation starts off with multiple Session references, which appear to now be obsolete or not required. |
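For reference, the TF2-native form of that first example runs eagerly and needs no Session at all:

```python
import tensorflow as tf

c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
e = tf.matmul(c, d)  # evaluated immediately: [[1, 3], [3, 7]]
```

(The TF1-compat version from the docs would additionally need the ops built inside a `tf.compat.v1.Graph().as_default()` context, or `tf.compat.v1.disable_eager_execution()`, which is why the copied snippet sees an empty session graph.)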
tensorflow/tensorflow | Add more information about the eager/graph context for Keras layer `call` | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: `call`.

Description of issue (what needs changing / clear description): this is a key function that users will implement for their custom layers. Currently it is poorly documented, especially w.r.t. the execution context (eager/graph) in TF 2.0. We advocate eager execution by default; however, the `call` body in Keras is executed in a graph context by default unless configured otherwise. It will raise errors if users try to add print/debug-related items into the `call` body, e.g. `print(eager_tensor.numpy())` etc. Some related questions have been raised in:

- Correct links: yes
- Parameters defined: yes
- Returns defined: yes
- Raises listed and defined: yes
- Usage example: no
- Request visuals, if applicable: no
- Submit a pull request?: no |
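A sketch illustrating the pitfall the docs should call out: inside a traced `call`, `tensor.numpy()` is unavailable, while `tf.print` works in both eager and graph contexts (the layer name is illustrative):

```python
import tensorflow as tf

class VerboseLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        # print(inputs.numpy()) would fail when call is traced into a graph;
        # tf.print is safe in both eager and graph contexts
        tf.print("max input:", tf.reduce_max(inputs))
        return inputs * 2.0

layer = VerboseLayer()

@tf.function  # force graph tracing, as Keras does by default in fit()
def run(x):
    return layer(x)

out = run(tf.ones((2, 2)))
```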
tensorflow/tensorflow | Unify the code for GpuCompiler::OptimizeHloPostLayoutAssignment for AMD and NVIDIA in XLA | Bug | The code in the `GpuCompiler::OptimizeHloPostLayoutAssignment` subclasses is essentially duplicated for AMD and NVIDIA in XLA. This has already led to subtle bugs: the TreeReductionRewriter pass is applied for NVIDIA but not for AMD. Would it be possible to unify those — put the code in `gpu_compiler.cc` and just check the platform to dynamically choose whether to apply the NVIDIA-specific or AMD-specific passes? |
tensorflow/tensorflow | Ubuntu 18.04 with RTX 2070 SUPER with TensorFlow 1.13: Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR | Bug | System information: Have I written custom code: no. OS: Linux 5.3.0-28, 18.04.1 Ubuntu. TensorFlow installed from: pip3. TensorFlow version: tried 1.13.1 and 1.13.2. Python version: Python 3.6.9. Bazel: n/a. GCC/compiler: gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0. CUDA/cuDNN version: CUDA 10.0, cuDNN 7.4.2. GPU model and memory: GeForce RTX 2070 SUPER, 8 GB.

Describe the current behavior: I downloaded the "mnist tensorflow without a phd" examples from git and tried to run one of the examples with `python3 mnist_3.1_convolutional_bigger_dropout.py` after installing tensorflow and tensorflow-gpu with `pip3 install tensorflow==1.13.2 tensorflow-gpu==1.13.2`.

Describe the expected behavior: I have not altered the code in any way and could successfully run it on the CPU, so I assumed it would work without problems on the GPU too, after I installed the proprietary NVIDIA driver 435 and the CUDA and cuDNN libraries. There is a workaround in a different issue related to the same problem (issuecomment-464909727), but this does not fix the issue for me.

Code to reproduce the issue: just clone the repository above and run the code from the tensorflow-mnist-tutorial directory.

Other info / logs: this is the output of `python3 mnist_3.1_convolutional_bigger_dropout.py`:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
      _np_qint8 = np.dtype([("qint8", np.int8, 1)])
    (the same FutureWarning is repeated at dtypes.py:527-530 and 535 for quint8, qint16, quint16, qint32 and resource)
    INFO:tensorflow: TensorFlow version 1.13.2
    2020-02-14 11:56:04.391977: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x16d9470 executing computations on platform CUDA. Devices:
    2020-02-14 11:56:04.392023: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce RTX 2070 SUPER, Compute Capability 7.5
    2020-02-14 11:56:04.412696: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3600105000 Hz
    2020-02-14 11:56:04.413472: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x1718880 executing computations on platform Host. Devices:
    2020-02-14 11:56:04.413512: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
    2020-02-14 11:56:04.413749: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: name: GeForce RTX 2070 SUPER major: 7 minor: 5 memoryClockRate(GHz): 1.815 pciBusID: 0000:03:00.0 totalMemory: 7.79GiB freeMemory: 7.56GiB
    2020-02-14 11:56:04.413786: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
    2020-02-14 11:56:04.414980: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
    2020-02-14 11:56:04.415008: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
    2020-02-14 11:56:04.415022: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
    2020-02-14 11:56:04.415176: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7350 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070 SUPER, pci bus id: 0000:03:00.0, compute capability: 7.5)
    2020-02-14 11:56:08.593018: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
    2020-02-14 11:56:08.593064: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
    2020-02-14 11:56:08.593075: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
    2020-02-14 11:56:08.593087: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
    2020-02-14 11:56:08.593176: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7350 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070 SUPER, pci bus id: 0000:03:00.0, compute capability: 7.5)
    WARNING:tensorflow: From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer.
    WARNING:tensorflow: From mnist_3.1_convolutional_bigger_dropout.py:88: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
    WARNING:tensorflow: From mnist_3.1_convolutional_bigger_dropout.py:95: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version. Instructions for updating: Future major versions of TensorFlow will allow gradients to flow into the labels... |
input on backprop by default. See tf.nn.softmax_cross_entropy_with_logits_v2.
2020-02-14 11:56:10.165980: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2020-02-14 11:56:10.166030: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-02-14 11:56:10.166039: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2020-02-14 11:56:10.166049: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2020-02-14 11:56:10.166133: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7350 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070 SUPER, pci bus id: 0000:03:00.0, compute capability: 7.5)
2020-02-14 11:56:10.651240: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
2020-02-14 11:56:11.534908: E tensorflow/stream_executor/cuda/cuda_dnn.cc:334] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-02-14 11:56:11.581923: E tensorflow/stream_executor/cuda/cuda_dnn.cc:334] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
Exception in Tkinter callback
Traceback (most recent call last):
File /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py, line 1334, in _do_call: return fn(*args)
File /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py, line 1319, in _run_fn: options, feed_dict, fetch_list, target_list, run_metadata
File /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py, line 1407, in _call_tf_sessionrun: run_metadata
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node Conv2D]] [[node convert_image]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File /usr/lib/python3.6/tkinter/__init__.py, line 1705, in __call__: return self.func(*args)
File /usr/lib/python3.6/tkinter/__init__.py, line 749, in callit: func(*args)
File /usr/lib/python3/dist-packages/matplotlib/backends/backend_tkagg.py, line 95, in _on_timer: TimerBase._on_timer(self)
File /usr/lib/python3/dist-packages/matplotlib/backend_bases.py, line 1383, in _on_timer: ret = func(*args, **kwargs)
File /usr/lib/python3/dist-packages/matplotlib/animation.py, line 1542, in _step: still_going = Animation._step(self, *args)
File /usr/lib/python3/dist-packages/matplotlib/animation.py, line 1277, in _step: self._draw_next_frame(framedata, self._blit)
File /usr/lib/python3/dist-packages/matplotlib/animation.py, line 1296, in _draw_next_frame: self._draw_frame(framedata)
File /usr/lib/python3/dist-packages/matplotlib/animation.py, line 1814, in _draw_frame: self._drawn_artists = self._func(framedata, *self._args)
File /home/myuser/repositories/tensorflow-without-a-phd/tensorflow-mnist-tutorial/tensorflowvisu.py, line 364, in animate_step: compute_step(*n, request_test_data_update, request_data_update)
File mnist_3.1_convolutional_bigger_dropout.py, line 131, in training_step: feed_dict={X: batch_X, Y_: batch_Y, pkeep: 1.0, step: i}
File /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py, line 929, in run: run_metadata_ptr
File /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py, line 1152, in _run: feed_dict_tensor, options, run_metadata
File /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py, line 1328, in _do_run: run_metadata
File /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py, line 1348, in _do_call: raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node Conv2D (defined at mnist_3.1_convolutional_bigger_dropout.py:78)]] [[node convert_image (defined at /home/myuser/repositories/tensorflow-without-a-phd/tensorflow-mnist-tutorial/tensorflowvisu.py:62)]]
Caused by op 'Conv2D', defined at:
File mnist_3.1_convolutional_bigger_dropout.py, line 78: Y1 = tf.nn.relu(tf.nn.conv2d(X, W1, strides=[1, stride, stride, 1], padding='SAME') + B1)
File /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_nn_ops.py, line 1026, in conv2d: data_format=data_format, dilations=dilations, name=name
File /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py, line 788, in _apply_op_helper: op_def=op_def
File /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py, line 507, in new_func: return func(*args, **kwargs)
File /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py, line 3300, in create_op: op_def=op_def
File /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py, line 1801, in __init__: self._traceback = tf_stack.extract_stack()
UnknownError (see above for traceback): Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node Conv2D (defined at mnist_3.1_convolutional_bigger_dropout.py:78)]] [[node convert_image (defined at /home/myuser/repositories/tensorflow-without-a-phd/tensorflow-mnist-tutorial/tensorflowvisu.py:62)]]
By the way, there used to be a helpful table showing which TensorFlow version requires which Python version, GCC compiler, Bazel version, and cuDNN/CUDA version. What happened to that? Cheers.
tensorflow/tensorflow | AttributeError: 'BatchGen' object has no attribute 'shape' | Bug | I am using TensorFlow version 1.15 for a project. I have converted a BioBERT pre-trained model into a Keras layer following the code here. However, when I run my code I get the following error:
Traceback (most recent call last):
File /home/jupyter/belona/conda/envs/mimic_proj/lib/python3.7/runpy.py, line 193, in _run_module_as_main: main()
File /home/jupyter/belona/conda/envs/mimic_proj/lib/python3.7/runpy.py, line 85, in _run_code: exec(code, run_globals)
File .../mimic3newmodel/decompensation/main.py, line 152: verbose=args.verbose
File /home/jupyter/belona/conda/envs/mimic_proj/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py, line 1296, in fit_generator: steps_name='steps_per_epoch'
File /home/jupyter/belona/conda/envs/mimic_proj/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_generator.py, line 144, in model_iteration: shuffle=shuffle
File /home/jupyter/belona/conda/envs/mimic_proj/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_generator.py, line 477, in convert_to_generator_like: num_samples = int(nest.flatten(data)[0].shape[0])
AttributeError: 'BatchGen' object has no attribute 'shape'
Please, how do I fix this error? I really need your help.
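The traceback above arises because TF 1.15's fit_generator only treats an input as generator-like when it is an actual generator, a Sequence subclass, or at least indexable with a length; anything else falls through to the array code path, which reads `.shape` and fails. A minimal, pure-Python sketch of the fix (the `SequenceAdapter` name is illustrative, not from the original post; in real code you would subclass tf.keras.utils.Sequence):

```python
# Sketch: wrap plain batches in a Sequence-style object that exposes
# __len__ and __getitem__, so Keras never tries to read a .shape attribute.

class SequenceAdapter:
    """Minimal Sequence-like wrapper: indexable batches plus a length."""

    def __init__(self, batches):
        self._batches = list(batches)

    def __len__(self):
        # number of batches per epoch
        return len(self._batches)

    def __getitem__(self, idx):
        # return one (inputs, targets) batch
        return self._batches[idx]


batches = [(([1, 2], [3, 4]), [0, 1]), (([5, 6], [7, 8]), [1, 0])]
adapter = SequenceAdapter(batches)
print(len(adapter))  # -> 2
```

The same two methods are exactly what tf.keras.utils.Sequence requires, so porting this sketch is a matter of changing the base class.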
tensorflow/tensorflow | tensorflow-gpu 2.1.0: AttributeError: module 'tensorflow' has no attribute 'placeholder' | Bug | When I use tensorflow-gpu 2.1.0 to run Keras in a Python 3.6 env, it says:
Using TensorFlow backend.
2020-02-13 21:47:27.592762: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
WARNING:tensorflow: From D:\Anaconda3\envs\keras36\lib\site-packages\tensorflow_core\python\compat\v2_compat.py:65: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version. Instructions for updating: non-resource variables are not supported in the long term
Traceback (most recent call last):
File I:\git\lip_xwlip_lite_dc_kare_lipread_p70_r18\code\network.py, line 574, in <module>: model = lip_net(params)
File I:\git\lip_xwlip_lite_dc_kare_lipread_p70_r18\code\network.py, line 367, in lip_net: input_data = Input(name='the_input', shape=(24, 112, 112, 3), dtype='float32')
File D:\Anaconda3\envs\keras36\lib\site-packages\keras\engine\input_layer.py, line 178, in Input: input_tensor=tensor
File D:\Anaconda3\envs\keras36\lib\site-packages\keras\legacy\interfaces.py, line 91, in wrapper: return func(*args, **kwargs)
File D:\Anaconda3\envs\keras36\lib\site-packages\keras\engine\input_layer.py, line 87, in __init__: name=self.name
File D:\Anaconda3\envs\keras36\lib\site-packages\keras\backend\tensorflow_backend.py, line 517, in placeholder: x = tf.placeholder(dtype, shape=shape, name=name)
AttributeError: module 'tensorflow' has no attribute 'placeholder'
How can I solve the problem? Thanks for replying.
tensorflow/tensorflow | GitHub issue creation form for 'Bug' missing | Bug | Note that the option for bug reports is missing: [image] I suspect this is related to #36636, which was recently merged. When I look at the 00-bug-issue.md file, there is a space instead of a newline before 'about'. I don't know if this would cause it to go missing, but it definitely seems not right. [image] Others appear missing as well, such as the Build/Installation issue and the Performance issue.
tensorflow/tensorflow | TensorFlow installation document (Korean translated page) outdated | Bug | URL(s) with the issue: ... Description of issue (what needs changing): On the installation document (Windows, build from source) page, it says to install Bazel 0.23.0 to compile TensorFlow. However, the most recent version of TensorFlow, which is r2.1, doesn't support Bazel 0.23.0; instead it uses 0.27.0-0.29.0. I checked the English document, and it tells me the version of Bazel that is needed, so I think the Korean page should be renewed like this: install a Bazel version within the range declared in TensorFlow's configure.py (_TF_MIN_BAZEL_VERSION to _TF_MAX_BAZEL_VERSION), and add Bazel to your PATH.
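The version-range statement above can be sketched as a small check. This is only an illustration of what configure.py enforces (the function names here are made up, and the 0.27.0-0.29.0 bounds are the ones quoted in the report, not read from a specific release):

```python
# Sketch of the Bazel version-range check that TensorFlow's configure.py
# performs against _TF_MIN_BAZEL_VERSION and _TF_MAX_BAZEL_VERSION.

def version_tuple(version):
    """Turn a dotted version string like '0.27.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def bazel_in_range(bazel_version, min_version, max_version):
    """True when min_version <= bazel_version <= max_version."""
    return (version_tuple(min_version)
            <= version_tuple(bazel_version)
            <= version_tuple(max_version))

# The 0.23.0 suggested by the outdated Korean page falls outside r2.1's range:
print(bazel_in_range("0.23.0", "0.27.0", "0.29.0"))  # -> False
print(bazel_in_range("0.28.1", "0.27.0", "0.29.0"))  # -> True
```

Comparing tuples of integers avoids the classic string-comparison trap where "0.9.0" sorts after "0.27.0".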
tensorflow/tensorflow | MKL no longer works with TensorFlow 1.15 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No. OS platform and distribution (e.g., Linux Ubuntu 16.04): CentOS 7. Mobile device: N/A. TensorFlow installed from (source or binary): source. TensorFlow version: 1.15.2. Python version: N/A. Bazel version (if compiling from source): 0.24.1. GCC/compiler version (if compiling from source): gcc 6 (devtoolset-6 on CentOS 7). CUDA/cuDNN version: N/A. GPU model and memory: N/A. Exact command to reproduce:
bazel build -c opt --copt=-msse4.2 --copt=-mavx --copt=-O3 --config=mkl --linkopt=-ldl --copt=-march=x86-64 //tensorflow/tools/pip_package:build_pip_package //tensorflow/tools/lib_package:libtensorflow_jni //tensorflow/tools/lib_package:libtensorflow //tensorflow/tools/lib_package:libtensorflow_proto
Describe the problem: libtensorflow_framework.so built this way does not have any symbols from MKL. When trying to import TensorFlow from Java/Scala, it fails with 'symbol not found' for tensorflow::DisableMKL. The number of MKL symbols found in libtensorflow_framework.so for 1.15 is also significantly lower than those found in 1.14.
Source code / logs: Code used to import TensorFlow in Scala: import org.tensorflow.TensorFlow. Note: we have to ensure libiomp5.so and libmklml_intel.so are available on the library load path; the simplest solution we found was to load the libraries manually, in order. The code snippet can be seen here.
Error: libtensorflow_jni.so: undefined symbol: _ZN10tensorflow10DisableMKLEv
Looking at the number of symbols related to MKL:
nm -D org/tensorflow/native/linux-x86_64/libtensorflow_framework.so.1 | grep -i mkl | wc -l -> 1
nm -D org/tensorflow/native/linux-x86_64/libtensorflow_jni.so | grep -i mkl | wc -l -> 9
nm -D org/tensorflow/native/linux-x86_64/libtensorflow_framework.so.1 | grep -i mkl -> 0000000000e127b0 T _ZN10tensorflow12IsMklEnabledEv
For reference, 1.14 has a lot more:
nm -D org/tensorflow/native/linux-x86_64/libtensorflow_framework.so.1 | grep -i mkl | wc -l -> 11388
nm -D org/tensorflow/native/linux-x86_64/libtensorflow_jni.so | grep -i mkl | wc -l -> 8
MKL is also not available in the wheel built by the command mentioned above:
python> import tensorflow as tf; tf.python.pywrap_tensorflow.IsMklEnabled() -> False
tensorflow/tensorflow | Bug: tf.random.normal has a fixed value in eager mode (TF 2.0) | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): -. Mobile device: -. TensorFlow installed from (source or binary): binary. TensorFlow version: v2.0.0-rc2-26-g64c3d38, 2.0.0. Python version: 3.6.9. Bazel version: -. GCC/compiler version: -. CUDA/cuDNN version: -. GPU model and memory: -.
Describe the problem: In TF 2.0 eager mode, tf.random.normal will give the same value over and over again. This happens whether you use a Keras model or just evaluate the tf.random.normal tensor repeatedly.
import numpy as np
import tensorflow as tf
i_d = np.ones(shape=(32, 10))
i = tf.keras.layers.Input(shape=(10,), batch_size=32, dtype=tf.float64)
y = tf.random.normal(shape=(32, 10), name='noise', dtype=tf.float64)
o = tf.add(i, y)
model = tf.keras.Model(inputs=i, outputs=o)
# same values every time:
model.predict(i_d)
model.predict(i_d)
model.predict(i_d)
This occurs without Keras as well:
x = tf.constant(value=np.ones(shape=(32, 10)), dtype=tf.float64)
y = tf.random.normal(shape=(32, 10), name='noise', dtype=tf.float64)
z = tf.add(x, y)
print(z)
print(z)
print(z)
If you disable eager mode with tf.compat.v1.disable_eager_execution(), the Keras model will generate new values each time it is called, as it should.
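The behaviour reported above comes down to when the random draw happens: the tensor y is evaluated once, at creation time, so every predict reuses that single sample; values only change if the draw happens inside the call itself. A stdlib-only analogue of the two situations (illustrative sketch, not the TensorFlow API):

```python
import random

rng = random.Random(0)

# Evaluating once stores a single sample -- the analogue of creating the
# tf.random.normal tensor eagerly, outside the model's call.
fixed_sample = rng.random()

# Wrapping the draw in a function re-samples on every invocation -- the
# analogue of generating the noise inside a layer's call(), so that each
# predict() sees a fresh draw.
def fresh_sample():
    return rng.random()

print(fixed_sample == fixed_sample)      # the stored value never changes
print(fresh_sample() == fresh_sample())  # two calls give two different draws
```

In TF2 terms, the fix with this shape is to move the tf.random.normal call into a layer's call method (or a Lambda layer) instead of baking one sampled tensor into the graph.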
tensorflow/tensorflow | First Python project for class; could someone please help me understand EXIF data using Python? | Bug | Here are the instructions, and at the bottom is the EXP-11 script. I have no idea what to do; I am very new to programming. Thank you ahead of time.
Using the EXP-11.py script provided as a baseline, your assignment is as follows:
1. Allow the user to enter a path.
2. Using that path, process all the .jpg files contained in that folder. Note: you will need to create a directory with .jpg images.
3. Extract EXIF data from each of the images and create a PrettyTable output. Note: you will go beyond the basics and extract whatever camera or photo data exists for each photo.
4. Plot the geolocation of each image on a map. Note: there are several ways to do this; however, the easiest method would be to use the MapMaker app. You can either manually enter the lat/long values your code generates, or you can place your results in a CSV file and upload the data to the map. Note: this is a manual-step process.
5. Submit both your script and a screenshot of the results.
EXIF Data Acquisition, January 2019, Version 1.1
from __future__ import print_function
Copyright (c) 2019 Chet Hosmer, Python Forensics. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the Software), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: the above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
Usage example: python EXP-11.py
Requirements: Python 2.x or 3.x. The 3rd-party library that is utilized is Pillow (pip install pillow) from the command line.
# Library import section
import os        # Python Standard Library operating system methods
import sys       # Python Standard Library system methods
from datetime import datetime
# Python Standard Library datetime methods

# From 3rd party, import the Python Image Library along with TAGS and GPS related TAGS
# Note: you must install the Pillow module: pip install pillow
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

# Import the PrettyTable library
from prettytable import PrettyTable


def ExtractGPSDictionary(fileName):
    ''' Function to extract the GPS dictionary '''
    try:
        pilImage = Image.open(fileName)
        exifData = pilImage._getexif()
    except Exception:
        # If an exception occurs from the PIL processing, report it
        return None, None

    # Iterate through the exifData, searching for GPS tags
    imageTimeStamp = "NA"
    cameraModel = "NA"
    cameraMake = "NA"
    gpsData = False
    gpsDictionary = {}

    if exifData:
        for tag, theValue in exifData.items():
            # Obtain the tag
            tagValue = TAGS.get(tag, tag)
            # Collect basic image data if available
            if tagValue == 'DateTimeOriginal':
                imageTimeStamp = exifData.get(tag).strip()
            if tagValue == 'Make':
                cameraMake = exifData.get(tag).strip()
            if tagValue == 'Model':
                cameraModel = exifData.get(tag).strip()
            # Check the tag for GPS
            if tagValue == 'GPSInfo':
                gpsData = True
                # Found it! Now create a dictionary to hold the GPS data,
                # looping through the GPS information
                for curTag in theValue:
                    gpsTag = GPSTAGS.get(curTag, curTag)
                    gpsDictionary[gpsTag] = theValue[curTag]
        basicExifData = [imageTimeStamp, cameraMake, cameraModel]
        return gpsDictionary, basicExifData
    else:
        return None, None
# End ExtractGPSDictionary


def ExtractLatLon(gps):
    ''' Function to extract latitude and longitude values '''
    # To perform the calculation we need at least lat, lon, latRef and lonRef
    try:
        latitude = gps["GPSLatitude"]
        latitudeRef = gps["GPSLatitudeRef"]
        longitude = gps["GPSLongitude"]
        longitudeRef = gps["GPSLongitudeRef"]
        lat = ConvertToDegrees(latitude)
        lon = ConvertToDegrees(longitude)
        # Check latitude reference: if South of the equator, the lat value is negative
        if latitudeRef == "S":
            lat = 0 - lat
        # Check longitude reference: if West of the prime meridian in Greenwich,
        # the longitude value is negative
        if longitudeRef == "W":
            lon = 0 - lon
        gpsCoor = {"Lat": lat, "LatRef": latitudeRef, "Lon": lon, "LonRef": longitudeRef}
        return gpsCoor
    except Exception:
        return None
# End Extract Lat Lon


def ConvertToDegrees(gpsCoordinate):
    ''' Function to convert GPS coordinates to degrees '''
    d0 = gpsCoordinate[0][0]
    d1 = gpsCoordinate[0][1]
    try:
        degrees = float(d0) / float(d1)
    except Exception:
        degrees = 0.0
    m0 = gpsCoordinate[1][0]
    m1 = gpsCoordinate[1][1]
    try:
        minutes = float(m0) / float(m1)
    except Exception:
        minutes = 0.0
    s0 = gpsCoordinate[2][0]
    s1 = gpsCoordinate[2][1]
    try:
        seconds = float(s0) / float(s1)
    except Exception:
        seconds = 0.0
    floatCoordinate = float(degrees + (minutes / 60.0) + (seconds / 3600.0))
    return floatCoordinate


# Main program entry section
if __name__ == "__main__":
    ''' pyExif Main Entry Point '''
    print("\nExtract EXIF Data from JPEG Files")
    print("Script Started", str(datetime.now()))
    print()

    # Process each JPEG file section
    latLonList = []
    targetFile = "test.jpg"      # file must be located in the same folder

    if os.path.isfile(targetFile):
        gpsDictionary, exifList = ExtractGPSDictionary(targetFile)
        if exifList:
            TS = exifList[0]
            MAKE = exifList[1]
            MODEL = exifList[2]
        else:
            TS = 'NA'
            MAKE = 'NA'
            MODEL = 'NA'
        print("Photo Details")
        print("-------------")
        print("TimeStamp:    ", TS)
        print("Camera Make:  ", MAKE)
        print("Camera Model: ", MODEL)
        if gpsDictionary is not None:
            # Obtain the Lat Lon values from the gpsDictionary, converted to degrees;
            # the return value is a dictionary of key-value pairs
            dCoor = ExtractLatLon(gpsDictionary)
            print("\nGeo-Location Data")
            print("-----------------")
            if dCoor:
                lat = dCoor.get("Lat")
                latRef = dCoor.get("LatRef")
                lon = dCoor.get("Lon")
                lonRef = dCoor.get("LonRef")
                if lat and lon and latRef and lonRef:
                    print("Latitude:  {:4.4f}".format(lat))
                    print("Longitude: {:4.4f}".format(lon))
                else:
                    print("WARNING: No GPS EXIF Data")
            else:
                print("WARNING: No GPS EXIF Data")
        else:
            print("WARNING: No GPS EXIF Data")
    else:
        print("WARNING:", targetFile, "is not a valid file")

    # Create Results Table Display using PrettyTable
    # Generate Results Table section: result table headings
    resultTable = PrettyTable(['File-Name', 'Lat', 'Lon', 'TimeStamp', 'Make', 'Model'])
    ''' YOUR WORK STARTS HERE '''
    print()
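The ConvertToDegrees/ExtractLatLon steps in the script boil down to one conversion: EXIF stores each GPS coordinate as three (numerator, denominator) rationals for degrees, minutes, and seconds, which collapse into a single signed decimal-degree float. A self-contained sketch of that conversion (function name is illustrative, not from the script):

```python
# Standalone sketch of the script's DMS-to-decimal step: EXIF GPS values arrive
# as ((d0, d1), (m0, m1), (s0, s1)) rational pairs plus an N/S or E/W reference.

def dms_to_decimal(gps_coordinate, ref):
    """Convert EXIF-style rational DMS pairs to signed decimal degrees."""
    degrees = gps_coordinate[0][0] / gps_coordinate[0][1]
    minutes = gps_coordinate[1][0] / gps_coordinate[1][1]
    seconds = gps_coordinate[2][0] / gps_coordinate[2][1]
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # South latitudes and West longitudes are negative
    if ref in ("S", "W"):
        decimal = -decimal
    return decimal

# 36 degrees, 57 minutes, 9 seconds North -> 36.9525
print(dms_to_decimal(((36, 1), (57, 1), (9, 1)), "N"))  # -> 36.9525
```

This is the value a student would feed to a mapping tool or write into the CSV for the assignment's step 4.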
tensorflow/tensorflow | Docstring of Keras Attention layer contains error/typo | Bug | It is a documentation issue, but since it is embedded in the code, this seemed the most appropriate issue template. URL(s) with the issue: ... (L187-L313). Description of issue (what needs changing): There is an error in the provided example (L236-L276), "Here is a code example for using Attention in a CNN+Attention network":
value_input = tf.keras.Input(shape=(None,), dtype='int32')
value_embeddings = token_embedding(query_input)
This last line should be: value_embeddings = token_embedding(value_input).
tensorflow/tensorflow | Spelling of 'placeholder' incorrect in line 53 | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example: ... Description of issue (what needs changing): The spelling of 'placeholder' is incorrect in line 53. Clear description: the spelling there is 'placehoolder' and it should be 'placeholder'. (For example: why should someone use this method? How is it useful?) Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined? Usage example: is there a usage example? See the API guide on how to write testable usage examples. Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request? Maybe. If you accept the request, are you planning to also submit a pull request to fix the issue? See the docs contributor guide, the docs API guide, and the docs style guide.
tensorflow/tensorflow | Dataset.padded_batch fails with InvalidArgumentError | Bug | System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.1.0. Python version: 3.6. Describe the current behavior:
rt = tf.ragged.constant([[1, 2, 3], [2], [1, 2, 3, 4], [1], [1]])
ds = tf.data.Dataset.from_tensor_slices(rt)
ds.padded_batch(2, padded_shapes=[None])
fails with the following error:
Traceback (most recent call last):
File .../tensorflow_core/python/data/ops/dataset_ops.py, line 1481, in padded_batch: drop_remainder
File .../tensorflow_core/python/data/ops/dataset_ops.py, line 3858, in __init__: output_shapes = structure.get_flat_tensor_shapes(self._structure)
File .../tensorflow_core/python/ops/gen_dataset_ops.py, line 4091, in padded_batch_dataset_v2: _ops.raise_from_not_ok_status(e, name)
File .../tensorflow_core/python/framework/ops.py, line 6606, in raise_from_not_ok_status: six.raise_from(core._status_to_exception(e.code, message), None)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Mismatched type between padding value 0 and input dataset's component 0: int32 vs. variant [Op:PaddedBatchDatasetV2]
However, ds.map(lambda x: x).padded_batch(2, padded_shapes=[None]) works as expected.
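What the failing call is supposed to compute is straightforward: group ragged rows into batches and right-pad each row to the longest row in its batch. A pure-Python sketch of those semantics (illustrative only; this is not the tf.data API, and the example nesting follows the repro above):

```python
# Sketch of Dataset.padded_batch(batch_size, padded_shapes=[None]) semantics
# for ragged integer rows: batch, then pad each row with 0 to the batch's
# longest row.

def padded_batch(rows, batch_size, pad_value=0):
    batches = []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        width = max(len(row) for row in batch)
        batches.append([row + [pad_value] * (width - len(row)) for row in batch])
    return batches

print(padded_batch([[1, 2, 3], [2], [1, 2, 3, 4], [1]], 2))
# -> [[[1, 2, 3], [2, 0, 0]], [[1, 2, 3, 4], [1, 0, 0, 0]]]
```

Note that padding width is per batch, not global, which is why the two batches above end up with different widths.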
tensorflow/tensorflow | TF2 SavedModel does not support multiple signatures | Bug | I use tf.saved_model.save and tf.saved_model.load to save and load a TF2 SavedModel. According to this link [1], I created a signature, and this signature is serving_default. Then I tried to add a new function with the signature decorator in the class Adder, but after I load the model according to this [2], I find that the signature disappears from the model, i.e. print(adder1.signatures) prints no signature name. I don't find any information about how to use multiple signatures while saving a model, so I think this may be a bug. If it is not, can anyone tell me how I can use multiple signatures in one model? Thank you very much. TensorFlow 2.1.0 on Google Colab. The code looks like this:
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
import os
import pandas as pd

class Adder(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32), tf.TensorSpec(shape=None, dtype=tf.float32)])
    def add(self, x, y):
        return x + y + 2.1

    @tf.function(input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
    def square(self, x):
        return x ** 2

to_export = Adder()
tf.saved_model.save(to_export, '/tmp/adder')
adder1 = tf.saved_model.load('/tmp/adder')
print(adder1.signatures)
adder1_sig = adder1.signatures['serving_default']
adder1_sig(x=tf.constant(1.0), y=tf.constant(2.1))
[1] (used in the notebook) ... [2] ...
tensorflow/tensorflow | Error while reading resource variable _AnonymousVar12 from Container: localhost | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): CentOS 7.7. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.1. Python version: 3.6. Describe the current behavior: error when trying to update a tf.Variable in a tf.data Dataset pipeline. Describe the expected behavior: no error. Code to reproduce the issue:
ds = tf.data.Dataset.from_tensor_slices(tf.constant([[9, 10, 11, 12]]))

def update(updates):
    ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
    indices = tf.constant([[4], [3], [1], [7]])
    updated = tf.tensor_scatter_nd_update(ref, indices, updates)
    return updated

ds = ds.map(update)
for i in ds:
    print(i)
Other info / logs:
FailedPreconditionError: (function node __inference_Dataset_map_update_674) Error while reading resource variable _AnonymousVar12 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar12/N10tensorflow3VarE does not exist. [[{{node TensorScatterUpdate/ReadVariableOp}}]] [Op:IteratorGetNextSync]
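The scatter step in the repro above has simple semantics: the op returns a copy of the input with the updates written at the given indices. A pure-Python sketch of that behavior (illustrative only; the TF error itself comes from creating a tf.Variable inside Dataset.map, and using a plain tensor there avoids it):

```python
# Sketch of tf.tensor_scatter_nd_update semantics for a 1-D tensor:
# copy the input, then write updates[i] at position indices[i].

def tensor_scatter_update(tensor, indices, updates):
    result = list(tensor)
    for index, update in zip(indices, updates):
        result[index] = update
    return result

print(tensor_scatter_update([1, 2, 3, 4, 5, 6, 7, 8], [4, 3, 1, 7], [9, 10, 11, 12]))
# -> [1, 11, 3, 10, 9, 6, 7, 12]
```

The input is never mutated in place, which is exactly why the TF op works fine on a constant: no variable resource needs to exist inside the traced map function.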
tensorflow/tensorflow | Saving model with tf.keras.layers.RNN with unroll=True fails for save_format='tf' | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution: Linux Ubuntu 18.04. Mobile device (if the issue happens on a mobile device): N/A. TensorFlow installed from: binary (tf-nightly via Docker). TensorFlow version: git version v1.12.1-23779-g96c5c8a, 2.2.0-dev20200202. Python version: 3.6.9. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: CPU only. GPU model and memory: CPU only.
Describe the current behavior: saving a tf.keras.Sequential model with tf.keras.layers.RNN with unroll=True fails for save_format='tf' but succeeds for save_format='h5'.
Describe the expected behavior: saving should succeed for save_format='tf' as well.
Code to reproduce the issue:
import tensorflow as tf
model = tf.keras.Sequential()
model.add(tf.keras.layers.Input(shape=(1, 1)))
cell = tf.keras.layers.GRUCell(10)
model.add(tf.keras.layers.RNN(cell, unroll=True))
model.save('test.tf', save_format='tf')   # fails
model.save('test.h5', save_format='h5')   # works
Other info / logs: Unfortunately, saving as h5 is not an option (which would actually be my favorite), since it fails when having more than one cell, see #36093. Traceback in case of failure:
File test.py, line 11, in <module>: model.save('test.tf', save_format='tf')  # fails
File /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py, line 999, in save: signatures, options
File /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/save.py, line 138, in save_model: signatures, options
File /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/save.py, line 78, in save: save_lib.save(model, filepath, signatures, options)
File /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/save.py, line 955, in save: checkpoint_graph_view
File /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/signature_serialization.py, line 75,
  in _find_function_to_export
    functions = saveable_view.list_functions(saveable_view.root)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/save.py", line 142, in list_functions
    self._serialization_cache)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 2535, in _list_functions_for_serialization
    .list_functions_for_serialization(serialization_cache))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/base_serialization.py", line 91, in list_functions_for_serialization
    fns = self.functions_to_serialize(serialization_cache)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/layer_serialization.py", line 79, in functions_to_serialize
    serialization_cache).functions_to_serialize
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/layer_serialization.py", line 94, in _get_serialized_attributes
    serialization_cache)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/model_serialization.py", line 53, in _get_serialized_attributes_internal
    serialization_cache))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/layer_serialization.py", line 103, in _get_serialized_attributes_internal
    functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py", line 161, in wrap_layer_functions
    original_fns = _replace_child_layer_functions(layer, serialization_cache)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py", line 249, in _replace_child_layer_functions
    serialization_cache).functions
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/layer_serialization.py", line 94, in _get_serialized_attributes
    serialization_cache)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/layer_serialization.py", line 103, in _get_serialized_attributes_internal
    functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py", line 171, in wrap_layer_functions
    '{}_layer_call_and_return_conditional_losses'.format(layer.name))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py", line 487, in add_function
    self.add_trace(*self._input_signature)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py", line 402, in add_trace
    trace_with_training(True)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py", line 400, in trace_with_training
    fn.get_concrete_function(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py", line 531, in get_concrete_function
    return super(LayerCall, self).get_concrete_function(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py", line 953, in get_concrete_function
    concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py", line 859, in _get_concrete_function_garbage_collected
    self._initialize(args, kwargs, add_initializers_to=initializers)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py", line 505, in _initialize
    *args, **kwds))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 2440, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 2771, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 2661, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/func_graph.py", line 981, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/def_function.py", line 440, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py", line 508, in wrapper
    ret = method(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/utils.py", line 170, in wrap_with_training_arg
    lambda: replace_training_and_call(False))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/utils/tf_utils.py", line 59, in smart_cond
    pred, true_fn=true_fn, false_fn=false_fn, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/smart_cond.py", line 54, in smart_cond
    return true_fn()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/utils.py", line 169, in <lambda>
    lambda: replace_training_and_call(True),
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/utils.py", line 165, in replace_training_and_call
    return wrapped_call(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/save_impl.py", line 550, in call_and_return_conditional_losses
    return layer.call(inputs, *args, **kwargs), layer.get_losses_for(inputs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/layers/recurrent.py", line 734, in call
    raise ValueError('Cannot unroll a RNN if the '
ValueError: Cannot unroll a RNN if the time dimension is undefined. If using a Sequential model, specify the time dimension by passing an `input_shape` or `batch_input_shape` argument to your first layer. If your first layer is an Embedding, you can also use the `input_length` argument. If using the functional API, specify the time dimension by passing a `shape` or `batch_shape` argument to your Input layer. |
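The ValueError above comes from Keras refusing to unroll an RNN whose time dimension is `None`: unrolling replaces the symbolic loop with one copy of the cell per timestep, so the number of timesteps must be a concrete integer at graph-construction time. A minimal pure-Python sketch of that constraint (illustrative only — not TensorFlow's actual implementation; `unroll_rnn` and its arguments are invented for this example):

```python
# Illustrative sketch of why unrolling requires a statically known time
# dimension: the Python for-loop below needs `timesteps` to be an int.

def unroll_rnn(cell, inputs_shape, state):
    """inputs_shape is (batch, timesteps, features); timesteps may be None."""
    batch, timesteps, features = inputs_shape
    if timesteps is None:
        # Mirrors the Keras check: without a concrete length there is
        # nothing to unroll into.
        raise ValueError(
            "Cannot unroll a RNN if the time dimension is undefined. "
            "Specify it via input_shape/batch_input_shape (Sequential) "
            "or the shape argument of the Input layer (functional API).")
    outputs = []
    for t in range(timesteps):  # one cell application per timestep
        state = cell(t, state)
        outputs.append(state)
    return outputs

# A trivial "cell" for demonstration: accumulate the timestep index.
outputs = unroll_rnn(lambda t, s: s + t, (32, 4, 8), 0)  # → [0, 1, 3, 6]
```

With a defined time dimension (here 4) the loop runs; with `None` it raises the same kind of error the traceback shows, which is why passing `input_shape=(timesteps, features)` to the first layer resolves it.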
tensorflowtensorflow | TensorFlow Lite Android application crashes on inference: A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x7759200000 in tid 26388 (mple.inpainte), pid 26388 (mple.inpainte) | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): N/A
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: Samsung A30
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.14.0
- Python version: 3.5.5
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior:
I am working on an inpainting application on Android using a TensorFlow Lite model, but the application crashes during inference.

Describe the expected behavior:
I am developing an inpainting Android application using TensorFlow Lite. The TensorFlow Lite model takes two inputs: (1) an input image (512x680x3) and (2) a mask image (512x680x1). These two input images are loaded from the assets folder as Bitmaps and then converted into ByteBuffers. After that they are passed to the TensorFlow Lite model, and the model is supposed to give output in the form of a ByteBuffer; later this ByteBuffer is converted into a restored RGB image.

Code to reproduce the issue — MainActivity:

    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Toolbar toolbar = findViewById(R.id.toolbar);
        setSupportActionBar(toolbar);
        FloatingActionButton fab = findViewById(R.id.fab);
        fab.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                Snackbar.make(view, "Replace with your own action", Snackbar.LENGTH_LONG)
                        .setAction("Action", null).show();
            }
        });
        try {
            tflite = new Interpreter(loadModelFile(MainActivity.this, "converted_model.tflite"));
            Bitmap image = getBitmapFromAsset(this, "001.png");
            Bitmap mask = getBitmapFromAsset(this, "tmp_mask.png");
            ByteBuffer bImage = convertBitmapToByteBuffer(image);
            System.out.println("First image converted");
            ByteBuffer bMask = convertBitmapToByteBuffer1(mask);
            System.out.println("Second image converted");
            bImage.order(ByteOrder.nativeOrder());
            bMask.order(ByteOrder.nativeOrder());
            System.out.println("Bytes in bImage: " + bImage.position());
            System.out.println("Bytes in bMask: " + bMask.position());
            Object[] inputs = {bImage, bMask};
            int outputTensorIndex = 0;
            int[] outputShape = tflite.getOutputTensor(outputTensorIndex).shape();  // {1, NUM_CLASSES}
            DataType outputDataType = tflite.getOutputTensor(outputTensorIndex).dataType();
            System.out.println("Output 1 shape: " + outputShape[1] + " " + outputShape[2] + " " + outputShape[3]);
            System.out.println("Output 1 datatype: " + outputDataType);
            TensorBuffer outputBuffer = TensorBuffer.createFixedSize(outputShape, outputDataType);
            Map<Integer, Object> outputs = new HashMap<>();
            System.out.println("Resizing inputs");
            System.out.println("Bytes in output: " + outputBuffer.getBuffer().position());
            outputs.put(0, outputBuffer.getBuffer());
            int[] input1 = new int[]{1, 512, 680, 3};
            int[] input2 = new int[]{1, 512, 680, 1};
            tflite.resizeInput(0, input1);
            tflite.resizeInput(1, input2);
            System.out.println("Resizing inputs done");
            tflite.runForMultipleInputsOutputs(inputs, outputs);
            Toast.makeText(this, "Working", Toast.LENGTH_LONG).show();
            System.out.println("Inference complete");
            tflite.close();
        } catch (IOException e) {
            Toast.makeText(this, "Failed", Toast.LENGTH_LONG).show();
            e.printStackTrace();
        }
    }

Code for getting a Bitmap from the assets folder:

    private Bitmap getBitmapFromAsset(Context context, String filePath) {
        AssetManager assetManager = context.getAssets();
        InputStream istr;
        Bitmap bitmap = null;
        try {
            istr = assetManager.open(filePath);
            bitmap = BitmapFactory.decodeStream(istr);
        } catch (IOException e) {
            // handle exception
        }
        return bitmap;
    }

Code for converting a Bitmap to a ByteBuffer (RGB input):

    private ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
        int IMAGE_MEAN = 128;
        float IMAGE_STD = 128.0f;
        ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * 1 * 512 * 680 * 3);
        int[] intValues = new int[512 * 680];
        bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
        int pixel = 0;
        for (int i = 0; i < 512; ++i) {
            for (int j = 0; j < 680; ++j) {
                final int val = intValues[pixel++];
                byteBuffer.putFloat((((val >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
                byteBuffer.putFloat((((val >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
                byteBuffer.putFloat(((val & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
            }
        }
        return byteBuffer;
    }

Code for converting a Bitmap to a ByteBuffer (single-channel input):

    private ByteBuffer convertBitmapToByteBuffer1(Bitmap bitmap) {
        int IMAGE_MEAN = 128;
        float IMAGE_STD = 128.0f;
        ByteBuffer byteBuffer1 = ByteBuffer.allocateDirect(4 * 1 * 512 * 680 * 1);
        byteBuffer1.order(ByteOrder.nativeOrder());
        int[] intValues = new int[512 * 680];
        bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
        int pixel = 0;
        for (int i = 0; i < 512; ++i) {
            for (int j = 0; j < 680; ++j) {
                final int val = intValues[pixel++];
                byteBuffer1.putFloat((((val >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
                byteBuffer1.putFloat((((val >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
                byteBuffer1.putFloat(((val & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
            }
        }
        return byteBuffer1;
    }

Other info / logs:

2020-02-11 14:56:44.696 26388-26388 com.example.inpainte A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x7759200000 in tid 26388 (mple.inpainte), pid 26388 (mple.inpainte)
2020-02-11 14:56:45.730 26572-26572 A/DEBUG: Build fingerprint: 'samsung/a30dd/a30:9/PPR1.180610.011/A305FDDU2ASF3:user/release-keys'
A/DEBUG: Revision: '3'
A/DEBUG: ABI: 'arm64'
A/DEBUG: pid: 26388, tid: 26388, name: mple.inpainte  >>> com.example.inpainte <<<
A/DEBUG: signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x7759200000
A/DEBUG: registers:
    x0  000000770c7eb900  x1  0000007756164700  x2  ffffffffb35eb630  x3  00000077591fffc0
    x4  000000770974fd80  x5  000000770c7eb680  x6  bf7ff19cbf7fec39  x7  bf7ff8c9bf5baee6
    x8  bf800000bf800000  x9  bf709453bf7fdb6c  x10 bf7fffeebf7ffeac  x11 bf7fff54bf7ffa20
    x12 bf800000bf800000  x13 bf7f9c97bf7fb93d  x14 00000000000000aa  x15
00000000000000aa
    x16 0000007759593008  x17 00000077f5f60cf0  x18 0000000000000000  x19 fffffffffffffa00
    x20 fffffffffffffd80  x21 0000000000000000  x22 0000007709750000  x23 0000000000000000
    x24 000000770c7eb900  x25 0000000000000001  x26 0000000000005500  x27 0000000000000380
    x28 0000000000000000  x29 0000007fe8dc5b90
    sp  0000007fe8dc5af0  lr  0000007759423300  pc  00000077f5f60e18
2020-02-11 14:56:46.346 26572-26572 A/DEBUG: backtrace:
    #00 pc 000000000001de18  /system/lib64/libc.so (memcpy+296)
    #01 pc 000000000005b2fc  /data/app/com.example.inpainte-40kevdvzyiok_rsh1rbkga/lib/arm64/libtensorflowlite_jni.so
    #02 pc 000000000005b008  /data/app/com.example.inpainte-40kevdvzyiok_rsh1rbkga/lib/arm64/libtensorflowlite_jni.so
    #03 pc 000000000005a5a0  /data/app/com.example.inpainte-40kevdvzyiok_rsh1rbkga/lib/arm64/libtensorflowlite_jni.so
    #04 pc 0000000000061ac8  /data/app/com.example.inpainte-40kevdvzyiok_rsh1rbkga/lib/arm64/libtensorflowlite_jni.so
    #05 pc 000000000005a358  /data/app/com.example.inpainte-40kevdvzyiok_rsh1rbkga/lib/arm64/libtensorflowlite_jni.so
    #06 pc 0000000000141aa8  /data/app/com.example.inpainte-40kevdvzyiok_rsh1rbkga/lib/arm64/libtensorflowlite_jni.so
    #07 pc 0000000000144f88  /data/app/com.example.inpainte-40kevdvzyiok_rsh1rbkga/lib/arm64/libtensorflowlite_jni.so
    #08 pc 000000000000eec8  /data/app/com.example.inpainte-40kevdvzyiok_rsh1rbkga/lib/arm64/libtensorflowlite_jni.so (Java_org_tensorflow_lite_NativeInterpreterWrapper_run+32)
    #09 pc 0000000000563be0  /system/lib64/libart.so (art_quick_generic_jni_trampoline+144)
    #10 pc 000000000055ae4c  /system/lib64/libart.so (art_quick_invoke_static_stub+604)
    #11 pc 00000000000d04e8  /system/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+232)
    #12 pc 00000000002838c0  /system/lib64/libart.so (art::interpreter::ArtInterpreterToCompiledCodeBridge(art::Thread*, art::ArtMethod*, art::ShadowFrame*, unsigned short, art::JValue*)+344)
    #13 pc 000000000027d8c8  /system/lib64/libart.so (bool art::interpreter::DoCall(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+968)
    #14 pc 000000000052b784  /system/lib64/libart.so (MterpInvokeStatic+204)
    #15 pc 000000000054d394  /system/lib64/libart.so (ExecuteMterpImpl+14612)
    #16 pc 000000000019b778  /dev/ashmem/dalvik-classes.dex extracted in memory from /data/app/com.example.inpainte-40kevdvzyiok_rsh1rbkga/base.apk (deleted) (org.tensorflow.lite.NativeInterpreterWrapper.run+156)
    [frames #17-#21: repeated ART interpreter frames (Execute, artInterpreterToInterpreterBridge, DoCall, MterpInvokeVirtual, ExecuteMterpImpl)]
    #22 pc 000000000019adee  /dev/ashmem/dalvik-classes.dex extracted in memory from base.apk (deleted) (org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs+10)
    [frames #23-#27: repeated ART interpreter frames]
    #28 pc 000000000001c28c  /dev/ashmem/dalvik-classes2.dex extracted in memory from base.apk!classes2.dex (deleted) (com.example.inpainte.MainActivity.onCreate+736)
    [frames #29-#115 alternate the same ART interpreter sequence with the standard Android activity-launch Java frames:]
    #34  android.app.Activity.performCreate+24
    #40  android.app.Activity.performCreate+2
    #46  android.app.Instrumentation.callActivityOnCreate+6
    #52  android.app.ActivityThread.performLaunchActivity+864
    #58  android.app.ActivityThread.handleLaunchActivity+72
    #64  android.app.servertransaction.LaunchActivityItem.execute+114
    #70  android.app.servertransaction.TransactionExecutor.executeCallbacks+198
    #76  android.app.servertransaction.TransactionExecutor.execute+68
    #82  android.app.ActivityThread$H.handleMessage+208
    #88  android.os.Handler.dispatchMessage+42
    #94  android.os.Looper.loop+406
    #100 android.app.ActivityThread.main+220
    [frames #101-#115: ART quick/interpreter bridges, ArtMethod::Invoke, InvokeMethod, Method_invoke, java.lang.Class.getDeclaredMethodInternal]
    #116 com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run+22
    [frames #117-#119: ART quick/interpreter bridges]
    #120 pc 0000000000e17090  /system/framework/arm64/boot-framework.oat (offset 0x41f000) (com.android.internal.os.ZygoteInit.main+2208)
    [frames #121-#126: ART invoke stubs and JNI CallStaticVoidMethod]
    #127 pc 00000000000bba78  /system/lib64/libandroid_runtime.so (android::AndroidRuntime::start(char const*, android::Vector<android::String8> const&, bool)+772)
    #128 pc 000000000000498c  /system/bin/app_process64 (main+1200)
    #129 pc 00000000000ae8f0  /system/lib64/libc.so (__libc_init+88) |
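The crash above is a SIGSEGV inside memcpy in libtensorflowlite_jni.so during Interpreter.run, which often indicates a host-side buffer whose size does not match the tensor being copied (a plausible cause, not confirmed in the issue). The arithmetic in the converter helpers can be checked directly; a pure-Python sketch of the same channel unpacking and byte accounting (names here are illustrative, not part of any TFLite API):

```python
# Python sketch of the arithmetic in the two Java converter helpers:
# each packed ARGB pixel is unpacked into normalized float channels,
# and every putFloat advances the ByteBuffer by 4 bytes, so bytes
# written must not exceed bytes allocated.

IMAGE_MEAN = 128
IMAGE_STD = 128.0

def unpack_pixel(val):
    """Normalized (R, G, B) floats from a packed ARGB int, as in the Java loop."""
    return tuple((((val >> s) & 0xFF) - IMAGE_MEAN) / IMAGE_STD for s in (16, 8, 0))

def bytes_written(height, width, floats_per_pixel):
    """4 bytes per putFloat, one call per channel per pixel."""
    return 4 * height * width * floats_per_pixel

# Capacities as allocated in the issue's code:
image_capacity = 4 * 1 * 512 * 680 * 3
mask_capacity = 4 * 1 * 512 * 680 * 1

# Both Java loops write three floats per pixel:
image_written = bytes_written(512, 680, 3)  # equals image_capacity
mask_written = bytes_written(512, 680, 3)   # 3x the mask buffer's capacity
```

The accounting surfaces one concrete mismatch in the posted code: the single-channel helper allocates 4*1*512*680*1 bytes yet its loop writes three floats per pixel, three times the allocated capacity (in Java, overfilling a ByteBuffer raises BufferOverflowException). It is also worth checking that the output TensorBuffer, created from the pre-resize output shape, is still large enough after the resizeInput calls.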
tensorflowtensorflow | ValueError | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

```
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 classifier.fit(X_train, y_train, batch_size=10, epochs=100)

~/anaconda3/lib/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
    950             sample_weight=sample_weight,
    951             class_weight=class_weight,
--> 952             batch_size=batch_size)
    953         # Prepare validation data.
    954         do_validation = False

~/anaconda3/lib/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    787             feed_output_shapes,
    788             check_batch_axis=False,  # Don't enforce the batch size.
--> 789             exception_prefix='target')
    790
    791         # Generate sample-wise weight values given the `sample_weight` and

~/anaconda3/lib/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    136                         'Expected ' + names[i] + ' to have shape ' +
    137                         str(shape) + ' but got array with shape ' +
--> 138                         str(data_shape))
    139     return data
    140

ValueError: Error when checking target: expected dense_3 to have shape (6,) but got array with shape (1,)
``` |
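This shape mismatch usually means the final Dense layer has 6 units (so it expects one-hot targets of shape (6,)) while the labels passed to `fit` are integer class ids of shape (1,). A minimal NumPy sketch of the usual fix, one-hot encoding the integer labels (the helper and sample labels here are illustrative, not taken from the issue):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Convert integer class ids, shape (n,), to one-hot vectors, shape (n, num_classes)."""
    labels = np.asarray(labels, dtype=np.int64).ravel()
    one_hot = np.zeros((labels.size, num_classes), dtype=np.float32)
    one_hot[np.arange(labels.size), labels] = 1.0
    return one_hot

y_train = [0, 2, 5, 1]                            # integer labels, what the model received
y_train_oh = to_one_hot(y_train, num_classes=6)   # shape (4, 6), what dense_3 expects
```

Equivalently, `keras.utils.to_categorical(y_train, 6)` does the same; or keep the integer labels and switch the loss to `sparse_categorical_crossentropy`.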
tensorflowtensorflow | Custom layer weights all have the same name by default | Bug | System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): no
- TensorFlow version (use command below): 2.1
- Python version: 3.6.7
- GPU model and memory: no GPU

Describe the current behavior
I was writing a custom layer and stumbled upon the following error. The comment here (issuecomment-582829374) made me realize that this was probably due to the custom layer weights I was using, and I noticed that in my custom layer the weights all have the same name by default. But it's not only my custom layer: if I use the code example from the documentation ("Best practice: deferring weight creation until the shape of the inputs is known"), the weights also all have the same name. This is why I answered "no" to the question of whether I have written custom code, because it also fails with the code provided.

Describe the expected behavior
To me it's more of a bug than a documentation issue, because we expect the weights to have different names by default (increasing ids, I mean). However, naming the weights does indeed solve the problem.

Code to reproduce the issue
```python
import tensorflow as tf

class Linear(tf.keras.layers.Layer):
    def __init__(self, units=32):
        super(Linear, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer='random_normal',
                                 trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer='random_normal',
                                 trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

l = Linear(32)
l.build((128,))
print([w.name for w in l.weights])
```

Other info / logs
The code above gives `['Variable:0', 'Variable:0']`. This causes a problem when saving the model, for example with the ModelCheckpoint callback. |
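The "increasing ids" behavior the report expects can be illustrated with a small name-uniquifying helper, similar in spirit to how Keras deduplicates names; this is a conceptual sketch, not TensorFlow's actual implementation:

```python
from collections import defaultdict

def make_unique_namer():
    """Return a function that appends _1, _2, ... to repeated base names."""
    counts = defaultdict(int)

    def unique(base):
        n = counts[base]
        counts[base] += 1
        return base if n == 0 else f"{base}_{n}"

    return unique

namer = make_unique_namer()
names = [namer("Variable") for _ in range(3)]
# names is now ['Variable', 'Variable_1', 'Variable_2']
```

Passing an explicit `name=` to `add_weight` (e.g. `name='w'`, `name='b'`) sidesteps the collision entirely, which is the workaround the report mentions.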
tensorflowtensorflow | Node has inputs from different frames, while_context (K.rnn) | Bug | I need a simple counter in LSTMCell's `call`. How can this be accomplished? My attempts below (minimal example w/ approach 1). I'm using TF 1.14.0 in graph mode.

Approach 1: add the following to LSTMCell (L2027):

```python
# in call():
self.add_update(self.step.assign_add(1))

# in __init__():
self._step = None

@property
def step(self):
    """Variable, the current layer timestep."""
    if self._step is None:
        self._step = self.add_weight(
            'step', shape=(), dtype='int64', trainable=False,
            aggregation=tf.VariableAggregation.ONLY_FIRST_REPLICA)
    return self._step
```

Approach 2: instead of `self._step` and `@property def step`, add below to `__init__()`:

```python
self.step = K.variable(0, dtype='int64', name='step')
```

Error traces:

Approach 1:
```
File "C:\dev\rbn\main.py", line 42, in <module>
    model.train_on_batch(x, y)
File "D:\Anaconda\envs\s4_env\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1175, in train_on_batch
    outputs = self.train_function(ins)  # pylint: disable=not-callable
File "D:\Anaconda\envs\s4_env\lib\site-packages\tensorflow\python\keras\backend.py", line 3289, in __call__
    self._make_callable(feed_arrays, feed_symbols, symbol_vals, session)
File "D:\Anaconda\envs\s4_env\lib\site-packages\tensorflow\python\keras\backend.py", line 3222, in _make_callable
    callable_fn = session._make_callable_from_options(callable_opts)
File "D:\Anaconda\envs\s4_env\lib\site-packages\tensorflow\python\client\session.py", line 1489, in _make_callable_from_options
    return BaseSession._Callable(self, callable_options)
File "D:\Anaconda\envs\s4_env\lib\site-packages\tensorflow\python\client\session.py", line 1446, in __init__
    session._session, options_ptr)
InvalidArgumentError: Node 'training_1/group_deps' has inputs from different frames. The input node 'lstm/while/AssignAddVariableOp' is in frame 'lstm/while/while_context'. The input node 'Adam/Adam/AssignAddVariableOp' is in frame ''.
```

Approach 2:
```
File "C:\dev\rbn\main.py", line 42, in <module>
    model.train_on_batch(x, y)
File "D:\Anaconda\envs\s4_env\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1174, in train_on_batch
    self._make_train_function()
File "D:\Anaconda\envs\s4_env\lib\site-packages\tensorflow\python\keras\engine\training.py", line 2219, in _make_train_function
    params=self._collected_trainable_weights, loss=self.total_loss)
File "D:\Anaconda\envs\s4_env\lib\site-packages\tensorflow\python\keras\optimizer_v2\optimizer_v2.py", line 492, in get_updates
    grads = self.get_gradients(loss, params)
File "D:\Anaconda\envs\s4_env\lib\site-packages\tensorflow\python\keras\optimizer_v2\optimizer_v2.py", line 399, in get_gradients
    "K.argmax, K.round, K.eval.".format(param))
ValueError: Variable has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
```

Observations: `self.add_update(self.step.assign_add(1))` inserted right after `K.rnn` (L749 in `rnn`) does work w/ approach 1, but `self.step` must be accessible during `K.rnn`'s control-flow op (`while_loop`, L4236) and be incremented by 1 at each loop iteration.

Note: I'm aware of workarounds via re-implementation of `rnn` or `K.rnn`, but that's too intrusive; a simple op like this shouldn't require a dedicated base class and backend function. |
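The "inputs from different frames" error arises because the assign-add lives inside the `while_loop` frame while the optimizer's updates live outside it. The loop-safe pattern is to thread the counter through the loop state instead of mutating external state. A plain-Python sketch of that pattern, mimicking `tf.while_loop` semantics (this is illustrative, not TensorFlow code):

```python
def while_loop(cond, body, loop_vars):
    """Minimal stand-in for tf.while_loop: loop state is passed and returned, never mutated."""
    while cond(*loop_vars):
        loop_vars = body(*loop_vars)
    return loop_vars

def rnn_like(num_steps):
    # The step counter is part of the carried loop state, so every
    # iteration receives and returns its own copy of it.
    def cond(t, step):
        return t < num_steps

    def body(t, step):
        # a cell "call" would also update the hidden state here
        return t + 1, step + 1

    return while_loop(cond, body, (0, 0))

t, step = rnn_like(5)
```

In TF terms this corresponds to making the counter a loop variable of the `while_loop` rather than an external `Variable` assigned from inside the loop body.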
tensorflowtensorflow | Error thrown when decorator is used suggests using the decorator | Bug | I was going through the DeepDream tutorial for TensorFlow 2.0 located here.

- OS platform and distribution: Clear Linux OS, VERSION_ID=32270
- TensorFlow installed from (source or binary): Clear Linux machine-learning-tensorflow bundle
- TensorFlow version (use command below): 2.0.0
- Python version: 3.8.1
- CPU model and memory: Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz

Describe the current behavior

This works:

```python
class DeepDream(tf.Module):
    def __init__(self, model):
        self.model = model

    # Notice that these lines are commented out:
    # @tf.function(
    #     input_signature=(
    #         tf.TensorSpec(shape=[None, None, 3], dtype=tf.float32),
    #         tf.TensorSpec(shape=[], dtype=tf.int32),
    #         tf.TensorSpec(shape=[], dtype=tf.float32),))
    def __call__(self, img, steps, step_size):
        print("Tracing")
        loss = tf.constant(0.0)
        for n in tf.range(steps):
            with tf.GradientTape() as tape:
                # This needs gradients relative to `img`;
                # `GradientTape` only watches `tf.Variable`s by default.
                tape.watch(img)
                loss = calc_loss(img, self.model)

            # Calculate the gradient of the loss with respect to the pixels of the input image.
            gradients = tape.gradient(loss, img)

            # Normalize the gradients.
            gradients /= tf.math.reduce_std(gradients) + 1e-8

            # In gradient ascent, the "loss" is maximized so that the input image
            # increasingly "excites" the layers. You can update the image by
            # directly adding the gradients (because they're the same shape!).
            img = img + gradients * step_size
            img = tf.clip_by_value(img, -1, 1)

        return loss, img
```

This, as given in the tutorial, does not:

```python
class DeepDream(tf.Module):
    def __init__(self, model):
        self.model = model

    @tf.function(
        input_signature=(
            tf.TensorSpec(shape=[None, None, 3], dtype=tf.float32),
            tf.TensorSpec(shape=[], dtype=tf.int32),
            tf.TensorSpec(shape=[], dtype=tf.float32),))
    def __call__(self, img, steps, step_size):
        print("Tracing")
        loss = tf.constant(0.0)
        for n in tf.range(steps):
            with tf.GradientTape() as tape:
                # This needs gradients relative to `img`;
                # `GradientTape` only watches `tf.Variable`s by default.
                tape.watch(img)
                loss = calc_loss(img, self.model)

            # Calculate the gradient of the loss with respect to the pixels of the input image.
            gradients = tape.gradient(loss, img)

            # Normalize the gradients.
            gradients /= tf.math.reduce_std(gradients) + 1e-8

            # In gradient ascent, the "loss" is maximized so that the input image
            # increasingly "excites" the layers. You can update the image by
            # directly adding the gradients (because they're the same shape!).
            img = img + gradients * step_size
            img = tf.clip_by_value(img, -1, 1)

        return loss, img
```

I've put the error that's thrown in the section below.

Describe the expected behavior

I would expect it to work with the decorator, and perhaps not work with the decorator commented out. However, when I run it with the decorator, it throws an error that seems to suggest adding the decorator that appears to already be there; and when I comment out the decorator, it works. When I use the decorator, the final error is:

OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function. (More detailed logs below.)

However, when I remove the decorator by commenting it out, that is when it works.

Note: I have TensorFlow 2.0.0 set up on Google Colab as well, with Python 3.6.9, and it does not display the same behavior; it works either with or without the decorator. All of this was installed on a fresh Clear Linux install tonight, with no tinkering. I had another installation earlier where I tried a fix seen elsewhere, which included downgrading gast to 0.2.2; I did that and it did not work.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem): the current DeepDream tutorial, at:

```python
def run_deep_dream_simple(img, steps=100, step_size=0.01):
    # Convert from uint8 to the range expected by the model.
    img = tf.keras.applications.inception_v3.preprocess_input(img)
    img = tf.convert_to_tensor(img)
    step_size = tf.convert_to_tensor(step_size)
    steps_remaining = steps
    step = 0
    while steps_remaining:
        if steps_remaining > 100:
            run_steps = tf.constant(100)
        else:
            run_steps = tf.constant(steps_remaining)
        steps_remaining -= run_steps
        step += run_steps

        loss, img = deepdream(img, run_steps, tf.constant(step_size))

        display.clear_output(wait=True)
        show(deprocess(img))
        print("Step {}, loss {}".format(step, loss))

    result = deprocess(img)
    display.clear_output(wait=True)
    show(result)

    return result

from datetime import datetime
startTime = datetime.now()

dream_img = run_deep_dream_simple(img=original_img, steps=100, step_size=0.01)

print(datetime.now() - startTime)
```

Other info / logs:

WARNING:tensorflow:Entity could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: invalid value for "node": expected "ast.AST", got list; to visit lists of nodes, use "visit_block" instead.
WARNING: Entity could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: invalid value for "node": expected "ast.AST", got list; to visit lists of nodes, use "visit_block" instead.

Trace:

ValueError Traceback (most recent call last)
/usr/lib/python3.8/site-packages/tensorflow_core/python/autograph/impl/api.py in converted_call(f, options, args, kwargs, caller_fn_scope)
    505         options=options, autograph_module=tf_inspect.getmodule(converted_call))
    506     converted_f = conversion.convert(target_entity, program_ctx)
    507
/usr/lib/python3.8/site-packages/tensorflow_core/python/autograph/impl/conversion.py in convert(entity, program_ctx)
    320
    321     converted_entity_info = _convert_with_cache(entity, program_ctx,
    322         free_nonglobal_var_names)
to ast f program ctx do rename 668 context converter entitycontext namer entity info program ctx new name 669 node node to graph node context 670 usr lib python3 8 site package tensorflow core python autograph impl conversion py in node to graph node context 697 698 node converter standard analysis node context be initial true 699 node converter apply node context function scope usr lib python3 8 site package tensorflow core python autograph core converter py in standard analysis node context be initial 383 node qual name resolve node 384 node activity resolve node context none 385 node reach definition resolve node context graph annotateddef usr lib python3 8 site package tensorflow core python autograph pyct static analysis activity py in resolve node context parent scope 497 def resolve node context parent scope none 498 return activityanalyzer context parent scope visit node usr lib python3 8 site package tensorflow core python autograph pyct transformer py in visit self node 479 if not anno hasanno node anno basic skip processing 480 result super base self visit node 481 self ctx current origin parent origin usr lib python3 8 ast py in visit self node 359 visitor getattr self method self generic visit 360 return visitor node 361 usr lib python3 8 site package tensorflow core python autograph pyct static analysis activity py in visit functiondef self node 441 self enter scope false 442 node body self visit block node body 443 anno setanno node nodeanno body scope self scope usr lib python3 8 site package tensorflow core python autograph pyct transformer py in visit block self node before visit after visit 370 371 replacement self visit node 372 usr lib python3 8 site package tensorflow core python autograph pyct transformer py in visit self node 479 if not anno hasanno node anno basic skip processing 480 result super base self visit node 481 self ctx current origin parent origin usr lib python3 8 ast py in visit self node 359 visitor getattr self method self 
generic visit 360 return visitor node 361 usr lib python3 8 site package tensorflow core python autograph pyct static analysis activity py in visit expr self node 265 def visit expr self node 266 return self process statement node 267 usr lib python3 8 site package tensorflow core python autograph pyct static analysis activity py in process statement self node 259 self enter scope false 260 node self generic visit node 261 anno setanno node anno static scope self scope usr lib python3 8 ast py in generic visit self node 444 elif isinstance old value ast 445 new node self visit old value 446 if new node be none usr lib python3 8 site package tensorflow core python autograph pyct transformer py in visit self node 479 if not anno hasanno node anno basic skip processing 480 result super base self visit node 481 self ctx current origin parent origin usr lib python3 8 ast py in visit self node 359 visitor getattr self method self generic visit 360 return visitor node 361 usr lib python3 8 site package tensorflow core python autograph pyct static analysis activity py in visit call self node 327 self enter scope false 328 node args self visit block node args 329 node keyword self visit block node keyword usr lib python3 8 site package tensorflow core python autograph pyct transformer py in visit block self node before visit after visit 370 371 replacement self visit node 372 usr lib python3 8 site package tensorflow core python autograph pyct transformer py in visit self node 457 type node 458 raise valueerror msg 459 valueerror invalid value for node expect ast ast get to visit list of node use visit block instead during handling of the above exception another exception occur operatornotallowedingrapherror traceback most recent call last in 2 starttime datetime now 3 4 dream img run deep dream simple img original img 5 step 100 step size 0 01 6 in run deep dream simple img step step size 14 step run step 15 16 loss img deepdream img run step tf constant step size 17 18 
display clear output wait true usr lib python3 8 site package tensorflow core python eager def function py in call self args kwd 455 456 trace count self get trace count 457 result self call args kwd 458 if trace count self get trace count 459 self call counter call without trace usr lib python3 8 site package tensorflow core python eager def function py in call self args kwd 501 this be the first call of call so we have to initialize 502 initializer map object identity objectidentitydictionary 503 self initialize args kwd add initializer to initializer map 504 finally 505 at this point we know that the initialization be complete or less usr lib python3 8 site package tensorflow core python eager def function py in initialize self args kwd add initializer to 405 self graph deleter functiondeleter self lift initializer graph 406 self concrete stateful fn 407 self stateful fn get concrete function internal garbage collect pylint disable protect access 408 args kwd 409 usr lib python3 8 site package tensorflow core python eager function py in get concrete function internal garbage collect self args kwargs 1846 if self input signature 1847 args kwargs none none 1848 graph function self maybe define function args kwargs 1849 return graph function 1850 usr lib python3 8 site package tensorflow core python eager function py in maybe define function self args kwargs 2148 graph function self function cache primary get cache key none 2149 if graph function be none 2150 graph function self create graph function args kwargs 2151 self function cache primary cache key graph function 2152 return graph function args kwargs usr lib python3 8 site package tensorflow core python eager function py in create graph function self args kwargs override flat arg shape 2029 arg name base arg name miss arg name 2030 graph function concretefunction 2031 func graph module func graph from py func 2032 self name 2033 self python function usr lib python3 8 site package tensorflow core python 
framework func graph py in func graph from py func name python func args kwargs signature func graph autograph autograph option add control dependency arg name op return value collection capture by value override flat arg shape 913 convert func 914 915 func output python func func args func kwargs 916 917 invariant func output contain only tensor compositetensor usr lib python3 8 site package tensorflow core python eager def function py in wrap fn args kwd 356 wrap allow autograph to swap in a converted function we give 357 the function a weak reference to itself to avoid a reference cycle 358 return weak wrap fn wrap args kwd 359 weak wrap fn weakref ref wrap fn 360 usr lib python3 8 site package tensorflow core python eager function py in bind method wrapper args kwargs 2656 however the replacer be still responsible for attach self properly 2657 todo mdan be it possible to do it here instead 2658 return wrap fn args kwargs 2659 weak bind method wrapper weakref ref bind method wrapper 2660 usr lib python3 8 site package tensorflow core python framework func graph py in wrapper args kwargs 894 todo mdan push this block high in tf function s call stack 895 try 896 return autograph convert call 897 original func 898 autograph conversionoptions usr lib python3 8 site package tensorflow core python autograph impl api py in convert call f option args kwargs caller fn scope 532 the verbosity to 10 on linux export autograph verbosity 10 and 533 attach the full output cause s target entity e 534 return call unconverted f args kwargs option 535 536 with stacktracemapper convert f tf stack currentmodulefilter usr lib python3 8 site package tensorflow core python autograph impl api py in call unconverted f args kwargs option update cache 324 325 if inspect util istfmethodtarget f 326 return f self call args kwargs 327 328 try usr lib python3 8 site package tensorflow core python eager function py in call self args kwargs 2618 if tf inspect ismethod wrap fn 2619 wrap fn six 
get unbound function wrap fn 2620 return wrap fn self weakrefself target args kwargs 2621 2622 in call self img step step size 12 print trace 13 loss tf constant 0 0 14 for n in tf range step 15 with tf gradienttape as tape 16 this need gradient relative to img usr lib python3 8 site package tensorflow core python framework op py in iter self 545 def iter self 546 if not context execute eagerly 547 self disallow iteration 548 549 shape self shape tuple usr lib python3 8 site package tensorflow core python framework op py in disallow iteration self 538 self disallow when autograph disable iterate over tf tensor 539 elif ag ctx control status ctx status ag ctx status enable 540 self disallow when autograph enable iterate over tf tensor 541 else 542 default v1 style graph execution usr lib python3 8 site package tensorflow core python framework op py in disallow when autograph enable self task 514 515 def disallow when autograph enable self task 516 raise error operatornotallowedingrapherror 517 be not allow autograph do not convert this function try 518 decorate it directly with tf function format task operatornotallowedingrapherror iterate over tf tensor be not allow autograph do not convert this function try decorate it directly with tf function thank you |
tensorflowtensorflow | Successfully converted model through full-integer quantization, but nodes are left with int8 instead of uint8 | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-55-generic x86_64)
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: would run on Raspberry Pi 4 + Edge TPU, or Dev Board
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): v1.15.0-0-g590d6eef7e 1.15.0
- Python version: 3.7.4
- Bazel version (if compiling from source): 0.24.1
- GCC/compiler version (if compiling from source): gcc 7.3.0
- CUDA/cuDNN version: Cuda compilation tools, release 10.1, V10.1.243
- GPU model and memory: 2080Ti, but TensorFlow is not compiled with CUDA support

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior
I converted my model from PyTorch -> ONNX -> TensorFlow -> TensorFlow Lite, and I would run it on an Edge TPU, thus my TensorFlow version should be 1.15.0 as suggested. However, due to the lack of `split` and incorrect `min` and `max`, I rebuilt from source based on the two links below (diff 71a783b5a7624430897d44e42f816444, url). I used full-integer quantization and all the conversions were successful. The tflite file works fine in the interpreter, and the input and output datatypes are correctly tf.uint8, but almost every node in the model except input and output has int8 quantization.

Describe the expected behavior
I think these nodes should be tf.uint8, and I wonder whether I could just load the tflite file via JSON format and change the model's subgraphs[0] tensor type from int8 to uint8. This idea is based on the link below. Actually it fails; I want to know the reason.

Code to reproduce the issue
```python
import tensorflow as tf
import torch
from PIL import Image
import numpy as np
import pathlib

tf.compat.v1.enable_eager_execution()
tf.logging.set_verbosity(tf.logging.DEBUG)
import matplotlib.pyplot as plt
print(tf.__version__)
print(torch.__version__)

val = 'ILSVRC2012_img_val'
val = pathlib.Path(val)
items = np.array([i.name for i in val.glob('*')])

image_generator = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1. / 255, data_format='channels_first')
train_data_gen = image_generator.flow_from_directory(
    directory=str(val), batch_size=100, shuffle=True,
    target_size=(320, 320), classes=list(items))
image_batch, label_batch = next(train_data_gen)
image_batch = np.expand_dims(image_batch, 1)
print(image_batch.dtype)
image_batch, label_batch = next(train_data_gen)

mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
for i in range(3):
    image_batch[:, i] = (image_batch[:, i] - mean[i]) / std[i]
print(image_batch.shape)
image_batch = np.expand_dims(image_batch, 1)

def representative_data_gen():
    for i in range(100):
        yield [image_batch[i]]

graph_def_file = 'test_320.pb'
inputs = ['input.1']
outputs = ['655']
converter = tf.lite.TFLiteConverter.from_frozen_graph(graph_def_file, inputs, outputs)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.inference_type = tf.uint8
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.experimental_new_quantizer = True
converter.experimental_enable_mlir_converter = True
tflite_model_op = converter.convert()
open('cvte_model_320_quant_op.tflite', 'wb').write(tflite_model_op)
```

Other info / logs (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached):
I tried compiling with the Edge TPU compiler and found "unsupported data type" everywhere.
Then I used Netron and found that most of them are int8. Here are the Edge TPU compiler log, the result of Netron, and my model.

```
Edge TPU Compiler version 2.0.291256449
Model compiled successfully in 45 ms.

Input model: /home/xxxxx01/39dstest/cvte_model_320_quant_op.tflite
Input size: 6.28MiB
Output model: cvte_model_320_quant_op_edgetpu.tflite
Output size: 6.03MiB
On-chip memory available for caching model parameters: 0.00B
On-chip memory used for caching model parameters: 0.00B
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 0
Total number of operations: 7950
Operation log: cvte_model_320_quant_op_edgetpu.log

Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 0
Number of operations that will run on CPU: 7950

Operator            Count   Status
SPLIT               36      Operating on an unsupported data type
QUANTIZE            3729    Operation is otherwise supported, but not mapped due to some unspecified limitation
MUL                 74      Operating on an unsupported data type
MAXIMUM             38      Operating on an unsupported data type
MINIMUM             38      Operating on an unsupported data type
ADD                 74      Operating on an unsupported data type
CONCATENATION       56      Operating on an unsupported data type
PAD                 37      Operating on an unsupported data type
CONV_2D             38      Operating on an unsupported data type
DEPTHWISE_CONV_2D   3680    Operating on an unsupported data type
TRANSPOSE           148     Operating on an unsupported data type
MEAN                1       Operating on an unsupported data type
FULLY_CONNECTED     1       Operating on an unsupported data type
```

Result of Netron: (screenshots attached). My model: test_320.zip. New to GitHub, thx. |
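In the TFLite quantization scheme, int8 and uint8 tensors can encode the same real values when the zero point is shifted by 128 along with the stored values (the scale stays the same). That is one reason flipping only the tensor's type field in the JSON fails: the zero points would also have to change. A hedged NumPy sketch of the relationship (illustrative helper, not a TFLite API):

```python
import numpy as np

def requantize_int8_to_uint8(q_int8, zero_point_int8):
    """Same real values, real = scale * (q - zero_point), if both the stored
    values and the zero point are shifted by +128. The scale is unchanged."""
    q_uint8 = (q_int8.astype(np.int16) + 128).astype(np.uint8)
    zero_point_uint8 = zero_point_int8 + 128
    return q_uint8, zero_point_uint8

q = np.array([-128, -1, 0, 127], dtype=np.int8)
q_u, zp_u = requantize_int8_to_uint8(q, zero_point_int8=0)
# q_u is [0, 127, 128, 255] and zp_u is 128; scale * (q - zp) is
# identical under both encodings.
```

Note that TF 1.15's post-training full-integer path (`TFLITE_BUILTINS_INT8`) deliberately produces int8 internals per the TFLite quantization spec, which is consistent with what Netron shows here.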
tensorflowtensorflow | fft2d: different results between TensorFlow and NumPy | Bug | Hi, I have one question about fft2d and fftshift. The following code runs the same process (fft2d -> fftshift -> filter (LPF) -> ifftshift -> ifft2d) with NumPy and with TensorFlow. However, the results are much different, and the TensorFlow result is very weird. Could anyone help me figure out what's wrong in the code? Thanks a lot.

```python
img = cv2.imread('xxx.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
width = img.shape[1]
height = img.shape[0]
filter_lpf = butterworth(width, height, 33, 4)

# use numpy fft2
f = np.fft.fft2(img)
f = np.fft.fftshift(f)
f_l = f * filter_lpf
f_l = np.fft.ifftshift(f_l)
f_l = np.fft.ifft2(f_l)
f_np = np.real(f_l)

input_placeholder = tf.compat.v1.placeholder(
    tf.float32, shape=[height, width, 1], name='input')

# use tensorflow fft2d
tf_filter = tf.convert_to_tensor(filter_lpf, dtype=tf.float32)
tf_filter_comx = tf.expand_dims(
    tf.complex(tf_filter, tf.zeros(tf_filter.shape)), 2)
fft_org = tf.fft2d(tf.cast(input_placeholder, tf.complex64))
fft_org = tf.signal.fftshift(fft_org, axes=[0, 1])
fft_filter = fft_org * tf_filter_comx
fft_filter = tf.ifft2d(tf.signal.ifftshift(fft_filter, axes=[0, 1]))
fft_filter = tf.real(fft_filter)

sess = tf.Session(config=tf.ConfigProto(device_count={'GPU': 0}))
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
with sess.as_default():
    feed_dict = {input_placeholder: np.expand_dims(img, axis=2)}
    f_tf = sess.run(fft_filter, feed_dict=feed_dict)

f_np = cv2.normalize(f_np, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
f_tf = cv2.normalize(f_tf, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow('numpy', f_np)
cv2.imshow('tensorflow', f_tf)
cv2.waitKey()
```

Results: (screenshots)

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Win10
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): 1.15
- Python version: 3.6
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version: 10.0
- GPU model and memory: 2080Ti, 11 GB |
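For reference, the NumPy half of the pipeline can be isolated and sanity-checked on its own: with an all-pass "filter" (all ones), the fft2/fftshift/multiply/ifftshift/ifft2 round trip must reproduce the input exactly. A minimal sketch (the helper name is illustrative, not from the issue). Note also that `tf.fft2d` transforms the two innermost dimensions, so a `[height, width, 1]` input as in the code above is transformed over the (width, 1) axes rather than (height, width), which is a likely source of the discrepancy:

```python
import numpy as np

def lowpass_filter_image(img, lpf):
    """fft2 -> fftshift -> multiply by centered filter -> ifftshift -> ifft2 -> real part."""
    F = np.fft.fftshift(np.fft.fft2(img))
    F_filtered = F * lpf
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_filtered)))

img = np.arange(64, dtype=np.float64).reshape(8, 8)
all_pass = np.ones((8, 8))
out = lowpass_filter_image(img, all_pass)
assert np.allclose(out, img)  # an all-pass "filter" round-trips to the identity
```

Running the TF side on a squeezed 2-D `[height, width]` tensor puts both libraries on the same axes before comparing outputs.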
tensorflowtensorflow | NMT with attention | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: please provide a link to the documentation entry. For example:

Description of issue (what needs changing):

Clear description. For example, why should someone use this method? How is it useful?
- Correct links: is the link to the source code correct?
- Parameters defined: are all parameters defined and formatted correctly?
- Returns defined: are return values defined?
- Raises listed and defined: are the errors defined? For example:
- Usage example: is there a usage example? See the API guide on how to write testable usage examples.
- Request visuals, if applicable: are there currently visuals? If not, will it clarify the content?
- Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, the docs API guide, and the docs style guide. |
tensorflow/tensorflow | Nit: tutorial inconsistency in "Text generation with an RNN" | Bug | URL(s) with the issue: the prediction loop.

Description of issue (what needs changing): consider the previously generated letters for subsequent generation. Right now only the most recently generated letter is considered when generating the immediate next letter, and the results read very far from natural, as you can imagine.

Clear description: the generate_text method is supposed to generate text letter by letter, taking into account a starting string and any letters that have been generated so far. Right now it generates the first letter taking into account the starting string, but from then on it only considers the last generated letter, as in this:

```python
def generate_text(model, start_string):
    # Evaluation step (generating text using the learned model)

    # Number of characters to generate
    num_generate = 1000

    # Converting our start string to numbers (vectorizing)
    input_eval = [char2idx[s] for s in start_string]
    input_eval = tf.expand_dims(input_eval, 0)

    # Empty string to store our results
    text_generated = []

    # Low temperature results in more predictable text.
    # Higher temperature results in more surprising text.
    # Experiment to find the best setting.
    temperature = 1.0

    # Here batch size == 1
    model.reset_states()
    for i in range(num_generate):
        predictions = model(input_eval)
        # remove the batch dimension
        predictions = tf.squeeze(predictions, 0)

        # using a categorical distribution to predict the character
        # returned by the model
        predictions = predictions / temperature
        predicted_id = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy()

        # We pass the predicted character as the next input to the model
        # along with the previous hidden state
        input_eval = tf.expand_dims([predicted_id], 0)

        text_generated.append(idx2char[predicted_id])

    return start_string + ''.join(text_generated)
```

The end of the for loop needs to be updated to match the comment's description, with something like this:

```python
# We pass the predicted character as the next input to the model
# along with the previous hidden state
input_eval = tf.concat([input_eval, tf.expand_dims([predicted_id], 0)], 1)
```

where we append the newly predicted_id to the end of what is currently input_eval.

Submitting a pull request: I plan on submitting a pull request with the quick fix. |
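The difference between the two feeding strategies can be illustrated without TensorFlow. This is a hypothetical sketch, not the tutorial's code: `generate` below tracks how many tokens of context a stand-in model would see at each step under each strategy.

```python
# Minimal pure-Python contrast of the two strategies described in the issue.
# `history_mode=False` mirrors the current tutorial code (only the last
# predicted token is fed back); `history_mode=True` mirrors the proposed
# tf.concat fix (the full history grows each step).
def generate(history_mode, steps=3):
    input_eval = [0]      # vectorized start string (one token)
    contexts = []
    for step in range(steps):
        contexts.append(len(input_eval))       # context length the model "sees"
        predicted_id = step + 1                # pretend prediction
        if history_mode:
            input_eval = input_eval + [predicted_id]   # like tf.concat(..., 1)
        else:
            input_eval = [predicted_id]                # like tf.expand_dims(..., 0)
    return contexts

print(generate(history_mode=False))  # [1, 1, 1] - only the last character
print(generate(history_mode=True))   # [1, 2, 3] - full history grows
```

Whether the fix is actually needed depends on the model being stateful across calls; the sketch only shows what the input tensor itself contains.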
tensorflow/tensorflow | Software requirements may lack VC runtime | Bug | URL(s) with the issue: software requirements.

Description of issue (what needs changing): the requirements may lack the dependency on the Microsoft Visual C++ Redistributable for Visual Studio runtime.

Clear description: after installation, TF raises an error: `ImportError: DLL load failed: The specified module could not be found.` Fixed after installing the Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019 (found via the error page and a related issue). I want to confirm whether the Microsoft Visual C++ Redistributable for Visual Studio must be installed, and which version. If true, please add this information to the software requirements.

Submitting a pull request: upon confirmation I can PR to tensorflow/docs.

PS: here is system info to trace the issue. Hardware: Processor: Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz (4 CPUs), ~2.9GHz; Memory: 16384 MB RAM; Card name: NVIDIA GeForce 940MX; Manufacturer: NVIDIA; Chip type: GeForce 940MX. Software: Operating System: Windows 10 Pro 64-bit (10.0, Build 18363) (18362.19h1_release.190318-1202); conda tensorflow 2.1.0 (installed via PyPI); conda cudatoolkit 10.1.243 (installed via Anaconda); conda cudnn 7.6.5 (installed via Anaconda); NVIDIA driver version 441.87. It can be reproduced in both a conda install and a native install. Hope this info helps, thanks in advance. |
tensorflow/tensorflow | TPU protobuf error after migration from 1.3 to 2.1 | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04 / Debian 9): GCP TF 2.1 image
- TensorFlow installed from (source or binary): GCP TF 2.1 image
- TensorFlow version (use command below): TF v: 2.1.0, Keras v: 2.2.4-tf
- Python version: 3.5.3
- TPU software version: 2.1

I tried the sample from here and it works. When I try to convert my own, perfectly working TPU model from 1.3 to 2.1 using a distribution strategy, it fails. When I run the code below in Jupyter, the kernel dies when it gets to fit():

```python
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
import tensorflow.keras as k

print("TF v: " + tf.__version__, "Keras v: " + k.__version__)

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://xx.xx.xx.xx:8470')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
    model = k.Sequential()
    model.add(k.layers.Conv1D(filters=16, kernel_size=2, activation='relu',
                              input_shape=(window_size, 1)))
    model.add(k.layers.Conv1D(filters=32, kernel_size=2, activation='relu'))
    model.add(k.layers.Conv1D(filters=64, kernel_size=2, activation='relu'))
    model.add(k.layers.Conv1D(filters=128, kernel_size=2, activation='relu'))
    model.add(k.layers.MaxPooling1D(pool_size=2))
    model.add(k.layers.Flatten())
    model.add(k.layers.Dense(cats, activation='softmax'))

    # summary
    print(model.metrics_names)
    print(model.summary())
    print(model.compile(optimizer='adam',
                        loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
                        metrics=['categorical_accuracy']))
    print(model.fit(x, y, batch_size=window_size, shuffle=False, epochs=5))
```

Output (each INFO line is printed twice in the notebook; one copy shown here):

```
TF v: 2.1.0 Keras v: 2.2.4-tf
INFO:tensorflow:Initializing the TPU system: xxxxxxxxxx:8470
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Finished initializing TPU system.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Available Devices:
  _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0)
  _DeviceAttributes(/job:localhost/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
  _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0)
  _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0)
  _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0)
  _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0)
  _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0)
  _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0)
  _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0)
  _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0)
  _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0)
  _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0)
  _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
['loss']
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d (Conv1D)              (None, 1279, 16)          48
conv1d_1 (Conv1D)            (None, 1278, 32)          1056
conv1d_2 (Conv1D)            (None, 1277, 64)          4160
conv1d_3 (Conv1D)            (None, 1276, 128)         16512
max_pooling1d (MaxPooling1D) (None, 638, 128)          0
flatten (Flatten)            (None, 81664)             0
dense (Dense)                (None, 4)                 326660
=================================================================
Total params: 348,436
Trainable params: 348,436
Non-trainable params: 0
_________________________________________________________________
None
```

I can see this error in the console, though I am not sure where the protobuf is coming from, and why it worked in TF 1.3; hence I consider this a bug:

```
E0208 17:03:32.001652096    4567 proto_buffer_writer.h:83] assertion failed: byte_count < total_size
```

If I cut the data 50 times, the fit starts but fails immediately with a different error: `NotFoundError: No registered 'Identity' OpKernel for 'TPU' devices compatible with node {{node Identity}}`. Full traceback:

```
Train on 27720 samples
Epoch 1/5
    0/27720 [..............................] - ETA: 0s
---------------------------------------------------------------------------
NotFoundError                             Traceback (most recent call last)
<ipython-input> in <module>
----> 1 model.fit(x, y, batch_size=window_size, shuffle=False, epochs=epochs)

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/training.py in fit(...)
    817         max_queue_size=max_queue_size,
    818         workers=workers,
--> 819         use_multiprocessing=use_multiprocessing)
    820
    821   def evaluate(self,

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(...)
    327           training_data_iter.initializer  # pylint: disable=pointless-statement
    328         else:
--> 329           training_data_iter = iter(training_dataset)
    330
    331       training_result = run_one_epoch(

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/distribute/input_lib.py in __iter__(self)
    563
--> 564     worker_iterators = _create_iterators_per_worker(self._cloned_datasets,
    565                                                     self._input_workers)
    566     iterator = DistributedIterator(self._input_workers, worker_iterators,
    567                                    self._strategy)

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/distribute/input_lib.py in _create_iterators_per_worker(...)
   1009       worker_devices = input_workers.compute_devices_for_worker(i)
-> 1010       iterator = _SingleWorkerDatasetIterator(worker_datasets[i], worker,
   1011                                               worker_devices)
   1012       iterators.append(iterator)
   1013   return iterators

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/distribute/input_lib.py in __init__(self, dataset, worker, devices)
    862     self._worker = worker
    863     self._devices = devices
--> 864     self._make_iterator()
    865
    866   def _make_iterator(self):

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/distribute/input_lib.py in _make_iterator(self)
    868     with ops.device(self._worker):
--> 869       self._iterator = multi_device_iterator_ops.MultiDeviceIterator(
    870           self._dataset, self._devices)
    871
    872   def get_next(self, device_name=None):

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/data/ops/multi_device_iterator_ops.py in __init__(self, dataset, devices, max_buffer_size, prefetch_buffer_size, source_device)
    292             self._experimental_slack)
    293         if context.executing_eagerly():
--> 294           self._device_iterators.append(dataset_ops.make_one_shot_iterator(ds))
    295         else:
    296           self._device_iterators.append(

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/data/ops/dataset_ops.py in make_one_shot_iterator(dataset)
   2479     return dataset._make_one_shot_iterator()  # pylint: disable=protected-access
   2480   except AttributeError:
-> 2481     return DatasetV1Adapter(dataset)._make_one_shot_iterator()  # pylint: disable=protected-access
   2482
   2483

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/data/ops/dataset_ops.py in _make_one_shot_iterator(self)
   2057   def _make_one_shot_iterator(self):  # pylint: disable=missing-docstring
   2058     if context.executing_eagerly():
-> 2059       return iterator_ops.OwnedIterator(self)
   2060
   2061     _ensure_same_dataset_graph(self)

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/data/ops/iterator_ops.py in __init__(self, dataset, components, element_spec)
    592           context.context().device_spec.device_type != "CPU"):
    593         with ops.device("/cpu:0"):
--> 594           self._create_iterator(dataset)
    595       else:
    596         self._create_iterator(dataset)

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/data/ops/iterator_ops.py in _create_iterator(self, dataset)
    617           output_types=self._flat_output_types,
    618           output_shapes=self._flat_output_shapes)
--> 619       gen_dataset_ops.make_iterator(ds_variant, self._iterator_resource)
    620       # Delete the resource when this object is deleted
    621       self._resource_deleter = IteratorResourceDeleter(

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/ops/gen_dataset_ops.py in make_iterator(dataset, iterator, name)
   2703       pass  # Add nodes to the TensorFlow graph.
   2704     except _core._NotOkStatusException as e:
-> 2705       _ops.raise_from_not_ok_status(e, name)
   2706   # Add nodes to the TensorFlow graph.
   2707   _, _, _op, _outputs = _op_def_library._apply_op_helper(

/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/ops.py in raise_from_not_ok_status(e, name)
   6604   message = e.message + (" name: " + name if name is not None else "")
   6605   # pylint: disable=protected-access
-> 6606   six.raise_from(core._status_to_exception(e.code, message), None)
   6607   # pylint: enable=protected-access
   6608

/usr/local/lib/python3.5/dist-packages/six.py in raise_from(value, from_value)

NotFoundError: No registered 'Identity' OpKernel for 'TPU' devices compatible with node {{node Identity}}
	 (OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_HALF
	.  Registered:  device='XLA_GPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_HALF, DT_UINT32, DT_UINT64, DT_RESOURCE, DT_VARIANT]
	  device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, DT_HALF, DT_UINT32, DT_UINT64, DT_RESOURCE, DT_VARIANT]
	  device='XLA_TPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_INT32, DT_COMPLEX64, DT_INT64, DT_BOOL, DT_BFLOAT16, DT_UINT32, DT_UINT64, DT_RESOURCE, DT_VARIANT]
	  device='XLA_CPU'; T in [DT_UINT8, DT_QUINT8, DT_UINT16, DT_INT8, DT_QINT8, DT_DOUBLE, DT_COMPLEX64, DT_COMPLEX128, DT_BOOL, DT_BFLOAT16]
	  device='TPU'; T in [DT_INT32, DT_UINT32, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE, DT_BOOL, DT_COMPLEX64, DT_INT64, DT_UINT64]
	  device='TPU_SYSTEM'
	  device='GPU'; T in [DT_HALF]
	  device='GPU'; T in [DT_BFLOAT16]
	  device='GPU'; T in [DT_FLOAT]
	  device='GPU'; T in [DT_DOUBLE]
	  device='GPU'; T in [DT_INT64]
	  device='GPU'; T in [DT_UINT16]
	  device='GPU'; T in [DT_INT16]
	  device='GPU'; T in [DT_UINT8]
	  device='GPU'; T in [DT_INT8]
	  device='GPU'; T in [DT_COMPLEX64]
	  device='GPU'; T in [DT_COMPLEX128]
	  device='GPU'; T in [DT_VARIANT]
	  device='DEFAULT'; T in [DT_STRING]
	  device='DEFAULT'; T in [DT_VARIANT]
	  device='DEFAULT'; T in [DT_RESOURCE]
	  device='CPU'
	 [[Identity]] [Op:MakeIterator]
``` |
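One possible reading of the `proto_buffer_writer.h` assertion, offered here only as a hypothesis: when `fit()` is handed in-memory numpy arrays under a TPU strategy, the data can end up serialized into the graph shipped to the TPU worker, and a single protobuf message is capped at 2 GB. The sketch below is a back-of-the-envelope check under assumed shapes (window size 1280 inferred from the Conv1D output shapes, the x50 factor taken from the "cut the data 50 times" remark); none of these numbers are confirmed by the report.

```python
# Hypothetical size estimate: does the embedded float32 dataset exceed the
# 2 GB protobuf message limit?
PROTOBUF_LIMIT = 2**31 - 1  # maximum protobuf message size in bytes

def serialized_bytes(num_samples, window_size, channels=1, dtype_bytes=4):
    # bytes occupied by a float32 tensor constant embedded in the GraphDef
    return num_samples * window_size * channels * dtype_bytes

full = serialized_bytes(num_samples=27720 * 50, window_size=1280)  # original data
cut = serialized_bytes(num_samples=27720, window_size=1280)        # after cutting x50
print(full > PROTOBUF_LIMIT)  # the full dataset would overflow one message
print(cut > PROTOBUF_LIMIT)   # a 1/50 slice fits, matching the report
```

If this hypothesis holds, feeding the data through a `tf.data.Dataset` instead of raw arrays would avoid embedding it in the graph.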
tensorflow/tensorflow | TensorFlow 2.0: ValueError from tf.function-decorated function | Bug | Hello, I have some code for the MNIST dataset in which I am doing the following:

1. Train a model.
2. Prune the model using tensorflow-model-optimization for some sparsity p.
3. Create a mask from the pruned model so that the sparsity is maintained in subsequent steps.
4. Reset the weights of the non-pruned model to the randomly initialized weights it had when the model was initialized.

I do these steps iteratively n times (the code can be found at the linked notebook). For retraining a pruned model I use GradientTape along with the masks. The first time the model is trained using the train_one_step and test_step functions, which are @tf.function-annotated, things work fine. But when I try to use them again, in cell 76 of the Jupyter notebook, I get the error:

```
ValueError                                Traceback (most recent call last)
<ipython-input-76> in <module>
     13 for x, y in train_dataset:
     14     # train_step(x, y)
---> 15     train_one_step(model_gt, strip_mask, model_strip, optimizer, x, y)
     16
     17 for x_t, y_t in test_dataset:

~/.local/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
    455
    456     tracing_count = self._get_tracing_count()
--> 457     result = self._call(*args, **kwds)
    458     if tracing_count == self._get_tracing_count():
    459       self._call_counter.called_without_tracing()

~/.local/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
    485       # In this case we have created variables on the first call, so we run the
    486       # defunned version which is guaranteed to never create variables.
--> 487       return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable
    488     elif self._stateful_fn is not None:
    489       # Release the lock early so that multiple threads can perform the call

~/.local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in __call__(self, *args, **kwargs)
   1820   def __call__(self, *args, **kwargs):
   1821     """Calls a graph function specialized to the inputs."""
-> 1822     graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
   1823     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
   1824

~/.local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   2148       graph_function = self._function_cache.primary.get(cache_key, None)
   2149       if graph_function is None:
-> 2150         graph_function = self._create_graph_function(args, kwargs)
   2151         self._function_cache.primary[cache_key] = graph_function
   2152       return graph_function, args, kwargs

~/.local/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   2039             arg_names=arg_names,
   2040             override_flat_arg_shapes=override_flat_arg_shapes,
-> 2041             capture_by_value=self._capture_by_value),
   2042         self._function_attributes,
   2043         # Tell the ConcreteFunction to clean up its graph once it goes out of

~/.local/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(...)
    913         converted_func)
    914
--> 915       func_outputs = python_func(*func_args, **func_kwargs)
    916
    917       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

~/.local/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    356         # __wrapped__ allows AutoGraph to swap in a converted function. We give
    357         # the function a weak reference to itself to avoid a reference cycle.
--> 358         return weak_wrapped_fn().__wrapped__(*args, **kwds)
    359     weak_wrapped_fn = weakref.ref(wrapped_fn)
    360

~/.local/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py in wrapper(*args, **kwargs)
    903           except Exception as e:  # pylint:disable=broad-except
    904             if hasattr(e, "ag_error_metadata"):
--> 905               raise e.ag_error_metadata.to_exception(e)
    906             else:
    907               raise

ValueError: in converted code:

    <ipython-input>:29 train_one_step  *
        optimizer.apply_gradients(zip(grads_mask_mul, model.trainable_variables))
    /home/majumdar/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py:435 apply_gradients
        self._create_slots(var_list)
    /home/majumdar/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/optimizer_v2/adam.py:146 _create_slots
        self.add_slot(var, 'm')
    /home/majumdar/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py:587 add_slot
        initial_value=initial_value)
    /home/majumdar/.local/lib/python3.7/site-packages/tensorflow_core/python/ops/variables.py:260 __call__
        return cls._variable_v2_call(*args, **kwargs)
    /home/majumdar/.local/lib/python3.7/site-packages/tensorflow_core/python/ops/variables.py:254 _variable_v2_call
        shape=shape)
    /home/majumdar/.local/lib/python3.7/site-packages/tensorflow_core/python/ops/variables.py:65 getter
        return captured_getter(captured_previous, **kwargs)
    /home/majumdar/.local/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py:413 invalid_creator_scope
        "tf.function-decorated function tried to create "

    ValueError: tf.function-decorated function tried to create variables on non-first call.
```

The only way of avoiding this ValueError is by re-running the cells that define the train_one_step and test_step @tf.function-annotated functions. Why is this happening? Thanks. |
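A pure-Python analogy (hypothetical, no TensorFlow) may clarify the mechanism: a tf.function is allowed to create variables only while tracing its first call, and the Adam optimizer lazily creates slot variables ("m"/"v") for whichever model it is given inside apply_gradients. Passing a fresh model (or optimizer) to an already-traced function therefore triggers variable creation on a non-first call. The decorator below is a stand-in for that rule, not TensorFlow's implementation.

```python
# Stand-in for tf.function's "variables only on first call" rule.
def function(fn):
    state = {"first_call": True, "vars": {}}

    def create_variable(name, value):
        # mimics a variable creator scope: new names are only allowed
        # while the very first call is executing
        if name not in state["vars"]:
            if not state["first_call"]:
                raise ValueError("tried to create variables on non-first call")
            state["vars"][name] = value
        return state["vars"][name]

    def wrapper(model_id):
        out = fn(create_variable, model_id)
        state["first_call"] = False
        return out
    return wrapper

@function
def train_one_step(create_variable, model_id):
    # the optimizer creates one slot variable per model parameter
    return create_variable("adam/m/" + model_id, 0.0)

train_one_step("model_0")          # first call: slots created, fine
train_one_step("model_0")          # same variables reused: fine
try:
    train_one_step("model_1")      # new model, new slots on a later call
except ValueError as e:
    print(e)                       # tried to create variables on non-first call
```

This is also why re-running the defining cells "fixes" it: a freshly decorated function gets a fresh first call. Common workarounds are creating a new tf.function (or optimizer) per pruning round, or forcing slot creation before the first traced call.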
tensorflow/tensorflow | Documentation error for tf.signal.frame | Bug | URL(s) with the issue: tf.signal.frame documentation (L55-L199).

Description of issue (what needs changing): in the following example, the count of frames generated with pad_end=False is incorrect:

```python
# A batch-size 3 tensor of 9152 audio samples.
audio = tf.random.normal([3, 9152])

# Compute overlapping frames of length 512 with a step of 180 (frames overlap
# by 332 samples). By default, only 50 frames are generated since the last
# 152 samples do not form a full frame.
frames = tf.signal.frame(audio, 512, 180)
frames.shape.assert_is_compatible_with([3, 50, 512])
```

In fact, only 49 frames are generated with pad_end=False (the default).

Clear description: suppose we are given a tensor x with x.shape = (1, N), a frame length K, and a frame step k. To compute tf.signal.frame(x, frame_length=K, frame_step=k) (with the default axis=-1 and pad_end=False), we could equivalently stack, along axis=-2, the following list of slices:

x[:, 0:K], x[:, k:k+K], x[:, 2k:2k+K], x[:, 3k:3k+K], ..., x[:, jk:jk+K],

where j is the maximum integer such that jk + K <= N. We can compute j = floor((N - K) / k), so the number of slices in this list is j + 1 = 1 + floor((N - K) / k). In the example here, N = 9152, K = 512, and k = 180, so j + 1 = 1 + floor((9152 - 512) / 180) = 1 + floor(8640 / 180) = 1 + 48 = 49. Thus in the example given, the returned shape should be [3, 49, 512], not [3, 50, 512]. Attached is a screenshot showing that this is indeed the behavior of tf.signal.frame, so this is an error in the documentation, not in the code.

Submitting a pull request: I am planning to submit a pull request. |
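The derivation above can be sanity-checked in a few lines of plain Python. `num_frames` is a hypothetical helper implementing the 1 + floor((N - K) / k) formula from the issue, not part of tf.signal.

```python
# Frame count under pad_end=False semantics: frames start at 0, k, 2k, ...
# and must fit entirely inside the signal of length n.
def num_frames(n, frame_length, frame_step):
    if n < frame_length:
        return 0  # not even one full frame fits
    return 1 + (n - frame_length) // frame_step

print(num_frames(9152, 512, 180))  # 49, not the 50 stated in the docs
```

The boundary cases behave as expected: a signal of exactly frame_length samples yields one frame, and one sample fewer yields none.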
tensorflow/tensorflow | TPU closed error | Bug | When I run on a TPU on Colab I meet this error:

```
Epoch 1/6
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_model/bert/pooler/dense/kernel:0', 'tf_bert_model/bert/pooler/dense/bias:0'] when minimizing the loss.
 1/25 [>.............................] - ETA: 28:10
---------------------------------------------------------------------------
UnavailableError                          Traceback (most recent call last)
<ipython-input> in <module>
     94
     95
---> 96 model.fit(train_dataset, epochs=6)
     97
     98

10 frames
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

UnavailableError: Socket closed
Additional GRPC error information:
{"created":"@1581089996.572389050","description":"Error received from peer","file":"external/grpc/src/core/lib/surface/call.cc","file_line":1039,"grpc_message":"Socket closed","grpc_status":14}
``` |
tensorflow/tensorflow | TF 2.1.0 crashes during shape inference | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): standard Docker image with Ubuntu 18.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de 2.1.0
- CUDA/cuDNN version: 10.1.243-1 / 7.6.4.38-1+cuda10.1
- GPU model and memory: TITAN RTX, 24220 MiB

Describe the current behavior: using a rather large and complex model with lots of tf.TensorArray operations ends up with a crash. It is 100% reproducible, but the overall codebase (Python only) is quite large and the input pipeline is tricky.

Code to reproduce the issue: this is the class I use to operate on tensor arrays, which seems to be the reason for the crash (it works in evaluation mode, though):

```python
class RNNBatch:
    def __init__(self, rnn_layer, rnn_batch_size, true_words, true_lengths, training):
        self.rnn_batch_size = rnn_batch_size
        self.training = training
        self.dtype = tf.float32
        self.rnn_layer = rnn_layer  # exposes max_sequence_len, dictionary_size
        self.true_words = true_words
        self.true_lengths = true_lengths
        self.written = 0
        self.rnn_process_start = 0
        self.outputs_written = 0

    def run(self, arrays):
        selected_features = arrays['selected_features'].concat()
        batch_size = tf.shape(selected_features)[0]
        state_h = tf.zeros((batch_size, self.rnn_layer.num_rnn_units), dtype=self.dtype)
        state_c = tf.zeros((batch_size, self.rnn_layer.num_rnn_units), dtype=self.dtype)
        states = (state_h, state_c)
        tw = self.true_words[self.rnn_process_start:self.rnn_process_start + self.written]
        tl = self.true_lengths[self.rnn_process_start:self.rnn_process_start + self.written]
        out, out_ar = self.rnn_layer(selected_features, tw, tl, states, self.training)
        arrays['output'] = arrays['output'].write(self.outputs_written, out)
        arrays['output_ar'] = arrays['output_ar'].write(self.outputs_written, out_ar)
        self.outputs_written += 1
        self.rnn_process_start += self.written
        self.written = 0
        return arrays

    def feed_crop(self, crop_features, arrays):
        arrays['selected_features'] = arrays['selected_features'].write(self.written, crop_features)
        self.written += 1
        if self.written == self.rnn_batch_size:
            arrays = self.run(arrays)
            arrays['selected_features'] = tf.TensorArray(
                crop_features.dtype, size=0,
                element_shape=tf.TensorShape([None, crop_features.shape[1]]))
        return arrays

    def return_values(self, arrays):
        if self.written > 0:
            arrays = self.run(arrays)
        return arrays['output'].concat(), arrays['output_ar'].concat()
```

and the use case is something like this (`images` is a huge tensor of data):

```python
rnn_batch = RNNBatch(self.rnn_layer, 64, true_words, true_lengths, training)
arrays = {
    'selected_features': tf.TensorArray(
        tf.float32, size=0, dynamic_size=True,
        element_shape=tf.TensorShape(
            [None, crop_size * crop_size * features_full.shape[3]])),
    'output': tf.TensorArray(
        tf.float32, size=0, dynamic_size=True,
        element_shape=tf.TensorShape(
            [None, self.max_sequence_len, self.dictionary_size])),
    'output_ar': tf.TensorArray(
        tf.float32, size=0, dynamic_size=True,
        element_shape=tf.TensorShape(
            [None, self.max_sequence_len, self.dictionary_size])),
}
for idx in tf.range(batch_size):
    for crop_idx in tf.range(num_crops):
        arrays = rnn_batch.feed_crop(crop_features, arrays)
```

Other info / logs: full traceback attached (tf_traceback.txt). There is also a core file (gzipped size is about 749 MB). What should be the next steps, besides rewriting the above class, to debug this problem? Attaching the stack trace, which ends up like this:

```
Thread 1 "python3" received signal SIGSEGV, Segmentation fault.
0x00007fff647051e0 in tensorflow::(anonymous namespace)::<lambda(tensorflow::shape_inference::InferenceContext*)> ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
(gdb) bt
#0  0x00007fff647051e0 in tensorflow::(anonymous namespace)::<lambda(tensorflow::shape_inference::InferenceContext*)> ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#1  0x00007fff5d5a6104 in std::_Function_handler::_M_invoke(std::_Any_data const&, tensorflow::shape_inference::InferenceContext*) ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#2  0x00007fff5a018c22 in tensorflow::shape_inference::InferenceContext::Run(std::function const&) ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/libtensorflow_framework.so.2
#3  0x00007fff64342a14 in tensorflow::ShapeRefiner::RunShapeFn(tensorflow::Node const*, tensorflow::OpRegistrationData const*, tensorflow::ExtendedInferenceContext*) ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#4  0x00007fff643443ee in tensorflow::ShapeRefiner::AddNode(tensorflow::Node const*) ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#5  0x00007fff5df740f1 in TF_FinishOperation ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#6  0x00007fff5d529dd6 in _wrap_TF_FinishOperation ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
```

Unfortunately, there are no debug packages I could use to produce a better trace. |
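The control flow of the class above can be illustrated with a small pure-Python sketch. `BatchedBuffer` below is a hypothetical stand-in, with lists in place of TensorArrays and `sum()` in place of the RNN layer call; it shows the pattern of accumulating crops until a full batch is buffered, flushing it, and flushing a final partial batch at the end.

```python
# Pure-Python sketch of the feed_crop / run / return_values pattern.
class BatchedBuffer:
    def __init__(self, batch_size, process):
        self.batch_size = batch_size
        self.process = process      # stand-in for the RNN layer invocation
        self.buffer = []            # stand-in for the selected_features TensorArray
        self.outputs = []           # stand-in for the output TensorArrays

    def feed(self, item):           # cf. RNNBatch.feed_crop
        self.buffer.append(item)
        if len(self.buffer) == self.batch_size:
            self.outputs.append(self.process(self.buffer))
            self.buffer = []        # fresh TensorArray in the original

    def finish(self):               # cf. RNNBatch.return_values
        if self.buffer:             # flush the incomplete last batch
            self.outputs.append(self.process(self.buffer))
            self.buffer = []
        return self.outputs

buf = BatchedBuffer(batch_size=3, process=sum)
for x in range(7):
    buf.feed(x)
print(buf.finish())  # [3, 12, 6] - two full batches and one partial
```

Python-side counters like `written` in the original are a known source of trouble when the loop is traced into a graph, since they advance at trace time rather than run time; that interaction is one angle worth checking while debugging the crash.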
tensorflow/tensorflow | TFLite Micro: segfault after failing to allocate tensors in MicroAllocator | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Arch Linux, NixOS
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: nope
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): latest master
- Python version: n/a
- Bazel version (if compiling from source): nope
- Make/GCC compiler version (if compiling from source): 8.3.0
- CUDA/cuDNN version: nope
- GPU model and memory: nope

Describe the current behavior: the tflite::MicroInterpreter constructor segfaults.

Describe the expected behavior: it shouldn't segfault.

Code to reproduce the issue: grrr... call the MicroInterpreter constructor with a too-small arena size.

Other info / logs: fix: (micro_allocator.cc, L345) this conditional should return an error code instead of continuing. PR coming up. |
tensorflow/tensorflow | BinaryElementWiseOp's Operate template argument is unused and can cause unnecessary errors | Bug | System information:
- Have I written custom code: yes
- OS platform and distribution: macOS 10
- TensorFlow installed from: binary
- TensorFlow version: 2.1
- Python version: 3.7.2

Describe the current behavior: BinaryElementWiseOp (L67-L109) expects child classes to define an Operate method, which is used in the Compute method to perform the class's operation. This Operate method has an int template parameter, NDIMS, which represents the dimensionality of the inputs and output (L84-L107). However, this template parameter is not used by any BinaryElementWiseOp subclass; all subclasses currently call an OperateNoTemplate method from inside Operate, for example ReluGradOp (L66-L79). This leads to seemingly unnecessary errors, like in the following Python code:

```python
import tensorflow as tf

x = tf.reshape([0.0], [1] * 9)  # too many dimensions for ReluGradOp
x = tf.Variable(x)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.nn.relu(x)
tape.gradient(y, x)
```

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: We only handle up to Tensor::dims() up to 8.  Not 9 [Op:ReluGrad]
```

The full list of BinaryElementWiseOp subclasses I was able to find is: ReluGradOp (L61-L80), Relu6GradOp (L104-L122), LeakyReluGradOp (L154-L182), EluGradOp (L207-L225), SeluGradOp (L249-L267), SoftsignGradOp (L46-L66), SoftplusGradOp (L46-L66), and FakeQuantWithMinMaxArgsGradientOp (L102-L148). All of them follow the Operate-calls-OperateNoTemplate pattern. It seems that BinaryElementWiseOp and its subclasses could be refactored by removing the NDIMS template argument from Operate and moving the contents of each subclass's OperateNoTemplate method into the corresponding Operate method. |
tensorflowtensorflow | tensorflow lite conversion fail check fail start indice size num input axis 2 vs 1 stridedslice op require no more than 1 start index | Bug | system information os platform and distribution e g linux ubuntu 16 04 linux cento 7 5 tensorflow instal from source or binary tensorflow be preinstalle on the server tensorflow version or github sha if from source tensorflow 2 0 0 command use to run the converter or code if you re use the python api import tensorflow as tf note full quantization require tf 1 15 or high import numpy as np from pil import image def representative dataset gen datum np random rand 100 416 416 3 astype dtype np float32 for I in range num calibration step get sample input datum as a numpy array in a method of your choose image datum take 1 image np expand dim datum I axis 0 yield image num calibration step 100 save model dir projectnb ec720prj nakamura spring2020 tf2 yolov3 savedmodel yolov3 1 dir of yolov3 tf implementation converter tf lite tfliteconverter from save model save model dir converter optimization tf lite optimize default converter representative dataset tf lite representativedataset representative dataset gen converter representative dataset representative dataset gen enforce full integer quantization for all op and use int input output converter target spec support op tf lite opsset tflite builtins int8 converter inference input type tf uint8 converter inference output type tf uint8 tflite quant model converter convert open convert model tflite wb write tflite quant model the output from the converter invocation 2020 02 06 10 53 25 407210 I tensorflow core platform cpu feature guard cc 145 this tensorflow binary be optimize with intel r mkl dnn to use the follow cpu instruction in performance critical operation avx avx2 fma to enable they in non mkl dnn operation rebuild tensorflow with the appropriate compiler flag 2020 02 06 10 53 25 425952 I tensorflow core platform profile util cpu util cc 94 cpu 
frequency 2394310000 hz 2020 02 06 10 53 25 427429 I tensorflow compiler xla service service cc 168 xla service 0xce6d420 execute computation on platform host device 2020 02 06 10 53 25 427467 I tensorflow compiler xla service service cc 175 streamexecutor device 0 host default version 2020 02 06 10 53 25 436305 I tensorflow core common runtime process util cc 115 create new thread pool with default inter op set 28 tune use inter op parallelism thread for good performance 2020 02 06 10 53 42 002931 I tensorflow core grappler device cc 60 number of eligible gpu core count 8 compute capability 0 0 0 note tensorflow be not compile with cuda support 2020 02 06 10 53 42 003093 I tensorflow core grappler cluster single machine cc 356 start new session 2020 02 06 10 53 42 111911 I tensorflow core grappler optimizer meta optimizer cc 716 optimization result for grappler item graph to optimize 2020 02 06 10 53 42 111960 I tensorflow core grappler optimizer meta optimizer cc 718 function optimizer graph size after 2070 node 1696 5733 edge 5354 time 66 938m 2020 02 06 10 53 42 111973 I tensorflow core grappler optimizer meta optimizer cc 718 function optimizer function optimizer do nothing time 1 093ms 2020 02 06 10 53 48 038844 I tensorflow core grappler device cc 60 number of eligible gpu core count 8 compute capability 0 0 0 note tensorflow be not compile with cuda support 2020 02 06 10 53 48 038997 I tensorflow core grappler cluster single machine cc 356 start new session 2020 02 06 10 53 49 980217 I tensorflow core grappler optimizer meta optimizer cc 716 optimization result for grappler item graph to optimize 2020 02 06 10 53 49 980262 I tensorflow core grappler optimizer meta optimizer cc 718 constant folding graph size after 1268 node 370 4423 edge 734 time 922 534ms 2020 02 06 10 53 49 980274 I tensorflow core grappler optimizer meta optimizer cc 718 constant folding graph size after 1268 node 0 4423 edge 0 time 255 534ms traceback most recent call last file coral 
coral quantizer py line 38 in tflite quant model converter convert file share pkg 7 tensorflow 2 0 0 install lib scc python3 6 site package tensorflow core lite python lite py line 446 in convert converter kwargs file share pkg 7 tensorflow 2 0 0 install lib scc python3 6 site package tensorflow core lite python convert py line 449 in toco convert impl enable mlir converter enable mlir converter file share pkg 7 tensorflow 2 0 0 install lib scc python3 6 site package tensorflow core lite python convert py line 200 in toco convert protos raise convertererror see console for info n s n s n stdout stderr tensorflow lite python convert convertererror see console for info 2020 02 06 10 54 00 060634 I tensorflow lite toco graph transformation graph transformation cc 39 before remove unused op 676 operator 1271 array 0 quantize 2020 02 06 10 54 00 077126 I tensorflow lite toco graph transformation graph transformation cc 39 before general graph transformation 676 operator 1271 array 0 quantize 2020 02 06 10 54 00 465306 f tensorflow lite toco graph transformation resolve strided slice attribute cc 95 check fail start indice size num input axis 2 vs 1 stridedslice op require no more than 1 start indice fatal python error abort current thread 0x00007fbbde05b740 most recent call first file share pkg 7 tensorflow 2 0 0 install lib scc python3 6 site package tensorflow core lite toco python toco from protos py line 52 in execute file share pkg 7 tensorflow 2 0 0 install lib scc python3 6 site package absl app py line 250 in run main file share pkg 7 tensorflow 2 0 0 install lib scc python3 6 site package absl app py line 299 in run file share pkg 7 tensorflow 2 0 0 install lib scc python3 6 site package tensorflow core python platform app py line 40 in run file share pkg 7 tensorflow 2 0 0 install lib scc python3 6 site package tensorflow core lite toco python toco from protos py line 89 in main file share pkg 7 tensorflow 2 0 0 install bin toco from protos line 10 in also 
please include a link to the saved model or GraphDef (put link here, or attach to the issue). Failure details: Hello, I am trying to run the full YOLOv3 on a Coral Edge TPU. To do this, I am currently trying to convert the SavedModel (.pb format) of trained YOLOv3 into a quantized .tflite model using the code I attached above. For some reason, the conversion fails with this error: Check failed: start_indices.size() == num_input_axes (2 vs. 1): StridedSlice op requires no more than 1 start index. Fatal Python error: Aborted. I was able to convert the official pretrained MobileNetV1 SavedModel file into a quantized .tflite model using the exact same code. I am using this repo for the YOLOv3 TensorFlow 2.0.0 implementation. Do you know what is causing this conversion failure? Thank you. Any other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. YOLOv3's implementation code. Thank you for your help! |
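The representative-dataset pattern used above can be sketched without TensorFlow: a calibration generator simply yields one input batch at a time so the converter can observe activation ranges. A minimal NumPy-only sketch (shapes and step count taken from the issue; no TFLite calls, and random data stands in for real calibration images):

```python
import numpy as np

NUM_CALIBRATION_STEPS = 100

def representative_dataset_gen():
    for _ in range(NUM_CALIBRATION_STEPS):
        # The converter expects a list of input arrays, each with a batch dim.
        image = np.random.rand(1, 416, 416, 3).astype(np.float32)
        yield [image]

# Drive the generator once, the way a converter would:
for i, batch in enumerate(representative_dataset_gen()):
    assert batch[0].shape == (1, 416, 416, 3)
print(i + 1)  # 100 batches served
```

In practice, replacing the random data with real preprocessed images matters: the observed activation ranges determine the quantization parameters.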
tensorflowtensorflow | Misleading documentation for the hooks parameter in Estimator evaluate/predict/train methods | Bug | The documentation for premade estimators such as tf.estimator.LinearRegressor says that each of the methods evaluate, predict, and train takes a parameter called hooks, which is a list of tf.train.SessionRunHook. However, no such class exists in the v2.1 API. Instead, the source code says that it expects instances of tf.compat.v1.train.SessionRunHook. In the example of LinearRegressor, other premade estimators share the same problem. This is the API doc, which I believe is generated directly from the source code; see the documentation of the hooks parameter for the train, evaluate, and predict methods (evaluate, predict, train). This is the source code that says it expects a class from the compat.v1 API (L1947). |
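The SessionRunHook contract the estimator methods actually check is a small set of callbacks (begin, before_run, after_run, end). A hypothetical pure-Python sketch of that protocol, to illustrate what a hook-driven training loop calls; the names mirror tf.compat.v1.train.SessionRunHook, but this is not TensorFlow code:

```python
class SessionRunHook:
    """Minimal stand-in for tf.compat.v1.train.SessionRunHook's interface."""
    def begin(self): pass
    def before_run(self, run_context): pass
    def after_run(self, run_context, run_values): pass
    def end(self, session): pass

class StepCounterHook(SessionRunHook):
    """Counts how many 'session runs' happened."""
    def __init__(self):
        self.steps = 0
    def after_run(self, run_context, run_values):
        self.steps += 1

def run_loop(hooks, num_steps):
    # Skeleton of how train/evaluate/predict invoke hooks around each step.
    for h in hooks:
        h.begin()
    for _ in range(num_steps):
        for h in hooks:
            h.before_run(None)
        result = None  # a real loop would execute the graph here
        for h in hooks:
            h.after_run(None, result)
    for h in hooks:
        h.end(None)

hook = StepCounterHook()
run_loop([hook], num_steps=5)
print(hook.steps)  # 5
```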
tensorflowtensorflow | Cannot perform full-integer post-training quantization on a converted tflite model | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): both. OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS / Ubuntu 18.04. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no. TensorFlow installed from (source or binary): either. TensorFlow version (use command below): 1.15.2. Python version: 3. Bazel version (if compiling from source): 0.26.1. GCC/Compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: The issue relates to full-integer post-training quantization in tflite conversion. Following the same example as in the documentation, but applied to my data, I have an issue when I identify the input data for the interpreter as float32. Below, my data in r is already float32, normalized between 0-10:

    input_data = np.array(r, dtype=np.float32)
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()

When I then call the tflite converter using tf.compat.v1.lite.TFLiteConverter.from_keras_model_file in TF 2.x (or tf.lite.TFLiteConverter.from_keras_model_file in TF 1.x), again following the documentation:

    converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
    converter.representative_dataset = representative_dataset_gen
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    tflite_quant_model = converter.convert()

The post-quantized tflite is correctly created, and when compiled for the TPU, all ops are correctly assigned. However, when I run a prediction I get the following error: ValueError: Cannot set tensor: Got value of type FLOAT32 but expected type UINT8 for input 16, name: conv2d_input. Essentially, input_data = np.array(r, dtype=np.float32) needs to be set as uint8 for this to work; however, that breaks the model completely, as it casts all data between 0-1 into just zeros. To be clear, this does not happen when using TF 2.x by calling the converter with tf.lite.TFLiteConverter.from_keras_model: using input_data = np.array(r, dtype=np.float32), the post-training quantized tflite model is created, and when used for prediction it works as intended. However, when compiled (because I am using the TFLiteConverterV2), the quantization/dequantization ops are not assigned to the TPU, making it useless to deploy on embedded devices that require full-integer quantization. Any insight is greatly appreciated. Thanks in advance! |
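With a uint8 input tensor, float inputs must be mapped through the tensor's quantization parameters (scale, zero point) rather than cast directly; a plain astype(np.uint8) truncates values in 0-1 to zero, which matches the breakage described above. A NumPy sketch of that mapping (the scale/zero-point values here are made up for illustration; a real interpreter reports the actual ones in input_details[0]['quantization']):

```python
import numpy as np

def quantize_input(x, scale, zero_point):
    # q = round(x / scale) + zero_point, clipped to the uint8 range
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

# Hypothetical quantization params for an input normalized to [0, 10]
scale, zero_point = 10.0 / 255.0, 0
x = np.array([0.0, 0.5, 5.0, 10.0], dtype=np.float32)

q = quantize_input(x, scale, zero_point)
# Unlike x.astype(np.uint8), small values survive instead of collapsing to 0:
print(q)  # [  0  13 128 255]
print(dequantize(q, scale, zero_point))
```

The dequantized values recover the original floats to within one quantization step, which is the behavior a correctly fed full-integer model relies on.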
tensorflowtensorflow | Keras LSTM fails to find the dnn implementation | Bug | System information: CUDA/cuDNN version: 10.1. GPU model and memory: GeForce RTX 2080. TF 2.1.0. Uncommenting the LSTM layer will yield the following error: UnknownError: [_Derived_] Failed to find the dnn implementation. [[{{node CudnnRNN}}]] [[sequential_6/bidirectional_2/backward_lstm_3/StatefulPartitionedCall]] [[Reshape_11/_38]] [Op:__inference_distributed_function_39046]. Working code:

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(encoder.vocab_size, 64),
        # tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])
    model.compile(loss='binary_crossentropy',
                  optimizer=tf.keras.optimizers.Adam(1e-4),
                  metrics=['accuracy'])
    history = model.fit(train_dataset, epochs=10,
                        validation_data=test_dataset,
                        validation_steps=30) |
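This "Failed to find the dnn implementation" error on RTX cards is often reported when cuDNN fails to initialize because TensorFlow has pre-allocated nearly all of the GPU memory. A commonly suggested workaround (an assumption here, not a confirmed fix for this exact report) is to enable memory growth before building the model:

```python
import tensorflow as tf

# Ask TensorFlow to allocate GPU memory incrementally so cuDNN
# has room to initialize its own handles.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

This must run before any op touches the GPU; applied afterwards it raises a runtime error.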
tensorflowtensorflow | TensorFlow 1.15 doesn't do incremental memory growth | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS Platform and Distribution (e.g., Linux Ubuntu 16.04): NixOS. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: - TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): unknown 1.15.0. Python version: 3.7.6. Bazel version (if compiling from source): - GCC/Compiler version (if compiling from source): - CUDA/cuDNN version: 10.2.89. GPU model and memory: RTX 2070, driver 440.44. Describe the current behavior: I'm trying to activate incremental memory growth; however, in all the ways I've done it, it always ends up allocating almost the entire GPU.

    import tensorflow as tf
    import tensorflow.keras as keras
    import tensorflow.keras.applications as applications
    import tensorflow.keras.utils as utils
    import numpy as np

    num_samples = 1000
    height = 224
    width = 224
    num_classes = 1000

    config = tf.compat.v1.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.compat.v1.Session(config=config)
    keras.backend.set_session(sess)

    gpus = tf.config.experimental.list_physical_devices('GPU')
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')

    model = applications.ResNet50(weights=None, input_shape=(height, width, 3), classes=num_classes)
    parallel_model = utils.multi_gpu_model(model, gpus=2, cpu_relocation=True)
    parallel_model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

    x = np.random.random((num_samples, height, width, 3))
    y = np.random.random((num_samples, num_classes))
    parallel_model.fit(x, y, epochs=20, batch_size=2)
    print('all done')

As I watch nvidia-smi, I always see almost the entire GPU allocated. This is true for the compat.v1 and for the new tf.config.experimental commands, and also for the environment variable TF_FORCE_GPU_ALLOW_GROWTH. The only one that has any effect is config.gpu_options.per_process_gpu_memory_fraction = 0.5. In all other cases I see the GPU memory get used up entirely, and I get these messages: 2020-02-06 15:48:35.517899: I tensorflow/stream_executor/cuda/cuda_driver.cc:831] failed to allocate 3.18G (3411184640 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory |
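Besides allow_growth and the fraction cap tried above, TF 1.15/2.x also exposes an explicit per-GPU memory limit via virtual device configuration; whether it behaves better here is untested in this report, so this is only a hedged configuration sketch:

```python
import tensorflow as tf

# Cap TensorFlow to ~4 GB on the first GPU instead of a fraction of it.
# Must be called before the GPU is initialized.
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096)])
```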
tensorflowtensorflow | tensorflow.fill does not work in Keras layers and models with dynamic shapes, as it should and as similar functions do | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS (Darwin 19.2.0 x86_64 i386 64bit), Mac version 10.15.2 x86_64. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a. TensorFlow installed from (source or binary): binary (pip install). TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de410 2.1.0. Python version: Python 3.7.6. Bazel version (if compiling from source): n/a. GCC/Compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: If I use tensorflow.fill in a tf.keras.layers.Layer where the shape for the fill depends on the input, it doesn't work, although it works with plenty of other similar functions and would reasonably be expected to work. Specifically, I call:

    import tensorflow as tf
    import numpy as np
    x = tf.keras.Input([1])
    x2 = tf.keras.layers.Lambda(lambda x: tf.fill(tf.shape(x)[0:1], 2.5))(x)
    model = tf.keras.Model(inputs=x, outputs=x2)
    model.predict(np.random.randn(10, 1))

I get the following error when I try to run predict with a model built with the Lambda (full error output below): SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [...]. Describe the expected behavior: I expected it to output a 10x1 numpy array of ones. This happens with other similar functions used in the same way; e.g., the following all work fine used in the same way:

    x2 = tf.keras.layers.Lambda(lambda x: tf.random.uniform(tf.shape(x)))(x)
    x2 = tf.keras.layers.Lambda(lambda x: tf.zeros_like(x))(x)
    x2 = tf.keras.layers.Lambda(lambda x: tf.ones_like(x) * 3.5)(x)

and they all depend on the shape of the input. For the case of random.uniform, it even takes the same call to tf.shape in the input and works fine. There's no reason tf.fill shouldn't work: the similar capability to create a full
tensor for an arbitrary value and input shape code to reproduce the issue import tensorflow as tf import numpy as np x tf keras input 1 x2 tf keras layers lambda lambda x tf fill tf shape x 0 1 2 5 x model tf keras model inputs x outputs x2 model predict np random randn 10 1 other info log here be the full error output trace typeerror traceback most recent call last user workspace python virtualenv tensorflow2 lib python3 7 site package tensorflow core python eager execute py in quick execute op name num output input attrs ctx name 60 op name input attrs 61 num output 62 except core notokstatusexception as e typeerror an op outside of the function building code be be pass a graph tensor it be possible to have graph tensor leak out of the function building context by include a tf init scope in your function build code for example the follow function will fail tf function def have init scope my constant tf constant 1 with tf init scope add my constant 2 the graph tensor have name input 1 0 during handling of the above exception another exception occur symbolicexception traceback most recent call last in 5 x2 tf keras layers lambda lambda x tf fill tf shape x 0 1 2 5 x 6 model tf keras model inputs x outputs x2 7 model predict np random randn 10 1 user workspace python virtualenv tensorflow2 lib python3 7 site package tensorflow core python keras engine train py in predict self x batch size verbose step callback max queue size worker use multiprocesse 1011 max queue size max queue size 1012 worker worker 1013 use multiprocesse use multiprocesse 1014 1015 def reset metric self user workspace python virtualenv tensorflow2 lib python3 7 site package tensorflow core python keras engine training v2 py in predict self model x batch size verbose step callback max queue size worker use multiprocesse kwargs 496 model modekey predict x x batch size batch size verbose verbose 497 step step callback callback max queue size max queue size 498 worker worker use multiprocesse use 
multiprocesse kwargs 499 500 user workspace python virtualenv tensorflow2 lib python3 7 site package tensorflow core python keras engine training v2 py in model iteration self model mode x y batch size verbose sample weight step callback max queue size worker use multiprocesse kwargs 473 mode mode 474 training context training context 475 total epoch 1 476 cbks make logs model epoch log result mode 477 user workspace python virtualenv tensorflow2 lib python3 7 site package tensorflow core python keras engine training v2 py in run one epoch model iterator execution function dataset size batch size strategy step per epoch num sample mode training context total epoch 126 step step mode mode size current batch size as batch log 127 try 128 batch out execution function iterator 129 except stopiteration error outofrangeerror 130 todo kaftan file bug about tf function and error outofrangeerror user workspace python virtualenv tensorflow2 lib python3 7 site package tensorflow core python keras engine training v2 util py in execution function input fn 96 numpy translate tensor to value in eager mode 97 return nest map structure non none constant value 98 distribute function input fn 99 100 return execution function user workspace python virtualenv tensorflow2 lib python3 7 site package tensorflow core python eager def function py in call self args kwd 566 xla context exit 567 else 568 result self call args kwd 569 570 if trace count self get trace count user workspace python virtualenv tensorflow2 lib python3 7 site package tensorflow core python eager def function py in call self args kwd 636 args kwd 637 if we do not create any variable the trace we have be good enough 638 return self concrete stateful fn filter call canon args canon kwd pylint disable protect access 639 640 def fn with cond inner args inner kwd user workspace python virtualenv tensorflow2 lib python3 7 site package tensorflow core python eager function py in filter call self args kwargs 1609 if 
isinstance t op tensor 1610 resource variable op baseresourcevariable 1611 self capture input 1612 1613 def call flat self args capture input cancellation manager none user workspace python virtualenv tensorflow2 lib python3 7 site package tensorflow core python eager function py in call flat self args capture input cancellation manager 1690 no tape be watch skip to run the function 1691 return self build call output self inference function call 1692 ctx args cancellation manager cancellation manager 1693 forward backward self select forward and backward function 1694 args user workspace python virtualenv tensorflow2 lib python3 7 site package tensorflow core python eager function py in call self ctx args cancellation manager 543 input args 544 attrs executor type executor type config proto config 545 ctx ctx 546 else 547 output execute execute with cancellation user workspace python virtualenv tensorflow2 lib python3 7 site package tensorflow core python eager execute py in quick execute op name num output input attrs ctx name 73 raise core symbolicexception 74 input to eager execution function can not be keras symbolic 75 tensor but find format keras symbolic tensor 76 raise e 77 pylint enable protect access symbolicexception input to eager execution function can not be keras symbolic tensor but find |
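Independently of whether tf.fill accepts a symbolic shape, the same result can be produced with the ops the reporter confirms do work: ones_like multiplied by a constant fills a tensor of the input's dynamic shape with an arbitrary value. The equivalence is plain elementwise arithmetic; the sketch below uses NumPy so it is self-contained, and in the Lambda layer the tf.* equivalents (tf.ones_like(x) * value) apply:

```python
import numpy as np

def fill_like(x, value):
    # Equivalent to fill(shape(x), value), built from ones_like,
    # which the report shows does work inside a Keras Lambda layer.
    return np.ones_like(x) * value

x = np.random.randn(10, 1).astype(np.float32)
out = fill_like(x, 2.5)
print(out.shape)   # (10, 1)
print(out[0, 0])   # 2.5
```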
tensorflowtensorflow | TF 2.1 model saving error for models with a Batch Normalization layer under MirroredStrategy | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux RedHat 8.1 (in Docker). TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): TF 2.1.0. Python version: 3.6.8. CUDA/cuDNN version: 10.1. GPU model and memory: V100, 32 GB. Describe the current behavior: In TF 2.1.0, if I build and compile a model with a Batch Normalization layer under a MirroredStrategy scope, model saving will throw an error like: KeyError: Failed to add concrete function b'__inference_sequential_layer_call_and_return_conditional_losses_869' to object-based SavedModel as it captures tensor <tf.Tensor: shape=(), dtype=resource> which is unsupported or not reachable from root. One reason could be that a stateful object or a variable that the function depends on is not assigned to an attribute of the serialized trackable object (see SaveTest.test_captures_unreachable_variable). I have tested with TF 2.0.0 and this seems to be fine. Code to reproduce the issue:

    import tensorflow as tf

    def build_and_compile_model():
        inputs = tf.keras.Input((20,))
        x = tf.keras.layers.BatchNormalization()(inputs)
        y = tf.keras.layers.Dense(2)(x)
        model = tf.keras.Model(inputs=inputs, outputs=y)
        model.compile(
            loss=tf.keras.losses.sparse_categorical_crossentropy,
            optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
            metrics=['accuracy'])
        return model

    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        model = build_and_compile_model()
    model.save('test', save_format='tf')

Other info / logs: Traceback logs as follows: KeyError traceback (most recent call last) tmp/site-packages/tensorflow_core/python/saved_model/function_serialization.py in serialize_concrete_function(concrete_function, node_ids, coder) 53 for capture in concrete_function.captured_inputs: 54 bound_inputs.append(node_ids[capture]) 55 except KeyError: tmp/site-packages/tensorflow_core/python/util/object_
identity py in getitem self key 131 def getitem self key 132 return self storage self wrap key key 133 keyerror objectidentitywrapper wrapping during handling of the above exception another exception occur keyerror traceback most recent call last in 1 model save test10 save format tf tmp site package tensorflow core python keras engine network py in save self filepath overwrite include optimizer save format signature option 1006 1007 save save model self filepath overwrite include optimizer save format 1008 signature option 1009 1010 def save weight self filepath overwrite true save format none tmp site package tensorflow core python keras save save py in save model model filepath overwrite include optimizer save format signature option 113 else 114 save model save save model filepath overwrite include optimizer 115 signature option 116 117 tmp site package tensorflow core python keras save save model save py in save model filepath overwrite include optimizer signature option 76 we use the default replica context here 77 with distribution strategy context get default replica context pylint disable protect access 78 save lib save model filepath signature option 79 80 if not include optimizer tmp site package tensorflow core python save model save py in save obj export dir signature option 921 compat as str constant save model filename pb 922 object graph proto serialize object graph 923 saveable view asset info asset index 924 meta graph def object graph def copyfrom object graph proto 925 tmp site package tensorflow core python save model save py in serialize object graph saveable view asset file def index 645 for concrete function in saveable view concrete function 646 serialize function serialization serialize concrete function 647 concrete function saveable view capture tensor node ids coder 648 if serialize be not none 649 proto concrete function concrete function name copyfrom tmp site package tensorflow core python save model function serialization py in 
serialize concrete function concrete function node ids coder 61 trackable object 62 see savet test capture unreachable variable 63 concrete function name capture 64 concrete function proto save object graph pb2 savedconcretefunction 65 structured output func graph module convert structure to signature keyerror fail to add concrete function b inference sequential 1 layer call and return conditional loss 1937 to object base save model as it capture tensor tf tensor shape dtype resource which be unsupported or not reachable from root one reason could be that a stateful object or a variable that the function depend on be not assign to an attribute of the serialized trackable object see savet test capture unreachable variable thank so much for your time |
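A hedged workaround sketch (not a confirmed fix from this thread): persist only the weights and rebuild the model outside the strategy scope before restoring, which sidesteps serializing the strategy-captured resource tensors. This assumes the build_and_compile_model() and model from the repro above:

```python
# Checkpoint the weights; this path works even when model.save fails.
model.save_weights('test_weights/ckpt')

# Rebuild an identical model OUTSIDE strategy.scope() and restore into it.
fresh_model = build_and_compile_model()
fresh_model.load_weights('test_weights/ckpt')

# Saving the strategy-free clone avoids the captured-resource KeyError.
fresh_model.save('test', save_format='tf')
```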
tensorflowtensorflow | tf.keras.Sequential ignores the outer name scope(s) when built proactively | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Catalina 10.15.2 (19C57). TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.1.0. Python version: 3.6.5 (v3.6.5:f59c0932b4). Describe the current behavior: When the build method of a tf.keras.Sequential instance is explicitly called within one or several nested tf.name_scope context manager(s), the variable/weight names of the layers wrapped into the Sequential don't get the respective name scope prefixes. Describe the expected behavior: After build-ing a Sequential, all the tf.name_scope(s) surrounding the build call must be reflected as prefixes of the Sequential's variable/weight names. This is what currently happens in TensorFlow 1.15.2. Code to reproduce the issue:

    import tensorflow as tf

    seq = tf.keras.Sequential(layers=[
        tf.keras.layers.Dense(units=10, name='d1'),
        tf.keras.layers.Dense(units=20, name='d2'),
    ])
    with tf.name_scope('a'):
        with tf.name_scope('b'):
            seq.build(input_shape=(32, 784))
    for w in seq.weights:
        print(w.name)

The output with TensorFlow 2.1.0: d1/kernel:0 d1/bias:0 d2/kernel:0 d2/bias:0. The output with TensorFlow 1.15.2: a/b/d1/kernel:0 a/b/d1/bias:0 a/b/d2/kernel:0 a/b/d2/bias:0 |
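The expected prefixing behavior can be sketched without TensorFlow: a name scope is essentially a stack of path segments joined with '/'. A toy re-implementation (hypothetical, purely to illustrate the TF 1.15-style naming shown above):

```python
from contextlib import contextmanager

_scope_stack = []

@contextmanager
def name_scope(name):
    # Push a segment on entry, pop it on exit, even on exceptions.
    _scope_stack.append(name)
    try:
        yield
    finally:
        _scope_stack.pop()

def scoped_name(name):
    # Join the active scopes with the variable's own name, TF-style.
    return '/'.join(_scope_stack + [name]) + ':0'

with name_scope('a'):
    with name_scope('b'):
        print(scoped_name('d1/kernel'))  # a/b/d1/kernel:0
```

The bug report is that TF 2.1's Sequential.build produces names as if the stack were empty, while TF 1.15 applied it as sketched here.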
tensorflowtensorflow | High RAM usage for TF runtime | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.4 LTS. TensorFlow installed from (source or binary): pip. TensorFlow version (use command below): tensorflow-gpu 2.0.0 or tensorflow 2.1.0. Python version: 3.6.8. CUDA/cuDNN version: CUDA Toolkit 10.1.243, cuDNN 7.6.5.32. GPU model and memory: 8 x GeForce RTX 2080 Ti (11019 MiB); system memory: 32 GB. Describe the current behavior: Even small computations lead to very high RAM usage of the system memory (not GPU memory). As shown in the following, a simple single-float variable initialization leads to more than 2 GB RAM increase. The fewer graphics cards are visible to TensorFlow, the less RAM is used after all. Describe the expected behavior: The expected behavior is way less memory usage, and also a constant amount independent of the number of GPUs visible to TensorFlow. More than 2 GB per instance is not viable for my/our situation, since multiple users are working on the machine. Code to reproduce the issue:

    import os, psutil
    p = psutil.Process(os.getpid())
    print('memory usage before import:', p.memory_info().rss / 1024 / 1024, 'MB')

    import tensorflow as tf

    num_visible_gpus = 8  # optional
    gpu_devs = tf.config.experimental.list_physical_devices('GPU')  # optional
    tf.config.experimental.set_visible_devices(gpu_devs[:num_visible_gpus], 'GPU')  # optional
    print('memory usage after import:', p.memory_info().rss / 1024 / 1024, 'MB')

    tf.Variable(42.0)  # do some pseudo-work
    print('memory usage after var def:', p.memory_info().rss / 1024 / 1024, 'MB')

Output: memory usage before import: 47.62109375 MB; memory usage after import: 340.15625 MB; memory usage after var def: 2485.1796875 MB. Fewer GPUs result in less memory usage after var def: num_visible_gpus = 8: 2626.039063 MB; 7: 2488.765626 MB; 6: 2267.289063 MB; 5: 2173.917968 MB; 4: 2011.917968 MB; 3: 1809.957031 MB; 2: 1681.316406 MB; 1: 1539.554688 MB; 0: 1201.152343 MB. These results were obtained in a Jupyter notebook environment; when run as a .py file, the usage is slightly (by ~200 MB) lower, e.g., num_visible_gpus = 8: 2347.7148437 MB. |
tensorflowtensorflow | Unable to evaluate FusedBatchNormV3 operation without GPU | Bug | System information: Have I written custom code: yes. OS Platform and Distribution: Linux Ubuntu 16.04. Mobile device (if the issue happens on a mobile device): n/a. TensorFlow installed from binary (installed by pip). TensorFlow version: v1.15.0-rc3-22-g590d6ee 1.15.0. Python version: 3.5.2. Bazel version (if compiling from source): n/a. GCC/Compiler version (if compiling from source): n/a. CUDA/cuDNN version: CUDA 10.0, cuDNN 7.6. GPU model and memory: GeForce GTX 1080 Ti. Describe the current behavior: I would like to evaluate the output of each operation of our network. I succeeded in getting the outputs by running the following code with GPU (using tensorflow-gpu), but without GPU (using tensorflow-cpu) I get an error as follows: tensorflow.python.framework.errors_impl.OutOfRangeError: Node 'batchnorm/FusedBatchNormV3' (type: '_FusedConv2D', num of outputs: 1) does not have output 1. There is different behavior for batchnorm/FusedBatchNormV3 between w/ GPU and w/o GPU. I think FusedBatchNormV3's num of outputs should be 6, but that error shows num of outputs is 1. Describe the expected behavior: I expect to be able to evaluate the output of batchnorm/FusedBatchNormV3 using only the CPU. Code to reproduce the issue: At first, create a test script test.py as follows:

    import tensorflow as tf
    import numpy as np

    batch_size = 1
    image_size = (64, 64)

    def main():
        graph = tf.Graph()
        with graph.as_default():
            shape = (batch_size, image_size[0], image_size[1], 3)
            image_placeholder = tf.compat.v1.placeholder(tf.float32, shape=shape, name='image_placeholder')
            conv = tf.layers.conv2d(image_placeholder, filters=32, kernel_size=3, padding='same', use_bias=False)
            batch_normed = tf.contrib.layers.batch_norm(conv, is_training=False)
            softmax = tf.nn.softmax(batch_normed)
            output = tf.identity(softmax, name='output')
            init_op = tf.global_variables_initializer()
        session_config = tf.ConfigProto()
        sess = tf.Session(graph=graph, config=session_config)
        sess.run(init_op)
        image = np.expand_dims(np.zeros((image_size[0], image_size[1], 3), dtype=float), axis=0)
        feed_dict = {image_placeholder: image}
all op graph get operation all output index 0 for op in all op for op output in op output print op output val sess run op output name feed dict feed dict all output append val val name op output name index 1 if name main main after that run command as follow pip install tensorflow cpu 1 15 0 python test py other info log success run test py with gpu tensorflow gpu 1 15 0 python test py warn tensorflow from test py 16 conv2d from tensorflow python layer convolutional be deprecate and will be remove in a future version instruction for update use tf keras layer conv2d instead warn tensorflow from home hadusam tensorflow gpu lib python3 5 site package tensorflow core python layers convolutional py 424 layer apply from tensorflow python keras engine base layer be deprecate and will be remove in a future version instruction for update please use layer call method instead warn tensorflow the tensorflow contrib module will not be include in tensorflow 2 0 for more information please see for I o related op if you depend on functionality not list there please file an issue warn tensorflow from test py 21 the name tf global variable initializer be deprecate please use tf compat v1 global variable initializer instead warn tensorflow from test py 23 the name tf configproto be deprecate please use tf compat v1 configproto instead warn tensorflow from test py 24 the name tf session be deprecate please use tf compat v1 session instead 2020 02 04 17 12 56 452491 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 avx512f fma 2020 02 04 17 12 56 458872 I tensorflow core platform profile util cpu util cc 94 cpu frequency 3599980000 hz 2020 02 04 17 12 56 459272 I tensorflow compiler xla service service cc 168 xla service 0x6728e70 initialize for platform host this do not guarantee that xla will be use device 2020 02 04 17 12 56 459299 I tensorflow compiler xla service service cc 176 streamexecutor 
device 0 host default version 2020 02 04 17 12 56 462463 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcuda so 1 2020 02 04 17 12 56 676188 I tensorflow compiler xla service service cc 168 xla service 0x67d26e0 initialize for platform cuda this do not guarantee that xla will be use device 2020 02 04 17 12 56 676219 I tensorflow compiler xla service service cc 176 streamexecutor device 0 geforce gtx 1080 ti compute capability 6 1 2020 02 04 17 12 56 677202 I tensorflow core common runtime gpu gpu device cc 1618 find device 0 with property name geforce gtx 1080 ti major 6 minor 1 memoryclockrate ghz 1 62 pcibusid 0000 af 00 0 2020 02 04 17 12 56 677442 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 0 2020 02 04 17 12 56 678571 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 0 2020 02 04 17 12 56 679662 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 0 2020 02 04 17 12 56 679943 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 0 2020 02 04 17 12 56 681246 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 0 2020 02 04 17 12 56 682741 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 0 2020 02 04 17 12 56 685936 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 02 04 17 12 56 688818 I tensorflow core common runtime gpu gpu device cc 1746 add visible gpu device 0 2020 02 04 17 12 56 688898 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 0 2020 02 04 17 12 56 689938 I tensorflow core common runtime gpu gpu 
device cc 1159 device interconnect streamexecutor with strength 1 edge matrix 2020 02 04 17 12 56 689984 I tensorflow core common runtime gpu gpu device cc 1165 0 2020 02 04 17 12 56 690014 I tensorflow core common runtime gpu gpu device cc 1178 0 n 2020 02 04 17 12 56 691698 I tensorflow core common runtime gpu gpu device cc 1304 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 10470 mb memory physical gpu device 0 name geforce gtx 1080 ti pci bus i d 0000 af 00 0 compute capability 6 1 tensor image placeholder 0 shape 1 64 64 3 dtype float32 tensor conv2d kernel initializer random uniform shape 0 shape 4 dtype int32 tensor conv2d kernel initializer random uniform min 0 shape dtype float32 tensor conv2d kernel initializer random uniform max 0 shape dtype float32 tensor conv2d kernel initializer random uniform randomuniform 0 shape 3 3 3 32 dtype float32 tensor conv2d kernel initializer random uniform sub 0 shape dtype float32 tensor conv2d kernel initializer random uniform mul 0 shape 3 3 3 32 dtype float32 tensor conv2d kernel initializer random uniform 0 shape 3 3 3 32 dtype float32 tensor conv2d kernel 0 shape 3 3 3 32 dtype float32 ref tensor conv2d kernel assign 0 shape 3 3 3 32 dtype float32 ref tensor conv2d kernel read 0 shape 3 3 3 32 dtype float32 tensor conv2d dilation rate 0 shape 2 dtype int32 tensor conv2d conv2d 0 shape 1 64 64 32 dtype float32 2020 02 04 17 12 57 365891 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 02 04 17 12 58 204387 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 0 tensor batchnorm const 0 shape 32 dtype float32 tensor batchnorm beta initializer zeros 0 shape 32 dtype float32 tensor batchnorm beta 0 shape 32 dtype float32 ref tensor batchnorm beta assign 0 shape 32 dtype float32 ref tensor batchnorm beta read 0 shape 32 dtype float32 tensor batchnorm move mean initializer 
zeros 0 shape 32 dtype float32 tensor batchnorm move mean 0 shape 32 dtype float32 ref tensor batchnorm move mean assign 0 shape 32 dtype float32 ref tensor batchnorm move mean read 0 shape 32 dtype float32 tensor batchnorm move variance initializer one 0 shape 32 dtype float32 tensor batchnorm move variance 0 shape 32 dtype float32 ref tensor batchnorm move variance assign 0 shape 32 dtype float32 ref tensor batchnorm move variance read 0 shape 32 dtype float32 tensor batchnorm fusedbatchnormv3 0 shape 1 64 64 32 dtype float32 tensor batchnorm fusedbatchnormv3 1 shape 32 dtype float32 tensor batchnorm fusedbatchnormv3 2 shape 32 dtype float32 tensor batchnorm fusedbatchnormv3 3 shape 32 dtype float32 tensor batchnorm fusedbatchnormv3 4 shape 32 dtype float32 tensor batchnorm fusedbatchnormv3 5 dtype float32 tensor batchnorm const 1 0 shape dtype float32 tensor softmax 0 shape 1 64 64 32 dtype float32 tensor output 0 shape 1 64 64 32 dtype float32 error run test py without gpu tensorflow cpu 1 15 0 python test py warn tensorflow from test py 16 conv2d from tensorflow python layer convolutional be deprecate and will be remove in a future version instruction for update use tf keras layer conv2d instead warn tensorflow from home hadusam tensorflow cpu lib python3 5 site package tensorflow core python layers convolutional py 424 layer apply from tensorflow python keras engine base layer be deprecate and will be remove in a future version instruction for update please use layer call method instead warn tensorflow the tensorflow contrib module will not be include in tensorflow 2 0 for more information please see for I o related op if you depend on functionality not list there please file an issue warn tensorflow from test py 21 the name tf global variable initializer be deprecate please use tf compat v1 global variable initializer instead warn tensorflow from test py 23 the name tf configproto be deprecate please use tf compat v1 configproto instead warn tensorflow from 
test py 24 the name tf session be deprecate please use tf compat v1 session instead 2020 02 04 17 26 55 513752 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 avx512f fma 2020 02 04 17 26 55 518222 I tensorflow core platform profile util cpu util cc 94 cpu frequency 3599980000 hz 2020 02 04 17 26 55 518610 I tensorflow compiler xla service service cc 168 xla service 0x52ffee0 initialize for platform host this do not guarantee that xla will be use device 2020 02 04 17 26 55 518636 I tensorflow compiler xla service service cc 176 streamexecutor device 0 host default version tensor image placeholder 0 shape 1 64 64 3 dtype float32 tensor conv2d kernel initializer random uniform shape 0 shape 4 dtype int32 tensor conv2d kernel initializer random uniform min 0 shape dtype float32 tensor conv2d kernel initializer random uniform max 0 shape dtype float32 tensor conv2d kernel initializer random uniform randomuniform 0 shape 3 3 3 32 dtype float32 tensor conv2d kernel initializer random uniform sub 0 shape dtype float32 tensor conv2d kernel initializer random uniform mul 0 shape 3 3 3 32 dtype float32 tensor conv2d kernel initializer random uniform 0 shape 3 3 3 32 dtype float32 tensor conv2d kernel 0 shape 3 3 3 32 dtype float32 ref tensor conv2d kernel assign 0 shape 3 3 3 32 dtype float32 ref tensor conv2d kernel read 0 shape 3 3 3 32 dtype float32 tensor conv2d dilation rate 0 shape 2 dtype int32 tensor conv2d conv2d 0 shape 1 64 64 32 dtype float32 tensor batchnorm const 0 shape 32 dtype float32 tensor batchnorm beta initializer zeros 0 shape 32 dtype float32 tensor batchnorm beta 0 shape 32 dtype float32 ref tensor batchnorm beta assign 0 shape 32 dtype float32 ref tensor batchnorm beta read 0 shape 32 dtype float32 tensor batchnorm move mean initializer zeros 0 shape 32 dtype float32 tensor batchnorm move mean 0 shape 32 dtype float32 ref tensor batchnorm move mean assign 0 shape 
(32,), dtype=float32_ref) Tensor("batchnorm/moving_mean/read:0", shape=(32,), dtype=float32) Tensor("batchnorm/moving_variance/Initializer/ones:0", shape=(32,), dtype=float32) Tensor("batchnorm/moving_variance:0", shape=(32,), dtype=float32_ref) Tensor("batchnorm/moving_variance/Assign:0", shape=(32,), dtype=float32_ref) Tensor("batchnorm/moving_variance/read:0", shape=(32,), dtype=float32) Tensor("batchnorm/FusedBatchNormV3:0", shape=(1, 64, 64, 32), dtype=float32) Tensor("batchnorm/FusedBatchNormV3:1", shape=(32,), dtype=float32)
Traceback (most recent call last):
  File "/home/hadusam/tensorflow_cpu/lib/python3.5/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/home/hadusam/tensorflow_cpu/lib/python3.5/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/home/hadusam/tensorflow_cpu/lib/python3.5/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.OutOfRangeError: Node 'batchnorm/FusedBatchNormV3' (type: '_FusedConv2D', num of outputs: 1) does not have output 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test.py", line 43, in <module>
    main()
  File "test.py", line 38, in main
    val = sess.run(op_output_name, feed_dict=feed_dict)
  File "/home/hadusam/tensorflow_cpu/lib/python3.5/site-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/home/hadusam/tensorflow_cpu/lib/python3.5/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/hadusam/tensorflow_cpu/lib/python3.5/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "/home/hadusam/tensorflow_cpu/lib/python3.5/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: Node 'batchnorm/FusedBatchNormV3' (type: '_FusedConv2D', num of outputs: 1) does not have output 1
tensorflowtensorflow | even load work | Bug | URL(s) with the issue: SavedModel compatibility. Description of issue (what needs changing): the sentence below is unclear in what it means; I think there might be an extra word: "TensorFlow 2.0 SavedModels even load work in TensorFlow 1.x, if all the ops are supported."
tensorflowtensorflow | broken outbound links from index pages for old tf APIs | Bug | URL(s) with the issue: numerous links of the old tf APIs; listing only a few index pages as examples here: links under Classes and Functions within ...; links under Classes and Functions within ...; links within ... (Java); links within ... Description of issue (what needs changing): a large number of links on index pages (e.g. for functions and classes) are currently broken (404). This appears to affect only the old tf versions (multiple versions affected), whose index pages are markdown files within the GitHub repository. Clear description: outbound links from these indexing pages are currently broken. Some might be fixed by adding a `.md` after the current URL to point to the correct markdown file available in the repository, but not all of them can be fixed this way. Correct links (is the link to the source code correct?): incorrect. Submit a pull request (are you planning to also submit a pull request to fix the issue?): currently no plans.
tensorflowtensorflow | differentiating an extremely complicated weightless transform | Bug | I'm considering inserting a weightless transform between CNN layers in a neural network. The transform is very complicated (details below). I'm aware that custom transformations are supported via e.g. Keras Lambda layers; however, I don't know whether TF is capable of handling something of this complexity, and the code I have would need to be converted for compatibility, so I'd rather it not be a wasted effort. Can TensorFlow's autodifferentiation handle synsq_cwt_fwd (L12, continuous synchrosqueezing transform)? Full dev-stage code is in the ssqueezepy repository; highlights, simplified:

```python
x = np.random.randn(2000)
# len(wx.squeeze().shape) == 2
psihfn = lambda w: (np.exp(2 * np.pi * 1j * (1 / w))   # complex exponential
                    * np.exp(1 / (1 / w - 1))          # inverse asymptotic real exponential
                    * (np.abs(w) < 1.999))             # boolean mask
for i in range(200):
    psih = psihfn(5 * np.arange(500))
    wx[i] = np.fft.ifftshift(np.fft.ifft(psih * np.fft.fft(x)))
```

```python
# wx.dtype == complex128
u = np.unwrap(np.angle(wx))
w = np.diff(u) / (t[1] - t[0])
```

1. psihfn involves (a) a complex exponential, (b) an inverse asymptotic real exponential, (c) a boolean mask.
2. wx involves (a) an fft of ifftshift of ifft of psih * fft(x), (b) computed iteratively, but not recurrently (future values don't depend on past ones).
3. w (second blob) involves (a) wx computed in 1 & 2 (a complex128 2D array), (b) applying diff(unwrap(angle(wx))) / t.

The complex component may be eliminated entirely (including in 1(a)). The main subject of attention is perhaps the FFT nesting; I don't know how numpy computes the FFT, as it encapsulates an involved op. There's also a question of performance: if it takes hours on a small array, it's not worth it.
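For intuition, the phase-derivative step described above (unwrap the angle of the complex transform, then finite-difference it) can be sketched in pure NumPy. This is only a toy sketch with made-up small shapes and a made-up Gaussian band-pass filter, not the actual ssqueezepy `psihfn`:

```python
import numpy as np

# Toy sketch of the CWT phase-derivative step: build a small complex
# "wx" (n_scales x n_samples) by FFT-domain filtering, then take the
# time-derivative of its unwrapped phase. Shapes and the filter are
# illustrative only.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)

wx = np.empty((4, 64), dtype=np.complex128)
freqs = np.fft.fftfreq(64)
for i in range(4):
    psih = np.exp(-((freqs - 0.1 * (i + 1)) ** 2) / 1e-3)  # toy band-pass filter
    wx[i] = np.fft.ifft(psih * np.fft.fft(x))

# Phase derivative: unwrap the angle along time, then finite-difference it.
u = np.unwrap(np.angle(wx), axis=-1)
w = np.diff(u, axis=-1)

assert wx.dtype == np.complex128
assert w.shape == (4, 63)
```

Every operation here (fft, ifft, angle-as-atan2, diff) is elementwise or linear, which is the kind of pipeline autodiff frameworks can in principle differentiate end-to-end.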
tensorflowtensorflow | unclear documentation for implementing custom tensorflow keras optimizers | Bug | URL(s) with the issue: tf.keras.optimizers.Optimizer, specifically the section "Write a customized optimizer". Description of issue (what needs changing): the instructions for creating a custom optimizer seem to be inconsistent with how tf.keras.optimizers.Optimizer subclasses are defined in TensorFlow and other projects. Clear description: this originated as a question on Stack Overflow, which is reproduced below. Suppose I want to write a custom optimizer class that conforms to the tf.keras API (using TensorFlow version >= 2.0). I am confused about the documented way to do this versus what's done in implementations. The documentation for tf.keras.optimizers.Optimizer states (L217-L224) ... However, the current tf.keras.optimizers.Optimizer implementation does not define a resource_apply_dense method, but it does define a private-looking _resource_apply_dense method stub (L916-L928). Similarly, there are no resource_apply_sparse or create_slots methods, but there is a _resource_apply_sparse method stub (L958-L977) and a _create_slots method call (L434). In official tf.keras.optimizers.Optimizer subclasses (using tf.keras.optimizers.Adam as an example), there are _resource_apply_dense (L192-L227), _resource_apply_sparse (L229-L267), and _create_slots (L150-L159) methods, and there are no such methods without the leading underscore. There are similar leading-underscore methods in slightly-less-official tf.keras.optimizers.Optimizer subclasses, e.g. tfa.optimizers.MovingAverage from TensorFlow Addons: _resource_apply_dense (L73-L76), _resource_apply_sparse (L78-L82), _create_slots (L92-L95). Another confounding point for me is that some of the TensorFlow Addons optimizers also override the apply_gradients method (e.g. tfa.optimizers.MovingAverage, L55-L57), whereas the tf.keras.optimizers.Optimizer ones do not. Moreover, I noticed that the apply_gradients method of tf.keras.optimizers.Optimizer calls _create_slots, but the base tf.keras.optimizers.Optimizer class does not have a _create_slots method, so it seems that a _create_slots method must be defined in an optimizer subclass if that subclass does not override apply_gradients. Questions: what is the correct way to subclass tf.keras.optimizers.Optimizer? Specifically: 1. Does the tf.keras.optimizers.Optimizer documentation listed at the top simply mean to override the leading-underscore versions of the methods it mentions (e.g. _resource_apply_dense instead of resource_apply_dense)? If so, are there any API guarantees about these private-looking methods not changing their behavior in future versions of TensorFlow? What are the signatures of these methods? 2. When would one override apply_gradients in addition to the _resource_apply_dense / _resource_apply_sparse methods?
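The underscore-hook structure the question describes is a template-method pattern: a public driver calls private hooks that subclasses fill in. Here is a framework-free sketch of that pattern with toy class names (`OptimizerBase`, `ToySGD` are made up for illustration; this is not the real tf.keras API):

```python
# Framework-free sketch of the template-method pattern: the public
# apply_gradients() drives a private underscore hook that subclasses
# override. Names mirror the question but are illustrative only.

class OptimizerBase:
    def apply_gradients(self, grads_and_vars):
        results = []
        for grad, var in grads_and_vars:
            # Public entry point delegates to the underscore hook.
            results.append(self._resource_apply_dense(grad, var))
        return results

    def _resource_apply_dense(self, grad, var):
        raise NotImplementedError("subclasses must override")


class ToySGD(OptimizerBase):
    def __init__(self, lr=0.1):
        self.lr = lr

    def _resource_apply_dense(self, grad, var):
        # "var" is a plain float here; real optimizers update tf.Variables.
        return var - self.lr * grad


opt = ToySGD(lr=0.5)
updated = opt.apply_gradients([(2.0, 1.0)])  # grad=2.0, var=1.0
assert updated == [0.0]
```

In this pattern, overriding the public driver (`apply_gradients`) is only needed when the per-variable hooks are not expressive enough, which matches what the Addons optimizers appear to do.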
tensorflowtensorflow | categorical encoding in docs csv.ipynb returns inconsistent results | Bug | URL(s) with the issue: ..., which is available on GitHub at ... Description of issue: this is about the output of the last cell in the section "Categorical data", namely the output of `print(categorical_layer(example_batch).numpy()[0])`. If we remove the index `[0]`, we're supposed to get the one-hot encoding of the categorical features, i.e. a 5-by-20 matrix, where 5 is the batch size and 20 is the total dimensionality of all categorical features. If, for the sake of reproducibility, we also set `shuffle=False` in the call to `tf.data.experimental.make_csv_dataset` at the very top of the notebook, the matrix we then get is:

```
[[0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0]
 [0 1 1 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1]
 [1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
 [0 1 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1]
 [1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0]]
```

This does not match up with the input categorical features for that batch, namely:

```
sex:         [b'male' b'female' b'female' b'female' b'male']
class:       [b'Third' b'First' b'Third' b'First' b'Third']
deck:        [b'unknown' b'C' b'unknown' b'C' b'unknown']
embark_town: [b'Southampton' b'Cherbourg' b'Southampton' b'Southampton' b'Queenstown']
alone:       [b'n' b'n' b'y' b'n' b'y']
```

For example, `[b'male' b'female' b'female' b'female' b'male']` does not match up with the first two columns of the matrix.
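For reference, the expected behavior described above (one-hot per feature, concatenated per row, so each feature's column block has exactly one 1 that matches the input value) can be sketched without TensorFlow. The vocabularies below are a made-up subset of the Titanic columns, only to show the correspondence:

```python
import numpy as np

# Sketch of the expected encoding: each categorical feature becomes a
# one-hot over its vocabulary, and the one-hots are concatenated per row.
# Vocabularies here are a small, made-up subset of the Titanic columns.
vocabs = {
    'sex': ['male', 'female'],
    'class': ['First', 'Second', 'Third'],
    'alone': ['y', 'n'],
}
batch = [
    {'sex': 'male', 'class': 'Third', 'alone': 'n'},
    {'sex': 'female', 'class': 'First', 'alone': 'n'},
]

def encode(row):
    parts = []
    for name, vocab in vocabs.items():
        onehot = np.zeros(len(vocab))
        onehot[vocab.index(row[name])] = 1.0
        parts.append(onehot)
    return np.concatenate(parts)

matrix = np.stack([encode(r) for r in batch])
# 2 rows, 2 + 3 + 2 = 7 columns; each feature block contributes one 1.
assert matrix.shape == (2, 7)
assert matrix[0].tolist() == [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0]
```

Under this construction the first two columns of each row always mirror the `sex` column, which is exactly the consistency check that fails in the notebook output.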
tensorflowtensorflow | saving to file a model within a TPUStrategy | Bug | On TensorFlow 2.0 and 2.1, trying to save to file a TPU-trained model, or a model that was merely created within the scope of a tf.distribute.experimental.TPUStrategy, yields the error below. Despite throwing an UnimplementedError, the code does create a folder on disk with some content. The reproducible code can be found in a Colab notebook here. The exception and its stack:

```python
UnimplementedError                        Traceback (most recent call last)
<ipython-input> in <module>()
     48     print('1 4', err)
     49
---> 50 model.save('model_new_1')
     51
     52 model.compile(

14 frames

/tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/network.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options)
   1006
-> 1007     save.save_model(self, filepath, overwrite, include_optimizer, save_format,
   1008                     signatures, options)
   1009
   1010   def save_weights(self, filepath, overwrite=True, save_format=None):

/tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options)
    113   else:
--> 114     saved_model_save.save(model, filepath, overwrite, include_optimizer,
    115                           signatures, options)

/tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options)
     76   # we use the default replica context here
     77   with distribution_strategy_context._get_default_replica_context():  # pylint: disable=protected-access
---> 78     save_lib.save(model, filepath, signatures, options)
     79
     80   if not include_optimizer:

/tensorflow-2.1.0/python3.6/tensorflow_core/python/saved_model/save.py in save(obj, export_dir, signatures, options)
    914   # SavedModel proto itself.
    915   utils_impl.get_or_create_variables_dir(export_dir)
--> 916   object_saver.save(utils_impl.get_variables_path(export_dir))
    917   builder_impl.copy_assets_to_destination_dir(asset_info.asset_filename_map,
    918                                               export_dir)

/tensorflow-2.1.0/python3.6/tensorflow_core/python/training/tracking/util.py in save(self, file_prefix, checkpoint_number, session)
   1166     file_io.recursive_create_dir(os.path.dirname(file_prefix))
   1167     save_path, new_feed_additions = self._save_cached_when_graph_building(
-> 1168         file_prefix=file_prefix_tensor, object_graph_tensor=object_graph_tensor)
   1169     if new_feed_additions:
   1170       feed_dict.update(new_feed_additions)

/tensorflow-2.1.0/python3.6/tensorflow_core/python/training/tracking/util.py in _save_cached_when_graph_building(self, file_prefix, object_graph_tensor)
   1114         or context.executing_eagerly() or ops.inside_function()):
   1115       saver = functional_saver.MultiDeviceSaver(named_saveable_objects)
-> 1116       save_op = saver.save(file_prefix)
   1117       with ops.device("/cpu:0"):
   1118         with ops.control_dependencies([save_op]):

/tensorflow-2.1.0/python3.6/tensorflow_core/python/training/saving/functional_saver.py in save(self, file_prefix)
    228       # SingleDeviceSaver will use the CPU device when necessary, but initial
    229       # read operations should be placed on the SaveableObject's device.
--> 230       sharded_saves.append(saver.save(shard_prefix))
    231
    232     with ops.control_dependencies(sharded_saves):

/tensorflow-2.1.0/python3.6/tensorflow_core/python/training/saving/functional_saver.py in save(self, file_prefix)
     67       for spec in saveable.specs:
     68         tensor_names.append(spec.name)
---> 69         tensors.append(spec.tensor)
     70         tensor_slices.append(spec.slice_spec)
     71     with ops.device("cpu:0"):

/tensorflow-2.1.0/python3.6/tensorflow_core/python/training/saving/saveable_object.py in tensor(self)
     50   @property
     51   def tensor(self):
---> 52     return self._tensor() if callable(self._tensor) else self._tensor

/tensorflow-2.1.0/python3.6/tensorflow_core/python/training/saving/saveable_object_util.py in f()
     89     def f():
     90       with ops.device(v.device):
---> 91         x = v.read_value()
     92         # To allow variables placed on non-CPU devices to be checkpointed,
     93         # we copy them to CPU on the same machine first.

/tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/resource_variable_ops.py in read_value(self)
    633
    634     with ops.name_scope("Read"):
--> 635       value = self._read_variable_op()
    636     # Return an identity so it can get placed on whatever device the context
    637     # specifies instead of the device where the variable is.

/tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/resource_variable_ops.py in _read_variable_op(self)
    611     variable_accessed(self)
--> 612     result = gen_resource_variable_ops.read_variable_op(self._handle,
    613                                                         self._dtype)
    614     _maybe_set_handle_data(self._dtype, self._handle, result)

/tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/gen_resource_variable_ops.py in read_variable_op(resource, dtype, name)
    477       pass  # Add nodes to the TensorFlow graph.
    478     except _core._NotOkStatusException as e:
--> 479       _ops.raise_from_not_ok_status(e, name)
    480   # Add nodes to the TensorFlow graph.
    481   dtype = _execute.make_type(dtype, "dtype")

/tensorflow-2.1.0/python3.6/tensorflow_core/python/framework/ops.py in raise_from_not_ok_status(e, name)
   6604   message = e.message + (" name: " + name if name is not None else "")
   6605   # pylint: disable=protected-access
-> 6606   six.raise_from(core._status_to_exception(e.code, message), None)
   6607   # pylint: enable=protected-access

/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

UnimplementedError: File system scheme '[local]' not implemented (file: 'model_new_1/variables/variables_temp/17ffcf98334348fd8ef1e339869f0bfc')
	Encountered when executing an operation using EagerExecutor. This error cancels all future operations and poisons their output tensors. [Op:ReadVariableOp]
```

System information: the error is reproduced in Colab, so I'm skipping the tf_env_collect.sh output.
tensorflowtensorflow | eager | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example: ... Description of issue (what needs changing): Clear description: for example, why should someone use this method? How is it useful? Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined? For example, ... Usage example: is there a usage example? See the API guide on how to write testable usage examples. Request visuals, if applicable: are there currently visuals? If not, will it clarify the content? Submit a pull request: are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.
tensorflowtensorflow | discrepancy between keras.layers.Reshape and tf.keras.layers.Reshape | Bug | TF version: 2.1.0; Keras version: 2.2.4-tf. In Keras, according to the documentation we expect:

```python
model.add(Reshape((-1, 2, 2)))
# now: model.output_shape == (None, 3, 2, 2)
```

but in the tf.keras documentation:

```python
model.add(Reshape((-1, 2, 2)))
# now: model.output_shape == (None, None, 2, 2)
```

The second dimension is now None instead of the computed value. This makes tf.keras incompatible with Keras and makes it hard to write code that is parameterized by the output shape of an opaque model. Is there a way to get the true output shape?
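The "computed value" the Keras behavior refers to is the standard -1 inference rule: the unknown dimension is derived from the total element count. A minimal NumPy sketch of that arithmetic (12 features, reshaped into blocks of 2x2):

```python
import numpy as np

# reshape infers the -1 dimension from the total element count:
# 12 features / (2 * 2) = 3, which is the value the Keras docs report.
x = np.zeros((5, 12))          # batch of 5, 12 features each
y = x.reshape((5, -1, 2, 2))   # -1 is inferred as 12 // (2 * 2) = 3
assert y.shape == (5, 3, 2, 2)
```

When the feature dimension is statically known, the same inference is possible at graph-construction time, which is why reporting None there loses information.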
tensorflowtensorflow | tf lite: update 3rd-party repos script (r2.1 branch) | Bug | Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:build_template. System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Raspbian. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A. TensorFlow installed from (source or binary): source. TensorFlow version: r2.1. Python version: N/A. Installed using virtualenv/pip/conda: N/A. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A. Describe the problem: the update-3rd-party-repositories script craps out because it can't find the Eigen URL. Provide the exact sequence of commands/steps that you executed before running into the problem: tensorflow/lite/tools/make/download_dependencies.sh. Any other info/logs: it is fixed in HEAD but wasn't backported to the release branch. Patch: tf_lite_url_patch.txt
tensorflowtensorflow | fail to get device attribute 13 for device 0 | Bug | When I was trying to run a YOLO detection example, I got this error:

```
2020-02-02 21:39:00.821721: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
WARNING:tensorflow: From C:\Users\Dominux\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1635: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers.
2020-02-02 21:39:03.863436: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-02-02 21:39:04.431694: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:02:00.0 name: GeForce MX230 computeCapability: 6.1 coreClock: 1.531GHz coreCount: 2 deviceMemorySize: 2.00GiB deviceMemoryBandwidth: 44.76GiB/s
2020-02-02 21:39:04.437212: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-02-02 21:39:04.444498: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-02-02 21:39:04.450110: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-02-02 21:39:04.453997: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-02-02 21:39:04.459404: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-02-02 21:39:04.464501: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-02-02 21:39:04.477818: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-02-02 21:39:04.480586: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-02-02 21:39:09.674559: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-02-02 21:39:09.678508: F tensorflow/stream_executor/lib/statusor.cc:34] Attempting to fetch value instead of handling error Internal: failed to get device attribute 13 for device 0: CUDA_ERROR_UNKNOWN: unknown error
```

My stack: Windows 10, TensorFlow 2.1, Intel Core i5, NVIDIA GeForce MX230 2 GB, 8 GB DDR4. I checked similar issues, but they have no solutions, just desperate people, and the problem discussed there involves only TF 1.14; I didn't find others. As you can see from my stack above, I'm already using TF 2.1. Please, can you help me? Maybe I have a problem with the drivers or the CUDA software? Any ideas?
tensorflowtensorflow | tf 2.1 does not support bazel 0.26 | Bug | URL(s) with the issue: ... Description of issue (what needs changing): Clear description: it says that TensorFlow 2.1.0 is built with Bazel 0.26.1, but when I use Bazel 0.26.1 to build, I get the following error: "Please upgrade your bazel installation to version 0.27.1 or higher to build TensorFlow!" So what is the correct Bazel version?
tensorflowtensorflow | file_io.get_matching_files indefinitely hangs | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g., Linux Ubuntu 16.04): GCP Cloud Shell. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A. TensorFlow installed from (source or binary): comes pre-installed in GCP Cloud Shell. TensorFlow version (use command below): TF 2.1. Python version: 3.7.3. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A. Describe the current behavior: file_io.get_matching_files indefinitely hangs. Describe the expected behavior: file_io.get_matching_files should not hang. Code to reproduce the issue (note the subtle difference in the first command):

```shell
gsutil cp README-cloudshell.txt gs://test/bug.txt
gsutil cp README-cloudshell.txt gs://test/bug.txt
```

This creates a weird folder under the test folder. Now open Python:

```python
from tensorflow.python.lib.io import file_io
file_io.get_matching_files('gs://test/bug.txt')
```

This will hang. Delete the folder, and this would work fine. One of our training jobs hung because TF somehow created a folder in the model output directory.
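The tricky situation above is a pattern that matches both an object and a same-named "directory". That double match can be reproduced locally with plain glob; the paths below are invented for illustration, the real report involves a GCS bucket:

```python
import glob
import os
import tempfile

# Local sketch: a file "bug.txt" plus a directory whose name shares the
# same prefix. The pattern "bug.txt*" matches both entries, which is the
# kind of ambiguity the GCS matcher has to resolve.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, 'bug.txt'), 'w').close()
    os.mkdir(os.path.join(d, 'bug.txt.dir'))
    matches = sorted(glob.glob(os.path.join(d, 'bug.txt*')))
    assert len(matches) == 2  # the file and the directory both match
```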
tensorflowtensorflow | AttributeError: 'Callback' object has no attribute 'validation_data' | Bug | URL(s) with the issue: please provide a link to the documentation entry, for example: ... Description of issue (what needs changing): currently the page says "validation_data: Deprecated. Do not use." But if I attempt to access the attribute validation_data inside the callback, I get the error: AttributeError: 'MyCustomCallbackClass' object has no attribute 'validation_data'. You should change the documentation to remove the attribute validation_data, which apparently was removed. See also this issue: ...
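Since the attribute was removed rather than merely deprecated, the usual workaround is to hand the validation set to the callback yourself. A Keras-free sketch with a hypothetical callback class (names invented for illustration):

```python
# Sketch of the workaround: instead of relying on the removed
# Callback.validation_data attribute, pass the validation set to the
# callback's constructor and store it under your own attribute name.
class MyValidationCallback:
    def __init__(self, validation_data):
        self.my_validation_data = validation_data  # our own attribute

    def on_epoch_end(self, epoch):
        x_val, y_val = self.my_validation_data
        return len(x_val), len(y_val)  # e.g. run custom metrics here


cb = MyValidationCallback(([1, 2, 3], [0, 1, 0]))
assert cb.on_epoch_end(0) == (3, 3)
assert not hasattr(cb, 'validation_data')  # the old attribute is gone
```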
tensorflowtensorflow | different behavior of tf.keras and keras for stateful=True | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): all platforms; tested on Ubuntu 18.04 and macOS. TensorFlow installed from (source or binary): pip. TensorFlow version (use command below): 2.1. Python version: 3.8. Describe the current behavior: the original Keras API specifies that when an LSTM is set stateful=True, its batch size must be known beforehand by specifying batch_shape. The same is true for tf.keras, but it adds another hidden requirement that is not there in the original Keras: tf.keras requires that the full input shape (including batch size) be known; if one of the dimensions is None, it emits the "If a RNN is stateful, it needs to know its batch size" error. Describe the expected behavior: as with the Keras API, it should be allowed to have None dimensions besides the batch size. Code to reproduce the issue:

Keras:

```python
from keras.models import Model
from keras.layers import Input, LSTM, Reshape

def model():
    input_layer = Input(batch_shape=(1, None))
    reshape_layer = Reshape((-1, 100))(input_layer)
    lstm_layer = LSTM(units=100, stateful=True)(reshape_layer)
    return Model(inputs=input_layer, outputs=lstm_layer)

model = model()
```

Code runs perfectly fine.

tf.keras:

```python
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Reshape

def model():
    input_layer = Input(batch_shape=(1, None))
    reshape_layer = Reshape((-1, 100))(input_layer)
    lstm_layer = LSTM(units=100, stateful=True)(reshape_layer)
    return Model(inputs=input_layer, outputs=lstm_layer)

model = model()
```

ValueError: If a RNN is stateful, it needs to know its batch size. Specify the batch size of your input tensors: If using a Sequential model, specify the batch size by passing a `batch_input_shape` argument to your first layer. If using the functional API, specify the batch size by passing a `batch_shape` argument to your Input layer.

There are some very legitimate use cases for allowing non-batch dimensions to be unknown. This change in functionality prevents me from migrating a variable multi-stream CNN model from Keras to tf.keras.
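The reason statefulness only fundamentally requires the batch size (and not the other dimensions) can be seen in a framework-free sketch: a stateful layer keeps one hidden-state row per batch index across calls, so only that buffer must be sized up front. Class and sizes below are invented for illustration:

```python
import numpy as np

# Sketch of why stateful=True pins the batch size: the layer keeps one
# hidden-state row per batch index across calls, so the state buffer is
# allocated at build time. Other dims (e.g. timesteps) can still vary.
batch_size, units = 1, 4

class ToyStatefulRNN:
    def __init__(self):
        self.state = np.zeros((batch_size, units))  # fixed at build time

    def step(self, x_t):
        # x_t: (batch_size, units)
        self.state = np.tanh(x_t + self.state)
        return self.state

rnn = ToyStatefulRNN()
out1 = rnn.step(np.ones((batch_size, units)))
out2 = rnn.step(np.ones((batch_size, units)))  # state carries over

assert out1.shape == (1, 4)
assert not np.allclose(out1, out2)  # second call sees the carried state
```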
tensorflowtensorflow | official release of tensorflow 1.15.2 does not include gpu support | Bug | System information: OS platform and distribution: Linux. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A. TensorFlow installed from (source or binary): binary files. TensorFlow version: 1.15.2. Python version: Python 2 / Python 3.6. Installed using virtualenv/pip/conda: pip. Describe the problem: as announced (topic: developers/irct5m4quz0), TensorFlow 1.15 contains GPU support by default. TensorFlow 1.15.2 no longer has GPU support.
tensorflowtensorflow | empty step_stats from context.export_run_metadata in tf 2.1 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Google Colab. As sessions are no longer available in the TF 2 API, we were using context.enable_run_metadata and context.export_run_metadata to acquire run_metadata.step_stats, which we then feed to timeline.Timeline to export trace information about session execution and visualise it using chrome://tracing. This approach worked well with TF 2.0, but once we updated to 2.1, run_metadata.step_stats appears to be empty. Here is the code which acquires and prints step_stats (TF 2.0, TF 2.1). I know that according to the docs we should use summary trace_on or profiler client start_tracing to acquire trace information; however, there seems to be an issue with GPU tracing.
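For context, the chrome://tracing viewer mentioned above consumes plain JSON in the trace-event format (fields like `ph`, `ts`, `dur` in microseconds), which is what timeline.Timeline produces from step_stats. A minimal sketch of such a file, with made-up event content:

```python
import json

# Minimal sketch of the Chrome trace-event JSON that timeline.Timeline
# emits from step_stats; field names follow the trace-event format
# ("ph" = phase, "ts"/"dur" in microseconds). Event content is made up.
events = [
    {"name": "MatMul", "ph": "X", "ts": 0, "dur": 120,
     "pid": 0, "tid": 0, "cat": "op"},
    {"name": "Relu", "ph": "X", "ts": 120, "dur": 30,
     "pid": 0, "tid": 0, "cat": "op"},
]
trace = json.dumps({"traceEvents": events})

parsed = json.loads(trace)
assert len(parsed["traceEvents"]) == 2
```

Writing `trace` to a `.json` file and loading it in chrome://tracing renders the two ops as bars on a timeline; an empty `step_stats` yields an empty `traceEvents` list and a blank viewer, which matches the symptom reported here.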
tensorflowtensorflow | Non-OK-status: GpuLaunchKernel(SwapDimension1And2InTensor3UsingTiles...): Internal: invalid configuration argument when using the tf.keras MaxPooling3D layer | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, I have written my own model training script. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 19.10. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: none. TensorFlow installed from (source or binary): from binary (pip). TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de 2.1.0. Python version: 3.7.5. pip version: 20.0.2. Bazel version (if compiling from source): none. GCC/compiler version (if compiling from source): none. CUDA/cuDNN version: CUDA V10.1.243 (10.1), cuDNN 7.6.5.32-1+cuda10.1. GPU model and memory: 2x NVIDIA GeForce GTX 1070 Ti, 8 GB VRAM each. Describe the current behavior: when using TensorFlow's tf.distribute.MirroredStrategy to scale a tf.keras-based model with the tf.keras.layers.MaxPooling3D layer, TensorFlow outputs the error

```
2020-01-29 13:14:07.089464: F tensorflow/core/kernels/conv_2d_gpu.h:659] Non-OK-status: GpuLaunchKernel( SwapDimension1And2InTensor3UsingTiles, total_tiles_count, NumThreads, 0, d.stream(), input, input_dims, output) status: Internal: invalid configuration argument
```

followed by Aborted (core dumped) when trying to train the model. I was able to verify that the MaxPooling3D layer appears to be the issue, because when taking it out of the test script (see "Code to reproduce the issue"), the issue does not persist. The same issue also occurs when using the now-deprecated tf.keras.utils.multi_gpu_model. I am as yet unsure whether this is a problem in TensorFlow itself or in any of the used frameworks. Please note that this issue is most likely not related to #30665 or #33696, since they describe an issue related to the internal CUDA kernel FillPhiloxRandomKernelLaunch rather than the SwapDimension1And2InTensor3UsingTiles described here. I am happy to provide further, more specific information on the used training script or the issue itself if requested. Describe the expected behavior: the model should train normally and not output any errors, as it does when executed without the distribute scope or when reducing the MirroredStrategy to only use one GPU. Code to reproduce the issue:

```python
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras import *
import tensorflow as tf
import numpy as np

strat = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])
with strat.scope():
    model = Sequential()
    model.add(MaxPooling3D((1, 2, 2), input_shape=(None, 640, 480, 1)))
    model.add(ConvLSTM2D(16, kernel_size=(3, 3)))
    model.add(Flatten())
    model.add(Dense(1))
    model.summary()
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['acc'])

x = np.random.rand(1, 10, 640, 480, 1)
y = np.random.rand(1, 1)
model.fit(x=x, y=y, epochs=10)
```

Other info / logs: full console output:

(venv) redacted@redacted:~/redacted$ python test.py
2020-01-29 13:32:31.798280: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-01-29 13:32:31.799195: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
2020-01-29 13:32:32.130276: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-01-29 13:32:32.168392: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-29 13:32:32.168824: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 Ti computeCapability: 6.1 coreClock: 1.683GHz coreCount: 19 deviceMemorySize: 7.91GiB deviceMemoryBandwidth: 238.66GiB/s
2020-01-29 13:32:32.168866: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful
numa node zero 2020 01 29 13 32 32 169410 I tensorflow core common runtime gpu gpu device cc 1555 find device 1 with property pcibusid 0000 02 00 0 name geforce gtx 1070 ti computecapability 6 1 coreclock 1 683ghz corecount 19 devicememorysize 7 93gib devicememorybandwidth 238 66gib s 2020 01 29 13 32 32 169467 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2020 01 29 13 32 32 169549 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2020 01 29 13 32 32 170673 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 2020 01 29 13 32 32 170902 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 2020 01 29 13 32 32 171948 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 2020 01 29 13 32 32 172544 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 2020 01 29 13 32 32 172565 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 01 29 13 32 32 172641 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 172988 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 173522 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 173857 I tensorflow stream executor cuda cuda gpu executor cc 981 successful 
numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 174368 I tensorflow core common runtime gpu gpu device cc 1697 add visible gpu device 0 1 2020 01 29 13 32 32 174654 I tensorflow core platform cpu feature guard cc 142 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2020 01 29 13 32 32 198716 I tensorflow core platform profile util cpu util cc 94 cpu frequency 3699850000 hz 2020 01 29 13 32 32 199091 I tensorflow compiler xla service service cc 168 xla service 0x47afc50 initialize for platform host this do not guarantee that xla will be use device 2020 01 29 13 32 32 199105 I tensorflow compiler xla service service cc 176 streamexecutor device 0 host default version 2020 01 29 13 32 32 293103 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 303707 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 304142 I tensorflow compiler xla service service cc 168 xla service 0x4845b80 initialize for platform cuda this do not guarantee that xla will be use device 2020 01 29 13 32 32 304154 I tensorflow compiler xla service service cc 176 streamexecutor device 0 geforce gtx 1070 ti compute capability 6 1 2020 01 29 13 32 32 304159 I tensorflow compiler xla service service cc 176 streamexecutor device 1 geforce gtx 1070 ti compute capability 6 1 2020 01 29 13 32 32 304924 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 305196 I tensorflow core common runtime gpu gpu device cc 1555 find device 0 with 
property pcibusid 0000 01 00 0 name geforce gtx 1070 ti computecapability 6 1 coreclock 1 683ghz corecount 19 devicememorysize 7 91gib devicememorybandwidth 238 66gib s 2020 01 29 13 32 32 305233 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 305503 I tensorflow core common runtime gpu gpu device cc 1555 find device 1 with property pcibusid 0000 02 00 0 name geforce gtx 1070 ti computecapability 6 1 coreclock 1 683ghz corecount 19 devicememorysize 7 93gib devicememorybandwidth 238 66gib s 2020 01 29 13 32 32 305520 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2020 01 29 13 32 32 305527 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2020 01 29 13 32 32 305535 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 2020 01 29 13 32 32 305543 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 2020 01 29 13 32 32 305551 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 2020 01 29 13 32 32 305558 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 2020 01 29 13 32 32 305565 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 01 29 13 32 32 305593 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 305872 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there 
must be at least one numa node so return numa node zero 2020 01 29 13 32 32 306159 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 306438 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 306747 I tensorflow core common runtime gpu gpu device cc 1697 add visible gpu device 0 1 2020 01 29 13 32 32 306766 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2020 01 29 13 32 32 546439 I tensorflow core common runtime gpu gpu device cc 1096 device interconnect streamexecutor with strength 1 edge matrix 2020 01 29 13 32 32 546463 I tensorflow core common runtime gpu gpu device cc 1102 0 1 2020 01 29 13 32 32 546468 I tensorflow core common runtime gpu gpu device cc 1115 0 n y 2020 01 29 13 32 32 546471 I tensorflow core common runtime gpu gpu device cc 1115 1 y n 2020 01 29 13 32 32 546636 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 546935 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 547217 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 547478 I tensorflow core common runtime gpu gpu device cc 1241 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 6807 mb memory physical gpu device 0 name geforce gtx 1070 ti pci 
bus i d 0000 01 00 0 compute capability 6 1 2020 01 29 13 32 32 547823 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 01 29 13 32 32 548093 I tensorflow core common runtime gpu gpu device cc 1241 create tensorflow device job localhost replica 0 task 0 device gpu 1 with 7562 mb memory physical gpu device 1 name geforce gtx 1070 ti pci bus i d 0000 02 00 0 compute capability 6 1 model sequential layer type output shape param max pooling3d maxpooling3d none none 320 240 1 0 conv lst m2d convlstm2d none 318 238 16 9856 flatten flatten none 1210944 0 dense dense none 1 1210945 total param 1 220 801 trainable param 1 220 801 non trainable param 0 train on 1 sample epoch 1 10 2020 01 29 13 32 35 394061 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2020 01 29 13 32 35 638536 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 01 29 13 32 35 639247 f tensorflow core kernel conv 2d gpu h 659 non ok status gpulaunchkernel swapdimension1and2intensor3usingtile total tile count numthread 0 d stream input input dim output status internal invalid configuration argument abort core dump last few line of output with the tf cpp min vlog level 2 environment variable set 2020 01 29 13 42 39 018679 I tensorflow core common runtime gpu gpu device cc 523 gpudevice computehelper schedule replica 1 cast op cast on gpu 1 stream 0 2020 01 29 13 42 39 018689 I tensorflow core framework log memory cc 34 log memory memorylogtensoroutput step i d 6348371833153678773 kernel name replica 1 cast tensor dtype dt float shape dim dim size 10 dim size 640 dim size 480 dim size 1 2020 01 29 13 42 39 018696 I tensorflow core common runtime executor cc 1912 synchronous kernel do 167 step 6348371833153678773 node replica 1 cast cast dstt dt 
float srct dt double truncate false xlahasreferencevar false device job localhost replica 0 task 0 device gpu 1 cond 3 output 37 device job localhost replica 0 task 0 device gpu 1 2020 01 29 13 42 39 018703 I tensorflow core common runtime executor cc 1756 process node 168 step 6348371833153678773 node replica 1 metric acc remove squeezable dimension squeeze squeeze t dt float xlahasreferencevar false squeeze dim 1 device job localhost replica 0 task 0 device gpu 1 replica 1 metric acc cast device job localhost replica 0 task 0 device gpu 1 2020 01 29 13 42 39 018709 I tensorflow core common runtime gpu gpu device cc 497 gpudevice computehelper replica 1 metric acc remove squeezable dimension squeeze op squeeze on gpu 1 stream 0 2020 01 29 13 42 39 018720 I tensorflow core framework log memory cc 34 log memory memorylogtensorallocation step i d 6348371833153678773 kernel name replica 1 metric acc remove squeezable dimension squeeze tensor dtype dt float shape dim 2020 01 29 13 42 39 018726 I tensorflow core common runtime gpu gpu device cc 523 gpudevice computehelper schedule replica 1 metric acc remove squeezable dimension squeeze op squeeze on gpu 1 stream 0 2020 01 29 13 42 39 018734 I tensorflow core framework log memory cc 34 log memory memorylogtensoroutput step i d 6348371833153678773 kernel name replica 1 metric acc remove squeezable dimension squeeze tensor dtype dt float shape dim dim size 10 2020 01 29 13 42 39 018740 I tensorflow core common runtime executor cc 1912 synchronous kernel do 168 step 6348371833153678773 node replica 1 metric acc remove squeezable dimension squeeze squeeze t dt float xlahasreferencevar false squeeze dim 1 device job localhost replica 0 task 0 device gpu 1 replica 1 metric acc cast device job localhost replica 0 task 0 device gpu 1 2020 01 29 13 42 39 018750 I tensorflow core common runtime executor cc 1756 process node 169 step 6348371833153678773 node replica 1 stride slice stridedslice index dt int32 t dt int32 
xlahasreferencevar false begin mask 0 ellipsis mask 0 end mask 0 new axis mask 0 shrink axis mask 1 device job localhost replica 0 task 0 device gpu 1 replica 1 shape replica 1 stride slice stack replica 1 stride slice stack 1 replica 1 stride slice stack 1 device job localhost replica 0 task 0 device gpu 1 2020 01 29 13 42 39 018755 I tensorflow core common runtime gpu gpu device cc 497 gpudevice computehelper replica 1 stride slice op stridedslice on gpu 1 stream 0 2020 01 29 13 42 39 018763 I tensorflow core common runtime bfc allocator cc 227 allocateraw gpu host bfc 4 2020 01 29 13 42 39 018775 I tensorflow core framework log memory cc 34 log memory memorylogtensorallocation step i d 6348371833153678773 kernel name replica 1 stride slice tensor dtype dt int32 shape allocation description request byte 4 allocate byte 256 allocator name gpu host bfc allocation i d 193 have single reference true ptr 140615866159616 2020 01 29 13 42 39 018785 I tensorflow core common runtime gpu gpu device cc 523 gpudevice computehelper schedule replica 1 stride slice op stridedslice on gpu 1 stream 0 2020 01 29 13 42 39 018793 I tensorflow core framework log memory cc 34 log memory memorylogtensoroutput step i d 6348371833153678773 kernel name replica 1 stride slice tensor dtype dt int32 shape allocation description request byte 4 allocate byte 256 allocator name gpu host bfc allocation i d 193 have single reference true ptr 140615866159616 2020 01 29 13 42 39 018801 I tensorflow core common runtime executor cc 1912 synchronous kernel do 169 step 6348371833153678773 node replica 1 stride slice stridedslice index dt int32 t dt int32 xlahasreferencevar false begin mask 0 ellipsis mask 0 end mask 0 new axis mask 0 shrink axis mask 1 device job localhost replica 0 task 0 device gpu 1 replica 1 shape replica 1 stride slice stack replica 1 stride slice stack 1 replica 1 stride slice stack 1 device job localhost replica 0 task 0 device gpu 1 2020 01 29 13 42 39 018808 I tensorflow core 
framework log memory cc 34 log memory memorylogtensordeallocation allocation i d 16619 allocator name cpu 2020 01 29 13 42 39 018819 I tensorflow core common runtime executor cc 1756 process node 170 step 6348371833153678773 node replica 1 sequential max pooling3d maxpool3d maxpool3d t dt float xlahasreferencevar false datum format ndhwc ksize 1 1 2 2 1 padding valid stride 1 1 2 2 1 device job localhost replica 0 task 0 device gpu 1 replica 1 cast device job localhost replica 0 task 0 device gpu 1 2020 01 29 13 42 39 018824 I tensorflow core common runtime gpu gpu device cc 497 gpudevice computehelper replica 1 sequential max pooling3d maxpool3d op maxpool3d on gpu 1 stream 0 2020 01 29 13 42 39 018838 I tensorflow core framework log memory cc 34 log memory memorylogtensorallocation step i d 6348371833153678773 kernel name replica 1 sequential max pooling3d maxpool3d tensor dtype dt float shape dim dim size 10 dim size 320 dim size 240 dim size 1 2020 01 29 13 42 39 018849 I tensorflow core framework log memory cc 34 log memory memorylogtensorallocation step i d 6348371833153678773 kernel name replica 1 sequential max pooling3d maxpool3d tensor dtype dt float shape dim dim size 1 dim size 10 dim size 640 dim size 480 2020 01 29 13 42 39 018862 f tensorflow core kernel conv 2d gpu h 659 non ok status gpulaunchkernel swapdimension1and2intensor3usingtile total tile count numthread 0 d stream input input dim output status internal invalid configuration argument abort core dump |
tensorflowtensorflow | Not sufficient documentation on custom TF Lite classification models | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. **URL(s) with the issue:** Please provide a link to the documentation entry, for example: **Description of issue (what needs changing):** Clear description. For example, why should someone use this method? How is it useful? **Correct links:** Is the link to the source code correct? **Parameters defined:** Are all parameters defined and formatted correctly? **Returns defined:** Are return values defined? **Raises listed and defined:** Are the errors defined? For example, raises: **Usage example:** Is there a usage example? See the API guide on how to write testable usage examples. **Request visuals, if applicable:** Are there currently visuals? If not, will they clarify the content? **Submit a pull request?** Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, the docs API guide, and the docs style guide. |
tensorflowtensorflow | Unsupported operand type(s) for int and NoneType in training_v2 | Bug | **System information** TensorFlow installed from (source or binary): source; TensorFlow version (use command below): 2.1.0. Running a simple evaluation of a LeNet model with the MNIST dataset from TFDS with a large batch size (e.g. due to using many workers) leads to

```
tensorflow.python.framework.errors_impl.OutOfRangeError: 2 root error(s) found.
  (0) Out of range: End of sequence
     [[node IteratorGetNext_2 (defined at git/tensorflow-tests/train_distributed.py:212) ]]
     [[metrics/accuracy/div_no_nan/allreduce_1/CollectiveReduce_70]]
  (1) Out of range: End of sequence
     [[node IteratorGetNext_2 (defined at git/tensorflow-tests/train_distributed.py:212) ]]
```

and

```
File "/scratch/ws/s3248973-EasyBuild/easybuild-haswell/software/TensorFlow/2.1.0-fosscuda-2019b-Python-3.7.4/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 152, in run_one_epoch
    total_epochs * steps_per_epoch
TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'
```

The bug is pretty obvious: in L152, `steps_per_epoch` is used although it can be (and is) `None`; see also L135. |
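The failure mode in `run_one_epoch` can be illustrated in plain Python. The names below mirror the traceback, but the guard is a hypothetical fix for illustration, not the one TensorFlow ships:

```python
def run_one_epoch(total_epochs, steps_per_epoch):
    # training_v2.py multiplies these without checking for None,
    # which raises TypeError when steps_per_epoch is unknown.
    if steps_per_epoch is None:
        return None  # hypothetical guard: total step count is unknown
    return total_epochs * steps_per_epoch

print(run_one_epoch(10, 100))   # 1000
print(run_one_epoch(10, None))  # None instead of TypeError
try:
    10 * None
except TypeError as e:
    print(type(e).__name__)     # TypeError, as in the report
```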
tensorflowtensorflow | tf.debugging.assert_shapes does not work for SparseTensor | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS X 10.15; TensorFlow installed from (source or binary): binary; TensorFlow version (use command below): 2.1; Python version: 3.8. **Describe the current behavior** `tf.debugging.assert_shapes` cannot be used with sparse tensors. **Describe the expected behavior** `tf.debugging.assert_shapes` should allow you to mix and match dense and sparse tensors when checking for dimensional consistency. **Code to reproduce the issue**

```python
import tensorflow as tf

a = tf.range(3)
tf.debugging.assert_shapes([(a, (3,))])  # works
# raises ValueError: Attempted to convert a value ... with an unsupported type ... to a Tensor
tf.debugging.assert_shapes([(tf.sparse.from_dense(a), (3,))])
``` |
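Until the assertion understands sparse tensors, a duck-typed shim can route both kinds through one shape check. The sketch below is framework-free: `FakeSparse`/`FakeDense` stand in for TensorFlow tensors (anything exposing `dense_shape` or `shape`), and `check_shape` is a hypothetical helper, not TensorFlow API:

```python
class FakeSparse:
    """Stand-in for tf.SparseTensor: exposes dense_shape instead of shape."""
    def __init__(self, dense_shape):
        self.dense_shape = list(dense_shape)

class FakeDense:
    """Stand-in for a dense tensor: exposes shape."""
    def __init__(self, shape):
        self.shape = list(shape)

def check_shape(t, expected):
    # Prefer dense_shape (sparse case) and fall back to shape (dense case).
    shape = list(getattr(t, "dense_shape", getattr(t, "shape", [])))
    if shape != list(expected):
        raise ValueError(f"shape {shape} != expected {list(expected)}")
    return True

print(check_shape(FakeDense([3]), [3]))   # True
print(check_shape(FakeSparse([3]), [3]))  # True
```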
tensorflowtensorflow | tensorflow.keras metrics cannot be used straight in the Keras compile method | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu; TensorFlow installed from: pip; TensorFlow version (use command below): 2.1.0; Keras version: 2.3.1; Python version: 3.6.4; `tf.version.GIT_VERSION`, `tf.version.VERSION`: v2.1.0-rc2-17-ge5bf8de, 2.1.0. **Describe the current behavior** I found anomalous behavior when specifying tensorflow.keras metrics directly in the Keras compile API:

```python
from tensorflow.keras.metrics import Recall, Precision
model.compile(..., metrics=[Recall(), Precision()])
```

When looking at the history tracking the precision and recall plots at each epoch (using the Keras History callback), I observed very similar performance on both the training set and the validation set. The weird thing is that both recall and precision increase at each epoch, while the loss is clearly not improving anymore. I found the issue to be related to the statefulness of the TensorFlow metric objects: every time you call the metric object it appends a new batch of data that gets mixed with both training and validation data and accumulates at each epoch. **Describe the expected behavior** The expected behavior is that the metric objects should be stateless and not depend on previous calls: each time we calculate the metric (precision, recall, or anything else) the function should depend only on the specified `y_true` and `y_pred`. To work around the issue we either need Keras to be smart enough to re-instantiate the metric object at every call, or a TensorFlow wrapper that is stateless (maybe a decorator). **Code to reproduce the issue**

```python
from tensorflow.keras.metrics import Recall, Precision, AUC, TopKCategoricalAccuracy, PrecisionAtRecall

recall = Recall()
y_train = [0, 1, 0, 1, 1, 0, 0, 0]
y_train_pred = [0.1, 0.50001, 0.4, 0.7, 0.5, 0.51, 1.0, 0.0]
y_test = [1, 1, 0, 0, 0, 0, 0, 1]
y_test_pred = [0.1, 0.80, 0.8, 0.9, 0.1, 0.4, 0.99, 0.0]

print(recall(y_train, y_train_pred))
print(recall(y_test, y_test_pred))
recall = Recall()
print(recall(y_test, y_test_pred))
recall = Recall()
print(recall(y_test, y_test_pred))
print(recall(y_train, y_train_pred))
```

**Other info / logs** The code above prints:

```
tf.Tensor(0.6666667, shape=(), dtype=float32)
tf.Tensor(0.5, shape=(), dtype=float32)
tf.Tensor(0.33333334, shape=(), dtype=float32)
tf.Tensor(0.33333334, shape=(), dtype=float32)
tf.Tensor(0.5, shape=(), dtype=float32)
```

As you can see, the behavior is not stateless: each result is computed over the concatenation of all the calls applied since the object's instantiation. |
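The accumulation the reporter describes is easy to reproduce without TensorFlow. The toy `StatefulRecall` below is a sketch, not the Keras implementation: it keeps running true-positive/false-negative counters the way stateful Keras metrics do, so successive calls blend batches unless the state is explicitly reset:

```python
class StatefulRecall:
    """Toy stateful recall: accumulates counts across calls, like Keras metrics."""
    def __init__(self):
        self.tp = 0
        self.fn = 0

    def __call__(self, y_true, y_pred, threshold=0.5):
        for t, p in zip(y_true, y_pred):
            if t == 1:
                if p > threshold:
                    self.tp += 1
                else:
                    self.fn += 1
        return self.tp / (self.tp + self.fn)

    def reset_states(self):
        self.tp = self.fn = 0

m = StatefulRecall()
print(m([1, 1, 0], [0.9, 0.1, 0.2]))  # 0.5 (this batch alone)
print(m([1, 1, 1], [0.9, 0.9, 0.9]))  # 0.8, blended with the first batch
m.reset_states()
print(m([1, 1, 1], [0.9, 0.9, 0.9]))  # 1.0 after reset
```

This is also why Keras calls `reset_states` between epochs and between the train and validation passes; mixing calls outside `fit` without resetting reproduces the numbers in the report.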
tensorflowtensorflow | XLA warning: could not interpret setting from environment | Bug | **System information** OS platform and distribution: 18.04; TensorFlow version: 1.14.0; Python version: 2.7.17.

```
Couldn't interpret value /home/hyadav/deephyp/lib/python2.7/site-packages/tensorflow/compiler/xla --tf_xla_cpu_global_jit ... for flag tf_xla_cpu_global_jit
```

I tried using `export TF_XLA_FLAGS="--tf_xla_cpu_global_jit" mytensorflowpath/tensorflow/compiler/xla` and `TF_XLA_FLAGS=--tf_xla_cpu_global_jit` from issue #30308, and got the above. |
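For reference, `TF_XLA_FLAGS` expects bare flag tokens only; the warning above means the flag parser received a filesystem path mixed into the value of `--tf_xla_cpu_global_jit`. A minimal way to set the variable from Python, before TensorFlow is imported (the flag string is the documented one for XLA CPU auto-clustering; the commented-out import is where TensorFlow would pick it up):

```python
import os

# TF_XLA_FLAGS must contain flag tokens only; a path appended to the value
# is what produces "Couldn't interpret value ... for flag tf_xla_cpu_global_jit".
os.environ["TF_XLA_FLAGS"] = "--tf_xla_cpu_global_jit"

print(os.environ["TF_XLA_FLAGS"])  # --tf_xla_cpu_global_jit
# import tensorflow as tf  # must happen after the variable is set
```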
tensorflowtensorflow | Hessian-vector product raises exception about fetch argument | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution (e.g., Linux Ubuntu 16.04): Arch Linux; mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a; TensorFlow installed from (source or binary): binary; TensorFlow version (use command below): v1.15.0-rc3-22-g590d6ee, 1.15.0; Python version: Python 3.7.6; Bazel version (if compiling from source): n/a; GCC/compiler version (if compiling from source): n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a. **Describe the current behavior** Trying to compute the Hessian-vector product of a graph with a linear dependence on a tensor produces a TypeError: `TypeError: Fetch argument None has invalid type <class 'NoneType'>`. **Code to reproduce the issue**

```python
import itertools
import os

import numpy.random as rnd
import tensorflow as tf
from tensorflow.python.ops.gradients_impl import _hessian_vector_product

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'


def main():
    n = 10
    x = tf.Variable(tf.zeros(n), dtype=tf.float64, name='x')
    y = tf.Variable(tf.zeros(n), dtype=tf.float64, name='y')
    z = tf.Variable(tf.zeros(n), dtype=tf.float64, name='z')
    function = tf.reduce_sum(x ** 2 + y + z ** 3)
    arguments = [x, y, z]
    zeros = [tf.zeros_like(argument) for argument in arguments]
    hessian = _hessian_vector_product(function, arguments, zeros)
    x, y, z = (rnd.randn(n) for _ in range(3))
    u, v, w = (rnd.randn(n) for _ in range(3))
    feed_dict = {key: value for key, value
                 in zip(itertools.chain(arguments, zeros), (x, y, z, u, v, w))}
    with tf.Session() as session:
        hessian_vector_product = session.run(hessian, feed_dict=feed_dict)
    print(hessian_vector_product)


if __name__ == '__main__':
    main()
``` |
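The mathematical operation itself can be checked without TensorFlow. Below is a framework-free sketch of a Hessian-vector product via central finite differences of the gradient, applied to f(x) = Σ xᵢ² (whose Hessian is 2I, so Hv = 2v). All names here are illustrative, not TensorFlow API:

```python
def grad_f(x):
    # Analytic gradient of f(x) = sum(x_i ** 2).
    return [2.0 * xi for xi in x]

def hvp(grad, x, v, eps=1e-5):
    # Central finite difference of the gradient along direction v:
    #   Hv ~= (grad(x + eps*v) - grad(x - eps*v)) / (2*eps)
    plus = grad([xi + eps * vi for xi, vi in zip(x, v)])
    minus = grad([xi - eps * vi for xi, vi in zip(x, v)])
    return [(p - m) / (2 * eps) for p, m in zip(plus, minus)]

x = [1.0, -2.0, 3.0]
v = [0.5, 1.0, -1.0]
print(hvp(grad_f, x, v))  # approximately [1.0, 2.0, -2.0], i.e. 2*v
```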
tensorflowtensorflow | Malformatted docs page for SGD optimizer | Bug | **URL(s) with the issue:** apply_gradients. **Description of issue (what needs changing):** Clear description: the formatting in the References section is wrong; there is a `nesterov=True` that has nothing to do with the reference. |
tensorflowtensorflow | Layer names of tf.keras.applications vs keras.applications do not match | Bug | Not sure if this is a documentation issue or a functional bug. **Description of issue** The layer names in tf.keras.applications are not consistent with the layer names in keras.applications. Example: `keras.applications.resnet50.ResNet50(weights='imagenet').summary()` prints the following layers:

```
conv1_pad (ZeroPadding2D)       (None, 230, 230, 3)    0      input_3[0][0]
conv1 (Conv2D)                  (None, 112, 112, 64)   9472   conv1_pad[0][0]
bn_conv1 (BatchNormalization)   (None, 112, 112, 64)   256    conv1[0][0]
activation_50 (Activation)      (None, 112, 112, 64)   0      bn_conv1[0][0]
pool1_pad (ZeroPadding2D)       (None, 114, 114, 64)   0      activation_50[0][0]
```

`tf.keras.applications.resnet50.ResNet50(weights='imagenet').summary()` prints the following layers:

```
conv1_pad (ZeroPadding2D)       (None, 230, 230, 3)    0      input_6[0][0]
conv1_conv (Conv2D)             (None, 112, 112, 64)   9472   conv1_pad[0][0]
conv1_bn (BatchNormalization)   (None, 112, 112, 64)   256    conv1_conv[0][0]
conv1_relu (Activation)         (None, 112, 112, 64)   0      conv1_bn[0][0]
pool1_pad (ZeroPadding2D)       (None, 114, 114, 64)   0      conv1_relu[0][0]
```

This can lead to errors if code relying on layer names is migrated from Keras to tf.keras. Even after looking for quite a bit, I have not found any documentation explaining the change of layer names. Also, I had to manually map Keras layer names to tf.keras layer names, which would have been avoidable with some nice documentation. **URL(s) with the issue:** Probably this should be mentioned in the migration docs or the applications docs (like the note on slim/contrib.layers). **Usage example** The following code snippet runs when the applications are imported from Keras, but not with tf.keras, as the layer is not found:

```python
loaded_model = applications.resnet50.ResNet50(weights='imagenet')
partial_model = Model(inputs=loaded_model.input,
                      outputs=loaded_model.get_layer('res5c_branch2c').output)
```

Some easily findable documentation should explain why the layer is not found (renamed layer names) and where to find the mapping from Keras layer names to tf.keras layer names, i.e. the mapping from `res5c_branch2c` to `conv5_block3_3_conv`, which is the layer name used in tf.keras. **Used versions in these examples:** tensorflow 2.1.0, keras 2.3.1. **Final remark:** I'm not sure renaming the layers was that useful. While the tf.keras layer names might be more readable, papers and tutorials often refer to the Keras layer names, and users now seem to have to find a mapping themselves. |
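In the absence of official documentation, a small mapper can translate the old ResNet50 names into the new pattern. The function below is a hypothetical helper inferred from the single example pair given above (`res5c_branch2c` → `conv5_block3_3_conv`); it covers only the `res{stage}{block}_branch2{conv}` family and is untested against the bn/shortcut name families:

```python
import re

def old_to_new_resnet_name(old):
    """Hypothetical mapper from keras.applications ResNet50 layer names
    (e.g. 'res5c_branch2c') to tf.keras names (e.g. 'conv5_block3_3_conv')."""
    m = re.fullmatch(r"res(\d)([a-z])_branch2([a-c])", old)
    if not m:
        raise ValueError(f"unhandled layer name: {old}")
    stage = m.group(1)
    block = ord(m.group(2)) - ord('a') + 1   # 'a' -> block1, 'c' -> block3
    conv = ord(m.group(3)) - ord('a') + 1    # branch2a -> _1_conv, 2c -> _3_conv
    return f"conv{stage}_block{block}_{conv}_conv"

print(old_to_new_resnet_name("res5c_branch2c"))  # conv5_block3_3_conv
```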
tensorflowtensorflow | GradientTape.gradient fails when tf.gather is used after LSTM/GRU in tf.function | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Stretch; mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no; TensorFlow installed from (source or binary): binary; TensorFlow version (use command below): 2.1.0 and 2.0.0; Python version: 3.6. **Describe the current behavior** Consider the following simple model running a GRU, tf.gather and a linear regression:

```python
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=[None, 1], dtype=tf.float32)
hidden = tf.keras.layers.GRU(10)(inputs)
hidden = tf.gather(hidden, [0])
outputs = tf.keras.layers.Dense(1)(hidden)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

@tf.function
def train(x, y):
    with tf.GradientTape() as tape:
        prediction = model(x, training=True)
        loss = tf.losses.mean_squared_error(y, prediction)
    gradients = tape.gradient(loss, model.trainable_variables)

train(tf.constant([[[1], [2], [3]]], dtype=tf.float32),
      tf.constant([[1]], dtype=tf.float32))
```

Both TF 2.0.0 and TF 2.1.0 fail with a similar error message; the following is TF 2.1:

```
ValueError: All inputs to ConcreteFunctions must be Tensors; on invocation of __backward_standard_gru_918, the 0-th input (IndexedSlices(indices=Tensor("Reshape_2:0", shape=(1,), dtype=int32), values=Tensor("Reshape_1:0", shape=(1, 10), dtype=float32), dense_shape=Tensor("Cast_2:0", shape=(2,), dtype=int32))) was not a Tensor.
```

**Describe the expected behavior** The code should work. **Other info / logs** The problem is caused by tf.gather generating tf.IndexedSlices as a derivative. However, the current LSTM/GRU are graph functions, and the backprop algorithm for graph functions assumes the inputs must be tf.Tensors. One possible workaround is not to use tf.function and use eager; then the code works. Another solution is to force conversion of the derivative from tf.IndexedSlices to tf.Tensor. The easiest solution I found is to use `* 1`, which adds an operation (mul) to the computation graph for which the conversion from tf.IndexedSlices to Tensor works, so the following works:

```python
inputs = tf.keras.layers.Input(shape=[None, 1], dtype=tf.float32)
hidden = tf.keras.layers.GRU(10)(inputs)
hidden = tf.gather(hidden * 1, [0])
outputs = tf.keras.layers.Dense(1)(hidden)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

@tf.function
def train(x, y):
    with tf.GradientTape() as tape:
        prediction = model(x, training=True)
        loss = tf.losses.mean_squared_error(y, prediction)
    gradients = tape.gradient(loss, model.trainable_variables)

train(tf.constant([[[1], [2], [3]]], dtype=tf.float32),
      tf.constant([[1]], dtype=tf.float32))
```

**Possible solution** I assume an automatic conversion of tf.IndexedSlices to tf.Tensor should be performed. As a proof of concept, I changed the line at L1731 to

```python
tensor_inputs.append(ops.convert_to_tensor(arg))
```

and the code then works as expected (converting the tf.IndexedSlices to tf.Tensor). However, a proper fix is definitely more complicated. |
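The conversion the reporter patched in can be pictured without TensorFlow: an IndexedSlices value is just a triple (indices, values, dense_shape), and densifying it is a scatter-add into a zero tensor. A framework-free sketch (all names illustrative):

```python
def densify(indices, values, dense_shape):
    """Scatter rows values[k] into row indices[k] of a zero matrix,
    mirroring what converting tf.IndexedSlices to a dense Tensor does."""
    rows, cols = dense_shape
    dense = [[0.0] * cols for _ in range(rows)]
    for k, row in enumerate(indices):
        for j in range(cols):
            dense[row][j] += values[k][j]  # += handles repeated indices
    return dense

# Gradient of a gather of row 0 w.r.t. a (3, 2) tensor: only row 0 is nonzero.
print(densify([0], [[1.0, 2.0]], (3, 2)))
# [[1.0, 2.0], [0.0, 0.0], [0.0, 0.0]]
```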
tensorflowtensorflow | TensorFlow Java GPU: compute capability 6.0 instead of 3.7 | Bug | Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:build_template **System information** OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu Linux 18.06; TensorFlow installed from (source or binary): binary; TensorFlow versions: 1.13.0, 1.13.1, 1.13.2, 1.14.0, 1.15.0; Python version: 3.6; installed using virtualenv/pip/conda: nope; CUDA/cuDNN version: 10.1; GPU model and memory: K80. **Describe the problem** I was loading a model in a Java server, using the Java API to do inference. The inference was working but not running on the GPU; I get the error message provided below. We have been using the Python version of the same TensorFlow releases on the same GPU (K80) without an issue; this is only a problem with the Java driver. I believe the culprit is this line: L41-L44. The comment seems to indicate that this env variable is not used. I don't really understand the setup, but it looks like the Java build is picking it up. No other build seems to require compute 6.0; everything else is set to 3.7 (K80), which would make sense since the K80 is probably the cheapest GPU available on cloud: L31, L28 (search of all TF_CUDA_COMPUTE_CAPABILITIES in the repo). **Provide the exact sequence of commands / steps that you executed before running into the problem** Used the Java driver on GPU. **Any other info / logs**

```
2020-01-25 23:17:43.458562: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1717] Ignoring visible gpu device (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7) with Cuda compute capability 3.7. The minimum required Cuda capability is 6.0.
2020-01-25 23:17:43.519712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-25 23:17:43.519748: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0
2020-01-25 23:17:43.519762: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N
```

CI logs that contain the compute restriction: |
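The rejection in gpu_device.cc boils down to a version comparison. A pure-Python sketch (illustrative only, not TensorFlow code) of why a compute-capability-3.7 device is ignored by a binary built with a 6.0 floor:

```python
def parse_cc(s):
    # Compute capabilities compare as (major, minor) integer tuples,
    # not as decimal strings ("3.10" would beat "3.9").
    major, minor = s.split(".")
    return (int(major), int(minor))

def device_usable(device_cc, min_cc):
    """Keep the device only if its capability meets the build's minimum."""
    return parse_cc(device_cc) >= parse_cc(min_cc)

print(device_usable("3.7", "6.0"))  # False -> "Ignoring visible gpu device"
print(device_usable("3.7", "3.5"))  # True  -> what a K80-capable build allows
```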
tensorflowtensorflow | tf.keras NaN loss when using multiple GPUs | Bug | **System information** Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04; mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no; TensorFlow installed from (source or binary): pip; TensorFlow version (use command below): 2.0; Python version: 3.7.3; Bazel version (if compiling from source): no; GCC/compiler version (if compiling from source): no; CUDA/cuDNN version: 10; GPU model and memory: 2x NVIDIA GTX 1080 Ti (11 GB), driver version 440.33.01. I am currently using TensorFlow 2.0 (Python) and the tf.keras library to train a CNN. However, I am encountering an issue when I try to train my model by calling `model.fit`. After I begin training, the loss is normal for 1-2 steps of the first epoch, but after that it suddenly becomes NaN. This issue only happens when using multiple GPUs; the code I'm using works perfectly fine on a single GPU. I have wrapped all of my code inside the scope of a tf.distribute.MirroredStrategy using `with strategy.scope():`. I am feeding my network with data from a tf.data.Dataset, though this error occurs regardless of the data format. I then ran some tests: 1. I tried replacing the data in my dataset with random numbers, but the loss still went to NaN. 2. I also tried feeding the NumPy arrays directly to fit, but that didn't solve the issue. 3. I tried using different optimizers (Adam, RMSprop, SGD), batch sizes (4, 8, 16, 32) and learning rates, none of which helped to solve this problem. 4. I swapped out my network for a simple multi-layer perceptron, but the error persisted. This doesn't appear to be an OOM issue, since the data is relatively small and running `watch -n0.1 nvidia-smi` reveals that memory usage never exceeds 30% on either of my GPUs. There doesn't seem to be any warning or error in the console output that might hint at the issue either. Any help is appreciated. |
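When a loss goes NaN only under one configuration, the first useful data point is which step produced the first non-finite value. Below is a framework-free sketch of such a check on per-batch losses (a hypothetical helper, similar in spirit to what Keras's TerminateOnNaN callback or tf.debugging.check_numerics would do inside a training loop):

```python
import math

def first_bad_step(losses):
    """Return the index of the first NaN/inf loss, or None if all finite."""
    for step, loss in enumerate(losses):
        if math.isnan(loss) or math.isinf(loss):
            return step
    return None

print(first_bad_step([0.9, 0.7, float("nan"), 0.6]))  # 2
print(first_bad_step([0.9, 0.7, 0.6]))                # None
```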
tensorflowtensorflow | model.summary() does not work in some cases | Bug | tf version: 2.1. As suggested by reedwm, I am filing this bug; please see the last comment in issue 35441 for details. cc reedwm
tensorflowtensorflow | unable to build hello world for esp32 | Bug | tensorflow micro. system information: host os platform and distribution: linux ubuntu 16.04. tensorflow installed from: source. tensorflow version: ca0d5142f640d42037f22367ce3530b6b7a23b44. target platform: esp32. describe the problem: building the hello_world example for esp32 fails. please provide the exact sequence of commands/steps when you ran into the problem: 1. I successfully generated the example with the command: make -f tensorflow/lite/micro/tools/make/Makefile TARGET=esp generate_hello_world_esp_project. 2. I ran the project build with the command: idf.py build. 3. I received a compilation error: [84] building c object esp-idf/protocomm/CMakeFiles/__idf_protocomm.dir/proto-c/constants.pb-c.c.obj. in file included from /home/dmytro/esp_project/hello_world_tf/components/tfmicro/tensorflow/lite/micro/kernels/svdf.cc:25: /home/dmytro/esp_project/hello_world_tf/components/tfmicro/tensorflow/lite/micro/kernels/activation_utils.h: in function 'float tflite::ops::micro::ActivationValFloat(TfLiteFusedActivation, float)': activation_utils.h:45:1: error: control reaches end of non-void function [-Werror=return-type]
tensorflowtensorflow | broken link on "share your tensorflow lite story" | Bug | url(s) with the issue / description of issue (what needs changing): this button (image) links to a 403 page (image).
tensorflowtensorflow | tf custom gradient do not behave as expect when use with tf function for 2nd order derivative | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 ubuntu 18 04 tensorflow instal from source or binary binary tensorflow version use command below 2 2 0 dev20200123 observe in other version python version 3 7 6 cuda cudnn version these be example of version though this have be replicate in other version of cuda cudnn cuda version 10 1 243 define cudnn major 7 define cudnn minor 6 define cudnn patchlevel 4 define cudnn version cudnn major 1000 cudnn minor 100 cudnn patchlevel gpu model and memory nvidia smi 440 33 01 driver version 440 33 01 cuda version 10 1 gpu name persistence m bus i d disp a volatile uncorr ecc fan temp perf pwr usage cap memory usage gpu util compute m 0 quadro p3200 on 00000000 01 00 0 off n a n a 53c p5 9w n a 5949mib 6078mib 10 default exact command to reproduce import tensorflow as tf print tf version tf function tf custom gradient def fn2 a logdet a def sec order grad a dash bug still appear if this fn outside custom grad call return a dash 5 tf one like logdet a 2 return a sec order grad tf function tf custom gradient def fwd a sign a logdet a tf linalg slogdet a logdet a tf stop gradient logdet a include to function def grad dy da fn2 a logdet a return dy da return 1 grad ndim 2 def call gradient 1st a tf zeros ndim ndim with tf gradienttape as g g watch a y fwd a z g gradient y a return y z def call gradient 2nd a tf zeros ndim ndim with tf gradienttape as g g watch a with tf gradienttape as gg gg watch a y fwd a z gg gradient y a gg g gradient z a return y z gg y z call gradient 1st print y z y z gg call gradient 2nd print y z gg describe the problem autograd attempt to call the gradient of a variable inside a custom gradient function if the gradient be not computable I e in this case when logdet have 
inf value then it crash this only appear in the second order see trace first order call second order doesn t this doesn t appear if we stop the gradient see note in function fwd this doesn t appear if we do not use the tf function wrapper source code log 2 2 0 dev20200123 tf tensor 1 0 shape dtype float32 tf tensor 0 0 0 0 shape 2 2 dtype float32 invalidargumenterror traceback most recent call last in 48 y z call gradient 1st 49 print y z 50 y z gg call gradient 2nd 51 print y z gg in call gradient 2nd 42 gg watch a 43 y fwd a 44 z gg gradient y a 45 gg g gradient z a 46 return y z gg anaconda3 envs fermi21 lib python3 7 site package tensorflow core python eager backprop py in gradient self target source output gradient unconnected gradient 1032 output gradient output gradient 1033 source raw flat source raw 1034 unconnected gradient unconnected gradient 1035 1036 if not self persistent anaconda3 envs fermi21 lib python3 7 site package tensorflow core python eager imperative grad py in imperative grad tape target source output gradient source raw unconnected gradient 75 output gradient 76 source raw 77 compat as str unconnected gradient value anaconda3 envs fermi21 lib python3 7 site package tensorflow core python eager function py in backward function wrapper args 1303 break 1304 return backward call flat pylint disable protect access 1305 process args remappe capture 1306 1307 return backward function wrapper record output anaconda3 envs fermi21 lib python3 7 site package tensorflow core python eager function py in call flat self args capture input cancellation manager 1748 flat output forward function call 1749 ctx args with tangent 1750 cancellation manager cancellation manager 1751 else 1752 with op get default graph override gradient function pylint disable protect access anaconda3 envs fermi21 lib python3 7 site package tensorflow core python eager function py in call self ctx args cancellation manager 596 input args 597 attrs attrs 598 ctx ctx 599 else 600 
output execute execute with cancellation anaconda3 envs fermi21 lib python3 7 site package tensorflow core python eager execute py in quick execute op name num output input attrs ctx name 58 ctx ensure initialize 59 tensor pywrap tfe tfe py execute ctx handle device name op name 60 input attrs num output 61 except core notokstatusexception as e 62 if name be not none invalidargumenterror 2 root error s find 0 invalid argument input be not invertible node gradient logmatrixdeterminant grad matrixinverse define at 43 gradient grad ys 0 4 1 invalid argument input be not invertible node gradient logmatrixdeterminant grad matrixinverse define at 43 0 successful operation 0 derive error ignore op forward backward fwd 655 746 function call stack backward fwd 655 backward fwd 655 thank for any insight |
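The reporter notes that the crash disappears when the auxiliary value is wrapped in tf.stop_gradient (see the note in fwd). A minimal sketch of that pattern with a toy function (not the slogdet example from the report): a custom-gradient function whose gradient reuses an auxiliary tensor without back-propagating through it, and which stays differentiable to second order.

```python
import tensorflow as tf

# Toy stand-in for the pattern in the report (hypothetical function):
# the auxiliary value `aux` is excluded from autodiff via tf.stop_gradient,
# so higher-order tapes never try to differentiate through it.
@tf.custom_gradient
def fn(x):
    y = tf.square(x)
    aux = tf.stop_gradient(y)  # auxiliary value, excluded from autodiff

    def grad(dy):
        # Uses aux without differentiating it; 0.0 * aux keeps it in the graph.
        return dy * 2.0 * x + 0.0 * aux

    return y, grad

x = tf.constant(3.0)
with tf.GradientTape() as g:
    g.watch(x)
    with tf.GradientTape() as gg:
        gg.watch(x)
        y = fn(x)
    dy = gg.gradient(y, x)   # first order: 2x = 6
d2y = g.gradient(dy, x)      # second order: 2
```

In the reported failure the equivalent of `aux` (logdet with -inf entries) was not stopped, so the second-order tape tried to invert a singular matrix inside the registered slogdet gradient.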
tensorflowtensorflow | repair broken links | Bug | fixes #36099
tensorflowtensorflow | multiworkermirroredstrategy training do not work with multiple epoch | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow minor adaption of os platform and distribution e g linux ubuntu 16 04 rhel 7 5 tensorflow instal from source or binary source tensorflow version use command below 2 1 0 python version 3 7 4 bazel version if compile from source 0 29 1 gcc compiler version if compile from source 8 3 0 cuda cudnn version 10 1 gpu model and memory k80 describe the current behavior execute training over multiple epoch and worker as per the example fail with empty training datum and warn tensorflow your input run out of datum interrupting training make sure that your dataset or generator can generate at least step per epoch epoch batch in this case 1407 batch you may need to use the repeat function when build your dataset describe the expect behavior run training over multiple epoch automatically repeat the training datum code to reproduce the issue pretty much the exact example code from the multiworkerdistributed tutorial usr bin env python import os import json import tensorflow dataset as tfds import tensorflow as tf from slurm utils import create tf config tfds disable progress bar buffer size 10000 batch size 64 def make dataset unbatched scale mnist datum from 0 255 to 0 1 def scale image label image tf cast image tf float32 image 255 return image label dataset info tfds load name mnist with info true as supervise true return dataset train map scale cache shuffle buffer size def build and compile cnn model model tf keras sequential tf keras layer conv2d 32 3 activation relu input shape 28 28 1 tf keras layer maxpooling2d tf keras layer flatten tf keras layer dense 64 activation relu tf keras layer dense 10 activation softmax model compile loss tf keras loss sparse categorical crossentropy optimizer tf keras optimizer sgd learn rate 0 001 metric accuracy return model tfconfig 
create tf config gpu per task 2 print use config format tfconfig os environ tf config json dump tfconfig strategy tf distribute experimental multiworkermirroredstrategy print number of worker nparameter device nworker format strategy num replicas in sync strategy extend parameter device strategy extend worker device here the batch size scale up by number of worker since tf datum dataset batch expect the global batch size global batch size batch size strategy num replicas in sync with strategy scope creation of dataset and model building compile need to be within strategy scope train dataset make dataset unbatched batch global batch size multi worker model build and compile cnn model multi worker model fit x train dataset epoch 3 other info log the above create tf config create the follow config from slurm environment cluster worker taurusa9 8888 task type worker index 0 trial 1 job none run with 1 epoch work without any other code change |
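The warning in the report ("you may need to use the repeat function") points at the fix: a tf.data pipeline consumed for multiple epochs must either be rebuilt each epoch or made to loop. A minimal sketch with a toy dataset (not the MNIST pipeline from the report):

```python
import tensorflow as tf

# Toy stand-in for the training pipeline in the report. Without .repeat(),
# iterating past one pass over the data exhausts the dataset, and Keras
# raises "your input ran out of data" mid-training.
dataset = tf.data.Dataset.range(10).batch(2)   # 5 batches per pass

# .repeat() with no argument loops indefinitely, so fit() can consume
# steps_per_epoch * epochs batches across multiple epochs.
repeated = dataset.repeat()
batches = list(repeated.take(15))              # three passes' worth
```

With .repeat() in place, fit(..., epochs=3, steps_per_epoch=...) no longer runs off the end of the data, which is consistent with the reporter's observation that a single epoch works.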
tensorflowtensorflow | model save error: object has no attribute '_compile_metrics' | Bug | with python3, model saving stopped working after switching from python2 to python3; for my non-compiled model I started getting the following error: error: model object has no attribute '_compile_metrics'. later I switched to tf 2 (right now running 2.0.1), but nothing has changed. system information: ubuntu/debian. tensorflow installed from binary. tensorflow version: from 1.x to 2.0.1. python version: 3.5.2. cuda/cudnn version: 8 / 10.1 (based on tf needs). gpu model and memory: various.
tensorflowtensorflow | tensorflow model fit error node iteratorgetnext | Bug | hey tensorflow team there be an error that accur when you try to follow the tutorial how to load csv datum link when the model run through all the epoch each time there be this error code basecollectiveexecutor startabort out of range end of sequence node iteratorgetnext there be already some entry here on github but none of they give a solution or a workaround the people say that this be an issue with tf v2 0 0 and 2 1 0 be there a solution for this problem I will leave my complete code here it s just 161 line maybe I do a mistake I also attach a screenshot from the error thank you very much in advance kind regard christian richter code python from future import absolute import division print function unicode literal import functool import tensorflow as tf import xlrd import panda as pd import csv import numpy as np import csv tf compat v1 enable eager execution train datum url test data url train file path tf keras util get file train datum csv csv train datum url test file path tf keras util get file test data csv csv test datum url np set printoption precision 3 suppress true head train file path label column besucher label 0 1 2 3 4 5 6 7 8 9 10 100 200 300 400 500 600 700 800 900 train dataset tf datum experimental make csv dataset datum train data csv csv batch size 52609 select column datum uhrzeit wochentag wochenende ferien feiertag brueckentag schneechaos streik besucher label name besucher num epoch 1 shuffle false test dataset tf datum experimental make csv dataset datum alt test data csv csv batch size 1 select column datum uhrzeit wochentag wochenende feiertag besucher label name besucher num epoch 1 shuffle false def show batch dataset for batch label in dataset take 1 for key value in batch item print 20 format key value numpy show batch train dataset def pack feature label return tf stack list feature value axis 1 label pack dataset1 train dataset map pack pack dataset2 
test dataset map pack for feature label in pack dataset1 take 1 print feature numpy print print label numpy example batch label batch next iter train dataset class packnumericfeature object def init self name self name name def call self feature label numeric feature feature pop name for name in self name numeric feature tf cast feat tf float32 for feat in numeric feature numeric feature tf stack numeric feature axis 1 feature numeric numeric feature return feature label numeric feature datum uhrzeit wochentag wochenende ferien feiertag brueckentag schneechaos streik pack train datum train dataset map packnumericfeature numeric feature pack test datum train dataset map packnumericfeature numeric feature show batch pack train datum example batch label batch next iter pack train datum desc pd read csv datum train data csv csv numeric feature describe desc mean np array desc t mean std np array desc t std def normalize numeric data datum mean std return datum mean std normalizer functool partial normalize numeric datum mean mean std std numeric column tf feature column numeric column numeric normalizer fn normalizer shape len numeric feature numeric column numeric column numeric column example batch numeric numeric layer tf keras layer densefeature numeric column numeric layer example batch numpy preprocesse layer numeric layer print preprocesse layer example batch numpy 0 model tf keras sequential preprocesse layer tf keras layer dense 128 activation relu tf keras layer dense 128 activation relu tf keras layer dense 1 activation sigmoid model compile loss binary crossentropy optimizer adam metric accuracy train datum pack train datum shuffle 500 test datum pack test datum model fit train datum epoch 20 test loss test accuracy model evaluate test datum print n ntest loss test accuracy format test loss test accuracy |
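For what it's worth, the "end of sequence" message in the report is emitted when an iterator built with num_epochs=1 reaches the end of the CSV data. A minimal sketch with a hypothetical tiny CSV (not the reporter's data) showing exactly where the dataset ends:

```python
import csv
import os
import tempfile

import tensorflow as tf

# Hypothetical toy CSV standing in for the tutorial data.
path = os.path.join(tempfile.mkdtemp(), "toy.csv")
with open(path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["a", "b", "label"])
    for i in range(6):
        writer.writerow([i, i * 2, i % 2])

# With num_epochs=1 the dataset yields exactly one pass over the file;
# the iterator then signals end-of-sequence, which is what the logged
# "IteratorGetNext ... end of sequence" message reports. Passing
# num_epochs=None (the default) or chaining .repeat() avoids exhausting
# the data when fit() runs for several epochs.
ds = tf.data.experimental.make_csv_dataset(
    path, batch_size=2, label_name="label", num_epochs=1, shuffle=False)
n_batches = len(list(ds))   # 6 rows / batch_size 2
```

As far as I can tell, the message by itself is informational at the end of a pass; it only becomes a problem when training expects more batches than the pipeline can supply.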
tensorflowtensorflow | code will not compile (image classification) | Bug | description of issue (what needs changing): there's a bug in the code: history = model.fit_generator(train_data_gen, # steps_per_epoch=total_train // batch_size, epochs=epochs, validation_data=val_data_gen, validation_steps=total_val // batch_size). here steps_per_epoch=total_train // batch_size (and everything after it) will not compile because it is behind the comment. the correct code should be: history = model.fit_generator(train_data_gen, steps_per_epoch=total_train // batch_size, epochs=epochs, validation_data=val_data_gen, validation_steps=total_val // batch_size)
tensorflowtensorflow | tftrt not convert dilated convolution | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow os platform and distribution e g linux ubuntu 16 04 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary pip tensorflow version use command below 1 15 python version 3 6 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version 10 7 tensorrt 5 gpu model and memory you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version 1 15 0 describe the current behavior tftrt skip dilate convolution and give leave this in the output other convolution work fine this be what it look like in tensorboard dilate convolution it appear that there be some batchtospacend and spacetobatchnd operation perhaps these be not support describe the expect behavior base on the pr that be merge into tensorflow in january last year tftrt support dilate convolution 24674 I would expect that convert a convolution with dilation would be support instead it s not I try dilation rate of 2 8 16 code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem from tensorflow python compiler tensorrt import trt convert as trt import tensorflow as tf inp tf keras layers input input 400 400 3 x tf keras layer conv2d 8 kernel size 3 3 stride 1 1 dilation rate 16 16 inp model tf keras model model inp x sess tf keras backend get session output nodes n name 2 for n 
in model output graph def tf graph util convert variable to constant sess sess graph as graph def output node converter trt trtgraphconverter input graph def graph def node blacklist model output precision mode fp32 max batch size 1 minimum segment size 1 max workspace size byte 2048 20 use calibration true calib graph converter convert with tf gfile gfile model trt pbtxt w as f f write str calib graph other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach |
tensorflowtensorflow | update the documentation link to the tflite conversion command | Bug | the tflite converter is missing the usage documentation. example links: (link 1), (link 2). these flags are needed to run such a command: bazel run --config=opt tensorflow/lite/toco:toco -- --input_file=$OUTPUT_DIR/tflite_graph.pb --output_file=$OUTPUT_DIR/detect.tflite --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --inference_type=FLOAT --allow_custom_ops. please update the links, or could someone point me at the file that contains these flags so I can investigate other possibilities and methods.
tensorflowtensorflow | save tf keras sequential model fail with rnn contain more than one grucell | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution linux ubuntu 18 04 mobile device if the issue happen on mobile device tensorflow instal from binary tf nightly 2 0 preview tensorflow version git version v1 12 1 7529 g3e0ad8a004 version 2 0 0 dev20190731 python version 3 6 9 bazel version if compile from source gcc compiler version if compile from source cuda cudnn version cpu only gpu model and memory cpu only describe the current behavior save a tf keras sequential model with tf keras layer rnn fail when contain more than one tf keras layer grucell with error message runtimeerror unable to create link name already exist describe the expect behavior saving should succeed not only when have one cell code to reproduce the issue python import tensorflow as tf saving succeed for number of cell 1 but fail for number of cell 1 number of cell 2 model tf keras sequential model add tf keras layers input shape 1 1 cell for I in range number of cell cell append tf keras layer grucell 10 model add tf keras layer rnn cell model save rnn h5 other info log behavior be the same when use simplernncell instead of grucell traceback in case of failure bash traceback most recent call last file test rnn gru py line 17 in model save rnn h5 file home test dev tensorflow tf2 lib python3 6 site package tensorflow core python keras engine network py line 1157 in save save save model self filepath overwrite include optimizer save format file home test dev tensorflow tf2 lib python3 6 site package tensorflow core python keras save save py line 105 in save model model filepath overwrite include optimizer file home test dev tensorflow tf2 lib python3 6 site package tensorflow core python keras save hdf5 format py line 103 in save model to hdf5 save weight to hdf5 group model weights group model layer 
file home test dev tensorflow tf2 lib python3 6 site package tensorflow core python keras save hdf5 format py line 625 in save weight to hdf5 group param dset g create dataset name val shape dtype val dtype file home test dev tensorflow tf2 lib python3 6 site package h5py hl group py line 139 in create dataset self name dset file home test dev tensorflow tf2 lib python3 6 site package h5py hl group py line 373 in setitem h5o link obj i d self i d name lcpl lcpl lapl self lapl file h5py object pyx line 54 in h5py object with phil wrapper file h5py object pyx line 55 in h5py object with phil wrapper file h5py h5o pyx line 202 in h5py h5o link runtimeerror unable to create link name already exist |
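One possible workaround, not from the report but a sketch of an equivalent architecture: stack separate GRU layers instead of passing several GRUCell objects to a single RNN layer, so each layer's weights serialize under a unique name rather than colliding in the HDF5 group.

```python
import tensorflow as tf

# Hypothetical workaround sketch: two stacked GRU layers are functionally
# equivalent to RNN([GRUCell(10), GRUCell(10)]) but save without the
# HDF5 "name already exists" collision described above.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1, 1)),
    tf.keras.layers.GRU(10, return_sequences=True),
    tf.keras.layers.GRU(10),
])
```

Each GRU layer here gets its own weight namespace, which is the property the failing multi-cell RNN save appears to lack.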
tensorflowtensorflow | frozen graph generation warning lead to error in run the model | Bug | system information os platform and distribution linux ubuntu 18 04 mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device tensorflow instal from source or binary binary tensorflow version tensorflow gpu 1 15 0 python version 3 6 10 instal use virtualenv pip conda pip bazel version if compile from source 0 26 1 gcc compiler version if compile from source 7 4 cuda cudnn version 10 2 gpu model and memory geforce gtx 960 m 16 gb describe the problem aim be to convert ckpt and config file to tflite for ssd mobilenet v2 quantize coco model follow error be see when convert from ckpt and config file to pb info tensorflow skip quant after featureextractor mobilenetv2 conv add fold provide the exact sequence of command step that you execute before run into the problem follow be the sequence of command py36 tf gpu ridlr ridlr107 tensorflow model master research object detection python export tflite ssd graph py pipeline config path home ridlr tensorflow base model ssd mobilenet v2 quantize 300x300 coco 2019 01 03 pipeline config train checkpoint prefix home ridlr tensorflow base model ssd mobilenet v2 quantize 300x300 coco 2019 01 03 model ckpt output directory home ridlr tensorflow base model ssd mobilenet v2 quantize 300x300 coco 2019 01 03 tflite graph export tflite the above command generate a tflite graph pb file tflite convert output file tflite graph tflite graph def file tflite graph pb input array normalize input image tensor output array tflite detection postprocess input shape 1 300 300 3 inference type quantize uint8 std dev value 0 mean value 1 default range min 0 default range max 6 allow custom op the above command generate a tflite graph tflite file py36 tf gpu ridlr ridlr107 tensorflow base model ssd mobilenet v2 quantize 300x300 coco 2019 01 03 tflite graph export tflite edgetpu compiler tflite graph tflite edge tpu compiler version 2 
0 267685300 error 106 std abs input product scale bias scale 1e 6 std min input product scale bias scale be not true error node number 0 conv 2d fail to prepare model compile successfully in 11 ms input model tflite graph tflite input size 5 89mib output model tflite graph edgetpu tflite output size 5 88mib on chip memory available for cache model parameter 0 00b on chip memory use for cache model parameter 0 00b off chip memory use for stream uncached model parameter 0 00b number of edge tpu subgraph 0 total number of operation 0 operation log tflite graph edgetpu log see the operation log file for individual operation detail the above command generate a tflite grapf edgetpu tflite file inspite of the error when I run this model on the coral I get the follow error info initialize tensorflow lite runtime traceback most recent call last file detect image py line 124 in main file detect image py line 91 in main interpreter allocate tensor file home ankit anaconda3 envs py35 lib python3 5 site package tflite runtime interpreter py line 244 in allocate tensor return self interpreter allocatetensor file home ankit anaconda3 envs py35 lib python3 5 site package tflite runtime interpreter wrapper py line 114 in allocatetensor return interpreter wrapper interpreterwrapper allocatetensor self runtimeerror tensorflow lite kernels kernel util cc 119 std abs input product scale bias scale 1e 6 std min input product scale bias scale be not true node number 0 conv 2d fail to prepare I suspect that there be warn info display when generate the tflite graph pb file which be lead to the above error the build log for the same be below the line from the beolow log that concern I be info tensorflow skip quant after featureextractor mobilenetv2 conv add fold how can I resolve this warning py36 tf gpu ridlr ridlr107 tensorflow model master research object detection python export tflite ssd graph py pipeline config path home ridlr tensorflow base model ssd mobilenet v2 quantize 300x300 
coco 2019 01 03 pipeline config train checkpoint prefix home ridlr tensorflow base model ssd mobilenet v2 quantize 300x300 coco 2019 01 03 model ckpt output directory home ridlr tensorflow base model ssd mobilenet v2 quantize 300x300 coco 2019 01 03 tflite graph export tflite warn tensorflow the tensorflow contrib module will not be include in tensorflow 2 0 for more information please see for I o related op if you depend on functionality not list there please file an issue warn tensorflow from home ridlr tensorflow model master research slim net inception resnet v2 py 374 the name tf graphkey be deprecate please use tf compat v1 graphkey instead warn tensorflow from home ridlr tensorflow model master research slim net mobilenet mobilenet py 397 the name tf nn avg pool be deprecate please use tf nn avg pool2d instead warn tensorflow from export tflite ssd graph py 143 the name tf app run be deprecate please use tf compat v1 app run instead warn tensorflow from export tflite ssd graph py 133 the name tf gfile gfile be deprecate please use tf io gfile gfile instead w0121 12 09 58 011585 140284674922304 module wrapper py 139 from export tflite ssd graph py 133 the name tf gfile gfile be deprecate please use tf io gfile gfile instead warn tensorflow from home ridlr tensorflow model master research object detection export tflite ssd graph lib py 193 the name tf gfile makedirs be deprecate please use tf io gfile makedirs instead w0121 12 09 58 016252 140284674922304 module wrapper py 139 from home ridlr tensorflow model master research object detection export tflite ssd graph lib py 193 the name tf gfile makedirs be deprecate please use tf io gfile makedirs instead warn tensorflow from home ridlr tensorflow model master research object detection export tflite ssd graph lib py 237 the name tf placeholder be deprecate please use tf compat v1 placeholder instead w0121 12 09 58 016631 140284674922304 module wrapper py 139 from home ridlr tensorflow model master research 
object detection export tflite ssd graph lib py 237 the name tf placeholder be deprecate please use tf compat v1 placeholder instead warn tensorflow from home ridlr tensorflow model master research object detection meta architecture ssd meta arch py 597 the name tf variable scope be deprecate please use tf compat v1 variable scope instead w0121 12 09 58 020018 140284674922304 module wrapper py 139 from home ridlr tensorflow model master research object detection meta architecture ssd meta arch py 597 the name tf variable scope be deprecate please use tf compat v1 variable scope instead warn tensorflow from home ridlr anaconda3 envs py36 tf gpu lib python3 6 site package tensorflow core contrib layers python layers layer py 1057 layer apply from tensorflow python keras engine base layer be deprecate and will be remove in a future version instruction for update please use layer call method instead w0121 12 09 58 022555 140284674922304 deprecation py 323 from home ridlr anaconda3 envs py36 tf gpu lib python3 6 site package tensorflow core contrib layers python layers layer py 1057 layer apply from tensorflow python keras engine base layer be deprecate and will be remove in a future version instruction for update please use layer call method instead warn tensorflow from home ridlr tensorflow model master research object detection core anchor generator py 171 the name tf assert equal be deprecate please use tf compat v1 assert equal instead w0121 12 09 59 660506 140284674922304 module wrapper py 139 from home ridlr tensorflow model master research object detection core anchor generator py 171 the name tf assert equal be deprecate please use tf compat v1 assert equal instead warn tensorflow from home ridlr tensorflow model master research object detection predictor convolutional box predictor py 150 the name tf log info be deprecate please use tf compat v1 log info instead w0121 12 09 59 668454 140284674922304 module wrapper py 139 from home ridlr tensorflow model master 
```
[each INFO/WARNING line below is also emitted a second time by the absl logger (I0121/W0121 ... lines); those verbatim duplicates are elided]
...From research/object_detection/predictors/convolutional_box_predictor.py:150: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.
INFO:tensorflow:depth of additional conv before box predictor: 0
I0121 12:09:59.668638 140284674922304 convolutional_box_predictor.py:151] depth of additional conv before box predictor: 0
[... the same "depth of additional conv before box predictor: 0" line repeats five more times ...]
WARNING:tensorflow:From /home/ridlr/tensorflow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:52: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
2020-01-21 12:09:59.838240: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-01-21 12:09:59.847343: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-01-21 12:09:59.847611: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: GeForce GTX 960M major: 5 minor: 0 memoryClockRate(GHz): 1.176 pciBusID: 0000:01:00.0
2020-01-21 12:09:59.847750: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory
[... identical "Could not load dynamic library" warnings for libcublas.so.10.0, libcufft.so.10.0, libcurand.so.10.0, libcusolver.so.10.0 and libcusparse.so.10.0 ...]
2020-01-21 12:09:59.851251: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-01-21 12:09:59.851275: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1641] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at ... for how to download and setup the required libraries for your platform. Skipping registering GPU devices...
2020-01-21 12:09:59.851543: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-21 12:09:59.875285: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2808000000 Hz
2020-01-21 12:09:59.876004: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x559d0e122aa0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-01-21 12:09:59.876041: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-01-21 12:09:59.904104: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x559d0e124900 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-01-21 12:09:59.904122: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 960M, Compute Capability 5.0
2020-01-21 12:09:59.904227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-21 12:09:59.904235: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]
WARNING:tensorflow:From /home/ridlr/tensorflow/models-master/research/object_detection/export_tflite_ssd_graph_lib.py:267: The name tf.train.get_or_create_global_step is deprecated. Please use tf.compat.v1.train.get_or_create_global_step instead.
WARNING:tensorflow:From /home/ridlr/tensorflow/models-master/research/object_detection/builders/graph_rewriter_builder.py:41: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/Conv/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/expanded_conv/depthwise/add_fold
[... the same "Skipping quant after" line repeats for the expand/add_fold and depthwise/add_fold nodes of expanded_conv_1 through expanded_conv_16 ...]
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/Conv_1/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_2_1x1_256/add_fold
INFO:tensorflow:Skipping quant after FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_2_3x3_s2_512/add_fold
[... and likewise for layer_19_1_Conv2d_3_1x1_128, layer_19_2_Conv2d_3_3x3_s2_256, layer_19_1_Conv2d_4_1x1_128, layer_19_2_Conv2d_4_3x3_s2_256, layer_19_1_Conv2d_5_1x1_64 and layer_19_2_Conv2d_5_3x3_s2_128 ...]
WARNING:tensorflow:From /home/ridlr/tensorflow/models-master/research/object_detection/exporter.py:111: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.
INFO:tensorflow:Restoring parameters from /home/ridlr/tensorflow/base_models/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03/model.ckpt
WARNING:tensorflow:From /home/ridlr/anaconda3/envs/py36_tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/tools/freeze_graph.py:127: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version. Instructions for updating: Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from /tmp/tmpkmw_fvex
WARNING:tensorflow:From /home/ridlr/anaconda3/envs/py36_tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/tools/freeze_graph.py:233: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From /home/ridlr/anaconda3/envs/py36_tf_gpu/lib/python3.6/site-packages/tensorflow_core/python/framework/graph_util_impl.py:277: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.extract_sub_graph
INFO:tensorflow:Froze 632 variables.
INFO:tensorflow:Converted 632 variables to const ops.
2020-01-21 12:10:06.305956: I tensorflow/tools/graph_transforms/transform_graph.cc:317] Applying strip_unused_nodes
```
tensorflow/tensorflow | hang on model.fit | Bug | Although I can't extract the code to reproduce the problem, I think that documenting it here will help improve this project and anyone who encounters the same problem. System: Windows 10. Version: tf-nightly-gpu (2020-01-19). I use a tf.data Dataset to provide samples. When I use GPU + eager + batch size > 16, it will hang on model.fit and continue to occupy one CPU core. When I use CPU, or turn off eager mode, or set batch size <= 16, it will run normally. It will continue like this: ``` 2020-01-21 01:20:24.249995: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll 2020-01-21 01:20:28.403903: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll 2020-01-21 01:20:29.708097: W tensorflow/stream_executor/gpu/redzone_allocator.cc:312] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only. Relying on driver to perform ptx compilation. Modify $PATH to customize ptxas location. This message will be only logged once. 9/1406 [...] - ETA: 42:30 - loss: 30.3274 - accuracy: 0.0000e+00 ```
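A hang like the one reported above is hard to debug interactively, but a generic watchdog can at least confirm programmatically that a call never returns. This is a minimal, TensorFlow-free sketch of that pattern; `run_with_timeout` and the stand-in `quick_step` are illustrative names, not part of any TensorFlow API, and in the real scenario the wrapped call would be something like `model.fit`:

```python
import threading

def run_with_timeout(fn, timeout_s, *args, **kwargs):
    """Run fn in a worker thread; report whether it finished within timeout_s."""
    result = {}

    def worker():
        result["value"] = fn(*args, **kwargs)

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout_s)
    if t.is_alive():
        # Still running after the deadline: likely hung.
        # The daemon thread is abandoned; it will not block interpreter exit.
        return ("timed out", None)
    return ("finished", result.get("value"))

# Illustrative stand-in for a training step that returns promptly.
def quick_step():
    return "ok"

status, value = run_with_timeout(quick_step, timeout_s=5.0)  # → ("finished", "ok")
```

Note the limitation: a hung native call (e.g. inside a CUDA kernel launch) cannot be cancelled from Python this way; the watchdog only detects the hang so the process can log state and exit.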
tensorflow/tensorflow | saved_model_cli breaks nightly packages | Bug | Our in-house nightly builds have been broken since 2020-01-16, when auditwheel tries to repair my nightly package. The reason under the hood seems to be an incorrect linkage from the recent change adding XLA support to saved_model_cli in 9959c04433623e0b7ebf6248e0f75bc7a24bd7cb. Install the latest nightly and navigate to the directory tensorflow_core/compiler/aot: ``` $ ldd _pywrap_tfcompile.so linux-vdso.so.1 (0x00007ffc5e064000) libtensorflow_framework.so.2 => /usr/local/lib/python3.7/dist-packages/tensorflow_core/compiler/aot/libtensorflow_framework.so.2 (0x00007fa798bba000) _pywrap_tensorflow_internal.so => not found libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fa798b77000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fa798b56000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fa7989d1000) libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fa79884d000) libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fa798833000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa798672000) /lib64/ld-linux-x86-64.so.2 (0x00007fa79afbc000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fa798668000) ``` Obviously it links to _pywrap_tensorflow_internal.so, but it is not found with the relative path. P.S. We are using auditwheel 3.0.0 to produce manylinux2014 builds, but the official tf-nightly uses an older version which fails to catch this. P.P.S. Directly using saved_model_cli does not give this error, as _pywrap_tensorflow_internal.so seems to be preloaded, but I am pretty sure this is a bug that we need to fix. Ping @ebrevdo @mihaimaruseac
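auditwheel trips over exactly the kind of `=> not found` entry shown in the ldd output above. As a hedged sketch (pure Python, assuming ldd's usual `name => path (address)` line format), unresolved dependencies can be extracted from captured ldd output like this; `missing_deps` is a hypothetical helper, not an auditwheel API:

```python
def missing_deps(ldd_output: str) -> list:
    """Return names of shared libraries that ldd reported as 'not found'."""
    missing = []
    for line in ldd_output.splitlines():
        line = line.strip()
        # Typical unresolved entry: "_pywrap_tensorflow_internal.so => not found"
        if "=>" in line:
            name, _, target = line.partition("=>")
            if target.strip().startswith("not found"):
                missing.append(name.strip())
    return missing

sample = """\
linux-vdso.so.1 (0x00007ffc5e064000)
_pywrap_tensorflow_internal.so => not found
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fa798b77000)
"""
print(missing_deps(sample))  # → ['_pywrap_tensorflow_internal.so']
```

In practice the input would come from `subprocess.run(["ldd", path], capture_output=True, text=True).stdout`; a non-empty result is a quick red flag before handing a wheel to auditwheel.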