| repository | issue title | labels | body |
|---|---|---|---|
tensorflow/tensorflow | tf.data.experimental.make_csv_dataset modifies mutable variables passed to it | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: TensorFlow installed from (source or binary): Docker. TensorFlow version (use command below): TF 2.1.0. Describe the current behavior: `tf.data.experimental.make_csv_dataset` modifies passed variables in place, so if you call `tf.data.experimental.make_csv_dataset(file_pattern, batch_size, select_columns=columns_to_use)`, the variable `columns_to_use` is changed. It is in line 463 (L463), but it may happen to other passed variables. Specifically, the list sent to `select_columns` is replaced by a list of the indices of those columns in the file to read. Describe the expected behavior: a function should never modify the mutable objects passed to it; this is only ever appropriate for methods of a class. |
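The mutation described in this report is a general Python pitfall, independent of TensorFlow. A minimal pure-Python sketch (hypothetical function names, not TensorFlow's actual implementation) of the in-place rewrite versus the defensive-copy fix:

```python
def select_columns_inplace(columns, header):
    """Anti-pattern: replaces the caller's column names with indices, in place."""
    for i, name in enumerate(list(columns)):
        columns[i] = header.index(name)   # mutates the argument the caller passed
    return columns

def select_columns_copy(columns, header):
    """Fix: build a new list so the caller's list is untouched."""
    return [header.index(name) for name in columns]

header = ["id", "price", "qty"]

wanted = ["price", "qty"]
select_columns_inplace(wanted, header)
print(wanted)                 # [1, 2] -- the caller's list was clobbered

wanted = ["price", "qty"]
idx = select_columns_copy(wanted, header)
print(wanted, idx)            # ['price', 'qty'] [1, 2] -- caller's list preserved
```

The copy costs one extra list allocation and removes the surprising side effect the issue complains about.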
tensorflow/tensorflow | Cannot concat RaggedTensors in custom Keras layer | Bug | Please make sure that this is a bug. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from (source or binary): binary, 2.1.0. TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de410 2.1.0. Python version: 3.7. CUDA/cuDNN version: 10.1. GPU model and memory: GTX 2080, 8 GB. Describe the current behavior: raises an exception. Describe the expected behavior: does not raise an exception. Standalone code to reproduce the issue: `import tensorflow as tf; from tensorflow.keras.models import Model; import tensorflow.keras.layers as layers; max_len = 20; lookuptable = tf.lookup.StaticVocabularyTable(tf.lookup.TextFileInitializer("vocab.txt", tf.string, 0, tf.int64, 1, delimiter=" "), num_oov_buckets=1); input_encoding_string = layers.Input(dtype=tf.string, shape=(1,)); def custom_tokenizer(input_tensor_string, width=max_len): ragged_tensor = tf.strings.split(input_tensor_string); word_index = tf.ragged.map_flat_values(lookuptable.lookup, ragged_tensor); rt = word_index; # truncate rows to have at most `width` items, pad rows to length `width`; pad_row_lengths = width - rt.row_lengths(); pad_values = tf.zeros(width * rt.nrows() - tf.size(rt, tf.int64), rt.dtype); padding = tf.RaggedTensor.from_row_lengths(pad_values, pad_row_lengths); return tf.concat([padding, rt], axis=1).to_tensor(); def custom_tokenizer_shape(shape): return (shape[0], max_len); process_input = layers.Lambda(custom_tokenizer, output_shape=custom_tokenizer_shape)(input_encoding_string); tm = Model(input_encoding_string, process_input)`. The vocab.txt is just a word-to-index map like "a 1, b 2, c 3, d 4". Other info / logs (condensed traceback, in order of the exception chain): first, from tensor_util.py `make_tensor_proto`: `TypeError: Expected binary or unicode string, got tf.RaggedTensor(values=Tensor("lambda_24/1/zeros:0", shape=(None,), dtype=int64), row_splits=Tensor("lambda_24/1/RaggedFromRowLengths/concat:0", shape=(None,), dtype=int64))`; during handling of that exception: `TypeError: Failed to convert object of type <RaggedTensor> to Tensor. ... Consider casting elements to a supported type.`; then from op_def_library.py `_apply_op_helper`: `TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types that don't all match.`; and finally, from ragged_concat_ops.py `_ragged_stack_concat_helper` via `rt.shape.assert_has_rank(ndims)` in tensor_shape.py: `ValueError: Shape (None, None, None) must have rank 2`. |
tensorflow/tensorflow | shuffle_and_repeat fusion optimizer contents incorrect on s390x arch (big endian) | Bug | Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: s390x, Ubuntu 18.04. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): v2.2.0-rc4-0-g70087ab4f4 2.2.0-rc4. Python version: Python 3.6.9. Bazel version (if compiling from source): build label 2.0.0 (non-git), build target bazel-out/s390x-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar. GCC/compiler version (if compiling from source): gcc 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04). CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: when running TensorFlow test cases on s390x, several hundred test cases fail with the following error: `[ RUN ] ShuffleAndRepeatFusionTest.testShuffleAndRepeatFusion_test_mode_eager_tfapiversion_2 ... 2020-04-30 19:20:21.809298: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at optimize_dataset_op.cc:66 : Internal: Attempted to register a dataset optimizer that doesn't exist.` Describe the expected behavior: the shuffle-and-repeat fusion optimizer should be correctly identified on s390x. Standalone code to reproduce the issue: on an s390x system, run the following test case: `python tensorflow/python/data/experimental/kernel_tests/optimization/shuffle_and_repeat_fusion_test.py`. The problem seems to occur when tensorflow/core/kernels/data/optimize_dataset_op.cc tries OptimizeDatasetOp::MakeDataset for the shuffle_and_repeat_fusion optimizer: on s390x the `optimizations_` contents are incorrect. On s390x, `(gdb) p optimizations_` shows a std::vector of length 1, capacity 1 whose tstring union reads `smll_ = {size_ = 0, ...}`, `large_ = {size_ = 101, cap_ = 47, ptr_ = 0x3b63b60 "shuffle_and_repeat_fusion"}`, `offset_ = {size_ = 0, offset_ = 101, count_ = 0}`, `view_ = {size_ = 101, ptr_ = 0x2f}`, with the raw bytes placing the `\145` (0x65 = 101) near the end of the size field. On x86 the same print shows `smll_ = {size_ = 101 'e', ...}`, `large_ = {size_ = 101, cap_ = 47, ptr_ = 0x482a860 "shuffle_and_repeat_fusion"}`, `offset_ = {size_ = 101, offset_ = 0, count_ = 47}`, with the `'e'` byte leading the raw bytes. As we can see, the size is incorrectly extracted on s390x. It is very likely that the `optimizations_` content needs to be endian-sensitive; specifically, the positioning of the `\000e` bytes looks suspect. I looked at shuffle_and_repeat_fusion.cc to get an idea but couldn't nail down where this is set. Any pointers appreciated, thanks. |
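The gdb dumps in this report are consistent with a fixed-width length field being interpreted with the wrong byte order. A small pure-Python sketch (using the stdlib `struct` module, not TensorFlow code) of how the same 4-byte length 101 reads as garbage when the endianness assumption is wrong:

```python
import struct

payload_len = 101                        # 0x65, the tstring size seen on x86
raw = struct.pack("<I", payload_len)     # bytes as a little-endian host writes them
print(raw)                               # b'e\x00\x00\x00' -- the leading 'e' from the x86 dump

wrong = struct.unpack(">I", raw)[0]      # a naive big-endian read of the same bytes
print(payload_len, wrong)                # 101 1694498816 -- 0x65 shifted to the top byte
```

This mirrors the report: on the little-endian host the `0x65` byte leads the size field, while the big-endian s390x read finds zeros where it expects the size, so the string length comes out as 0 and the optimizer name is never matched.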
tensorflow/tensorflow | No gradients provided for any variable | Bug | System information: OS platform and distribution: Arch Linux (GNOME). TensorFlow version: tf-nightly 2.2.0.dev20200504 (from `pip install tf-nightly`). Python version: 3.8.2. Describe the current behavior: when running the training on the localhost, the error occurs at `Epoch 1/1000`. Condensed traceback: `File "main.py", line 155: h = model.fit_generator(...)` → `src/network/model.py, line 166, in fit: out = self.model.fit(x=x, y=y, batch_size=batch_size, epochs=epochs, verbose=verbose)` → through `tensorflow/python/keras/engine/training.py` (`fit`, `train_function`, `step_function`, `run_step`, `train_step`, `_minimize`) and `tensorflow/python/keras/optimizer_v2/optimizer_v2.py` (`_aggregate_gradients`, line 1251 `_filter_grads`) → `ValueError: No gradients provided for any variable: ['conv2d/kernel:0', 'conv2d/bias:0', 'conv2d_1/kernel:0', 'conv2d_1/bias:0', 'gated_conv2d/kernel:0', 'gated_conv2d/bias:0', 'conv2d_2/kernel:0', 'conv2d_2/bias:0', 'gated_conv2d_1/kernel:0', 'gated_conv2d_1/bias:0', 'conv2d_3/kernel:0', 'conv2d_3/bias:0', 'gated_conv2d_2/kernel:0', 'gated_conv2d_2/bias:0', 'conv2d_4/kernel:0', 'conv2d_4/bias:0', 'bidirectional/forward_lstm/lstm_cell_1/kernel:0', 'bidirectional/forward_lstm/lstm_cell_1/recurrent_kernel:0', 'bidirectional/forward_lstm/lstm_cell_1/bias:0', 'bidirectional/backward_lstm/lstm_cell_2/kernel:0', 'bidirectional/backward_lstm/lstm_cell_2/recurrent_kernel:0', 'bidirectional/backward_lstm/lstm_cell_2/bias:0', 'dense/kernel:0', 'dense/bias:0', 'bidirectional_1/forward_lstm_1/lstm_cell_4/kernel:0', 'bidirectional_1/forward_lstm_1/lstm_cell_4/recurrent_kernel:0', 'bidirectional_1/forward_lstm_1/lstm_cell_4/bias:0', 'bidirectional_1/backward_lstm_1/lstm_cell_5/kernel:0', 'bidirectional_1/backward_lstm_1/lstm_cell_5/recurrent_kernel:0', 'bidirectional_1/backward_lstm_1/lstm_cell_5/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0']`. Describe the expected behavior: this issue didn't happen in previous versions, but I have to use Python 3.8, thus the tf-nightly version. I also noticed that in Colab the code works with TF 2.1 and Python 3.7. In addition, I found issues suggesting the use of `with tf.GradientTape() as tape:`; however, the examples I have seen use a customized training function (`train_step`). In my case I only use the standard `fit` function, which was enough for the context. Maybe I'm not sure how to use this in the new version, or maybe it's an issue. Anyway, if anyone can help, thank you very much. Standalone code to reproduce the issue: project code and model class. |
tensorflow/tensorflow | Invalid argument error with tf.math.add_n | Bug | I am having an issue with the `tf.add_n` / `tf.math.add_n` command. I keep getting an error no matter how I change it. I am using TensorFlow 2.1.0 with Jupyter Notebook and am still new to TensorFlow. Here is my code and the error that it produced; it seems like a simple fix, but I have no clue what to do: `import tensorflow as tf; a = tf.constant(6, name="constant_a"); b = tf.constant(3, name="constant_b"); c = tf.constant(10, name="constant_c"); d = tf.constant(15, name="constant_d"); mul = tf.multiply(a, b, name="mul"); div = tf.math.divide(c, d, name="div"); addn = tf.math.add_n([mul, div], name="addn")`. Condensed traceback: the call fails inside `tensorflow/core/python/util/dispatch.py` (`wrapper`), `math_ops.py` (`add_n` → `gen_math_ops.add_n`), and `ops.py` (`raise_from_not_ok_status`), ending with `InvalidArgumentError: cannot compute AddN as input #1 (zero-based) was expected to be a int32 tensor but is a double tensor [Op:AddN] name: addn`. |
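The error arises because `tf.multiply` of two int constants yields an int32 tensor while `tf.math.divide` yields float64, and `AddN` requires all inputs to share one dtype (in TensorFlow the fix would be a cast, e.g. `tf.cast(mul, tf.float64)`). A pure-Python sketch of the same contract (hypothetical `add_n`, not TensorFlow's kernel):

```python
def add_n(inputs):
    """Sketch of AddN's dtype contract: every input must share the first input's type."""
    first = type(inputs[0])
    for i, x in enumerate(inputs):
        if type(x) is not first:
            raise TypeError(
                f"input #{i} was expected to be a {first.__name__} but is a {type(x).__name__}")
    return sum(inputs)

mul = 6 * 3        # int, like tf.multiply(a, b) on int32 constants
div = 10 / 15      # float, like tf.math.divide(c, d) -> float64

try:
    add_n([mul, div])          # mixed int/float: rejected, mirroring AddN
except TypeError as e:
    print(e)

print(add_n([float(mul), div]))  # cast first, as tf.cast would in TensorFlow
```

The same principle applies in the reporter's code: cast `mul` to the float dtype (or make the constants floats) before summing.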
tensorflow/tensorflow | Cannot make padded batch from dataset made from ragged tensor slices | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Windows 10. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.2.0-rc4. Python version: 3.7.6. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: I can create a dataset from a ragged tensor using `tf.data.Dataset.from_tensor_slices`, but I cannot make a padded batch from it with `tf.data.Dataset.padded_batch`. Describe the expected behavior: if the API is meant to support ragged tensors, then the dataset should allow me to make a padded batch from the ragged tensor slices; otherwise, the API could be restricted to disallow ragged tensors. Standalone code to reproduce the issue: `import tensorflow as tf; a = tf.ragged.stack([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]); dataset = tf.data.Dataset.from_tensor_slices(a); print(dataset); for it in dataset: print(it.numpy())` prints `[1 2 3 4 5]` and `[6 7 8 9 10]`, but `batches = dataset.padded_batch(batch_size=2, padded_shapes=[5])` raises `TypeError: padded_batching of components of type ... is not supported`. Compare with a similar working example with a dataset taken from the documentation of `tf.data.Dataset.from_generator`: `import tensorflow as tf; import itertools; def gen(): for i in itertools.count(1): yield (i, [1] * i); dataset = tf.data.Dataset.from_generator(gen, (tf.int64, tf.int64), (tf.TensorShape([]), tf.TensorShape([None]))); for it1, it2 in dataset.take(3): print(it1.numpy(), it2.numpy())` prints `1 [1]`, `2 [1 1]`, `3 [1 1 1]`, and `batches = dataset.padded_batch(batch_size=2, padded_shapes=([], [10])); for it1, it2 in batches.take(3): print(it1.numpy(), it2.numpy())` prints `[1 2] [[1 0 0 0 0 0 0 0 0 0] [1 1 0 0 0 0 0 0 0 0]]`, `[3 4] [[1 1 1 0 0 0 0 0 0 0] [1 1 1 1 0 0 0 0 0 0]]`, `[5 6] [[1 1 1 1 1 0 0 0 0 0] [1 1 1 1 1 1 0 0 0 0]]`. Other info / logs: n/a. |
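For reference, the operation the reporter expects is simple to state; a pure-Python sketch (hypothetical helper, not the tf.data implementation) of what padded batching computes for ragged rows like the generator example above:

```python
def padded_batch(rows, batch_size, pad_to, pad_value=0):
    """Pad each row to length pad_to with pad_value, then group rows into batches."""
    padded = [row + [pad_value] * (pad_to - len(row)) for row in rows]
    return [padded[i:i + batch_size] for i in range(0, len(padded), batch_size)]

rows = [[1], [1, 1], [1, 1, 1]]          # like the from_generator example's second component
print(padded_batch(rows, batch_size=2, pad_to=4))
# [[[1, 0, 0, 0], [1, 1, 0, 0]], [[1, 1, 1, 0]]]
```

Nothing here depends on how the ragged rows were produced, which is the crux of the report: a dataset sliced from a RaggedTensor carries the same information as one built from a generator, so `padded_batch` could in principle serve both.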
tensorflow/tensorflow | Support for TensorList crossing the XLA/TF boundary is not implemented | Bug | A Colab notebook to reproduce the issue. I am trying to implement a simple RNN compiled with XLA. The code works without XLA, but when I try to compile one tf.function with XLA, I get a strange error: `UnimplementedError: Support for TensorList crossing the XLA/TF boundary is not implemented [[node dummy_name/StatefulPartitionedCall (defined at 30) ]] [Op:__inference_simple_train_5054]. Errors may have originated from an input operation. Input Source operations connected to node dummy_name/StatefulPartitionedCall: data (defined at 38)`. I am not sure what goes wrong when `experimental_compile` is added, and whether that is expected behavior. |
tensorflow/tensorflow | Issue in Google Codelabs' handwritten digit classifier code | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): there is an error in the code of step 4 of the codelab — getting a null-safety error when implementing the code. |
tensorflow/tensorflow | TFLiteConverter | Bug | System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): (not filled in). TensorFlow installed from (source or binary): (not filled in). TensorFlow version (or GitHub SHA if from source): (not filled in). Command used to run the converter, or code if you're using the Python API (if possible, please share a link to Colab/Jupyter/any notebook): copy and paste here the exact command. The output from the converter invocation: copy and paste the output here. Also, please include a link to the saved model or GraphDef: put a link here or attach it to the issue. Failure details: if the conversion is successful but the generated model is wrong, state what is wrong — produces wrong results and/or decrease in accuracy; produces correct results, but the model is slower than expected (model generated from the old converter). RNN conversion support: if converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title. Any other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached. |
tensorflow/tensorflow | experimental_compile regression in 2.2-rc4 | Bug | System information: Have I written custom code: yes. OS platform and distribution: Linux Mint 19.3. TensorFlow installed from: pip. TensorFlow version: 2.2.0-rc4, v2.2.0-rc3-33-g70087ab4f4. Python version: 3.6. The following code works with `experimental_compile=True` in 2.1 but causes an error in 2.2-rc4. Standalone code to reproduce the issue: `import tensorflow as tf; @tf.function(experimental_compile=True); def process_line(line): return tf.strings.split(line); text = tf.data.Dataset.from_tensor_slices(["1 2"]); text = text.map(process_line); for x in text: print(x)`. This one works in neither 2.1 nor 2.2-rc4: `process_line(["1 2"])`. Other info / logs: `tensorflow.python.framework.errors_impl.InvalidArgumentError: Function invoked by the following node is not compilable: {{node PartitionedCall}} = PartitionedCall[Tin=[DT_STRING], Tout=[DT_STRING], _XlaMustCompile=true, _read_only_resource_inputs=[], config="", config_proto="\n\007\n\003GPU\020\000\n\007\n\003CPU\020\0012\002J\0008\001", executor_type="", f=__inference_process_line_84](args_0). Uncompilable nodes: PartitionedCall: could not instantiate call to '__inference_process_line_84'. Stacktrace: Node: PartitionedCall, function: , node: PartitionedCall ... [Op:MakeIterator]`. |
tensorflow/tensorflow | Can't save stacked LSTM | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04 LTS. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.1.0. Python version: 3.6.7. Describe the current behavior: the following model causes an error; the error message is `RuntimeError: Unable to create link (name already exists)`: `#!/usr/bin/python3; import tensorflow as tf; def LSTM(): inputs = tf.keras.Input((25, 256)); results = tf.keras.layers.RNN([tf.keras.layers.LSTMCell(units=512) for i in range(2)])(inputs); return tf.keras.Model(inputs=inputs, outputs=results); if __name__ == "__main__": m = LSTM(); m.save("lstm.h5")`. But a model without stacked LSTM cells can be serialized successfully: the same code with `results = tf.keras.layers.RNN(tf.keras.layers.LSTMCell(units=512))(inputs)`. Describe the expected behavior: the serialization should proceed without problems. Standalone code to reproduce the issue: the stacked-LSTM script above. Other info / logs (traceback, most recent call last): `File "<stdin>", line 3 → /home/xieyi/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py, line 1008, in save → .../keras/saving/save.py, line 112, in save_model → .../keras/saving/hdf5_format.py, line 109, in save_model_to_hdf5: save_weights_to_hdf5_group(model_weights_group, model_layers) → hdf5_format.py, line 631, in save_weights_to_hdf5_group: param_dset = g.create_dataset(name, val.shape, dtype=val.dtype) → h5py/_hl/group.py, line 139, in create_dataset → group.py, line 373, in __setitem__: h5o.link(obj.id, self.id, name, lcpl=lcpl, lapl=self._lapl) → h5py/h5o.pyx, line 202, in h5py.h5o.link: RuntimeError: Unable to create link (name already exists)`. |
tensorflow/tensorflow | Dataset.unbatch() sets cardinality to -2 even when batch size is known | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Windows 10 x64 1909. TensorFlow installed from (source or binary): pip. TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de410 2.1.0. Python version: 3.7.7. CUDA/cuDNN version: using CPU. GPU model and memory: using CPU. Describe the current behavior: doing `Dataset.unbatch()` on a dataset with a known batch size resets cardinality to -2 (unknown). Describe the expected behavior: when the batch size of the dataset is known, it should set cardinality to batch_size * cardinality. Standalone code to reproduce the issue: `import tensorflow as tf; ds = tf.data.Dataset.range(10)  # shape: (); ds = ds.batch(2, drop_remainder=True)  # shape: (2,); print(tf.data.experimental.cardinality(ds))  # 5; ds = ds.unbatch()  # shape: (); print(tf.data.experimental.cardinality(ds))  # should be 10, but is -2 (unknown)`. Other info / logs: although cardinality is currently experimental, it is used when training Keras models. |
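The arithmetic the reporter expects is straightforward; a pure-Python sketch (hypothetical helper, not tf.data's cardinality op) of when the post-unbatch cardinality is statically knowable:

```python
def unbatched_cardinality(n_batches, batch_size, drop_remainder=True):
    """With drop_remainder=True every batch holds exactly batch_size elements,
    so the element count after unbatching is n_batches * batch_size.
    Without drop_remainder the last batch may be short, so the count is unknown."""
    if not drop_remainder:
        return None  # stands in for UNKNOWN_CARDINALITY (-2 in tf.data)
    return n_batches * batch_size

print(unbatched_cardinality(5, 2))  # 10 -- what the reporter expects for range(10).batch(2)
```

This is exactly the case in the report: `batch(2, drop_remainder=True)` on `range(10)` has cardinality 5, so unbatching could statically report 10 instead of falling back to unknown.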
tensorflowtensorflow | TensorFlow in Python | Bug | 1. I tried to import tensorflow and got a lot of warning messages. 2. After importing, I ran this code:

```python
a = tf.Variable(1, name="a")
b = tf.Variable(2, name="b")
f = a + b
print(f)
```

but this is my output:

```
Tensor("Add:0", shape=(), dtype=int32)
```

Does someone know what the problem is and what to do? I did `pip install --upgrade tensorflow` but it did not help. |
tensorflowtensorflow | TFLite interpreter fails to load quantized model on iPhone | Bug | Similar issue. System information:
- Have I written custom code: stock MobileNetV2
- OS Platform and Distribution: Windows 10
- Mobile device: iPhone 5s
- TensorFlow installed from: `pip install tensorflow==2.1`
- TensorFlow version: 2.1.0
- Python version: 3.7.4
- CUDA/cuDNN version: 10.1
- GPU model and memory: GeForce GTX 1650, 4 GB

Describe the current behavior: I have created a new TFLite model based on MobileNetV2:

```python
tf.keras.applications.MobileNetV2(input_shape=(SIZE, SIZE, 3), include_top=False)
```

It works well without quantization, using the CPU on iOS. I should say that the TensorFlow team did a great job, many thanks. Unfortunately, there is a problem with latency. I have read the TF documentation related to optimization (post-training quantization) and worked with dynamic range quantization. I executed the following Python code:

```python
import pathlib
import tensorflow as tf

tflite_model_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_model_dir.mkdir(exist_ok=True, parents=True)

converter = tf.lite.TFLiteConverter.from_saved_model("C:\\work\\python\\nn\\mobilenet_v2_128")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
tflite_model_quant_file = tflite_model_dir / "mobilenet_v2_128_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
```

After this, the model was added to an Xcode project on a Mac. The Podfile contains the following framework:

```
pod 'TensorFlowLite', '1.13.1'
```

Error: the TFLite InterpreterBuilder returns this error:

```
Didn't find op for builtin opcode 'CONV_2D' version '2'
```

Describe the expected behavior: I suppose this should work fast and without errors.

Other info / logs: I also tried to use float16 quantization. Python code:

```python
converter = tf.lite.TFLiteConverter.from_saved_model("C:\\work\\python\\nn\\mobilenet_v2_128")
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_quant_model = converter.convert()
open("mobilenet_v2_128_quant_float16.tflite", "wb").write(tflite_quant_model)
```

In the Swift code I use MetalDelegate. With `pod 'TensorFlowLiteSwift', '0.0.1-nightly'` I have no errors, but the model doesn't work. With `pod 'TensorFlowLiteSwift', '2.1.0'` I have the following error:

```
2020-05-01 21:36:13.578369+0300 TFL Segmentation[6367:330410] Initialized TensorFlow Lite runtime.
2020-05-01 21:36:20.877393+0300 TFL Segmentation[6367:330397] Execution of the command buffer was aborted due to an error during execution. Caused GPU Hang Error (IOAF code 3)
```

Is it possible to use a MobileNetV2 TFLite quantized model in an Xcode Swift project? Best regards, Dmitriy |
tensorflowtensorflow | Python Keras model in C++ | Bug | System information:
- Have I written custom code: yes
- OS Platform and Distribution: Win10 and Ubuntu 18
- Mobile device: no
- TensorFlow installed from source: yes
- TensorFlow version: 2.0.1
- Python version: 3.7
- Bazel version: 0.26.1
- GCC compiler version: 7.5.0

Describe the current behavior: If I set names in Keras and want to use the trained model in C++, there seem to be no names for the nodes, so I can't set the inputs or get the outputs.

Standalone code to reproduce the issue. Python code on the Windows 10 machine:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential()
model.add(layers.Input((2,), name="input"))
# Adds a densely-connected layer with 8 units to the model:
model.add(layers.Dense(8, name="layer1", input_shape=(2,), activation="tanh"))
# Add another:
model.add(layers.Dense(4, name="layer2", input_shape=(8,), activation="sigmoid"))
# Add an output layer with 1 output unit:
model.add(layers.Dense(1, name="outputLayer", input_shape=(4,), activation="sigmoid"))
model.compile(optimizer=tf.keras.optimizers.Adam(0.01),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=["accuracy"])
model.summary()
print(model.inputs[0].name)

data = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
labels = np.array([0, 1, 1, 0])
model.fit(data, labels, epochs=500)
tf.keras.models.save_model(model, "model")
```

C++ code on the Linux machine:

```cpp
tensorflow::Tensor inputs(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 2}));
auto input_map = inputs.tensor<float, 2>();
input_map(0, 0) = 0.0f;
input_map(0, 1) = 1.0f;
std::vector<std::pair<std::string, tensorflow::Tensor>> inp = {{"input", inputs}};
std::vector<tensorflow::Tensor> outputs;
std::vector<std::string> output_names = {"outputLayer"};
auto status = model_session->Run(inp, output_names, {}, &outputs);
```

What is the problem here? Also, if I set names for the layers in Keras, save the model and load it again, the names are replaced with others, and if I use those other names in the C++ application it doesn't work. Here is the error message:

```
2020-05-02 13:38:31.256880: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] model_pruner failed: Internal: Could not find node with name 'outputLayer'
Invalid argument: Tensor input:0, specified in either feed_devices or fetch_devices was not found in the Graph
``` |
tensorflowtensorflow | head train file path | Bug | head |
tensorflowtensorflow | It is not possible to train the trainable parameters of the RandomFourierFeatures Keras layer in eager mode | Bug | System information:
- Have I written custom code: yes, minimal working example provided
- OS Platform and Distribution: Linux 5.3.0-46-generic x86_64 with Ubuntu 18.04 (Bionic)
- TensorFlow installed from: binary
- TensorFlow version: 2.2.0-dev20200501
- Python version: 3.7.5
- Bazel version: N/A
- GCC compiler version: N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior: It is not possible to train the trainable parameters of the RandomFourierFeatures Keras layer when using eager execution.

Describe the expected behavior: It should be possible to train the trainable parameters of the RandomFourierFeatures Keras layer even when using eager execution.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.python.keras.layers import RandomFourierFeatures

fourier_features = RandomFourierFeatures(
    1, kernel_initializer="gaussian", scale=1.0, trainable=True, dtype=tf.float64)
inputs = tf.keras.Input(shape=(1,), dtype=tf.float64, name="inputs")
outputs = fourier_features(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss="mean_squared_error")
model.fit(tf.constant([[1.0]]), tf.constant([[1.0]]), epochs=1)
```

Other info / logs: the call to `fit` throws the following error:

```
ValueError: No gradients provided for any variable: ['random_fourier_features/random_features_scale:0'].
1/1 [==============================] - 0s 17ms/sample
``` |
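For context, the feature map this layer computes can be sketched in plain Python. This is a simplified scalar-input version under assumed conventions (phi(x)_j = sqrt(2/D) * cos(x * w_j / scale + b_j)), not the layer's actual code; `scale` is the trainable parameter the report is unable to fit.

```python
import math

def random_fourier_features(x, w, b, scale):
    # Map a scalar input x to D random Fourier features:
    # phi(x)_j = sqrt(2/D) * cos(x * w_j / scale + b_j),
    # where w, b are the random kernel and bias and 'scale'
    # is the (supposedly trainable) kernel scale.
    d = len(w)
    return [math.sqrt(2.0 / d) * math.cos(x * w_j / scale + b_j)
            for w_j, b_j in zip(w, b)]

# With w = [0.0] and b = [0.0] the feature is sqrt(2) * cos(0) = sqrt(2),
# independent of x and scale.
```

Because `scale` enters the loss only through `cos`, its gradient is generally nonzero, which is why "No gradients provided for any variable" in eager mode points at a bug rather than at a genuinely constant parameter.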
tensorflowtensorflow | CNN made using tf.keras yields different and worse accuracy compared to the same CNN built using Keras | Bug | System information:
- Have I written custom code: I'm using the CNN MNIST example from the Keras documentation
- OS Platform and Distribution: Linux (Google Colab)
- TensorFlow installed from: binary (Colab)
- TensorFlow version: 2.2.0-rc3
- Python version: 3.6.9
- CUDA/cuDNN version: Colab GPU defaults
- GPU model and memory: Colab GPU defaults

Describe the current behavior: If I train a simple CNN on the MNIST dataset following the Keras MNIST CNN example, I get different accuracies depending on whether I use tf.keras or Keras.

Describe the expected behavior: I think the accuracies should be the same.

Standalone code to reproduce the issue: You can find the code reproducing this possible bug in a Colab notebook here. This is the first issue I open in this repository, so I hope the information I have provided is clear enough. Thank you for your work with TensorFlow! |
tensorflowtensorflow | About website links: 404 Not Found in README | Bug | Could you add the website links in the following (see attached image)? The links marked in blue return 404 Not Found. Thank you! |
tensorflowtensorflow | ForwardAccumulator fails with experimental_run_functions_eagerly(True) | Bug | System information:
- Have I written custom code: no
- OS Platform and Distribution: macOS Catalina
- TensorFlow installed from: binary
- TensorFlow version: 2.2.0rc4
- Python version: 3.7.5
- Bazel version: N/A
- GCC compiler version: N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior: Running the example in the `tf.autodiff.ForwardAccumulator` docs fails with `RecursionError: maximum recursion depth exceeded` when run with `tf.config.experimental_run_functions_eagerly(True)`.

Describe the expected behavior: Running the example in the `tf.autodiff.ForwardAccumulator` docs with `tf.config.experimental_run_functions_eagerly(True)` works the same way as when run with `tf.config.experimental_run_functions_eagerly(False)`.

Standalone code to reproduce the issue. This is the standard example from the docs, with just the `experimental_run_functions_eagerly(True)` call added:

```python
import tensorflow as tf

tf.config.experimental_run_functions_eagerly(True)

v = tf.Variable([1., 2.])
with tf.autodiff.ForwardAccumulator(
    v,
    # The "vector" in the Hessian-vector product.
    tf.constant([1., 0.])) as acc:
  with tf.GradientTape() as tape:
    y = tf.reduce_sum(v ** 3.)
  backward = tape.gradient(y, v)  # gradient from backprop
acc.jvp(backward)  # forward-over-backward Hessian-vector product
```

Other info / logs. Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

```
    self._push_tape()
  File "/Users/hartikainen/conda/envs/policy-evaluation/lib/python3.7/site-packages/tensorflow/python/eager/backprop.py", line 849, in _push_tape
    watch_accessed_variables=self._watch_accessed_variables)
  File "/Users/hartikainen/conda/envs/policy-evaluation/lib/python3.7/site-packages/tensorflow/python/eager/tape.py", line 48, in push_new_tape
    return Tape(tape)
RecursionError: maximum recursion depth exceeded
``` |
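What the docs example computes — the Hessian-vector product of f(v) = sum(v ** 3) at v = [1, 2] in direction [1, 0] — can be checked without TensorFlow. The sketch below uses a central finite difference of the analytic gradient in place of TF's forward-mode machinery; it is a verification aid, not what `ForwardAccumulator` does internally.

```python
def grad(v):
    # Gradient of f(v) = sum(x ** 3) is 3 * x ** 2, elementwise.
    return [3.0 * x * x for x in v]

def hvp(v, u, h=1e-5):
    # Hessian-vector product H(v) @ u, approximated as the directional
    # derivative of the gradient: (grad(v + h*u) - grad(v - h*u)) / (2h).
    # ForwardAccumulator computes this quantity exactly with forward-mode
    # autodiff over the backprop gradient.
    g_plus = grad([x + h * du for x, du in zip(v, u)])
    g_minus = grad([x - h * du for x, du in zip(v, u)])
    return [(p - m) / (2.0 * h) for p, m in zip(g_plus, g_minus)]

# For f(v) = sum(v ** 3), the Hessian at [1, 2] is diag(6, 12),
# so the product with [1, 0] is [6, 0] — the value acc.jvp(backward)
# returns when the example runs correctly.
```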
tensorflowtensorflow | Loading model with add_loss fails | Bug | System information:
- Have I written custom code: yes, minimal failing example given below
- OS Platform and Distribution: macOS Mojave 10.14.6 (also tested on Linux Ubuntu 18.04)
- TensorFlow installed from: binary
- TensorFlow version: 2.1.0 (v2.1.0-rc2-17-ge5bf8de410)
- Python version: 3.7.7
- CPU execution only

Describe the current behavior: Model loading using `tf.keras.models.load_model` does not work for models with a custom layer that adds losses with `self.add_loss`. In the example below I create a one-layer model with a custom layer. The layer adds a dummy loss using `self.add_loss`. This is the only loss of the model, and there are no other losses passed to `model.compile("adam")`, which is intentional. The model compiles correctly and can successfully be stored to disk in SavedModel format with `model.save("my_model")`.

```python
import numpy as np
import tensorflow as tf


class CustomLayer(tf.keras.layers.Layer):
    """Imaginary layer that adds a custom loss in the call."""

    def __init__(self, a):
        super().__init__()
        self.var = tf.Variable(a, name="var_a")

    def call(self, inputs, training=False):
        output = tf.reduce_sum(inputs + self.var, axis=1)
        self.add_loss(tf.reduce_mean(output))
        return output


def get_model(input_dim: int) -> tf.keras.Model:
    layer = CustomLayer(0.1)
    inputs = tf.keras.Input((input_dim,), name="input")
    outputs = layer(inputs)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer="adam")
    return model


if __name__ == "__main__":
    num_data = 100
    X, Y = np.random.randn(num_data, 1), np.random.randn(num_data, 1)
    model = get_model(X.shape[1])
    model.summary()
    print(model.losses, model.loss)
    model.save("my_model")
    reconstructed_model = tf.keras.models.load_model("my_model")  # <- breaks
    reconstructed_model.summary()
    # Let's check:
    np.testing.assert_allclose(model.predict(X), reconstructed_model.predict(X))
```

Problem: The issue arises when loading the model. When the load method tries to compile the model, the program terminates with the following stack trace:

```
Traceback (most recent call last):
  File "save_model.py", line 36, in <module>
    reconstructed_model = tf.keras.models.load_model("my_model")
  File "/Users/vincent/miniconda3/envs/gpflux/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py", line 150, in load_model
    return saved_model_load.load(filepath, compile)
  File "/Users/vincent/miniconda3/envs/gpflux/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/saved_model/load.py", line 99, in load
    training_config))
  File "/Users/vincent/miniconda3/envs/gpflux/lib/python3.7/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/Users/vincent/miniconda3/envs/gpflux/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 446, in compile
    self._compile_weights_loss_and_weighted_metrics()
  File "/Users/vincent/miniconda3/envs/gpflux/lib/python3.7/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/Users/vincent/miniconda3/envs/gpflux/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 1592, in _compile_weights_loss_and_weighted_metrics
    self.total_loss = self._prepare_total_loss(masks)
  File "/Users/vincent/miniconda3/envs/gpflux/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 1691, in _prepare_total_loss
    raise ValueError('The model cannot be compiled '
ValueError: The model cannot be compiled because it has no loss to optimize.
```

So it looks like the model cannot be reconstructed and compiled because there is no loss specified. The original model, however, does have a loss:

```python
print(model.losses, model.loss)
```

Interestingly, if we load the model without compiling, we get:

```python
reconstructed_model = tf.keras.models.load_model("my_model", compile=False)
print(reconstructed_model.losses, reconstructed_model.loss)
```

which indicates that the losses indeed aren't correctly loaded into the reconstructed model. Not compiling the model, however, is not an option, as `model.predict` doesn't work as long as the model is not compiled.

Additional info:
1. Passing the source code of the layer in the load method results in the same crash:

```python
reconstructed_model = tf.keras.models.load_model(
    "my_model", custom_objects={"CustomLayer": CustomLayer})
```

However, in my use case I don't have access to the source code at load time, so this solution would not fit my needs — but it also doesn't work.
2. Interestingly, the program terminates correctly when the original model does not get compiled. Again, this doesn't fit my use case.

Describe the expected behavior: According to the docs, the Keras APIs (`save_model` and the SavedModel format) support the saving and loading of models that add losses using `add_loss` in their `call` method. Many thanks for the support! |
tensorflowtensorflow | XLA function crashes when called under GradientTape | Bug | Hello! Describe the current behavior: I have a function compiled with XLA (`tf.function(experimental_compile=True)`) which writes the processed output to a TensorArray. When called inside a `GradientTape`, it crashes. Without `experimental_compile=True`, everything works fine.

Describe the expected behavior: The function does not crash.

Standalone code to reproduce the issue: Colab link reproducing the crash. |
tensorflowtensorflow | Inference blocks indefinitely on GPU when eager mode is enabled | Bug | System information:
- Have I written custom code: I made small changes (using OpenCV to capture images) to the object detection tutorial file
- OS Platform and Distribution: Windows 10
- TensorFlow installed from: `python -m pip install tensorflow`
- TensorFlow version: v2.1.0-rc2-17-ge5bf8de410 (2.1.0)
- Python version: 3.7
- CUDA/cuDNN version: 10.1 / 7.6.5.32
- GPU model and memory: GTX 960M 2 GB, or RTX 2070 Super 8 GB

Describe the current behavior: With GPU and eager mode enabled, running inference blocks indefinitely after processing a few frames (~15). If I then disable eager mode, it runs fine. When eager is disabled and I use a Session to process the data, it also blocks indefinitely on GPU. Everything works fine when only using the CPU.

Describe the expected behavior: Being able to run inference on GPU without indefinite blocking, with eager mode enabled.

Code to reproduce the issue:

```python
import os
import pathlib
import cv2
import numpy as np
import tensorflow as tf

from object_detection.utils import ops as utils_ops


def load_model(model_name):
    base_url = '...'
    model_file = model_name + '.tar.gz'
    model_dir = tf.keras.utils.get_file(
        fname=model_name, origin=base_url + model_file, untar=True)
    model_dir = pathlib.Path(model_dir) / "saved_model"
    model = tf.saved_model.load(str(model_dir))
    model = model.signatures['serving_default']
    return model


def run_inference_for_single_image(model, image):
    image = np.asarray(image)
    input_tensor = tf.convert_to_tensor(image)
    input_tensor = input_tensor[tf.newaxis, ...]
    # Run inference
    print("inference start")
    model(input_tensor)
    print("inference end")


if "models" in pathlib.Path.cwd().parts:
    while "models" in pathlib.Path.cwd().parts:
        os.chdir('..')

# disable eager mode
# tf.compat.v1.disable_eager_execution()

model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)

utils_ops.tf = tf.compat.v1
tf.gfile = tf.io.gfile

IMG_PATH = 'path/to/image'
image_np = np.zeros((640, 480, 3), np.uint8)
while True:
    run_inference_for_single_image(detection_model, image_np)
```

I run this from the research/object_detection folder.

Other info / logs: I am not sure how to support the claim that this is a bug. I tried it on different machines, and the code is based on an example. I think it is a bug because there are no error or warning messages before it hangs, it works fine when just using the CPU (with or without eager mode), it works on GPU without eager mode, and it hangs in a library function. I have never done anything like this before, so if I did something wrong or more information is required, please let me know. |
tensorflowtensorflow | Docstring is misleading | Bug | The docstring says "it is a negative quantity between -1 and 0, where 0 indicates orthogonality and values closer to -1 indicate greater similarity". Although it is true that the function reverses the sign of the classic cosine similarity (so that -1 denotes similarity instead of +1 as in the original formula), the actual range is still -1 to 1, not -1 to 0 as suggested by the docstring (L1073). |
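The reporter's point is easy to verify with a plain-Python version of the negated cosine similarity. This is a sketch of the formula the docstring describes, not Keras's implementation:

```python
import math

def cosine_similarity_loss(y_true, y_pred):
    # Keras-style loss: the classic cosine similarity with its sign flipped,
    # so -1 means "most similar" and +1 means "most dissimilar".
    dot = sum(t * p for t, p in zip(y_true, y_pred))
    norm_true = math.sqrt(sum(t * t for t in y_true))
    norm_pred = math.sqrt(sum(p * p for p in y_pred))
    return -dot / (norm_true * norm_pred)

# identical vectors  -> -1.0 (greatest similarity)
# orthogonal vectors ->  0.0
# opposite vectors   -> +1.0, so the range is [-1, 1], not [-1, 0]
```

The third case (anti-parallel vectors yielding +1) is exactly the one the current docstring's claimed range of [-1, 0] cannot represent.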
tensorflowtensorflow | TensorFlow GPU installation | Bug | URL(s) with the issue: (link). Description of issue (what needs changing): The `tf-nightly` pip package does not support GPU; it should be `tf-nightly-gpu`. In the Windows setup section, the path

```
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\extras\CUPTI\libx64;%PATH%
```

should be

```
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\extras\CUPTI\lib64;%PATH%
```

since the CUDA toolkit generates this path with `lib64` and not `libx64`. |
tensorflowtensorflow | OwnedMultiDeviceIterator can cause an error on TPU pods | Bug | System information:
- TensorFlow installed from: binary
- TensorFlow version: 2.2.0-rc4

Describe the current behavior: OwnedMultiDeviceIterator can cause an error on TPU pods. The resulting stack trace:

```
gc 0 E0428 07:24:15.171094 132699 main app.py:658] Top level exception: Could not parse RPC response
	 [[{{node iterator_13_89}}]] [Op:__inference_minimize_49326]
Function call stack:
minimize

gc 0 E0428 07:24:15.172459 132699 main app.py:659] Traceback (most recent call last):
  File "/launcher_root/google3/third_party/py/absl/app.py", line 463, in run
    _run_main(main, args)
  File "/launcher_root/google3/third_party/py/absl/app.py", line 392, in _run_main
    sys.exit(main(argv))
  File "/launcher_root/google3/experimental/users/wendyshang/smarl_football/football/vtrace_main.py", line 153, in main
    create_environment(0), create_agent, create_optimizer)
  File "/launcher_root/google3/experimental/users/wendyshang/smarl_football/football/learner_object.py", line 574, in learner_loop
    minimize(it)
  File "/launcher_root/google3/third_party/tensorflow/python/eager/def_function.py", line 695, in __call__
    result = self._call(*args, **kwds)
  File "/launcher_root/google3/third_party/tensorflow/python/eager/def_function.py", line 760, in _call
    return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
  File "/launcher_root/google3/third_party/tensorflow/python/eager/function.py", line 1904, in _filtered_call
    cancellation_manager=cancellation_manager)
  File "/launcher_root/google3/third_party/tensorflow/python/eager/function.py", line 1981, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File "/launcher_root/google3/third_party/tensorflow/python/eager/function.py", line 615, in call
    ctx=ctx)
  File "/launcher_root/google3/third_party/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InternalError: Could not parse RPC response
	 [[{{node iterator_13_89}}]] [Op:__inference_minimize_49326]
Function call stack:
minimize
``` |
tensorflowtensorflow | TFLite fails to create Hexagon delegate on Pixel 3 | Bug | Device information:

```
adb shell getprop ro.product.device   -> blueline
adb shell getprop ro.board.platform   -> sdm845
```

```
app_root/app/src/main/jniLibs/
  arm64-v8a/
    libhexagon_nn_skel.so
    libhexagon_nn_skel_v65.so
    libhexagon_nn_skel_v66.so
  armeabi-v7a/
    libhexagon_nn_skel.so
    libhexagon_nn_skel_v65.so
    libhexagon_nn_skel_v66.so
```

app_root/app/build.gradle:

```gradle
apply plugin: 'com.android.application'

android {
    compileSdkVersion 28
    defaultConfig {
        applicationId "org.tensorflow.lite.examples.classification"
        minSdkVersion 21
        targetSdkVersion 28
        versionCode 1
        versionName "1.0"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
    aaptOptions {
        noCompress "tflite"
    }
    compileOptions {
        sourceCompatibility = '1.8'
        targetCompatibility = '1.8'
    }
}

repositories {
    flatDir {
        dirs 'libs'
    }
}

// Download default models; if you wish to use your own models, then place them
// in the assets directory and comment out this line.
apply from: 'download.gradle'

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'com.android.support:appcompat-v7:28.0.0'
    implementation 'com.android.support:design:28.0.0'
    // Build off of nightly TensorFlow Lite
    implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'
    implementation 'org.tensorflow:tensorflow-lite-hexagon:0.0.0-nightly'
}
```

I followed the TensorFlow Lite Hexagon delegate guide on a Pixel 3. TensorFlow Lite fails to create the Hexagon delegate. Logs:

```
W Inference: type=1400 audit(0.0:127864): avc: denied { search } for name="soc0" dev="sysfs" ino=66478 scontext=u:r:untrusted_app_27:s0:c190,c256,c512,c768 tcontext=u:object_r:sysfs_soc:s0 tclass=dir permissive=0
D org.tensorflow.lite.examples.classification: vendor/qcom/proprietary/commonsys-intf/adsprpc/src/fastrpc_apps_user.c:1615: Error: device node open failed for domain 3 (errno Permission denied)
D org.tensorflow.lite.examples.classification: vendor/qcom/proprietary/commonsys-intf/adsprpc/src/fastrpc_apps_user.c:1916: Error 0x57: apps_dev_init failed for domain 3 (errno Permission denied)
D org.tensorflow.lite.examples.classification: vendor/qcom/proprietary/commonsys-intf/adsprpc/src/fastrpc_apps_user.c:2009: Error 0x57: open_dev (-1) failed for domain 3
D org.tensorflow.lite.examples.classification: vendor/qcom/proprietary/commonsys-intf/adsprpc/src/fastrpc_apps_user.c:1179: Error 0x3b: remote_handle_control domain failed for request ID 1 on domain 3
D org.tensorflow.lite.examples.classification: vendor/qcom/proprietary/commonsys-intf/adsprpc/src/fastrpc_apps_user.c:1190: Error 0x3b: remote_handle_control failed for request ID 1
W tflite: Failed to fetch Hexagon NN version. This might be because you're using incompatible versions of libhexagon_interface and libhexagon_nn_skel. You must use compatible versions. Refer to the TensorFlow Lite Hexagon delegate guide.
I tflite: Hexagon delegate is not supported.
W System.err: java.lang.UnsupportedOperationException: This device doesn't support Hexagon DSP execution.
W System.err:     at org.tensorflow.lite.experimental.HexagonDelegate.<init>(HexagonDelegate.java:40)
W System.err:     at org.tensorflow.lite.examples.classification.tflite.Classifier.<init>(Classifier.java:182)
W System.err:     at org.tensorflow.lite.examples.classification.tflite.ClassifierQuantizedMobileNet.<init>(ClassifierQuantizedMobileNet.java:37)
W System.err:     at org.tensorflow.lite.examples.classification.tflite.Classifier.create(Classifier.java:97)
W System.err:     at org.tensorflow.lite.examples.classification.ClassifierActivity.recreateClassifier(ClassifierActivity.java:167)
W System.err:     at org.tensorflow.lite.examples.classification.ClassifierActivity.lambda$onInferenceConfigurationChanged$0(ClassifierActivity.java:146)
W System.err:     at org.tensorflow.lite.examples.classification.-$$Lambda$ClassifierActivity$83lgy2tUjUj0M5N4Bhmb9qLlGsY.run(Unknown Source:8)
W System.err:     at android.os.Handler.handleCallback(Handler.java:883)
W System.err:     at android.os.Handler.dispatchMessage(Handler.java:100)
W System.err:     at android.os.Looper.loop(Looper.java:214)
W System.err:     at android.os.HandlerThread.run(HandlerThread.java:67)
``` |
tensorflowtensorflow | Buggy behaviour of Dataset API | Bug | System information:
- Have I written custom code: yes (see notebook)
- OS Platform and Distribution: Google Colab (Ubuntu 18.04.3 LTS)
- TensorFlow installed from: provided by Colab
- TensorFlow version: 2.2.0-rc3
- Python version: 3.6.9

Describe the current behavior: At a dataset graph branching point, the node which is the root of the branching is resampled for each branch during one round of execution. With non-randomized input to the dataset this does not cause any problems, but if the root node comes after a `shuffle` call, the branches will receive different inputs in the same computation round.

Describe the expected behavior: Downstream branches should receive the same data even if shuffle is applied.

Standalone code to reproduce the issue: (notebook link). More info: this behaviour is also present if the dataset is created from a generator which handles the shuffling implicitly. Edit: fixed the link here as well. |
tensorflowtensorflow | Bug with _VALID_SCOPE_NAME_REGEX | Bug | System information:
- OS Platform and Distribution: macOS 10.13.5
- TensorFlow version: 1.11.0
- Python version: 3.6.5

Describe the current behavior: `_VALID_OP_NAME_REGEX` and `_VALID_SCOPE_NAME_REGEX` are defined at line 1583 of `tensorflow/python/framework/ops.py`:

```python
_VALID_OP_NAME_REGEX = re.compile("^[A-Za-z0-9.][A-Za-z0-9_.\\-/>]*$")
_VALID_SCOPE_NAME_REGEX = re.compile("^[A-Za-z0-9_.\\-/>]*$")
```

which should recognize the `>` symbol. The result is:

```python
>>> _VALID_SCOPE_NAME_REGEX = re.compile("^[A-Za-z0-9_.\\-/>]*$")
>>> _VALID_SCOPE_NAME_REGEX.match("n_catCntC_campaign>c_campaign")
```

The above pattern can't recognize it, but with the pattern below it works:

```python
>>> _VALID_SCOPE_NAME_REGEX = re.compile(r"^[A-Za-z0-9_.\-/>]*$")
>>> _VALID_SCOPE_NAME_REGEX.match("n_catCntC_campaign>c_campaign")
<_sre.SRE_Match object; span=(0, 29), match='n_catCntC_campaign>c_campaign'>
```

Describe the expected behavior: |
tensorflow/tensorflow | Huber loss crashes training loop due to data type mismatch | Bug | System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): somewhat custom
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.4
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: laptop
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version: 3.6.10
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source): 7.3.0
- CUDA/cuDNN version: none
- GPU model and memory: none

Describe the current behavior
Huber loss crashes the script with the following error message:

    TypeError: Input 'y' of 'Mul' Op has type float64 that does not match type float32 of argument 'x'.

That happens even though I cast everything to either tf.float32 or tf.float64 manually. It does work if I put in this line: tf.keras.backend.set_floatx('float32'), or if I remove the original line with 'float64'. It seems to me like setting the global data type fails somewhere, and I get the following warning that tensors are being re-cast automatically:

    WARNING:tensorflow: Layer dense_8 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because its dtype defaults to floatx.

Standalone code to reproduce the issue

    import os
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
    from sklearn.datasets import load_linnerud
    import tensorflow as tf
    tf.keras.backend.set_floatx('float64')
    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Dense, Dropout, LSTM, Concatenate

    x, y = load_linnerud(return_X_y=True)
    data = tf.data.Dataset.from_tensor_slices((x, y)).map(
        lambda a, b: (tf.divide(a, tf.reduce_max(x, axis=0, keepdims=True)), b))
    train_data = data.take(16).shuffle(16).batch(4)
    test_data = data.skip(16).shuffle(4).batch(4)

    class FullyConnectedNetwork(Model):
        def __init__(self):
            super(FullyConnectedNetwork, self).__init__()
            self.layer1 = Dense(9, input_shape=(3,))
            self.layer2 = LSTM(8, return_sequences=True)
            self.layer3 = Dense(27)
            self.layer4 = Dropout(5e-1)
            self.layer5 = Dense(27)
            self.layer6 = Concatenate()
            self.layer7 = Dense(3)

        def call(self, x, *args, **kwargs):
            x = tf.nn.tanh(self.layer1(x))
            y = self.layer2(x)
            x = tf.nn.selu(self.layer3(x))
            x = self.layer4(x)
            x = tf.nn.relu(self.layer5(x))
            x = self.layer6([x, y])
            x = self.layer7(x)
            return x

    model = FullyConnectedNetwork()
    loss_object = tf.keras.losses.Huber()
    train_loss = tf.keras.metrics.Mean()
    test_loss = tf.keras.metrics.Mean()
    optimizer = tf.keras.optimizers.Adamax()

    @tf.function
    def train_step(inputs, targets):
        with tf.GradientTape() as tape:
            outputs = model(inputs)
            loss = loss_object(outputs, targets)
        train_loss(loss)
        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    @tf.function
    def test_step(inputs, targets):
        outputs = model(inputs)
        print(outputs.dtype, targets.dtype)
        loss = loss_object(outputs, targets)
        test_loss(loss)

    def main():
        train_loss.reset_states()
        test_loss.reset_states()
        for epoch in range(1, 10_000 + 1):
            for x, y in train_data:
                train_step(x, y)
            for x, y in test_data:
                test_step(x, y)
            if epoch % 25 == 0:
                print(f'Epoch {epoch:4}: '
                      f'train loss {train_loss.result().numpy():.2f}, '
                      f'test loss {test_loss.result().numpy():.2f}')

    if __name__ == '__main__':
        main()

Traceback (most recent call last):
  File "/home/nicolas/anaconda3/envs/condaenv/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<input>", line 86, in <module>
    main()
  File "<input>", line 75, in main
    train_step(x, y)
  File ".../tensorflow_core/python/eager/def_function.py", line 568, in __call__
    result = self._call(*args, **kwds)
  File ".../tensorflow_core/python/eager/def_function.py", line 615, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File ".../tensorflow_core/python/eager/def_function.py", line 497, in _initialize
    *args, **kwds)
  File ".../tensorflow_core/python/eager/function.py", line 2389, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File ".../tensorflow_core/python/eager/function.py", line 2703, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File ".../tensorflow_core/python/eager/function.py", line 2593, in _create_graph_function
    capture_by_value=self._capture_by_value)
  File ".../tensorflow_core/python/framework/func_graph.py", line 978, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File ".../tensorflow_core/python/eager/def_function.py", line 439, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File ".../tensorflow_core/python/framework/func_graph.py", line 968, in wrapper
    raise e.ag_error_metadata.to_exception(e)
TypeError: in converted code:

    <input>:54 train_step
        loss = loss_object(outputs, targets)
    .../tensorflow_core/python/keras/losses.py:126 __call__
        losses = self.call(y_true, y_pred)
    .../tensorflow_core/python/keras/losses.py:221 call
        return self.fn(y_true, y_pred, **self._fn_kwargs)
    .../tensorflow_core/python/keras/losses.py:915 huber_loss
        math_ops.multiply(delta, linear))
    .../tensorflow_core/python/util/dispatch.py:180 wrapper
        return target(*args, **kwargs)
    .../tensorflow_core/python/ops/math_ops.py:334 multiply
        return gen_math_ops.mul(x, y, name)
    .../tensorflow_core/python/ops/gen_math_ops.py:6125 mul
        "Mul", x=x, y=y, name=name)
    .../tensorflow_core/python/framework/op_def_library.py:504 _apply_op_helper
        inferred_from[input_arg.type_attr]))

    TypeError: Input 'y' of 'Mul' Op has type float64 that does not match type float32 of argument 'x'.
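The crash above happens inside Keras's huber_loss when a constant of one dtype is multiplied by a tensor of the other. As a framework-free illustration (a NumPy sketch, not TensorFlow's implementation), casting both operands and the delta constant to a single dtype before any arithmetic makes the same computation dtype-safe:

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0, dtype=np.float32):
    # Cast everything -- including the delta constant -- to one dtype
    # up front, so no float32/float64 'Mul' mismatch can occur.
    y_true = np.asarray(y_true, dtype=dtype)
    y_pred = np.asarray(y_pred, dtype=dtype)
    delta = dtype(delta)
    error = np.abs(y_pred - y_true)
    quadratic = np.minimum(error, delta)   # |error| capped at delta
    linear = error - quadratic             # the part beyond delta
    return np.mean(0.5 * quadratic ** 2 + delta * linear)

# Mixed-precision inputs are harmless here: both are cast first.
loss = huber_loss(np.float64([0.0, 2.0]), np.float32([0.0, 0.0]))
print(loss)  # 0.75
```

In the reported setup the analogous fix is to keep the model outputs, targets, and the global floatx on one dtype rather than mixing float64 data with float32 layer defaults.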
tensorflow/tensorflow | Speech command recognition example crashes with GpuDelegate | Bug | TensorFlow Lite issue

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. In SpeechActivity, where the interpreter is created (line 182, deprecated):

    tflite = new Interpreter(loadModelFile(getAssets(), actualModelFilename));

  I changed it to:

    Interpreter.Options options = new Interpreter.Options();
    options.addDelegate(new GpuDelegate());
    options.setNumThreads(1);
    tflite = new Interpreter(loadModelFile(getAssets(), actualModelFilename), options);

Standalone code to reproduce the issue
I have an instrumented test that reproduces the bug; I will attach it. It crashes with GpuDelegate and does not crash with NnApiDelegate. Change it to .java to use it; also include the proper dependencies in Gradle and create the proper directory for Gradle to find and build it (androidTest/java, etc.). speecht.txt
tensorflow/tensorflow | TF 2.2rc3: dict of tensors as an input of a Keras layer | Bug | System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version (use command below): 2.2rc3
- Python version: 3.7

Describe the current behavior
When we create a Keras layer which takes a dict as input, the behavior of the layer is not always consistent. We create a custom layer such as:

    class EmbeddingMerger(tf.keras.layers.Layer):
        def __init__(self, list_features, **kwargs):
            super().__init__(**kwargs)
            self.embeddings = {feature: Embedding(10, 3) for feature in list_features}

        def call(self, inputs):
            tensors = [self.embeddings[col](inputs[col]) for col in inputs]
            return Add()(tensors)

We can create a model from it:

    list_features = ['feature_1', 'feature_2']
    feature_1 = tf.constant([0, 1, 3])
    feature_2 = tf.constant([1, 2, 4])
    f = {'feature_1': feature_1, 'feature_2': feature_2}
    f_inputs = {'feature_1': Input(shape=..., name='feature_1'),
                'feature_2': Input(shape=..., name='feature_2')}
    out = EmbeddingMerger(list_features)(f_inputs)
    model = Model(f_inputs, out)

If we pass the model the dict with the two features under the correct names, it works as expected; call this the truth:
    {'feature_1': feature_1, 'feature_2': feature_2}
A dict with the two features inserted in the wrong order gives the same result as above:
    {'feature_2': feature_2, 'feature_1': feature_1}
A dict with only one feature breaks with an assertion error because it cannot compute such a result; that is the good behavior:
    {'feature_1': feature_1}
A dict with the two features and one additional key named 'test' gives the same result as the truth, which is the correct behavior (the feature not used by the layer is ignored):
    {'feature_1': feature_1, 'feature_2': feature_2, 'test': feature_2}
A dict with the key 'feature_1' but not 'feature_2', plus an additional key 'test' whose value is feature_2, gives a result which is not the same as the truth:
    {'feature_1': feature_1, 'test': feature_2}
That one should not be computable, since we do not pass feature_2. The same behavior happens with 3, 4, etc. keys. I think I can extrapolate: if the correct keys are available, everything is fine and the computation is correct; however, if one feature is missing, the layer will take another feature in the dict and use it in its place.

Describe the expected behavior
If a key needed to run is not available, always break with an assertion error.

Standalone code to reproduce the issue
Code to do the experiments above. Ping me for more use cases.
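The expected behavior this report asks for (fail loudly when a required key is absent, ignore extra keys) can be sketched framework-free; validate_features below is a hypothetical helper for illustration, not part of the Keras API:

```python
def validate_features(required, inputs):
    # Hypothetical helper (not a Keras API): check that every feature the
    # layer needs is present, instead of silently substituting another key.
    missing = [name for name in required if name not in inputs]
    if missing:
        raise KeyError(f"missing required features: {missing}")
    # Extra keys such as 'test' are dropped, matching the one permissive
    # behavior above that the report considers correct.
    return {name: inputs[name] for name in required}

# Complete dict plus an extra key: the extra key is ignored.
selected = validate_features(["feature_1", "feature_2"],
                             {"feature_2": 2, "feature_1": 1, "test": 9})
print(selected)  # {'feature_1': 1, 'feature_2': 2}
```

With such a check in the layer's call, the {'feature_1': ..., 'test': ...} case above would raise instead of silently reusing the wrong tensor.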
tensorflow/tensorflow | MirroredStrategy Keras example hangs | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: Jupyter notebook of an official Kubeflow image (gcr.io/kubeflow-images-public/tensorflow-2.1.0-notebook-gpu:1.0.0) on my Kubeflow platform
- TensorFlow version (use command below): 2.1.0-gpu
- Python version: 3.6.9
- GCC/compiler version (if compiling from source): gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
- CUDA/cuDNN version: release 10.1, V10.1.243, but I can't find the cuDNN library using the command: cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
- GPU model and memory: T4, 16 GB

Describe the current behavior
In my notebook, which has 4 GPUs, I ran the MirroredStrategy Keras example documented here. All of the GPUs' memory is occupied, but after printing the log below it hangs.

Log (repeated lines omitted):

    [W 09:04:55.882 NotebookApp] All authentication is disabled. Anyone who can connect to this server will be able to run code.
    [I 09:04:56.129 NotebookApp] JupyterLab extension loaded from /usr/local/lib/python3.6/dist-packages/jupyterlab
    [I 09:04:56.129 NotebookApp] JupyterLab application directory is /usr/local/share/jupyter/lab
    [I 09:04:56.349 NotebookApp] Serving notebooks from local directory: /home/jovyan
    [I 09:05:13.255 NotebookApp] Kernel started: 85970d11-8ffb-4259-a0f9-29614d194712
    2020-04-28 09:07:46.689575: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
    2020-04-28 09:07:54.871181: I ... Successfully opened dynamic library libcuda.so.1
    2020-04-28 09:07:55.100937: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero (this warning repeats before nearly every line below)
    2020-04-28 09:07:55.101951: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1544] Found device 0 with properties: pciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5 coreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.75GiB deviceMemoryBandwidth: 298.08GiB/s
    2020-04-28 09:07:55.103013: I ... Found device 1 with properties: pciBusID: 0000:00:05.0 name: Tesla T4 ...
    2020-04-28 09:07:55.104070: I ... Found device 2 with properties: pciBusID: 0000:00:06.0 name: Tesla T4 ...
    2020-04-28 09:07:55.105138: I ... Found device 3 with properties: pciBusID: 0000:00:07.0 name: Tesla T4 ...
    2020-04-28 09:07:55: Successfully opened dynamic libraries libcublas.so.10, libcufft.so.10, libcurand.so.10, libcusolver.so.10, libcusparse.so.10, libcudnn.so.7
    2020-04-28 09:07:55.129220: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1686] Adding visible gpu devices: 0, 1, 2, 3
    2020-04-28 09:07:55.130294: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance-critical operations: AVX2 FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2020-04-28 09:07:55.138816: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2200000000 Hz
    2020-04-28 09:07:55.139503: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x58629a0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2020-04-28 09:07:55.139544: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
    2020-04-28 09:07:56.193694: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x208f270 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
    2020-04-28 09:07:56.193722: I ...   StreamExecutor device (0): Tesla T4, Compute Capability 7.5 (likewise devices 1-3)
    (the four devices are then found and the CUDA libraries opened a second time)
    2020-04-28 09:07:59.142161: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1085] Device interconnect StreamExecutor with strength 1 edge matrix:
    2020-04-28 09:07:59.142208: I ...      0 1 2 3
    2020-04-28 09:07:59.142216: I ... 0:   N Y N N
    2020-04-28 09:07:59.142221: I ... 1:   Y N N N
    2020-04-28 09:07:59.142226: I ... 2:   N N N Y
    2020-04-28 09:07:59.142230: I ... 3:   N N Y N
    2020-04-28 09:07:59.147517: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1230] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 13969 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)
    2020-04-28 09:07:59.149214: I ... Created TensorFlow device (... GPU:1 ...) -> physical GPU (device: 1, pci bus id: 0000:00:05.0, ...)
    2020-04-28 09:07:59.150683: I ... Created TensorFlow device (... GPU:2 ...) -> physical GPU (device: 2, pci bus id: 0000:00:06.0, ...)
    2020-04-28 09:07:59.152188: I ... Created TensorFlow device (... GPU:3 ...) -> physical GPU (device: 3, pci bus id: 0000:00:07.0, ...)
    2020-04-28 09:07:59.157773: I ... Successfully opened dynamic library libcublas.so.10
    [I 09:14:42.569 NotebookApp] Kernel interrupted: 85970d11-8ffb-4259-a0f9-29614d194712 (repeated 8 times through 09:14:58)
    Error in atexit._run_exitfuncs:
    Traceback (most recent call last):
      File "/usr/lib/python3.6/logging/__init__.py", line 1945, in shutdown
        h.flush()
      File "/usr/local/lib/python3.6/dist-packages/absl/logging/__init__.py", line 892, in flush
        self._current_handler.flush()
      File "/usr/local/lib/python3.6/dist-packages/absl/logging/__init__.py", line 785, in flush
        self.stream.flush()
      File "/usr/local/lib/python3.6/dist-packages/ipykernel/iostream.py", line 341, in flush
        if self.pub_thread.thread.is_alive():
    AttributeError: 'NoneType' object has no attribute 'thread'
    [I 09:15:06.715 NotebookApp] Kernel restarted: 85970d11-8ffb-4259-a0f9-29614d194712
    2020-04-28 09:15:12.082666: I ... Successfully opened dynamic library libcudart.so.10.1
    (after the restart, at 11:50:42-11:50:45, the same device discovery and initialization sequence repeats, ending again at the device interconnect matrix)
28 11 50 45 761210 I tensorflow core common runtime gpu gpu device cc 1104 0 n y n n 2020 04 28 11 50 45 761216 I tensorflow core common runtime gpu gpu device cc 1104 1 y n n n 2020 04 28 11 50 45 761220 I tensorflow core common runtime gpu gpu device cc 1104 2 n n n y 2020 04 28 11 50 45 761225 I tensorflow core common runtime gpu gpu device cc 1104 3 n n y n 2020 04 28 11 50 45 761562 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 04 28 11 50 45 762656 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 04 28 11 50 45 763675 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 04 28 11 50 45 764693 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 04 28 11 50 45 765747 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 04 28 11 50 45 766736 I tensorflow core common runtime gpu gpu device cc 1230 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 13969 mb memory physical gpu device 0 name tesla t4 pci bus i d 0000 00 04 0 compute capability 7 5 2020 04 28 11 50 45 767518 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 04 28 11 50 45 768518 I tensorflow core common runtime gpu gpu device cc 1230 create tensorflow device job 
localhost replica 0 task 0 device gpu 1 with 13969 mb memory physical gpu device 1 name tesla t4 pci bus i d 0000 00 05 0 compute capability 7 5 2020 04 28 11 50 45 769142 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 04 28 11 50 45 770076 I tensorflow core common runtime gpu gpu device cc 1230 create tensorflow device job localhost replica 0 task 0 device gpu 2 with 13969 mb memory physical gpu device 2 name tesla t4 pci bus i d 0000 00 06 0 compute capability 7 5 2020 04 28 11 50 45 770664 I tensorflow stream executor cuda cuda gpu executor cc 982 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 04 28 11 50 45 771659 I tensorflow core common runtime gpu gpu device cc 1230 create tensorflow device job localhost replica 0 task 0 device gpu 3 with 13969 mb memory physical gpu device 3 name tesla t4 pci bus i d 0000 00 07 0 compute capability 7 5 2020 04 28 11 50 47 091531 I tensorflow core profiler lib profiler session cc 154 profiler session start 2020 04 28 11 50 47 091683 I tensorflow core profiler internal gpu cupti tracer cc 1372 profiler find 4 gpu 2020 04 28 11 50 47 093991 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcupti so 10 1 2020 04 28 11 50 47 194680 e tensorflow core profiler internal gpu cupti tracer cc 1422 function cupti interface subscribe subscriber cupti callbackfunc apicallback this fail with error cupti error insufficient privilege 2020 04 28 11 50 47 283735 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 302659 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on 
device attribute 2020 04 28 11 50 47 321059 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 341426 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 342187 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 344125 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 345834 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 347581 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 404901 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 424905 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 444686 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 463115 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 463881 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 
465529 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 467173 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 47 468860 w tensorflow core kernel datum capture function cc 458 disable multi device execution for a function that use the experimental int on device attribute 2020 04 28 11 50 51 405795 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2020 04 28 11 50 52 668100 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 output generate with the example on nodebook warn pip be be invoke by an old script wrapper this will fail in a future version of pip please see for advice on fix the underlying issue to avoid this problem you can invoke python with m pip instead of run pip directly 2 2 0 dev20200427 info tensorflow use mirroredstrategy with device job localhost replica 0 task 0 device gpu 0 job localhost replica 0 task 0 device gpu 1 job localhost replica 0 task 0 device gpu 2 job localhost replica 0 task 0 device gpu 3 info tensorflow use mirroredstrategy with device job localhost replica 0 task 0 device gpu 0 job localhost replica 0 task 0 device gpu 1 job localhost replica 0 task 0 device gpu 2 job localhost replica 0 task 0 device gpu 3 number of device 4 epoch 1 12 info tensorflow batch all reduce 6 all reduce with algorithm nccl num pack 1 info tensorflow batch all reduce 6 all reduce with algorithm nccl num pack 1 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to 
job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow batch all reduce 6 all reduce with algorithm nccl num pack 1 info tensorflow batch all reduce 6 all reduce with algorithm nccl num pack 1 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 info tensorflow reduce to job localhost replica 0 task 0 device cpu 0 then broadcast to job localhost replica 0 task 0 device cpu 0 the mirroredstrategy 
keras example I apply import tensorflow dataset as tfds import tensorflow as tf tfds disable progress bar import os print tf version dataset info tfds load name mnist with info true as supervise true mnist train mnist test dataset train dataset test strategy tf distribute mirroredstrategy print number of device format strategy num replicas in sync you can also do info split total num example to get the total number of example in the dataset num train example info split train num example num test example info split test num example buffer size 10000 batch size per replica 64 batch size batch size per replica strategy num replicas in sync def scale image label image tf cast image tf float32 image 255 return image label train dataset mnist train map scale cache shuffle buffer size batch batch size eval dataset mnist test map scale batch batch size with strategy scope model tf keras sequential tf keras layer conv2d 32 3 activation relu input shape 28 28 1 tf keras layer maxpooling2d tf keras layer flatten tf keras layer dense 64 activation relu tf keras layer dense 10 model compile loss tf keras loss sparsecategoricalcrossentropy from logit true optimizer tf keras optimizer adam metric accuracy define the checkpoint directory to store the checkpoint checkpoint dir training checkpoint name of the checkpoint file checkpoint prefix os path join checkpoint dir ckpt epoch function for decay the learning rate you can define any decay function you need def decay epoch if epoch 3 return 1e 3 elif epoch 3 and epoch 7 return 1e 4 else return 1e 5 callback for print the lr at the end of each epoch class printlr tf keras callbacks callback def on epoch end self epoch log none print nlearne rate for epoch be format epoch 1 model optimizer lr numpy callback tf keras callbacks tensorboard log dir log tf keras callbacks modelcheckpoint filepath checkpoint prefix save weight only true tf keras callbacks learningratescheduler decay printlr model fit train dataset epoch 12 callback 
callback and gpu memory be occupy but all gpu s gpu util compute m be 0 mkc choi hbseo g1 tf operator example v1 multi nvidia smi tue apr 28 13 50 50 2020 nvidia smi 440 82 driver version 440 82 cuda version 10 2 gpu name persistence m bus i d disp a volatile uncorr ecc fan temp perf pwr usage cap memory usage gpu util compute m 0 tesla t4 off 00000000 00 04 0 off 0 n a 71c p0 32w 70w 14612mib 15109mib 0 default 1 tesla t4 off 00000000 00 05 0 off 0 n a 41c p0 27w 70w 14612mib 15109mib 0 default 2 tesla t4 off 00000000 00 06 0 off 0 n a 41c p0 26w 70w 14612mib 15109mib 0 default 3 tesla t4 off 00000000 00 07 0 off 0 n a 41c p0 27w 70w 14612mib 15109mib 0 default process gpu memory gpu pid type process name usage 0 24210 c usr bin python3 14601mib 1 24210 c usr bin python3 14601mib 2 24210 c usr bin python3 14601mib 3 24210 c usr bin python3 14601mib when I specify cuda visible device number as below it run but on only one gpu os environ cuda visible device 0 and I try to set tf datum experimental autoshardpolicy off describe in an exist issue here but the result be same code be below import os import tensorflow dataset as tfds import tensorflow as tf strategy tf distribute mirroredstrategy strategy tf distribute mirroredstrategy nccl vs ring print number of device format strategy num replicas in sync buffer size 10000 batch size 64 def make dataset unbatched scale mnist datum from 0 255 to 0 1 def scale image label image tf cast image tf float32 image 255 return image label dataset info tfds load name mnist with info true as supervise true return dataset train map scale cache shuffle buffer size def build and compile cnn model model tf keras sequential tf keras layer conv2d 32 3 activation relu input shape 28 28 1 tf keras layer maxpooling2d tf keras layer flatten tf keras layer dense 64 activation relu tf keras layer dense 10 activation softmax model compile loss tf keras loss sparse categorical crossentropy optimizer tf keras optimizer sgd learn rate 0 001 metric 
accuracy return model global batch size 64 2 with strategy scope train dataset make dataset unbatched batch global batch size repeat option tf datum option option experimental distribute auto shard policy tf datum experimental autoshardpolicy datum train dataset train dataset with option option multi worker model build and compile cnn model multi worker model fit x train dataset epoch 3 step per epoch 5 describe the expect behavior the example run with multiple gpu standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach |
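The examples above compute the global batch size as the per-replica batch size times `strategy.num_replicas_in_sync`, because `MirroredStrategy` consumes the global batch and splits it across replicas. A minimal sketch of that arithmetic in plain Python (no TensorFlow needed; the 60,000-sample count is MNIST's training split, used here for illustration):

```python
import math

batch_size_per_replica = 64
num_replicas_in_sync = 4  # e.g. four T4 GPUs under MirroredStrategy

# MirroredStrategy takes the *global* batch and shards it across replicas.
global_batch_size = batch_size_per_replica * num_replicas_in_sync

# Steps per epoch for a dataset of known size (60,000 MNIST training images).
steps_per_epoch = math.ceil(60_000 / global_batch_size)

print(global_batch_size, steps_per_epoch)  # 256 235
```

Scaling the batch this way keeps the per-GPU workload constant as GPUs are added, which is why the tutorials multiply rather than divide.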
tensorflowtensorflow | docs: multi_worker_with_keras.ipynb shows "init" instead of "__init__" | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.
URL(s) with the issue: please provide a link to the documentation entry. For example:
Description of issue (what needs changing): when rendered, the literal `__init__` is replaced with `init`.
Clear description: `__init__` is a built-in Python method for classes. In the ipynb source code it is correct; however, when rendered it is incorrect. (For example: why should someone use this method? How is it useful?)
Correct links: the link to the source code is correct.
Correct parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined? For example:
Usage example: is there a usage example? See the API guide on how to write testable usage examples.
Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Yes, please see the notebook in the docs. The text reads:
"Note: TF_CONFIG is parsed and TensorFlow's GRPC servers are started at the time MultiWorkerMirroredStrategy.init() is called, so the TF_CONFIG environment variable must be set before a tf.distribute.Strategy instance is created"
vs.
"Note: TF_CONFIG is parsed and TensorFlow's GRPC servers are started at the time MultiWorkerMirroredStrategy.\_\_init\_\_() is called, so the TF_CONFIG environment variable must be set before a tf.distribute.Strategy instance is created"
Notice that `init` is shown where `__init__` should be shown.
Submitting a pull request: are you planning to also submit a pull request to fix the issue? (See the docs contributor guide, docs API guide, and the docs style guide.) No; I didn't know how to escape the underscores, but I believe the fix is, as in this report, to use a backslash before each underscore.
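The suggested fix — a backslash before each underscore so the renderer shows `__init__` literally instead of treating the paired underscores as emphasis — can be sketched in plain Python (the helper name is made up for illustration):

```python
def escape_markdown_emphasis(text: str) -> str:
    # Markdown treats paired underscores as emphasis markers, which is why
    # "__init__" renders as an italic "init". A leading backslash makes
    # each underscore literal.
    return text.replace("_", "\\_")

print(escape_markdown_emphasis("MultiWorkerMirroredStrategy.__init__()"))
# MultiWorkerMirroredStrategy.\_\_init\_\_()
```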
tensorflowtensorflow | UnrecognizedFlagError when using tf.vectorized_map with lbfgs_minimize | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 18.04.2
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy), if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 2.1.0 (v2.1.0-rc2-17-ge5bf8de)
- Python version: 3.7.6
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:

Describe the current behavior: when trying to access the tfp.optimizer.lbfgs_minimize object's parameters during a tf.vectorized_map operation, it throws the error "UnrecognizedFlagError: Unknown command line flag 'f'", as well as "ERROR:tensorflow:Got error while pfor was converting op with name loop_body/PartitionedCall".

Describe the expected behavior: you can simply extract the position of this object during the loop.

Standalone code to reproduce the issue: the issue was originally created with the following code on my laptop with the above-mentioned versions, but it can also be reproduced in this Colab.

Other info/logs: the large error traceback file is attached (tf_vectorized_map_traceback.txt).
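`tf.vectorized_map` is semantically a per-row map whose body pfor tries to auto-vectorize — the conversion step where the error above is raised. A pure-Python sketch of just the semantics (not the parallel implementation, and not the TensorFlow API):

```python
def vectorized_map_reference(fn, elems):
    # Reference semantics of tf.vectorized_map: apply fn to each slice
    # along the leading axis and stack the results. The real op rewrites
    # fn's computation with pfor instead of looping element by element.
    return [fn(e) for e in elems]

print(vectorized_map_reference(lambda x: x * x, [1, 2, 3]))  # [1, 4, 9]
```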
tensorflowtensorflow | Keras model.fit_generator (and model.fit) not working properly | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 18.04.3 LTS
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.2.0-rc3
- Python version: 3.6.9
- CUDA/cuDNN version: Colab
- GPU model and memory: Colab

Describe the current behavior: while using model.fit or model.fit_generator, the sub-iterations within an epoch show an unknown number of iterations, which keeps going even after surpassing the batch count.

Describe the expected behavior: while using model.fit or model.fit_generator, the sub-iterations within an epoch must be definite.

Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab.

Other info/logs (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached):

```shell
WARNING:tensorflow:From ...:1: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
Epoch 1/20
     13/Unknown - 20s 2s/step - loss: 1.0732 - accuracy: 0.2602
```
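A likely explanation for the `13/Unknown` progress bar is that a plain Python generator has no length, so Keras cannot infer how many batches make up an epoch unless `steps_per_epoch` is passed. The arithmetic behind that argument is just a ceiling division (sample counts below are made up for illustration):

```python
import math

num_samples = 2602   # illustrative dataset size
batch_size = 128

# Without this, a generator-fed fit() shows "13/Unknown"-style progress
# and keeps iterating past one pass over the data.
steps_per_epoch = math.ceil(num_samples / batch_size)
print(steps_per_epoch)  # 21
```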
tensorflowtensorflow | shuffle and batch operations result in Keras model not running | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy), if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): source (Google Colab)
- TensorFlow version (use command below): 2.2.0-rc3
- Python version: 3.6.9
- Bazel version (if compiling from source): n/a
- GCC/compiler version (if compiling from source): n/a
- CUDA/cuDNN version: 10.1

I have a dataset of about 70,000 image files. I'm following the tutorial "Load images using tf.data" to load and preprocess them. However, this function is causing problems:

```python
def prepare_for_training(ds, cache=True, shuffle_buffer_size=1000):
    # This is a small dataset; only load it once, and keep it in memory.
    # Use `.cache(filename)` to cache preprocessing work for datasets
    # that don't fit in memory.
    if cache:
        if isinstance(cache, str):
            ds = ds.cache(cache)
        else:
            ds = ds.cache()
    ds = ds.shuffle(buffer_size=shuffle_buffer_size)
    # Repeat forever
    ds = ds.repeat()
    ds = ds.batch(BATCH_SIZE)
    # `prefetch` lets the dataset fetch batches in the background
    # while the model is training.
    ds = ds.prefetch(buffer_size=AUTOTUNE)
    return ds
```

After applying this function as the final processing step, my model would run but not even start on the first epoch. The same applies even to a model consisting of a single Dense unit. Examining the function step by step, I tried doing just the shuffle step or the batch step:

```python
ds = ds.shuffle(buffer_size=shuffle_buffer_size)
ds = ds.batch(BATCH_SIZE)
```

This alone results in the issue described above. The preprocessing I have done before this is the same as in the tutorial. Note: the image data have no labels by design; this might possibly cause the issue.
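`ds.shuffle(buffer_size)` keeps only `buffer_size` elements in memory and samples from that window, so with ~70,000 files the pipeline must fill (and, with `.cache()`, fully materialize) buffers before the first batch appears — one plausible contributor to an epoch that seems never to start. A pure-Python approximation of the buffered-shuffle semantics (illustrative, not TensorFlow's implementation):

```python
import random

def buffered_shuffle(items, buffer_size, rng):
    # Approximates tf.data's shuffle: maintain a fixed-size buffer and
    # emit a uniformly chosen buffered element as each new one arrives.
    buf = []
    for item in items:
        if len(buf) < buffer_size:
            buf.append(item)
        else:
            i = rng.randrange(buffer_size)
            yield buf[i]
            buf[i] = item
    rng.shuffle(buf)       # drain the remainder in random order
    yield from buf

out = list(buffered_shuffle(range(10), buffer_size=3, rng=random.Random(0)))
print(sorted(out) == list(range(10)))  # True: a permutation, locally shuffled
```

Note that with a small buffer relative to the dataset, the output is only locally shuffled — another reason the tutorials size the buffer against the dataset.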
tensorflowtensorflow | New TFLiteConverter does not work with tf.complex64 | Bug | System information:
- OS platform and distribution (e.g. Linux Ubuntu 16.04): Colab (CPU)
- TensorFlow version (use command below): 2.2.0-rc3

Describe the current behavior: the new TFLiteConverter does not work with tf.complex64. Disabling the new converter (`converter.experimental_new_converter = False`) works.

Standalone code to reproduce the issue:

```python
@tf.function
def foo(x, y):
    return x * y

x = tf.constant([1 + 2j, 3 + 4j, 5 + 6j], dtype=tf.complex64)
y = tf.constant([2 + 3j, 4 + 5j, 6 + 7j], dtype=tf.complex64)

concrete_foo = foo.get_concrete_function(x, y)

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_foo])
tflite = converter.convert()
```

Colab example: Thanks!
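Until the converter handles tf.complex64, one common workaround (independent of the disable-the-new-converter flag mentioned above) is to carry the real and imaginary parts as separate float tensors and express complex arithmetic on those pairs. A minimal plain-Python sketch of that decomposition, illustrative only:

```python
def complex_mul(ar, ai, br, bi):
    # (ar + ai*j) * (br + bi*j), carried as separate real/imag floats,
    # as one would when a runtime lacks a complex dtype:
    # real = ar*br - ai*bi, imag = ar*bi + ai*br
    return ar * br - ai * bi, ar * bi + ai * br

# Cross-check against Python's native complex type: (1+2j)*(3+4j) = -5+10j
print(complex_mul(1.0, 2.0, 3.0, 4.0))  # (-5.0, 10.0)
```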
tensorflowtensorflow | Getting a protobuf error when I use the TensorFlow Java API to load a model | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template)

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g. Linux Ubuntu 16.04): macOS 10.14.2
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy), if the issue happens on a mobile device: n/a
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.15.0
- Python version: 3.7

Describe the current behavior: I run the TensorFlow Java API to load a model in a Flink cluster. It works fine the first time, but when I run the job a second time in the cluster, it turns out to be an error like below:

```
libprotobuf ERROR external/protobuf_archive/src/google/protobuf/descriptor_database.cc:58] File already exists in database: tensorflow/core/protobuf/eager_service.proto
libprotobuf FATAL external/protobuf_archive/src/google/protobuf/descriptor.cc:1358] CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
libc++abi.dylib: terminating with uncaught exception of type google::protobuf::FatalException: CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
```

Describe the expected behavior:
Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem.
Other info/logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.
tensorflowtensorflow | ICU dependency needs to be updated to at least 66.1 for C++20 | Bug | TF depends on ICU 64.2. Version 66.1 contains a bug fix needed to compile in C++20 mode. Might as well update to 67.1, though, since it just came out. See:
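The "at least 66.1" requirement is an ordinary minimum-version check on ICU's major-minor release numbers. A small sketch of that comparison in plain Python (the helper is illustrative, not part of any build tooling):

```python
def parse_icu_version(v: str):
    # ICU releases are numbered like "64-2", "66-1", "67-1" (major-minor);
    # comparing the integer tuples gives the right ordering.
    return tuple(int(part) for part in v.replace("-", ".").split("."))

current, required = parse_icu_version("64-2"), parse_icu_version("66-1")
print(current >= required)  # False: the pinned 64.2 predates the C++20 fix
```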
tensorflowtensorflow | mirroredstrategy not work parallel gpu work like serial | Bug | I m work with tf2 0 I find a situation confuse when use mirroredstrategy just the same as the tutorial strategy tf distribute mirroredstrategy create a dataset dataset dataset op dataset tfrecorddataset a 1 tfr a 2 tfr a 3 tfr a 4 tfr distribute that dataset dist dataset strategy experimental distribute dataset dataset iterate over the distribute dataset for x in dist dataset process dataset element strategy experimental run v2 train step args x here s step fn in function train step I make 2 time stamp in the code tf function experimental relax shape true def train step dist input def step fn inputs mel spec pre inp spec length label length label input with tf gradienttape as tape output model mel spec pre inp training true add line below begin time time tme loss loss fn label output spec length label length loss 1 batch size if train metric be not none metric result run metric mel spec label metric train metric metric result name result 1 max len gpu 1 for name result in metric result item gradient tape gradient loss model trainable variable add line below before update gradient end time time time print begin time end time end time begin time optimizer apply gradient zip gradient model trainable variable return loss metric result loss metric result strategy experimental run v2 step fn args dist input mean loss strategy reduce tf distribute reduceop sum loss axis 0 mean metric name strategy reduce tf distribute reduceop sum result axis 0 for name result in metric result item return mean loss mean metric I find that if use 2 gpu say gpu0 and gpu1 the two gpu printing time stamp bg1 ed1 and bg2 ed2 the four param relationship be like this bg1 ed1 bg2 ed2 that be only when gpu0 finish inference gpu1 start inference be the mirroredstrategy not work I comment the tf function be there any relationship when I not comment the tf function the code run into the error python segment fault 
(core dumped). Can someone help? Thanks!
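The reporter's diagnosis — bg1 < ed1 < bg2 < ed2, i.e. the second GPU only starts its step after the first finishes — amounts to checking that the two timestamp intervals do not overlap. A tiny sketch of that check (plain Python; the timestamps are made up):

```python
def intervals_overlap(a, b):
    # Two (begin, end) intervals overlap iff each begins before the other ends.
    return a[0] < b[1] and b[0] < a[1]

serial   = ((0.0, 1.0), (1.2, 2.0))  # bg1 < ed1 < bg2 < ed2: GPUs ran in turn
parallel = ((0.0, 1.0), (0.1, 1.1))  # overlapping steps: GPUs ran together
print(intervals_overlap(*serial), intervals_overlap(*parallel))  # False True
```

If MirroredStrategy were replicating the step properly, the per-replica (begin, end) pairs printed from step_fn should overlap rather than chain.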
tensorflowtensorflow | TypeError: len is not well defined for symbolic Tensors. (transpose:0) Please call `x.shape` rather than `len(x)` for shape information | Bug | I'm trying to make my padding_spectrogram function execute faster, so I added @tf.function. But when I add the @tf.function decorator I'm getting "TypeError: len is not well defined for symbolic Tensors. (transpose:0) Please call `x.shape` rather than `len(x)` for shape information." I'm using Colab, TensorFlow 2.x. If I do not use @tf.function, it works without any error.

```python
@tf.function
def padding_spectrogram(spectrogram, padding_len):
    t = tf.transpose(spectrogram)  # features x timesteps -> timesteps x features
    p = tf.keras.preprocessing.sequence.pad_sequences(
        t, maxlen=padding_len, dtype='float', padding='post', truncating='post')
    return tf.transpose(p)  # back to features x timesteps
```

```python
TypeError: in user code:

    /content/util/helper.py:83 padding_spectrogram  *
        p = tf.keras.preprocessing.sequence.pad_sequences(
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/preprocessing/sequence.py:158 pad_sequences
        padding=padding, truncating=truncating, value=value)
    /usr/local/lib/python3.6/dist-packages/keras_preprocessing/sequence.py:56 pad_sequences
        num_samples = len(sequences)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:754 __len__
        "shape information.".format(self.name))

    TypeError: len is not well defined for symbolic Tensors. (transpose:0) Please call `x.shape` rather than `len(x)` for shape information.
```

Thanks!
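The failure mode can be mimicked without TensorFlow: pad_sequences calls `len()` on its argument, and a symbolic tensor (here a made-up stand-in class) deliberately refuses that call while `x.shape` still works — which is exactly what the error message recommends. An illustrative sketch:

```python
class FakeSymbolicTensor:
    # Stand-in for a symbolic tf.Tensor: shape is known, len() is not.
    def __init__(self, shape):
        self.shape = shape

    def __len__(self):
        raise TypeError(
            "len is not well defined for symbolic Tensors. "
            "Please call `x.shape` rather than `len(x)`.")

t = FakeSymbolicTensor((None, 20))
print(t.shape)          # (None, 20) -- fine

try:
    len(t)              # what pad_sequences does internally
    raised = False
except TypeError:
    raised = True
print(raised)           # True
```

This also suggests why the undecorated version works: eager tensors have a concrete leading dimension, so `len()` succeeds; only inside @tf.function does the tensor become symbolic.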
tensorflowtensorflow | `cd models/research && protoc object_detection/protos/*.proto --python_out=.` — object_detection/protos/anchor_generator_pb2.py: permission denied | Bug | Please go to Stack Overflow for help and support. If you open a GitHub issue, here is our policy: 1. It must be a bug, a feature request, or a significant problem with documentation (for small doc fixes please send a PR instead). 2. The form below must be filled out. 3. It shouldn't be a TensorBoard issue (those go here). Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g. fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g. Linux Ubuntu 16.04):
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy), if the issue happens on a mobile device:
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
- Python version:
- Bazel version (if compiling from source):
- GCC/compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
- Exact command to reproduce:

You can collect some of this information using our environment capture script. You can obtain the TensorFlow version with:

```bash
python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"
```

Describe the problem: describe the problem clearly here. Be sure to convey here why it's a bug in TensorFlow or a feature request.
Source code / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.
tensorflow/tensorflow | MemoryOptimizer produces broken graph with AlreadyExistsError exception while running GRU layer on TensorFlow 2.2.0rc3 | Bug | System information: custom model built using Keras; MacBook Pro 8-core Intel Core i9, macOS Catalina 10.15.4; TensorFlow installed from pip in a virtual environment; TensorFlow v2.2.0-rc2-77-gaad398b5e9 (2.2.0-rc3); Python 3.7.5; running on CPU.

Describe the current behavior: The code snippet listed below outputs multiple "tensorflow/core/framework/op_kernel.cc:1753 OP_REQUIRES failed at variable_ops.cc:104 : Already exists: Resource ..." warnings and finally exits with a tensorflow.python.framework.errors_impl.AlreadyExistsError exception. Note: the code works correctly if the GRU layer size is decreased from 320 to 80. It also works if TensorFlow is downgraded to version 2.0.1. The issue is related to an issue reported in 2018; this issue offers code to reproduce it and occurs on the latest version of TensorFlow.

Describe the expected behavior: The code should work without exceptions.

Standalone code to reproduce the issue:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Flatten, Bidirectional, GRU
    from tensorflow.keras.layers import Conv1D, MaxPooling1D

    x = np.random.rand(1000, 401, 17)
    y = np.random.choice([0, 1], size=(1000, 301))
    model = Sequential()
    model.add(Conv1D(filters=320, kernel_size=26, activation='relu', input_shape=(401, x.shape[2])))
    model.add(MaxPooling1D(pool_size=13, strides=13))
    model.add(Bidirectional(GRU(320, dropout=0.2, recurrent_dropout=0.2, return_sequences=True)))
    model.add(Flatten())
    model.add(Dense(2000, activation='relu'))
    model.add(Dense(301, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    model.summary()
    model.fit(x=x, y=y, epochs=1, verbose=1)

A Google Colab notebook is available here; the error is reproducible.

Other info / logs: The code above generates the following output:

    Model: "sequential"
    _________________________________________________________________
    Layer (type)                  Output Shape              Param #
    =================================================================
    conv1d (Conv1D)               (None, 376, 320)          141760
    max_pooling1d (MaxPooling1D)  (None, 28, 320)           0
    bidirectional (Bidirectional) (None, 28, 640)           1232640
    flatten (Flatten)             (None, 17920)             0
    dense (Dense)                 (None, 2000)              35842000
    dense_1 (Dense)               (None, 301)               602301
    =================================================================
    Total params: 37,818,701
    Trainable params: 37,818,701
    Non-trainable params: 0

    2020-04-26 10:19:57.349570: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at variable_ops.cc:104 : Already exists: Resource __per_step_0/gradient_tape/sequential/bidirectional/backward_gru/while/sequential/bidirectional/backward_gru/while_grad/body/_877/gradients/AddN_8/tmp_var/N10tensorflow19TemporaryVariableOp6TmpVarE
    2020-04-26 10:19:57.363399: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at variable_ops.cc:104 : Already exists: Resource __per_step_0/gradient_tape/sequential/bidirectional/backward_gru/while/.../gradients/AddN_7/tmp_var/N10tensorflow19TemporaryVariableOp6TmpVarE
    2020-04-26 10:19:57.377361: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at variable_ops.cc:104 : Already exists: Resource __per_step_0/gradient_tape/sequential/bidirectional/backward_gru/while/.../gradients/AddN_8/tmp_var/N10tensorflow19TemporaryVariableOp6TmpVarE
    (repeated multiple times)
    2020-04-26 10:19:57.677304: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at variable_ops.cc:104 : Already exists: Resource __per_step_0/gradient_tape/sequential/bidirectional/forward_gru/while/.../body/_577/gradients/AddN_7/tmp_var/N10tensorflow19TemporaryVariableOp6TmpVarE

    Traceback (most recent call last):
      File "alreadyexists_err.py", line 21, in <module>
        model.fit(x=x, y=y, epochs=1, verbose=1)
      File "venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
        return method(self, *args, **kwargs)
      File "venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 851, in fit
        tmp_logs = train_function(iterator)
      File "venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
        result = self._call(*args, **kwds)
      File "venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 644, in _call
        return self._stateless_fn(*args, **kwds)
      File "venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2420, in __call__
        return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
      File "venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1665, in _filtered_call
        self.captured_inputs)
      File "venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat
        ctx, args, cancellation_manager=cancellation_manager)
      File "venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 598, in call
        ctx=ctx)
      File "venv/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
        inputs, attrs, num_outputs)
    tensorflow.python.framework.errors_impl.AlreadyExistsError: Resource __per_step_0/gradient_tape/sequential/bidirectional/backward_gru/while/sequential/bidirectional/backward_gru/while_grad/body/_877/gradients/AddN_8/tmp_var/N10tensorflow19TemporaryVariableOp6TmpVarE
      [[node gradient_tape/sequential/bidirectional/backward_gru/while/sequential/bidirectional/backward_gru/while_grad/body/_877/gradients/AddN_8/tmp_var]] [Op:__inference_train_function_7551]
    Function call stack: train_function
tensorflow/tensorflow | There is no control input between assign and read nodes | Bug | I think there should be some dependency between the assign and read nodes, so that read only executes after assign is done, but through the following toy example this seems not to be the case:

    import tensorflow as tf
    a = tf.get_variable('a', shape=[2, 3])
    print('op_name | control_inputs | inputs | output_0_shape')
    for op in tf.get_default_graph().get_operations():
        print(op.name, op.control_inputs, [inp.op.name for inp in op.inputs], op.outputs[0].shape)

Output (op_name | control_inputs | inputs | output_0_shape):

    a/Initializer/random_uniform/shape [] [] (2,)
    a/Initializer/random_uniform/min [] [] ()
    a/Initializer/random_uniform/max [] [] ()
    a/Initializer/random_uniform/RandomUniform [] ['a/Initializer/random_uniform/shape'] (2, 3)
    a/Initializer/random_uniform/sub [] ['a/Initializer/random_uniform/max', 'a/Initializer/random_uniform/min'] ()
    a/Initializer/random_uniform/mul [] ['a/Initializer/random_uniform/RandomUniform', 'a/Initializer/random_uniform/sub'] (2, 3)
    a/Initializer/random_uniform [] ['a/Initializer/random_uniform/mul', 'a/Initializer/random_uniform/min'] (2, 3)
    a [] [] (2, 3)
    a/Assign [] ['a', 'a/Initializer/random_uniform'] (2, 3)
    a/read [] ['a'] (2, 3)

So the a/Assign and a/read nodes have no dependency and may execute in any order. Is there supposed to be any?
tensorflow/tensorflow | Cannot save TensorFlow Probability distribution PixelCNN with tf.train.Checkpoint | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template)

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 18.04. TensorFlow installed from: source. TensorFlow version: v2.1.0-rc2-17-ge5bf8de 2.1.0. Python version: 3.6.9. CUDA/cuDNN version: 10.1. GPU model and memory: GTX 1070.

Describe the current behavior: Trying to save a model which includes a tfd.PixelCNN gives the traceback:

    Traceback (most recent call last):
      File "test.py", line 16, in <module>
        checkpoint.save(file_prefix='fails_before_here')
      File "/home/equint/GitHub/pyroclast/env/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/util.py", line 1902, in save
        file_path = self.write('%s-%d' % (file_prefix, checkpoint_number))
      File ".../tracking/util.py", line 1832, in write
        output = self._saver.save(file_prefix=file_prefix)
      File ".../tracking/util.py", line 1168, in save
        file_prefix=file_prefix, object_graph_tensor=object_graph_tensor)
      File ".../tracking/util.py", line 1108, in _save_cached_when_graph_building
        object_graph_tensor=object_graph_tensor)
      File ".../tracking/util.py", line 1076, in _gather_saveables
        feed_additions) = self._graph_view.serialize_object_graph()
      File ".../tracking/graph_view.py", line 379, in serialize_object_graph
        trackable_objects, path_to_root = self._breadth_first_traversal()
      File ".../tracking/graph_view.py", line 199, in _breadth_first_traversal
        for name, dependency in self.list_dependencies(current_trackable):
      File ".../tracking/graph_view.py", line 159, in list_dependencies
        return obj._checkpoint_dependencies
      File ".../tracking/data_structures.py", line 509, in _checkpoint_dependencies
        "automatically un-wrapped and subsequently ignored." % (self,))
    ValueError: Unable to save the object ListWrapper([0, 1, 2]) (a list wrapper constructed to track trackable TensorFlow objects). A list element was replaced (__setitem__, __setslice__), deleted (__delitem__, __delslice__), or moved (sort). In order to support restoration on object creation, tracking is exclusively for append-only data structures.

Describe the expected behavior: Shouldn't have a problem saving a distribution using tf.train.Checkpoint.save.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import tensorflow_probability as tfp
    tfd = tfp.distributions

    model = tfd.PixelCNN(
        image_shape=(28, 28, 1),
        conditional_shape=(28, 28, 1),
        num_resnet=1,
        num_hierarchies=2,
        num_filters=32,
        num_logistic_mix=4,
        dropout_p=.3,
    )
    checkpoint = tf.train.Checkpoint(model=model)
    checkpoint.save(file_prefix='fails_before_here')

Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow | K.in_train_phase broken in eager mode when used outside of a Lambda layer | Bug | System information: OS platform and distribution: Google Colab. TensorFlow version: 2.2.0-rc3 (v2.2.0-rc3-0-gaad398b5e9). Python version: 3.

Describe the current behavior: K.in_train_phase always returns the alternative option, both during model.fit and model.predict in Keras. But it does work when you disable eager mode, or when you wrap it in a Lambda layer. This is unexpected because in TF1, and when eager is disabled, the function works as expected.

Describe the expected behavior: I would expect K.in_train_phase to work as described in the documentation.

Standalone code to reproduce the issue:
tensorflow/tensorflow | Passing tf.keras.Model as tf.function argument does not create concrete function | Bug | System information: Have I written custom code: yes. TensorFlow installed from: binary. TensorFlow version: 2.1.0. Python version: 3.7.5.

Describe the current behavior: Passing a tf.keras.Model or tf.keras.Optimizer as an argument into a tf.function does not create a concrete function. I expected that it would, since function tracing works as it should if the model/optimizer is a global variable.

Standalone code to reproduce the issue:

    import tensorflow as tf

    class MyModel(tf.keras.Model):
        def __init__(self):
            super().__init__()

        def call(self, inputs):
            return 2 * inputs

    @tf.function
    def step_model(model, inputs):
        return model(inputs)

    @tf.function
    def step(inputs):
        return model(inputs)

    inputs = tf.convert_to_tensor(1, dtype=tf.float32)
    model = MyModel()

    # This works as expected
    print(f'step: {step(inputs)}')  # 2.0
    print(f'step concrete functions: {step._list_all_concrete_functions_for_serialization()}')
    # This does not: no concrete function is saved
    print(f'step_model: {step_model(model, inputs)}')  # 2.0
    print(f'step_model concrete functions: {step_model._list_all_concrete_functions_for_serialization()}')

Output:

    step: 2.0
    step concrete functions:
    step_model: 2.0
    step_model concrete functions:

It appears that passing a tf.keras.Model as an argument into a tf.function is not supported: as tracing fails in a different use case, this error appears:

    INFO:tensorflow:Unsupported signature for serialization

My use case requires limiting the usage of global variables, since there are several models running simultaneously and they need to be garbage-collected efficiently. How can I pass a model as a function argument into a tf.function?
tensorflow/tensorflow | Gradients for tf.py_function with mixed arguments | Bug | System information: Have I written custom code: yes. OS platform and distribution: Linux Mint 19.3. TensorFlow installed from: conda binary. TensorFlow version: 2.1. Python version: 3.7.

Describe the current behavior: Gradient calculation throws an error if a py_function is used that has both integer and floating-point inputs/outputs.

Describe the expected behavior: Gradients with respect to all integers should be zero/None, and the others should be correctly calculated.

Standalone code to reproduce the issue:

    import tensorflow as tf

    def pf(x, y):
        return x ** 2, y ** 2

    def pyf(x, y):
        return tf.py_function(pf, [x, y], [tf.int32, tf.float32])

    x = tf.constant(5)
    v = tf.Variable(0.5)
    with tf.GradientTape() as tape:
        y, m = pyf(x, v)
        z = tf.cast(y, tf.float32) * m
    print(tape.gradient(z, v))

When calling pf, gradient computation works, but for pyf we first get a warning and then an error. The problematic code seems to be in script_ops.py:

    @ops.RegisterGradient("EagerPyFunc")
    def _EagerPyFuncGrad(op, *dy):
        """Computes the gradient of an EagerPyFunc."""
        token = op.get_attr("token")

        def eagerly_executed_grad(*dy):
            tape, eager_inputs, eager_outputs = tape_cache.pop(compat.as_bytes(token))
            return tape.gradient(eager_outputs, eager_inputs, output_gradients=dy)

        with ops.control_dependencies(op.outputs):
            return _internal_py_func(
                func=eagerly_executed_grad,
                inp=dy,
                Tout=[tensor.dtype for tensor in op.inputs],
                eager=True,
                is_grad_func=True)
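As a sanity check on what the tape should return here: z = cast(x**2) * v**2, so dz/dv = 2 * x**2 * v = 25 at x=5, v=0.5, and the gradient with respect to the integer x should be None/zero. A plain finite-difference estimate confirms the float gradient (a hedged sketch; z and num_grad are made-up helpers, not TensorFlow APIs):

```python
def z(x, v):
    # the same quantity as in the report: cast(x**2) * v**2
    return (x ** 2) * (v ** 2)

def num_grad(f, x, v, eps=1e-6):
    # central finite difference w.r.t. the float argument v only;
    # the integer argument x stays fixed (its gradient should be None/zero)
    return (f(x, v + eps) - f(x, v - eps)) / (2 * eps)

print(num_grad(z, 5, 0.5))  # ~25.0
```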
tensorflow/tensorflow | Wrong description in Digit Classifier TensorFlow Lite example | Bug | URL(s) with the issue: Description of issue: the Digit Classifier example card has the wrong description: "Generate reply suggestions to input conversational chat messages." Screenshot: (image)
tensorflow/tensorflow | ValueError: Could not interpret optimizer identifier (tf.keras) | Bug | System information: I have written a custom callback for a learning-rate scheduler in Keras. Code is run on Google Colaboratory. TensorFlow version: 2.2.0-rc3.

I need to be able to set and get my learning rate and other params in my optimizer, and I need to be able to use the constructor of the optimizer to set its parameters, using the sample code in the Keras documentation.

Issue reproduction steps:
1. Run this code in Google Colab:

    from keras import optimizers

    model = Sequential()
    model.add(Dense(64, kernel_initializer='uniform', input_shape=(10,)))
    model.add(Activation('softmax'))

    sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(loss='mean_squared_error', optimizer=sgd)

2. It throws a deserialization error.
3. Attached is a screenshot of the error.
4. Please help with a resolution/workaround for this issue, as I am working on a critical course assignment which I need to submit soon.

Error: (screenshot, 2020-04-23 22:29:14)

Thanks.
tensorflow/tensorflow | Why are the outputs of my tflite model run on the CPU and GPU of an Android phone not the same? | Bug | System information: Windows 10; Python 3.7; tensorflow-gpu 2.1.0 installed via pip; Android Studio 3.6.2; org.tensorflow:tensorflow-lite-gpu:2.1.0; org.tensorflow:tensorflow-lite:2.1.0.

As the title says, the tflite model I converted gives inconsistent results between the CPU and the GPU of an Android phone. I tried two Android phones with the same problem (SoCs: Snapdragon 660, Snapdragon 845). The result of the model run on the Android CPU is consistent with that on the computer; I think this shows that the model itself is not the problem.

This is the code of my Android Studio project. It is a very simple project, and I use Log.e to view the output. Lines 52-53 of MainActivity.java enable the GpuDelegate; delete them to get the CPU result.

My GPU results:

    2020-04-23 14:24:01.682 9335-9335/com.star.tflite_test1 E/1111: output1: 1.2991362e28
    2020-04-23 14:24:01.682 9335-9335/com.star.tflite_test1 E/1111: output2: Infinity

My CPU results:

    2020-04-23 14:25:59.974 10058-10058/com.star.tflite_test1 E/1111: output1: 378560.0
    2020-04-23 14:25:59.974 10058-10058/com.star.tflite_test1 E/1111: output2: 6.6762416e10

It can be seen that there are obvious differences. Below is my model generation code:

    import tensorflow as tf

    class TestModel(tf.keras.Model):
        def __init__(self):
            super(TestModel, self).__init__()
            self.conv1 = tf.keras.layers.Conv2D(filters=40, kernel_size=3, padding='same', kernel_initializer=tf.ones)
            self.conv2 = tf.keras.layers.Conv2D(filters=56, kernel_size=3, padding='same', kernel_initializer=tf.ones)
            self.conv3 = tf.keras.layers.Conv2D(filters=98, kernel_size=3, padding='same', kernel_initializer=tf.ones)
            self.conv4 = tf.keras.layers.Conv2D(filters=33, kernel_size=3, padding='same', kernel_initializer=tf.ones)
            self.conv5 = tf.keras.layers.Conv2D(filters=14, kernel_size=3, padding='same', kernel_initializer=tf.ones)

        @tf.function
        def call(self, inputs):
            output1 = self.conv1(inputs)
            output1 = self.conv2(output1)
            output_temp = output1
            output1 = self.conv4(output1)
            output2 = self.conv3(output1)
            output2 = tf.concat([output2, output_temp], axis=1)
            output2 = self.conv5(output2)
            return output1, output2

    model = TestModel()
    test_input = tf.ones([1, 6, 6, 1])
    tf.keras.backend.set_learning_phase(False)
    test_output1 = model(test_input)
    for output in test_output1:
        print(output)
    model._set_inputs(inputs=test_input)
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    open('save6/converted_model.tflite', 'wb').write(tflite_model)

Can anyone help me see what went wrong? Thank you very much.
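One plausible contributor to CPU/GPU divergence in reports like the one above (independent of any converter bug) is that float32 accumulation is order-sensitive, and GPU kernels sum in a different order than CPU kernels; with ones-initialized convolutions the activations grow enormous, which amplifies such rounding gaps. A minimal NumPy illustration of the rounding effect only (an assumption about the mechanism, not a reproduction of the TFLite behavior):

```python
import numpy as np

big = np.float32(1e8)
ones = np.ones(1000, dtype=np.float32)

# Add the small values after the big one: each 1.0 is below half the
# float32 spacing at 1e8 (which is 8.0), so every addition rounds away.
a = big
for v in ones:
    a = np.float32(a + v)

# Accumulate the small values first, then add the big one.
b = np.float32(ones.sum(dtype=np.float32) + big)

print(a, b)  # same inputs, different summation order, different result
```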
tensorflow/tensorflow | TF 2.1.0 SavedModel input_fn from TF 1.15.x | Bug | TensorFlow installed from: binary. TensorFlow version: 2.1.0. Python version: 3.7.3.

    import numpy as np
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()
    print(tf.__version__)

    # function we wrote
    @tf.function
    def tile_nd_ragged2(a, b):
        # Need a sentinel, otherwise it's hard to give it the initial shape we
        # need. We'll drop the sentinel at the end.
        acc = tf.ragged.constant([[[]]], dtype=a.dtype)
        print("acc is", acc)

        # Work one row at a time: for i1 in tf.range(len(a.nested_row_lengths()[0]))
        # (should be able to write "for a1, b1 in zip(a, b)" soon).
        def outer_loop_test(i1, acc):
            return i1 < len(a.nested_row_lengths()[0])

        def outer_loop_body(i1, acc):
            a1 = a[i1]
            b1 = b[i1]
            # If the components have variable length we can't use a TensorArray
            # anymore, so use a RaggedTensor instead.
            acc1 = tf.ragged.constant([[]], dtype=a.dtype)

            def loop_test(i2, acc1):
                shape = None
                if isinstance(a1, tf.RaggedTensor):
                    print("ragged")
                    print(a1)
                    shape = tf.shape(a1.nested_row_lengths()[0])[0]
                    print(shape)
                else:
                    print("tensor")
                    print(a1)
                    shape = tf.shape(a1)[0]
                    print(tf.shape(a1))
                    print(shape)
                return i2 < shape

            def loop_body(i2, acc1):
                print("a1", i2)
                a2 = a1[i2]
                b2 = b1[i2]
                for _ in range(len(b2)):
                    acc1 = tf.concat([acc1, tf.expand_dims(a2, 0)], axis=0)
                return i2 + 1, acc1

            _, acc1 = tf.while_loop(
                loop_test, loop_body, (0, acc1),
                shape_invariants=(tf.TensorShape([]), tf.TensorShape([None, None])))
            acc1 = acc1[1:]  # drop the sentinel
            # add the row to the final result
            acc = tf.concat([acc, tf.expand_dims(acc1, 0)], axis=0)
            return i1 + 1, acc

        _, acc = tf.while_loop(
            outer_loop_test, outer_loop_body, (0, acc),
            shape_invariants=(tf.TensorShape([]), tf.TensorShape([None, None, None])))
        acc = acc[1:]  # drop the sentinel
        return acc

    # export input fn
    def export_input_fn():
        serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None], name='text')
        s1split = tf.strings.split(serialized_tf_example)  # result type: RaggedTensor
        s1split = tf.strings.split(s1split, sep=',')       # result type: RaggedTensor
        result = tile_nd_ragged2(s1split, s1split)
        result_tf = result.to_tensor()  # [[1, 0, 0], ...]
        result_int = tf.strings.to_number(result_tf, out_type=tf.int32)
        features = {'f1': result_int}  # this will be tf.Tensor([[1]], shape=(1, 1), dtype=int32)
        receiver_tensors = {'text': serialized_tf_example}
        print(receiver_tensors)
        return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

    # training
    x_feature = tf.feature_column.numeric_column('f1')
    train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
        x={'f1': np.array([1., 2., 3., 4.])},  # input feature
        y=np.array([1.5, 3.5, 5.5, 7.5]),      # true labels
        batch_size=1, num_epochs=1, shuffle=True)
    regressor = tf.estimator.LinearRegressor(feature_columns=[x_feature])
    regressor.train(input_fn=train_input_fn, steps=10)

    samples = np.array([1.])
    predict_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
        x={'f1': samples}, num_epochs=1, shuffle=False)
    predictions = list(regressor.predict(input_fn=predict_input_fn))
    print('#############')
    print(predictions)
    print('training finished')
    regressor.export_saved_model('model/', export_input_fn, as_text=False)

I have the above code, originally written in TF 1.15.0. Currently I am trying to port it to TF 2.1.0: I am basically trying to generate a SavedModel using customized feature transformations, as above. When I run the above code it works well, but that is due to tf.disable_v2_behavior(). I am currently trying to comment out `import tensorflow.compat.v1 as tf` / `tf.disable_v2_behavior()` and use `import tensorflow as tf` to convert to TF 2.1.0. Now TF 2.1.0 has no tf.placeholder, so instead of `serialized_tf_example = tf.placeholder(dtype=tf.string, shape=[None], name='text')` I tried `serialized_tf_example = tf.Variable(tf.zeros(None, dtype=tf.string), name='text')` and get the error `TypeError: Expected int32, got None of type 'NoneType' instead`. This might not be the right way to do it, but all I want is to get a SavedModel that has the feature processing inside the SavedModel in TF 2.1.0. Any help will be appreciated.
tensorflow/tensorflow | Documentation update | Bug | `ksize` should be `sizes` on L4806. Also, on lines 4805 and 4835 the calls need updating to `tf.image.extract_patches` (L4805, L4835).
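For context on the API being documented: tf.image.extract_patches (with its sizes/strides arguments) behaves like a sliding window over the image. A minimal single-channel NumPy sketch with VALID-style padding (the function name and output layout here are illustrative, not the TF implementation):

```python
import numpy as np

def extract_patches_2d(img, size, stride):
    # Slide a size x size window over a 2-D array with the given stride
    # (VALID padding: windows that would fall off the edge are dropped).
    h, w = img.shape
    patches = []
    for i in range(0, h - size + 1, stride):
        row = []
        for j in range(0, w - size + 1, stride):
            row.append(img[i:i + size, j:j + size].ravel())
        patches.append(row)
    return np.array(patches)  # shape: (out_h, out_w, size * size)

img = np.arange(16).reshape(4, 4)
print(extract_patches_2d(img, size=2, stride=2).shape)  # (2, 2, 4)
```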
tensorflow/tensorflow | Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST, EXPAND_DIMS, FILL, FULLY_CONNECTED, GATHER, LOGISTIC, MUL, NOT_EQUAL, PACK, RESHAPE, SHAPE, SOFTMAX, SPLIT, STRIDED_SLICE, TANH, TRANSPOSE, ZEROS_LIKE. Here is a list of operators for which you will need custom implementations: TensorListFromTensor, TensorListReserve, TensorListStack, While. | Bug | System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): / TensorFlow installed from (source or binary): / TensorFlow version (or GitHub SHA if from source):

Provide the text output from tflite_convert: (copy and paste here)

Standalone code to reproduce the issue: Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook. Also, please include a link to a GraphDef or the model if possible.

Any other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow | Verbose output between epochs in TF version 2.2.0-rc3 | Bug | Started to receive this today when Colab upgraded to 2.2.0-rc3; yesterday I trained and the TF version was 2.2.0-rc2. The model I'm using was created using tf.keras.

    Epoch 1/2
    2020-04-22 18:49:48.629597: I tensorflow/core/profiler/lib/profiler_session.cc:159] Profiler session started.
    1/2 [==============>...............] - ETA: 0s - loss: 0.8884 - categorical_accuracy: 0.7188
    2020-04-22 18:49:48.646518: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1479] CUPTI activity buffer flushed
    2020-04-22 18:49:48.646786: I tensorflow/core/profiler/internal/gpu/device_tracer.cc:216] GpuTracer has collected 133 callback api events and 133 activity events
    2020-04-22 18:49:48.663147: I tensorflow/core/profiler/rpc/client/save_profile.cc:168] Creating directory: /content/drive/My Drive/Colab Notebooks/model_log_dir_new1/logs_board/train/plugins/profile/2020_04_22_18_49_48
    2020-04-22 18:49:48.670211: I tensorflow/core/profiler/rpc/client/save_profile.cc:174] Dumped gzipped tool data for trace.json.gz to .../plugins/profile/2020_04_22_18_49_48/ea98f4633ef6.trace.json.gz
    2020-04-22 18:49:48.672098: I tensorflow/core/profiler/utils/event_span.cc:288] Generation of step events took 0.026 ms
    2020-04-22 18:49:48.689016: I tensorflow/python/profiler/internal/profiler_wrapper.cc:87] Creating directory: /content/drive/My Drive/Colab Notebooks/model_log_dir_new1/logs_board/train/plugins/profile/2020_04_22_18_49_48
    Dumped tool data for overview_page.pb to .../2020_04_22_18_49_48/ea98f4633ef6.overview_page.pb
    Dumped tool data for input_pipeline.pb to .../2020_04_22_18_49_48/ea98f4633ef6.input_pipeline.pb
    Dumped tool data for tensorflow_stats.pb to .../2020_04_22_18_49_48/ea98f4633ef6.tensorflow_stats.pb
    Dumped tool data for kernel_stats.pb to .../2020_04_22_18_49_48/ea98f4633ef6.kernel_stats.pb
    2/2 [==============================] - 1s 306ms/step - loss: 0.9273 - categorical_accuracy: 0.6797 - val_loss: 0.7531 - val_categorical_accuracy: 0.7508
    Epoch 2/2
    2/2 [==============================] - 1s 253ms/step - loss: 0.8796 - categorical_accuracy: 0.7188 - val_loss: 0.6971 - val_categorical_accuracy: 0.7675

Between epochs I'm getting all this mumbo jumbo; previous versions never resulted in this.
tensorflow/tensorflow | tf.data.Dataset.from_tensor_slices: ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type list) — worked on 2.0.0-beta1 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): OS platform and distribution: Windows 10, Jupyter Lab notebook. TensorFlow installed from: binary. TensorFlow version: 2.2.0-rc3. Python version: 3.8.1. CUDA/cuDNN version: no idea. GPU model and memory: GeForce GTX 960M.

Describe the current behavior: I get an error when calling tf.data.Dataset.from_tensor_slices: "ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type list)". See the attachment: answer tensorflow pad_sequences feature_columns DenseFeatures.pdf

Describe the expected behavior: I tried to use this example. It looks like it was run on 2.0.0-beta1, but no more in the current version. You can use this notebook to reproduce the case.

Standalone code to reproduce the issue: You need to adapt the path to the CSV file, which will also be available in the repository.

Other info / logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
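The "Unsupported object type list" error typically means rows of unequal length ended up in an object-dtype NumPy array, which from_tensor_slices cannot convert; padding the rows to a rectangle first (or using tf.ragged.constant) avoids it. A minimal sketch of the failure mode and a workaround, in plain NumPy (variable names are illustrative):

```python
import numpy as np

ragged = [[1, 2, 3], [4, 5]]          # rows of unequal length
arr = np.array(ragged, dtype=object)  # object dtype -> from_tensor_slices cannot convert this
assert arr.dtype == object

# Workaround: pad every row to the same length before building the dataset.
maxlen = max(len(r) for r in ragged)
rect = np.array([r + [0] * (maxlen - len(r)) for r in ragged], dtype=np.int64)
print(rect.shape)  # (2, 3)
```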
tensorflow/tensorflow | Blank output when using CTC loss on TensorFlow 2 | Bug | Hello, I'm trying to use TensorFlow's tf.nn.ctc_loss for a speech recognition problem, but it seems it's causing the network to learn that the best way to reduce the loss is to output blanks. I've tried other implementations (like this and this) but they have the same problem. Here is the gist for my own implementation, and here is the link to my Google Drive folder with the files used. I'm using Google Colab's high-RAM runtime with GPU and TensorFlow version 2.2.0-rc3.

Also, for some reason I get this error: `ValueError: Dimension must be 2 but is 3 for 'transpose' (op: 'Transpose') [T=DT_FLOAT, Tperm=DT_INT32] (model_52/Placeholder, transpose/perm) with input shapes: [1200,29], [3]` when trying to use @tf.function on the train_step method from the CTC_SR class, when using the encoder/decoder classes but not when using the actual Keras layers. When using @tf.function with Keras layers, though, training takes waaaaaay longer. Why is that?
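For background on the "all blanks" failure mode above: CTC decoding merges adjacent repeats and then removes blanks, so a network that emits only blanks decodes to an empty transcript while still achieving a non-trivial loss early in training. A minimal greedy best-path collapse in plain Python (illustrative only; not tf.nn.ctc_greedy_decoder itself):

```python
def ctc_greedy_collapse(path, blank=0):
    # Collapse a best-path label sequence: merge adjacent repeats, drop blanks.
    out = []
    prev = None
    for p in path:
        if p != prev and p != blank:
            out.append(p)
        prev = p
    return out

print(ctc_greedy_collapse([0, 1, 1, 0, 2, 2, 2, 0, 1]))  # [1, 2, 1]
print(ctc_greedy_collapse([0, 0, 0, 0]))                 # [] -- an all-blank path decodes to nothing
```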
tensorflow/tensorflow | RNN TFLiteConverter: "input tensor contains unknown dimensions" failure when coupled with LSTM | Bug | System information: OS platform and distribution: Linux 4.19.104 x86_64 with Ubuntu 18.04 Bionic (Google Colab's default environment). TensorFlow installed from: pip install tf-nightly. TensorFlow version: 2.2.0-dev20200421.

Command used to run the converter, or code if you're using the Python API (if possible, please share a link to Colab/Jupyter/any notebook):

    import tensorflow as tf
    from tensorflow.lite.python import lite
    from tensorflow.python import keras
    import numpy as np

    input_a = keras.layers.Input(shape=(3, 3), name='input_a')
    interm_b = tf.keras.layers.LSTM(4, name='interm_1')(input_a)
    output_c = keras.layers.Dense(1, name='dense_1')(interm_b)
    model = tf.keras.models.Model(inputs=[input_a], outputs=[output_c])
    model.compile(optimizer='sgd', loss='mean_squared_error')
    model.summary()

    batch_size = 10
    sample_input = np.ones((batch_size, 3, 3), dtype=np.float32)
    expected_value = model.predict([sample_input])

    converter = lite.TFLiteConverterV2.from_keras_model(model)
    converter.experimental_new_converter = True
    with open('model.tflite', 'wb') as f:
        f.write(converter.convert())

    interpreter = lite.Interpreter(model_path='model.tflite')
    print(interpreter.get_input_details())
    interpreter.resize_tensor_input(0, [batch_size, 3, 3])
    interpreter.allocate_tensors()
    interpreter.set_tensor(0, sample_input)
    interpreter.invoke()
    interpreter.get_tensor(interpreter.get_output_details()[0]['index'])

The output from the converter invocation:

    Model: "model"
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #
    =================================================================
    input_a (InputLayer)         [(None, 3, 3)]            0
    interm_1 (LSTM)              (None, 4)                 128
    dense_1 (Dense)              (None, 1)                 5
    =================================================================
    Total params: 133
    Trainable params: 133
    Non-trainable params: 0

    [{'name': 'input_a', 'index': 0, 'shape': array([1, 3, 3], dtype=int32), 'shape_signature': array([-1, 3, 3], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]

    RuntimeError                              Traceback (most recent call last)
    <ipython-input> in <module>()
         27 interpreter.allocate_tensors()
         28 interpreter.set_tensor(0, sample_input)
    ---> 29 interpreter.invoke()
         30 interpreter.get_tensor(interpreter.get_output_details()[0]['index'])

    /usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter.py in invoke(self)
        512
        513     self._ensure_safe()
    --> 514     self._interpreter.Invoke()
        515
        516   def reset_all_variables(self):

    RuntimeError: tensorflow/lite/kernels/concatenation.cc:74 t->dims->data[d] != t0->dims->data[d] (10 != 1) Node number 50 (CONCATENATION) failed to prepare. Node number 10 (WHILE) failed to invoke.

Failure details: The conversion is successful, but the generated model cannot be resized to a variable batch size ("input tensor contains unknown dimensions" failure when coupled with LSTM). The same script works just fine if one removes the creation of interm_b and passes input_a as the input used to generate output_c.
tensorflow/tensorflow | tf.executing_eagerly() returns False in TensorFlow 2 without using tf.function | Bug |
System information: Have I written custom code: yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): macOS Catalina. TensorFlow installed from: binary. TensorFlow version: 2.1.0. Python version: 3.7.5.
Describe the current behavior: the statement print(tf.executing_eagerly()) prints False inside a function of a custom layer while building the model, so before compile is called. None of those functions has the @tf.function decorator. The documentation says that the only times the statement above can produce False are when we are either: using tf.function (which is not the case), executing inside a transformation function for tf.data (which is not the case), or tf.compat.v1.disable_eager_execution() has been called (which is not the case).
Describe the expected behavior: the statement above should return True. See the related Stack Overflow question.
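A likely explanation (my assumption, not confirmed in the report) is that Keras traces a layer's call() into a graph function while building the model, and tf.executing_eagerly() reports the tracing context rather than the outer eager context. The toy decorator below is a plain-Python sketch of that mechanism only; executing_eagerly, trace, and layer_call here are all hypothetical names, not TensorFlow APIs.

```python
# Toy illustration (not the real TF implementation): a "tracing" decorator
# that flips a global flag while it captures a function, the way a graph
# tracer runs a layer's call() during model construction.
_EAGER = True

def executing_eagerly():
    return _EAGER

def trace(fn):
    """Run fn once with the eager flag off, as a graph tracer would."""
    def traced(*args):
        global _EAGER
        _EAGER = False
        try:
            return fn(*args)
        finally:
            _EAGER = True
    return traced

def layer_call(x):
    # Inside a traced call the flag reports non-eager execution, mirroring
    # tf.executing_eagerly() returning False while the model is being built.
    return x * 2, executing_eagerly()

assert executing_eagerly() is True        # eager at the top level
result, eager_inside = trace(layer_call)(3)
assert result == 6 and eager_inside is False
assert executing_eagerly() is True        # eager again after tracing
```

The point of the sketch: nothing in layer_call itself opts out of eager mode; the context it is invoked from does, which is why the flag can read False even without an explicit @tf.function.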
tensorflow/tensorflow | Problem with reading and getting batches from a 2D-array TFRecord dataset | Bug |
URL(s) with the issue: TFRecord files using tf.data.
Description of issue (what needs changing): problem with reading and getting batches from a 2D-array TFRecord dataset.
Clear description: Hello, I use the TensorFlow 2.0 version. I have some problems with reading a TFRecord file when getting batches. First, this is my write_tfrecord.py file:

    import tensorflow as tf
    import os
    from glob import glob
    import numpy as np

    def serialize_examples(batch, list1, list2, filename='train_set.tfrecord'):
        writer = tf.io.TFRecordWriter(filename)
        for i in range(batch):
            feature1 = np.load(list1[i])
            feature2 = np.load(list2[i])
            print('feature1 shape: {}, feature2 shape: {}'.format(feature1.shape, feature2.shape))
            feature_input = tf.train.Feature(float_list=tf.train.FloatList(value=feature1.flatten()))
            feature_target = tf.train.Feature(float_list=tf.train.FloatList(value=feature2.flatten()))
            features = tf.train.Features(feature={'input': feature_input, 'target': feature_target})
            example = tf.train.Example(features=features)
            serialized = example.SerializeToString()
            writer.write(serialized)
            print('{}th input {} / target {} finished'.format(i, list1[i], list2[i]))

    list_inp = sorted(glob('input_2d_magnitude/*'))
    list_tar = sorted(glob('target_2d_magnitude/*'))
    print(len(list_inp))
    serialize_examples(len(list_inp), list_inp, list_tar)

My input and target shapes are 2D arrays (the material of the dataset is spectrograms), therefore my TFRecord file includes two features shaped like (number of examples, x, y). About 100,000 examples were successfully saved to the TFRecord file, and I have a problem when I read it. This is my code (read_tfrecord.py):

    import tensorflow as tf
    import os
    import numpy as np

    SHUFFLE_BUFFER_SIZE = 50000
    BATCH_SIZE = 10
    record_file = 'data2/dataset_tfrecord/train_set.tfrecord'

    raw_dataset = tf.data.TFRecordDataset(record_file)
    print('raw_dataset:', raw_dataset)
    raw_dataset = raw_dataset.repeat()
    print('repeat:', raw_dataset)
    raw_dataset = raw_dataset.shuffle(SHUFFLE_BUFFER_SIZE)
    print('shuffle:', raw_dataset)
    raw_dataset = raw_dataset.batch(BATCH_SIZE, drop_remainder=True)
    print('batch:', raw_dataset)

    raw_example = next(iter(raw_dataset))
    parsed = tf.train.Example.FromString(raw_example.numpy())
    # read_tfrecord.py:25: RuntimeWarning: Unexpected end-group tag: Not all data was converted
    print('parsed:', parsed)
    inputs = parsed.features.feature['input'].float_list.value
    print('input:', inputs)
    targets = parsed.features.feature['target'].float_list.value
    print('target:', targets)

Here are the results from the code: the raw_dataset / repeat / shuffle / batch prints look fine, but the FromString call emits

    read_tfrecord.py:25: RuntimeWarning: Unexpected end-group tag: Not all data was converted

and the printed parsed / input / target values come out empty. As a result, I wonder how I can get batches from the TFRecord file to train on. Could you give advice? Thank you very much.
Usage example (maybe):

    raw_dataset = tf.data.TFRecordDataset(record_file)
    raw_dataset = raw_dataset.repeat()
    raw_dataset = raw_dataset.shuffle(SHUFFLE_BUFFER_SIZE)
    raw_dataset = raw_dataset.batch(BATCH_SIZE, drop_remainder=True)
    raw_example = next(iter(raw_dataset))
    parsed = tf.train.Example.FromString(raw_example.numpy())
    inputs = parsed.features.feature['input'].float_list.value
    targets = parsed.features.feature['target'].float_list.value
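For what it's worth, the "Not all data was converted" warning is characteristic of handing a proto parser bytes that are not exactly one serialized record; in the report above, FromString receives a batched element (a stack of 10 serialized strings) rather than a single Example. The sketch below uses a made-up length-prefixed format in plain Python (not the real protobuf wire format or TFRecord framing) purely to illustrate why parsing a whole batch as if it were one record fails, while per-element parsing works.

```python
import struct

# Toy stand-ins: each "record" is a length-prefixed payload, loosely
# analogous to a serialized Example. serialize/parse_one are hypothetical.
def serialize(values):
    payload = ','.join(str(v) for v in values).encode()
    return struct.pack('<I', len(payload)) + payload

def parse_one(record):
    (length,) = struct.unpack('<I', record[:4])
    payload = record[4:4 + length]
    if len(payload) != length or record[4 + length:]:
        # Trailing bytes: the analogue of "Not all data was converted".
        raise ValueError('Not all data was converted')
    return [float(v) for v in payload.decode().split(',')]

batch = [serialize([1.0, 2.0]), serialize([3.0, 4.0])]

# Parsing the whole batch as if it were a single record fails:
try:
    parse_one(b''.join(batch))
except ValueError as exc:
    print(exc)

# Parsing per element - the moral equivalent of mapping a parse function
# over the dataset elements instead of over the batched blob - works:
parsed = [parse_one(r) for r in batch]
assert parsed == [[1.0, 2.0], [3.0, 4.0]]
```

In TensorFlow terms the usual pattern is to parse each serialized string (e.g. with a map over the dataset) rather than calling Example.FromString on a batched tensor.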
tensorflow/tensorflow | tf.name_scope has no effect when used with tf.cond and AutoGraph | Bug |
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): macOS 10.15.5 beta. Mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.1.0. Python version: 3.6.8. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A.
Describe the current behavior: using tf.summary.scalar in a method that is called in tf.cond logs the scalar without the name scope; the result is different from when eager execution is used.
Describe the expected behavior: tf.name_scope should be used.
Standalone code to reproduce the issue — eager execution:

    import tensorflow as tf

    def test_summaries():
        with tf.name_scope('myscope') as scope:
            mynum = tf.convert_to_tensor(43.9, name=scope)

            def log_mynum():
                tf.summary.scalar('mynum', data=mynum)

            tf.cond(tf.math.equal(mynum, 43.9),
                    true_fn=log_mynum,
                    false_fn=lambda: None,
                    name='tb_mynum')

    with tf.summary.create_file_writer('logs').as_default():
        tf.summary.experimental.set_step(0)
        test_summaries()  # eager

AutoGraph:

    import tensorflow as tf

    @tf.function
    def test_summaries():
        with tf.name_scope('myscope') as scope:
            mynum = tf.convert_to_tensor(43.9, name=scope)

            def log_mynum():
                tf.summary.scalar('mynum', data=mynum)

            tf.cond(tf.math.equal(mynum, 43.9),
                    true_fn=log_mynum,
                    false_fn=lambda: None,
                    name='tb_mynum')

    with tf.summary.create_file_writer('logs').as_default():
        tf.summary.experimental.set_step(0)
        test_summaries()  # autograph

Other info / logs: N/A.
tensorflow/tensorflow | TF 2.2.0rc regression: the predict/train/test_on_batch traced functions with fixed batch size (#34907) | Bug | See #34907 for the description. Even though the referenced issue was fixed for 2.1, it is present again in TF 2.2.0rc3.
tensorflow/tensorflow | Confusing use of "validation set" in beginner example | Bug |
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.
URL(s) with the issue:
Description of issue (what needs changing): the tutorial makes use of model.evaluate, and the documentation says that this is usually done on a validation set. Everything else I have read (the glossary, the docs for model.fit, including its validation-set parameters) points to this relating to a test set, since it occurs after the training phase and the parameters passed are x_test and y_test. The confusion is unhelpful to beginners. Change from "validation set" to "test set".
Correct links: N/A. Parameters defined: N/A. Returns defined: N/A. Raises listed and defined: N/A. Usage example: N/A. Request visuals, if applicable: N/A.
Submit a pull request? No — I'm a beginner, so I don't want to do anything lest I create more confusion.
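To make the terminology concrete, here is a tiny plain-Python split of my own (an illustrative example, not code from the tutorial): the split consulted during training for tuning is the validation set, while the split evaluated once at the end — the role x_test/y_test play when passed to model.evaluate in the tutorial — is the test set.

```python
import random

# Hypothetical 70/15/15 split of 100 example indices into the three sets
# whose names the tutorial conflates.
random.seed(0)
indices = list(range(100))
random.shuffle(indices)
train_idx = indices[:70]    # fit on this
val_idx = indices[70:85]    # monitor during fit: the *validation* set
test_idx = indices[85:]     # evaluate once after training: the *test* set

assert len(train_idx) + len(val_idx) + len(test_idx) == 100
# The held-out sets never overlap the training data.
assert set(train_idx).isdisjoint(test_idx)
assert set(train_idx).isdisjoint(val_idx)
```

Under this vocabulary, an evaluate call fed x_test/y_test after training is unambiguously a test-set evaluation, which is the wording change the issue requests.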
tensorflow/tensorflow | TF 1.15 GPU version is not working as intended | Bug |
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, I wrote custom code. OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 18.04. Mobile device: run on a server. TensorFlow installed from (source or binary): pip. TensorFlow version (use command below): 1.15. Python version: 3.6.9. Bazel version (if compiling from source): none. GCC/compiler version (if compiling from source): none. CUDA/cuDNN version: 10.0 / 7. GPU model and memory: GTX 1060, 6 GB.
Describe the current behavior: I'm trying to build a capsule network using TF 1.15 by referring to a tutorial. The source code seems to work well on the CPU, but if the GPU is used, the model does not seem to train. I turned the GPU on/off by modifying LD_LIBRARY_PATH. [screenshot] [screenshot]
Describe the expected behavior: both the CPU and GPU versions of the model need to train.
Standalone code to reproduce the issue:

    import numpy as np
    import tensorflow as tf

    caps1_n = 32
    caps1_dims = 8
    caps2_n = 10
    caps2_dims = 16
    exp_caps1_n = caps1_n * 6 * 6  # 1152 primary capsules
    m_plus = 0.9
    m_minus = 0.1
    lambda_ = 0.5

    conv1_params = {'filters': 256, 'kernel_size': 9, 'strides': 1,
                    'padding': 'valid', 'activation': tf.nn.relu}
    conv2_params = {'filters': caps1_n * caps1_dims,  # 256 convolutional filters
                    'kernel_size': 9, 'strides': 2,
                    'padding': 'valid', 'activation': tf.nn.relu}

    def squash(s, axis=-1, epsilon=1e-7):
        squared_norm = tf.reduce_sum(tf.square(s), axis=axis, keepdims=True)
        safe_norm = tf.sqrt(squared_norm + epsilon)
        squash_factor = squared_norm / (1. + squared_norm)
        unit_vector = s / safe_norm
        return squash_factor * unit_vector

    def safe_norm(s, axis=-1, epsilon=1e-7, keepdims=False):
        squared_norm = tf.reduce_sum(tf.square(s), axis=axis, keepdims=keepdims)
        return tf.sqrt(squared_norm + epsilon)

    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets('data')

    tf.reset_default_graph()
    np.random.seed(42)
    tf.set_random_seed(42)

    # Placeholders
    X = tf.placeholder(shape=[None, 28, 28, 1], dtype=tf.float32)
    y = tf.placeholder(shape=[None], dtype=tf.int64)
    batch_size = tf.shape(X)[0]

    # Capsuleization layer
    conv1 = tf.layers.conv2d(X, **conv1_params)
    conv2 = tf.layers.conv2d(conv1, **conv2_params)
    caps1_raw = tf.reshape(conv2, [-1, exp_caps1_n, caps1_dims])
    caps1_output = squash(caps1_raw)

    init_sigma = 0.1
    W = tf.Variable(tf.random_normal(
        shape=(1, exp_caps1_n, caps2_n, caps2_dims, caps1_dims),
        stddev=init_sigma, dtype=tf.float32))
    W_tiled = tf.tile(W, [batch_size, 1, 1, 1, 1])
    caps1_output_expanded = tf.expand_dims(caps1_output, -1)
    caps1_output_tile = tf.expand_dims(caps1_output_expanded, 2)
    caps1_output_tiled = tf.tile(caps1_output_tile, [1, 1, caps2_n, 1, 1])
    u_hat = tf.matmul(W_tiled, caps1_output_tiled)

    # Dynamic routing
    raw_weights = tf.zeros([batch_size, exp_caps1_n, caps2_n, 1, 1], dtype=np.float32)
    # Round 1, line 4
    routing_weights = tf.nn.softmax(raw_weights, dim=2)
    # Round 1, line 5
    weighted_predictions = tf.multiply(routing_weights, u_hat)
    weighted_sum = tf.reduce_sum(weighted_predictions, axis=1, keepdims=True)
    # Round 1, line 6
    caps2_output_round_1 = squash(weighted_sum, axis=-2)
    # Round 1, line 7
    caps2_output_round_1_tiled = tf.tile(caps2_output_round_1, [1, exp_caps1_n, 1, 1, 1])
    raw_weights2 = raw_weights + tf.matmul(u_hat, caps2_output_round_1_tiled, transpose_a=True)
    # Round 2, line 4
    routing_weights_round_2 = tf.nn.softmax(raw_weights2, dim=2)
    # Round 2, line 5
    weighted_predictions_round_2 = tf.multiply(routing_weights_round_2, u_hat)
    weighted_sum_round_2 = tf.reduce_sum(weighted_predictions_round_2, axis=1, keepdims=True)
    # Round 2, line 6
    caps2_output = squash(weighted_sum_round_2, axis=-2)

    y_proba = safe_norm(caps2_output, axis=-2)
    y_proba_argmax = tf.argmax(y_proba, axis=2)
    y_pred = tf.squeeze(y_proba_argmax, axis=[1, 2])

    # Loss (margin loss)
    T = tf.one_hot(y, depth=caps2_n)
    caps2_output_norm = safe_norm(caps2_output, axis=-2, keepdims=True)
    present_error_raw = tf.square(tf.maximum(0., m_plus - caps2_output_norm))
    present_error = tf.reshape(present_error_raw, shape=(-1, 10))
    absent_error_raw = tf.square(tf.maximum(0., caps2_output_norm - m_minus))
    absent_error = tf.reshape(absent_error_raw, shape=(-1, 10))
    L = tf.add(T * present_error, lambda_ * (1.0 - T) * absent_error)
    margin_loss = tf.reduce_mean(tf.reduce_sum(L, axis=1))
    loss = margin_loss  # no reconstruction loss

    correct = tf.equal(y, y_pred)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

    # Default options for the Adam optimizer
    optimizer = tf.train.AdamOptimizer()
    training_op = optimizer.minimize(loss)

    init = tf.global_variables_initializer()
    saver = tf.train.Saver()

    batch_size = 50
    n_iterations_per_epoch = mnist.train.num_examples // batch_size
    n_iterations_validation = mnist.validation.num_examples // batch_size

    with tf.Session() as sess:
        init.run()
        for epoch in range(10):
            for iteration in range(1, n_iterations_per_epoch + 1):
                X_batch, y_batch = mnist.train.next_batch(batch_size)
                # Run the training operation and measure the loss
                _, loss_train, acc_train = sess.run(
                    [training_op, loss, accuracy],
                    feed_dict={X: X_batch.reshape([-1, 28, 28, 1]), y: y_batch})
                print('\rIteration: {}/{} ({:.1f}%) Loss: {:.5f} Acc: {:.5f}%'.format(
                    iteration, n_iterations_per_epoch,
                    iteration * 100 / n_iterations_per_epoch,
                    loss_train, acc_train * 100), end='')

Other info / logs (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached):
GPU version log:

    Iteration: 1/1100 (0.1%) Loss: 0.30913 Acc: 6.00000%
    Iteration: 2/1100 (0.2%) Loss: 0.47852 Acc: 16.00000%
    Iteration: 3/1100 (0.3%) Loss: 0.00000 Acc: 8.00000%
    Iteration: 4/1100 (0.4%) Loss: 0.33996 Acc: 4.00000%
    Iteration: 5/1100 (0.5%) Loss: 0.00000 Acc: 10.00000%
    Iteration: 6/1100 (0.5%) Loss: 0.21045 Acc: 12.00000%
    Iteration: 7/1100 (0.6%) Loss: 0.33996 Acc: 8.00000%
    Iteration: 8/1100 (0.7%) Loss: 0.00000 Acc: 12.00000%
    Iteration: 9/1100 (0.8%) Loss: 0.00000 Acc: 10.00000%
    Iteration: 10/1100 (0.9%) Loss: 0.00000 Acc: 8.00000%
    Iteration: 11/1100 (1.0%) Loss: 0.21045 Acc: 16.00000%
    Iteration: 12/1100 (1.1%) Loss: 0.21045 Acc: 12.00000%
    Iteration: 13/1100 (1.2%) Loss: 0.40472 Acc: 6.00000%
    Iteration: 14/1100 (1.3%) Loss: 0.00000 Acc: 8.00000%
    Iteration: 15/1100 (1.4%) Loss: 0.61517 Acc: 14.00000%
    Iteration: 16/1100 (1.5%) Loss: 0.33996 Acc: 10.00000%
    Iteration: 17/1100 (1.5%) Loss: 0.00000 Acc: 14.00000%
    Iteration: 18/1100 (1.6%) Loss: 0.33996 Acc: 12.00000%
    Iteration: 19/1100 (1.7%) Loss: 0.00000 Acc: 8.00000%
    Iteration: 20/1100 (1.8%) Loss: 0.21045 Acc: 12.00000%
    Iteration: 21/1100 (1.9%) Loss: 0.33996 Acc: 18.00000%
    Iteration: 22/1100 (2.0%) Loss: 0.00000 Acc: 4.00000%
    Iteration: 23/1100 (2.1%) Loss: 0.21045 Acc: 8.00000%
    Iteration: 24/1100 (2.2%) Loss: 0.33996 Acc: 12.00000%
    Iteration: 25/1100 (2.3%) Loss: 0.00000 Acc: 10.00000%
    Iteration: 26/1100 (2.4%) Loss: 0.21045 Acc: 16.00000%
    Iteration: 27/1100 (2.5%) Loss: 0.61517 Acc: 8.00000%
    Iteration: 28/1100 (2.5%) Loss: 0.33996 Acc: 8.00000%
    Iteration: 29/1100 (2.6%) Loss: 0.00000 Acc: 16.00000%
    Iteration: 30/1100 (2.7%) Loss: 0.21045 Acc: 6.00000%
    Iteration: 31/1100 (2.8%) Loss: 0.00000 Acc: 6.00000%
    Iteration: 32/1100 (2.9%) Loss: 0.21045 Acc: 10.00000%
    Iteration: 33/1100 (3.0%) Loss: 0.61517 Acc: 14.00000%
    Iteration: 34/1100 (3.1%) Loss: 0.21045 Acc: 10.00000%
    Iteration: 35/1100 (3.2%) Loss: 0.21045 Acc: 4.00000%
    Iteration: 36/1100 (3.3%) Loss: 0.21045 Acc: 12.00000%
    Iteration: 37/1100 (3.4%) Loss: 0.61517 Acc: 10.00000%
    Iteration: 38/1100 (3.5%) Loss: 0.21045 Acc: 12.00000%
    Iteration: 39/1100 (3.5%) Loss: 0.21045 Acc: 6.00000%
    Iteration: 40/1100 (3.6%) Loss: 0.00000 Acc: 12.00000%
    Iteration: 41/1100 (3.7%) Loss: 0.00000 Acc: 14.00000%
    Iteration: 42/1100 (3.8%) Loss: 0.21045 Acc: 16.00000%
    Iteration: 43/1100 (3.9%) Loss: 0.21045 Acc: 16.00000%
    Iteration: 44/1100 (4.0%) Loss: 0.33996 Acc: 12.00000%
    Iteration: 45/1100 (4.1%) Loss: 0.00000 Acc: 6.00000%
    Iteration: 46/1100 (4.2%) Loss: 0.00000 Acc: 4.00000%
    Iteration: 47/1100 (4.3%) Loss: 0.21045 Acc: 10.00000%
    Iteration: 48/1100 (4.4%) Loss: 0.33996 Acc: 12.00000%
    Iteration: 49/1100 (4.5%) Loss: 0.00000 Acc: 14.00000%
    Iteration: 50/1100 (4.5%) Loss: 0.21045 Acc: 10.00000%

CPU version log:

    Iteration: 1/1100 (0.1%) Loss: 0.80942 Acc: 8.00000%
    Iteration: 2/1100 (0.2%) Loss: 0.63684 Acc: 10.00000%
    Iteration: 3/1100 (0.3%) Loss: 1.88716 Acc: 8.00000%
    Iteration: 4/1100 (0.4%) Loss: 0.80636 Acc: 4.00000%
    Iteration: 5/1100 (0.5%) Loss: 0.60099 Acc: 10.00000%
    Iteration: 6/1100 (0.5%) Loss: 0.54984 Acc: 12.00000%
    Iteration: 7/1100 (0.6%) Loss: 0.52977 Acc: 28.00000%
    Iteration: 8/1100 (0.7%) Loss: 0.52300 Acc: 12.00000%
    Iteration: 9/1100 (0.8%) Loss: 0.54408 Acc: 8.00000%
    Iteration: 10/1100 (0.9%) Loss: 0.49441 Acc: 16.00000%
    Iteration: 11/1100 (1.0%) Loss: 0.48491 Acc: 20.00000%
    Iteration: 12/1100 (1.1%) Loss: 0.49179 Acc: 36.00000%
    Iteration: 13/1100 (1.2%) Loss: 0.46365 Acc: 50.00000%
    Iteration: 14/1100 (1.3%) Loss: 0.45733 Acc: 54.00000%
    Iteration: 15/1100 (1.4%) Loss: 0.41508 Acc: 64.00000%
    Iteration: 16/1100 (1.5%) Loss: 0.40333 Acc: 64.00000%
    Iteration: 17/1100 (1.5%) Loss: 0.37526 Acc: 58.00000%
    Iteration: 18/1100 (1.6%) Loss: 0.37363 Acc: 58.00000%
    Iteration: 19/1100 (1.7%) Loss: 0.37349 Acc: 50.00000%
    Iteration: 20/1100 (1.8%) Loss: 0.37024 Acc: 52.00000%
    Iteration: 21/1100 (1.9%) Loss: 0.31172 Acc: 64.00000%
    Iteration: 22/1100 (2.0%) Loss: 0.29889 Acc: 64.00000%
    Iteration: 23/1100 (2.1%) Loss: 0.32116 Acc: 66.00000%
    Iteration: 24/1100 (2.2%) Loss: 0.31103 Acc: 74.00000%
    Iteration: 25/1100 (2.3%) Loss: 0.28157 Acc: 76.00000%
    Iteration: 26/1100 (2.4%) Loss: 0.22721 Acc: 82.00000%
    Iteration: 27/1100 (2.5%) Loss: 0.24645 Acc: 80.00000%
    Iteration: 28/1100 (2.5%) Loss: 0.28085 Acc: 76.00000%
    Iteration: 29/1100 (2.6%) Loss: 0.21644 Acc: 82.00000%
    Iteration: 30/1100 (2.7%) Loss: 0.22552 Acc: 82.00000%
    Iteration: 31/1100 (2.8%) Loss: 0.19832 Acc: 84.00000%
    Iteration: 32/1100 (2.9%) Loss: 0.18913 Acc: 86.00000%
    Iteration: 33/1100 (3.0%) Loss: 0.18527 Acc: 86.00000%
    Iteration: 34/1100 (3.1%) Loss: 0.19863 Acc: 84.00000%
    Iteration: 35/1100 (3.2%) Loss: 0.16656 Acc: 90.00000%
    Iteration: 36/1100 (3.3%) Loss: 0.17566 Acc: 80.00000%
    Iteration: 37/1100 (3.4%) Loss: 0.16723 Acc: 82.00000%
    Iteration: 38/1100 (3.5%) Loss: 0.13901 Acc: 94.00000%
    Iteration: 39/1100 (3.5%) Loss: 0.14630 Acc: 88.00000%
    Iteration: 40/1100 (3.6%) Loss: 0.16909 Acc: 84.00000%
    Iteration: 41/1100 (3.7%) Loss: 0.13749 Acc: 90.00000%
    Iteration: 42/1100 (3.8%) Loss: 0.16060 Acc: 84.00000%
    Iteration: 43/1100 (3.9%) Loss: 0.12409 Acc: 92.00000%
    Iteration: 44/1100 (4.0%) Loss: 0.16228 Acc: 82.00000%
    Iteration: 45/1100 (4.1%) Loss: 0.14838 Acc: 86.00000%
    Iteration: 46/1100 (4.2%) Loss: 0.13091 Acc: 90.00000%
    Iteration: 47/1100 (4.3%) Loss: 0.13386 Acc: 92.00000%
    Iteration: 48/1100 (4.4%) Loss: 0.15651 Acc: 84.00000%
    Iteration: 49/1100 (4.5%) Loss: 0.15982 Acc: 84.00000%
    Iteration: 50/1100 (4.5%) Loss: 0.12564 Acc: 90.00000%
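As a side note for readers following along, the "squash" nonlinearity used in the capsule-network code above is easy to check by hand. The pure-Python version below (my own sketch, independent of TensorFlow) computes the same formula on a plain list: it shrinks a vector's norm into [0, 1) while preserving its direction.

```python
import math

# Pure-Python sketch of the capsule "squash" nonlinearity:
# squash(s) = (|s|^2 / (1 + |s|^2)) * (s / |s|), with epsilon for stability.
def squash(vec, epsilon=1e-7):
    squared_norm = sum(v * v for v in vec)
    safe_norm = math.sqrt(squared_norm + epsilon)
    factor = squared_norm / (1.0 + squared_norm)
    return [factor * v / safe_norm for v in vec]

out = squash([3.0, 4.0])  # input norm 5 -> output norm 25/26, just under 1
norm_out = math.sqrt(sum(v * v for v in out))
assert 0.0 < norm_out < 1.0
# Direction is preserved: the components keep their 3:4 ratio.
assert abs(out[0] / out[1] - 3.0 / 4.0) < 1e-6
```

Because the function's output is bounded regardless of backend, identical inputs should squash identically on CPU and GPU; the diverging logs above therefore point at training dynamics (or the GPU setup), not at this nonlinearity.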
tensorflow/tensorflow | ModelCheckpoint results in "Input 0 of layer ... is incompatible with the layer" | Bug |
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): (not filled in). OS platform and distribution (e.g. Linux Ubuntu 16.04): Ubuntu 18.04. Mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.1.0. Python version: 3.7.4. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: 10.1. GPU model and memory: NVIDIA GTX 970.
Describe the current behavior: saving a model using the Keras ModelCheckpoint callback results in the following error:

    ValueError: Input 0 of layer cls_output is incompatible with the layer: its rank is undefined, but the layer requires a defined rank.

Describe the expected behavior: should save the model without errors.
Standalone code to reproduce the issue: (not provided).
Other info / logs: the ModelCheckpoint callback works when using save_weights_only=True, but I need to save the whole model for deployment to AI Platform.
tensorflow/tensorflow | Ragged tensor inputs for Keras models | Bug |
System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): 2.1.0. Python version: 3.7.4.
Describe the current behavior: if a ragged tensor input to a Keras model (e.g. created via tensorflow.keras.layers.Input(..., ragged=True) or tensorflow.keras.Input(..., ragged=True)) is not used or not directly connected to the output, an error is raised (see below). This might not be a problem on its own, since the ragged tensor is not used anyway; however, I encountered the same issue for a ragged tensor that is only used for indexing or reshaping.

    ValueError: Layer input_5 does not support RaggedTensors as input. Inputs received: tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None, 1), dtype=float32), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(None,), dtype=int64)). You can try converting your input to an uniform tensor.

Describe the expected behavior: the ragged tensor input is accepted as an input even if it is not used in the model. Hopefully this will also resolve the issue in a more complex model with the same error message behaviour.
Standalone code to reproduce the issue — the code below shows model3, which does not work, although model and model2 work perfectly:

    import tensorflow as tf
    import numpy as np

    class DenseRagged(tf.keras.layers.Layer):
        def __init__(self, units, use_bias=True, activation='linear', **kwargs):
            super(DenseRagged, self).__init__(**kwargs)
            self._supports_ragged_inputs = True
            self.units = units
            self.use_bias = use_bias
            self.activation = tf.keras.activations.get(activation)

        def build(self, input_shape):
            last_dim = input_shape[-1]
            self.kernel = self.add_weight('kernel', shape=[last_dim, self.units], trainable=True)
            if self.use_bias:
                self.bias = self.add_weight('bias', shape=[self.units], trainable=True)
            else:
                self.bias = None
            super(DenseRagged, self).build(input_shape)

        def call(self, inputs):
            outputs = tf.ragged.map_flat_values(tf.matmul, inputs, self.kernel)
            if self.use_bias:
                outputs = tf.ragged.map_flat_values(tf.nn.bias_add, outputs, self.bias)
            outputs = tf.ragged.map_flat_values(self.activation, outputs)
            return outputs

    class PoolingRagged(tf.keras.layers.Layer):
        def __init__(self, **kwargs):
            super(PoolingRagged, self).__init__(**kwargs)
            self._supports_ragged_inputs = True

        def build(self, input_shape):
            super(PoolingRagged, self).build(input_shape)

        def call(self, inputs):
            nodes = inputs
            out = tf.math.reduce_mean(nodes, axis=1)
            return out

    # Flat values: 2.0 2.0 3.0 4.0 5.0 6.0 (the row boundaries were lost in
    # extraction; data_b holds twice the values of data_a).
    data_a = tf.ragged.constant([...], ragged_rank=1)
    data_b = tf.ragged.constant([...], ragged_rank=1)
    data_y = np.array([3, 9, 5, 8, 11])
    print(data_a.shape, data_b.shape)

    in_a = tf.keras.Input(shape=(None, 1), dtype=tf.float32, ragged=True)
    out = DenseRagged(1)(in_a)
    out = PoolingRagged()(out)
    model = tf.keras.models.Model(inputs=in_a, outputs=out)
    optimizer = tf.keras.optimizers.Adam(lr=1e-3)
    model.compile(loss='mean_squared_error', optimizer=optimizer,
                  metrics=['mean_absolute_error', 'mean_squared_error'])
    model.fit(x=data_a, y=data_y, epochs=200)
    print('This works')

    in_a2 = tf.keras.Input(shape=(None, 1), dtype=tf.float32, ragged=True)
    in_b2 = tf.keras.Input(shape=(None, 1), dtype=tf.float32, ragged=True)
    out_a2 = DenseRagged(1)(in_a2)
    out_b2 = DenseRagged(1)(in_b2)
    out_a2 = PoolingRagged()(out_a2)
    out_b2 = PoolingRagged()(out_b2)
    out2 = tf.keras.layers.add([out_a2, out_b2])
    model2 = tf.keras.models.Model(inputs=[in_a2, in_b2], outputs=out2)
    optimizer = tf.keras.optimizers.Adam(lr=1e-3)
    model2.compile(loss='mean_squared_error', optimizer=optimizer,
                   metrics=['mean_absolute_error', 'mean_squared_error'])
    model2.fit(x=[data_a, data_b], y=data_y, epochs=200)
    print('This works too')

    in_a3 = tf.keras.Input(shape=(None, 1), dtype=tf.float32, ragged=True)
    in_b3 = tf.keras.Input(shape=(None, 1), dtype=tf.float32, ragged=True)  # unused
    out3 = DenseRagged(1)(in_a3)
    out3 = PoolingRagged()(out3)
    model3 = tf.keras.models.Model(inputs=[in_a3, in_b3], outputs=out3)
    optimizer = tf.keras.optimizers.Adam(lr=1e-3)
    model3.compile(loss='mean_squared_error', optimizer=optimizer,
                   metrics=['mean_absolute_error', 'mean_squared_error'])
    model3.fit(x=[data_a, data_b], y=data_y, epochs=200)
    print('This does not work')
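To make the error message above easier to read: a ragged tensor is represented as a pair of flat values plus row splits, which is exactly the (values=..., row_splits=...) pair printed in the ValueError. The plain-Python sketch below (no TensorFlow; the row boundaries here are my own hypothetical choice, since the original nesting was lost) shows how that pair reconstructs the rows and how a row-wise mean, the analogue of the PoolingRagged layer, is computed.

```python
# Flat values plus row splits: row i spans values[splits[i]:splits[i+1]].
flat_values = [2.0, 2.0, 3.0, 4.0, 5.0, 6.0]
row_splits = [0, 2, 3, 6]  # hypothetical nesting: rows [2,2], [3], [4,5,6]

def to_rows(values, splits):
    return [values[splits[i]:splits[i + 1]] for i in range(len(splits) - 1)]

rows = to_rows(flat_values, row_splits)
assert rows == [[2.0, 2.0], [3.0], [4.0, 5.0, 6.0]]

# Row-wise mean: the pure-Python analogue of PoolingRagged above.
means = [sum(r) / len(r) for r in rows]
assert means == [2.0, 3.0, 5.0]
```

This also explains why an *unused* ragged input still matters to Keras: even when nothing consumes it, the functional-model machinery has to accept and thread through the (values, row_splits) pair, and that acceptance step is what raises in model3.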
tensorflow/tensorflow | Online documentation missing Python docstrings | Bug |
URL(s) with the issue: (may be present on others, but I'm not going to scour the website looking).
Description of issue (what needs changing): the general documentation of these classes does not show up on the website. This leads to confusion, as many of them have argument documentation containing statements such as "see the decay computation above" when there is no documented computation above. Instead, you have to open the source code to see the documentation that is mentioned. Here's an example: [screenshot] And here's the missing documentation: [screenshot]
tensorflow/tensorflow | LSTM Keras conversion to a TFLite model works fine, but the ML Kit (Firebase) model interpreter errors on loading the model | Bug |
System information: OS platform: Google Colab; Android 9 (Poco F1). TensorFlow installed from (source or binary): binary. TensorFlow version (or github SHA if from source): 2.2.0-rc3. Firebase ML model interpreter version: firebase-ml-model-interpreter:22.0.2.
I used the following code to produce my LSTM model:

    model = Sequential()
    model.add(LSTM(128, input_shape=(X.shape[1:]), return_sequences=True))
    model.add(Dropout(0.2))
    model.add(BatchNormalization())
    model.add(LSTM(128, input_shape=(X.shape[1:]), return_sequences=True))
    model.add(Dropout(0.2))
    model.add(BatchNormalization())
    model.add(LSTM(128, input_shape=(X.shape[1:])))
    model.add(Dropout(0.1))
    model.add(BatchNormalization())
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
    opt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)
    model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])

Command used to run the converter, or code if you're using the Python API:

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()

Failure details: the conversion runs just fine, no errors, and I can download the .tflite model. However, when I try to put this model into my Android app it shows the error "Caused by: java.lang.IllegalArgumentException: ByteBuffer is not a valid flatbuffer model". I think that if the conversion to a TFLite model is successful, we should be able to run it on the interpreter.
Any other info / logs — here is the full log of the error:

    E/ModelResourceManager: Error preloading model resource
    com.google.firebase.ml.common.FirebaseMLException: Local model load failed with the model options: Local model path: drowsy_detector_v21.tflite. Remote model name: unspecified.
        at com.google.firebase.ml.common.internal.modeldownload.zzj.zza(com.google.firebase:firebase-ml-common@@22.1.0:36)
        at com.google.android.gms.internal.firebase_ml.zzrj.zza(com.google.firebase:firebase-ml-model-interpreter@@22.0.2:111)
        at com.google.android.gms.internal.firebase_ml.zzrj.zzol(com.google.firebase:firebase-ml-model-interpreter@@22.0.2:107)
        at com.google.android.gms.internal.firebase_ml.zzqr.zzf(com.google.firebase:firebase-ml-common@@22.1.0:53)
        at com.google.android.gms.internal.firebase_ml.zzqr.zza(com.google.firebase:firebase-ml-common@@22.1.0:7)
        at com.google.android.gms.internal.firebase_ml.zzqr$zza.call(com.google.firebase:firebase-ml-common@@22.1.0:24)
        at com.google.android.gms.internal.firebase_ml.zzpx.zza(com.google.firebase:firebase-ml-common@@22.1.0:32)
        at com.google.android.gms.internal.firebase_ml.zzpw.run(Unknown Source:4)
        at android.os.Handler.handleCallback(Handler.java:873)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at com.google.android.gms.internal.firebase_ml.zze.dispatchMessage(com.google.firebase:firebase-ml-common@@22.1.0:6)
        at android.os.Looper.loop(Looper.java:201)
        at android.os.HandlerThread.run(HandlerThread.java:65)
    Caused by: java.lang.IllegalArgumentException: ByteBuffer is not a valid flatbuffer model
        at org.tensorflow.lite.NativeInterpreterWrapper.createModelWithBuffer(Native Method)
        at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:59)
        at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:207)
        at com.google.android.gms.internal.firebase_ml.zzrj.zzb(com.google.firebase:firebase-ml-model-interpreter@@22.0.2:174)
        at com.google.android.gms.internal.firebase_ml.zzrl.zzc(Unknown Source:0)
        at com.google.android.gms.internal.firebase_ml.zzrj.zza(com.google.firebase:firebase-ml-model-interpreter@@22.0.2:170)
        at com.google.android.gms.internal.firebase_ml.zzrk.zza(Unknown Source:6)
        at com.google.firebase.ml.common.internal.modeldownload.zzj.zzb(com.google.firebase:firebase-ml-common@@22.1.0:61)
        at com.google.firebase.ml.common.internal.modeldownload.zzj.zza(com.google.firebase:firebase-ml-common@@22.1.0:21)
        at com.google.android.gms.internal.firebase_ml.zzrj.zza(com.google.firebase:firebase-ml-model-interpreter@@22.0.2:111)
        at com.google.android.gms.internal.firebase_ml.zzrj.zzol(com.google.firebase:firebase-ml-model-interpreter@@22.0.2:107)
        at com.google.android.gms.internal.firebase_ml.zzqr.zzf(com.google.firebase:firebase-ml-common@@22.1.0:53)
        at com.google.android.gms.internal.firebase_ml.zzqr.zza(com.google.firebase:firebase-ml-common@@22.1.0:7)
        at com.google.android.gms.internal.firebase_ml.zzqr$zza.call(com.google.firebase:firebase-ml-common@@22.1.0:24)
        at com.google.android.gms.internal.firebase_ml.zzpx.zza(com.google.firebase:firebase-ml-common@@22.1.0:32)
        at com.google.android.gms.internal.firebase_ml.zzpw.run(Unknown Source:4)
        at android.os.Handler.handleCallback(Handler.java:873)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at com.google.android.gms.internal.firebase_ml.zze.dispatchMessage(com.google.firebase:firebase-ml-common@@22.1.0:6)
        at android.os.Looper.loop(Looper.java:201)
        at android.os.HandlerThread.run(HandlerThread.java:65)
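One quick way to distinguish a genuinely invalid model from an asset-packaging problem (e.g. the .tflite file being compressed or truncated on its way into the APK): a valid TFLite flatbuffer carries the file identifier "TFL3" at bytes 4-8 of the buffer. The check below is a hypothetical helper of my own, not a Firebase or TFLite API, but it lets you verify the bytes your app actually loads match the file the converter produced.

```python
# Hypothetical sanity check: does this byte buffer look like a TFLite
# flatbuffer? FlatBuffers store a 4-char file identifier at offset 4;
# for the TFLite schema that identifier is b"TFL3".
def looks_like_tflite(buf: bytes) -> bool:
    return len(buf) >= 8 and buf[4:8] == b'TFL3'

good = bytes(4) + b'TFL3' + b'\x00' * 8   # fake header carrying the identifier
bad = b'not a flatbuffer at all'
assert looks_like_tflite(good)
assert not looks_like_tflite(bad)
```

If the converter's output passes this check on disk but the bytes read inside the app do not, the model itself is fine and the loading path (asset compression, copy, or path) is the thing to investigate.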
tensorflow/tensorflow | WARNING:tensorflow: Entity <function <lambda> at 0x000002343DCF24C8> could not be transformed and will be executed as-is | Bug |
What can be done to solve this warning message? Below is the full warning:

    WARNING:tensorflow: Entity <function <lambda> at 0x000002343DCF24C8> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Str'
    WARNING: Entity <function <lambda> at 0x000002343DCF24C8> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Str'

Minimum working example — the code that generates the warning is discussed in #38471 and stored in this gist:

    import tensorflow as tf

    def compute_length(x):
        return tf.strings.length(x)

    def check_substring(x, substring):
        return tf.strings.regex_full_match(x, substring)

    def compute_palindrome(x):
        extra_split = tf.strings.bytes_split(x)
        reversed_chars = tf.reverse(extra_split, [0])
        reversed_str = tf.strings.reduce_join(reversed_chars)
        return reversed_str

    ds = tf.data.Dataset.from_tensor_slices(['Ottawa', 'Stockholm', 'Rabat'])
    ds = ds.map(lambda city: (city,
                              compute_length(city),
                              check_substring(city, 'lm'),
                              compute_palindrome(city)))
    num_elems = len(ds.element_spec)
    for elem in ds:
        print(' '.join(f'{elem[i]}' for i in range(num_elems)))

Environment: Python 3.7.4, tensorflow-gpu 2.0.0, tensorflow-datasets 1.3.0, gast 0.2.2, running on Windows 10 under conda 4.8.3.
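The "module 'gast' has no attribute 'Str'" cause is the known incompatibility between TF 2.0's AutoGraph and gast >= 0.3 (which dropped gast.Str); pinning gast==0.2.2 is the commonly cited workaround. The environment above reports gast 0.2.2, so it may be worth confirming which gast the active interpreter actually imports. The snippet below is a hedged sketch of such a guard — the helper names and version strings are my own, not a TensorFlow API:

```python
# Hypothetical pre-flight check: is the installed gast version one that
# still exposes gast.Str, as TF 2.0's AutoGraph converter expects?
def parse_version(v):
    return tuple(int(p) for p in v.split('.')[:3])

def gast_compatible_with_tf2_0(gast_version):
    # gast 0.2.x still has gast.Str; 0.3.0 removed it.
    return parse_version(gast_version) < (0, 3, 0)

assert gast_compatible_with_tf2_0('0.2.2')
assert not gast_compatible_with_tf2_0('0.3.3')
```

In practice one would feed this `gast.__version__` from the running process; a mismatch there (e.g. a second environment shadowing the pinned one) would explain seeing the warning despite the reported pin.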
tensorflow/tensorflow | I've got an error while building a handwritten-digit classifier app with TensorFlow Lite | Bug |
I have done everything the same way as shown on the official website, but I got an error message as below:

    Only safe (?.) or non-null asserted (!!.) calls are allowed on a nullable receiver of type Tensor?

I could not find any solution through web surfing on GitHub and elsewhere. Could you help me, please? Regards.
tensorflow/tensorflow | Could we have more helpful error messages? | Bug |
Description of issue (what needs changing): TensorFlow gives many errors, and most of them aren't very helpful — something like "module 'tensorflow' has no attribute 'reset_graph'". Can we change the error messages so that they are more constructive? In this situation the issue was partially solved by downgrading to TensorFlow 1.12. It would be helpful if, instead of the reset_graph error message, we could get a message more like: "This version of TensorFlow is incompatible with the current project. Please downgrade to TensorFlow 1.12 using pip install tensorflow==1.12."
Clear description (for example, why should someone use this method? How is it useful?): to keep people from tearing their own hair out.
tensorflow/tensorflow | In ModelCheckpoint, filepath does not accept "batch" as a formatting parameter | Bug |
In the ModelCheckpoint callback there is a parameter named save_freq. If save_freq is set to 'epoch', the callback saves the model at the end of every epoch; this works perfectly fine. But when save_freq is set to an integer, say n, the callback should save the model after every n batches in each epoch. The problem is that the callback doesn't accept a filepath such as 'file_batch:{batch:02d}_epoch:{epoch:02d}.h5' and raises an error because 'batch' is an invalid key. The issue I noticed in the code is that the _save_model function has access to the epoch but not to the batch, and that is why _get_file_path has access to epoch but not batch. The functionality should be changed a little: I am raising a PR to add access to a batch parameter in both _save_model and _get_file_path. I noticed this error during work on my PR #1702 in TensorFlow Addons. cc @gabrieldemarmiesse
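The failure mode described above is ordinary str.format behavior, which the following plain-Python illustration reproduces (the filepath template is the one from the report; the format calls are my sketch of what the callback's path-building effectively does, not the actual Keras source):

```python
# Keras formats filepath with only `epoch` (plus logged metrics) as keys,
# so a `batch` placeholder has nothing to bind to and raises KeyError.
filepath = 'file_batch:{batch:02d}_epoch:{epoch:02d}.h5'

try:
    filepath.format(epoch=3)            # what the path building does today
    missing = None
except KeyError as e:
    missing = e.args[0]
assert missing == 'batch'

# With `batch` passed through as well - the change the PR proposes -
# the very same template works:
assert filepath.format(epoch=3, batch=7) == 'file_batch:07_epoch:03.h5'
```

This is why the fix is simply to thread a batch value into the formatting call rather than to change the template syntax.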
tensorflowtensorflow | 2.2.0rc3: Keras validation data doesn't respect cache() with MirroredStrategy | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04. TensorFlow installed from (source or binary): binary. TensorFlow version: 2.2.0rc3. Python version: 3.7. CUDA/cuDNN version: 10.1 / 7.6.5.32. GPU model and memory: 4 x NVIDIA V100 on GCP. Describe the current behavior: When running the code below, which caches the training and validation datasets, in a multi-GPU environment (I am using a GCP VM with 312 GB of memory and 4 NVIDIA V100s) with tf.distribute.MirroredStrategy, the validation dataset isn't correctly cached and examples are still read from GCS during validation. The memory usage suggests that the validation dataset is cached, but during the Keras validation loop it looks like data is still read from GCS instead of from the cache, which can be observed from the very high network usage during validation. I would expect no network usage after the first epoch. In the example below I intentionally use a very large validation set to make this issue very obvious and easy to detect by monitoring network usage. This behaviour can also be observed with other datasets, but the unexpected network access will be less noticeable on smaller datasets, in which case the issue may not be observable at all. To narrow down the possible causes, I found two cases where this issue doesn't exist: 1. When running on a single GPU without MirroredStrategy, the validation data is correctly read from the cache, and after the start of the second epoch no additional network traffic reading from GCS can be observed. 2. When not using a validation dataset at all, network usage is zero after the first epoch, so caching of the training set works as expected. This seems to be a complicated interaction between tf.data, tf.keras and tf.distribute. Do you have an idea what could cause this behaviour? Please let me know what additional information I could provide. Describe the expected behavior: Network usage should be zero after the start of the second epoch, since both datasets are cached in memory and no additional reads from GCS should be required. Standalone code to reproduce the issue:

```python
import tensorflow as tf
import tensorflow_datasets as tfds

BATCH_SIZE = 1024

decoder = {'image': tfds.decode.SkipDecoding()}
dataset = tfds.load('imagenet2012:5.0.0', decoder=decoder,
                    split='validation', data_dir='gs://my_data_bucket')
val_dataset = tfds.load('imagenet2012:5.0.0', decoder=decoder,
                        split='train', data_dir='gs://my_data_bucket')


def decode_and_center_crop(image_bytes):
    """Crops to the center of the image with padding, then scales image_size."""
    shape = tf.image.extract_jpeg_shape(image_bytes)
    image_height = shape[0]
    image_width = shape[1]
    image_size = 224

    padded_center_crop_size = tf.cast(
        (image_size / (image_size + 32)) *
        tf.cast(tf.minimum(image_height, image_width), tf.float32),
        tf.int32)

    offset_height = ((image_height - padded_center_crop_size) + 1) // 2
    offset_width = ((image_width - padded_center_crop_size) + 1) // 2
    crop_window = tf.stack([offset_height, offset_width,
                            padded_center_crop_size, padded_center_crop_size])
    image = tf.image.decode_and_crop_jpeg(image_bytes, crop_window, channels=3)
    return tf.image.resize(image, [image_size, image_size], method='bicubic')


def preprocessing(data):
    return tf.cast(decode_and_center_crop(data['image']), tf.float32), data['label']


def apply_preprocessing(dataset):
    return (dataset.cache()
            .map(preprocessing, num_parallel_calls=tf.data.experimental.AUTOTUNE)
            .batch(BATCH_SIZE)
            .prefetch(1))


dataset = apply_preprocessing(dataset)
val_dataset = apply_preprocessing(val_dataset)

with tf.distribute.MirroredStrategy().scope():
    model = tf.keras.models.Sequential([
        tf.keras.layers.GlobalMaxPool2D(input_shape=(224, 224, 3)),
        tf.keras.layers.Dense(1000, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy', 'sparse_top_k_categorical_accuracy'])

model.fit(dataset, epochs=5, validation_data=val_dataset)
```

Other info / logs: To monitor network usage over time, tools like ytop can be used.
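For reference, the property the report expects from cache() can be modeled without TensorFlow. A tiny pure-Python sketch (all class names here are invented for illustration) of a memoizing dataset wrapper: the expensive source is read exactly once, and every later epoch is served from memory, which is the behaviour that breaks under MirroredStrategy in the report:

```python
# A toy model of tf.data's cache(): the first pass pulls from the expensive
# source and memoizes; later passes never touch the source again.

class CountingSource:
    """Stands in for a remote (e.g. GCS-backed) dataset; counts reads."""
    def __init__(self, items):
        self.items = list(items)
        self.reads = 0          # proxy for network traffic
    def __iter__(self):
        for item in self.items:
            self.reads += 1
            yield item

class Cache:
    """Memoizes the wrapped source on the first full iteration."""
    def __init__(self, source):
        self.source = source
        self.memo = None
    def __iter__(self):
        if self.memo is None:
            self.memo = list(self.source)   # first epoch: read and store
        return iter(self.memo)              # later epochs: memory only

source = CountingSource(range(5))
cached = Cache(source)
list(cached); list(cached); list(cached)    # three "epochs"
print(source.reads)                         # only the first pass hit the source
```

Under this model, repeated epochs add zero reads after the first pass; the issue is that the distributed validation loop apparently does not behave this way.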
tensorflowtensorflow | Cross-post from Keras: memory leak / hang on GPU, no PID to kill; not a sudo user so cannot install nvtop | Bug | null
tensorflowtensorflow | Issue with TensorShape and Datasets | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from (source or binary): binary (pip). TensorFlow version: v2.1.0-rc2-17-ge5bf8de 2.1.0. Python version: 3.7.7. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: 10.1. GPU model and memory: GTX 1080 Ti, 11 GB. Describe the current behavior: I am trying to use tf.data.Dataset with a generator and I am getting the following very strange error while it tries to build a tf.TensorShape:

```
TypeError: in converted code:

    ds = ds.interleave(lambda gen_idx: tf.data.Dataset.from_generator(gen, ...
    /home/jostheim/virtualenvs/data_science/lib/python3.7/site-packages/tensorflow_core/python/data/ops/dataset_ops.py:744 from_generator
        output_types, tensor_shape.as_shape, output_shapes)
    /home/jostheim/virtualenvs/data_science/lib/python3.7/site-packages/tensorflow_core/python/data/util/nest.py:471 map_structure_up_to
        results = [func(*tensors) for tensors in zip(*all_flattened_up_to)]
    /home/jostheim/virtualenvs/data_science/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_shape.py:1211 as_shape
        return TensorShape(shape)
    /home/jostheim/virtualenvs/data_science/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_shape.py:771 __init__
        self._dims = [as_dimension(d) for d in dims_iter]
    /home/jostheim/virtualenvs/data_science/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_shape.py:716 as_dimension
        return Dimension(value)
    /home/jostheim/virtualenvs/data_science/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_shape.py:200 __init__
        None

    TypeError: Dimension value must be integer or None or have an __index__ method, got TensorShape([1])
```

Describe the expected behavior: The ints I pass into TensorShape should be recognized as ints and not throw an error. Standalone code to reproduce the issue:

```python
import itertools
import tensorflow as tf

def gen(i):
    for _ in itertools.count(1):
        yield (i, [1] * i)

ds = tf.data.Dataset.from_tensor_slices(list(range(24)))
ds = ds.interleave(
    lambda gen_idx: tf.data.Dataset.from_generator(
        gen,
        output_types=tf.float32,
        args=(gen_idx,),
        output_shapes=(tf.TensorShape([]), tf.TensorShape([None]))),
    cycle_length=24,
    block_length=1,
    num_parallel_calls=24)
```

Other info / logs: the full traceback is included above.
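A minimal stand-in (simplified classes, not the real TF implementation) for the dimension check that raises in the traceback above: every entry of a shape must be an int, None, or something exposing __index__, so when the structure of output_shapes doesn't match output_types, a whole TensorShape can end up being treated as a single dimension:

```python
# Simplified re-creation of TensorShape's per-dimension validation.

class Shape:
    def __init__(self, dims):
        self.dims = [self._as_dimension(d) for d in dims]

    @staticmethod
    def _as_dimension(value):
        if value is None or isinstance(value, int):
            return value
        raise TypeError("Dimension value must be integer or None or have "
                        "an __index__ method, got %r" % (value,))

ok = Shape([None, 3])          # valid: one unknown dim, one known dim

try:
    Shape([Shape([None])])     # a nested shape reaches the dimension check
    error = None
except TypeError as e:
    error = e

print(error)
```

In the repro above, output_types=tf.float32 is a single type while output_shapes is a pair of shapes; that structure mismatch presumably wraps one shape inside another, which is how a TensorShape reaches the dimension check.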
tensorflowtensorflow | Dilated convolutions pass not working on standard TCN model | Bug | System information: OS platform and distribution: Ubuntu 18.04. TensorFlow installed from: source. TensorFlow version (or GitHub SHA if from source): 17dfa4e121c080a547e9cf6443b8fe2ae9ed45ed. Command used to run the converter (Python API):

```python
model = load_model(model_path, custom_objects={'TCN': TCN})
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
```

The exact tf_tfl_translate command-line invocation:

```
tf_tfl_translate -tf-input-arrays=input_1 -tf-input-shapes=1,784,1 -tf-output-arrays=dense/BiasAdd -print-function-result-mapping -o home/Desktop/converted.tflite -emit-builtin-tflite-ops smnist.pb
```

The output from the converter invocation (tf_tfl_translate output):

```
Main: input name: input_1, buffer: 0
Main: output name: dense/BiasAdd, buffer: 271, loc("dense/BiasAdd")
```

Also attached: smnist.zip, containing the model in both .h5 and .pb formats as well as the converted flatbuffer. Failure details: I'm converting the Keras TCN model (from the linked repository) to TF Lite. Method 1: I converted the Keras model (.h5) to a TF Lite flatbuffer via the Python API as above. The conversion is successful and inference with the model has the correct accuracy, but I want to get rid of the SpaceToBatchND and BatchToSpaceND nodes resulting from the Conv1D ops present in the model. Commit f54bb6f5578b931d79884302768996ba1073f685 claims to do so (solving issue #29509); however, this does not happen in my converted model, which I attach. Method 2: To investigate further, I built the tf_tfl_translate tool from source and invoked it with the command above on a GraphDef (.pb) of the model. Conversion is again successful, but the SpaceToBatchND and BatchToSpaceND ops are still present in the TF Lite flatbuffer. The sequence of ops in the GraphDef seems to comply with the sequence specified in the linked reference. However, this time the ExpandDims and Squeeze ops are correctly converted to Reshape operations as specified there, whereas this does not happen with the method above, and anyhow I would like to avoid this being done in my final flatbuffer.
tensorflowtensorflow | Why are there several TopKV2 ops? | Bug | Please make sure that this is an issue related to the performance of TensorFlow. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): (not specified). OS platform and distribution: Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from: binary. TensorFlow version: TensorFlow 2.1. Python version: 3.6.8. CUDA/cuDNN version: 10.1. Describe the current behavior: (attached "Screenshot from 2020-04-17 18-27-35" showing several TopKV2 ops in the trace). Describe the expected behavior: only one TopKV2. Standalone code to reproduce the issue: (link to the referenced source, L313). Other info / logs: trace.json attached.
tensorflowtensorflow | Some bugs in CategoricalCrossentropy and SparseCategoricalCrossentropy in tf.keras.losses | Bug | It seems that there are some bugs in CategoricalCrossentropy and SparseCategoricalCrossentropy. The code below goes wrong in TensorFlow version 2.2.0rc2. 1. "No gradients provided" error with categorical_crossentropy:

```python
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.backend as K
from tensorflow.keras.layers import *

i = Input((), dtype='int32')
o = Dense(2, activation='softmax')(i[:, None])
l = tf.keras.losses.categorical_crossentropy(K.one_hot(i, 2), o)
m = keras.Model(inputs=i, outputs=o)
m.add_loss(l)
m.compile(optimizer='adam')
m.fit(np.ones(1))
```

and the same with the code below:

```python
l = tf.keras.losses.CategoricalCrossentropy()(K.one_hot(i, 2), o)
```

2. An error saying that sparse ops are not supported, with a functional model of built-in layers wrapped with sparse_categorical_crossentropy:

```python
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow.keras.backend as K
from tensorflow.keras.layers import *

i = Input((), dtype='int32')
o = Dense(2, activation='softmax')(i[:, None])
l = tf.keras.losses.sparse_categorical_crossentropy(i, o)
m = keras.Model(inputs=i, outputs=o)
m.add_loss(l)
m.compile(optimizer='adam')
m.fit(np.ones(1))
```

as well as the one below:

```python
l = tf.keras.losses.SparseCategoricalCrossentropy()(i, o)
```
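For context, the two losses in the report compute the same cross-entropy on different label encodings: the categorical variant consumes a one-hot vector, while the sparse variant consumes the integer class index directly. A pure-Python sketch (no TF) of that equivalence:

```python
import math

def categorical_crossentropy(y_onehot, y_pred):
    """Cross-entropy against a one-hot target vector."""
    return -sum(t * math.log(p) for t, p in zip(y_onehot, y_pred))

def sparse_categorical_crossentropy(class_index, y_pred):
    """Same quantity, but the target is an integer class index."""
    return -math.log(y_pred[class_index])

probs = [0.7, 0.3]
print(categorical_crossentropy([1.0, 0.0], probs))   # -log(0.7) ~= 0.3567
print(sparse_categorical_crossentropy(0, probs))     # the same value
```

Since the two variants agree mathematically, the report's point is that both encodings hit errors when used through add_loss, not that one of them computes a different quantity.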
tensorflowtensorflow | feature_column: is_v2_column always returns True | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux. TensorFlow installed from: source. TensorFlow version: 2.1. Python version: 3.6.8. Bazel version (if compiling from source): bazel 2.0. Describe the current behavior: Upgrading my code from TF 1.14 to TF 2.0, I get the unexpected error:

```
File "usr/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/canned/linear.py", line 432, in linear_logit_fn
    variables.remove(bias)
ValueError: list.remove(x): x not in list
```

Describe the expected behavior: It should work and train successfully. Standalone code to reproduce the issue:

```python
import collections
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()
tf.disable_eager_execution()

INPUT_NAMES = ['id', 'study']
LABEL = 'label'
ALL = [LABEL] + INPUT_NAMES


def input_fn():
    def parse_data(value):
        input_defaults = [[''] for _ in range(1, 4)]
        label_defaults = [[0]]
        all_columns = collections.OrderedDict(zip(
            ALL,
            tf.io.decode_csv(value,
                             record_defaults=label_defaults + input_defaults)))
        labels = all_columns.pop(LABEL)
        features = all_columns
        return features, labels

    # Extract lines from the input using the Dataset API.
    dataset = tf.data.Dataset.from_tensor_slices(['a,b,c,0', 'b,b,c,1'])
    dataset = dataset.batch(1)
    dataset = dataset.map(parse_data)
    return dataset


def train():
    input_columns = []
    for name in INPUT_NAMES:
        input_columns.append(
            tf.feature_column.categorical_column_with_hash_bucket(
                name, hash_bucket_size=3))
    model = tf.estimator.LinearClassifier(feature_columns=input_columns)
    model.train(input_fn=lambda: input_fn())


if __name__ == '__main__':
    train()
```
tensorflowtensorflow | ImageDataGenerator complains about lack of stratification, but I'm doing regression | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, but it's basic. OS platform and distribution: macOS 10.15.4. Mobile device: n/a. TensorFlow installed from: binary. TensorFlow version: v2.1.0-rc2-17-ge5bf8de410 2.1.0. Python version: 3.7.7. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: When using ImageDataGenerator.flow with a validation_split argument, a ValueError is raised stating: "Training and validation subsets have different number of classes after the split. If your numpy arrays are sorted by the label, you might want to shuffle them." However, my label data are continuous floats and I'm doing regression. Describe the expected behavior: This code should not throw. At the very least it should be a warning, and I should be able to silence it. Standalone code to reproduce the issue:

```python
y = 500 * numpy.random.rand(200)  # labels are float values
datagen = ImageDataGenerator(validation_split=0.2)
gen_train = datagen.flow(x, y, shuffle=True, subset='training')
```
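The check that fires can be sketched without Keras. A simplified stand-in (not the real implementation): the split compares the distinct label values on each side, and with continuous regression targets essentially every label is unique, so the comparison can practically never pass:

```python
import random

def check_split(labels, validation_split):
    """Toy version of the subset check: compare the label values that fall
    on each side of an ordered split."""
    split_idx = int(len(labels) * (1 - validation_split))
    train, val = labels[:split_idx], labels[split_idx:]
    if set(train) != set(val):
        raise ValueError(
            "Training and validation subsets have different number of "
            "classes after the split.")

random.seed(0)
float_labels = [500 * random.random() for _ in range(200)]  # regression targets
try:
    check_split(float_labels, 0.2)
    regression_error = None
except ValueError as e:
    regression_error = e          # fires: nearly every float label is unique

class_labels = [0, 1] * 100       # discrete class labels pass the check
check_split(class_labels, 0.2)
print(regression_error)
```

This is why the reporter argues the check only makes sense for classification and should at most be a warning for continuous targets.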
tensorflowtensorflow | Tensors eat up all my memory on the GPU | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): OS platform and distribution: Mobile device: TensorFlow installed from (source or binary): TensorFlow version: Python version: Bazel version (if compiling from source): GCC/compiler version (if compiling from source): CUDA/cuDNN version: GPU model and memory: Describe the current behavior: Describe the expected behavior: Standalone code to reproduce the issue: Other info / logs:
tensorflowtensorflow | tf.custom_gradient expects an additional output when declaring a temp variable | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Fedora 29. Mobile device: n/a. TensorFlow installed from: binary. TensorFlow version: v2.1.0-rc2-17-ge5bf8de. Python version: 3.7.6. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: Thanks in advance for your time. I'm attempting to implement a custom loss function. It requires storage of a temporary variable and a custom gradient; for many reasons (numerical stability and flexibility) I'd like to implement the gradient by hand. The model here is just a foo/bar model. It seems related to the linked issue. I'm also happy to open a PR; however, I haven't been able to get the patch suggested there to work because I can't build from source due to some Bazel issue — any references are appreciated. I've also tried to implement the loss function as a subclass of Loss but was unsuccessful. Thanks for all of the hard work that goes into this project. Describe the expected behavior: Running the model via eager execution, when using GradientTape to evaluate gradients, I get the error "ValueError: not enough values to unpack (expected 2, got 1)". The loss function is only a function of one input and thus only has one partial derivative with respect to that input. Standalone code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer
import numpy as np


class Linear(Layer):
    """y = w.x + b"""

    def __init__(self, units=32):
        super(Linear, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer='random_normal', trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer='random_normal', trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b


@tf.custom_gradient
def loss_fn(x):
    r = tf.Variable(tf.zeros([100], dtype=tf.float32))  # create r

    def grad(df, variables=None):
        return df * 2 * tf.reduce_sum(r)

    return tf.pow(tf.norm(r), 2), grad


m = 100
x = np.arange(0, m * m, dtype=np.float32).reshape((m, m))
linear_layer = Linear(10)
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
dataset = tf.data.Dataset.from_tensor_slices(x)

for step, x in enumerate(dataset):
    with tf.GradientTape() as tape:
        logits = linear_layer(x)
        loss = loss_fn(x)
    gradients = tape.gradient(loss, linear_layer.trainable_weights)
    optimizer.apply_gradients(zip(gradients, linear_layer.trainable_weights))
```
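Where the "expected 2, got 1" comes from can be sketched in plain Python: because loss_fn creates a tf.Variable, tf.custom_gradient calls grad(dy, variables=...) and unpacks its result as a pair (gradients for the inputs, gradients for the created variables), while the grad above returns only the first. All names in this sketch are illustrative stand-ins, not the real TF internals:

```python
def apply_grad_fn(grad_fn, dy, variables):
    """Toy version of the custom_gradient machinery: with variables present,
    the gradient function's result is unpacked into two parts."""
    if variables:
        input_grads, variable_grads = grad_fn(dy, variables=variables)
        return input_grads, variable_grads
    return grad_fn(dy), []

def bad_grad(dy, variables=None):
    return [dy * 2]                              # one value: unpacking fails

def good_grad(dy, variables=None):
    return [dy * 2], [0.0 for _ in variables]    # (input grads, var grads)

try:
    apply_grad_fn(bad_grad, 1.0, variables=["r"])
    error = None
except ValueError as e:
    error = e    # not enough values to unpack (expected 2, got 1)

input_grads, variable_grads = apply_grad_fn(good_grad, 1.0, variables=["r"])
print(error, input_grads, variable_grads)
```

Under this reading, returning a (input_gradients, variable_gradients) pair from grad would satisfy the unpacking, though whether that is the intended fix for the report is for the maintainers to confirm.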
tensorflowtensorflow | 2.2.0rc3: distributed training with Keras and ThreadPoolDataset runs out of memory | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04. TensorFlow installed from: binary. TensorFlow version: 2.2.0rc3 and tf-nightly. Python version: 3.7. CUDA/cuDNN version: 10.1 / 7.6.5.32. GPU model and memory: 4 x NVIDIA V100 on GCP. Describe the current behavior: When running the code below, which caches the training and validation datasets, in a multi-GPU environment (I am using a GCP VM with 312 GB of memory and 4 NVIDIA V100s), memory increases during each validation run until the VM runs out of memory. This behaviour can be observed on 2.2.0rc3 and on the latest nightly. It looks like the validation dataset is not properly cached, since I can still see network access during validation; the memory usage drops below the theoretical cache memory requirement after validation has finished and then increases linearly during the next validation round, to a point larger than the memory usage in the previous epoch. In the example below I intentionally use a very large validation set to make this memory increase very obvious and make training crash within the first 5 epochs. This behaviour can also be observed with other datasets, but the memory increase will be less noticeable on smaller datasets — in which case the memory usage may still be stable. To narrow down the possible causes, I found two cases where this issue doesn't exist: 1. When running on a single GPU, memory usage is stable. 2. TensorFlow Datasets uses a PrivateThreadPoolDataset (see the linked lines L360–L362) by setting experimental_threading.private_threadpool_size = 16 (linked line L62) as a default option. When disabling this option, the memory usage is stable again. Unfortunately this is not a valid workaround in userland, since the dataset option cannot be overwritten with experimental_threading.private_threadpool_size = None, as it expects an integer. @yhliang2018 @tomerk @byronyi — this seems to be a complicated interaction between tf.data, tf.keras and tf.distribute. Do you have an idea what could cause this behaviour? Please let me know what additional information I could provide. I've run into similar issues with experimental_threading.private_threadpool_size on TF 2.0.0 in the past, though I never investigated the root cause in detail, so this might not be an entirely new regression. Describe the expected behavior: Memory usage should be stable after the first epoch. Standalone code to reproduce the issue:

```python
import tensorflow as tf
import tensorflow_datasets as tfds

BATCH_SIZE = 1024

dataset = tfds.load('imagenet2012:5.0.0',
                    decoder={'image': tfds.decode.SkipDecoding()},
                    split='train', data_dir='gs://my_cloud_bucket')
val_dataset = tfds.load('imagenet2012:5.0.0',
                        decoder={'image': tfds.decode.SkipDecoding()},
                        split='validation', data_dir='gs://my_cloud_bucket')


def decode_and_center_crop(image_bytes):
    """Crops to the center of the image with padding, then scales image_size."""
    shape = tf.image.extract_jpeg_shape(image_bytes)
    image_height = shape[0]
    image_width = shape[1]
    image_size = 224

    padded_center_crop_size = tf.cast(
        (image_size / (image_size + 32)) *
        tf.cast(tf.minimum(image_height, image_width), tf.float32),
        tf.int32)

    offset_height = ((image_height - padded_center_crop_size) + 1) // 2
    offset_width = ((image_width - padded_center_crop_size) + 1) // 2
    crop_window = tf.stack([offset_height, offset_width,
                            padded_center_crop_size, padded_center_crop_size])
    image = tf.image.decode_and_crop_jpeg(image_bytes, crop_window, channels=3)
    return tf.image.resize(image, [image_size, image_size], method='bicubic')


def preprocessing(data):
    return tf.cast(decode_and_center_crop(data['image']), tf.float32), data['label']


dataset = (dataset.cache()
           .map(preprocessing, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .batch(BATCH_SIZE)
           .prefetch(1))
val_dataset = (val_dataset.cache()
               .map(preprocessing, num_parallel_calls=tf.data.experimental.AUTOTUNE)
               .batch(BATCH_SIZE)
               .prefetch(1))

with tf.distribute.MirroredStrategy().scope():
    model = tf.keras.models.Sequential([
        tf.keras.layers.GlobalMaxPool2D(input_shape=(224, 224, 3)),
        tf.keras.layers.Dense(1000, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy', 'sparse_top_k_categorical_accuracy'])

model.fit(val_dataset, epochs=5, validation_data=dataset)
```

Other info / logs: To monitor memory usage over time, tools like ytop can be used.
tensorflowtensorflow | Gradients do not exist for variables manually added to a Wrapper layer | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 18.04.3. Mobile device: n/a. TensorFlow installed from: binary. TensorFlow version: v2.1.0-rc2-17-ge5bf8de. Python version: 3.6.9. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: 10.2. GPU model and memory: RTX 2080 Ti. Describe the current behavior: When using a Wrapper layer, parameters added using self.add_weight are ignored by the gradients, although the output depends on those parameters. A regularization term added using self.add_loss also depends on the same parameters. This is visible as the warning "WARNING:tensorflow:Gradients do not exist for variables ['loss_ignore_3/p_logit:0', 'loss_ignore_4/p_logit:0'] when minimizing the loss." This happened when I was trying to port the concrete dropout implementation (linked) to TensorFlow 2. A minimal example, with some of the complexity stripped away (and not necessarily sensible math), is posted below. Describe the expected behavior: Gradients should exist for those variables, which should be optimized together with the neural network weights. Standalone code to reproduce the issue: an earlier version of this problem was posted in the linked report.
tensorflowtensorflow | Compilation error | Bug | TensorFlow Micro system information: Host OS platform and distribution: Windows 7 Professional. TensorFlow installed from: source. TensorFlow version (commit SHA if source): (not given). Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): STM32F746 Discovery kit. Describe the problem: error during compilation. Please provide the exact sequence of commands/steps when you ran into the problem: I downloaded the source code as described in the manual and installed mbed-cli. I opened a command line and ran `make -f tensorflow/lite/micro/tools/make/Makefile TARGET=mbed TAGS="cmsis disco_f746ng" generate_micro_speech_mbed_project`. I get the following error message:

```
C:\Users\anton paus\voice\tensorflow-master2> make -f tensorflow/lite/micro/tools/make/Makefile TARGET=mbed TAGS="cmsis disco_f746ng" generate_micro_speech_mbed_project
process_begin: CreateProcess(NULL, uname -m, ...) failed.
Makefile:28: pipe: No error
tensorflow/lite/micro/tools/make/download_and_extract.sh https://github.com/google/gemmlowp/archive/719139ce755a0f31cbf1c37f7f98adcc7fc9f425.zip 7e8191b24853d75de2af87622ad293ba tensorflow/lite/micro/tools/make/downloads/gemmlowp
tensorflow/lite/micro/tools/make/download_and_extract.sh: line 110: Syntax error: Bad for loop variable
tensorflow/lite/micro/tools/make/Makefile:271: recipe for target 'tensorflow/lite/micro/tools/make/downloads/gemmlowp' failed
make: *** [tensorflow/lite/micro/tools/make/downloads/gemmlowp] Error 2
```

I have no idea what could be wrong. During the installation of mbed-cli, the installer could not install the serial driver because it did not recognize the Mbed-enabled board.
tensorflowtensorflow | Create a serving graph separately from training | Bug | OS platform and distribution: macOS Catalina 10.15.3. TensorFlow installed from: binary. TensorFlow version: 1.15.0. Python version: 3.7.3. Can we get an example of this ("Create a serving graph separately from training")? It's not very clear from the docs. I basically have checkpoint files and want to convert them into a .pb file with the feature transformation as part of it. I already have the code for converting to features using the tf API.
tensorflowtensorflow | tf.function raises TypeError | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 16.04. TensorFlow installed from: binary (pip). TensorFlow version: v2.1.0-rc2-17-ge5bf8de 2.1.0. Python version: 3. GPU model and memory: Tesla V100. Describe the current behavior: When running the code attached below (taken from the linked repository), it crashes with the trace attached at the bottom, but when I remove the @tf.function annotation it works. Describe the expected behavior: I would expect the annotation not to make the program crash. Standalone code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import concatenate
from tensorflow.keras import initializers
import tensorflow.keras.backend as K


def _conv_layer(filters, kernel_size, strides=(1, 1), padding='same', name=None):
    return Conv2D(filters, kernel_size, strides=strides, padding=padding,
                  use_bias=True, kernel_initializer='he_normal', name=name)


def _normalize_depth_vars(depth_k, depth_v, filters):
    if type(depth_k) == float:
        depth_k = int(filters * depth_k)
    else:
        depth_k = int(depth_k)

    if type(depth_v) == float:
        depth_v = int(filters * depth_v)
    else:
        depth_v = int(depth_v)

    return depth_k, depth_v


class AttentionAugmentation2D(Layer):

    def __init__(self, depth_k, depth_v, num_heads, relative=True, **kwargs):
        super(AttentionAugmentation2D, self).__init__(**kwargs)

        if depth_k % num_heads != 0:
            raise ValueError('depth_k (%d) is not divisible by num_heads (%d)'
                             % (depth_k, num_heads))
        if depth_v % num_heads != 0:
            raise ValueError('depth_v (%d) is not divisible by num_heads (%d)'
                             % (depth_v, num_heads))
        if depth_k // num_heads < 1:
            raise ValueError('depth_k / num_heads cannot be less than 1. '
                             'Given depth_k = %d, num_heads = %d'
                             % (depth_k, num_heads))
        if depth_v // num_heads < 1:
            raise ValueError('depth_v / num_heads cannot be less than 1. '
                             'Given depth_v = %d, num_heads = %d'
                             % (depth_v, num_heads))

        self.depth_k = depth_k
        self.depth_v = depth_v
        self.num_heads = num_heads
        self.relative = relative

        self.axis = 1 if K.image_data_format() == 'channels_first' else -1

    def build(self, input_shape):
        self._shape = input_shape

        # Normalize the format of depth_v and depth_k.
        self.depth_k, self.depth_v = _normalize_depth_vars(
            self.depth_k, self.depth_v, input_shape)

        if self.axis == 1:
            _, channels, height, width = input_shape
        else:
            _, height, width, channels = input_shape

        if self.relative:
            dk_per_head = self.depth_k // self.num_heads
            if dk_per_head == 0:
                print('dk per head', dk_per_head)

            self.key_relative_w = self.add_weight(
                'key_rel_w',
                shape=[2 * width - 1, dk_per_head],
                initializer=initializers.RandomNormal(stddev=dk_per_head ** -0.5))
            self.key_relative_h = self.add_weight(
                'key_rel_h',
                shape=[2 * height - 1, dk_per_head],
                initializer=initializers.RandomNormal(stddev=dk_per_head ** -0.5))
        else:
            self.key_relative_w = None
            self.key_relative_h = None

    def call(self, inputs, **kwargs):
        if self.axis == 1:
            # If channels first, force channels last for these ops.
            inputs = K.permute_dimensions(inputs, [0, 2, 3, 1])

        q, k, v = tf.split(inputs, [self.depth_k, self.depth_k, self.depth_v],
                           axis=-1)

        q = self.split_heads_2d(q)
        k = self.split_heads_2d(k)
        v = self.split_heads_2d(v)

        # Scale query.
        depth_k_heads = self.depth_k / self.num_heads
        q *= depth_k_heads ** -0.5

        # [batch, num_heads, height, width, depth_k or depth_v] if axis == -1
        qk_shape = [self._batch, self.num_heads, self._height * self._width,
                    self.depth_k // self.num_heads]
        v_shape = [self._batch, self.num_heads, self._height * self._width,
                   self.depth_v // self.num_heads]
        flat_q = K.reshape(q, K.stack(qk_shape))
        flat_k = K.reshape(k, K.stack(qk_shape))
        flat_v = K.reshape(v, K.stack(v_shape))

        # [batch, num_heads, HW, HW]
        logits = tf.matmul(flat_q, flat_k, transpose_b=True)

        # Apply relative encodings.
        if self.relative:
            h_rel_logits, w_rel_logits = self.relative_logits(q)
            logits += h_rel_logits
            logits += w_rel_logits

        weights = K.softmax(logits, axis=-1)
        attn_out = tf.matmul(weights, flat_v)

        attn_out_shape = [self._batch, self.num_heads, self._height,
                          self._width, self.depth_v // self.num_heads]
        attn_out_shape = K.stack(attn_out_shape)
        attn_out = K.reshape(attn_out, attn_out_shape)
        attn_out = self.combine_heads_2d(attn_out)
        # [batch, height, width, depth_v]

        if self.axis == 1:
            # Return to [batch, depth_v, height, width] for channels first.
            attn_out = K.permute_dimensions(attn_out, [0, 3, 1, 2])

        attn_out.set_shape(self.compute_output_shape(self._shape))
        return attn_out

    def compute_output_shape(self, input_shape):
        output_shape = list(input_shape)
        output_shape[self.axis] = self.depth_v
        return tuple(output_shape)

    def split_heads_2d(self, ip):
        tensor_shape = K.shape(ip)

        # [batch, height, width, channels] for axis = -1
        tensor_shape = [tensor_shape[i] for i in range(len(self._shape))]

        batch = tensor_shape[0]
        height = tensor_shape[1]
        width = tensor_shape[2]
        channels = tensor_shape[3]

        # Save the spatial tensor dimensions.
        self._batch = batch
        self._height = height
        self._width = width

        ret_shape = K.stack([batch, height, width,
                             self.num_heads, channels // self.num_heads])
        split = K.reshape(ip, ret_shape)
        transpose_axes = (0, 3, 1, 2, 4)
        split = K.permute_dimensions(split, transpose_axes)
        return split

    def relative_logits(self, q):
        shape = K.shape(q)
        # [batch, num_heads, H, W, depth_k // num_heads]
        shape = [shape[i] for i in range(5)]

        height = shape[2]
        width = shape[3]

        rel_logits_w = self.relative_logits_1d(
            q, self.key_relative_w, height, width,
            transpose_mask=[0, 1, 2, 4, 3, 5])

        rel_logits_h = self.relative_logits_1d(
            K.permute_dimensions(q, [0, 1, 3, 2, 4]),
            self.key_relative_h, width, height,
            transpose_mask=[0, 1, 4, 2, 5, 3])

        return rel_logits_h, rel_logits_w

    def relative_logits_1d(self, q, rel_k, h, w, transpose_mask):
        rel_logits = tf.einsum('bhxyd,md->bhxym', q, rel_k)
        rel_logits = K.reshape(rel_logits, [-1, self.num_heads * h, w, 2 * w - 1])
        rel_logits = self.rel_to_abs(rel_logits)
        rel_logits = K.reshape(rel_logits, [-1, self.num_heads, h, w, w])
        rel_logits = K.expand_dims(rel_logits, axis=3)
        rel_logits = K.tile(rel_logits, [1, 1, 1, h, 1, 1])
        rel_logits = K.permute_dimensions(rel_logits, transpose_mask)
        rel_logits = K.reshape(rel_logits, [-1, self.num_heads, h * w, h * w])
        return rel_logits

    def rel_to_abs(self, x):
        shape = K.shape(x)
        shape = [shape[i] for i in range(3)]
        B, Nh, L = shape
        col_pad = K.zeros(K.stack([B, Nh, L, 1]))
        x = K.concatenate([x, col_pad], axis=3)
        flat_x = K.reshape(x, [B, Nh, L * 2 * L])
        flat_pad = K.zeros(K.stack([B, Nh, L - 1]))
        flat_x_padded = K.concatenate([flat_x, flat_pad], axis=2)
        final_x = K.reshape(flat_x_padded, [B, Nh, L + 1, 2 * L - 1])
        final_x = final_x[:, :, :L, L - 1:]
        return final_x

    def combine_heads_2d(self, inputs):
        # [batch, num_heads, height, width, depth_v // num_heads]
        transposed = K.permute_dimensions(inputs, [0, 2, 3, 1, 4])
        # [batch, height, width, num_heads, depth_v // num_heads]
        shape = K.shape(transposed)
        shape = [shape[i] for i in range(5)]

        a, b = shape[-2:]
        ret_shape = K.stack(shape[:-2] + [a * b])
        # [batch, height, width, depth_v]
        return K.reshape(transposed, ret_shape)

    def get_config(self):
        config = {
            'depth_k': self.depth_k,
            'depth_v': self.depth_v,
            'num_heads': self.num_heads,
            'relative': self.relative,
        }
        base_config = super(AttentionAugmentation2D, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))


def augmented_conv2d(ip, filters, kernel_size=(3, 3), strides=(1, 1),
                     depth_k=0.2, depth_v=0.2, num_heads=8,
                     relative_encodings=True):
    channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
    depth_k, depth_v = _normalize_depth_vars(depth_k, depth_v, filters)

    conv_out = _conv_layer(filters - depth_v, kernel_size, strides)(ip)

    # Augmented attention block.
    qkv_conv = _conv_layer(2 * depth_k + depth_v, (1, 1), strides)(ip)
    attn_out = AttentionAugmentation2D(depth_k, depth_v, num_heads,
                                       relative_encodings)(qkv_conv)
    attn_out = _conv_layer(depth_v, kernel_size=(1, 1))(attn_out)

    output = concatenate([conv_out, attn_out], axis=channel_axis)
    output = BatchNormalization()(output)
    return output


@tf.function
def main():
    from tensorflow.keras.layers import Input
    from tensorflow.keras.models import Model

    ip = Input(shape=(32, 32, 3))
    x = augmented_conv2d(ip, filters=20, kernel_size=(3, 3),
                         depth_k=0.2, depth_v=0.2,  # dk/v (0.2) * f_out (20) = 4
                         num_heads=4, relative_encodings=True)

    model = Model(ip, x)
    model.summary()

    # Check if attention builds properly.
    x = tf.zeros((1, 32, 32, 3))
    y = model(x)
    print('Attention augmented conv out shape:', y.shape)


if __name__ == '__main__':
    main()
```

Other info / logs (the traceback is cut off here in the source):

```
TypeError: in converted code:

    attn_aug_conv.py:318 main
        x = augmented_conv2d(ip, filters=20, kernel_size=
```
3 3 attn aug conv py 305 augmented conv2d attn out attentionaugmentation2d depth k depth v num head relative encoding qkv conv home miniconda3 lib python3 7 site package tensorflow core python keras engine base layer py 778 call output call fn cast input args kwargs attn aug conv py 158 call h rel logit w rel logit self relative logit q attn aug conv py 215 relative logit rel logit w self relative logit 1d q self key relative w height width attn aug conv py 228 relative logit 1d rel logit self rel to abs rel logit attn aug conv py 240 rel to abs col pad k zeros k stack b nh l 1 home miniconda3 lib python3 7 site package tensorflow core python keras backend py 1300 zero v array op zeros shape shape dtype tf dtype name name home miniconda3 lib python3 7 site package tensorflow core python op array op py 2446 zero output fill shape constant zero dtype dtype name name home miniconda3 lib python3 7 site package tensorflow core python op array op py 233 fill result gen array op fill dim value name name home miniconda3 lib python3 7 site package tensorflow core python ops gen array op py 3240 fill dim value name name ctx ctx home miniconda3 lib python3 7 site package tensorflow core python ops gen array op py 3267 fill eager fallback ctx ctx name name home miniconda3 lib python3 7 site package tensorflow core python eager execute py 76 quick execute raise e home miniconda3 lib python3 7 site package tensorflow core python eager execute py 61 quick execute num output typeerror an op outside of the function building code be be pass a graph tensor it be possible to have graph tensor leak out of the function building context by include a tf init scope in your function build code for example the follow function will fail tf function def have init scope my constant tf constant 1 with tf init scope add my constant 2 the graph tensor have name attention augmentation2d stack 6 0 |
tensorflow/tensorflow | Wrong function in examples for tensor_diag | Bug |
The examples in the documentation of tf.linalg.tensor_diag_part and tf.linalg.tensor_diag show the non-tensor version of these functions, e.g.:

    # 'diagonal' is [1, 2, 3, 4]
    tf.diag(diagonal) ==> [[1, 0, 0, 0],
                           [0, 2, 0, 0],
                           [0, 0, 3, 0],
                           [0, 0, 0, 4]]

See the linked documentation pages.
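The matrix in the documented example above can be reproduced with a plain-Python sketch of what tf.linalg.tensor_diag computes for a rank-1 input. This is an illustrative re-implementation, not the TensorFlow kernel:

```python
def tensor_diag(diagonal):
    """Return a square matrix with `diagonal` on its main diagonal.

    Plain-Python sketch of tf.linalg.tensor_diag for a rank-1 input;
    all off-diagonal entries are zero.
    """
    n = len(diagonal)
    return [[diagonal[i] if i == j else 0 for j in range(n)] for i in range(n)]

# Matches the matrix shown in the docs for diagonal = [1, 2, 3, 4].
print(tensor_diag([1, 2, 3, 4]))
```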
tensorflow/tensorflow | Unwanted tf.function retracing when using variable-length inputs | Bug |
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS platform and distribution: Linux Ubuntu 16.04
TensorFlow installed from: pip
TensorFlow version: 2.2.0rc2
Python version: 3.6.8

Describe the current behavior: a lot of warnings saying that tf.function retracing is happening when using a Keras model in a loop with variable-length inputs.

Describe the expected behavior: I would like no retracing when there is no need, for example with a fully convolutional model.

Standalone code to reproduce the issue:

    from random import randint
    import tensorflow as tf
    from tensorflow.keras.layers import Conv1D
    from tensorflow.keras.models import Sequential

    model = Sequential()
    model.add(Conv1D(8, 3))
    model.build((None, 12, 1))
    predict_tensors = [tf.random.normal((randint(1, 8), randint(4, 40), 1)) for _ in range(10)]
    for t in predict_tensors:
        model.predict(t)

Other info / logs:

    WARNING: Logging before flag parsing goes to stderr.
    W0406 09:22:52.525994 139643050075904 def_function.py:598] 5 out of the last 6 calls to <function ...predict_function at 0x7f00a7fc1268> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing Python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to ... for more details.

The same warning repeats at 09:22:52.615050 ("6 out of the last 7 calls"), 09:22:52.653312 ("7 out of the last 8 calls") and 09:22:52.706550 ("8 out of the last 10 calls").

This issue was originally described here (see the linked comment), and some other people have had trouble with training as well (see the linked comment). When switching back to 2.1 the problem is gone.
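The warnings arise because tf.function keys its trace cache on the input signature, so every previously unseen input shape triggers a fresh trace. A minimal pure-Python analogue of shape-keyed tracing (illustrative only — this is not TensorFlow's actual cache, and the class name is hypothetical):

```python
class ShapeKeyedTracer:
    """Re-run an expensive 'trace' once per distinct input shape, the way
    tf.function does for tensors without a relaxed input signature."""

    def __init__(self, fn):
        self.fn = fn
        self.traces = {}       # shape -> "compiled" function
        self.trace_count = 0   # how many expensive retraces happened

    def __call__(self, batch):
        shape = (len(batch), len(batch[0]))  # (rows, cols) of a nested list
        if shape not in self.traces:
            self.trace_count += 1            # expensive retrace happens here
            self.traces[shape] = self.fn
        return self.traces[shape](batch)

tracer = ShapeKeyedTracer(lambda b: sum(sum(row) for row in b))
tracer([[1, 2], [3, 4]])   # first shape  -> trace 1
tracer([[5, 6], [7, 8]])   # same shape   -> cached, no retrace
tracer([[1, 2, 3]])        # new shape    -> trace 2
print(tracer.trace_count)  # → 2
```

With random input lengths, as in the report above, nearly every call is a new shape, so nearly every call retraces.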
tensorflow/tensorflow | Failed to use vectorized mapping for tf.data | Bug |
System information:
OS platform and distribution: Ubuntu 18.04
TensorFlow installed from: pip
TensorFlow version: 2.1.0
CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5
GPU model and memory: (see above)

Describe the current behavior: failed to apply vectorized mapping for tf.data.
Describe the expected behavior: apply vectorized mapping for tf.data and get the speedup described in the "vectorizing mapping" guide.

Standalone code to reproduce the issue (a minimal reproducible test case):

    import tensorflow as tf

    dataset = tf.data.TFRecordDataset('image.tfrecord')
    image_feature_description = {
        'height': tf.io.FixedLenFeature([], tf.int64),
        'width': tf.io.FixedLenFeature([], tf.int64),
        'depth': tf.io.FixedLenFeature([], tf.int64),
        'bboxes': tf.io.VarLenFeature(tf.int64),
        'image_raw': tf.io.FixedLenFeature([], tf.string),
    }

    def parse_example(example):
        data = tf.io.parse_single_example(example, image_feature_description)
        img = tf.io.decode_jpeg(data['image_raw'])
        img = tf.image.resize(img, (416, 416))
        bboxes = data['bboxes']
        bboxes = tf.sparse.to_dense(bboxes)
        bboxes = tf.reshape(bboxes, (-1, 5))
        return img, bboxes

    dataset = dataset.map(parse_example).batch(1)  # this works
    dataset = dataset.batch(1).map(parse_example)  # trying to apply vectorized mapping, but an error is raised

Other info / logs:

    Traceback (most recent call last):
      File "test_tfrecord.py", line 28, in <module>
        dataset = dataset.batch(1).map(parse_example)
      File "/home/wilson/venv/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 1588, in map
        return MapDataset(self, map_func, preserve_cardinality=True)
      File ".../dataset_ops.py", line 3888, in __init__
        use_legacy_function=use_legacy_function)
      File ".../dataset_ops.py", line 3147, in __init__
        self._function = wrapper_fn._get_concrete_function_internal()
      File ".../eager/function.py", line 2395, in _get_concrete_function_internal
        *args, **kwargs)
      File ".../eager/function.py", line 2389, in _get_concrete_function_internal_garbage_collected
        graph_function = self._maybe_define_function(args, kwargs)
      File ".../eager/function.py", line 2703, in _maybe_define_function
        graph_function = self._create_graph_function(args, kwargs)
      File ".../eager/function.py", line 2593, in _create_graph_function
        capture_by_value=self._capture_by_value)
      File ".../framework/func_graph.py", line 978, in func_graph_from_py_func
        func_outputs = python_func(*func_args, **func_kwargs)
      File ".../dataset_ops.py", line 3140, in wrapper_fn
        ret = _wrapper_helper(*args)
      File ".../dataset_ops.py", line 3082, in _wrapper_helper
        ret = autograph.tf_convert(func, ag_ctx)(*nested_args)
      File ".../autograph/impl/api.py", line 237, in wrapper
        raise e.ag_error_metadata.to_exception(e)
    ValueError: in converted code:

        test_tfrecord.py:14 parse_example
            data = tf.io.parse_single_example(example, image_feature_description)
        .../ops/parsing_ops.py:472 parse_single_example_v2_unoptimized
            serialized = _assert_scalar(serialized, "serialized")
        .../ops/parsing_ops.py:1319 _assert_scalar
            raise ValueError("Input %s must be a scalar" % name)

        ValueError: Input serialized must be a scalar

How should I fix it? Thanks.
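The error occurs because tf.io.parse_single_example accepts only a single serialized record (a scalar), so it cannot run after .batch() hands it a vector of records. The shape mismatch can be sketched in plain Python — both parse functions below are hypothetical stand-ins, the first mirroring parse_single_example's scalar requirement, the second a batch-aware variant analogous to tf.io.parse_example:

```python
def parse_single(record):
    """Accepts one serialized record (a scalar); rejects batches,
    mirroring tf.io.parse_single_example's scalar requirement."""
    if isinstance(record, list):
        raise ValueError("Input serialized must be a scalar")
    return {"parsed": record.upper()}

def parse_batch(records):
    """Batch-aware variant, analogous to tf.io.parse_example."""
    return [{"parsed": r.upper()} for r in records]

records = ["a", "b", "c"]

# map-then-batch: parse each scalar record, then group -- works
mapped = [parse_single(r) for r in records]

# batch-then-map needs the batch-aware parser
batched = [records]                          # one batch of three records
vectorized = [parse_batch(b) for b in batched]
```

The analogous fix in the issue above would be to switch the map function to a batch-aware parser when mapping after batching.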
tensorflow/tensorflow | Cannot create control inputs with tf.control_dependencies | Bug |
I was expecting to get some control inputs with tf.control_dependencies, but the following test code does not give back what I expect:

    import tensorflow as tf
    a = tf.get_variable('a', shape=(2, 3))
    b = tf.get_variable('b', shape=(2, 3))
    c = tf.scalar_mul(2., a)
    d = tf.scalar_mul(3., b)
    with tf.control_dependencies([d]):
        f = d + c
    print(f.op.control_inputs)

It returns []. If I do it the other way:

    f = d + c
    f.op._add_control_inputs([c.op, d.op])
    print(f.op.control_inputs)

I get [c.op, d.op], which seems good. So does tf.control_dependencies create control dependencies at all, or does op.control_inputs not reflect all the control inputs? Could this be a bug?
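One likely explanation (hedged — this reflects my understanding of TensorFlow's graph builder, not text from the report): when an op already consumes a tensor as a data input, a requested control dependency on that tensor's op is redundant and is pruned, so f = d + c inside control_dependencies([d]) ends up with no control inputs. A toy sketch of that pruning rule, with a hypothetical helper name:

```python
def pruned_control_inputs(data_inputs, requested_controls):
    """Mimic the pruning: a requested control dependency is dropped when
    the op already consumes that value as a data input (the control
    edge would add no ordering constraint)."""
    return [ctrl for ctrl in requested_controls if ctrl not in data_inputs]

# f = d + c inside control_dependencies([d]): d is already a data input,
# so the control edge is pruned and f.op.control_inputs is empty.
print(pruned_control_inputs(data_inputs=["d", "c"], requested_controls=["d"]))  # → []

# If f did NOT consume d, the control edge would be kept.
print(pruned_control_inputs(data_inputs=["c"], requested_controls=["d"]))       # → ['d']
```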
tensorflow/tensorflow | tf.function: tf.Variable converted to tf.Tensor automatically in second loop | Bug |
System information:
Have I written custom code: yes
OS platform and distribution: macOS Mojave 10.14.5
TensorFlow installed from: pip
Python version: 3.6.5

Describe the current behavior: a tf.Variable is converted to a tf.Tensor automatically after the second loop iteration in a function decorated by tf.function.
Describe the expected behavior: it should not be converted automatically.

Standalone code to reproduce the issue:

    @tf.function
    def foo(a):
        print(a)
        for i in range(10):
            if a[0] > 3:
                print(f'true a{i}', a)
                a = a[0].assign(1)
            else:
                print(f'false a{i}', a)
                a = a[0].assign(2)

    a = tf.Variable(np.array([1, 2, 3]))
    foo(a)

Other info / logs:

    ValueError Traceback (most recent call last)
      .../tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
        568       result = self._call(*args, **kwds)
      .../tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
        615       self._initialize(args, kwds, add_initializers_to=initializer_map)
      .../tensorflow_core/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
        497       *args, **kwds)
      .../tensorflow_core/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
       2389       graph_function, _, _ = self._maybe_define_function(args, kwargs)
      .../tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
       2703       graph_function = self._create_graph_function(args, kwargs)
      .../tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
       2593       capture_by_value=self._capture_by_value),
      .../tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(...)
        978       func_outputs = python_func(*func_args, **func_kwargs)
      .../tensorflow_core/python/framework/func_graph.py in wrapper(*args, **kwargs)
        968       raise e.ag_error_metadata.to_exception(e)
    ValueError: in converted code:

        <ipython-input>:7 foo
            a = a[0].assign(1)
        .../tensorflow_core/python/ops/array_ops.py:1074 assign
            raise ValueError("Sliced assignment is only supported for variables")

        ValueError: Sliced assignment is only supported for variables
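A plausible reading of the failure above (my interpretation, not text from the report): assign returns a tensor-valued result, and the loop rebinds the name a to that return value, so from the second iteration on a is no longer a variable and sliced assignment fails. The rebinding mistake can be reproduced in plain Python with a toy Var class (entirely hypothetical, only loosely modeled on tf.Variable):

```python
class Var:
    """Toy stand-in for a mutable variable: assign() mutates in place
    but returns a read-only snapshot (a plain list), not the Var."""

    def __init__(self, values):
        self.values = list(values)

    def assign(self, idx, v):
        self.values[idx] = v
        return list(self.values)   # snapshot, not a Var

a = Var([1, 2, 3])
snapshot = a.assign(0, 9)          # fine: a is still a Var
a = a.assign(0, 5)                 # bug: a is rebound to a plain list
try:
    a.assign(0, 7)                 # the next "iteration" fails, as in the issue
    failed = False
except AttributeError:
    failed = True
```

Keeping the variable bound to its own name (calling a.assign(...) without rebinding a) avoids the problem in this sketch.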
tensorflow/tensorflow | Keras model in SavedModel format errors on loading: ValueError: Model inputs are already set | Bug |
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS platform and distribution: ProductName: Mac OS X, ProductVersion: 10.15.2, BuildVersion: 19C57
TensorFlow installed from: pip
TensorFlow version: 2.1.0
Python version: 3.6.8
CUDA/cuDNN version: none
GPU model and memory: none

Describe the current behavior: when trying to load one of my SavedModel-format models (saved using 1.15.0) with tf.keras.models.load_model, an error is thrown at the following location:

    File "/Users/saurabh/.pyenv/versions/emotion/python/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2671, in _set_input_attrs
        raise ValueError('Model inputs are already set.')

I can successfully load and run this model using TensorFlow versions 2.0.0, 1.15.0 and 1.14.0.

Describe the expected behavior: can successfully load a model from an smb SavedModel-format file.

Code to reproduce the issue:

    import tensorflow as tf
    model_smb = tf.keras.models.load_model('smbnew/', compile=False)

Other info / logs: I am also attaching a dummy SavedModel below which can be used to test. Complete stack trace of the error:

    Traceback (most recent call last):
      File "setup.py", line 9, in <module>
        model_smb = tf.keras.models.load_model('smbnew/', compile=False)
      File ".../python3.6/site-packages/tensorflow_core/python/keras/saving/save.py", line 150, in load_model
        return saved_model_load.load(filepath, compile)
      File ".../python3.6/site-packages/tensorflow_core/python/keras/saving/saved_model/load.py", line 89, in load
        model = tf_load.load_internal(path, loader_cls=KerasObjectLoader)
      File ".../python3.6/site-packages/tensorflow_core/python/saved_model/load.py", line 552, in load_internal
        export_dir)
      File ".../python3.6/site-packages/tensorflow_core/python/keras/saving/saved_model/load.py", line 119, in __init__
        self._finalize()
      File ".../python3.6/site-packages/tensorflow_core/python/keras/saving/saved_model/load.py", line 165, in _finalize
        node._set_inputs(inputs)
      File ".../python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2647, in _set_inputs
        inputs = self._set_input_attrs(inputs)
      File ".../python3.6/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
        result = method(self, *args, **kwargs)
      File ".../python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2671, in _set_input_attrs
        raise ValueError('Model inputs are already set.')
    ValueError: Model inputs are already set.

When loading with tf.keras in v2.0.0, the layers, model config, inputs, outputs, summary etc. are all parsed correctly, and I am able to run data through the model.
tensorflow/tensorflow | Ftrl math displays messy | Bug |
In this doc the math is messed up, see the attached screenshot. Using the Chrome browser.
tensorflow/tensorflow | tf.image.extract_glimpse does not work as it should (TensorFlow 2.1) | Bug |
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS platform and distribution: Windows 10, 64-bit
Mobile device: no
TensorFlow installed from: binary (NVIDIA)
TensorFlow version: 2.1
Python version: 3.6.9

Describe the problem: hello, I was trying to use tf.image.extract_glimpse and I realised that it does not work as it should. The issue is the same as the one reported 4 years ago (#7681), and it was solved then, but it seems that in the new version of TensorFlow (2.1) it is not fixed.

Source code / logs: you can try to reproduce with this code:

    batch_size = 1
    image_height = 7
    image_width = 7
    channels = 1
    glimpse_size = (3, 3)
    image = tf.reshape(tf.range(49, delta=1, dtype=tf.float32),
                       shape=(batch_size, image_height, image_width, channels))
    output1 = tf.image.extract_glimpse(image, size=glimpse_size, offsets=[[1, 1]],
                                       centered=False, normalized=False)
    output2 = tf.image.extract_glimpse(image, size=glimpse_size, offsets=[[2, 2]],
                                       centered=False, normalized=False)

    # output1 == [[0, 1, 2], [5, 6, 7], [10, 11, 12]]
    # output2 == [[0, 1, 2], [5, 6, 7], [10, 11, 12]]

The results are the same as those reported in issue #7681.
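For reference, the behaviour the reporter expected (two different offsets yielding two different windows) can be sketched in plain Python. This is an illustrative re-implementation, not the TF op: it assumes the offset gives the window centre in pixels and ignores boundary handling:

```python
def make_image(h, w):
    """Test image where pixel (r, c) holds the value r*w + c,
    like tf.range(h*w) reshaped to (h, w)."""
    return [[r * w + c for c in range(w)] for r in range(h)]

def glimpse(image, size, offset):
    """Extract a size[0] x size[1] window centred at offset=(row, col).

    Pure-Python sketch of the expected extract_glimpse behaviour with
    centered=False, normalized=False; windows touching the border are
    simply clipped rather than padded.
    """
    gh, gw = size
    r0 = offset[0] - gh // 2
    c0 = offset[1] - gw // 2
    return [row[c0:c0 + gw] for row in image[r0:r0 + gh]]

img = make_image(7, 7)
g1 = glimpse(img, (3, 3), (1, 1))  # window around pixel (1, 1)
g2 = glimpse(img, (3, 3), (2, 2))  # window around pixel (2, 2) -- should differ
print(g1)
print(g2)
```

The bug report above shows the TF op returning the same window for both offsets, which is what this sketch demonstrates should not happen.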
tensorflow/tensorflow | Long latency after both post-training quantization and quantization-aware training | Bug |
System information:
OS platform and distribution: Linux Ubuntu 18.04.4
Mobile device: none
TensorFlow installed from: source
TensorFlow version: 1.15.2
Python version: 3.7.7
Installed using virtualenv? pip? conda?: pip
Bazel version: 0.26.1
GCC/compiler version: none
CUDA/cuDNN version: none
GPU model and memory: none

Describe the problem: I tried to evaluate my TensorFlow Lite models using the evaluation tool (link above). I evaluated 3 models: the original model, the model after full-integer quantization, and the model after quantization-aware training. The model I used is SSD MobileNetV2 from the TensorFlow Object Detection API, trained on another dataset. For the data I got, the original model has latency 151954, the full-integer quantized model's latency is 2970549, and the quantization-aware model's is 945065.

My command for running the TensorFlow Lite evaluation tool (coco_object_detection task):

    run_eval \
      --model_file=/home/sicong/Documents/model_zoo/ssd_mobilenet_v2_coco_2018_03_29/udacity/udacity_300_247705/full_integer_quantization_247705_200_dev01_with_input_inference.tflite \
      --ground_truth_images_path=/home/sicong/Documents/dataset/udacity_for_evaluation/images \
      --ground_truth_proto=/home/sicong/Documents/dataset/udacity_for_evaluation/ground_truth_changed_id.pbtxt \
      --model_output_labels=/home/sicong/Documents/dataset/udacity_for_evaluation/udacity_label_map_edge_tpu.pbtxt \
      --output_file_path=/home/sicong/Documents/dataset/udacity_for_evaluation/full_integer_quantization_247705_200_dev01_with_input_inference.txt

Thank you for the help.
tensorflow/tensorflow | Problems with nested tf.function in TensorFlow 2 | Bug |
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS platform and distribution: Ubuntu 18.04.3 LTS (Bionic Beaver)
Mobile device: not tested on mobile
TensorFlow installed from: tensorflow and tf-nightly installed with pip
TensorFlow version: tried with 2.1.0 and tf-nightly (tf 2.2.0)
Python version: Python 3.6
CUDA/cuDNN version: problem persists on CPU and GPU
CPU: Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz
GPU: GP102 [GeForce GTX 1080 Ti], NVIDIA Corporation

Describe the current behavior: it throws a ValueError.
Describe the expected behavior: do not throw a ValueError.

Standalone code to reproduce the issue: I made a small example that reproduces the error (link above). If you run that code as is, it throws an error (see the constraint function). If I just copy the content of the fem function into constraint, then it works, as described in the comments in the constraint function.

Other info / logs: here's the traceback of the error:

    InvalidArgumentError Traceback (most recent call last)
      /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in get_attr(self, name)
       2327       with c_api_util.tf_buffer() as buf:
       2328         pywrap_tf_session.TF_OperationGetAttrValueProto(self._c_op, name, buf)
       2329         data = pywrap_tf_session.TF_GetBuffer(buf)
    InvalidArgumentError: Operation 'StatefulPartitionedCall' has no attr named '_XlaCompile'.

    During handling of the above exception, another exception occurred:

    ValueError Traceback (most recent call last)
    (15 frames)
    ValueError: Operation 'StatefulPartitionedCall' has no attr named '_XlaCompile'.

    During handling of the above exception, another exception occurred:

    ValueError Traceback (most recent call last)
      /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
       1736           raise ValueError("All inputs to ConcreteFunctions must be Tensors; "
       1737                            "on invocation of %s, the %d-th input (%s) was not a "
       1738                            "Tensor." % (self._func_graph.name, i, str(arg)))
       1739       args = tensor_inputs + captured_inputs
       1740       possible_gradient_type = ...
    ValueError: All inputs to ConcreteFunctions must be Tensors; on invocation of __backward_fem_664, the 0-th input (IndexedSlices(indices=Tensor("gradients/PartitionedCall_grad/PartitionedCall_4:1", shape=(3200,), dtype=int64), values=Tensor("gradients/PartitionedCall_grad/PartitionedCall_4:0", shape=(3200,), dtype=float64), dense_shape=Tensor("gradients/PartitionedCall_grad/PartitionedCall_4:2", shape=(1,), dtype=int32))) was not a Tensor.
tensorflow/tensorflow | Link is broken | Bug |
URL(s) with the issue: please provide a link to the documentation entry. For example: "Train and export a TensorFlow model".

Description of issue (what needs changing): in the explanation corresponding to tags in this link ("Train and export a TensorFlow model"), the hyperlink corresponding to the text "related TensorFlow API documentation" is broken.

Clear description: this link is useful for the community to understand the purpose of the different tags used while saving a model.

Parameters defined: are all parameters defined and formatted correctly? Yes.
Returns defined: are return values defined? N/A.
tensorflow/tensorflow | Cannot use set_visible_devices with mixed precision | Bug |
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): kind of, a combination of 2 example scripts
OS platform and distribution: Linux Fedora 31
Mobile device: n/a
TensorFlow installed from: source
TensorFlow version: v2.2.0-rc3-0-gaad398b5e9 2.2.0-rc3
Python version: 3.7.6
Bazel version: 2.0.0
GCC/compiler version: gcc (GCC) 9.2.1 20190827 (Red Hat 9.2.1-1)
CUDA/cuDNN version: CUDA 10.2, cuDNN 7.6.5.33
GPU model and memory: 2x GeForce RTX 2080 Ti, 12 GB

Describe the current behavior: when attempting to use tf.config.set_visible_devices in conjunction with tf.python.keras.mixed_precision.experimental.policy.set_policy, TensorFlow errors with:

    RuntimeError: TensorFlow device (GPU:0) is being mapped to multiple CUDA devices (0 now, and 1 previously), which is not supported. This may be the result of providing different GPU configurations (ConfigProto.gpu_options, for example different visible_device_list) when creating multiple Sessions in the same process. This is not currently supported; see the linked issue.

Describe the expected behavior: no error.

Standalone code to reproduce the issue:

    import tensorflow as tf
    devices = tf.config.list_physical_devices('GPU')
    tf.config.set_visible_devices(devices[1], 'GPU')
    from tensorflow.python.keras.mixed_precision.experimental import policy as mixed_precision
    mixed_precision.set_policy(mixed_precision.Policy('mixed_float16'))
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.