| repository | issue title | labels | body |
|---|---|---|---|
| tensorflow/tensorflow | Incorrect file format mention and missing Raises section in ImageDataGenerator documentation | Bug | The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): 1. In the description corresponding to the `save_format` argument in `ImageDataGenerator.flow`, the statement "one of 'png', 'jpeg' (only relevant if `save_to_dir` is set). Default: 'png'" should be changed to "one of 'png', 'jpeg', 'jpg', 'bmp', 'pdf', 'gif' (only relevant if `save_to_dir` is set). Default: 'png'". Please find the GitHub gist that demonstrates acceptance of the other formats. 2. A Raises section is missing for `ImageDataGenerator`, `ImageDataGenerator.flow`, `ImageDataGenerator.flow_from_directory`, `ImageDataGenerator.flow_from_dataframe`, etc. Clear description: ImageDataGenerator is extremely useful for computer vision tasks, so the clearer and better its documentation is, the more it will help developers. Correct links: Is the link to the source code correct? Yes. Parameters defined: Are all parameters defined and formatted correctly? Yes. Returns defined: Are return values defined? Yes. Raises listed and defined: Are the errors defined? No. Usage example: Is there a usage example? Missing for `ImageDataGenerator.flow_from_dataframe`; see the API guide on how to write testable usage examples. Request visuals, if applicable: Are there currently visuals? If not, will it clarify the content? Yes. Submit a pull request?: Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide. |
| tensorflow/tensorflow | Add the description corresponding to subset='validation', add a Raises section, and modify save_format | Bug | Addresses the doc issues #44587 and #44829. |
| tensorflow/tensorflow | add_loss raises InaccessibleTensorError | Bug | I wrote code for a model (referring to a published example), but the code raises this error: `InaccessibleTensorError: The tensor 'Tensor("add_1:0", shape=(2,), dtype=float32)' cannot be accessed here: it is defined in another function or code block. Use return values, explicit Python locals or TensorFlow collections to access it. Defined in: FuncGraph(name=build_graph, id=140714038692944); accessed from: FuncGraph(name=train_function, id=140714048668056).` The code builds a custom model (partly using the subclassing API). The custom model's `call` method includes a VAE loss: `loss_l2 = tf.reduce_mean(tf.square(z_out - vae), axis=[1, 2, 3, 4])` (the original axis values are [1, 2, 3, 4]); `loss_kl = 1 / self.n * tf.reduce_sum(tf.exp(z_var) + tf.square(z_mean) - 1 - z_var, axis=1)`; `vae_loss = self.weight_l2 * loss_l2 + self.weight_kl * loss_kl`; `self.add_loss(vae_loss)`. When I comment this block out, the error is not raised. Full reproducible code: (Colab, `#scrollTo=8rfnhmcbpnio`). |
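The loss block quoted in the row above computes a standard VAE regularizer; its KL term is the closed-form KL divergence between N(mean, exp(log_var)) and N(0, 1). A framework-free sketch of that formula (pure Python, illustrative names — this is not the reporter's model code; note the reporter's expression omits the 1/2 factor and scales by 1/n instead):

```python
import math

def kl_term(z_mean, z_log_var):
    # KL divergence between N(mean, exp(log_var)) and N(0, 1), summed over
    # latent dimensions: 0.5 * sum(exp(log_var) + mean^2 - 1 - log_var).
    return 0.5 * sum(
        math.exp(lv) + m * m - 1.0 - lv for m, lv in zip(z_mean, z_log_var)
    )

# For a standard-normal latent (mean 0, log-variance 0) the KL term is 0.
print(kl_term([0.0, 0.0], [0.0, 0.0]))  # → 0.0
```

The error itself is orthogonal to the formula: per the message, the tensor handed to `add_loss` was created in one FuncGraph (`build_graph`) and accessed from another (`train_function`).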
| tensorflow/tensorflow | Exception: <unknown>:0: error: loc("batch_normalization/moving_mean"): is not immutable; try running tf-saved-model-optimize-global-tensors to prove tensors are immutable | Bug | System information: OS platform and distribution: both Linux 20.04 and Windows 10. TensorFlow installed from: binary (`pip install tf-nightly`). TensorFlow version: 2.5.0.dev20201111. Command used to run the converter: `(tf25) ubuntu@ubuntu-TFG257XS:~/yolo/tf/tensorflow-yolov4-tflite$ python convert_tflite.py --weights ./checkpoints/yolov4-416-tflite --output ./checkpoints/yolov4-416.tflite`. Code:

```python
import tensorflow as tf
from absl import app, flags, logging
from absl.flags import FLAGS
import numpy as np
import cv2
from core.yolov4 import YOLOv4, YOLOv3, YOLOv3_tiny, decode
import core.utils as utils
import os
from core.config import cfg

flags.DEFINE_string('weights', './checkpoints/yolov4-416-tflite', 'path to weights file')
flags.DEFINE_string('output', './checkpoints/yolov4-416-fp32.tflite', 'path to output')
flags.DEFINE_integer('input_size', 416, 'path to output')
flags.DEFINE_string('quantize_mode', 'float32', 'quantize mode (int8, float16, float32)')
flags.DEFINE_string('dataset', '/Volumes/Elements/data/coco_dataset/coco/5k.txt', 'path to dataset')

def representative_data_gen():
    fimage = open(FLAGS.dataset).read().split()
    for input_value in range(10):
        if os.path.exists(fimage[input_value]):
            original_image = cv2.imread(fimage[input_value])
            original_image = cv2.cvtColor(original_image, cv2.COLOR_BGR2RGB)
            image_data = utils.image_preprocess(np.copy(original_image),
                                                [FLAGS.input_size, FLAGS.input_size])
            img_in = image_data[np.newaxis, ...].astype(np.float32)
            print("calibration image: {}".format(fimage[input_value]))
            yield [img_in]
        else:
            continue

def save_tflite():
    converter = tf.lite.TFLiteConverter.from_saved_model(FLAGS.weights)
    if FLAGS.quantize_mode == 'float16':
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        converter.target_spec.supported_types = [tf.compat.v1.lite.constants.FLOAT16]
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                               tf.lite.OpsSet.SELECT_TF_OPS]
        converter.allow_custom_ops = True
    elif FLAGS.quantize_mode == 'int8':
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                               tf.lite.OpsSet.SELECT_TF_OPS]
        converter.allow_custom_ops = True
        converter.representative_dataset = representative_data_gen
    tflite_model = converter.convert()
    open(FLAGS.output, 'wb').write(tflite_model)
    logging.info("model saved to: {}".format(FLAGS.output))

def demo():
    interpreter = tf.lite.Interpreter(model_path=FLAGS.output)
    interpreter.allocate_tensors()
    logging.info('tflite model loaded')
    input_details = interpreter.get_input_details()
    print(input_details)
    output_details = interpreter.get_output_details()
    print(output_details)
    input_shape = input_details[0]['shape']
    input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()
    output_data = [interpreter.get_tensor(output_details[i]['index'])
                   for i in range(len(output_details))]
    print(output_data)

def main(_argv):
    save_tflite()
    demo()

if __name__ == '__main__':
    try:
        app.run(main)
    except SystemExit:
        pass
```

The output from the converter invocation:

```
2020-11-12 10:51:14.591438: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2020-11-12 10:51:14.591714: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-11-12 10:51:21.071335: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored output_format.
2020-11-12 10:51:21.071397: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:322] Ignored drop_control_dependency.
2020-11-12 10:51:21.071417: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:328] Ignored change_concat_input_ranges.
2020-11-12 10:51:21.072230: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: ./checkpoints/yolov4-416-tflite
2020-11-12 10:51:21.126090: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
2020-11-12 10:51:21.126129: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: ./checkpoints/yolov4-416-tflite
2020-11-12 10:51:21.270211: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:251] None of the MLIR optimization passes are enabled (registered 0 passes)
2020-11-12 10:51:21.307128: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
2020-11-12 10:51:21.354451: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2299965000 Hz
2020-11-12 10:51:21.867525: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: ./checkpoints/yolov4-416-tflite
2020-11-12 10:51:22.030647: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 958418 microseconds.
2020-11-12 10:51:22.617404: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] Disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
loc("batch_normalization/moving_mean"): error: is not immutable; try running tf-saved-model-optimize-global-tensors to prove tensors are immutable
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/tf25/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 213, in toco_convert_protos
    enable_mlir_converter)
  File "/home/ubuntu/anaconda3/envs/tf25/lib/python3.7/site-packages/tensorflow/lite/python/wrap_toco.py", line 38, in wrapped_toco_convert
    enable_mlir_converter)
Exception: <unknown>:0: error: loc("batch_normalization/moving_mean"): is not immutable; try running tf-saved-model-optimize-global-tensors to prove tensors are immutable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "convert_tflite.py", line 76, in <module>
    app.run(main)
  File "/home/ubuntu/anaconda3/envs/tf25/lib/python3.7/site-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/home/ubuntu/anaconda3/envs/tf25/lib/python3.7/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "convert_tflite.py", line 71, in main
    save_tflite()
  File "convert_tflite.py", line 45, in save_tflite
    tflite_model = converter.convert()
  File "/home/ubuntu/anaconda3/envs/tf25/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 745, in convert
    result = _convert_saved_model(**converter_kwargs)
  File "/home/ubuntu/anaconda3/envs/tf25/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 637, in convert_saved_model
    enable_mlir_converter=True)
  File "/home/ubuntu/anaconda3/envs/tf25/lib/python3.7/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
    raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc("batch_normalization/moving_mean"): is not immutable; try running tf-saved-model-optimize-global-tensors to prove tensors are immutable
```

Also, please include a link to the saved model or GraphDef: saved_model (yolov4-416-tflite.zip). Failure details: when I try to get a tflite file from the saved model, TFLite fails to generate the tflite file with the error message above. This problem happens regardless of quantization (fp32, fp16, int8). Any other info / logs: when I try to get the tflite with TensorFlow 2.3.0rc0 it works well — I could generate the tflite file and execute it with the tflite interpreter — but I fail to generate a tflite file with tf-nightly 2.5.0.dev20201111. How can I solve this problem? |
| tensorflow/tensorflow | Gradient transformer throws AutoGraph error when trying to maintain state in a TensorArray | Bug | I'm trying to implement automatic gradient clipping, where the norm of the gradients is stored on each training step and the gradients are clipped based on the accumulated distribution of gradient norms. However, I seem to be running into some AutoGraph issues. Here's the code to reproduce the issue:

```python
import tensorflow as tf
import tensorflow_probability as tfp

class AutoClipper:
    def __init__(self, clip_percentile):
        self.clip_percentile = clip_percentile
        self.grad_history = None
        self.i = None

    def __call__(self, grads_and_vars):
        if self.grad_history is None:
            self.grad_history = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)
            self.i = 0
        grad_norms = [self._get_grad_norm(g) for g, _ in grads_and_vars]
        total_norm = tf.norm(grad_norms)
        self.grad_history = self.grad_history.write(self.i, total_norm)
        self.i += 1
        clip_value = tfp.stats.percentile(self.grad_history.stack(), q=self.clip_percentile)
        return [(tf.clip_by_norm(g, clip_value), v) for g, v in grads_and_vars]

    def _get_grad_norm(self, t, axis=None, name=None):
        values = tf.convert_to_tensor(
            t.values if isinstance(t, tf.IndexedSlices) else t, name='t')
        # Calculate the L2 norm; clip elements by the ratio of clip_norm to the L2 norm.
        l2sum = tf.math.reduce_sum(values * values, axis, keepdims=True)
        pred = l2sum > 0
        # Two-tap tf.where trick to bypass NaN gradients.
        l2sum_safe = tf.where(pred, l2sum, tf.ones_like(l2sum))
        return tf.squeeze(tf.where(pred, tf.math.sqrt(l2sum_safe), l2sum))

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001,
                                       gradient_transformers=[AutoClipper(10)]),
    loss='mean_absolute_error',
    metrics=['accuracy'],
)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model.fit(x_train, y_train)
```

It seems to fail when trying to update the TensorArray, but I think the TensorArray could be created outside the tf.function:

```
op_name = '__inference_train_function_13337', num_outputs = 4
inputs = [<tensor: shape=(), dtype=resource, numpy=...>, ...]
attrs = ('executor_type', '', 'config_proto', b'\n\x07\n\x03CPU\x10\x01\n\x07\n\x03GPU\x10\x002\x02J\x008\x01\x82\x01\x00')
ctx = <context>, name = None

    def quick_execute(op_name, num_outputs, inputs, attrs, ctx, name=None):
      """Execute a TensorFlow operation.

      Args:
        op_name: Name of the TensorFlow operation (see REGISTER_OP in C++ code)
          to execute.
        num_outputs: The number of outputs of the operation to fetch. (Explicitly
          provided instead of being inferred for performance reasons.)
        inputs: A list of inputs to the operation. Each entry should be a Tensor, or
          a value which can be passed to the Tensor constructor to create one.
        attrs: A tuple with alternating string attr names and attr values for this
          operation.
        ctx: The value of context.context().
        name: Customized name for the operation.

      Returns:
        List of output Tensor objects. The list is empty if there are no outputs.

      Raises:
        An exception on error.
      """
      device_name = ctx.device_name
      # pylint: disable=protected-access
      try:
        ctx.ensure_initialized()
        tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
                                            inputs, attrs, num_outputs)
      except TypeError as e:
E       TypeError: An op outside of the function building code is being passed
E       a "Graph" tensor. It is possible to have Graph tensors
E       leak out of the function building context by including a
E       tf.init_scope in your function building code.
E       For example, the following function will fail:
E         @tf.function
E         def has_init_scope():
E           my_constant = tf.constant(1.)
E           with tf.init_scope():
E             added = my_constant * 2
E       The graph tensor has name: Adam/TensorArrayV2Write/TensorListSetItem:0

lib/python3.7/site-packages/tensorflow/python/eager/execute.py:60: TypeError
```

System information: TensorFlow 2.4.0-rc1, Python 3.5, Ubuntu 20.04. |
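The AutoClipper above stores every step's gradient norm and clips to a percentile of the history. The same bookkeeping, stripped of TensorFlow and tensorflow_probability (nearest-rank percentile, illustrative names — a sketch of the idea, not the reporter's code):

```python
def percentile(sorted_vals, q):
    # Nearest-rank percentile over an ascending list, 0 < q <= 100.
    idx = max(0, int(len(sorted_vals) * q / 100.0 + 0.5) - 1)
    return sorted_vals[min(idx, len(sorted_vals) - 1)]

class AutoClipSketch:
    def __init__(self, clip_percentile):
        self.clip_percentile = clip_percentile
        self.history = []  # plain list instead of a TensorArray

    def clip(self, grad_norm):
        # Record this step's norm, then cap the norm at the chosen
        # percentile of everything seen so far.
        self.history.append(grad_norm)
        limit = percentile(sorted(self.history), self.clip_percentile)
        return min(grad_norm, limit)

clipper = AutoClipSketch(50)
clipped = [clipper.clip(n) for n in [1.0, 2.0, 10.0]]  # → [1.0, 1.0, 2.0]
```

Keeping the history in a Python list sidesteps the cross-graph TensorArray state that triggers the error in the report; inside a `tf.function`, the equivalent state would need to live in a `tf.Variable` or be created in the same graph.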
| tensorflow/tensorflow | Defect in code example | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: (link to the documentation entry). Description of issue (what needs changing): the code example has problems. "To allow some overlap between the features of one batch and the labels of another, use `Dataset.zip`:"

```python
feature_length = 10
label_length = 5

features = range_ds.batch(feature_length, drop_remainder=True)
labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-5])
predict_5_steps = tf.data.Dataset.zip((features, labels))

for features, label in predict_5_steps.take(3):
    print(features.numpy(), " => ", label.numpy())
```

1. The `label_length` variable is never used. 2. The slicing `labels[:-5]` is wrong; the only reason it works is that 10 - 5 is 5. Clear description: let's illustrate the problem by slightly modifying the defective code:

```python
feature_length = 10
label_length = 3

features = range_ds.batch(feature_length, drop_remainder=True)
labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:-label_length])
predict_5_steps = tf.data.Dataset.zip((features, labels))

for features, label in predict_5_steps.take(3):
    print(features.numpy(), " => ", label.numpy())
```

This prints:

```
[0 1 2 3 4 5 6 7 8 9]  =>  [10 11 12 13 14 15 16]
[10 11 12 13 14 15 16 17 18 19]  =>  [20 21 22 23 24 25 26]
[20 21 22 23 24 25 26 27 28 29]  =>  [30 31 32 33 34 35 36]
```

This is wrong because the label sequence length is not 3. The correct code is below; this will work as long as `label_length <= feature_length`:

```python
feature_length = 10
label_length = 3

features = range_ds.batch(feature_length, drop_remainder=True)
labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length])
predict_5_steps = tf.data.Dataset.zip((features, labels))

for features, label in predict_5_steps.take(3):
    print(features.numpy(), " => ", label.numpy())
```

This will correctly print:

```
[0 1 2 3 4 5 6 7 8 9]  =>  [10 11 12]
[10 11 12 13 14 15 16 17 18 19]  =>  [20 21 22]
[20 21 22 23 24 25 26 27 28 29]  =>  [30 31 32]
```

A better and more generic version should also avoid using names like `predict_5_steps`:

```python
feature_length = 10
label_length = 3

features = range_ds.batch(feature_length, drop_remainder=True)
labels = range_ds.batch(feature_length).skip(1).map(lambda labels: labels[:label_length])
input_ds = tf.data.Dataset.zip((features, labels))

for features, label in input_ds.take(3):
    print(features.numpy(), " => ", label.numpy())
```

Submit a pull request? |
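The slicing mistake reported above is easy to reproduce without tf.data at all: with plain lists, `next_batch[:-label_length]` yields `10 - label_length` elements rather than `label_length` (a sketch of the bug, not the docs' code):

```python
feature_length = 10
label_length = 3

data = list(range(30))
# Split into consecutive batches of feature_length elements.
batches = [data[i:i + feature_length] for i in range(0, len(data), feature_length)]

features = batches[0]     # [0 .. 9]
next_batch = batches[1]   # [10 .. 19]

wrong_labels = next_batch[:-label_length]  # drops the LAST 3 → 7 labels, not 3
right_labels = next_batch[:label_length]   # keeps the FIRST 3 → [10, 11, 12]
```

With `label_length = 5` the two slices coincide in length by accident, which is exactly why the original example appeared to work.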
| tensorflow/tensorflow | tf.distribute.experimental.CommunicationOptions does not work at all | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. TensorFlow installed from: source. TensorFlow version: 2.4-rc1. Python version: 3.7. Describe the current behavior: the implementation of `tf.distribute.experimental.CommunicationOptions` tries to use some trickery for unknown reasons and fails to do anything in its constructor (see L84 and L114). This makes it impossible to use the non-deprecated `tf.distribute.MultiWorkerMirroredStrategy` and specify the communication backend, as that requires creating such an options instance. Describe the expected behavior: creating an instance of that class initializes the members. Standalone code to reproduce the issue:

```python
import tensorflow as tf

tf.distribute.experimental.CommunicationOptions(bytes_per_pack=-1)  # this should fail

communication = tf.distribute.experimental.CommunicationImplementation.NCCL
options = tf.distribute.experimental.CommunicationOptions(implementation=communication)
assert options.implementation == communication  # error: no attribute 'implementation'
```

Other info / logs: IMO this is critical for the 2.4 release, and the fix is rather simple: don't have an extra options class, but simply put that code into `_OptionsExported` and let `Hints` derive from that. To not disturb other internal code, renaming `_OptionsExported` to `Options` would be wise. |
| tensorflow/tensorflow | ImportError: cannot import name 'np_utils' in ImageDataGenerator | Bug | The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry (for example: example #2). Description of issue (what needs changing): clear description: `np_utils` is no longer available; it should be replaced with `utils`. Reproducible code is mentioned in this Colab. (For example: why should someone use this method? How is it useful?) Developers using `ImageDataGenerator.flow` use this code. Correct links: Is the link to the source code correct? Yes. Parameters defined: Are all parameters defined and formatted correctly? Yes. Returns defined: Are return values defined? Yes. Raises listed and defined: Are the errors defined? No. Usage example: Is there a usage example? Yes. Request visuals, if applicable: Are there currently visuals? If not, will it clarify the content? Submit a pull request?: Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide. |
| tensorflow/tensorflow | TensorFlow Lite LSTM data input format example | Bug | System information: OS platform and distribution: Android. TensorFlow installed from: binary. TensorFlow version: 2.3. Command used to run the converter or code: training performed using a notebook (link, Colab/Jupyter). I have a CNN feature extractor that runs on a sequence of 40 captured images, producing 40 feature vectors. These are fed into an LSTM model and trained for a video activity recognition application (directing users across crosswalks). The output from the converter invocation: I understand this will create 2 tflite models, one for the feature-extractor CNN and the other for the LSTM model. My question is: how do I repackage the 40 feature vectors in my sequence as input into the tflite model on Android? I would like to see some examples showing this for Android, as on Android we do not have access to the full TensorFlow API. Also, please include a link to the saved model or GraphDef: (put link here or attach to the issue). Failure details: if the conversion is successful but the generated model is wrong, state what is wrong (produces wrong results and/or decreased accuracy; produces correct results but the model is slower than expected; model generated from the old converter). RNN conversion support: if converting TF RNN to TFLite fused RNN ops, please prefix [RNN] in the title. Any other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
| tensorflow/tensorflow | Wrong path separator in error message when saved model does not exist in keras.models.load_model | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Windows 10 Home Single Language, build 19042.610. TensorFlow installed from: binary. TensorFlow version: v2.3.0-54-gfcc4b966f1 (2.3.1). Python version: 3.8.6. CUDA/cuDNN version: 10.1. GPU model and memory: GeForce RTX 2070 Super, 8 GB. Current behavior: when I use the function `keras.models.load_model` with an incorrect path, I catch an IOError/OSError: "SavedModel file does not exist at: C:\IncorrectPathToSavedModel/{saved_model.pbtxt|saved_model.pb}". Because I use Windows, the path separator is incorrect, and when I view the code of `parse_saved_model` in tensorflow/python/saved_model/loader_impl.py, I see the hardcoded `/` symbol in the error text definition. Expected behavior: the path separator should be received from `os.path.sep`. For example, on Windows this error message must be "OSError: SavedModel file does not exist at: C:\IncorrectPathToSavedModel\{saved_model.pbtxt|saved_model.pb}", and on Linux "OSError: SavedModel file does not exist at: /IncorrectPathToSavedModel/{saved_model.pbtxt|saved_model.pb}". Standalone code to reproduce the issue:

```python
from tensorflow import keras

model_dir = 'C:\\IncorrectPathToSavedModel'
model = keras.models.load_model(model_dir)
```
|
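A minimal sketch of the fix the reporter suggests, using a hypothetical helper (the real message is built inside loader_impl.py): take the separator from `os.path.sep` instead of hardcoding `/`:

```python
import os

def saved_model_missing_message(export_dir):
    # Hypothetical helper, not the TensorFlow function: joins the export
    # directory and the expected file names with the host OS separator,
    # so the message is '\'-separated on Windows and '/'-separated on POSIX.
    return ("SavedModel file does not exist at: {}{}"
            "{{saved_model.pbtxt|saved_model.pb}}").format(export_dir, os.path.sep)

msg = saved_model_missing_message("IncorrectPathToSavedModel")
```

On any platform, `os.path.sep` (or `os.path.join`) keeps such messages consistent with the paths users actually typed.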
| tensorflow/tensorflow | Validation step overwrites callback's internal predict call | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 20.04.1 LTS. Mobile device: n/a. TensorFlow installed from: binary. TensorFlow version: v2.3.0-54-gfcc4b966f1 (2.3.1). Python version: 3.8.5. CUDA/cuDNN version: 10.1. GPU model and memory: GeForce RTX 2080 Ti, 11 GB, x8. I'm training a network and I'm trying to collect metrics on two separate validation sets. I've been using the built-in validation for one of them, and I wrote a callback for the other. However, in epochs when both run, the callback seems to always get the same answer as the built-in validation set. I have a minimal test case below. This is a completely trivial network where the output and input are equal. I've built the situation as follows: let `x = np.ones(5)`. It trains on input x, output x, so the training error is always 0. It validates on input x, output 2x, so the validation error is always 1. The callback validates on input x, output 3x, so the callback's validation error should always be 2.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model

input_data = Input(shape=(5,))
model = Model(inputs=input_data, outputs=input_data)
model.compile(loss='mae')

class ValCallback(tf.keras.callbacks.Callback):
    def __init__(self, model, inputs, outputs):
        self.model = model
        self.inputs = inputs
        self.outputs = outputs

    def on_epoch_end(self, epoch, logs=None):
        val = self.model.evaluate(self.inputs, self.outputs, verbose=0)
        print("\nval:", val)

traindata = np.ones(5)
vc = ValCallback(model, traindata, 3 * traindata)
model.fit(traindata, traindata,
          validation_data=(traindata, 2 * traindata),
          epochs=4, callbacks=[vc], validation_freq=2, verbose=0)
```

In epoch 1 the callback correctly prints 2. In epoch 2, when the validation runs, the callback prints 1 instead. Somehow the validation data is overwriting the data in the callback, and I don't know how or why; the `self.model.evaluate` call in the callback isn't getting the right answer any more after validation has happened. Desired output: val: 2, val: 2, val: 2, val: 2. Actual output: val: 2, val: 1, val: 1, val: 1. |
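One way the symptom above could arise is a cached evaluation closure that keeps the first dataset it was built with, so a later `evaluate` call silently reuses stale data. This is a framework-free illustration of that failure mode (illustrative names only, not the Keras internals):

```python
class ModelSketch:
    def __init__(self):
        self._test_fn = None  # cached evaluation closure

    def _make_test_fn(self, x, y):
        # The closure captures x and y at build time.
        return lambda: abs(sum(x) - sum(y)) / len(x)

    def evaluate(self, x, y):
        if self._test_fn is None:       # built once, then cached
            self._test_fn = self._make_test_fn(x, y)
        return self._test_fn()          # later calls reuse the old data!

m = ModelSketch()
first = m.evaluate([1.0] * 5, [3.0] * 5)  # 2.0, as expected
stale = m.evaluate([1.0] * 5, [2.0] * 5)  # still 2.0 — stale closure
```

If Keras caches its test function in a similar way, resetting that cache between the built-in validation and the callback's `evaluate` would be the shape of a fix; that is an inference from the observed behavior, not a confirmed diagnosis.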
| tensorflow/tensorflow | [TFLite] TensorArrayScatterV3 fails: Not found | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Ubuntu 20.04 (workstation), Android 9.0 (inference). Mobile device: Motorola Nexus 6. TensorFlow installed from: source, master branch at 4625f1d87f17537a8d50dc68a213d2da875c12ce. TensorFlow version: 4625f1d87f17537a8d50dc68a213d2da875c12ce. Python version: 3.8.2. Bazel version (if compiling from source): 3.7.0. GCC/compiler version: n/a. CUDA/cuDNN version: no GPU. GPU model and memory: n/a. Android SDK 30.0.2 and NDK r21d. Describe the current behavior: I compiled benchmark_model_plus_flex and ran into the same issue. I was trying to benchmark the following model with it, converted into tflite format with the following code:

```python
def do_convert(saved_model, output_file):
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model)
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,
        tf.lite.OpsSet.SELECT_TF_OPS,
    ]
    model = converter.convert()
    with open(output_file, 'wb') as f:
        f.write(model)
```

The exact error is:

```
native_op_kernel.cc:1763 Op requires failed at tensor_array_ops.cc:1035 : Not found: Container per_step_0 does not exist. (Could not find resource: per_step_0/TensorArrays_TensorArrayV3_774)
ERROR: Container per_step_0 does not exist. (Could not find resource: per_step_0/TensorArrays_TensorArrayV3_774) while executing 'TensorArrayScatterV3' via Eager
ERROR: Node number 276 (TfLiteFlexDelegate) failed to invoke.
```

and the command line in adb shell is:

```
benchmark_model_plus_flex --graph=ssd_mobilenet_v1_coco_2018_01_28.tflite
```

Describe the expected behavior: the benchmark should proceed successfully. Standalone code to reproduce the issue: |
| tensorflow/tensorflow | [TFLite] tf.MatMul operator converted to an unsupported TFL 16x16 FullyConnected operator with 16x8 post-training quantization | Bug | Hello, when using the `tf.linalg.matmul` function with TFLite 16-bit post-training quantization, the operation is sometimes transformed into a `TFL::FullyConnectedOp` operator with 16-bit input and 16-bit weights. Unfortunately, the 16-bit version of the FC operator only supports 16-bit input and 8-bit weights. The conversion occurs in two places, depending on how `tf.linalg.matmul` is interpreted: as a `TF::MatMulOp` operator or as a `TF::BatchMatMulV2Op`. The `TF::MatMulOp` is converted to a `TFL::FullyConnectedOp` in legalize_tf.cc (L242). The `TF::BatchMatMulV2Op`, on the other hand, is converted in unroll_batch_matmul.cc (L190) to a `TF::MatMulOp`, which is then converted by the previous transformation into a `TFL::FullyConnectedOp`; this conversion pass is only enabled when `PrepareTFPass::unfold_batch_matmul` (L1214) is true. To disable the second conversion, we could add a check on the toco_flags `inference_type` flag in graphdef_to_tfl_flatbuffer.cc (L88) (and other appropriate places) and set `pass_config.unfold_batch_matmul` to false when the inference type is int16. The `TF::BatchMatMulV2Op` would then be converted to a `TFL::BatchMatMulOp`, which supports 16x16 inputs without trouble. The problem remains, though, if we have a simple 16x16 `TF::MatMulOp` operator: we can't just disable the conversion in the same way as for `TF::BatchMatMulV2Op`, as there isn't any `TFL::MatMulOp`. One option would be to disable this transformation in int16 post-training quantization in the same way as for `TF::BatchMatMulV2Op`, and add a `TF::MatMulOp`-to-`TF::BatchMatMulV2Op` conversion pass. The `TFL::BatchMatMulOp` would then be used for the cases where we didn't convert the `TF::MatMulOp` to a `TFL::FullyConnectedOp`. Thibaut |
| tensorflow/tensorflow | module 'tensorflow_lite_support.metadata.metadata_schema_py_generated' has no attribute 'ModelMetadataT' error | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Windows 10. Mobile device: not used. TensorFlow installed from: Spyder in Anaconda Navigator (it was preinstalled). TensorFlow version: TensorFlow 2. Python version: 3.8. Bazel version (if compiling from source): —. GCC/compiler version (if compiling from source): —. CUDA/cuDNN version: none. GPU model and memory: GPU not used. Describe the current behavior: I have tried to run the sample code in the Anaconda prompt, which is `python ./metadata_writer_for_image_classifier.py --model_file=./model_without_metadata/mobilenet_v1_0.75_160_quantized.tflite --label_file=./model_without_metadata/labels.txt --export_directory=model_with_metadata`. I downloaded the mobilenet_v1_0.75_160_quantized.tflite file and placed it in Desktop/model_without_metadata, and downloaded the labels file and placed it there as well. Running this code gives me a "module 'tensorflow_lite_support.metadata.metadata_schema_py_generated' has no attribute 'ModelMetadataT'" error. How can I solve this? Describe the expected behavior: metadata should be added to mobilenet_v1_0.75_160_quantized.tflite in order to use it in MLKit for object classification. |
| tensorflow/tensorflow | Typo in tf.keras.callbacks.EarlyStopping example | Bug | URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): the following is from the EarlyStopping page on the TensorFlow website: `callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)` — "This callback will stop the training when there is no improvement in the validation loss for three consecutive epochs." In the above line, it should have been "loss" instead of "validation loss". Clear description: check above. Please check this gist (`#scrollTo=l9utqni3qd08`) for an example. Correct links. Submit a pull request? Yes. |
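For reference, the patience logic that EarlyStopping applies to whatever quantity `monitor` names — the training `loss` here, per the correction above — can be sketched in a few lines of plain Python (illustrative only, not the Keras implementation):

```python
def early_stop_epoch(losses, patience):
    """Return the 1-based epoch training would stop at, or None if it runs out."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses, start=1):
        if loss < best:        # improvement: reset the patience counter
            best = loss
            wait = 0
        else:                  # no improvement this epoch
            wait += 1
            if wait >= patience:
                return epoch
    return None

# Loss stops improving after epoch 2; with patience=3, training stops at epoch 5.
stop = early_stop_epoch([1.0, 0.8, 0.8, 0.8, 0.8], patience=3)
```

Swapping the monitored series for per-epoch validation losses gives the `monitor='val_loss'` behavior instead — which is exactly why the docstring's wording matters.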
| tensorflow/tensorflow | Arguments have an HTML format issue | Bug | URL(s) with the issue: the arguments need to be reformatted. Screenshot: "Screen Shot 2020-11-08 at 17.28.21". |
tensorflowtensorflow | Failure for tf.image.flip_left_right(img_input), where img_input = Input(shape=(None, None, 3), name='image') | Bug | I want to dump a TTA graph for model.h5, but it seems tf.image.flip_left_right does not support dynamic input sizes. TF version: 2.3.

import tensorflow as tf
from tensorflow.keras.layers import Input
img_input = Input(shape=(None, None, 3), name='image')
tf.image.flip_left_right(img_input)

Traceback (condensed to the relevant frames):

OperatorNotAllowedInGraphError Traceback (most recent call last)
  .../tensorflow/python/ops/image_ops_impl.py in flip_left_right(image): return _flip(image, 1, 'flip_left_right')
  .../tensorflow/python/ops/image_ops_impl.py in _flip: image = _AssertAtLeast3DImage(image)
  .../tensorflow/python/ops/image_ops_impl.py in _CheckAtLeast3DImage: check_ops.assert_positive(array_ops.shape(image)[-3:], ...)
  .../tensorflow/python/ops/check_ops.py in _binary_assert: if condition: ...
  .../tensorflow/python/framework/ops.py in _disallow_in_graph_mode: raise errors.OperatorNotAllowedInGraphError(...)
OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed in graph execution. Use Eager execution or decorate this function with @tf.function. |
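For context, a NumPy sketch (not the TF op itself) of what the flip computes: reversing the width axis of an HWC image. The failure above comes only from the shape assertion on dynamic shapes, not from the flip logic.

```python
import numpy as np

def flip_left_right(image):
    # For an HWC image the width is axis 1; flipping reverses it.
    return image[:, ::-1, :]

img = np.arange(12).reshape(2, 2, 3)  # 2x2 image, 3 channels
flipped = flip_left_right(img)
```

Flipping twice restores the original image, which is a quick sanity check for any flip implementation.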
tensorflowtensorflow | Bug when a custom tf.keras.models.Model has multiple class inheritance | Bug | System information: custom code: yes; OS: Windows and Linux (Ubuntu 20.04); TF installed from: binary (pip); TF version: 2.3; Python: conda env with Python 3.7.9; CUDA/cuDNN: 10.1; GPU: GeForce RTX 2080 Super with Max-Q Design, 8 GB.

Describe the current behavior: creating a custom model that inherits from at least one class other than tf.keras.models.Model raises the following exception (traceback condensed; the frame at training.py line 144 recurses several times):

  File ".../tensorflow/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File ".../tensorflow/python/keras/engine/training.py", line 255, in __init__
    inject_functional_model_class(self.__class__)
  File ".../tensorflow/python/keras/engine/training.py", line 144, in inject_functional_model_class
    cls.__bases__ = tuple(inject_functional_model_class(base) for base in cls.__bases__)
TypeError: can't set attributes of built-in/extension type 'object'

Describe the expected behavior: being able to create a custom model with different mixins.

Standalone code to reproduce the issue:

import tensorflow as tf

class PrintMixin:
    def custom_print(self):
        print('Hello world')

class CustomModel(tf.keras.models.Model, PrintMixin):
    def __init__(self, *args, **kwargs):
        my_input = tf.keras.layers.Input(shape=(16,))
        dense = tf.keras.layers.Dense(32, activation='relu')
        outputs = dense(my_input)
        super().__init__(inputs=my_input, outputs=outputs, *args, **kwargs)

my_model = CustomModel()

Other info/logs: apparently, when given the inputs and outputs parameters, TensorFlow tries to inject an attribute into all the classes and superclasses until it reaches tf.keras.models.Model. Here is the piece of code from training.py (line 136):

def inject_functional_model_class(cls):
  """Inject `Functional` into the hierarchy of this class if needed."""
  from tensorflow.python.keras.engine import functional  # pylint: disable=g-import-not-at-top
  from tensorflow.python.keras.engine import training_v1  # pylint: disable=g-import-not-at-top
  if cls == Model or cls == training_v1.Model:
    return functional.Functional
  cls.__bases__ = tuple(inject_functional_model_class(base) for base in cls.__bases__)
  # Trigger any `__new__` class swapping that needed to happen on `Functional`
  # but did not because `Functional` is not in the class hierarchy.
  cls.__new__(cls)
  return cls

But when it checks the superclass of my mixin class, which is object, an error is raised saying that we cannot add an attribute to the object type. For me, the following update of the method fixes the issue:

def inject_functional_model_class(cls):
  """Inject `Functional` into the hierarchy of this class if needed."""
  from tensorflow.python.keras.engine import functional  # pylint: disable=g-import-not-at-top
  from tensorflow.python.keras.engine import training_v1  # pylint: disable=g-import-not-at-top
  if cls == Model or cls == training_v1.Model:
    return functional.Functional
  if cls == object:
    return cls
  cls.__bases__ = tuple(inject_functional_model_class(base) for base in cls.__bases__)
  # Trigger any `__new__` class swapping that needed to happen on `Functional`
  # but did not because `Functional` is not in the class hierarchy.
  cls.__new__(cls)
  return cls

Here we return the object class as-is, but I don't know whether it is a proper fix that won't cause another error elsewhere. First, I want to know if it is really a bug. If not, how could I write a proper custom model with mixin classes and my inputs/outputs? If yes, is the fix I propose OK? If needed, I can open a PR with it. Thanks. |
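The crash above can be reproduced without TensorFlow (a minimal sketch; `can_patch` is a hypothetical helper): the recursive injection walks `__bases__` until it reaches `object`, and CPython refuses to set attributes on built-in types, which is exactly what the proposed `if cls == object: return cls` guard avoids.

```python
def can_patch(cls):
    """Probe whether attributes can be set on cls; built-in types refuse."""
    try:
        cls._probe = True   # roughly what the class injection attempts
        del cls._probe
        return True
    except TypeError:
        return False

class Mixin:
    pass

patchable_user_class = can_patch(Mixin)   # ordinary heap type: fine
patchable_object = can_patch(object)      # built-in type: TypeError, as in the issue
```

The guard stops the recursion one step before the immutable `object` type, which is why the proposed fix no longer triggers the TypeError.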
tensorflowtensorflow | tf.keras.models.load_model fails if tf.linalg.band_part is used | Bug | System information: custom code: yes; OS: Arch Linux 5.9.4-arch1-1; TF installed from: binary; TF version: 2.3.1; Python: 3.8.6; CUDA/cuDNN: 11.1 / 8.0.4; GPU: NVIDIA GeForce GTX 1080 Ti, 10.91 GiB.

Describe the current behavior: tf.keras.models.load_model fails when tf.linalg.band_part is used:

ValueError: Inconsistent values for attr 'Tindex' DT_INT32 vs. DT_INT64 while building NodeDef 'tf_op_layer_MatrixBandPart/MatrixBandPart' using Op<name=MatrixBandPart; ...; attr=T:type; attr=Tindex:type, default=DT_INT64, allowed=[DT_INT32, DT_INT64]>

Describe the expected behavior: the model is loaded without error.

Standalone code to reproduce the issue:

import tensorflow as tf

x = tf.keras.Input(shape=(3, 3))
y = tf.linalg.band_part(x, 1, 0)  # get the lower-triangle part; I need band_part to make the look-ahead mask of a Transformer
y = tf.reshape(x, (-1, 9))
model = tf.keras.Model(x, y)
print(model.predict([[[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[9, 8, 7], [6, 5, 4], [3, 2, 1]]]))
model.save('model')
tf.keras.models.load_model('model')

Other info/logs (traceback condensed):

tensorflow.python.framework.errors_impl.InvalidArgumentError: Inconsistent values for attr 'Tindex' DT_INT32 vs. DT_INT64 ...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "test_band_part.py", line 11, in <module>: tf.keras.models.load_model('model')
  ... keras/saving/save.py (load_model) -> saved_model/load.py -> keras/saving/saved_model/load.py (_reconstruct_all_models / _reconstruct_model) -> keras/engine/functional.py (reconstruct_from_config / process_node) -> keras/engine/base_layer.py (__call__ / _make_op)
  File ".../tensorflow/python/framework/ops.py", line 1815, in _create_c_op: raise ValueError(str(e))
ValueError: Inconsistent values for attr 'Tindex' DT_INT32 vs. DT_INT64 while building NodeDef 'tf_op_layer_MatrixBandPart/MatrixBandPart' ...

Same as #42301; I just wrote standalone code. |
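As background, a NumPy sketch of what the op computes, independent of the save/load bug: band_part(x, num_lower, num_upper) zeroes everything outside a band around the diagonal, and a negative bound means that side is unbounded, which is how lower-triangle look-ahead masks are usually built.

```python
import numpy as np

def band_part(x, num_lower, num_upper):
    """NumPy re-implementation of tf.linalg.band_part for 2-D arrays."""
    m, n = np.indices(x.shape)          # row and column index grids
    in_band = np.ones(x.shape, dtype=bool)
    if num_lower >= 0:
        in_band &= (m - n) <= num_lower  # limit how far below the diagonal
    if num_upper >= 0:
        in_band &= (n - m) <= num_upper  # limit how far above the diagonal
    return np.where(in_band, x, 0)

x = np.arange(1, 10).reshape(3, 3)
lower = band_part(x, -1, 0)   # full lower triangle, as for a look-ahead mask
```

With (-1, 0) this matches np.tril; with (0, -1) it matches np.triu; with (0, 0) only the diagonal survives.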
tensorflowtensorflow | Keras epsilon doesn't change | Bug | Windows, TF 2.3.1. I'm trying to change the epsilon so my MAPE doesn't blow up. When I call tensorflow.keras.backend.set_epsilon(1e-3) and then tensorflow.keras.backend.epsilon(), I get 1e-07. |
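For reference, a plain-NumPy sketch of why epsilon matters for MAPE (this assumes the usual formulation mean(|y_true - y_pred| / max(|y_true|, eps)) * 100, which mirrors how Keras uses the backend epsilon to clip the denominator):

```python
import numpy as np

def mape(y_true, y_pred, eps):
    # eps floors the denominator so zero targets don't divide by zero
    diff = np.abs(y_true - y_pred) / np.maximum(np.abs(y_true), eps)
    return 100.0 * np.mean(diff)

y_true = np.array([0.0, 2.0])
y_pred = np.array([0.001, 2.0])
loose = mape(y_true, y_pred, eps=1e-3)   # larger epsilon tames the zero target
tight = mape(y_true, y_pred, eps=1e-7)   # default-sized epsilon blows up
```

The zero target contributes 0.001/1e-3 = 1 with the larger epsilon but 0.001/1e-7 = 10000 with the default-sized one, which is exactly the blow-up described above.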
tensorflowtensorflow | Behaviour of ImageDataGenerator with subset='validation' | Bug | URL(s) with the issue: #flow. Description of issue: the documentation for keras.preprocessing.image.ImageDataGenerator doesn't specify how it handles validation data. Clear description: in particular, the documentation for the flow method doesn't specify whether transformations are applied to augment validation data when validation_split has been given in the constructor and subset='validation' is passed. Usage example: there is no usage example that uses the parameter validation_split with a previously undivided dataset; all examples employ a dataset that is already divided into training/validation and then create two ImageDataGenerator objects with different parameters. Other: I have read the source code (#L808) in order to understand this behaviour; however, I ended up at the definition of NumpyArrayIterator (#L400), a class that apparently inherits from itself, and I haven't been able to find its actual implementation or the meaning of this strange inheritance. |
tensorflowtensorflow | Is the for loop supported, or what is the proper way of iterating over a 1-D tensor? | Bug | System information: OS: Linux Ubuntu 18.04; TF installed from: pip install; TF version: 2.3.1.

import os
import tempfile
import numpy as np
import tensorflow as tf

tmpdir = tempfile.mkdtemp()

class CustomModule(tf.Module):
    def __init__(self):
        super(CustomModule, self).__init__()
        self.v = tf.Variable(1, dtype=tf.int64)
        self.const = tf.constant(np.arange(10))

    @tf.function(input_signature=[tf.TensorSpec([], dtype=tf.int64)])
    def __call__(self, x):
        for ele in self.const:
            self.v.assign(ele)
        return x * self.const

module = CustomModule()
module_path = os.path.join(tmpdir, 'module')
print(module(tf.constant(2, dtype=tf.int64)))
print('Saving model...')
tf.saved_model.save(module, module_path)
imported = tf.saved_model.load(module_path)
print(imported(tf.constant(4, dtype=tf.int64)))
converter = tf.lite.TFLiteConverter.from_saved_model(module_path)
tflite_model = converter.convert()
open('converted_model.tflite', 'wb').write(tflite_model)

Error:

tf.Tensor([ 0  2  4  6  8 10 12 14 16 18], shape=(10,), dtype=int64)
Saving model...
tf.Tensor([ 0  4  8 12 16 20 24 28 32 36], shape=(10,), dtype=int64)
tensorflow.lite.python.convert.ConverterError: Input resource[0] expected type resource != int64, the type of while_AssignVariableOp_resource_0[0] (in node while/AssignVariableOp) |
tensorflowtensorflow | Dataset.from_tensor_slices regression | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information: custom code: yes; OS: macOS 10.15.6; mobile device: n/a; TF installed from: binary; TF version: 2.1.0 (also 2.2 and 2.3.0); Python: 3.6/3.7/3.8; Bazel/GCC: n/a (not compiled from source); CUDA/cuDNN: n/a; GPU: n/a. Version string: v2.2.0-rc4-8-g2b96f3662b 2.2.0.

Describe the current behavior: I'm supporting someone trying to build a model in TensorFlow 2.1.0. The code below worked correctly in version 2.2.0; the last line fails. I was wondering whether there is a regression, or whether there was a fix and 2.1.0 shouldn't have worked.

Describe the expected behavior: I expect the dataset to be created correctly.

Standalone code to reproduce the issue:

import tensorflow as tf
import pandas as pd
import numpy as np

data = np.array([[np.array(['k1']), np.array(['t1', 't2', 't3']), np.array([0.3, 0.2, 0.3])],
                 [np.array(['k1', 'k2', 'k3']), np.array(['t4', 't5', 't6']), np.array([0.3, 0.2, 0.3])]])
train = pd.DataFrame(data=data, columns=['kwd', 'title', 'label'])
feature_cols = ['kwd', 'title']
labels = train.pop('label')
features = {col: train[col] for col in feature_cols}
batch = tf.data.Dataset.from_tensor_slices((features, labels))

Other info/logs (traceback condensed):

Traceback (most recent call last):
  File "array_example.py", line 17, in <module>
    batch = tf.data.Dataset.from_tensor_slices((features, labels))
  ... data/ops/dataset_ops.py (from_tensor_slices -> TensorSliceDataset) -> data/util/structure.py (normalize_element) -> framework/ops.py (convert_to_tensor) -> framework/constant_op.py (convert_to_eager_tensor)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray). |
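A minimal sketch of the likely underlying cause (an assumption based on the traceback, not a confirmed diagnosis): when DataFrame cells hold variable-length arrays, the column materializes as an object-dtype array, which cannot be converted to a dense tensor, whereas equal-length rows stack into a regular 2-D array.

```python
import numpy as np

# Variable-length rows can only be held as an object-dtype array:
ragged = np.array([np.array(['k1']), np.array(['k1', 'k2', 'k3'])],
                  dtype=object)
# Equal-length rows stack into a plain 2-D string array:
regular = np.array([['t1', 't2', 't3'],
                    ['t4', 't5', 't6']])
```

`convert_to_tensor` has no dense representation for the object-dtype case, hence the "Unsupported object type numpy.ndarray" error; padding or tf.ragged is the usual way out.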
tensorflowtensorflow | Custom LearningRateSchedule not called within a MirroredStrategy | Bug | System information: custom code: from the tutorial "Custom training with tf.distribute.Strategy", with a custom LearningRateSchedule; OS: Colab; TF version: 2.3.0; Python: 3.6.9; GPU: GPU from Colab.

Describe the current behavior: in a distributed environment, when adding a custom LearningRateSchedule to the Adam optimizer to decay the learning rate over epochs, the learning rate is not decayed as expected. I tested the same code in an environment without the distributed scope, and the LR decays over the epochs.

Describe the expected behavior: the optimizer calls the LearningRateSchedule's __call__ method to decay the learning rate after applying the gradients.

Standalone code to reproduce the issue: please check this Colab from the TF tutorial (see the chapter "Training loop").

Other info/logs:

class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self):
        super(CustomSchedule, self).__init__()
        self.lr = 0.01

    def __call__(self, step):
        self.lr = self.lr / 10
        print('New LR:', self.lr)
        return self.lr

with strategy.scope():
    model = create_model()
    optimizer = tf.keras.optimizers.Adam(learning_rate=CustomSchedule())
    checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)

The learning-rate CustomSchedule is the only line I changed from the tutorial (see the Colab for the train step). |
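Independent of the distribution-strategy question, a LearningRateSchedule is generally expected to be a pure function of `step` (the schedule in the repro mutates `self.lr` on every call, so its value depends on how often the runtime happens to invoke it). A side-effect-free sketch, with an illustrative decay factor:

```python
class ExponentialDecay:
    """Sketch of a schedule: the LR depends only on the step, no mutation."""
    def __init__(self, initial_lr=0.01, decay=0.1):
        self.initial_lr = initial_lr
        self.decay = decay

    def __call__(self, step):
        # Same step always yields the same LR, so retracing or extra
        # invocations by the runtime cannot change the trajectory.
        return self.initial_lr * (self.decay ** step)

sched = ExponentialDecay()
lrs = [sched(s) for s in (0, 1, 2)]
```

Written this way, it does not matter whether the strategy calls the schedule once or several times per step; the mutating version drifts with every extra call.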
tensorflowtensorflow | LocallyConnected1D layer description | Bug | The documentation of the LocallyConnected1D layer mentions the following for the strides argument: "specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1". I assume this sentence was just copied from the Conv1D layer and should be removed, since there is no dilation_rate argument for the LocallyConnected1D layer. |
tensorflowtensorflow | The README file in this repo has a bad link (404 Not Found) | Bug | The README file in this repo has a bad link. Status code: 404 (Not Found). This was found by a new experimental hobby project that I have just created. If this has been in any way helpful, then please consider giving the above repo a star. |
tensorflowtensorflow | TF Lite interpreter cannot be reused on the new nightly version | Bug | I use the version tf-nightly 2.5.0.dev20201029 and am testing a TFLite model. When I use the code below to test the performance of the FastSpeech TFLite model, the first input is converted to audio correctly; however, the second is converted to audio full of noise. Only when I reload the interpreter can the model be used for the next input without the audio being full of noise.

interpreter = tf.lite.Interpreter(model_path='model.tflite')
for sample in data_queue:
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    interpreter.resize_tensor_input(input_details[0]['index'], sample.input.shape)
    interpreter.allocate_tensors()
    interpreter.set_tensor(input_details[0]['index'], sample.input)
    interpreter.invoke()
    features = interpreter.get_tensor(output_details[0]['index'])
    self.vocoder(features).numpy()

# Correct:
for sample in data_queue:
    interpreter = tf.lite.Interpreter(model_path='model.tflite')
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    interpreter.resize_tensor_input(input_details[0]['index'], sample.input.shape)
    interpreter.allocate_tensors()
    interpreter.set_tensor(input_details[0]['index'], sample.input)
    interpreter.invoke()
    features = interpreter.get_tensor(output_details[0]['index'])
    self.vocoder(features).numpy() |
tensorflowtensorflow | Some variables are not restored, only when MirroredStrategy is used | Bug | System information: custom code: yes; OS: official Docker image; TF installed from: binary; TF version: I've tried every version >= 2.3.0 and could reproduce this bug in each of them; Python: reproduced in 3.6, 3.7 and 3.8; GPU: not relevant, could reproduce this issue with and without a GPU.

Description: the tf.keras.models.Model.load_weights method doesn't restore the optimizer slot variables, only when MirroredStrategy is used. Here's minimal code to demonstrate this bug:

#!/usr/bin/env python3
import tensorflow as tf

def prepare_model():
    model = tf.keras.models.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer='adam', loss='mse')
    model.build([None, 1])
    return model

def print_opt_weights(tag):
    print(tag)
    print(list(map(lambda x: x.name, model.optimizer.weights)))

labels = inputs = tf.random.uniform((5, 1))

model = prepare_model()
print_opt_weights('after model creation')
model.fit(x=inputs, y=labels, batch_size=1)
print_opt_weights('after training')
model.save_weights('saved')

print('without distribute')
model = prepare_model()
print_opt_weights('after model creation')
status = model.load_weights('saved')
print_opt_weights('after load_weights')
# model.fit(x=inputs, y=labels, batch_size=1)
# print_opt_weights('after retraining')
status.assert_consumed()

print('with distribute')
with tf.distribute.MirroredStrategy().scope():
    model = prepare_model()
    print_opt_weights('after model creation')
    status = model.load_weights('saved')
    print_opt_weights('after load_weights')
    # model.fit(x=inputs, y=labels, batch_size=1)
    # print_opt_weights('after retraining')
    status.assert_consumed()

The commented-out lines will make the optimizers in both cases have the same number of weights. Upon the load_weights call, all the slot variables in the optimizer are created when no distribution strategy is used, while they are not created until the model is actually retrained when a distribution strategy is in use. The above snippet produces output like below:

after model creation
after training
['Adam/iter:0', 'Adam/dense/kernel/m:0', 'Adam/dense/bias/m:0', 'Adam/dense/kernel/v:0', 'Adam/dense/bias/v:0']
without distribute
after model creation
after load_weights
['dense_1/kernel/m:0', 'dense_1/kernel/v:0', 'dense_1/bias/m:0', 'dense_1/bias/v:0']
with distribute
after model creation
after load_weights

As we can see from the output, the optimizer slot variables are correctly restored without MirroredStrategy, while they are not restored with MirroredStrategy. In my code I was calling the assert_consumed method just after calling load_weights, to make sure everything is loaded correctly before proceeding to any other operations, and due to this unexpected difference in behavior it crashes when it uses MirroredStrategy. In either case (with/without a distribution strategy), all the variables are probably restored after retraining (calling fit again), so the easy solution would be to just ignore this difference, but I want to know what makes this difference and how we can fix it. Thank you so much for taking the time to look at this issue; any comments or suggestions would be very helpful. |
tensorflowtensorflow | keras/preprocessing/dataset_utils.py not working for non-inferred labels | Bug | Describe the current behavior: dataset_utils works for labels='inferred' but not for any provided list of labels. Describe the expected behavior: it should work. I changed it as per the attached dataset_utils.txt, which works as far as I can tell. If this hasn't already been fixed in a later version, and if the changes meet whatever standards you have, feel free to adopt any part of it for future versions. |
tensorflowtensorflow | iOS TensorFlow Lite error: Didn't find op for builtin opcode 'RESIZE_BILINEAR' version '3' | Bug | System information: custom code: no, it's the TensorFlow Lite iOS object detection example; I just tried to use my own converted Darknet Tiny YOLO v4 .tflite model. OS: n/a. Mobile device: iOS 13.7. TF installed from: I ran the pod install command on my Mac. TF version: unknown, whatever pod install installed. Python/Bazel/GCC/CUDA/GPU: n/a.

Describe the current behavior: from Xcode, running the solution to deploy to my iOS iPhone device fails on the following line:

interpreter = try Interpreter(modelPath: modelPath, options: options)

Describe the expected behavior: I expected it to run the normal TFLite iOS object detection example program on my phone, but with my model. |
tensorflowtensorflow | Jacobian fails on gradient of tf.function with if/elif/else or nested tf.cond | Bug | System information: custom code: yes; OS: Linux Ubuntu 20.04; TF installed from: binary (pip); TF version: v2.3.0-54-gfcc4b966f1 2.3.1; Python: 3.8.5; CUDA/cuDNN: executing on CPU; GPU: executing on CPU.

Describe the current behavior: computing the GradientTape.jacobian of the gradient (i.e. the Hessian) of a tf.function with either an if/elif/else construct or nested tf.cond results in a crash:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Trying to add unsupported dtype 10 (node gradients/AddN_2, defined at debug.py:44) Op: __inference___backward_f_bad_373_1158_1489 — see the full trace attached as trace_without_pfor.txt.

The computation works fine when disabling tf.function. First-order derivatives (gradient or jacobian) work fine too.

Describe the expected behavior: the Hessian evaluates to tf.Tensor([[2.]], shape=(1, 1), dtype=float32).

Standalone code to reproduce the issue: the following works if USE_FUNCTION = False, but fails for both f_bad and f_bad_cond when using USE_FUNCTION = True. Both f_good and f_good_cond always work fine.

import tensorflow as tf

USE_FUNCTION = True
USE_PFOR = False

tf.config.run_functions_eagerly(not USE_FUNCTION)

@tf.function
def f_bad(x):
    if x > 1:
        return tf.pow(x, 2)
    elif x < 1:
        return tf.pow(x, 2)
    else:
        return tf.pow(x, 2)

@tf.function
def f_bad_cond(x):
    return tf.cond(x > 1, lambda: tf.pow(x, 2),
                   lambda: tf.cond(x < 1, lambda: tf.pow(x, 2), lambda: tf.pow(x, 2)))

@tf.function
def f_good(x):
    if x > 1:
        return tf.pow(x, 2)
    else:
        return tf.pow(x, 2)

@tf.function
def f_good_cond(x):
    return tf.cond(x > 1, lambda: tf.pow(x, 2), lambda: tf.pow(x, 2))

f = f_bad
x = tf.Variable(0.)
with tf.GradientTape(persistent=not USE_PFOR) as t2:
    with tf.GradientTape() as t1:
        y = f(x)
    g_y = t1.gradient(y, x)
hess = t2.jacobian(g_y, x, experimental_use_pfor=USE_PFOR)
print(hess)

Note that using USE_PFOR = True with USE_FUNCTION = True crashes for all four functions (see the attached trace_with_pfor.txt), but that is probably an entirely different issue. |
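The expected Hessian value can be sanity-checked without TensorFlow: on the branch taken at x = 0 the function is x squared, whose second derivative is 2. A central finite-difference sketch (a numeric check, not a replacement for tf.GradientTape):

```python
def second_derivative(f, x, h=1e-4):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# The branch active around x = 0 computes x**2, so f'' should be 2,
# matching the tf.Tensor([[2.]]) the issue expects from the Hessian.
hess = second_derivative(lambda x: x ** 2, 0.0)
```

This agrees with the expected tf.Tensor([[2.]], shape=(1, 1)) output to within floating-point error.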
tensorflowtensorflow | tf.logging documentation leads to 404 | Bug | URL(s) with the issue: as far as I can tell, all articles describing the tf.logging namespace. Description of issue: despite showing up in search results, these articles are not accessible. Clear description: search results for "logging" and related queries contain links to nonexistent articles, leading to confusion and potentially avoidable issues. Submit a pull request?: not at this time. |
tensorflowtensorflow | h5py 3.0.0 causes issues with Keras model loading in TensorFlow 2.1.0 | Bug | h5py released version 3.0.0 today, and it causes this code (#L182) to fail with the error:

  File "/databricks/python/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py", line 146, in load_model
    return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
  File "/databricks/python/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 166, in load_model_from_hdf5
    model_config = json.loads(model_config.decode('utf-8'))
AttributeError: 'str' object have no attribute 'decode'

It looks like in version 2.1.0 the h5py version is not pinned (it is pinned on master), which is causing the issue. |
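A version-agnostic guard sketch (the helper name `decode_attr` is hypothetical, illustrating the kind of fix needed): h5py < 3 returns HDF5 string attributes as bytes, h5py >= 3 returns str, so decoding must be conditional.

```python
import json

def decode_attr(value):
    # h5py 2.x yields bytes, h5py 3.x yields str -- handle both
    if isinstance(value, bytes):
        return value.decode('utf-8')
    return value

old_style = b'{"class_name": "Model"}'  # what h5py 2.x hands back
new_style = '{"class_name": "Model"}'   # what h5py 3.x hands back
cfg_old = json.loads(decode_attr(old_style))
cfg_new = json.loads(decode_attr(new_style))
```

Both spellings parse to the same config dict, whereas the unconditional `.decode('utf-8')` in hdf5_format.py fails on the str that h5py 3.x returns.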
tensorflowtensorflow | Freeze/crash occurs when batch size is reduced | Bug | System information: custom code: yes; OS: Windows 10 Home 1909; TF installed from: binary; TF version: 2.3.1; Python: 3.7.5; CUDA/cuDNN: 10.1, cuDNN 10.1-windows10-x64-v7.6.5.32; GPU: GeForce GTX 1660 Ti, computeCapability 7.5, coreClock 1.455 GHz, coreCount 24, deviceMemorySize 6.00 GiB, deviceMemoryBandwidth 268.26 GiB/s.

Describe the current behavior: when run with a large batch size in model.fit, the code completes successfully; with a small batch size, the training crashes/freezes in the middle of an epoch.

Describe the expected behavior: a small batch size does not cause TF to hang or crash.

Standalone code to reproduce the issue:

# -*- coding: utf-8 -*-
"""Reproduce crash during fit."""
import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

model = Sequential()
model.add(Dense(9, activation='relu', input_dim=125))
model.add(Dense(31, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

x_train = np.random.rand(3225, 125)
y_train = np.random.rand(3225, 31)

# This works:
model.fit(x_train, y_train, epochs=100, batch_size=x_train.shape[0], verbose=1)
# This crashes:
model.fit(x_train, y_train, epochs=100, batch_size=100, verbose=1)

The exact point where it freezes seems to vary based on the random seed. Here is an example; the shell (cmd.exe) is frozen and must be killed through Task Manager:

Epoch 11/100
33/33 - 1s 18ms/step - loss: 607.7234 - accuracy: 0.0332
Epoch 12/100
24/33 - ETA: 0s - loss: 645.0050 - accuracy: 0.0288 |
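A side note, unrelated to the freeze itself: categorical_crossentropy expects one-hot targets, while the repro feeds uniform random floats as y_train (which is also why the loss climbs into the hundreds). A sketch of generating valid synthetic labels for the same shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes = 3225, 31
class_ids = rng.integers(0, n_classes, size=n_samples)  # one class per sample
y_train = np.eye(n_classes)[class_ids]                  # one-hot rows
```

Each row now sums to exactly one, as the softmax/cross-entropy pairing assumes.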
tensorflow/tensorflow | Renode tests of CMSIS-NN kernel tests are not working | Bug | TensorFlow Micro. System information: Host OS platform and distribution: (e.g., Linux Ubuntu 16.04). TensorFlow installed from: source or binary. TensorFlow version (commit SHA if source): 5b5960f4fda6a6ae9cbf5233873b9ea6910b3e4e. Target platform: (e.g., Arm Mbed OS, Arduino Nano 33, etc.).

Describe the problem: the CMSIS-NN kernel tests conv and softmax are not working.

Please provide the exact sequence of commands/steps run when hitting the problem:

make -j4 -f tensorflow/lite/micro/tools/make/Makefile TAGS=cmsis-nn TARGET=stm32f4 test_kernel_conv_test
make -j4 -f tensorflow/lite/micro/tools/make/Makefile TAGS=cmsis-nn TARGET=stm32f4 test_kernel_softmax_test
tensorflow/tensorflow | In the codelab "Build a handwritten digit classifier app with TensorFlow Lite", at step 4 the interpreter is not initialized with the model, which causes an NPE | Bug | Check the above link: in step 4 the interpreter is not initialized using the model, which is why it gives an NPE on the interpreter instance.
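The failure mode the report describes (using the interpreter before the model has been loaded into it) can be modelled with a tiny stand-in class; the names here are mine and only illustrate the lifecycle bug in the codelab step, not the real TFLite API:

```python
class DigitClassifier:
    # Toy stand-in for the codelab's interpreter wrapper: calling
    # classify() before initialize() models the NPE from step 4.
    def __init__(self):
        self._model = None

    def initialize(self, model_bytes):
        self._model = model_bytes

    def classify(self, pixels):
        if self._model is None:
            raise RuntimeError("interpreter used before initialization")
        return sum(pixels) % 10  # placeholder prediction
```

The fix in the codelab is simply to call the initialization step with the model before invoking inference.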
tensorflow/tensorflow | Random error with conv_grad_filter_ops | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub (tag: bug_template).

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04. TensorFlow installed from: source. TensorFlow version: TF 2.2. Python version: 3.6. CUDA/cuDNN version: 11.0.194 / 8.0.1. GPU model and memory: GeForce RTX 2070 SUPER.

Describe the current behavior: we have a repo with a few hundred unit tests. During CI, in 5-10% of the runs, one of the larger tests (always the same one) fails with the following log (repeated many times):

tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at conv_grad_filter_ops.cc:1101 : Not found: No algorithm worked!

The test suite is a mix of eager-execution and graph-mode tests. This particular failing test builds a network (CNN ResNet backbone, then object detection) and uses the Estimator API; its only exotic feature is the use of RaggedTensor. Another test is very similar but has no RaggedTensor and has no issue. Both these tests are run towards the end of the suite, so after a lot of model variables have been instantiated and deleted.

Describe the expected behavior: a more informative error message.

Standalone code to reproduce the issue: unfortunately this error can be reproduced only randomly and with the whole repo; even changing the order of the tests can have an impact. I'm aware this ticket is not super helpful, but I hope someone will have an idea about what's happening.

Other info / logs: the stack trace looks like:

2020-10-27T07:40:30.393Z tensorflow.python.framework.errors_impl.NotFoundError: No algorithm worked!
2020-10-27T07:40:30.393Z [[node SGD/gradients/gradients/Conv2D_grad/Conv2DBackpropFilter (defined at
2020-10-27T07:40:30.393Z
2020-10-27T07:40:30.393Z Errors may have originated from an input operation.
2020-10-27T07:40:30.393Z Input Source operations connected to node SGD/gradients/gradients/Conv2D_grad/Conv2DBackpropFilter:
2020-10-27T07:40:30.393Z BiasAdd (defined at
tensorflow/tensorflow | tensorflow.keras save_weights and load_weights produce random evaluation results on some datasets | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Windows 10 and macOS 10.15.7. TensorFlow installed from: pip install tensorflow. TensorFlow version: 2.3.1. Python version: Windows 10: 3.7.4; macOS: 3.8.5. CUDA/cuDNN version: CUDA only on Windows 10; CUDA 10.1, cuDNN 7.6.5. GPU model and memory: MSI GeForce GTX 1080, 8 GB.

Describe the current behavior: save_weights and load_weights on a Keras model seem to work fine in the same Python session as training, but after stopping that Python session and calling load_weights in a new Python session, some datasets produce a random evaluation result.

Describe the expected behavior: evaluation results after load_weights should be the same across Python sessions.

Standalone code to reproduce the issue: this is the full standalone reproducible problem; when loading weights, each time you get a random evaluation loss and accuracy.
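A useful sanity check when debugging this kind of report is to compare the raw weight values before saving and after loading; if they match, the nondeterminism must come from somewhere else (e.g., un-serialized layers or data pipelines). A TF-free sketch of the comparison, on flat lists of weight values:

```python
import math

def weights_equal(before, after, tol=1e-6):
    # Compare two flat lists of weight values; after load_weights the
    # restored values should match what was saved.
    return len(before) == len(after) and all(
        math.isclose(a, b, abs_tol=tol) for a, b in zip(before, after)
    )
```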
tensorflow/tensorflow | tf.where raises TypeError for a RaggedTensor argument "condition" | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 20.04. TensorFlow installed from: pip install tensorflow. TensorFlow version: v2.3.0-54-gfcc4b966f1 (2.3.1). Python version: 3.8.2 64-bit (venv).

Describe the current behavior: calling tf.where raises TypeError for a RaggedTensor argument "condition": TypeError: Expected bool passed to parameter 'condition' of op 'SelectV2', got tf.RaggedTensor.

Describe the expected behavior: tf.where should support RaggedTensor.

Other info / logs: source code snippet:

input_action_mask = tf.where(tf.math.equal(action_mask, 0), no_zero_input_value, action_mask, name="input_action_mask")

action_mask is a RaggedTensor, no_zero_input_value is a float. I think broadcasting should not be a problem here.

2020-10-28 15:26:31,076 ERROR worker.py:1018 -- Possible unhandled error from worker: ray::RolloutWorker.__init__() (pid=3809, ip=172.28.211.149)
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/framework/tensor_util.py", line 268, in inner
    _check_failed(v)
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/framework/tensor_util.py", line 249, in _check_failed
    raise ValueError(v)
ValueError: tf.RaggedTensor(values=Tensor("tu_policy/Equal_1:0", shape=(1, 133), dtype=bool), row_splits=Tensor("tu_policy/Placeholder_12:0", shape=(None,), dtype=int64))

During handling of the above exception, another exception occurred:

ray::RolloutWorker.__init__() (pid=3809, ip=172.28.211.149)
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py", line 465, in _apply_op_helper
    values = ops.convert_to_tensor(
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1499, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 338, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 263, in constant
    return _constant_impl(value, dtype, shape, name, verify_shape=False,
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 280, in _constant_impl
    tensor_util.make_tensor_proto(
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/framework/tensor_util.py", line 456, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/framework/tensor_util.py", line 335, in _AssertCompatible
    raise TypeError("Expected %s, got %s of type '%s' instead." %
TypeError: Expected bool, got tf.RaggedTensor(values=Tensor("tu_policy/Equal_1:0", shape=(1, 133), dtype=bool), row_splits=Tensor("tu_policy/Placeholder_12:0", shape=(None,), dtype=int64)) of type 'RaggedTensor' instead.

During handling of the above exception, another exception occurred:

ray::RolloutWorker.__init__() (pid=3809, ip=172.28.211.149)
  File "python/ray/_raylet.pyx", line 479, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 483, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 484, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 438, in ray._raylet.execute_task.function_executor
  File "/home/user/venv/lib/python3.8/site-packages/ray/rllib/evaluation/rollout_worker.py", line 416, in __init__
    self._build_policy_map(policy_dict, policy_config)
  File "/home/user/venv/lib/python3.8/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1008, in _build_policy_map
    policy_map[name] = cls(obs_space, act_space, merged_conf)
  File "/home/user/venv/lib/python3.8/site-packages/ray/rllib/policy/tf_policy_template.py", line 206, in __init__
    DynamicTFPolicy.__init__(
  File "/home/user/venv/lib/python3.8/site-packages/ray/rllib/policy/dynamic_tf_policy.py", line 198, in __init__
    self.model = ModelCatalog.get_model_v2(
  File "/home/user/venv/lib/python3.8/site-packages/ray/rllib/models/catalog.py", line 339, in get_model_v2
    raise e
  File "/home/user/venv/lib/python3.8/site-packages/ray/rllib/models/catalog.py", line 324, in get_model_v2
    instance = model_cls(obs_space, action_space,
  File "/mnt/c/users/user/desktop/ki galvanik/marl for galvanic per second 2 tus/articleschedulingmodel.py", line 41, in __init__
    input_action_mask = tf.where(tf.math.equal(action_mask, 0), no_zero_input_value, action_mask, name="input_action_mask")
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 4461, in where_v2
    return gen_math_ops.select_v2(condition=condition, t=x, e=y, name=name)
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py", line 8874, in select_v2
    _op, _outputs = _op_def_library._apply_op_helper(
  File "/home/user/venv/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py", line 475, in _apply_op_helper
    raise TypeError(
TypeError: Expected bool passed to parameter 'condition' of op 'SelectV2', got tf.RaggedTensor(values=Tensor("tu_policy/Equal_1:0", shape=(1, 133), dtype=bool), row_splits=Tensor("tu_policy/Placeholder_12:0", shape=(None,), dtype=int64)) of type 'RaggedTensor' instead.
Error: Expected bool, got tf.RaggedTensor(values=Tensor("tu_policy/Equal_1:0", shape=(1, 133), dtype=bool), row_splits=Tensor("tu_policy/Placeholder_12:0", shape=(None,), dtype=int64)) of type 'RaggedTensor' instead.
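In real TF, a workaround along the lines of tf.ragged.map_flat_values (applying the select to the flat values of the ragged tensor) is the usual route. The semantics the reporter expects can be shown TF-free on a ragged list-of-lists structure; this helper is purely illustrative:

```python
def ragged_where(condition_rows, replacement, original_rows):
    # Row-wise "where" over a ragged (list-of-lists) structure: where the
    # condition holds, take the scalar replacement, else keep the original.
    return [
        [replacement if c else v for c, v in zip(cond, row)]
        for cond, row in zip(condition_rows, original_rows)
    ]

# mirrors tf.where(tf.math.equal(action_mask, 0), no_zero_input_value, action_mask)
mask = [[0.0, 2.0], [3.0]]
cond = [[v == 0.0 for v in row] for row in mask]
```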
tensorflow/tensorflow | InvalidZoneinfoFile when trying to create a GMT Pendulum object | Bug | Hello, I encountered the issue below when trying to execute this code.

System information: Windows 10, Python 3.7, pendulum 2.0.2, pytzdata 2020.1 (dependency installed by pendulum).

Code:

import pendulum
gmt_time = pendulum.now('GMT')

Workaround: this issue was closed, but it seems that with my current config it re-appeared. I downgraded my pytzdata version to an older one (2018.3) as a workaround.

Traceback:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\pnouhaud\conda\envs\asbooker-prod-py37\lib\site-packages\pendulum\__init__.py", line 212, in now
    tz = _safe_timezone(tz)
  File "C:\Users\pnouhaud\conda\envs\asbooker-prod-py37\lib\site-packages\pendulum\__init__.py", line 82, in _safe_timezone
    return timezone(obj)
  File "C:\Users\pnouhaud\conda\envs\asbooker-prod-py37\lib\site-packages\pendulum\tz\__init__.py", line 36, in timezone
    tz = _Timezone(name, extended=extended)
  File "C:\Users\pnouhaud\conda\envs\asbooker-prod-py37\lib\site-packages\pendulum\tz\timezone.py", line 30, in __init__
    tz = read(name, extend=extended)
  File "C:\Users\pnouhaud\conda\envs\asbooker-prod-py37\lib\site-packages\pendulum\tz\zoneinfo\__init__.py", line 9, in read
    return Reader(extend=extend).read_for(name)
  File "C:\Users\pnouhaud\conda\envs\asbooker-prod-py37\lib\site-packages\pendulum\tz\zoneinfo\reader.py", line 52, in read_for
    return self.read(file_path)
  File "C:\Users\pnouhaud\conda\envs\asbooker-prod-py37\lib\site-packages\pendulum\tz\zoneinfo\reader.py", line 64, in read
    return self._parse(fd)
  File "C:\Users\pnouhaud\conda\envs\asbooker-prod-py37\lib\site-packages\pendulum\tz\zoneinfo\reader.py", line 115, in _parse
    type_idx = self._parse_type_idx(fd, hdr.transitions)
  File "C:\Users\pnouhaud\conda\envs\asbooker-prod-py37\lib\site-packages\pendulum\tz\zoneinfo\reader.py", line 198, in _parse_type_idx
    buff = self._check_read(fd, n)
  File "C:\Users\pnouhaud\conda\envs\asbooker-prod-py37\lib\site-packages\pendulum\tz\zoneinfo\reader.py", line 77, in _check_read
    nbytes, fd.name, len(result) if result else 0
pendulum.tz.zoneinfo.exceptions.InvalidZoneinfoFile: Expected 0 bytes to be read from C:\Users\pnouhaud\conda\envs\asbooker-prod-py37\lib\site-packages\pytzdata\zoneinfo\GMT but got 0
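The reader that fails here is parsing a binary TZif zoneinfo file; a quick way to check whether the installed pytzdata file is even a valid zoneinfo file is to look at its 4-byte magic. This sketch mirrors only that header check, not pendulum's full parser:

```python
def tzif_version(data):
    # A zoneinfo (TZif) file must start with the 4-byte magic b"TZif";
    # the byte after it is the format version. The reader's "expected N
    # bytes but got 0" error means it could not read the body at all.
    if data[:4] != b"TZif":
        raise ValueError("invalid zoneinfo file")
    return data[4:5]
```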
tensorflow/tensorflow | Unreferenced buffers in flatbuffer | Bug | When using inference_input_type=tf.int8 or inference_output_type=tf.int8, there are unreferenced buffers left over in the flatbuffer. The problem is in this function (L632), which cleans up unreferenced tensors over here (L710), but it does not clean up unreferenced buffers, i.e. the BufferT objects. It can be reproduced with:

import tensorflow as tf

def my_model():
    img = tf.keras.layers.Input(shape=(96, 96, 3))
    x = img
    x = tf.quantization.fake_quant_with_min_max_vars(x, -3, 3)
    x = tf.keras.layers.Conv2D(32, 3)(x)
    x = tf.quantization.fake_quant_with_min_max_vars(x, -3, 3)
    return tf.keras.Model(img, x)

converter = tf.lite.TFLiteConverter.from_keras_model(my_model())
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

In the above example, when inference_input_type and inference_output_type are not set, the flatbuffer will contain 7 buffer objects, of which buffer 0 is empty and unreferenced by definition of the schema; buffers 1 and 6 are referenced by the input and output tensors of the model. When inference_input_type and inference_output_type are set to tf.int8, then buffers 1 and 6 remain in the flatbuffer file but are unreferenced. @meghnanatraj, can you take a look at this? I see you pushed the code related to this.

System information: OS platform and distribution: Arch Linux. TensorFlow installed from: binary. TensorFlow version: tf-nightly on 27 Oct 2020.
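The missing cleanup pass amounts to a mark-and-sweep over buffers: keep every buffer referenced by a surviving tensor (plus the sentinel buffer 0) and remap the tensors' buffer indices. A sketch of that idea on simplified dict/list structures (not the real flatbuffer object API):

```python
def prune_unreferenced_buffers(tensors, buffers):
    # tensors: list of dicts with a "buffer" index; buffers: list of
    # buffer payloads. Buffer 0 is the schema's empty sentinel and is
    # always kept. Everything unreferenced is dropped and indices remapped.
    referenced = {0} | {t["buffer"] for t in tensors}
    keep = [i for i in range(len(buffers)) if i in referenced]
    remap = {old: new for new, old in enumerate(keep)}
    new_buffers = [buffers[i] for i in keep]
    new_tensors = [dict(t, buffer=remap[t["buffer"]]) for t in tensors]
    return new_tensors, new_buffers
```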
tensorflow/tensorflow | Model does not learn when using the GPU | Bug | System information: OS platform and distribution: Linux Ubuntu 18.04.5 LTS (Bionic Beaver). TensorFlow installed from: conda. TensorFlow version: 2.2.0. Python version: 3.6.9. CUDA/cuDNN version: 10.1.243 / 7.6.5. GPU model and memory: RTX 3090, 24265 MiB, driver 455.32.00.

Describe the current behavior: when using TensorFlow with the CPU (tensorflow), the model achieves a test accuracy of 0.90, but when running the same code with the GPU (tensorflow-gpu), the model achieves a test accuracy of 0.10. So it seems the CPU version learns while the GPU version does not. The same problem is present when running another simple example code with Keras.

Describe the expected behavior: I would expect the two versions to have around the same test accuracy in the end, or at least that it would learn with the GPU version.

Code to reproduce the issue: I'm using the following code:

import tensorflow as tf
from tensorflow.keras import Model, layers
import numpy as np

print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))

# MNIST dataset parameters.
num_classes = 10  # total classes (0-9 digits).

# Training parameters.
learning_rate = 0.001
training_steps = 200
batch_size = 128
display_step = 10

# Network parameters.
conv1_filters = 32  # number of filters for 1st conv layer.
conv2_filters = 64  # number of filters for 2nd conv layer.
fc1_units = 1024  # number of neurons for 1st fully-connected layer.

# Prepare MNIST data.
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Convert to float32.
x_train, x_test = np.array(x_train, np.float32), np.array(x_test, np.float32)
# Normalize image values from [0, 255] to [0, 1].
x_train, x_test = x_train / 255., x_test / 255.

# Use tf.data API to shuffle and batch data.
train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_data = train_data.repeat().shuffle(5000).batch(batch_size).prefetch(1)

# Create TF Model.
class ConvNet(Model):
    # Set layers.
    def __init__(self):
        super(ConvNet, self).__init__()
        # Convolution layer with 32 filters and a kernel size of 5.
        self.conv1 = layers.Conv2D(32, kernel_size=5, activation=tf.nn.relu)
        # Max pooling (down-sampling) with kernel size of 2 and strides of 2.
        self.maxpool1 = layers.MaxPool2D(2, strides=2)
        # Convolution layer with 64 filters and a kernel size of 3.
        self.conv2 = layers.Conv2D(64, kernel_size=3, activation=tf.nn.relu)
        # Max pooling (down-sampling) with kernel size of 2 and strides of 2.
        self.maxpool2 = layers.MaxPool2D(2, strides=2)
        # Flatten the data to a 1-D vector for the fully connected layer.
        self.flatten = layers.Flatten()
        # Fully connected layer.
        self.fc1 = layers.Dense(1024)
        # Apply dropout (if is_training is False, dropout is not applied).
        self.dropout = layers.Dropout(rate=0.5)
        # Output layer, class prediction.
        self.out = layers.Dense(num_classes)

    # Set forward pass.
    def call(self, x, is_training=False):
        x = tf.reshape(x, [-1, 28, 28, 1])
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.dropout(x, training=is_training)
        x = self.out(x)
        if not is_training:
            # tf cross entropy expects logits without softmax, so only
            # apply softmax when not training.
            x = tf.nn.softmax(x)
        return x

# Build neural network model.
conv_net = ConvNet()

# Cross-entropy loss.
# Note that this will apply softmax to the logits.
def cross_entropy_loss(x, y):
    # Convert labels to int64 for tf cross-entropy function.
    y = tf.cast(y, tf.int64)
    # Apply softmax to logits and compute cross-entropy.
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=x)
    # Average loss across the batch.
    return tf.reduce_mean(loss)

# Accuracy metric.
def accuracy(y_pred, y_true):
    # Predicted class is the index of highest score in prediction vector (i.e. argmax).
    correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
    return tf.reduce_mean(tf.cast(correct_prediction, tf.float32), axis=-1)

# Stochastic gradient descent optimizer.
optimizer = tf.optimizers.Adam(learning_rate)

# Optimization process.
def run_optimization(x, y):
    # Wrap computation inside a GradientTape for automatic differentiation.
    with tf.GradientTape() as g:
        # Forward pass.
        pred = conv_net(x, is_training=True)
        # Compute loss.
        loss = cross_entropy_loss(pred, y)

    # Variables to update, i.e. trainable variables.
    trainable_variables = conv_net.trainable_variables

    # Compute gradients.
    gradients = g.gradient(loss, trainable_variables)

    # Update W and b following gradients.
    optimizer.apply_gradients(zip(gradients, trainable_variables))

# Run training for the given number of steps.
for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):
    # Run the optimization to update W and b values.
    run_optimization(batch_x, batch_y)

    if step % display_step == 0:
        pred = conv_net(batch_x)
        loss = cross_entropy_loss(pred, batch_y)
        acc = accuracy(pred, batch_y)
        print("step: %i, loss: %f, accuracy: %f" % (step, loss, acc))

# Test model on validation set.
pred = conv_net(x_test)
print("Test Accuracy: %f" % accuracy(pred, y_test))

Other info / logs: the output from the CPU version:

2.2.0
2020-10-27 13:57:03.982255: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2020-10-27 13:57:04.003444: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 4149920000 Hz
2020-10-27 13:57:04.004916: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x556240ba48c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-27 13:57:04.004927: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-10-27 13:57:04.004979: I tensorflow/core/common_runtime/process_util.cc:147] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
step: 10, loss: 1.765925, accuracy: 0.859375
step: 20, loss: 1.604248, accuracy: 0.906250
step: 30, loss: 1.642668, accuracy: 0.882812
step: 40, loss: 1.594136, accuracy: 0.937500
step: 50, loss: 1.557037, accuracy: 0.953125
step: 60, loss: 1.549416, accuracy: 0.937500
step: 70, loss: 1.530980, accuracy: 0.976562
step: 80, loss: 1.546553, accuracy: 0.937500
step: 90, loss: 1.518947, accuracy: 0.968750
step: 100, loss: 1.525878, accuracy: 0.953125
step: 110, loss: 1.492367, accuracy: 0.992188
step: 120, loss: 1.498649, accuracy: 0.984375
step: 130, loss: 1.515978, accuracy: 0.960938
step: 140, loss: 1.522711, accuracy: 0.976562
step: 150, loss: 1.496059, accuracy: 0.976562
step: 160, loss: 1.501745, accuracy: 0.976562
step: 170, loss: 1.488870, accuracy: 0.984375
step: 180, loss: 1.504619, accuracy: 0.992188
step: 190, loss: 1.480513, accuracy: 1.000000
step: 200, loss: 1.489994, accuracy: 0.984375
Test Accuracy: 0.976200
Process finished with exit code 0

The output from the GPU version:

2.2.0
2020-10-27 13:52:51.391410: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-10-27 13:52:51.424025: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-27 13:52:51.424711: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:21:00.0 name: GeForce RTX 3090 computeCapability: 8.6 coreClock: 1.86GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2020-10-27 13:52:51.424829: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-10-27 13:52:51.425678: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-10-27 13:52:51.426660: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-10-27 13:52:51.426788: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-10-27 13:52:51.427656: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-10-27 13:52:51.428145: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-10-27 13:52:51.430066: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-10-27 13:52:51.430149: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-27 13:52:51.430858: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-27 13:52:51.431506: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
2020-10-27 13:52:51.654160: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2020-10-27 13:52:51.659773: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 4149920000 Hz
2020-10-27 13:52:51.661105: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55b3ed5bed50 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-27 13:52:51.661115: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-10-27 13:52:51.661262: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-27 13:52:51.661963: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:21:00.0 name: GeForce RTX 3090 computeCapability: 8.6 coreClock: 1.86GHz coreCount: 82 deviceMemorySize: 23.70GiB deviceMemoryBandwidth: 871.81GiB/s
2020-10-27 13:52:51.661986: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-10-27 13:52:51.661992: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-10-27 13:52:51.661997: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-10-27 13:52:51.662002: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-10-27 13:52:51.662007: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-10-27 13:52:51.662012: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-10-27 13:52:51.662017: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-10-27 13:52:51.662051: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-27 13:52:51.662717: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-27 13:52:51.663356: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-10-27 13:52:51.663379: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-10-27 13:52:51.720749: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-27 13:52:51.720769: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0
2020-10-27 13:52:51.720775: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N
2020-10-27 13:52:51.720899: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-27 13:52:51.721550: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-27 13:52:51.722176: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-27 13:52:51.722790: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 21939 MB memory) -> physical GPU (device: 0, name: GeForce RTX 3090, pci bus id: 0000:21:00.0, compute capability: 8.6)
2020-10-27 13:52:51.723983: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55b3ee94ec90 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-10-27 13:52:51.723991: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 3090, Compute Capability 8.6
2020-10-27 13:52:52.629936: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-10-27 13:52:54.157774: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
step: 10, loss: 2.302645, accuracy: 0.101562
step: 20, loss: 2.302479, accuracy: 0.101562
step: 30, loss: 2.302398, accuracy: 0.125000
step: 40, loss: 2.302354, accuracy: 0.164062
step: 50, loss: 2.301157, accuracy: 0.140625
step: 60, loss: 2.301878, accuracy: 0.140625
step: 70, loss: 2.302341, accuracy: 0.109375
step: 80, loss: 2.303088, accuracy: 0.039062
step: 90, loss: 2.302307, accuracy: 0.085938
step: 100, loss: 2.302237, accuracy: 0.125000
step: 110, loss: 2.302458, accuracy: 0.125000
step: 120, loss: 2.301577, accuracy: 0.171875
step: 130, loss: 2.301929, accuracy: 0.109375
step: 140, loss: 2.302793, accuracy: 0.101562
step: 150, loss: 2.302621, accuracy: 0.125000
step: 160, loss: 2.302866, accuracy: 0.078125
step: 170, loss: 2.301470, accuracy: 0.164062
step: 180, loss: 2.301091, accuracy: 0.156250
step: 190, loss: 2.302170, accuracy: 0.132812
step: 200, loss: 2.302764, accuracy: 0.117188
Test Accuracy: 0.113500
Process finished with exit code 0
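On a correctly functioning backend, the training loss in a report like this should fall over the steps; the stagnant ~2.3026 loss (= ln 10, i.e. random guessing over 10 classes) is the signature of gradients not being applied. A tiny TF-free analogue of that sanity check, fitting w*x to y by hand-written gradient descent:

```python
def train_step(w, x, y, lr=0.05):
    # One gradient-descent step on the squared error (w*x - y)**2.
    grad = 2.0 * (w * x - y) * x
    return w - lr * grad

# Fit w so that w * 2.0 approximates 4.0; the loss must decrease.
w, losses = 0.0, []
for _ in range(20):
    losses.append((w * 2.0 - 4.0) ** 2)
    w = train_step(w, 2.0, 4.0)
```

If the analogous check on the GPU shows a flat loss while the CPU run converges, the computation (not the model) is at fault, consistent with the report.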
tensorflow/tensorflow | tensorflow.dll leaks references to itself, preventing the DLL from unloading | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, see below. OS platform and distribution: Windows 10 Pro 64-bit, version 2004. Mobile device: n/a. TensorFlow installed from: binary. TensorFlow version: 2.3.1 (CPU only). Python version: n/a. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a.

Describe the current behavior: we link to the TensorFlow DLL dynamically, i.e. using LoadLibrary directly on tensorflow.dll, or on another DLL that itself links to TensorFlow. We use the TensorFlow C API to run some data through a model. Eventually, when processing is complete, the DLL is unlinked, i.e. using FreeLibrary directly on tensorflow.dll or on the other DLL that itself links to TensorFlow. For some reason unknown to us at this stage, the operating system (Windows) still believes the DLL is in use: the reference count is non-zero, thus any static resources allocated internally by TensorFlow are probably not freed. The code example below illustrates this problem.

If, within the same process and under certain conditions, the procedure above is repeated a second time (link again to TF, run a model, etc.), we experience an access violation error in TF_SessionRun. We were not able to produce sample code that reproduces this last error, as it only occurs when our software is used by a third-party application, the source code of which we do not have access to. However, we noticed that the error would no longer happen if we forcefully unlinked the TensorFlow DLL by repeatedly calling FreeLibrary on tensorflow.dll until the OS held no reference to it; this required some 30-40 calls to FreeLibrary. This implies that some internal state of TensorFlow from the first link/run/unlink cycle was still in use but was somehow invalid.

Describe the expected behavior: a cycle of LoadLibrary / run TF / FreeLibrary should leave the tensorflow.dll reference count unchanged, i.e. equal to zero if TF is not used anywhere else in the program. On FreeLibrary, any internal resources held by TF should be freed, provided TF is not in use by other parts of the program. Loading the DLL again should result in a brand-new internal state of TF.

Standalone code to reproduce the issue (C++):

#include <windows.h>
#include <tlhelp32.h>
#include <cstring>
#include <iostream>
#include <stdexcept>

// helper function to count the number of references to tensorflow.dll
std::size_t GetTensorflowRefCount()
{
    auto handle = GetCurrentProcessId();
    auto s = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE, handle);
    std::size_t value = 0;
    MODULEENTRY32 me32;
    me32.dwSize = sizeof(MODULEENTRY32);
    Module32First(s, &me32);
    do
    {
        if (std::strcmp(me32.szModule, "tensorflow.dll") == 0)
        {
            value = me32.GlblcntUsage;
            break;
        }
    } while (Module32Next(s, &me32));
    CloseHandle(s);
    return value;
}

// manually import tensorflow: some types and functions
typedef enum TF_Code
{
    TF_OK = 0, TF_CANCELLED = 1, TF_UNKNOWN = 2, TF_INVALID_ARGUMENT = 3,
    TF_DEADLINE_EXCEEDED = 4, TF_NOT_FOUND = 5, TF_ALREADY_EXISTS = 6,
    TF_PERMISSION_DENIED = 7, TF_UNAUTHENTICATED = 16, TF_RESOURCE_EXHAUSTED = 8,
    TF_FAILED_PRECONDITION = 9, TF_ABORTED = 10, TF_OUT_OF_RANGE = 11,
    TF_UNIMPLEMENTED = 12, TF_INTERNAL = 13, TF_UNAVAILABLE = 14, TF_DATA_LOSS = 15
} TF_Code;

typedef void TF_Status;
typedef void TF_Graph;
typedef void TF_SessionOptions;
typedef void TF_Session;

TF_Code (*TF_GetCode)(TF_Status*);
TF_Status* (*TF_NewStatus)();
TF_Graph* (*TF_NewGraph)();
TF_SessionOptions* (*TF_NewSessionOptions)();
TF_Session* (*TF_NewSession)(TF_Graph*, TF_SessionOptions*, TF_Status*);
void (*TF_DeleteSession)(TF_Session*, TF_Status*);
void (*TF_DeleteSessionOptions)(TF_SessionOptions*);
void (*TF_DeleteGraph)(TF_Graph*);
void (*TF_DeleteStatus)(TF_Status*);

void ImportTensorflow(HINSTANCE handle)
{
#define LOAD_FUNCTION(function) \
    function = reinterpret_cast<decltype(function)>(GetProcAddress(handle, #function));
    LOAD_FUNCTION(TF_GetCode)
    LOAD_FUNCTION(TF_NewStatus)
    LOAD_FUNCTION(TF_NewGraph)
    LOAD_FUNCTION(TF_NewSessionOptions)
    LOAD_FUNCTION(TF_NewSession)
    LOAD_FUNCTION(TF_DeleteSession)
    LOAD_FUNCTION(TF_DeleteSessionOptions)
    LOAD_FUNCTION(TF_DeleteGraph)
    LOAD_FUNCTION(TF_DeleteStatus)
#undef LOAD_FUNCTION
}

// helper function to throw on error
void TFStatusCheck(TF_Status* status)
{
    if (TF_GetCode(status) != TF_OK)
        throw std::runtime_error("error in tensorflow");
}

int main()
{
    // load tensorflow.dll and import functions
    std::cout << "initial: " << GetTensorflowRefCount() << std::endl;
    auto handle = LoadLibrary("tensorflow.dll");
    std::cout << "dll loaded: " << GetTensorflowRefCount() << std::endl;
    ImportTensorflow(handle);
    std::cout << "functions imported: " << GetTensorflowRefCount() << std::endl;

    // create a dummy session
    TF_Status* status = nullptr;
    TF_Graph* graph = nullptr;
    TF_SessionOptions* options = nullptr;
    TF_Session* session = nullptr;
    try
    {
        status = TF_NewStatus();
        std::cout << "TF_NewStatus: " << GetTensorflowRefCount() << std::endl;
        graph = TF_NewGraph();
        std::cout << "TF_NewGraph: " << GetTensorflowRefCount() << std::endl;
        options = TF_NewSessionOptions();
        std::cout << "TF_NewSessionOptions: " << GetTensorflowRefCount() << std::endl;
        session = TF_NewSession(graph, options, status);
        TFStatusCheck(status);
        std::cout << "TF_NewSession: " << GetTensorflowRefCount() << std::endl;
    }
    catch (const std::exception& e)
    {
        std::cout << e.what() << std::endl;
    }

    // free allocated resources
    if (session) { TF_DeleteSession(session, status); std::cout << "TF_DeleteSession: " << GetTensorflowRefCount() << std::endl; }
    if (options) { TF_DeleteSessionOptions(options); std::cout << "TF_DeleteSessionOptions: " << GetTensorflowRefCount() << std::endl; }
    if (graph) { TF_DeleteGraph(graph); std::cout << "TF_DeleteGraph: " << GetTensorflowRefCount() << std::endl; }
    if (status) { TF_DeleteStatus(status); std::cout << "TF_DeleteStatus: " << GetTensorflowRefCount() << std::endl; }

    // free dll
    FreeLibrary(handle);
    std::cout << "dll unloaded: " << GetTensorflowRefCount() << std::endl;
    return 0;
}

Other info / logs: simply compile the above on Windows with clang (clang test.cpp -o test). Output on my computer:

initial: 0
dll loaded: 1
functions imported: 1
TF_NewStatus: 1
TF_NewGraph: 2
TF_NewSessionOptions: 2
2020-10-26 13:38:48.388408: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-26 13:38:48.398942: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1ffcd6eeec0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-26 13:38:48.399288: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
TF_NewSession: 41
TF_DeleteSession: 40
TF_DeleteSessionOptions: 40
TF_DeleteGraph: 40
TF_DeleteStatus: 40
dll unloaded: 39

Notice the following: TF_NewGraph generates a new reference to the DLL; TF_NewSession generates 39 new references to the DLL; TF_DeleteSession removes one reference. We are left with 39 leaked references which we did not create explicitly in our program. We were able to reproduce this with a number of TF versions: 1.15, 2.1.1, 2.3.1, with or without GPU support.
tensorflowtensorflow | tensorflow 1.14 bug: tf.gather on flattened top_k indices returns wrong values | Bug |

```python
import tensorflow as tf

sess = tf.Session()
logits = tf.constant([[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]], dtype=tf.float32)
topk_logits_values, topk_logits_indices = tf.math.top_k(logits, k=3)
print(sess.run(topk_logits_indices))

sample_index = tf.multinomial(topk_logits_values, num_samples=1)
range = tf.range(0, sample_index.get_shape()[0])
sample_index = tf.cast(sample_index, tf.int32)
sample_index = range * sample_index.get_shape()[0] + tf.reshape(sample_index, [-1])
print(sess.run(sample_index))
print(sess.run(tf.reshape(topk_logits_indices, [-1])))

sample_global_index = tf.gather(tf.reshape(topk_logits_indices, [-1]), indices=sample_index)
print(sess.run(sample_global_index))
sample_global_index = tf.gather(tf.reshape(topk_logits_indices, [-1]), indices=tf.constant([2, 4]))
print(sess.run(sample_global_index))
```

Printed result:

```
[[3 2 1]
 [3 2 1]]
[2 4]
[3 2 1 3 2 1]
[3 1]
```

Both gathers should return `[1 2]`.
tensorflowtensorflow | tensorflow 1.14 bug: tf.gather_nd on top_k indices returns wrong values | Bug |

```python
import tensorflow as tf

sess = tf.Session()
logits = tf.constant([[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]], dtype=tf.float32)
topk_logits_values, topk_logits_indices = tf.math.top_k(logits, k=3)
print(sess.run(topk_logits_indices))
print(sess.run(topk_logits_values))

sample_index = tf.multinomial(topk_logits_values, num_samples=1)
range = tf.range(0, sample_index.get_shape()[0])
sample_index = tf.cast(sample_index, tf.int32)
range = tf.expand_dims(range, 1)
print(sess.run(range))
sample_index = tf.concat([range, sample_index], 1)
print(sess.run(sample_index))

sample_global_index = tf.gather_nd(topk_logits_indices, indices=sample_index)
print(sess.run(sample_global_index))
sample_global_index = tf.gather_nd(topk_logits_indices, indices=tf.constant([[0, 0], [1, 2]]))
print(sess.run(sample_global_index))
```

Printed result:

```
[[3 2 1]
 [3 2 1]]
[[0 0]
 [1 2]]
[2 1]
```

Both gather_nds should return `[3 1]`.
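For reference, the two indexing schemes these reports exercise can be sketched in plain Python (a hedged illustration of the intended semantics, not of TensorFlow itself): `tf.gather` on a flattened tensor picks elements by flat position, while `tf.gather_nd` with `[row, col]` pairs picks one element per pair.

```python
def gather(flat, indices):
    # like tf.gather on a 1-D tensor: pick elements by flat position
    return [flat[i] for i in indices]

def gather_nd(matrix, indices):
    # like tf.gather_nd with [row, col] pairs: pick one element per pair
    return [matrix[r][c] for r, c in indices]

topk_indices = [[3, 2, 1], [3, 2, 1]]
flat = [x for row in topk_indices for x in row]     # [3, 2, 1, 3, 2, 1]

print(gather(flat, [2, 4]))                          # [1, 2]
print(gather_nd(topk_indices, [[0, 0], [1, 2]]))     # [3, 1]
```

These are exactly the "should be" values the two reports expect (`[1 2]` for the flattened gather, `[3 1]` for gather_nd).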
tensorflowtensorflow | docs: Failed to initialize NVML: Driver/library version mismatch | Bug | URL(s) with the issue: the "Install CUDA with apt" section of the GPU install guide. Description of issue (what needs changing): I'm following the Ubuntu 18.04 (CUDA 10.1) instructions. I installed the NVIDIA package repositories and nvidia-driver-450, and rebooted. Then `nvidia-smi` works:

```
$ nvidia-smi
NVIDIA-SMI 450.80.02   Driver Version: 450.80.02   CUDA Version: 11.0
GPU 0: Tesla K80 | Persistence-M: On | Bus-Id: 00000000:00:1e.0 | Disp.A: Off | Volatile Uncorr. ECC: 0
Fan N/A | Temp 62C | Perf P0 | Pwr 63W / 149W | Memory 0MiB / 11441MiB | GPU-Util 0% | Compute M. Default
No running processes found
```

I then installed cuda-10-1, libcudnn7=7.6.5.32-1+cuda10.1, and libcudnn7-dev=7.6.5.32-1+cuda10.1 as per the instructions. Now I get:

```
$ nvidia-smi
Failed to initialize NVML: Driver/library version mismatch
```

I'm running Ubuntu 18.04.5 LTS (Bionic Beaver).
tensorflowtensorflow | tf.keras.experimental.CosineDecay error | Bug | TensorFlow version: 2.3.0. Python version: 3.7.6. According to the documentation, "You can pass this schedule directly into a tf.keras.optimizers.Optimizer as the learning rate." However, when I try to use the suggested code I get the following error:

```
TypeError: '<' not supported between instances of 'CosineDecay' and 'int'
```

Standalone code to reproduce the issue — here is a sample code:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow as tf

# Model / data parameters
num_classes = 10
input_shape = (28, 28, 1)

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Scale images to the [0, 1] range
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# Make sure images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
print("x_train shape:", x_train.shape)
print(x_train.shape[0], "train samples")
print(x_test.shape[0], "test samples")

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = keras.Sequential(
    [
        keras.Input(shape=input_shape),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ]
)
model.summary()

decay_steps = 10
lr_decayed_fn = tf.keras.experimental.CosineDecay(0.0001, decay_steps)

model.compile(loss="categorical_crossentropy",
              optimizer=keras.optimizers.Adam(lr=lr_decayed_fn),
              metrics=["categorical_accuracy"])

batch_size = 128
epochs = 5

model.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(lr=lr_decayed_fn),
              metrics=["categorical_accuracy"])

model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)
```
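For context on what the schedule is supposed to produce: the value at a given step follows the standard cosine-decay formula. A hedged pure-Python sketch (my understanding of what `tf.keras.experimental.CosineDecay` computes, not its actual implementation):

```python
import math

def cosine_decay(initial_lr, step, decay_steps, alpha=0.0):
    # cosine decay from initial_lr down to alpha * initial_lr over decay_steps
    step = min(step, decay_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * step / decay_steps))
    decayed = (1.0 - alpha) * cosine + alpha
    return initial_lr * decayed

print(cosine_decay(0.0001, 0, 10))    # 0.0001 — full rate at step 0
print(cosine_decay(0.0001, 10, 10))   # ~0.0 — decayed to alpha * initial_lr
```

The reported TypeError suggests the schedule object itself is being compared to an int somewhere in the optimizer, rather than being called per step as above.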
tensorflowtensorflow | unexpected snapshot behaviour with flat map in tf nightly | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 ubuntu 18 04 tensorflow instal from source or binary binary pip tensorflow version use command below 2 4 0 dev20201023 python version 3 7 7 describe the current behavior a dataset form by flat map ping multiple snapshotte dataset which have each be iterate over individually thus produce file on disk result in a dataset which seemingly do not use those file on disk this be different to the behaviour of cache in tf 2 3 and tf nightly and snapshot in tf 2 3 describe the expect behavior snapshot to work equivalently in 2 3 as tf nightly and similarly to cache standalone code to reproduce the issue colab here python import os from tempfile import temporarydirectory import numpy as np import tensorflow as tf def as numpy ds tf datum dataset return np array x numpy for x in ds def get data num repeat 2 snap false preprocess early false preprocess late false del rng false get numpy result from a data pipeline the pipeline look like 1 range 2 add stateful random noise 3 create num repeat cache d or snapshot ted version 4 flat map if num repeat 1 args num repeat number of duplicate create in step 3 above snap use snapshot otherwise use cache preprocess early if true we iterate over individually cache snapshotte dataset prior to flat mapping preprocess late if true we iterate over the flat map pe dataset del rng if true we delete the rng responsible for generate random noise in step 2 this will cause an error if this map function be call again rather than use cache snapshotte file on disk return two iteration of the repeat dataset rng tf random generator from seed 0 dataset tf datum dataset range 10 map lambda x tf cast x tf float32 rng uniform with temporarydirectory as tmp dir path os path join tmp dir f repeat I for I in range 
num repeat if snap dataset dataset apply tf datum experimental snapshot path for path in path else dataset dataset cache path for path in path if preprocess early iterate over dataset individually to force save to file for ds in dataset as numpy ds if num repeat 1 dataset dataset else dataset tf datum dataset from tensor slice dataset flat map lambda x x if preprocess late iterate over concatenate dataset to force save to file as numpy dataset if del rng this will cause an error be the original map dataset be call del rng return as numpy dataset as numpy dataset class snapshott tf test testcase def test consistent self base0 base1 get datum np testing assert equal base0 base1 def test reproducible self base0 get datum s0 s1 get datum np testing assert equal s0 s1 np testing assert equal s0 base0 def test snapshot self base0 get datum s0 s1 get datum snap true np testing assert equal s0 s1 np testing assert equal s0 base0 def test preprocess late self base0 get datum s0 s1 get datum snap true preprocess late true np testing assert equal s0 s1 np testing assert equal s0 base0 def test preprocess late del rng self base0 get datum s0 s1 get datum snap true preprocess late true del rng true np testing assert equal s0 s1 np testing assert equal s0 base0 def test preprocess early self base0 get datum s0 s1 get datum snap true preprocess early true np testing assert equal s0 s1 np testing assert equal s0 base0 def test preprocess early del rng self base0 get datum s0 s1 get datum snap true preprocess early true del rng true np testing assert equal s0 s1 np testing assert equal s0 base0 def test preprocess no repeat self preprocess early be equivalent to preprocess late here base0 get datum num repeat 1 s0 s1 get datum snap true preprocess early true num repeat 1 np testing assert equal s0 s1 np testing assert equal s0 base0 def test preprocess del rng no repeats self preprocess early be equivalent to preprocess late here base0 get datum num repeat 1 s0 s1 get datum snap 
true preprocess early true num repeat 1 del rng true np testing assert equal s0 s1 np testing assert equal s0 base0 if name main tf test main other info log fail test output txt error test preprocess early del rng main snapshott snapshott test preprocess early del rng traceback most recent call last file home jackd anaconda3 envs tf nightly lib python3 7 site package tensorflow python eager context py line 2113 in execution mode yield file home jackd anaconda3 envs tf nightly lib python3 7 site package tensorflow python data op iterator op py line 733 in next internal output shape self flat output shape file home jackd anaconda3 envs tf nightly lib python3 7 site package tensorflow python ops gen dataset op py line 2579 in iterator get next op raise from not ok status e name file home jackd anaconda3 envs tf nightly lib python3 7 site package tensorflow python framework op py line 6862 in raise from not ok status six raise from core status to exception e code message none file line 3 in raise from tensorflow python framework error impl notfounderror resource localhost anonymousvar6 n10tensorflow3vare do not exist node stateful uniform statefuluniform op iteratorgetnext during handling of the above exception another exception occur traceback most recent call last file foob py line 107 in test preprocess early del rng s0 s1 get datum snap true preprocess early true del rng true file foob py line 67 in get datum return as numpy dataset as numpy dataset file foob py line 9 in as numpy return np array x numpy for x in ds file foob py line 9 in return np array x numpy for x in ds file home jackd anaconda3 envs tf nightly lib python3 7 site package tensorflow python data op iterator op py line 747 in next return self next internal file home jackd anaconda3 envs tf nightly lib python3 7 site package tensorflow python data op iterator op py line 739 in next internal return structure from compatible tensor list self element spec ret file home jackd anaconda3 envs tf nightly 
lib python3 7 contextlib py line 130 in exit self gen throw type value traceback file home jackd anaconda3 envs tf nightly lib python3 7 site package tensorflow python eager context py line 2116 in execution mode executor new wait file home jackd anaconda3 envs tf nightly lib python3 7 site package tensorflow python eager executor py line 69 in wait pywrap tfe tfe executorwaitforallpendingnode self handle tensorflow python framework error impl notfounderror resource localhost anonymousvar6 n10tensorflow3vare do not exist node stateful uniform statefuluniform fail test preprocess early main snapshott snapshott test preprocess early traceback most recent call last file foob py line 103 in test preprocess early np testing assert equal s0 base0 file home jackd anaconda3 envs tf nightly lib python3 7 site package numpy testing private util py line 342 in assert equal return assert array equal actual desire err msg verbose file home jackd anaconda3 envs tf nightly lib python3 7 site package numpy testing private util py line 931 in assert array equal verbose verbose header array be not equal file home jackd anaconda3 envs tf nightly lib python3 7 site package numpy testing private util py line 840 in assert array compare raise assertionerror msg assertionerror array be not equal mismatch element 20 20 100 max absolute difference 0 90819454 max relative difference 1 9366292 x array 0 91562 1 45509 2 253555 3 829305 4 681193 5 65526 6 401854 7 514806 8 184864 9 174181 0 130606 1 063369 2 513922 3 190604 4 433053 5 044663 6 653943 7 007094 8 878403 9 046815 dtype float32 y array 0 311793 1 18098 2 761353 3 138052 4 027518 5 460741 6 235661 7 175892 8 786037 9 549028 0 860469 1 631952 2 669349 3 255722 4 884421 5 066545 6 267429 7 34992 8 16538 9 955009 dtype float32 run 10 test in 0 849 fail failure 1 error 1 skip 1 |
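Conceptually, `snapshot` materializes a pipeline's output to disk so later runs reread the files instead of recomputing (which is why a deleted RNG should not matter once the snapshot exists). A minimal stdlib sketch of that disk-cache idea — an illustration of the concept, not of tf.data's actual file format or API:

```python
import os
import pickle
import tempfile

def snapshot(make_data, path):
    # first run: compute the data and write it to disk;
    # later runs: read it back instead of recomputing
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    data = list(make_data())
    with open(path, "wb") as f:
        pickle.dump(data, f)
    return data

calls = {"n": 0}
def noisy_range():
    calls["n"] += 1          # stands in for the stateful random map step
    yield from range(5)

path = os.path.join(tempfile.mkdtemp(), "snap.pkl")
first = snapshot(noisy_range, path)
second = snapshot(noisy_range, path)
print(first == second)       # True — second read came from disk
print(calls["n"])            # 1 — the generator ran only once
```

The bug report above is precisely that the flat_map'd pipeline behaves as if the second call recomputed instead of rereading.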
tensorflowtensorflow | tflite interpreter fail to load a save tflite model when dropout be use | Bug | system information os platform and distribution e g linux ubuntu 16 04 linux ubuntu 16 04 tensorflow instal from source or binary binary pip tensorflow version or github sha if from source v 2 3 0 provide the text output from tflite convert valueerror traceback most recent call last in 1 interpreter tf lite interpreter model path tflite file 2 translate tflite este o primeiro livro que eu fiz tokenizer pt tokenizer en interpreter max length pyenv version 3 7 2 lib python3 7 site package tensorflow lite python interpreter py in init self model path model content experimental delegate num thread 196 self interpreter 197 interpreter wrapper createwrapperfromfile 198 model path self custom op registerer 199 if not self interpreter 200 raise valueerror fail to open format model path valueerror do not get operator tensor or buffer in subgraph 1 standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook it be a transformer base machine translation the code be the one implement in tensorflow tutorial page the code for save and load the tflite model be as follow load tflite model and allocate tensor def evaluate tflite inp sentence tokenizer pt tokenizer en interpreter max length start token tokenizer pt vocab size end token tokenizer pt vocab size 1 todo languague change check for erro inp sentence be lev hence add the start and end token inp sentence start token tokenizer pt encode inp sentence end token encoder input tf expand dim inp sentence 0 as the target be english the first word to the transformer should be the english start token decoder input tokenizer en vocab size output tf expand dim decoder input 0 for I in range max length enc padding mask combine mask dec padding mask create mask encoder input output use interpreter for 
inference print input detail interpreter tf lite interpreter model path tflite file input detail interpreter get input detail output detail interpreter get output detail interpreter resize tensor input input detail 0 index encoder input shape interpreter resize tensor input input detail 1 index output shape interpreter resize tensor input input detail 3 index enc padding mask shape interpreter resize tensor input input detail 4 index combine mask shape interpreter resize tensor input input detail 5 index dec padding mask shape interpreter allocate tensor input detail interpreter get input detail output detail interpreter get output detail interpreter set tensor input detail 0 index tf cast encoder input tf float32 interpreter set tensor input detail 1 index tf cast output tf float32 interpreter set tensor input detail 2 index false interpreter set tensor input detail 3 index enc padding mask interpreter set tensor input detail 4 index combine mask interpreter set tensor input detail 5 index dec padding mask interpreter invoke print inference work print d for d in output detail the function get tensor return a copy of the tensor datum use tensor in order to get a pointer to the tensor prediction interpreter get tensor output detail 0 index attention weight interpreter get tensor output detail 1 index select the last word from the seq len dimension prediction prediction 1 batch size 1 vocab size predict i d tf cast tf argmax prediction axis 1 tf int32 return the result if the predict i d be equal to the end token if predict i d tokenizer en vocab size 1 return tf squeeze output axis 0 attention weight concatentate the predicted i d to the output which be give to the decoder as its input output tf concat output predict i d axis 1 return tf squeeze output axis 0 attention weight def translate tflite sentence tokenizer pt tokenizer en interpreter max length plot false result attention weight evaluate tflite sentence tokenizer pt tokenizer en interpreter max length 
predict sentence tokenizer en decode I for I in result if I tokenizer en vocab size print input format sentence print predict translation format predict sentence if plot plot attention weight attention weight sentence result plot return predict sentence interpreter tf lite interpreter model path tflite file translate tflite este o primeiro livro que eu fiz tokenizer pt tokenizer en interpreter max length also please include a link to a graphdef or the model if possible any other info log when I remove dropout layer from the interpreter work well and load and interpret the tflite model well do you have any idea how I can incorporate dropout |
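On the reporter's observation that removing Dropout makes the converted model load: at inference time dropout is the identity, so stripping it before conversion should not change predictions. A minimal sketch of inverted dropout illustrating that (plain Python, not the TFLite converter's handling):

```python
import random

def dropout(xs, rate, training):
    # inverted dropout: during training, zero each activation with
    # probability `rate` and scale survivors by 1/(1-rate);
    # at inference it is the identity
    if not training:
        return list(xs)
    keep = 1.0 - rate
    return [x / keep if random.random() < keep else 0.0 for x in xs]

x = [1.0, 2.0, 3.0]
print(dropout(x, 0.5, training=False))   # [1.0, 2.0, 3.0] — unchanged at inference
```

So a practical workaround (my suggestion, not from the report) is to build an inference-only copy of the model without Dropout layers, load the trained weights into it, and convert that.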
tensorflowtensorflow | a description problem in tf.keras.layers.MaxPooling2D | Bug | URL(s) with the issue: the tf.keras.layers.MaxPooling2D API docs. Description of issue (what needs changing): at the end of the second paragraph you give a formula to compute the size of the output without padding: `output_shape = (input_shape - pool_size + 1) / strides`. However, there is something wrong with it. The correct one should be: `output_shape = (input_shape - pool_size) / strides + 1`. Thank you.
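The corrected formula the report asks for can be checked numerically. A small sketch assuming "valid" (no-padding) pooling, with the floor added by me for the case where the stride does not divide evenly:

```python
import math

def pool_output_size(input_size, pool_size, stride):
    # valid (no padding) pooling: output = (input - pool_size) / stride + 1
    return math.floor((input_size - pool_size) / stride) + 1

print(pool_output_size(28, 2, 2))   # 14 — matches MaxPooling2D(2, 2) on a 28-wide input
print(pool_output_size(5, 3, 1))    # 3
```

The formula the docs currently show, `(input - pool + 1) / strides`, would give 13.5 for the first case, which is clearly wrong.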
tensorflowtensorflow | multi output custom loss model crash valueerror the truth value of an array with more than one element be ambiguous use a any or a all error occur when finalize generatordataset iterator fail precondition python interpreter state be not initialize the process may be terminate node pyfunc | Bug | please make sure that this be a bug as per our github policy we only address code doc bug performance issue feature request and build installation issue on github tag bug template system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 docker container in centos linux release 7 8 2003 core mobile device e g iphone 8 pixel 2 samsung galaxy if the issue happen on mobile device n a tensorflow instal from source or binary docker container tensorflow tensorflow late gpu tensorflow version use command below 2 3 1 python version python 3 6 9 bazel version if compile from source n a gcc compiler version if compile from source n a cuda cudnn version 10 1 gpu model and memory 2x nvidia tesla v100 32510mib 34 gb of memory each you can collect some of this information use our environment capture script you can also obtain the tensorflow version with 1 tf 1 0 python c import tensorflow as tf print tf git version tf version 2 tf 2 0 python c import tensorflow as tf print tf version git version tf version version describe the current behavior I m implement a source separation 2 output model and it s make with the functional api I m run a custom loss function which use 2 output target and prediction 4 in total for my loss function to work my model be wrap within a subclasse model it crash during training describe the expect behavior I expect it not to crash during train standalone code to reproduce the issue provide a reproducible test case that be the bare minimum necessary to generate the problem if possible please share a link to colab jupyter any notebook 
from scipy io import wavfile import scipy signal as sg import matplotlib pyplot as plt import tensorflow as tf from tensorflow keras layers import input simplernn dense lambda timedistribute layer lstm bidirectional batchnormalization concatenate from tensorflow keras model import model from tensorflow keras activation import relu from tensorflow keras callbacks import earlystopping import numpy as np import datetime import numpy as np import math import random import json import os import sys loss function def discriminative loss piano true noise true piano pre noise pre loss const last dim piano pre shape 1 piano pre shape 2 return tf math reduce mean tf reshape noise pre noise true shape 1 last dim 2 axis 1 loss const tf math reduce mean tf reshape noise pre piano true shape 1 last dim 2 axis 1 tf math reduce mean tf reshape piano pre piano true shape 1 last dim 2 axis 1 loss const tf math reduce mean tf reshape piano pre noise true shape 1 last dim 2 axis 1 def make model feature sequence name model input layer input shape sequence feature dtype float32 name piano noise mix piano true input shape sequence feature dtype float32 name piano true noise true input shape sequence feature dtype float32 name noise true x simplernn feature 2 activation relu return sequence true input layer piano pre timedistribute dense feature name piano hat x source 1 branch noise pre timedistribute dense feature name noise hat x source 2 branch model model input input layer piano true noise true output piano pre noise pre return model model wrapper for many input loss function class restorationmodel2 model def init self model loss const super restorationmodel2 self init self model model self loss const loss const def call self input return self model input def compile self optimizer loss super restorationmodel2 self compile self optimizer optimizer self loss loss def train step self data unpack datum what generator yeild x piano true noise true datum with tf gradienttape as tape 
piano pre noise pre self model x piano true noise true training true loss self loss piano true noise true piano pre noise pre self loss const trainable var self model trainable variable gradient tape gradient loss trainable var self optimizer apply gradient zip gradient trainable var return loss loss def test step self datum x piano true noise true datum piano pre noise pre self model x piano true noise true training false loss self loss piano true noise true piano pre noise pre self loss const return loss loss def make imp model feature sequence loss const 0 05 optimizer tf keras optimizer rmsprop clipvalue 0 7 name restoration model epsilon 10 10 new semi imperative model model restorationmodel2 make model feature sequence name training model loss const loss const model compile optimizer optimizer loss discriminative loss return model model train eval function def evaluate source sep train generator validation generator num train num val n feat n seq batch size loss const epoch 20 optimizer tf keras optimizer rmsprop clipvalue 0 75 patience 10 epsilon 10 10 print make model imperative model customize fit model make imp model n feat n seq loss const loss const optimizer optimizer epsilon epsilon print go into training now hist model fit train generator step per epoch math ceil num train batch size epoch epoch validation datum validation generator validation step math ceil num val batch size callback earlystoppe val loss patience patience mode min print model summary neural network data generator def my dummy generator num sample batch size train seq train feat while true for offset in range 0 num sample batch size initialise x y1 and y2 array for this batch x y1 y2 np empty batch size train seq train feat np empty batch size train seq train feat np empty batch size train seq train feat yield x y1 y2 def main epsilon 10 10 train batch size 5 loss const epoch val split 0 05 10 0 25 optimizer tf keras optimizer rmsprop clipvalue 0 9 train seq len train feat len 1847 
2049 total smpls 60 validation training split index list range total smpls val index indice math ceil total smpls val split num val len val indice num train total smpls num val train seq train feat train seq len train feat len print train input stat print n feat train feat seq len train seq batch size train batch size create data generator and evaluate model with they train generator my dummy generator num train batch size train batch size train seq train seq train feat train feat validation generator my dummy generator num val batch size train batch size train seq train seq train feat train feat evaluate source sep train generator validation generator num train num val n feat train feat n seq train seq batch size train batch size loss const loss const epochs epoch optimizer optimizer epsilon epsilon if name main main other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach matplotlib create a temporary config cache directory at tmp matplotlib w351htm7 because the default path config matplotlib be not a writable directory it be highly recommend to set the mplconfigdir environment variable to a writable directory in particular to speed up the import of matplotlib and to well support multiprocesse 2020 10 20 20 59 48 073656 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudart so 10 1 train input stat n feat 2049 seq len 1847 batch size 5 make model 2020 10 20 20 59 49 685893 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcuda so 1 2020 10 20 20 59 51 341091 I tensorflow core common runtime gpu gpu device cc 1716 find device 0 with property pcibusid 0000 27 00 0 name tesla v100s pcie 32 gb computecapability 7 0 coreclock 1 597ghz corecount 80 devicememorysize 31 75gib devicememorybandwidth 1 03tib s 2020 10 20 20 59 51 343325 I tensorflow 
core common runtime gpu gpu device cc 1716 find device 1 with property pcibusid 0000 83 00 0 name tesla v100s pcie 32 gb computecapability 7 0 coreclock 1 597ghz corecount 80 devicememorysize 31 75gib devicememorybandwidth 1 03tib s 2020 10 20 20 59 51 343415 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudart so 10 1 2020 10 20 20 59 51 346449 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcubla so 10 2020 10 20 20 59 51 349214 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcufft so 10 2020 10 20 20 59 51 349659 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcurand so 10 2020 10 20 20 59 51 352344 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcusolver so 10 2020 10 20 20 59 51 353860 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcusparse so 10 2020 10 20 20 59 51 359411 I tensorflow stream executor platform default dso loader cc 48 successfully open dynamic library libcudnn so 7 2020 10 20 20 59 51 367984 I tensorflow core common runtime gpu gpu device cc 1858 add visible gpu device 0 1 2020 10 20 20 59 51 368576 I tensorflow core platform cpu feature guard cc 142 this tensorflow binary be optimize with oneapi deep neural network library onednn to use the follow cpu instruction in performance critical operation avx2 fma to enable they in other operation rebuild tensorflow with the appropriate compiler flag 2020 10 20 20 59 51 405603 I tensorflow core platform profile util cpu util cc 104 cpu frequency 2245615000 hz 2020 10 20 20 59 51 435047 I tensorflow compiler xla service service cc 168 xla service 0x4c0cdc0 initialize for platform host this do not guarantee that xla will be use device 2020 10 20 20 59 51 435197 I tensorflow compiler xla service 
service.cc:176] StreamExecutor device (0): Host, Default Version
2020-10-20 20:59:51.659719: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x44a7910 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-10-20 20:59:51.659822: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla V100S-PCIE-32GB, Compute Capability 7.0
2020-10-20 20:59:51.659849: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): Tesla V100S-PCIE-32GB, Compute Capability 7.0
2020-10-20 20:59:51.665940: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:27:00.0 name: Tesla V100S-PCIE-32GB computeCapability: 7.0 coreClock: 1.597GHz coreCount: 80 deviceMemorySize: 31.75GiB deviceMemoryBandwidth: 1.03TiB/s
2020-10-20 20:59:51.668089: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 1 with properties: pciBusID: 0000:83:00.0 name: Tesla V100S-PCIE-32GB computeCapability: 7.0 coreClock: 1.597GHz coreCount: 80 deviceMemorySize: 31.75GiB deviceMemoryBandwidth: 1.03TiB/s
2020-10-20 20:59:51.668184: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-20 20:59:51.668387: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-20 20:59:51.668620: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-20 20:59:51.668671: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-20 20:59:51.668786: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-20 20:59:51.668836: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-20 20:59:51.668883: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-20 20:59:51.677143: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0, 1
2020-10-20 20:59:51.677300: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-20 20:59:52.875191: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-20 20:59:52.875315: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0 1
2020-10-20 20:59:52.875346: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N Y
2020-10-20 20:59:52.875557: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 1:   Y N
2020-10-20 20:59:52.882402: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30132 MB memory) -> physical GPU (device: 0, name: Tesla V100S-PCIE-32GB, pci bus id: 0000:27:00.0, compute capability: 7.0)
2020-10-20 20:59:52.885327: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 30132 MB memory) -> physical GPU (device: 1, name: Tesla V100S-PCIE-32GB, pci bus id: 0000:83:00.0, compute capability: 7.0)
Going into training now
2020-10-20 20:59:53.516824: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
Epoch 1/10
9/9 - ETA: 0s - loss: 0.0000e+00
Traceback (most recent call last):
  File "dlnn_brahms_restore_clean.py", line 212, in main
  File "dlnn_brahms_restore_clean.py", line 209, in main
    optimizer=optimizer, epsilon=epsilon)
  File "dlnn_brahms_restore_clean.py", line 158, in evaluate_source_sep
    callbacks=[EarlyStopping('val_loss', patience=patience, mode='min')]
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1137, in fit
    callbacks.on_epoch_end(epoch, epoch_logs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py", line 416, in on_epoch_end
    callback.on_epoch_end(epoch, numpy_logs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py", line 1664, in on_epoch_end
    if self.monitor_op(current - self.min_delta, self.best):
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
2020-10-20 21:00:05.825863: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated. [[{{node PyFunc}}]]
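The ValueError in this traceback comes from EarlyStopping evaluating the monitored value in a boolean context (`if self.monitor_op(current - self.min_delta, self.best):`); when the logged `val_loss` arrives as an array rather than a scalar, NumPy refuses to coerce the elementwise comparison to a single bool. A stdlib-only stand-in (the class below is illustrative, not TF's actual code) reproduces the mechanism:

```python
# Stdlib-only stand-in for numpy's behavior (illustrative; not TF's classes).
# EarlyStopping effectively runs:
#     if self.monitor_op(current - self.min_delta, self.best): ...
# When `current` is an array, the comparison is elementwise and the `if`
# raises exactly this ValueError.
class ArrayLike:
    def __init__(self, values):
        self.values = values

    def __lt__(self, other):
        # elementwise comparison, like a numpy array
        return ArrayLike([v < other for v in self.values])

    def __bool__(self):
        # numpy arrays with more than one element refuse boolean coercion
        raise ValueError(
            "The truth value of an array with more than one element is "
            "ambiguous. Use a.any() or a.all()")

current = ArrayLike([0.3, 0.7])   # e.g. an unreduced, per-output val_loss
best = 0.5
try:
    if current < best:            # what the callback's monitor check does
        pass
except ValueError as e:
    print("reproduced:", e)
```

Reducing the monitored quantity to a scalar (for example, averaging a per-output loss before it is logged as val_loss) avoids the failure.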
tensorflowtensorflow | DeprecationWarning for model.state_updates and layer.updates when saving a model | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab (Ubuntu 18.04). TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): happens on 2.3.1 (v2.3.0-54-gfcc4b966f1) and 2.4.0-dev20201020 (v1.12.1-44160-g72c19e8880). Python version: 3.6.9. Describe the current behavior: getting two deprecation warnings when saving a model with default parameters:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:2334: UserWarning: `Model.state_updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as updates are applied automatically. warnings.warn('`Model.state_updates` will be removed in a future version. ...')
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:1397: UserWarning: `layer.updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as updates are applied automatically. warnings.warn('`layer.updates` will be removed in a future version. ...')
Describe the expected behavior: no warnings. Standalone code to reproduce the issue:
import tensorflow as tf
from tensorflow import keras
model = keras.models.Sequential([keras.layers.Dense(1, input_shape=(1,))])
model.save('my_model')
(Colab)
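Until the warnings are removed upstream, a stdlib-only way to keep them out of logs is to filter just those messages around the save call. This is a sketch: `save_quietly` is a hypothetical helper, and in real use `save_fn` would be `model.save`.

```python
# Stdlib-only interim workaround (a sketch; `save_quietly` is a hypothetical
# helper). It silences only the deprecation UserWarnings quoted above, and
# only for the duration of the save call.
import warnings

def save_quietly(save_fn, *args, **kwargs):
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=".*will be removed in a future version.*",
            category=UserWarning,
        )
        return save_fn(*args, **kwargs)
```

Any other warning category or message still surfaces normally, since the filter is scoped to the `with` block and matched against the message text.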
tensorflowtensorflow | Issue running test_hello_world_test on Mac | Bug | Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template. System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS Catalina. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A. TensorFlow installed from (source or binary): source. TensorFlow version: (master). Describe the problem: after cloning the TensorFlow git repo and cd-ing into tensorflow, I'm trying to run the hello world test on Mac and getting the following error following the gmake command:
tensorflow/lite/micro/tools/make/Makefile:413: warning: overriding recipe for target 'tensorflow/lite/micro/tools/make/downloads/person_model_int8'
tensorflow/lite/micro/tools/make/Makefile:413: warning: ignoring old recipe for target 'tensorflow/lite/micro/tools/make/downloads/person_model_int8'
g++ -std=c++11 -fno-rtti -fno-exceptions -fno-threadsafe-statics -fno-unwind-tables -ffunction-sections -fdata-sections -fmessage-length=0 -DTF_LITE_STATIC_MEMORY -DTF_LITE_DISABLE_X86_NEON -O3 -Werror -Wsign-compare -Wdouble-promotion -Wshadow -Wunused-variable -Wmissing-field-initializers -Wunused-function -Wswitch -Wvla -Wall -Wextra -Wstrict-aliasing -Wno-unused-parameter -I. -Itensorflow/lite/micro/tools/make/downloads -Itensorflow/lite/micro/tools/make/downloads/gemmlowp -Itensorflow/lite/micro/tools/make/downloads/flatbuffers/include -Itensorflow/lite/micro/tools/make/downloads/ruy -Itensorflow/lite/micro/tools/make/downloads/kissfft -o tensorflow/lite/micro/tools/make/gen/osx_x86_64/bin/hello_world_test tensorflow/lite/micro/tools/make/gen/osx_x86_64/obj/tensorflow/lite/micro/examples/hello_world/hello_world_test.o tensorflow/lite/micro/tools/make/gen/osx_x86_64/obj/tensorflow/lite/micro/examples/hello_world/model.o tensorflow/lite/micro/tools/make/gen/osx_x86_64/lib/libtensorflow-microlite.a -Wl,--fatal-warnings -Wl,--gc-sections -lm -framework Foundation -framework AudioToolbox
ld: unknown option: --fatal-warnings
clang: error: linker command failed with exit code 1 (use -v to see invocation)
gmake: *** [tensorflow/lite/micro/examples/hello_world/Makefile.inc:34: tensorflow/lite/micro/tools/make/gen/osx_x86_64/bin/hello_world_test] Error 1
Provide the exact sequence of commands/steps that you executed before running into the problem:
gmake -f tensorflow/lite/micro/tools/make/Makefile test_hello_world_test
Any other info/logs: gmake version GNU Make 4.3, built for x86_64-apple-darwin19.2.0. Used gmake, as my make version is 3.81.
tensorflowtensorflow | Reported by AutoGraph: could not transform; module 'gast' has no attribute 'Index' | Bug | System information: (source). OS platform: Arch Linux 5.8.14 x86_64. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.3.1-2. Python version: 3.8.6. Bazel version (if compiling from source): -. GCC/compiler version (if compiling from source): -. CUDA/cuDNN version: 11.1.0-2. GPU model and memory: GeForce GTX 1660 SUPER (computeCapability: 7.5, coreCount: 22, deviceMemorySize: 5.80GiB). Describe the current behavior:
WARNING:tensorflow:AutoGraph could not transform ... and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Index'
Describe the expected behavior: -. Standalone code to reproduce the issue: python transformer.py. Other info/logs: include any logs or source code that would be helpful; transformer.zip attached.
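A commonly reported cause (an assumption here, not confirmed in this issue) is a gast version mismatch: TF 2.3's AutoGraph targets the gast 0.3.x API, while gast 0.4 removed `gast.Index` to follow the Python 3.9 AST, and pinning `gast==0.3.3` usually clears the warning. Below is a small sketch of probing which API the installed gast exposes; `gast_has_legacy_index` is a hypothetical helper, and the SimpleNamespace objects only stand in for the two APIs.

```python
# Sketch of a compatibility probe (assumption: TF 2.3's AutoGraph expects the
# gast 0.3.x API; gast 0.4 dropped gast.Index). In real use, pass the
# imported gast module instead of the stand-ins below.
import types

def gast_has_legacy_index(gast_module):
    """True if this gast exposes the pre-0.4 API (with gast.Index)."""
    return hasattr(gast_module, "Index")

legacy = types.SimpleNamespace(Index=object)  # mimics gast <= 0.3.3
modern = types.SimpleNamespace()              # mimics gast >= 0.4
print(gast_has_legacy_index(legacy), gast_has_legacy_index(modern))
```

If the probe returns False for the real module, `pip install "gast==0.3.3"` is the commonly suggested fix for this warning on TF 2.3.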
tensorflowtensorflow | TensorFlow Lite with NVIDIA GPU on Ubuntu: core dump happens when creating delegate | Bug | Hi supporters, System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): master branch (2.3.1). Python version: 3.8. Bazel version (if compiling from source): 3.1.0. GCC/compiler version (if compiling from source): 7.5.0. CUDA/cuDNN version: 10.1. GPU model and memory: GeForce GTX 1050 Ti, 4GB. Describe the current behavior: I try to run inference on a TFLite model with the GPU. Everything is normal with TF using the CPU, but when creating the delegate a core dump happens immediately. Standalone code to reproduce the issue:
[#include directives elided in the original report]
int main() {
  std::unique_ptr<tflite::FlatBufferModel> model =
      tflite::FlatBufferModel::BuildFromFile("superpoint_640x480.tflite");
  if (!model) {
    std::cerr << "Failed to mmap tflite model" << std::endl;
    return 1;
  }
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  if (tflite::InterpreterBuilder(*model.get(), resolver)(&interpreter) != kTfLiteOk) {
    std::cerr << "Failed to interpret tflite model" << std::endl;
    return 1;
  }
  auto* delegate = TfLiteGpuDelegateCreate(nullptr);
  if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    std::cerr << "Failed to enable GPU" << std::endl;
    return 1;
  }
  // NEW: Prepare custom options with features enabled.
  TfLiteGpuDelegateOptionsV2 options = TfLiteGpuDelegateOptionsV2Default();
  options.experimental_flags = TFLITE_GPU_EXPERIMENTAL_FLAGS_NONE;
  auto* delegate2 = TfLiteGpuDelegateV2Create(&options);
  if (interpreter->ModifyGraphWithDelegate(delegate2) != kTfLiteOk) {
    std::cerr << "Failed to register delegate" << std::endl;
    return 1;
  }
  return 0;
}
TFLite model file: superpoint_640x480.zip. Other info/logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached: backtrace.txt. As far as I know, TFLite on GPU supports the PAD op, but I don't know why the core dump happens here. Have you seen this? Could you help me overcome this kind of issue? Many thanks in advance.
tensorflowtensorflow | No gradients provided for any variable when doing binarization | Bug | I have written a Lambda layer that converts an input variable in the range [0, 1] to either 0 or 1, i.e. the layer binarizes the input. I do this by comparing K.random_uniform(shape=K.shape(x)) against x. However, when wanting to train the model, I'm getting the error message that no gradients are provided for any variable. How do I have to change the code so that my idea works? TF version is 2.2.0.
import numpy as np
from keras import backend as K
from keras.layers import *
from keras.models import Model

def binarize(d):
    x1, x2 = d
    return x1 * K.cast(K.random_uniform(shape=K.shape(x2)) < x2, 'float32')

inp = Input((1,))
var = Dense(1, activation='sigmoid')(inp)
out = Lambda(binarize)([inp, var])
model = Model(inp, out)
model.compile(loss='mse', optimizer='sgd')

x = np.random.normal(size=(128, 1))
y = x * 0
model.train_on_batch(x, y)
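A common workaround for non-differentiable binarization, not part of the report itself, is the straight-through estimator: in Keras the Lambda would return `x + K.stop_gradient(binarized - x)`, so the forward value equals the binarized sample while the gradient flows through the identity term. A stdlib-only sketch of the forward arithmetic:

```python
# Straight-through estimator (STE) sketch, stdlib-only (an assumption about a
# possible fix, not code from the report). The forward pass returns the
# binarized value; since the detached term would be wrapped in stop_gradient,
# the backward pass sees only the identity term x, i.e. d(out)/dx = 1.
def binarize(x, threshold=0.5):
    # hard threshold: the non-differentiable step that kills the gradient
    return 1.0 if x >= threshold else 0.0

def ste_forward(x):
    detached = binarize(x) - x   # this term would be wrapped in stop_gradient
    return x + detached          # forward: binarize(x); backward: gradient 1

for x in (0.1, 0.49, 0.5, 0.9):
    assert abs(ste_forward(x) - binarize(x)) < 1e-9
```

The same identity trick applies to the stochastic variant in the issue: sample the Bernoulli value, then add it back through `stop_gradient` so training sees a usable gradient.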
tensorflowtensorflow | tf.distribute.MirroredStrategy using TF_CONFIG and non-eager execution assigns ops to nonexistent device names | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Google AI Platform. TensorFlow installed from (source or binary): binary. TensorFlow version: reproducible on 2.2 or 2.3. Python version: 3.7. CUDA/cuDNN version: unknown. GPU model and memory: 2x NVIDIA Tesla K80. When running the example code from "Distributed training with Keras" on AI Platform with multiple GPUs, eager execution disabled, and tf.keras MirroredStrategy used to utilize all GPUs, TensorFlow raises an InvalidArgumentError:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation conv2d/kernel/Initializer/random_uniform/RandomUniform: Could not satisfy explicit device specification because the node {{colocation_node conv2d/kernel/Initializer/random_uniform/RandomUniform}} was colocated with a group of nodes that required incompatible device /job:chief/replica:0/task:0/device:GPU:0. All available devices [/job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0, /job:localhost/replica:0/task:0/device:XLA_GPU:0, /job:localhost/replica:0/task:0/device:XLA_GPU:1, /job:localhost/replica:0/task:0/device:GPU:0, /job:localhost/replica:0/task:0/device:GPU:1].
Colocation Debug Info: Colocation group had the following types and supported devices:
Root Member (assigned_device_name_index_=-1 requested_device_name_='/job:chief/replica:0/task:0/device:GPU:0' assigned_device_name_='' resource_device_name_='/job:chief/replica:0/task:0/device:GPU:0' supported_device_types_=[GPU, CPU] possible_devices_=[])
AssignVariableOp: GPU CPU XLA_CPU XLA_GPU
RandomUniform: GPU CPU XLA_CPU XLA_GPU
VarIsInitializedOp: GPU CPU XLA_CPU XLA_GPU
Const: GPU CPU XLA_CPU XLA_GPU
Mul: GPU CPU XLA_CPU XLA_GPU
ReadVariableOp: GPU CPU XLA_CPU XLA_GPU
Sub: GPU CPU XLA_CPU XLA_GPU
VarHandleOp: GPU CPU XLA_CPU XLA_GPU
Add: GPU CPU XLA_CPU XLA_GPU
Oddly, the list of available devices includes the GPUs I want to use, but their names include job:localhost rather than job:chief. The logs used to launch the job show that only one worker is being used, and that it's being assigned the type "chief". Running task with arguments (JSON):
{"cluster": {"chief": ["127.0.0.1:2222"]}, "task": {"type": "chief", "index": 0}, "job": {"scale_tier": "CUSTOM", "master_type": "n1-standard-16", "package_uris": [<snip>], "python_module": "psobot.mirrored_strategy_test", "region": "europe-west1", "runtime_version": "2.2", "run_on_raw_vm": true, "python_version": "3.7", "master_config": {"accelerator_config": {"count": 2, "type": "NVIDIA_TESLA_K80"}}}}
but they also show that the device naming mismatch seems to happen early:
Created TensorFlow device (/device:GPU:0 with 10634 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
Created TensorFlow device (/device:GPU:1 with 10634 MB memory) -> physical GPU (device: 1, name: Tesla K80, pci bus id: 0000:00:05.0, compute capability: 3.7)
Some requested devices in tf.distribute.Strategy are not visible to TensorFlow: /job:chief/replica:0/task:0/device:GPU:0, /job:chief/replica:0/task:0/device:GPU:1
Using MirroredStrategy with devices ('/job:chief/replica:0/task:0/device:GPU:0', '/job:chief/replica:0/task:0/device:GPU:1')
Describe the expected behavior: TensorFlow should identify that job:chief and job:localhost refer to the same machine (the current machine) and should be able to place ops there, and it should be possible to train on multiple GPUs. Standalone code to reproduce the issue:
import tensorflow_datasets as tfds
import tensorflow as tf

# This is the only functional change from the example code:
tf.compat.v1.disable_eager_execution()

# Copied from the example code at <URL elided>
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
train_dataset = mnist_train.map(lambda im, l: (tf.cast(im, tf.float32) / 255, l)).batch(64)
with strategy.scope():
  model = tf.keras.Sequential([
      tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
      tf.keras.layers.MaxPooling2D(),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(64, activation='relu'),
      tf.keras.layers.Dense(10)
  ])
  model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                optimizer=tf.keras.optimizers.Adam(),
                metrics=['accuracy'])
model.fit(train_dataset, epochs=12)
Run the above code on AI Platform with the following bash:
gcloud ai-platform jobs submit training "psobot_mirrored_dummy_$(date +%s)" \
  --runtime-version 2.2 \
  --python-version 3.7 \
  --project "<your GCP project here>" \
  --region europe-west1 \
  --module-name "<your module name here>.mirrored_strategy_test" \
  --package-path "<your package name here>" \
  --scale-tier custom \
  --master-machine-type n1-standard-16 \
  --master-accelerator count=2,type=nvidia-tesla-k80 \
  --staging-bucket "<your staging bucket here>"
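One workaround often suggested for single-node multi-GPU jobs (an assumption, not from the issue): clear TF_CONFIG before constructing MirroredStrategy, so device names resolve to the default /job:localhost instead of /job:chief. A sketch, where `strip_tf_config_if_single_node` is a hypothetical helper that only removes TF_CONFIG when the cluster spec contains a single task:

```python
# Hypothetical workaround sketch (not from the issue). AI Platform's
# TF_CONFIG with one "chief" task makes MirroredStrategy request
# /job:chief/... device names while the runtime registers /job:localhost/...
# ones; on a single-node job the cluster spec carries no information anyway.
import json

def strip_tf_config_if_single_node(environ):
    cfg = environ.get("TF_CONFIG")
    if not cfg:
        return False
    cluster = json.loads(cfg).get("cluster", {})
    n_tasks = sum(len(addrs) for addrs in cluster.values())
    if n_tasks <= 1:            # only this machine in the cluster
        environ.pop("TF_CONFIG")
        return True
    return False
```

In practice this would be called with `os.environ` before `tf.distribute.MirroredStrategy()` is constructed; real multi-worker jobs (more than one task in the cluster spec) are left untouched.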
tensorflowtensorflow | Getting errors when converting a simple conv model to int8 TFLite | Bug | System information: OS platform and distribution: Ubuntu 18.04. TensorFlow installed from: binary. TensorFlow version: tf-nightly 2.4.0-dev20201015 and tf 2.3.0. TensorFlow Model Optimization version: 0.5.0. Command used to run the converter, or code if you're using the Python API: the fp32 TFLite model works, but the int8 TFLite model does not work with tf-nightly; please check the gist. tf-nightly: 1. cannot convert the quantization-aware model to an int8 TFLite model; 2. converter.inference_input_type = tf.int8 and converter.inference_output_type = tf.int8 do not work; 3. the fp32 TFLite model works. Item 2 was fixed in 2.3.0 (#36024), but there is still another error; please check the gist. tf 2.3.0: The output from the converter invocation: item 1: RuntimeError: tensorflow/lite/kernels/quantize.cc:113 affine_quantization->scale->size == 1 was not true. Node number 0 (QUANTIZE) failed to prepare. item 2: ValueError: The inference_input_type and inference_output_type must be tf.float32. Failure details: getting errors in TensorFlow. Thanks in advance.
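For context on the QUANTIZE error: full-integer conversion represents each tensor with an affine int8 mapping q = round(x / scale) + zero_point, and the failing check expects a single per-tensor scale. A sketch of that arithmetic (illustrative only; TFLite's exact rounding mode may differ from Python's `round`):

```python
# Illustrative sketch of per-tensor affine int8 quantization (an assumption:
# the standard TFLite scheme, q = round(x / scale) + zero_point clamped to
# [-128, 127]). One (scale, zero_point) pair covers the whole tensor, which
# is what the "affine_quantization->scale->size == 1" check enforces.
def quantize_int8(x, scale, zero_point):
    q = round(x / scale) + zero_point     # Python round: banker's rounding
    return max(-128, min(127, q))         # clamp to the int8 range

def dequantize_int8(q, scale, zero_point):
    return scale * (q - zero_point)
```

Round-tripping a value through the pair reconstructs it to within half a quantization step (scale / 2), which is the expected precision loss of the int8 model.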
tensorflowtensorflow | How do we know the compatibility of the flatbuffers version with schema_generated.h? | Bug | After building the latest flatbuffers release and combining it with the latest schema_generated.h, there's an error as below:
error 1: error LNK2001: unresolved external symbol "private: static class flatbuffers::ClassicLocale flatbuffers::ClassicLocale::instance_" (?instance_@ClassicLocale@flatbuffers@@0V12@A)
This error is most likely due to an incompatibility between the flatbuffers version on my system and the version used to generate schema_generated.h. Flatbuffers version used: 1.12.0.
tensorflowtensorflow | There is no such cuDNN version as defined on the TensorFlow website | Bug | [Screenshot (130)] [Screenshot (131)] OK, so as defined in the TensorFlow GPU documentation, TensorFlow 2.3 can have a CUDA version of 10.1 and a cuDNN version of 7.4, but there isn't any cuDNN v7.4 for CUDA 10.1 according to the NVIDIA cuDNN archive. So please update the version of cuDNN in your documentation for the specified CUDA and TensorFlow versions. URL(s) with the issue: link to the TensorFlow GPU section: /install/gpu
tensorflowtensorflow | cuDNN in invalid state after OOM | Bug | System information: Have I written custom code: yes. OS platform and distribution: Linux Ubuntu 18.04. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v1.13.1-0-g6612da8951. Python version: 3.6.9. CUDA/cuDNN version: CUDA 10.0.130-1, cuDNN 7.6.5. GPU model and memory: GeForce RTX 2070 SUPER, 8GiB. Describe the current behavior: after getting a ResourceExhaustedError during an operation using cuDNN (for instance due to out-of-memory), it seems like cuDNN is left in a broken state, and further calls using cuDNN fail even if they don't exceed the resources. Describe the expected behavior: I would expect further calls to cuDNN to be OK. This is important since it seems that the only way to know if a computation will use too many resources (for instance when determining the optimal batch size) is to actually try and fail. Standalone code to reproduce the issue:
import tensorflow as tf
import numpy as np
from tensorflow.python.framework.errors_impl import ResourceExhaustedError

x = tf.placeholder(tf.float32, shape=(None, None, None, None, 32))
fil = tf.zeros((3, 3, 3, 32, 32), dtype=tf.float32)
conv = tf.nn.conv3d(x, filter=fil, strides=(1, 1, 1, 1, 1), padding='VALID', name=None)

with tf.Session() as sess:
    try:
        c = np.zeros((1, 370, 370, 370, 32), np.float32)
        r1 = sess.run(conv, feed_dict={x: c})
    except ResourceExhaustedError:
        print('resources error')

with tf.Session() as sess:
    c = np.zeros((1, 100, 100, 100, 32), np.float32)
    sess.run(conv, feed_dict={x: c})
Other info/logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached:
/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:
527 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np quint8 np dtype quint8 np uint8 1 home dbergh virtualenvs boneseg lib python3 6 site package tensorflow python framework dtype py 528 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np qint16 np dtype qint16 np int16 1 home dbergh virtualenvs boneseg lib python3 6 site package tensorflow python framework dtype py 529 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np quint16 np dtype quint16 np uint16 1 home dbergh virtualenvs boneseg lib python3 6 site package tensorflow python framework dtype py 530 futurewarne pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np qint32 np dtype qint32 np int32 1 home dbergh virtualenvs boneseg lib python3 6 site package tensorflow python framework dtype py 535 futurewarning pass type 1 or 1type as a synonym of type be deprecate in a future version of numpy it will be understand as type 1 1 type np resource np dtype resource np ubyte 1 2020 10 15 13 41 46 910748 I tensorflow core platform cpu feature guard cc 141 your cpu support instruction that this tensorflow binary be not compile to use avx2 fma 2020 10 15 13 41 47 076574 I tensorflow stream executor cuda cuda gpu executor cc 998 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 10 15 13 41 47 077067 I tensorflow compiler xla service service cc 150 xla service 0x1414860 execute computation on platform cuda device 2020 10 15 13 41 47 077083 I tensorflow compiler xla service service cc 158 streamexecutor device 0 geforce rtx 2070 super compute capability 7 5 2020 10 15 13 41 47 096784 I tensorflow core platform 
profile util cpu util cc 94 cpu frequency 3498230000 hz 2020 10 15 13 41 47 097210 I tensorflow compiler xla service service cc 150 xla service 0x158f0a0 execute computation on platform host device 2020 10 15 13 41 47 097229 I tensorflow compiler xla service service cc 158 streamexecutor device 0 2020 10 15 13 41 47 097351 I tensorflow core common runtime gpu gpu device cc 1433 find device 0 with property name geforce rtx 2070 super major 7 minor 5 memoryclockrate ghz 1 815 pcibusid 0000 01 00 0 totalmemory 7 79gib freememory 7 31gib 2020 10 15 13 41 47 097368 I tensorflow core common runtime gpu gpu device cc 1512 add visible gpu device 0 2020 10 15 13 41 47 098194 I tensorflow core common runtime gpu gpu device cc 984 device interconnect streamexecutor with strength 1 edge matrix 2020 10 15 13 41 47 098208 I tensorflow core common runtime gpu gpu device cc 990 0 2020 10 15 13 41 47 098215 I tensorflow core common runtime gpu gpu device cc 1003 0 n 2020 10 15 13 41 47 098286 I tensorflow core common runtime gpu gpu device cc 1115 create tensorflow device job localhost replica 0 task 0 device gpu 0 with 7110 mb memory physical gpu device 0 name geforce rtx 2070 super pci bus i d 0000 01 00 0 compute capability 7 5 2020 10 15 13 41 47 100978 w tensorflow core framework allocator cc 124 allocation of 6483584000 exceed 10 of system memory 2020 10 15 13 42 01 641512 w tensorflow core common runtime bfc allocator cc 267 allocator gpu 0 bfc run out of memory try to allocate 5 94gib current allocation summary follow 2020 10 15 13 42 01 641594 I tensorflow core common runtime bfc allocator cc 597 bin 256 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641616 I tensorflow core common runtime bfc allocator cc 597 bin 512 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641638 I tensorflow core common runtime bfc allocator cc 597 bin 
1024 total chunk 1 chunk in use 1 1 2kib allocate for chunk 1 2kib in use in bin 1 0kib client request in use in bin 2020 10 15 13 42 01 641656 I tensorflow core common runtime bfc allocator cc 597 bin 2048 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641672 I tensorflow core common runtime bfc allocator cc 597 bin 4096 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641688 I tensorflow core common runtime bfc allocator cc 597 bin 8192 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641704 I tensorflow core common runtime bfc allocator cc 597 bin 16384 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641720 I tensorflow core common runtime bfc allocator cc 597 bin 32768 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641742 I tensorflow core common runtime bfc allocator cc 597 bin 65536 total chunk 1 chunk in use 1 108 0kib allocate for chunk 108 0kib in use in bin 108 0kib client request in use in bin 2020 10 15 13 42 01 641759 I tensorflow core common runtime bfc allocator cc 597 bin 131072 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641775 I tensorflow core common runtime bfc allocator cc 597 bin 262144 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641791 I tensorflow core common runtime bfc allocator cc 597 bin 524288 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641807 I tensorflow core common runtime bfc allocator cc 597 bin 1048576 total chunk 0 chunk in use 0 0b allocate for chunk 
0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641823 I tensorflow core common runtime bfc allocator cc 597 bin 2097152 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641839 I tensorflow core common runtime bfc allocator cc 597 bin 4194304 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641856 I tensorflow core common runtime bfc allocator cc 597 bin 8388608 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641872 I tensorflow core common runtime bfc allocator cc 597 bin 16777216 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641888 I tensorflow core common runtime bfc allocator cc 597 bin 33554432 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641904 I tensorflow core common runtime bfc allocator cc 597 bin 67108864 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641924 I tensorflow core common runtime bfc allocator cc 597 bin 134217728 total chunk 0 chunk in use 0 0b allocate for chunk 0b in use in bin 0b client request in use in bin 2020 10 15 13 42 01 641942 I tensorflow core common runtime bfc allocator cc 597 bin 268435456 total chunk 2 chunk in use 1 6 94gib allocate for chunk 6 04gib in use in bin 6 04gib client request in use in bin 2020 10 15 13 42 01 641960 I tensorflow core common runtime bfc allocator cc 613 bin for 5 94gib be 256 00mib chunk state 2020 10 15 13 42 01 641986 I tensorflow core common runtime bfc allocator cc 619 size 926 76mib request size 0b in use 0 prev size 6 04gib request size 6 04gib in use 1 2020 10 15 13 42 01 642004 I tensorflow core common runtime bfc allocator cc 632 chunk at 
0x7f86a6000000 of size 110592 2020 10 15 13 42 01 642017 I tensorflow core common runtime bfc allocator cc 632 chunk at 0x7f86a601b000 of size 1280 2020 10 15 13 42 01 642031 I tensorflow core common runtime bfc allocator cc 632 chunk at 0x7f86a601b500 of size 6483584000 2020 10 15 13 42 01 642043 I tensorflow core common runtime bfc allocator cc 632 free at 0x7f8828755900 of size 971780864 2020 10 15 13 42 01 642055 I tensorflow core common runtime bfc allocator cc 638 summary of in use chunk by size 2020 10 15 13 42 01 642071 I tensorflow core common runtime bfc allocator cc 641 1 chunk of size 1280 totalling 1 2kib 2020 10 15 13 42 01 642087 I tensorflow core common runtime bfc allocator cc 641 1 chunk of size 110592 total 108 0kib 2020 10 15 13 42 01 642101 I tensorflow core common runtime bfc allocator cc 641 1 chunk of size 6483584000 total 6 04gib 2020 10 15 13 42 01 642116 I tensorflow core common runtime bfc allocator cc 645 sum total of in use chunk 6 04gib 2020 10 15 13 42 01 642136 I tensorflow core common runtime bfc allocator cc 647 stat limit 7455476941 inuse 6483695872 maxinuse 6483695872 numalloc 3 maxallocsize 6483584000 2020 10 15 13 42 01 642153 w tensorflow core common runtime bfc allocator cc 271 2020 10 15 13 42 01 642223 w tensorflow core framework op kernel cc 1401 op require fail at conv op 3d cc 161 resource exhaust oom when allocate tensor with shape 1 368 368 368 32 and type float on job localhost replica 0 task 0 device gpu 0 by allocator gpu 0 bfc resource error 2020 10 15 13 42 01 801362 I tensorflow core common runtime gpu gpu device cc 1512 add visible gpu device 0 2020 10 15 13 42 01 801391 I tensorflow core common runtime gpu gpu device cc 984 device interconnect streamexecutor with strength 1 edge matrix 2020 10 15 13 42 01 801396 I tensorflow core common runtime gpu gpu device cc 990 0 2020 10 15 13 42 01 801400 I tensorflow core common runtime gpu gpu device cc 1003 0 n 2020 10 15 13 42 01 801462 I tensorflow core common 
runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7110 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070 SUPER, pci bus id: 0000:01:00.0, compute capability: 7.5)
2020-10-15 13:42:02.948064: E tensorflow/stream_executor/cuda/cuda_dnn.cc:334] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-10-15 13:42:02.967863: E tensorflow/stream_executor/cuda/cuda_dnn.cc:334] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
Traceback (most recent call last):
  File "/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node Conv3D]] [[node Conv3D]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/dbergh/.PyCharm2019.3/config/scratches/scratch_59.py", line 19, in <module>
    sess.run(conv, feed_dict={x: c})
  File "/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node Conv3D (defined at /home/dbergh/.PyCharm2019.3/config/scratches/scratch_59.py:7)]] [[node Conv3D (defined at /home/dbergh/.PyCharm2019.3/config/scratches/scratch_59.py:7)]]

Caused by op 'Conv3D', defined at:
  File "/home/dbergh/.PyCharm2019.3/config/scratches/scratch_59.py", line 7, in <module>
    conv = tf.nn.conv3d(x, filter=fil, strides=(1, 1, 1, 1, 1), padding='VALID', name=None)
  File "/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1440, in conv3d
    dilations=dilations, name=name)
  File "/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
    op_def=op_def)
  File "/home/dbergh/virtualenvs/boneseg/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()
UnknownError (see above for traceback): Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node Conv3D (defined at /home/dbergh/.PyCharm2019.3/config/scratches/scratch_59.py:7)]] [[node Conv3D (defined at /home/dbergh/.PyCharm2019.3/config/scratches/scratch_59.py:7)]]

Process finished with exit code 1
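Given that an OOM can leave cuDNN unusable for the rest of the process, probing batch sizes by try-and-fail is safer when each trial runs in a fresh process (e.g. via `subprocess`), so a failure cannot poison the parent's CUDA context. A stdlib sketch of the search loop; `find_max_batch` and `fits` are hypothetical names, not part of the report:

```python
# Hypothetical batch-size probe (not from the report). `fits(b)` returns True
# if batch size b ran without OOM, and is expected to launch each trial in a
# fresh process so a failed trial cannot break cuDNN in the caller. Assumes
# monotonicity: once a size fails, all larger sizes fail too.
def find_max_batch(candidates, fits):
    best = None
    for b in sorted(candidates):
        if not fits(b):
            break
        best = b
    return best
```

In practice `fits` could run `python trial.py --batch B` under `subprocess.run` and report success via the exit code; only the child ever hits the broken-cuDNN state.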
tensorflowtensorflow | TinyML book hello_world example not running on Arduino Nano 33 BLE | Bug | TensorFlow Micro system information. Host OS platform and distribution (e.g. Linux Ubuntu 16.04): macOS Catalina 10.15.5. TensorFlow installed from (source or binary): binary. TensorFlow version (commit SHA if source): 2.1.0-alpha precompiled. Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): Arduino Nano 33. Describe the problem: after following the book and the instructions in the video screencast, I was able to compile the program and apparently load it onto the board, but nothing seems to happen. When I then try to inspect the serial port logger, I am always faced with "Board at /dev/cu.usbmodem14301 is not available" and am forced to reset the board and start again, but the same problem arises. Please provide the exact sequence of commands/steps when you ran into the problem (for details see below). I face the same situation both when I only use the code provided in the examples folder and when I modify it for my own model following the instructions in the book. 1. Open the example Arduino_TensorFlowLite > hello_world and upload. This gives the following output:

Library Arduino_TensorFlowLite has been declared precompiled:
Using precompiled library in /Users/tallamjr/Documents/Arduino/libraries/Arduino_TensorFlowLite/src/cortex-m4/fpv4-sp-d16-softfp
Sketch uses 231536 bytes (23%) of program storage space. Maximum is 983040 bytes.
Global variables use 58272 bytes (22%) of dynamic memory, leaving 203872 bytes for local variables. Maximum is 262144 bytes.
Device       : nRF52840-QIAA
Version      : Arduino Bootloader (SAM-BA extended) 2.0 [Arduino:IKXYZ]
Address      : 0x0
Pages        : 256
Page Size    : 4096 bytes
Total Size   : 1024KB
Planes       : 1
Lock Regions : 0
Locked       : none
Security     : false
Erase flash: done in 0.001 seconds
Write 231544 bytes to flash (57 pages): [==========] 100% (57/57 pages) done in 9.084 seconds

This immediately seems fine, but I noticed that the orange LED goes from slowly blinking in a bootloader state to a solid, always-on light. I have tried to change const int kInferencesPerCycle, as I initially thought I was just not able to see the flicker, but this does not alter the solid light. I also get this same problem when running a modified model using the steps in the book and this notebook. Each time, as well, when I try to inspect using the Serial Plotter I get the following error in the console: "Board at /dev/cu.usbmodem14301 is not available". Any help with this would be appreciated; I feel very stuck at the moment on what to try, as it seems to compile fine. Thanks!
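The "board not available" symptom above is about the serial device disappearing from the host. A minimal stdlib diagnostic sketch (not from the issue, and the device path pattern is a macOS assumption) that lists the usbmodem entries the IDE would try to open:

```python
import glob

def candidate_serial_ports():
    """Return macOS serial device paths an Arduino may enumerate as.

    USB CDC boards such as the Nano 33 BLE typically show up under
    /dev/cu.usbmodem*; an empty result while the sketch is "running"
    suggests the board dropped off the bus (e.g. crashed or stuck in
    the bootloader), matching the error reported above.
    """
    patterns = ["/dev/cu.usbmodem*", "/dev/tty.usbmodem*"]
    found = []
    for pattern in patterns:
        found.extend(sorted(glob.glob(pattern)))
    return found

ports = candidate_serial_ports()
print(ports if ports else "no usbmodem device found - try resetting the board")
```

Running this before and after upload shows whether the port genuinely vanishes, which separates an IDE problem from a firmware crash.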
tensorflowtensorflow | Not able to find libcudnn.so.7 | Bug | System information: Operating system: Ubuntu 20.04.4 LTS. TensorFlow version: 2.3.1. Python version: 3.7.4. GCC compiler: 7.3.0. CUDA: 10.1.243. cuDNN: 8.0.4. GPU model and memory: GeForce GTX 960M, 4 GB. nvidia-smi: 450.80.02.

2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"
2020-10-15 13:01:11.746116: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
v2.3.0-54-gfcc4b966f1 2.3.1

>>> import tensorflow as tf
2020-10-15 12:44:04.961178: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
>>> tf.config.list_physical_devices('GPU')
2020-10-15 12:44:11.883542: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-10-15 12:44:11.915652: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-15 12:44:11.916045: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:02:00.0 name: GeForce GTX 960M computeCapability: 5.0 coreClock: 1.176GHz coreCount: 5 deviceMemorySize: 3.95GiB deviceMemoryBandwidth: 74.65GiB/s
2020-10-15 12:44:11.916111: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-15 12:44:11.918107: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-15 12:44:11.919968: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-15 12:44:11.920310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-15 12:44:11.922495: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-15 12:44:11.923927: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-15 12:44:11.924102: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudnn.so.7'; dlerror: libcudnn.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib/cuda/include:/usr/lib/cuda/lib64:/home/nitin/catkin_ws/devel/lib:/usr/lib/cuda/include:/usr/lib/cuda/lib64
2020-10-15 12:44:11.924120: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1753] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform. Skipping registering GPU devices...
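The failure above is a path-search problem: dlopen("libcudnn.so.7") walks LD_LIBRARY_PATH and finds nothing. A minimal stdlib sketch (not TensorFlow code) of that lookup, useful for checking which directory should contain the library:

```python
import os

def find_shared_library(name, search_path):
    """Search a colon-separated directory list for a shared library file.

    Roughly mirrors what the dynamic loader does for dlopen: each
    directory in LD_LIBRARY_PATH is checked in order; first match wins.
    Returns the full path, or None if the library is absent everywhere.
    """
    for directory in search_path.split(os.pathsep):
        if not directory:
            continue
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None

# Check the environment the report shows (TF 2.3 wants cuDNN 7.x):
ld_path = os.environ.get("LD_LIBRARY_PATH", "")
print(find_shared_library("libcudnn.so.7", ld_path))
```

If this prints None while cuDNN 8 is installed, the fix is installing the cuDNN 7 runtime that TF 2.3 was built against (or placing a compatible library on the path), not adjusting the path alone.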
tensorflowtensorflow | DenseNet floating-point model reported accuracy is much lower than the original paper | Bug | URL(s) with the issue: floating point models. Description of issue (what needs changing): the accuracy of DenseNet reported at the URL is much lower than the one reported in the original paper. Clear description: in the original paper the performance is as follows: Top-1 74.98, Top-5 92.29 (DenseNet-121). However, the performance reported at the URL is as follows: Top-1 64.2, Top-5 85.6 (DenseNet), and there is no information about the depth of the provided DenseNet model. Were there any modifications to this model, or are these typos?
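The numbers being compared above are Top-1/Top-5 accuracies. As a reference point for reproducing such an evaluation, a minimal stdlib sketch of the top-k metric itself (illustrative only, toy scores rather than real DenseNet logits):

```python
def top_k_accuracy(logits, labels, k):
    """Fraction of examples whose true label is among the k highest scores.

    logits: list of per-class score lists; labels: list of true class ids.
    Top-1 and Top-5 accuracy are this metric with k=1 and k=5.
    """
    hits = 0
    for scores, label in zip(logits, labels):
        ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        if label in ranked[:k]:
            hits += 1
    return hits / len(labels)

logits = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2], [0.2, 0.2, 0.6]]
labels = [1, 2, 2]
print(top_k_accuracy(logits, labels, 1))  # 2 of 3 examples correct at top-1
```

Large Top-1 gaps like 74.98 vs 64.2 usually point to a different checkpoint depth or preprocessing mismatch rather than metric definition, which is why the missing depth information matters.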
tensorflowtensorflow | In tf-nightly, TFLite Interpreter is missing the setUseNNAPI wrapper method that calls Interpreter.Options.setUseNNAPI | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow). OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: Android emulator. TensorFlow installed from (source or binary): org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly. TensorFlow version (use command below): nightly. Describe the current behavior: I make a call to the Interpreter's setUseNNAPI method, which no longer exists in the tf-nightly version as of Oct 12. The method it should call, Interpreter.Options.setUseNNAPI, still exists, but not on the Interpreter itself. Describe the expected behavior: either change the documentation to reflect that this method does not exist, or fix it. Standalone code to reproduce the issue:

tflite = new Interpreter(loadModelFile(assetManager, modelFilename));
if (tflite != null) tflite.setUseNNAPI(isChecked);

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
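The API change described above moves the NNAPI flag from the interpreter to its options object, so it must be set before construction. A toy Python sketch of that builder pattern (the class names mirror the Java API but are stand-ins, not the real library):

```python
class InterpreterOptions:
    """Stand-in for the Java Interpreter.Options builder (illustration only)."""
    def __init__(self):
        self.use_nnapi = False

    def set_use_nnapi(self, enabled):
        self.use_nnapi = enabled
        return self  # builder style: calls can be chained

class Interpreter:
    """Stand-in Interpreter: the NNAPI flag is read from the options at
    construction time, so there is deliberately no setter on the
    interpreter itself - matching the behavior the report describes."""
    def __init__(self, model, options=None):
        self.options = options or InterpreterOptions()

opts = InterpreterOptions().set_use_nnapi(True)
interp = Interpreter(b"model-bytes", opts)
print(interp.options.use_nnapi)
```

The design rationale for such a move is usually that delegates cannot be toggled after the graph is prepared; a setter on a live interpreter would silently do nothing.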
tensorflowtensorflow | Patching CMSIS source files takes a lot of time | Bug | TensorFlow Micro system information. Host OS platform and distribution (e.g. Linux Ubuntu 16.04): Ubuntu 16.04. TensorFlow installed from (source or binary): source. TensorFlow version (commit SHA if source): 261bc3aba4e5c1611a417cf9d916c916996afad2. Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): any. Describe the problem: the download-and-extract script patches CMSIS source files to use fully qualified paths in order to be compatible with the Arduino IDE build system. Currently this takes a significant amount of time, since there are many include files that need to be patched. Ideally there would be no need for patching CMSIS at all, but that would require changing the entire CMSIS repo to use fully qualified paths. Another solution could be to do the patching only when generating Arduino projects; however, that would most likely require us to patch the TFLu optimized op code (kernels/cmsis-nn) instead. A short-term solution is to optimize the patching algorithm. Please provide the exact sequence of commands/steps when you ran into the problem: any command run using TAGS=cmsis-nn, for example: make -f tensorflow/lite/micro/tools/make/Makefile TAGS=cmsis-nn TARGET=sparkfun_edge person_detection_int8_bin
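The short-term optimization suggested above is to speed up the include rewriting itself. A hedged sketch of the idea (the prefix and header set are hypothetical, not the real script's values): do a single regex pass per file instead of one substitution pass per header:

```python
import re

# Hypothetical prefix; the real script qualifies CMSIS includes so the
# Arduino IDE can resolve them without extra include-path flags.
PREFIX = "cmsis/CMSIS/"

def qualify_includes(source, known_headers):
    """Rewrite bare #include "foo.h" lines to a fully qualified path.

    One compiled-regex pass over the whole file; headers not in the
    known set (e.g. local project headers) are left untouched.
    """
    def repl(match):
        header = match.group(1)
        if header in known_headers:
            return '#include "%s%s"' % (PREFIX, header)
        return match.group(0)

    return re.sub(r'#include\s+"([^"]+)"', repl, source)

src = '#include "arm_math.h"\n#include "my_local.h"\n'
print(qualify_includes(src, {"arm_math.h"}))
```

Replacing N per-header passes with one pass that consults a set is O(file size) instead of O(headers x file size), which is the kind of algorithmic fix the report asks for.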
tensorflowtensorflow | ImportError: cannot import name 'MomentumParameters' | Bug | Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: build_template. System information: it's on Colab. Describe the problem: I found your beautiful code, ran the very second line of it, and was thrilled to get the error; I wasted one full day trying to figure out the issue. Line of code: from tflite_model_maker import configs. Error: ImportError: cannot import name 'MomentumParameters'. Provide the exact sequence of commands/steps that you executed before running into the problem: I just ran your code from this tutorial and it fails at the second line of code. Any other info / logs: the following imports fail as well: from tflite_model_maker import ExportFormat; from tflite_model_maker import model_spec; from tflite_model_maker import text_classifier; from tflite_model_maker import TextClassifierDataLoader. Thanks so much, happy coding!
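Import errors like the one above are often (though not always) a version skew between tflite-model-maker and the TensorFlow build it was written against. A small stdlib sketch of a pre-flight version comparison one could run before the tutorial imports (the minimum version used here is purely an example value):

```python
def check_compat(installed, minimum):
    """True if a dotted version string is at least the given minimum.

    A quick sanity check of the installed tensorflow / tflite-model-maker
    versions before running a tutorial can save a day of debugging.
    """
    to_tuple = lambda v: tuple(int(p) for p in v.split(".") if p.isdigit())
    return to_tuple(installed) >= to_tuple(minimum)

print(check_compat("2.4.0", "2.3.0"))  # installed 2.4.0 satisfies minimum 2.3.0
print(check_compat("2.2.0", "2.3.0"))  # installed 2.2.0 does not
```

In practice one would feed this with importlib.metadata.version("tensorflow") and the minimum pinned in the tutorial's requirements.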
tensorflowtensorflow | ValueError: The same saveable will be restored with two names (layer_with_weights-1/table/.ATTRIBUTES/table) | Bug | TensorFlow 2.3.1. Describe the current behavior: when I run pip install -U tensorflow at the beginning and run saved_model_cli show --dir my_pet_classifier --all at the end of this document (Colab), it raises an error:

2020-10-14 08:20:39.291886: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['Age'] tensor_info:
        dtype: DT_INT64
        shape: (-1, 1)
        name: serving_default_Age:0
    inputs['Breed1'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: serving_default_Breed1:0
    inputs['Color1'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: serving_default_Color1:0
    inputs['Color2'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: serving_default_Color2:0
    inputs['Fee'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: serving_default_Fee:0
    inputs['FurLength'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: serving_default_FurLength:0
    inputs['Gender'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: serving_default_Gender:0
    inputs['Health'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: serving_default_Health:0
    inputs['MaturitySize'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: serving_default_MaturitySize:0
    inputs['PhotoAmt'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: serving_default_PhotoAmt:0
    inputs['Sterilized'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: serving_default_Sterilized:0
    inputs['Type'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: serving_default_Type:0
    inputs['Vaccinated'] tensor_info:
        dtype: DT_STRING
        shape: (-1, 1)
        name: serving_default_Vaccinated:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['dense_1']
tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 1)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict

2020-10-14 08:20:41.728805: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-10-14 08:20:41.738107: E tensorflow/stream_executor/cuda/cuda_driver.cc:314] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2020-10-14 08:20:41.738155: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (1a8174b158ac): /proc/driver/nvidia/version does not exist
Traceback (most recent call last):
  File "/usr/local/bin/saved_model_cli", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/tools/saved_model_cli.py", line 1185, in main
    args.func(args)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/tools/saved_model_cli.py", line 715, in show
    _show_all(args.dir)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/tools/saved_model_cli.py", line 307, in _show_all
    _show_defined_functions(saved_model_dir)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/tools/saved_model_cli.py", line 187, in _show_defined_functions
    trackable_object = load.load(saved_model_dir)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 603, in load
    return load_internal(export_dir, tags, options)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 633, in load_internal
    ckpt_options)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 131, in __init__
    self._restore_checkpoint()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 330, in _restore_checkpoint
    load_status = saver.restore(variables_path, self._checkpoint_options)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/util.py", line 1320, in restore
    checkpoint=checkpoint, proto_id=0).restore(self._graph_view.root)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py", line 209, in restore
    restore_ops = trackable._restore_from_checkpoint_position(self)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/base.py", line 914, in _restore_from_checkpoint_position
    tensor_saveables, python_saveables))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/util.py", line 290, in restore_saveables
    tensor_saveables)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saving/saveable_object_util.py", line 361, in validate_and_slice_inputs
    _add_saveable(saveables, seen_ops, converted_saveable_object)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saving/saveable_object_util.py", line 331, in _add_saveable
    saveable.name)
ValueError: The same saveable will be restored with two names (layer_with_weights-1/table/.ATTRIBUTES/table)
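The traceback ends in a duplicate-name check inside the checkpoint restore path. A toy stdlib sketch of that validation (an illustration of the check's logic, not the actual saveable_object_util code):

```python
def add_saveable(seen, name):
    """Register a checkpoint saveable name, rejecting duplicates.

    Mirrors the failure mode above: if two objects in the restored graph
    claim the same checkpoint key, restoring is ambiguous and refused.
    """
    if name in seen:
        raise ValueError(
            "The same saveable will be restored with two names (%s)" % name)
    seen.add(name)

seen = set()
add_saveable(seen, "layer_with_weights-1/table/.ATTRIBUTES/table")
try:
    add_saveable(seen, "layer_with_weights-1/table/.ATTRIBUTES/table")
except ValueError as e:
    print(e)
```

The key in the report names a lookup table attribute, which suggests the same StaticHashTable ended up tracked twice in the saved object graph.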
tensorflowtensorflow | Update BatchNormalization documentation | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue. Description of issue (what needs changing): Clear description: the documentation should mention that axis can take a list of integers, not just a single integer. I tested it and it is already implemented in TensorFlow; I was not aware of it and used reshape and transpose instead, which is inefficient. Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? The axis can be a list of integers. Returns defined: are return values defined? Raises listed and defined: are the errors defined (for example, Raises)? Usage example: is there a usage example? See the API guide on how to write testable usage examples. Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request: are you planning to also submit a pull request to fix the issue? See the docs contributor guide, the docs API guide, and the docs style guide.
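The axis-as-a-list behavior described above means statistics are kept per element of the listed feature axes and reduced over the rest. A minimal stdlib sketch of that distinction on a 2-D input (illustrative arithmetic only, not the Keras layer itself):

```python
def feature_means_2d(x, keep_axes):
    """Means of a 2-D array, keeping the listed axes as feature axes.

    With keep_axes=[1] a single mean is kept per column (reducing over
    rows, as BatchNormalization(axis=1) would on this shape); with
    keep_axes=[0, 1] every element keeps its own statistic, so nothing
    is reduced - the kind of behavior a list of axes selects.
    """
    rows, cols = len(x), len(x[0])
    if keep_axes == [1]:          # one mean per column
        return [sum(x[r][c] for r in range(rows)) / rows for c in range(cols)]
    if keep_axes == [0, 1]:       # one statistic per element (no reduction)
        return [row[:] for row in x]
    raise NotImplementedError("sketch handles [1] and [0, 1] only")

x = [[1.0, 2.0], [3.0, 4.0]]
print(feature_means_2d(x, [1]))  # per-column means
```

Doing this with a single-integer axis forces the reshape/transpose workaround the report mentions; accepting a list avoids the extra data movement.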
tensorflowtensorflow | TF to TFLite int8 results in error: quantization not yet supported for op | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, a mix. OS platform and distribution (e.g. Linux Ubuntu 16.04). TensorFlow installed from (source or binary): source. TensorFlow version (use command below): 2.3. Python version: 3. Describe the current behavior: conversion to TFLite float16 works and the model runs well; conversion to TFLite int8 does not convert. Describe the expected behavior: conversion to TFLite int8 works. Standalone code to reproduce the issue:

# TFLite model export
try:
    print('\nStarting TFLite export with TensorFlow %s...' % tf.__version__)
    if opt.no_tfl_detect:
        print("Don't export Detect module")
        m.train = True
    keras_model = keras.Model(inputs=inputs, outputs=tf_model.predict(inputs))

    # fp16 TFLite model export
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    converter.allow_custom_ops = False
    converter.experimental_new_converter = True
    tflite_model = converter.convert()
    f = opt.weights.replace('.pt', '.tflite')  # filename
    open(f, 'wb').write(tflite_model)
    print('\nTFLite export success, saved as %s' % f)

    # int8 TFLite model export
    if opt.tfl_int8:
        dataset = LoadImages(opt.source, img_size=opt.img_size, auto=False)

        def representative_dataset_gen():
            n = 0
            for path, img, im0s, vid_cap in dataset:
                # Get sample input data as a numpy array in a method of your choosing.
                n += 1
                input = np.transpose(img, [1, 2, 0])
                input = np.expand_dims(input, axis=0).astype(np.float32)
                input /= 255.0
                yield [input]
                if n >= opt.ncalib:
                    break

        converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
        # This enables quantization
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        # This sets the representative dataset for quantization
        converter.representative_dataset = representative_dataset_gen
        # This ensures that if any ops can't be quantized, the converter throws an error
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        # For full integer quantization, though supported types defaults to int8 only, we explicitly declare it for clarity
        converter.target_spec.supported_types = [tf.int8]
        # These set the input and output tensors to uint8 (added in r2.3)
        converter.inference_input_type = tf.uint8
        converter.inference_output_type = tf.uint8
        tflite_model = converter.convert()
        with open('mobilenet_v2_1.0_224_quant.tflite', 'wb') as f:
            f.write(tflite_model)

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached: logs.pdf
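The full-integer path above rejects any op without an int8 kernel; the representative dataset exists so the converter can pick a scale and zero point per tensor from observed activation ranges. A minimal stdlib sketch of that affine quantization arithmetic (illustrative values, not taken from the model in the report):

```python
def quantize(x, scale, zero_point):
    """Affine-quantize a float to int8: q = round(x / scale) + zero_point.

    This is the arithmetic full-integer conversion applies to every
    tensor; values outside the representable range clamp to [-128, 127].
    """
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Recover the approximate float: x = (q - zero_point) * scale."""
    return (q - zero_point) * scale

scale, zero_point = 0.5, 10       # example calibration results
q = quantize(3.2, scale, zero_point)
print(q, dequantize(q, scale, zero_point))
```

When the converter reports "quantization not yet supported for op", it means some op in the graph has no int8 implementation of this arithmetic; the usual workarounds are replacing the op or allowing TFLITE_BUILTINS fallback for it.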
tensorflowtensorflow | Cannot save mixed precision model when using activity_regularizer | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. Tag: bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Ubuntu 18.04.5 LTS. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.3.1. Python version: Python 3.6.9. CUDA/cuDNN version: 10.1 / 7.6.5.32-1+cuda10.1. GPU model and memory: RTX 2080 Ti, 11 GB. Describe the current behavior: when using an activity_regularizer in a subclassed Keras model, it is not possible to save the model if the mixed_float16 policy is set. If mixed_float16 is not set, the example below does work. Describe the expected behavior: the example below should save the model. Standalone code to reproduce the issue:

import numpy as np
import tensorflow as tf
from tensorflow.keras.mixed_precision import experimental as mixed_precision

policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)

class TestModel(tf.keras.models.Model):
    def __init__(self, **kwargs):
        super(TestModel, self).__init__(**kwargs)
        self.conv_0 = tf.keras.layers.Conv2D(1, 1, 1, activity_regularizer='l1')

    def call(self, x):
        return self.conv_0(x)

    def get_config(self):
        return {}

model = TestModel()
model(np.zeros((1, 16, 16, 3)))
model.save('test_model')

Other info / logs: ValueError: Python inputs incompatible with input_signature: inputs: (Tensor("statefulpartitionedcall:0", shape=(None, 16, 16, 1), dtype=float16)) input_signature: (TensorSpec(shape=..., dtype=tf.float32, name=None))
tensorflowtensorflow | Cannot use seq_len other than 128 with BertNLClassifier (Model Maker text classification example) | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): openSUSE Leap 15.2 for training and Ubuntu 18.10 with Android Studio. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: Sony Xperia XZ1 Compact, LineageOS 16.0. TensorFlow installed from (source or binary): from binary (pip). TensorFlow version (use command below): v1.12.1-38915-gfe968502a9 2.4.0-dev20200810. Python version: 3.6.12. Bazel version (if compiling from source). GCC compiler version (if compiling from source). CUDA/cuDNN version: 11.1.74. GPU model and memory: Tesla V100-PCIE, 32 GB.

I was trying out the "Text classification with TensorFlow Lite Model Maker" tutorial. I would like to fine-tune MobileBERT with seq_len=512, convert it to TFLite, and run the TFLite model on Android. I can fine-tune MobileBERT with seq_len=512 and convert it to TFLite just fine. When I try to use the TFLite model on Android with the Java BertNLClassifier and try to classify text, the app crashes. If I use a model fine-tuned with seq_len=128, it works just fine. In the tutorial it says that 128 is the default sequence length, but that it can be adjusted. I tried seq_len values of 16, 64, 256, and 512, but they all produce the same error as given below.

Describe the current behavior:
1. Fine-tune MobileBERT using Model Maker with a seq_len other than 128, e.g. with:
model_spec = model_spec.get('mobilebert_classifier')
model_spec.seq_len = 512
2. Convert the model to TFLite.
3. Run the following Java code on Android:
Context context = getApplicationContext();
BertNLClassifier classifier = BertNLClassifier.createFromFile(context, "model.tflite");
Log.v(TAG, "created classifier " + classifier.toString());
// Run inference
List<Category> results = classifier.classify("good");
Log.v(TAG, "successfully did an inference");
4. Run the app.
5. The app crashes:
2020-10-11 18:41:31.045 9441 9441 org.tensorflow.lite.examples.textclassification A/libc: Fatal signal 6 (SIGABRT), code -6 (SI_TKILL) in tid 9441 (tclassification), pid 9441 (tclassification)

Describe the expected behavior: using a seq_len other than 128 should not produce an error.

Standalone code to reproduce the issue: for the Python fine-tuning code, see the tutorial Colab notebook; a slightly modified MainActivity.java from the text classification Android app example.

Other info / logs: full traceback:
10-12 19:30:03.424 23376 23376 F libc    : Fatal signal 6 (SIGABRT), code -6 (SI_TKILL) in tid 23376 (tclassification), pid 23376 (tclassification)
10-12 19:30:03.485 23419 23419 F DEBUG   : *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
10-12 19:30:03.485 23419 23419 F DEBUG   : LineageOS Version: '16.0-20190813-UNOFFICIAL-lilac'
10-12 19:30:03.485 23419 23419 F DEBUG   : Build fingerprint: 'Sony/lilac/lilac:9/yoshino-2.2.0-190725-0908/1:user/dev-keys'
10-12 19:30:03.485 23419 23419 F DEBUG   : Revision: '0'
10-12 19:30:03.485 23419 23419 F DEBUG   : ABI: 'arm64'
10-12 19:30:03.485 23419 23419 F DEBUG   : pid: 23376, tid: 23376, name: tclassification >>> org.tensorflow.lite.examples.textclassification <<<
10-12 19:30:03.485 23419 23419 F DEBUG   : signal 6 (SIGABRT), code -6 (SI_TKILL), fault addr --------
10-12 19:30:03.485 23419 23419 F DEBUG   : x0 0000000000000000 x1 0000000000005b50 x2 0000000000000006 x3 0000000000000008
10-12 19:30:03.485 23419 23419 F DEBUG   : x4 00000000646f6f67 x5 00000000646f6f67 x6 00000000646f6f67 x7 00000000646f6f67
10-12 19:30:03.485 23419 23419 F DEBUG   : x8 0000000000000083 x9 0000007c91bfbb40 x10 fffffff87ffffbdf x11 0000000000000001
10-12 19:30:03.485 23419 23419 F DEBUG   : x12 fefefefefefefeff x13 1e1e1e1e1e1e1e1e x14 0000000000000018 x15 0000000000000000
10-12 19:30:03.485 23419 23419 F DEBUG   : x16 0000007c91c332a8 x17 0000007c91b71360 x18 0000007beb7d3600 x19 0000000000005b50
10-12 19:30:03.485 23419 23419 F DEBUG   : x20 0000000000005b50 x21 0000000000000083 x22 0000007c07a6e400 x23 aaaaaaaaaaaaaaab
10-12 19:30:03.485 23419 23419 F DEBUG   : x24 0000007c07a6e470 x25 0000007c117ffe00 x26 0000007c0955f600 x27 0000007c117ffe0c
10 12 19 30 03 485 23419 23419 f
debug x28 0000000000000001 x29 0000007fd32858a0 10 12 19 30 03 485 23419 23419 f debug sp 0000007fd3285860 lr 0000007c91b65b18 pc 0000007c91b65b44 10 12 19 30 03 658 23419 23419 f debug 10 12 19 30 03 658 23419 23419 f debug backtrace 10 12 19 30 03 658 23419 23419 f debug 00 pc 0000000000021b44 system lib64 libc so abort 124 10 12 19 30 03 658 23419 23419 f debug 01 pc 0000000000039848 datum app org tensorflow lite example textclassification mots0rndnpfbnq09b2opyg lib arm64 libtask text jni so 10 12 19 30 03 658 23419 23419 f debug 02 pc 0000000000073ea8 datum app org tensorflow lite example textclassification mots0rndnpfbnq09b2opyg lib arm64 libtask text jni so 10 12 19 30 03 658 23419 23419 f debug 03 pc 0000000000073d38 datum app org tensorflow lite example textclassification mots0rndnpfbnq09b2opyg lib arm64 libtask text jni so 10 12 19 30 03 658 23419 23419 f debug 04 pc 00000000000733d4 datum app org tensorflow lite example textclassification mots0rndnpfbnq09b2opyg lib arm64 libtask text jni so 10 12 19 30 03 658 23419 23419 f debug 05 pc 0000000000565be0 system lib64 libart so art quick generic jni trampoline 144 10 12 19 30 03 658 23419 23419 f debug 06 pc 000000000055ce4c system lib64 libart so art quick invoke static stub 604 10 12 19 30 03 658 23419 23419 f debug 07 pc 00000000000cf760 system lib64 libart so art artmethod invoke art thread unsigned int unsigned int art jvalue char const 232 10 12 19 30 03 658 23419 23419 f debug 08 pc 00000000002823f0 system lib64 libart so art interpreter artinterpretertocompiledcodebridge art thread art artmethod art shadowframe unsigned short art jvalue 344 10 12 19 30 03 658 23419 23419 f debug 09 pc 000000000027c3ac system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 948 10 12 19 30 03 658 23419 23419 f debug 10 pc 000000000052d95c system lib64 libart so mterpinvokestatic 204 10 12 19 30 03 658 23419 23419 f debug 11 pc 
000000000054f294 system lib64 libart so executemterpimpl 14612 10 12 19 30 03 658 23419 23419 f debug 12 pc 0000000000163e18 dev ashmem dalvik class dex extract in memory from data app org tensorflow lite example textclassification mots0rndnpfbnq09b2opyg base apk delete org tensorflow lite task text nlclassifi bertnlclassifier classify 8 10 12 19 30 03 659 23419 23419 f debug 13 pc 0000000000255ea8 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 1271626068 496 10 12 19 30 03 659 23419 23419 f debug 14 pc 000000000025ba28 system lib64 libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 10 12 19 30 03 659 23419 23419 f debug 15 pc 000000000027c390 system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 920 10 12 19 30 03 659 23419 23419 f debug 16 pc 000000000052c458 system lib64 libart so mterpinvokevirtual 584 10 12 19 30 03 659 23419 23419 f debug 17 pc 000000000054f114 system lib64 libart so executemterpimpl 14228 10 12 19 30 03 659 23419 23419 f debug 18 pc 0000000000144bd8 dev ashmem dalvik class dex extract in memory from data app org tensorflow lite example textclassification mots0rndnpfbnq09b2opyg base apk delete org tensorflow lite example textclassification mainactivity oncreate 104 10 12 19 30 03 659 23419 23419 f debug 19 pc 0000000000255ea8 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 1271626068 496 10 12 19 30 03 659 23419 23419 f debug 20 pc 000000000025ba28 system lib64 libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 10 12 19 30 03 659 23419 23419 f debug 21 pc 000000000027c390 system lib64 libart so bool art interpreter docall art artmethod art 
thread art shadowframe art instruction const unsigned short art jvalue 920 10 12 19 30 03 659 23419 23419 f debug 22 pc 000000000052c458 system lib64 libart so mterpinvokevirtual 584 10 12 19 30 03 659 23419 23419 f debug 23 pc 000000000054f114 system lib64 libart so executemterpimpl 14228 10 12 19 30 03 659 23419 23419 f debug 24 pc 0000000000372386 system framework boot framework vdex android app activity performcreate 24 10 12 19 30 03 659 23419 23419 f debug 25 pc 0000000000255ea8 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 1271626068 496 10 12 19 30 03 659 23419 23419 f debug 26 pc 000000000025ba28 system lib64 libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 10 12 19 30 03 659 23419 23419 f debug 27 pc 000000000027c390 system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 920 10 12 19 30 03 659 23419 23419 f debug 28 pc 000000000052f4a8 system lib64 libart so mterpinvokevirtualquick 584 10 12 19 30 03 659 23419 23419 f debug 29 pc 0000000000552e94 system lib64 libart so executemterpimpl 29972 10 12 19 30 03 659 23419 23419 f debug 30 pc 00000000004a1d50 system framework boot framework vdex android app activity performcreate 2 10 12 19 30 03 659 23419 23419 f debug 31 pc 0000000000255ea8 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 1271626068 496 10 12 19 30 03 659 23419 23419 f debug 32 pc 000000000025ba28 system lib64 libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 10 12 19 30 03 659 23419 23419 f debug 33 pc 000000000027c390 system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const 
(backtrace continued, condensed — between each Java frame below the same ART interpreter frames repeat: art::interpreter::Execute -> ArtInterpreterToInterpreterBridge -> art::interpreter::DoCall -> MterpInvoke{Virtual,Direct,Static} -> ExecuteMterpImpl, all in /system/lib64/libart.so)

#36 boot-framework.vdex android.app.Instrumentation.callActivityOnCreate
#42 boot-framework.vdex android.app.ActivityThread.performLaunchActivity
#48 boot-framework.vdex android.app.ActivityThread.handleLaunchActivity
#54 boot-framework.vdex android.app.servertransaction.LaunchActivityItem.execute
#60 boot-framework.vdex android.app.servertransaction.TransactionExecutor.executeCallbacks
#66 boot-framework.vdex android.app.servertransaction.TransactionExecutor.execute
#72 boot-framework.vdex android.app.ActivityThread$H.handleMessage
#78 boot-framework.vdex android.os.Handler.dispatchMessage
#84 boot-framework.vdex android.os.Looper.loop
#90 boot-framework.vdex android.app.ActivityThread.main
#92-#95 libart.so artQuickToInterpreterBridge / art_quick_to_interpreter_bridge / art_quick_invoke_static_stub / art::ArtMethod::Invoke
#96-#98 libart.so InvokeWithArgArray / art::InvokeMethod / art::Method_invoke
#99 boot.oat (offset 0x10d000) java.lang.Class.getDeclaredMethodInternal
#106 boot-framework.vdex com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run
#110 boot-framework.oat (offset 0x39f000) com.android.internal.os.ZygoteInit.main
#111-#114 libart.so art_quick_invoke_static_stub / art::ArtMethod::Invoke / InvokeWithArgArray / art::InvokeWithVarArgs
#115 libart.so art::JNI::CallStaticVoidMethodV
#116 libandroid_runtime.so _JNIEnv::CallStaticVoidMethod
#117 libandroid_runtime.so android::AndroidRuntime::start(char const*, android::Vector<...> const&, bool)
#118 /system/bin/app_process64 main
#119 /system/lib64/libc.so __libc_init

Thank you for your help.
tensorflowtensorflow | After converting to TFLite, the network loses its structure and returns zeros | Bug | System information: OS platform and distribution: Linux Ubuntu 20.04 LTS, 64-bit; TensorFlow installed from: conda; TensorFlow version: 2.2.0; Python version: 3.8.5; GCC compiler version: 9.3.0; CUDA/cuDNN version: 11.0; GPU model and memory: GTX 1080, 12 GB.

Problem: I trained a model on my own dataset. I used the ssd_resnet50_v1_fpn_1024x1024_coco17_tpu-8 model from the TensorFlow v2 model zoo repository:

python model_main_tf2.py --model_dir=trening_demo/models/my_ssd_resnet50_v1_fpn_1024x1024_coco17_tpu-8 --pipeline_config_path=trening_demo/pre_trained_models/ssd_resnet50_v1_fpn_1024x1024_coco17_tpu-8/pipeline.config

After training the model on the dataset, I exported it using this command:

python export_tflite_graph_tf2.py --pipeline_config_path=trening_demo/pre_trained_models/ssd_resnet50_v1_fpn_1024x1024_coco17_tpu-8/pipeline.config --trained_checkpoint_dir=trening_demo/models/my_ssd_resnet50_v1_fpn_1024x1024_coco17_tpu-8 --output_directory=trening_demo/exported_models/tflite_ex_my_ssd_resnet50_v1_fpn_1024x1024_coco17_tpu-8

Then I ran the detector on the saved model and the results were satisfactory. Next I wanted to convert the SavedModel to TFLite. I used:

import tensorflow as tf

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)  # path to the SavedModel directory
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Save the model
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

Then I ran the detector on the TFLite model and the results were zeros. These lines of code return zeros:

# Retrieve detection results
boxes = interpreter.get_tensor(output_details[0]['index'])[0]    # bounding box coordinates of detected objects
classes = interpreter.get_tensor(output_details[1]['index'])[0]  # class index of detected objects
scores = interpreter.get_tensor(output_details[2]['index'])[0]   # confidence of detected objects
num = interpreter.get_tensor(output_details[3]['index'])[0]      # total number of detected objects

and raise this (inaccurate and unneeded) error:

IndexError                                Traceback (most recent call last)
<ipython-input> in <module>
     23 print(interpreter.get_tensor(output_details[3]['index']))
     24 # Retrieve detection results
---> 25 boxes = interpreter.get_tensor(output_details[0]['index'])[0]    # bounding box coordinates of detected objects
     26 classes = interpreter.get_tensor(output_details[1]['index'])[0]  # class index of detected objects
     27 scores = interpreter.get_tensor(output_details[2]['index'])[0]   # confidence of detected objects
IndexError: invalid index to scalar variable.

So I compared the network structures with the Netron program: the structure of the pretrained model from the TF model zoo (p1), the structure of the network after training (p2), and the structure after converting to TFLite (p3). Why has the network been reduced so much, and how can this problem be solved?
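The "invalid index to scalar variable" IndexError in the trace above is the generic NumPy error raised whenever a 0-d (scalar) value is indexed with [0] — which is what happens when one of the interpreter's output tensors comes back as a scalar instead of the expected batched array. A minimal sketch of the failure mode and a defensive check, using plain NumPy stand-ins rather than the TFLite interpreter (the helper name is illustrative, not a TFLite API):

```python
import numpy as np

def first_or_scalar(t):
    """Return t[0] when t has at least one dimension, else t itself.

    Mirrors the defensive check one would apply to interpreter.get_tensor()
    results before stripping the batch dimension with [0].
    """
    arr = np.asarray(t)
    return arr[0] if arr.ndim > 0 else arr

# A detection output with a batch dimension indexes fine:
boxes = np.zeros((1, 10, 4), dtype=np.float32)
assert first_or_scalar(boxes).shape == (10, 4)

# A 0-d "num detections" value fails if indexed directly
# (NumPy raises IndexError: invalid index to scalar variable):
num = np.float32(0.0)
try:
    num[0]
except (IndexError, TypeError) as e:
    print("indexing a scalar raises:", e)

# ...but passes through the guarded helper untouched:
assert first_or_scalar(num) == 0.0
```

The guard does not fix the underlying conversion problem (the stripped graph), but it makes the zero/scalar outputs visible instead of crashing.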
tensorflowtensorflow | tf.function and PerReplica tf.Variables do not work together | Bug | Please make sure that this is a bug; as per our GitHub policy we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. Tag: bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Ubuntu 16.04; TensorFlow installed from (source or binary): binary, from pip; TensorFlow version (use command below): v2.3.0-rc2-23-gb36436b087 2.3.0; Python version: 3.7.9; GPU model and memory: 2080 Ti.

Describe the current behavior:

import tensorflow as tf
from tensorflow.python.distribute import values as values_lib

strategy = tf.distribute.MirroredStrategy()
shape = [3, 3]

def create_value(ctx):
    return tf.Variable(tf.random.normal(shape=shape))
    # return tf.random.normal(shape=shape)

v = strategy.experimental_distribute_values_from_function(create_value)
assert isinstance(v, values_lib.PerReplica)

@tf.function
def my_print(v):
    tf.print(v)

strategy.run(my_print, args=(v,))

Result:

Traceback (most recent call last):
  File "4_tape_and_distribute_issue_report.py", line 19, in <module>
    strategy.run(my_print, args=(v,))
  File "/home/cloudhan/miniconda3/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py", line 1211, in run
  File "/home/cloudhan/miniconda3/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py", line 2585, in call_for_each_replica
  File "/home/cloudhan/miniconda3/lib/python3.7/site-packages/tensorflow/python/distribute/mirrored_strategy.py", line 585, in _call_for_each_replica
  File "/home/cloudhan/miniconda3/lib/python3.7/site-packages/tensorflow/python/distribute/mirrored_run.py", line 78, in call_for_each_replica
  File "/home/cloudhan/miniconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
    result = self._call(*args, **kwds)
  File "/home/cloudhan/miniconda3/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 846, in _call
  File "/home/cloudhan/miniconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1848, in _filtered_call
  File "/home/cloudhan/miniconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat
  File "/home/cloudhan/miniconda3/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 550, in call
  File "/home/cloudhan/miniconda3/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run __inference_my_print_79: Can't copy Tensor with type resource to device /job:localhost/replica:0/task:0/device:GPU:0. [Op:__inference_my_print_79]

Describe the expected behavior: the above code should work. Currently the only ways to make it work are to create tensors instead of variables in create_value, or to comment out the @tf.function decorator on my_print.

Standalone code to reproduce the issue: colab notebook.
tensorflowtensorflow | TFLite interpreter crashes when running with TF Select ops | Bug | Hi, I have a similar issue when I try to invoke the interpreter (image). The TFLite conversion passes without errors:

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.allow_custom_ops = True
converter.experimental_new_converter = True
tflite_model = converter.convert()
open('converted_model.tflite', 'wb').write(tflite_model)

interpreter = tf.lite.Interpreter(model_path='converted_model.tflite')
interpreter.allocate_tensors()

Model definition:

# Define LSTM model with skip connection
def lstm_net(n_class=4):
    i = Input(shape=(79, 40), name='input')
    x = Masking()(i)
    x = LayerNormalization(name='layer_norm')(x)
    s = TimeDistributed(Dense(64, activation='tanh'), name='td_dense_tanh')(x)
    x = Bidirectional(LSTM(128, return_sequences=True), name='bidirectional_lstm')(s)
    x = concatenate([s, x], axis=2, name='skip_connection')
    x = Dense(64, activation='relu', name='dense_1_relu')(x)
    x = MaxPooling1D(name='max_pool_1d')(x)
    x = Dense(32, activation='relu', name='dense_2_relu')(x)
    x = Flatten(name='flatten')(x)
    x = Dropout(rate=0.5, name='dropout')(x)
    x = Dense(32, activation='relu', activity_regularizer=regularizers.l2(0.001), name='dense_3_relu')(x)
    o = Dense(n_class, activation='softmax', name='softmax')(x)
    model = Model(inputs=i, outputs=o, name='long_short_term_memory')
    return model

However, I do not get any error when I use BatchNormalization instead of LayerNormalization. Not sure if that makes any difference, but model performance is not as expected with BatchNormalization; I need to use LayerNormalization for my project. Please help as soon as possible. Originally posted by thaslim in (issue comment).
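The skip connection in the model above concatenates the TimeDistributed branch (64 features) with the Bidirectional LSTM output (2 x 128 features) along the feature axis. A quick NumPy sketch of the resulting shapes, using the layer sizes from the model definition above (no TensorFlow needed, purely illustrative):

```python
import numpy as np

batch, timesteps = 1, 79

# Stand-ins for the two branches feeding the skip connection:
s = np.zeros((batch, timesteps, 64))   # td_dense_tanh output
x = np.zeros((batch, timesteps, 256))  # bidirectional_lstm output (2 * 128)

# concatenate([s, x], axis=2) in the Keras model corresponds to:
skip = np.concatenate([s, x], axis=2)
assert skip.shape == (1, 79, 320)
print("skip connection output shape:", skip.shape)
```

The concatenation itself is a plain builtin op; in this model it is the LSTM/LayerNormalization path that pulls in TF Select kernels.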
tensorflowtensorflow | Slow inference with libopencm3 when compared to mbed | Bug | TensorFlow Micro system information: Host OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 20.10; TensorFlow installed from: source; TensorFlow version: 2.3.0, commit b36436b; Target platform: building for a Nucleo-F767ZI dev board, comparing performance between mbed and libopencm3.

Describe the problem: Hello, I have had this problem for a while now, but only recently made enough progress that I can ask for help. I am trying to make the TensorFlow Lite for Microcontrollers (TFMicro) library work with libopencm3, which is an open-source firmware library for various ARM Cortex-M microcontrollers. Once this is done, it should be quick to run TensorFlow on any micro that libopencm3 supports. Right now I am extensively testing this on a Nucleo-F767ZI dev board, which has an STM32F767ZI micro with 2 MB of flash and 1 MB of SRAM.

The TFMicro port works as it should: I tested it with several different models, everything compiles, and I also get the same outputs as the TFLite Python interpreter. However, on-device inference is very slow: it takes about 1465 ms for one inference with the -O3 flag. I managed to get the same setup working with a generated mbed project; with the -O3 flag I get an inference time of 486 ms, almost a second faster.

mbed has an option to export a Makefile for the project, which you usually need to compile with the mbed command-line tools. I exported the Makefile and, with some small changes, the project compiles and the inference time is again 486 ms. I hoped that by comparing the exported mbed Makefile with my Makefile I could find the difference and get to the core of the problem, but so far I have not managed to. So far I can tell that the problem is not in the TensorFlow code or in the model; both setups use the same things. I also copied the exported mbed Makefile into my project and started swapping out pieces. I got to a setup where I was using all compile flags from the mbed Makefile together with a libopencm3 archive file, linker file and startup file, but inference was still around 1465 ms.

Both setups set the clock frequency of the micro to 216 MHz. I am timing inference in exactly the same way, with the DWT counter, which increments on every clock cycle; to get to milliseconds I do a bit of calculation that is the same in both cases. Both setups also use the CMSIS-NN kernel implementations; I manually made sure of that. For now I cannot tell what exactly the problem is, but the things that differ are the linker script, the linker flags, and the startup routine. I can see that libopencm3 links with -nostartfiles --specs=nano.specs --specs=nosys.specs, while the mbed Makefile just passes many -Wl,--wrap flags. Judging by the look of it, it uses crt0.S for some startup work, but this is way over my head. What can make a microcontroller run slowly even though the clock is the same in both examples? Can an incorrect linker script slow down a micro, or is there some setting that makes loading a model from flash slow? Any help or a suggested path for how to continue with this problem would be extremely appreciated.

Please provide the exact sequence of commands/steps when you ran into the problem. Both setups are available on my GitHub.

Project with the fast mbed example. To load the dependencies this code requires, run:
mbed config root .
mbed deploy
To compile this for any platform supported by mbed:
mbed compile -m auto -t GCC_ARM -f --profile my_profile.json
To compile this for the Nucleo-F767ZI dev board you can run the Makefile (make flash -j4) or, with mbed:
mbed compile -m NUCLEO_F767ZI -t GCC_ARM -f --profile my_profile.json
Inspect serial output with: minicom -b 9600
The locations of the main file and linker file that are used can be found here; the startup file that is used can be found here.

Project with the slow libopencm3 example. The setup for my project takes more time and more commands, as I am using tensorflow and libopencm3 as submodules. Please note that I am using the branch mbed-makefile to showcase my problem. To get everything set up you should just copy the commands below:
git clone --recurse-submodules
git checkout mbed-makefile
cd tensorflow
sudo make -f tensorflow/lite/micro/tools/make/Makefile hello_world
cd ..
make -C libopencm3
mbed config root .
mbed deploy
To compile and flash to the Nucleo board:
make flash -j4
Inspect serial output with: minicom -b 115200
The locations of the main file and linker script that are used can be found here.
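The DWT cycle counter mentioned above increments once per core clock, so converting a cycle count to milliseconds is just a division by the clock rate. A small sketch of that calculation; the 216 MHz clock and the two measured times are from the report, while the cycle counts shown are back-computed from those times for illustration:

```python
CPU_HZ = 216_000_000  # both builds clock the STM32F767ZI at 216 MHz

def cycles_to_ms(cycles, cpu_hz=CPU_HZ):
    """Convert a DWT CYCCNT delta to milliseconds."""
    return cycles * 1000.0 / cpu_hz

# Cycle counts corresponding to the two reported inference times:
mbed_cycles = 104_976_000        # ~486 ms
libopencm3_cycles = 316_440_000  # ~1465 ms

assert round(cycles_to_ms(mbed_cycles)) == 486
assert round(cycles_to_ms(libopencm3_cycles)) == 1465
print("slowdown factor:", round(cycles_to_ms(libopencm3_cycles) / cycles_to_ms(mbed_cycles), 2))
```

Since both builds use the same conversion and the same 216 MHz clock, the roughly 3x gap has to come from the code itself (linker placement, startup configuration), not from the measurement.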
tensorflowtensorflow | Optimized ARC kernels need to use the new TfLiteEvalTensor API | Bug | The TFLM team recently ported all kernels (except a handful of externally maintained optimized kernels) to the new API. The new API enables very low memory overhead for TFLM. The primary change is that the TfLiteTensor C struct is only available during a kernel's TfLiteRegistration prepare; those structs are served out of temporary memory, and all data is only available during the lifetime of that method. All TFLM kernels should request the TfLiteEvalTensor C struct during TfLiteRegistration eval calls. A sample change looks like this. This issue tracks updating the ARC kernels to this new API. dzakhar, jaccovg: PTAL — happy to review any PRs as they come in.
tensorflowtensorflow | Per-example gradients fail with LSTMs | Bug | System information: Colab with tf-nightly-gpu 2.4.0.dev20201007. Issue: computing per-example gradients via tf.vectorized_map with an LSTM model fails. It works if you set unroll=True for the LSTM. Standalone code to reproduce the issue: here is a colab gist with the error.
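For a model where the math is simple, what tf.vectorized_map is being asked to do here — one gradient per example instead of one summed gradient per batch — can be written out directly. A NumPy sketch for a linear model with squared error (purely illustrative; it does not touch the failing LSTM path):

```python
import numpy as np

def per_example_grads(w, X, y):
    """Gradient of (x.w - y)^2 w.r.t. w, computed separately per example.

    Equivalent to mapping the single-example gradient function over the
    batch, which is what tf.vectorized_map(grad_fn, batch) vectorizes.
    """
    residuals = X @ w - y                # shape (n,)
    return 2.0 * residuals[:, None] * X  # shape (n, d): one gradient row per example

w = np.array([1.0, -2.0])
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 0.0])

G = per_example_grads(w, X, y)
assert G.shape == (3, 2)

# Summing the per-example gradients recovers the usual batch gradient:
batch_grad = 2.0 * X.T @ (X @ w - y)
assert np.allclose(G.sum(axis=0), batch_grad)
```

The LSTM case fails only in the vectorized (non-unrolled) compilation of this same per-example mapping, which is why unroll=True sidesteps it.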
tensorflowtensorflow | The TARGET command-line option to the TFLM Makefile and the target-specific makefiles have specific naming requirements | Bug | With #43896 we are going to enforce that TARGET=blah will only include micro/tools/make/targets/blah_makefile.inc. At a minimum this means that checks such as (l2) will no longer be necessary. #43896 removed the checks for bluepill and apollo3evb. This issue will remain open until the checks are removed for cortex_m_gcc_generic_makefile.inc, stm32f4_makefile.inc, hexagon_makefile.inc and xtensa_hifimini_makefile.inc, but the other targets might need updates beyond simply removing the no-longer-necessary if statements. Tagging some of the maintainers of targets that I am aware of: yair-ehrenwald, dzakhar, jaccovg, mansnil.
tensorflowtensorflow | Builds on macOS are broken for TFLite Micro | Bug | Latest HEAD doesn't build on macOS:

make -f tensorflow/lite/micro/tools/make/Makefile test_micro_interpreter_test

g++ -std=c++11 -fno-rtti -fno-exceptions -fno-threadsafe-statics -fno-unwind-tables -ffunction-sections -fdata-sections -fmessage-length=0 -DTF_LITE_STATIC_MEMORY -O3 -Werror -Wsign-compare -Wdouble-promotion -Wshadow -Wunused-variable -Wmissing-field-initializers -Wunused-function -Wswitch -Wvla -Wall -Wextra -Wstrict-aliasing -Wno-unused-parameter -DTF_LITE_DISABLE_X86_NEON -I. -Itensorflow/lite/micro/tools/make/downloads -Itensorflow/lite/micro/tools/make/downloads/gemmlowp -Itensorflow/lite/micro/tools/make/downloads/flatbuffers/include -Itensorflow/lite/micro/tools/make/downloads/ruy -Itensorflow/lite/micro/tools/make/downloads/kissfft -o tensorflow/lite/micro/tools/make/gen/osx_x86_64/bin/micro_interpreter_test tensorflow/lite/micro/tools/make/gen/osx_x86_64/obj/tensorflow/lite/micro/micro_interpreter_test.o tensorflow/lite/micro/tools/make/gen/osx_x86_64/lib/libtensorflow-microlite.a -Wl,--fatal-warnings -Wl,--gc-sections -lm -framework Foundation -framework AudioToolbox

ld: unknown option: --fatal-warnings
clang: error: linker command failed with exit code 1 (use -v to see invocation)
gmake: *** [tensorflow/lite/micro/tools/make/Makefile:460: tensorflow/lite/micro/tools/make/gen/osx_x86_64/bin/micro_interpreter_test] Error 1

Looks like the build cleanup added this flag.
tensorflowtensorflow | Update Keras Conv2D layer docs to specify padding value | Bug | URL(s) with the issue: (link). Description of issue (what needs changing): specify what values are added as padding when the padding argument is set to "same" — is it zero values?
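The doc change requested here concerns what padding="same" actually inserts: rows/columns of padding values sized so that (at stride s) the output length is ceil(input/s). A sketch of that size calculation in plain Python — this mirrors the standard SAME-padding arithmetic; how an odd total is split between the two sides is an implementation detail not shown here:

```python
import math

def same_pad_amount(in_size, kernel, stride):
    """Total padding 'same' adds along one spatial dimension."""
    out_size = math.ceil(in_size / stride)
    return max((out_size - 1) * stride + kernel - in_size, 0)

# 5-wide input, 3-wide kernel, stride 1: pad_total = 2 (one value on each side).
assert same_pad_amount(5, 3, 1) == 2
# Stride 2: output is ceil(5/2) = 3, pad_total = (3-1)*2 + 3 - 5 = 2.
assert same_pad_amount(5, 3, 2) == 2
# A kernel of 1 needs no padding at all.
assert same_pad_amount(7, 1, 3) == 0
```

The open documentation question is precisely what fills those pad_total positions (zeros, as the issue suspects), which the formula alone does not answer.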
tensorflowtensorflow | Fold batchnorm into conv in per-tensor weight quantization | Bug | System information: OS platform and distribution: Linux Ubuntu 18.04; TensorFlow installed from: binary; TensorFlow version (or github SHA if from source): 2.3.1.

Command used to run the converter, or code if you're using the Python API: I took the Keras QAT tutorial and added a BatchNormalization layer in between the Conv2D and ReLU:

import tempfile
import os

import tensorflow as tf
from tensorflow import keras

mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images / 255.0
test_images = test_images / 255.0

model = keras.Sequential([
    keras.layers.InputLayer(input_shape=(28, 28)),
    keras.layers.Reshape(target_shape=(28, 28, 1)),
    keras.layers.Conv2D(filters=12, kernel_size=(3, 3)),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
    keras.layers.MaxPooling2D(pool_size=(2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(10)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=1, validation_split=0.1)

import tensorflow_model_optimization as tfmot

quantize_model = tfmot.quantization.keras.quantize_model
q_aware_model = quantize_model(model)
q_aware_model.compile(optimizer='adam',
                      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                      metrics=['accuracy'])

train_images_subset = train_images[0:1000]  # out of 60000
train_labels_subset = train_labels[0:1000]
q_aware_model.fit(train_images_subset, train_labels_subset,
                  batch_size=500, epochs=1, validation_split=0.1)

converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(quantized_tflite_model)

When I quantize the weights per channel, I can see (using Netron) that the batchnorm is folded into the Conv2D, as I expect it to be. When I change Default8BitConvWeightsQuantizer to use per-tensor quantization (flipping the per_axis flag and removing the shape argument in build), I see that the batchnorm is not folded. Is this the way it should be, or is it an issue?

The output from the converter invocation:

WARNING:tensorflow: From .../lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version. Instructions for updating: This property should not be used in TensorFlow 2.0, as updates are applied automatically.
2020-10-08 17:35:12.606042: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:tensorflow: From .../lib/python3.6/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version. Instructions for updating: This property should not be used in TensorFlow 2.0, as updates are applied automatically.
2020-10-08 17:35:13.543657: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 17:35:13.544090: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2020-10-08 17:35:13.544145: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-10-08 17:35:13.601942: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 17:35:13.602363: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7313c60 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-10-08 17:35:13.602374: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
2020-10-08 17:35:13.602525: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 17:35:13.602870: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
2020-10-08 17:35:13.603313: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-08 17:35:13.603317: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1753] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at for how to download and setup the required libraries for your platform. Skipping registering GPU devices...
2020-10-08 17:35:13.603326: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-08 17:35:13.603330: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0
2020-10-08 17:35:13.603333: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N
2020-10-08 17:35:13.604761: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:816] Optimization results for grappler item: graph_to_optimize
2020-10-08 17:35:13.604769: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2020-10-08 17:35:13.604772: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:818]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-10-08 17:35:13.650284: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:313] Ignored output_format.
2020-10-08 17:35:13.650302: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored drop_control_dependency.
2020-10-08 17:35:13.653368: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-08 17:35:13.653747: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
2020-10-08 17:35:13.654132: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-08 17:35:13.654135: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1753] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at for how to download and setup the required libraries for your platform. Skipping registering GPU devices...
2020-10-08 17:35:13.654145: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-08 17:35:13.654148: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0
2020-10-08 17:35:13.654151: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N

Process finished with exit code 0

Thanks!
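The fold this report asks about has a closed form: a BatchNorm with parameters (gamma, beta, mean, var) following a conv can be absorbed by rescaling each output channel's weights and bias. A NumPy sketch of that arithmetic — this is the standard textbook formula applied per output channel (which is why per-channel quantization makes the fold straightforward), not the converter's actual code:

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-3):
    """Fold y = gamma * (conv(x, W) + b - mean) / sqrt(var + eps) + beta
    into a single conv with adjusted weights and bias.

    W has shape (kh, kw, cin, cout); all BN parameters have shape (cout,).
    """
    scale = gamma / np.sqrt(var + eps)  # one scale per output channel
    W_folded = W * scale                # broadcasts over the last (cout) axis
    b_folded = (b - mean) * scale + beta
    return W_folded, b_folded

rng = np.random.default_rng(0)
kh, kw, cin, cout = 3, 3, 1, 12  # matches the Conv2D(12, (3, 3)) above
W = rng.standard_normal((kh, kw, cin, cout))
b = np.zeros(cout)
gamma, beta = rng.standard_normal(cout), rng.standard_normal(cout)
mean, var = rng.standard_normal(cout), rng.random(cout) + 0.5

# Check on a single spatial position, where conv reduces to a dot product:
x = rng.standard_normal((kh, kw, cin))
conv = np.tensordot(x, W, axes=([0, 1, 2], [0, 1, 2])) + b
bn_out = gamma * (conv - mean) / np.sqrt(var + 1e-3) + beta

Wf, bf = fold_bn_into_conv(W, b, gamma, beta, mean, var)
folded_out = np.tensordot(x, Wf, axes=([0, 1, 2], [0, 1, 2])) + bf
assert np.allclose(bn_out, folded_out)
```

Because the fold multiplies each output channel by its own scale, a per-tensor weight quantizer sees its single scale invalidated by the fold, which is one plausible reason the converter declines to fold in the per-tensor configuration.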
tensorflowtensorflow | TFLite: got error "Expected bias tensor to be a vector" when trying to convert and quantize a model | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 20.04. Mobile device (if the issue happens on a mobile device): N/A. TensorFlow installed from: binary (from pip). TensorFlow version: 2.3.1. Python version: 3.7.9. Bazel version (if compiled from source): N/A. GCC/compiler version (if compiled from source): N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A.

Describe the current behavior: TensorFlow Lite fails to convert an FC4 model when quantization is activated. I am getting this error when trying to perform full-integer quantization with mixed 16-bit activations and 8-bit weights (tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8), and also when trying to use the 8-bit quantization method (tf.lite.OpsSet.TFLITE_BUILTINS_INT8). I did several tests. First, it works if I convert a MobileNet v1 network, so the error depends on the network architecture. By dichotomy I found the layer which causes the error: I guess the error is raised when there is an Add (AddV2 in my case) operation right after a Conv2D operation, and the converter tries to merge the Add operation as a bias into the Conv2D. In my case the tensor dimension of the constant data to be added is (1, 1, 1, 64); maybe the converter expects (1, 64), which could explain the error raised. Please note that 8-bit quantization works when using TensorFlow 2.1.0, so it looks like a regression introduced between 2.1.0 and 2.3.1.

Describe the expected behavior: I expect the converter to successfully convert the model with the desired quantization.

Standalone code to reproduce the issue:

import tensorflow as tf
import tensorflow_datasets as tfds
import os

def representative_dataset_gen():
    ds = tfds.load('flic', shuffle_files=True, split='train')
    assert isinstance(ds, tf.data.Dataset)
    print(ds)
    num_calibration_steps = 10
    for _ in range(num_calibration_steps):
        example = ds.take(1)
        for i in example:
            image, name = i['image'], i['moviename']
            print(name)
            print(type(str(type(image))))
            image = tf.image.resize(image, size=(256, 256))
            image = tf.expand_dims(image, axis=0)
            yield [image]

# Convert ONNX model to tflite format: convert it first to a TensorFlow frozen graph
if not os.path.exists('convertedModel/tensorflow'):
    os.makedirs('convertedModel/tensorflow')

# Then convert it to tflite format
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    'pretrainedModels/freezed_pb_tensorflow/freezeGraph.pb',  # model file
    input_arrays=['fcn_1/alexnet/Mul'],  # name of input arrays as defined in the torch.onnx.export function before
    output_arrays=['fcn_1/Sum'],         # name of output arrays defined in the torch.onnx.export function before
    input_shapes={'fcn_1/alexnet/Mul': [1, 256, 256, 3]})
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]
converter.inference_input_type = tf.int16   # or tf.uint8
converter.inference_output_type = tf.int16  # or tf.uint8
converter.representative_dataset = tf.lite.RepresentativeDataset(representative_dataset_gen)
tf_lite_model = converter.convert()
# Save the converted model
open('fc4_quant.tflite', 'wb').write(tf_lite_model)

Here is the input model to convert: model_frozen.zip

Other info / logs: here is the trace I get:

Traceback (most recent call last):
  File "convert_tflite.py", line 61, in <module>
    tf_lite_model = converter.convert()
  File "/home/arnaud/anaconda3/envs/fc4_conv/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 1970, in convert
    return super(TFLiteConverter, self).convert()
  File "/home/arnaud/anaconda3/envs/fc4_conv/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 1339, in convert
    result = self._calibrate_quantize_model(result, **flags)
  File "/home/arnaud/anaconda3/envs/fc4_conv/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 452, in _calibrate_quantize_model
    inference_output_type, allow_float, activations_type)
  File "/home/arnaud/anaconda3/envs/fc4_conv/lib/python3.7/site-packages/tensorflow/lite/python/optimize/calibrator.py", line 98, in calibrate_and_quantize
    np.dtype(activations_type.as_numpy_dtype()).num)
RuntimeError: Expected bias tensor to be a vector.
tensorflowtensorflow | OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function | Bug |

import numpy as np
import os
import tensorflow as tf
from tensorflow import keras
import tensorflow.keras.models as models
import tensorflow.keras.layers as layers
import tensorflow.keras.optimizers as optimizers
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras.optimizers import *
from tensorflow.keras.callbacks import ModelCheckpoint, LearningRateScheduler
from tensorflow.keras import backend

def unet(pretrained_weights=None, input_size=(256, 256, 1)):
    inputs = keras.Input(shape=input_size)
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
    drop4 = Dropout(0.5)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
    drop5 = Dropout(0.5)(conv5)
    up6 = Conv2D(512, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(drop5))
    merge6 = concatenate([drop4, up6], axis=3)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge6)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
    up7 = Conv2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv6))
    merge7 = concatenate([conv3, up7], axis=3)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)
    up8 = Conv2D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv7))
    merge8 = concatenate([conv2, up8], axis=3)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)
    up9 = Conv2D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv8))
    merge9 = concatenate([conv1, up9], axis=3)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv9 = Conv2D(2, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv10 = Conv2D(1, 1, activation='sigmoid')(conv9)
    model = Model(inputs=inputs, outputs=conv10)

    def iou(y_pred, y_true):
        y_pred = tf.cast(y_pred > 0, dtype=tf.float32)
        i = tf.reduce_sum(y_true * y_pred)
        u = tf.reduce_sum(y_true + y_pred)
        return (i / u).item() if u > 0 else u.item()

    ssim1 = tf.image.ssim(inputs, conv10, max_val=255, filter_size=11, filter_sigma=1.5, k1=0.01, k2=0.03)
    model.compile(optimizer=Adam(lr=1e-4), loss=ssim1, metrics=['accuracy', iou])
    model.summary()
    if pretrained_weights:
        model.load_weights(pretrained_weights)
    return model

model = unet()
tensorflowtensorflow | AttributeError: 'Tensor' object has no attribute '_lazy_read' inside tf.while_loop containing tf.scatter_nd_update | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template)

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 16.04. Mobile device (if the issue happens on a mobile device): N/A. TensorFlow installed from: binary. TensorFlow version: v1.15.0-rc3-22-g590d6ee, 1.15.0. Python version: 3.6.10. Bazel version (if compiled from source): N/A. GCC/compiler version (if compiled from source): N/A. CUDA/cuDNN version: CUDNN_MAJOR 7, CUDNN_MINOR 6, CUDNN_PATCHLEVEL 0; CUDA compilation tools release 10.0, V10.0.130. GPU model and memory: ASUS Cerberus GTX 1070 Ti A8G, 8 GB.

You can collect some of this information using our environment-capture script. You can also obtain the TensorFlow version with: (1) TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)" (2) TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior: Implementing a simple tf.while_loop which contains a tf.scatter_nd_update call throws an error: AttributeError: 'Tensor' object has no attribute '_lazy_read'. The behaviour is present only in lazy mode (non-eager execution). It also does not appear when tf.scatter_nd_update is used outside of a tf.while_loop with a fixed j.

Describe the expected behavior: I should be able to implement this fixed-iterations loop without an error. In a similar issue the suggested solution was to convert the tensor to a variable, which does not work in my case though.

Standalone code to reproduce the issue:

import tensorflow as tf

ref = tf.Variable([[0, 1, 0, 2], [0, 1, 2, 2], [1, 2, 1, 3]], dtype=tf.int32)
true_array = tf.Variable([1, 1, 1, 1])
false_array = tf.Variable([1, 0, 1, 0])
num_iters = tf.Variable(3, dtype=tf.int32)  # 3

def body(ref, true_array, false_array, j, num_iters):
    sample = tf.cond(tf.equal(tf.reduce_sum(ref[j], axis=0), 1),
                     lambda: true_array,
                     lambda: false_array)
    ref = tf.scatter_nd_update(ref, [[j]], [sample])
    j = tf.add(j, 1)
    return ref, true_array, false_array, j, num_iters

cond = lambda ref, true_array, false_array, j, num_iters: tf.less(j, num_iters)
j = tf.Variable(0, dtype=tf.int32)  # tf.constant(0)
ref, true_array, false_array, j, num_iters = tf.while_loop(
    cond, body, [ref, true_array, false_array, j, num_iters])

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print('ref:', sess.run(ref))
    print('j:', sess.run(j))

Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter or any notebook. Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback; large logs and files should be attached.
tensorflowtensorflow | Regression in layer names for TF-op layers in current tf-nightly (2.4) | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template)

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. TensorFlow installed from: binary. TensorFlow version: 2.4.0-dev20201006. Python version: 3.7.

Describe the current behavior: In the latest tf-nightly releases, when attributing a name to a layer created by a TF operation, this naming does not seem to be effective anymore.

Describe the expected behavior: The naming of the layer shall work as expected, as in TF 2.3.

Standalone code to reproduce the issue:

import tensorflow as tf
print(tf.__version__)
test = tf.keras.Input(1, name='input1')
test2 = tf.identity(test, name='abcd')
print(test.name)
print(test2.name)

Other info / logs:
2.4.0-dev20201005
input1
tf.identity/Identity:0
tensorflowtensorflow | TF 2.3 converter.convert(): ValueError: Input 0 of node StatefulPartitionedCall/functional_1/resnet50/conv1_bn/AssignNewValue was passed float from func_StatefulPartitionedCall_input_5:0 incompatible with expected resource | Bug | System information: OS platform and distribution: Ubuntu 18.04, CUDA 10.1. TensorFlow installed from: binary. TensorFlow version: TF 2.3.

Here is the relevant part of my code:

export_path = './test_model'
tf.saved_model.save(model, export_path)
converter = tf.lite.TFLiteConverter.from_saved_model(export_path)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open('converted_model.tflite', 'wb').write(tflite_model)

Here is my model summary, which is a typical image-classification model using a pre-trained ResNet50 model for transfer learning:

Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 160, 160, 3)]     0
sequential (Sequential)      (None, 160, 160, 3)       0
tf_op_layer_RealDiv (TensorF (None, 160, 160, 3)       0
tf_op_layer_Sub (TensorFlowO (None, 160, 160, 3)       0
resnet50 (Functional)        (None, 5, 5, 2048)        23587712
global_average_pooling2d (Gl (None, 2048)              0
dropout (Dropout)            (None, 2048)              0
dense (Dense)                (None, 3)                 6147
=================================================================
Total params: 23,593,859
Trainable params: 23,540,739
Non-trainable params: 53,120

Here is the failure:

Traceback (most recent call last):
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/python/framework/importer.py", line 497, in _import_graph_def_internal
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node StatefulPartitionedCall/functional_1/resnet50/conv1_bn/AssignNewValue was passed float from func_StatefulPartitionedCall_input_5:0 incompatible with expected resource.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 321, in <module>
    main_train_lite()
  File "main.py", line 210, in main
    tflite_model = converter.convert()
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 1076, in convert
    return super(TFLiteConverterV2, self).convert()
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 878, in convert
    self._funcs[0], lower_control_flow=False)
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1109, in convert_variables_to_constants_v2_as_graph
    converted_input_indices)
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1001, in _construct_concrete_function
    new_output_names)
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 650, in function_from_graph_def
    wrapped_import = wrap_function(_imports_graph_def, [])
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 628, in wrap_function
    collections={}),
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 87, in __call__
    return self.call_with_variable_creator_scope(self._fn)(*args, **kwargs)
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 93, in wrapped
    return fn(*args, **kwargs)
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py", line 648, in _imports_graph_def
    importer.import_graph_def(graph_def, name="")
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/python/framework/importer.py", line 405, in import_graph_def
    producer_op_list=producer_op_list)
  File "/home/ktatc/anaconda3/envs/HHproject/lib/python3.7/site-packages/tensorflow/python/framework/importer.py", line 501, in _import_graph_def_internal
    raise ValueError(str(e))
ValueError: Input 0 of node StatefulPartitionedCall/functional_1/resnet50/conv1_bn/AssignNewValue was passed float from func_StatefulPartitionedCall_input_5:0 incompatible with expected resource.

How can I solve it? I don't know why this kind of error happens. Please let me know in detail. Thanks in advance.
tensorflowtensorflow | Keras' way of auto-naming layers triggers a name issue when loading an existing model and slicing its output | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template)

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Windows 10. TensorFlow installed from: binary. TensorFlow version: 2.3.1. Python version: 3.7.

Describe the current behavior: When adding a strided-slice layer using numpy-style syntax, Keras auto-names this layer "tf_op_layer_strided_slice_i", incrementing i every time such a layer is created. However, when opening an already existing model without creating any layer in the session, i restarts at 0. Hence, when adding a strided-slice operation at the end of the model, an error is triggered, as the model now has 2 layers with the same name "tf_op_layer_strided_slice". PS: I cannot use the explicit tf.strided_slice layer with naming, as it does not have the same shape-inference capability as numpy-style slicing.

Describe the expected behavior: Auto-naming shall carry on incrementing from all known layers in the session, whether they were created or loaded from a model file.

Standalone code to reproduce the issue: run this code first with initial_building = True, then False.

import tensorflow as tf

initial_building = False
model_loading = True

if initial_building:
    layer1 = tf.keras.Input(1)
    layer2 = tf.keras.layers.Dense(1)
    model_output = layer2(layer1)[:, :1]
    model = tf.keras.Model(layer1, model_output)
    model.summary()
    model.save('testModel')

if model_loading:
    model = tf.keras.models.load_model('testModel')
    model = tf.keras.Model(model.input, model.layers[-1].output[:, :1])

Other info / logs:
Exception has occurred: ValueError: The name "tf_op_layer_strided_slice" is used 2 times in the model. All layer names should be unique.
  File "c:\test_v2.py", line 19, in <module>
    model = tf.keras.Model(model.input, model.layers[-1].output[:, :1])
tensorflowtensorflow | Exceptions not raised because the `raise` keyword is missing in a few places | Bug | Hello, while analyzing TensorFlow on SonarCloud I saw what look like two errors, in tensorflow/python/tpu/tpu_embedding.py#L1639 and tensorflow/python/keras/losses.py#L183. You can see both issues on SonarCloud (here and here). The problem is pretty simple: exceptions are created but not raised because the `raise` keyword is missing. This is a pretty common mistake in Python. In case you have any questions or suggestions, or if you see a false positive on SonarCloud, you can reach out on the SonarSource community forum. A few notes in case you want to use SonarCloud: I am currently testing the Python analyzer, so the project on SonarCloud will only show Python issues, but SonarCloud can also analyze C/C++ code and other languages. SonarCloud can also import pylint issues in case you want to use a rule SonarCloud does not already provide; note however that pylint rules and SonarCloud rules are implemented differently, so you might see new issues with SonarCloud, or fewer issues in some cases (we try to avoid false positives as much as possible). It is free for open-source projects.
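The bug class this report describes can be shown with a minimal, self-contained sketch; the function names and error message below are illustrative, not the actual TensorFlow code at the flagged locations:

```python
# Hypothetical illustration of the missing-`raise` mistake: an exception
# object is constructed and then immediately discarded, so the caller
# never sees the error and invalid input passes the check silently.
def check_positive_buggy(value):
    if value < 0:
        ValueError("value must be non-negative")  # created, never raised
    return value

def check_positive_fixed(value):
    if value < 0:
        raise ValueError("value must be non-negative")  # actually raised
    return value

print(check_positive_buggy(-1))   # -1 is silently accepted
try:
    check_positive_fixed(-1)
except ValueError as e:
    print("caught:", e)
```

Because constructing an exception is a perfectly valid expression statement, Python gives no warning here, which is why static analyzers flag the pattern.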
tensorflowtensorflow | Duplicated condition in is_square check | Bug | Hello, while analyzing TensorFlow on SonarCloud I saw what looks like an error in tensorflow/python/ops/linalg/registrations_util.py#L59. You can see the issue on SonarCloud (here). The condition `operator_a.is_square is not None and operator_a.is_square is not None` doesn't make sense, as it checks the same thing twice. I guess what the developer intended was `operator_a.is_square is not None and operator_b.is_square is not None`, but I can't be sure as I don't know this code base. In case you have any questions or suggestions, or if you see a false positive on SonarCloud, you can reach out on the SonarSource community forum. A few notes in case you want to use SonarCloud: I am currently testing the Python analyzer, so the project on SonarCloud will only show Python issues, but SonarCloud can also analyze C/C++ code and other languages. SonarCloud can also import pylint issues in case you want to use a rule SonarCloud does not already provide; note however that pylint rules and SonarCloud rules are implemented differently, so you might see new issues with SonarCloud, or fewer issues in some cases (we try to avoid false positives as much as possible). It is free for open-source projects.
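A minimal sketch of why the duplicated operand matters; the `Op` class and function names below are illustrative stand-ins, not TensorFlow's actual LinearOperator registration helper:

```python
class Op:
    """Stand-in for an operator with a tri-state squareness hint."""
    def __init__(self, is_square):
        self.is_square = is_square  # True, False, or None (unknown)

def combined_is_square_buggy(operator_a, operator_b):
    # operator_a is checked twice; operator_b's hint is never consulted,
    # so an unknown operator_b can still produce a "definite" answer.
    if operator_a.is_square is not None and operator_a.is_square is not None:
        return operator_a.is_square and operator_b.is_square
    return None

def combined_is_square_fixed(operator_a, operator_b):
    if operator_a.is_square is not None and operator_b.is_square is not None:
        return operator_a.is_square and operator_b.is_square
    return None

a, b = Op(False), Op(None)
print(combined_is_square_buggy(a, b))  # False -- wrongly decided despite b being unknown
print(combined_is_square_fixed(a, b))  # None -- correctly undetermined
```

The duplicated check makes the guard depend only on `operator_a`, so the buggy version can return a definite False even when `operator_b`'s squareness is unknown; the fixed guard returns None in that case.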
tensorflowtensorflow | ValueError: You are trying to load a weight file containing 0 layers into a model with 19 layers | Bug | Hey, this is my code; I still have the same error:

from __future__ import absolute_import
from __future__ import print_function
import os
import numpy as np
from keras.layers import Input
from keras.layers.core import Activation, Flatten, Reshape
from keras.layers.convolutional import Convolution2D, MaxPooling2D, UpSampling2D, Conv2D
from keras.layers.normalization import BatchNormalization
from keras.models import Model
from keras.utils import np_utils
from keras.applications import imagenet_utils
from architecture import Architecture

class Ronneberger(Architecture):
    @staticmethod
    def get_model(config, crossval_id=None):
        assert config.arch.num_dimensions == 2
        assert config.arch.patch_shape[0] % 16 == 0, 'invalid patch shape'
        num_modalities = len(config.dataset.modalities)
        input_layer_shape = (num_modalities,) + config.arch.patch_shape[:config.arch.num_dimensions]
        output_layer_shape = (config.train.num_classes,
                              np.prod(config.arch.patch_shape[:config.arch.num_dimensions]))
        model = generate_unet_model(config.arch.num_dimensions, config.train.num_classes,
                                    input_layer_shape, output_layer_shape,
                                    config.train.activation, downsize_factor=2)
        return model

def generate_net_model(dimension, num_classes, input_shape, output_shape,
                       activation, downsize_factor=2):
    img_input = Input(shape=input_shape)
    x = img_input
    # encoder
    x = Conv2D(64, (3, 3), border_mode='same')(img_input)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Convolution2D(128, (3, 3), border_mode='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Convolution2D(256, (3, 3), border_mode='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Convolution2D(512, (3, 3), border_mode='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Convolution2D(1024, (3, 3), border_mode='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    # decoder
    x = Convolution2D(512, (2, 2), border_mode='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = UpSampling2D(size=(2, 2))(x)
    x = Convolution2D(256, (2, 2), border_mode='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = UpSampling2D(size=(2, 2))(x)
    x = Convolution2D(128, (2, 2), border_mode='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = UpSampling2D(size=(2, 2))(x)
    x = Convolution2D(64, (2, 2), border_mode='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Convolution2D(num_classes, (1, 1), border_mode='valid')(x)
    print(img_input.shape)
    x = Reshape((48 * 48, num_classes))(x)
    x = Activation('softmax')(x)
    model = Model(img_input, x)
    return model

And the error is:

  in load_weights_from_hdf5_group
    str(len(filtered_layers)) + ' layers.')
ValueError: You are trying to load a weight file containing 0 layers into a model with 19 layers.
tensorflowtensorflow | MLIR TFLite contains an unused file | Bug | tensorflow/compiler/mlir/lite/transforms/load_quantization_recipe.cc is exercised via "tf-opt -allow-unregistered-dialect -tfl-load-recipe %s | FileCheck %s". However, according to this commit, -allow-unregistered-dialect was disabled, so the only way to invoke the file is deprecated. Hence tensorflow/compiler/mlir/lite/transforms/load_quantization_recipe.cc can be removed, as it is not involved in any MLIR TFL pass.
tensorflowtensorflow | Trivial example does not save/restore weights; prints warnings from the TensorFlow Keras implementation | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. (tag: bug_template)

System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS 10.15.7 / Colab. TensorFlow installed from: pip / Colab. TensorFlow version: 2.3.1 (macOS), v2.3.0-0-gb36436b087 2.3.0 (Colab). Python version: 3.8.5 (macOS), 3.6.9 (Colab).

Describe the current behavior: Trained weights are not restored, and two warnings about the TensorFlow implementation are printed:

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version. Instructions for updating: This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version. Instructions for updating: This property should not be used in TensorFlow 2.0, as updates are applied automatically.

Describe the expected behavior: Weights are restored, test accuracy is consistent with the saved model, and no warnings are printed. I found several closed bug reports from others that look like the same problem; the lack of saving and the bogus warnings go back to the 2.0.0 days, and it has clearly not been fixed.

Standalone code to reproduce the issue:

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=8, batch_size=128, validation_split=0.2)
test_loss, test_acc = model.evaluate(x_test, y_test)
print('model test_acc:', test_acc)

model.save('my_save_path')
saved_model = tf.keras.models.load_model('my_save_path')
test_loss, test_acc = saved_model.evaluate(x_test, y_test)
print('saved model test_acc:', test_acc)
tensorflowtensorflow | Wrong paragraph order in tutorial | Bug | URL(s) with the issue: Description of issue (what needs changing): When you open the above tutorial, the description of the dataset (Fashion MNIST) is placed at the bottom of the page. However, in the original .ipynb file it is in the first section, "Import the Fashion MNIST dataset". It seems that the order of the paragraphs was wrongly rearranged when building the web page.
tensorflowtensorflow | create_segmentation | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): Clear description: for example, why should someone use this method? How is it useful? Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined? Usage example: is there a usage example? See the API guide on how to write testable usage examples. Request visuals, if applicable: are there currently visuals? If not, will it clarify the content? Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.
tensorflowtensorflow | Fatal Exception: java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: OpenCL library not loaded - dlopen failed: library "libOpenCL-pixel.so" not found | Bug | System information: mobile device: Android, across multiple Android versions (6, 7, 8).

Describe the current behavior: invoking org.tensorflow.lite.Interpreter causes a crash in some cases.

Describe the expected behavior: should not crash.

Standalone code to reproduce the issue: I am seeing this on a crash-analysis tool; since it is not reproducible on my device, I am not able to produce standalone code.

Other info / logs:

Fatal Exception: java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: OpenCL library not loaded - dlopen failed: library "libOpenCL-pixel.so" not found
Falling back to OpenGL
TfLiteGpuDelegate Init: No EGL error, but eglChooseConfig failed
TfLiteGpuDelegate Prepare: delegate is not initialized
Node number 31 (TfLiteGpuDelegateV2) failed to prepare.
Restored previous execution plan after delegate application failure.
    at org.tensorflow.lite.NativeInterpreterWrapper.applyDelegates(NativeInterpreterWrapper.java)
    at org.tensorflow.lite.NativeInterpreterWrapper.init(NativeInterpreterWrapper.java:85)
    at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:61)
    at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:224)
    at a.b.c.data.Posenet.getInterpreter(Posenet.java:184)
    at a.b.c.data.Posenet.estimateSinglePose(Posenet.java:293)
    at a.b.c.call.ImageAnalyser.processImage(ImageAnalyser.java:157)
    at a.b.c.call.ImageAnalyser.access$processImage(ImageAnalyser.java:26)
    at a.b.c.call.ImageAnalyser$processImageForAnalysis$2.invokeSuspend(ImageAnalyser.java:127)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(BaseContinuationImpl.java:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.java:56)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
    at java.lang.Thread.run(Thread.java:764)
tensorflowtensorflow | Documentation example for tf.keras.utils.Sequence is incorrect: fencepost error in example code | Bug | In the example generator there is code to return a batch:

def __getitem__(self, idx):
    batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
    batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]

However, this returns batch_size + 1 results. For example, if batch_size = 1, requesting idx = 0 returns items of index 0 and 1 -- two items. This will cause an array overrun when the batch size is an even multiple of the data size. The correct code is:

def __getitem__(self, idx):
    batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size - 1]
    batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size - 1]

I have seen the incorrect code copied all over the place.
tensorflowtensorflow | keras.backend.ones_like with Lambda is not serializable | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Databricks Runtime 7.3. TensorFlow installed from: binary. TensorFlow version: 2.3. Python version: 3. CUDA/cuDNN version: 10.1. GPU model and memory: AWS p3.xlarge.

Describe the current behavior: wrapping tf.keras.backend.ones_like in a tf.keras.layers.Lambda fails serialization. The following code creates the model that fails to serialize:

x = keras.Input(shape=(1,), name='x')
ones_like_layer = keras.layers.Lambda(K.ones_like, name='ones_like')
logits = keras.layers.Dense(1, activation='sigmoid')
model = keras.Sequential([x, ones_like_layer, logits], name='ones_like_model')

Error:

TypeError                                 Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py in wrapper(*args, **kwargs)
    200   try:
    201     return target(*args, **kwargs)
    202   except (TypeError, ValueError):
TypeError: 'str' object is not callable

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
[11 frames]
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py in wrapper(*args, **kwargs)
    203     # Note: convert_to_eager_tensor currently raises a ValueError, not a
    204     # TypeError, when given unexpected types.  So we need to catch both.
    205     result = dispatch(wrapper, args, kwargs)
    206     if result is not OpDispatcher.NOT_SUPPORTED:
    207       return result
TypeError: 'module' object is not callable

This happens on TF 2.3 and tf-nightly (see the linked report).

Describe the expected behavior: the model should be serializable.

Standalone code to reproduce the issue: see above.

Other info / logs: workaround: instead of using the Lambda, just call ones_like directly. This works but leads to the model being less interpretable, and it requires using the functional model. See also issuecomment-698918718.
tensorflowtensorflow | person_detection benchmark does not build for stm32f4 | Bug | TensorFlow Micro system information:
- TensorFlow installed from (source or binary): source
- TensorFlow version (commit SHA if source): #43509
- Target platform (e.g., Arm Mbed OS, Arduino Nano 33, etc.): stm32f4

After #43509 was merged, removing the exclusion for the person_detection and person_detection_experimental benchmarks, running

```
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=stm32f4 TAGS=cmsis-nn person_detection_benchmark
```

gives the following error:

```
arm-none-eabi/bin/ld: tensorflow/lite/micro/tools/make/gen/stm32f4_cortex-m4/bin/person_detection_benchmark section `.rodata' will not fit in region `FLASH'
arm-none-eabi/bin/ld: tensorflow/lite/micro/tools/make/gen/stm32f4_cortex-m4/bin/person_detection_benchmark section `.bss' will not fit in region `RAM'
arm-none-eabi/bin/ld: region `RAM' overflowed by 72568 bytes
arm-none-eabi/bin/ld: region `FLASH' overflowed by 158376 bytes
collect2: error: ld returned 1 exit status
make: *** [tensorflow/lite/micro/benchmarks/Makefile.inc:31: tensorflow/lite/micro/tools/make/gen/stm32f4_cortex-m4/bin/person_detection_benchmark] Error 1
```

The easy fix would be to increase the numbers here (L34–L37), but I will let the CMSIS-NN team weigh in on this.
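The linker output above pins down how much each memory region would minimally need to grow. A quick back-of-the-envelope computation (the rounding granularity is an assumption for illustration; the actual region sizes live in the linker script referenced above):

```python
# Overflow figures reported by the linker error above.
ram_overflow_bytes = 72568
flash_overflow_bytes = 158376

def needed_growth_kib(overflow_bytes):
    """Smallest whole-KiB growth that covers the reported overflow."""
    return -(-overflow_bytes // 1024)   # ceiling division

print("RAM must grow by at least", needed_growth_kib(ram_overflow_bytes), "KiB")
print("FLASH must grow by at least", needed_growth_kib(flash_overflow_bytes), "KiB")
```

So bumping the RAM region by roughly 71 KiB and the FLASH region by roughly 155 KiB would be the minimum to make the image fit, assuming nothing else changes.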
tensorflowtensorflow | Executing vectorized_map in batches triggers retraces | Bug | System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Docker Hub container, latest digest c57fb9628d80
- TensorFlow version: tf.version.GIT_VERSION v2.3.0-54-gfcc4b966f1, tf.version.VERSION 2.3.1
- Python version: 3.6.9

Describe the current behavior: repeatedly calling vectorized_map on the same function with parameters of the same shape and dtype triggers retraces. It seems that vectorized_map allocates memory to run all iterations simultaneously, and this was causing OOM; therefore I split the execution into batches to reduce the memory allocation of the function. If there is a better way to do this, please let me know.

Describe the expected behavior: no retracing.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import time

print("tf.version.GIT_VERSION: {}".format(tf.version.GIT_VERSION))
print("tf.version.VERSION: {}".format(tf.version.VERSION))

def jvp(f, primals, tangents):
    with tf.autodiff.ForwardAccumulator(primals, tangents) as acc:
        primals_out = f(primals)
    return primals_out, acc.jvp(
        primals_out, unconnected_gradients=tf.UnconnectedGradients.ZERO)

def f(x, z, t, c, v1, v2, v3, v4):
    p = tf.concat([x, z, t], axis=1)
    pe = p[None]
    ce = tf.transpose(a=c[None], perm=[2, 1, 0])
    d = ce - pe
    r = tf.reduce_sum(input_tensor=tf.square(d), axis=1)
    g = tf.exp(-r / 2)
    p = tf.reduce_sum(input_tensor=g * v1, axis=1, keepdims=True)
    b = tf.reduce_sum(input_tensor=g * v2, axis=1, keepdims=True)
    u = tf.reduce_sum(input_tensor=g * v3, axis=1, keepdims=True)
    w = tf.reduce_sum(input_tensor=g * v4, axis=1, keepdims=True)
    return p, b, u, w

def g(x, z, t, c, v1, v2, v3, v4):
    tf.print("tf print shape:", [tf.shape(w) for w in (x, z, t, c, v1, v2, v3, v4)])
    tf.print("tf print dtype:", [w.dtype for w in (x, z, t, c, v1, v2, v3, v4)])
    print("py print x shape:", tf.shape(x))
    fx = lambda xi: f(xi, z, t, c, v1, v2, v3, v4)
    with tf.autodiff.ForwardAccumulator(primals=x, tangents=tf.ones_like(x)) as fwd_outer:
        dpdx, dbdx, dudx, dwdx = jvp(fx, x, tf.ones_like(x))[1]
    d2bd2x, d2ud2x, d2wd2x = fwd_outer.jvp(
        (dbdx, dudx, dwdx), unconnected_gradients=tf.UnconnectedGradients.ZERO)
    fz = lambda zi: f(x, zi, t, c, v1, v2, v3, v4)
    with tf.autodiff.ForwardAccumulator(primals=z, tangents=tf.ones_like(z)) as fwd_outer:
        dpdz, dbdz, dudz, dwdz = jvp(fz, z, tf.ones_like(z))[1]
    d2bd2z, d2ud2z, d2wd2z = fwd_outer.jvp(
        (dbdz, dudz, dwdz), unconnected_gradients=tf.UnconnectedGradients.ZERO)
    ft = lambda ti: f(x, z, ti, c, v1, v2, v3, v4)
    (p, b, u, w), (dpdt, dbdt, dudt, dwdt) = jvp(ft, t, tf.ones_like(t))
    return (dudx, dudz, dudt, dwdx, dwdz, dwdt, dbdx, dbdz, dbdt, dpdx, dpdz,
            d2ud2x, d2ud2z, d2wd2x, d2wd2z, d2bd2x, d2bd2z)

n = 7500
x = tf.random.uniform((n, 1), dtype=tf.float64)
z = tf.random.uniform((n, 1), dtype=tf.float64)
t = tf.random.uniform((n, 1), dtype=tf.float64)
c = tf.random.uniform((n, 3), dtype=tf.float64)
v1 = tf.random.uniform((1, n), dtype=tf.float64)
v2 = tf.random.uniform((1, n), dtype=tf.float64)
v3 = tf.random.uniform((1, n), dtype=tf.float64)
v4 = tf.random.uniform((1, n), dtype=tf.float64)

def batch_execution(fb, args, batch_size):
    batch_size = tf.cast(batch_size, tf.int32)
    n = tf.shape(args[0])[0]
    i0 = tf.constant(0, dtype=tf.int32)
    souts = []
    while tf.less_equal(i0 + batch_size, n):
        tf.print("tf print: batch iteration, batch_size = {}".format(batch_size))
        il = i0 + batch_size
        bsargs = [a[i0:il] for a in args]
        bouts = fb(bsargs)
        souts.append(bouts)
        i0 = il
    if tf.less(i0, n):
        tf.print("tf print: last batch iteration, batch_size = {}".format(n - i0))
        bsargs = [a[i0:n] for a in args]
        bouts = fb(bsargs)
        souts.append(bouts)
    souts = [tf.concat([o[i] for o in souts], axis=0) for i in range(len(souts[0]))]
    return souts

def shaped_vectorized_map(fv, args, axis=2):
    sargs = [tf.expand_dims(a, axis=axis) for a in args]
    outs = batch_execution(
        lambda args_: tf.vectorized_map(fv, args_, fallback_to_while_loop=False),
        sargs, batch_size=1000)
    souts = [tf.squeeze(o, axis=axis) for o in outs]
    return souts

start = time.clock()
e2v = shaped_vectorized_map(lambda args_: g(*args_, c, v1, v2, v3, v4), (x, z, t))
delta = time.clock() - start
print("run with batched vectorized_map takes {:f} seconds".format(delta))
```

Other info/logs:

```
tf.version.GIT_VERSION: v2.3.0-54-gfcc4b966f1
tf.version.VERSION: 2.3.1
tf print: batch iteration, batch_size = 1000
tf print shape: [1 1] [1 1] [1 1] [7500 3] [1 7500] [1 7500] [1 7500] [1 7500]
tf print dtype: tf.float64 tf.float64 tf.float64 tf.float64 tf.float64 tf.float64 tf.float64 tf.float64
...
WARNING:tensorflow: 5 out of the last 5 calls to <function f at 0x7f54481b3620> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to the "python or tensor args" guide for more details.
```

The same shape/dtype prints and retracing warnings repeat for each subsequent batch iteration (6 out of 6, 7 out of 7 calls), and again for the final partial batch:

```
tf print: last batch iteration, batch_size = 500
tf print shape: [1 1] [1 1] [1 1] [7500 3] [1 7500] [1 7500] [1 7500] [1 7500]
tf print dtype: tf.float64 tf.float64 tf.float64 tf.float64 tf.float64 tf.float64 tf.float64 tf.float64
WARNING:tensorflow: 8 out of the last 8 calls to <function f at 0x7f54400c17b8> triggered tf.function retracing. (...)
run with batched vectorized_map takes 119.087042 seconds
```

Related issues: #42835, #43252