tensorflow/tensorflow
Bug in the "Using the SavedModel format" guide
Bug

Please go to Stack Overflow for help and support. If you open a GitHub issue, here is our policy: 1. It must be a bug, a feature request, or a significant problem with the documentation (for small doc fixes please send a PR instead). 2. The form below must be filled out. 3. It shouldn't be a TensorBoard issue (those go here). Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.

**System information**: Ubuntu 18.04 system with two GPUs, TensorFlow 2.2.

**Describe the problem**: The guide, near the top, includes the following:

```python
physical_devices = tf.config.experimental.list_physical_devices('GPU')
if physical_devices:
  tf.config.experimental.set_memory_growth(physical_devices[0], True)
```

On a multi-GPU system this will lead to problems later, because memory growth will only be managed on one of the GPUs. It should instead be:

```python
physical_devices = tf.config.experimental.list_physical_devices('GPU')
for device in physical_devices:
  tf.config.experimental.set_memory_growth(device, True)
```

I am aware that this is really a minor documentation issue, and I would prefer to simply upload the fix myself. Unfortunately, I am not willing to sign the Contributor License Agreement. I have many friends at Google with whom I have many technical discussions, and I don't want those discussions to automatically grant licenses to Google if I forget to say "not a contribution" at the beginning of their source code logs.
tensorflow/tensorflow
TensorFlow Lite for Microcontrollers SIGABRTs with a MobileNetV2 alpha=0.1 model
Bug

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): macOS 10.15.5, Linux
- GCC/compiler version (if compiling from source): Apple clang version 11.0.3 (clang-1103.0.32.62)

**Describe the current behavior**: I am using TensorFlow Lite for Microcontrollers at commit 4f69f62c61ecf3cd23286324af62d00643186ec2. I've trained two MobileNetV2 models in Keras with 48x48 input size and a single input channel, then converted them to int8 quantized. I am attempting to run both models using TensorFlow Lite for Microcontrollers on x86, built with clang on macOS and with gcc on Ubuntu. The first model has a MobileNetV2 filter scaling factor (alpha) of 0.35; this model runs perfectly. The second model has a scaling factor of 0.1; this model SIGABRTs during the invoke() call. Strangely, both models run perfectly when executed on the OpenMV H7 (Arm Cortex-M7); the small model runs perfectly on the H7. It might be worth noting that on the OpenMV device the model is stored in dynamic memory; that said, I've tried declaring the model without `static` on x86 and it had no impact. I've attached zips containing both model files plus an example program that exhibits the SIGABRT. To build and run the example with an empty input, use the provided Makefile (tflite build, edge-impulse-standalone). Within the example code, the call to invoke() is on line 260 of edge-impulse-sdk/classifier/ei_run_classifier.h. To switch to the 0.35 model, which doesn't SIGABRT, replace tflite-model/model_parameters and the edge-impulse-sdk directory with the versions contained within 0.35-grayscale.zip.

**Describe the expected behavior**: The alpha=0.1 model should run successfully on x86, the same as the 0.35 one does.

**Standalone code to reproduce the issue**: Example code here: example-standalone-inference.zip. The lite model files are located here: model.zip.
tensorflow/tensorflow
Can't predict bounding boxes with my own trained model, even though TensorFlow reports reduced loss during training
Bug

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows 10 Home
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: No
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.14.0
- Python version: 3.6.7
- Bazel version (if compiling from source): not using Bazel
- GCC/compiler version (if compiling from source): not using GCC
- CUDA/cuDNN version: CUDA/cuDNN 10.0
- GPU model and memory: NVIDIA GeForce GTX 1080, 6 GB
- Exact command to reproduce:

```python
vis_util.visualize_boxes_and_labels_on_image_array(
    image,
    np.squeeze(boxes),
    np.squeeze(classes).astype(np.int32),
    np.squeeze(scores),
    category_index,
    use_normalized_coordinates=True,
    line_thickness=8,
    min_score_thresh=0.20)
```

**Describe the current behavior**: When trying to predict bounding boxes with my own trained model with the Object Detection API, it won't predict any boxes and returns the original image. The problem is not with my prediction code, as I use the same code as in the tutorial, and with a different model it returns the predicted bounding boxes.

**Describe the expected behavior**: I don't know why this is happening, as the TensorBoard reports say that my loss is reducing as I train the model, but when I actually use it to predict bounding boxes in the image, it won't work.

**Standalone code to reproduce the issue**: The model; the prediction code:

```python
import os
import cv2
import numpy as np
import tensorflow as tf
import sys

from utils import label_map_util
from utils import visualization_utils as vis_util

MODEL_NAME = 'league_model'
IMAGE_NAME = 'test1.jpg'

# Grab path to current working directory
CWD_PATH = os.getcwd()

# Path to frozen detection graph .pb file, which contains the model
# that is used for object detection.
PATH_TO_CKPT = os.path.join(CWD_PATH, MODEL_NAME, 'frozen_inference_graph.pb')

# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH, 'annotations', 'label_map.pbtxt')

# Path to image
PATH_TO_IMAGE = os.path.join(CWD_PATH, IMAGE_NAME)

# Number of classes the object detector can identify
NUM_CLASSES = 2

# Load the label map. Label maps map indices to category names, so that when
# our convolution network predicts 5, we know that this corresponds to 'king'.
# Here we use internal utility functions, but anything that returns a
# dictionary mapping integers to appropriate string labels would be fine.
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(
    label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

# Load the TensorFlow model into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

    sess = tf.Session(graph=detection_graph)

# Define input and output tensors (i.e. data) for the object detection classifier

# Input tensor is the image
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')

# Output tensors are the detection boxes, scores, and classes.
# Each box represents a part of the image where a particular object was detected.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')

# Each score represents the level of confidence for each of the objects.
# The score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')

# Number of objects detected
num_detections = detection_graph.get_tensor_by_name('num_detections:0')

# Load image using OpenCV and expand image dimensions to have shape
# [1, None, None, 3], i.e. a single-column array where each item in the
# column has the pixel RGB value.
image = cv2.imread(PATH_TO_IMAGE)
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image_expanded = np.expand_dims(image_rgb, axis=0)

# Perform the actual detection by running the model with the image as input
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_expanded})

# Draw the results of the detection (aka 'visualize the results')
vis_util.visualize_boxes_and_labels_on_image_array(
    image,
    np.squeeze(boxes),
    np.squeeze(classes).astype(np.int32),
    np.squeeze(scores),
    category_index,
    use_normalized_coordinates=True,
    line_thickness=8,
    min_score_thresh=0.20)

# All the results have been drawn on the image. Now display the image.
cv2.imshow('Object detector', image)

# Press any key to close the image
cv2.waitKey(0)

# Clean up
cv2.destroyAllWindows()
```
tensorflow/tensorflow
Possibly incorrect Brier score calculation
Bug

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- TensorFlow version (use command below): 1.15.0
- TensorFlow Probability version: 0.8.0

**Describe the current behavior**: I believe that the way the Brier score calculation is implemented in `tensorflow_probability.stats.brier_score` is incorrect. The formula used is `sum_i p_i^2 - 2*p_k`, where `p` is the probability vector over all discrete outcomes and `k` is the realized outcome. (Note: this gives the element-wise Brier score, and to get the actual Brier score across the dataset, `reduce_mean` must be called.)

**Describe the expected behavior**: This formula does not match the reference cited, nor Wikipedia, nor the definition that sklearn uses. Also, it is stated in the TensorFlow Probability docs that the Brier score can be negative, which is not true. The reference cited is Brier's original paper (found here), which states the formula as

    BS = (1/N) * sum_{i=1}^{N} sum_{j=1}^{R} (f_ij - E_ij)^2

where R is the number of possible classes, N is the number of forecasts, f_ij is the probability forecast of class j for instance i, and E_ij is 0 or 1 depending on whether the event occurred in class j or not. This is the same formula that is used in sklearn and Wikipedia. By definition, this score cannot be negative. If we consider the element-wise Brier score from this formula, it is `sum_j (p_j - e_j)^2`, which is not equivalent to `sum_i p_i^2 - 2*p_k`. TensorFlow's Brier score formula should be corrected to match the commonly accepted definition.
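The discrepancy between the two element-wise formulas is easy to check numerically. The sketch below is plain Python written for this report (the function names `brier_elementwise` and `tfp_style_score` are illustrative, not the TFP API); it shows that the implemented expression equals the textbook score minus a constant 1, which is exactly why it can go negative:

```python
def brier_elementwise(p, k):
    """Textbook element-wise Brier score: sum_j (p_j - e_j)^2,
    where e is the one-hot vector for the realized outcome k."""
    return sum((pj - (1.0 if j == k else 0.0)) ** 2 for j, pj in enumerate(p))

def tfp_style_score(p, k):
    """The expression this report says TFP implements: sum_j p_j^2 - 2*p_k."""
    return sum(pj * pj for pj in p) - 2.0 * p[k]

p = [0.3, 0.7]  # forecast probabilities over two classes
k = 1           # realized outcome

print(brier_elementwise(p, k))  # ~0.18, non-negative by construction
print(tfp_style_score(p, k))    # ~-0.82, negative: off by the constant sum_j e_j^2 = 1
```

Since `sum_j (p_j - e_j)^2 = sum_j p_j^2 - 2*p_k + 1` for a one-hot `e`, the two expressions always differ by exactly 1, so the implemented version ranges over [-1, 1] instead of [0, 2].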
tensorflow/tensorflow
Typo in optimizer_v2.py
Bug

L285: I was reading through the code to learn how to create a custom optimizer. I think the line should be: "This class is stateful and thread compatible."
tensorflow/tensorflow
The first paragraph of the documentation for tf.linalg.inv is cut in half
Bug

**URL(s) with the issue**: Please provide a link to the documentation entry. For example: [link]

**Description of issue (what needs changing)**: The first paragraph says "Computes the inverse of one or more square invertible matrices or their" instead of "Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes)". (image attached)
tensorflow/tensorflow
tf.lite.TFLiteConverter produces inconsistent converted models
Bug

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): OSX 10.15.5
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v1.12.1-37224-ga6cd18a133 2.4.0-dev20200722
- Python version: 3.7

**Describe the current behavior**: `tf.lite.TFLiteConverter.from_saved_model` converts LSTM to fused ops, but `tf.lite.TFLiteConverter.from_concrete_functions` doesn't.

**Describe the expected behavior**: The converted results should be the same for the same model.

**Standalone code to reproduce the issue**:

Produces fused LSTM:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Input(shape=(28, 28), name='input'),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(20, return_sequences=True)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax, name='output')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()

run_model = tf.function(lambda x: model(x))
# This is important, let's fix the input size.
BATCH_SIZE = 1
STEPS = 28
INPUT_SIZE = 28
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([BATCH_SIZE, STEPS, INPUT_SIZE], model.inputs[0].dtype))

# Model directory
MODEL_DIR = "keras_lstm"
# NOTE: the original report elides the save step; the model must be saved to
# MODEL_DIR before from_saved_model can be used.
# converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_DIR)
tflite_model = converter.convert()
with tf.io.gfile.GFile('tflite_test.tflite', 'wb') as f:
    f.write(tflite_model)
```

Produces original (non-fused) LSTM: the same script as above, except the converter is created from the concrete function instead:

```python
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
```
tensorflow/tensorflow
TF-Slim ResNet v1 pre-trained model preprocessing
Bug

**Affected doc URL**: pre-trained models

**Description**: Both the TF-Slim VGG and Inception preprocessing cause the ResNet v1 models in the above link to have incorrect outputs. The other ImageNet models run correctly with either the VGG or Inception preprocessing; however, under the same code path, the ResNet v1 models produce incorrect outputs. This issue is also documented in this unresolved issue. Are we aware of the correct steps to take to correctly run Slim's ResNet v1 models, and could this be updated in the documentation? Thank you.
tensorflow/tensorflow
Using a learning rate decay schedule and verbose=1 creates a fatal TypeError
Bug

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04 and Windows 10
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: No
- TensorFlow installed from (source or binary): binary (pip3)
- TensorFlow version (use command below): 2.2.0
- Python version: 3.6
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

**Describe the current behavior**: In the following minimal example, setting `verbose=1` in the fitting creates a fatal error. I realize that ReduceLROnPlateau isn't needed when also using ExponentialDecay, but the fact that this error happens with `verbose=1` and not with `verbose=0` in training shows there is a deeper problem.

Minimal Keras example:

```python
import tensorflow as tf
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.optimizers.schedules import ExponentialDecay
from tensorflow.keras.layers import Dense
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TerminateOnNaN, EarlyStopping, ReduceLROnPlateau

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(8, input_shape=(16,)))
model.add(tf.keras.layers.Dense(1))

callbacks = [TerminateOnNaN(),
             EarlyStopping(monitor='val_loss'),
             ReduceLROnPlateau(monitor='val_loss')]
optimizer = SGD(learning_rate=ExponentialDecay(initial_learning_rate=1e-3,
                                               decay_steps=2,
                                               decay_rate=0.5))
model.compile(optimizer=optimizer, loss='mse')

x = tf.ones((32, 16))
y = tf.ones((32, 1))
v = tf.ones((8, 1))
model.fit(x, y, batch_size=4, epochs=8, callbacks=callbacks,
          validation_split=0.2, steps_per_epoch=4, verbose=1)
```

yields the following error:

```
Epoch 1/8
1/4 [======>.....................] - ETA: 0s - loss: 1.4363
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
     19 v = tf.ones((8, 1))
---> 20 model.fit(x, y, batch_size=4, epochs=8, callbacks=callbacks,
     21           validation_split=0.2, steps_per_epoch=4, verbose=1)

AppData/Local/Continuum/anaconda3/lib/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
     64 def _method_wrapper(self, *args, **kwargs):
     65   if not self._in_multi_worker_mode():  # pylint: disable=protected-access
---> 66     return method(self, *args, **kwargs)

AppData/Local/Continuum/anaconda3/lib/site-packages/tensorflow/python/keras/engine/training.py in fit(...)
    874         epoch_logs.update(val_logs)
    875
--> 876         callbacks.on_epoch_end(epoch, epoch_logs)
    877         if self.stop_training:
    878           break

AppData/Local/Continuum/anaconda3/lib/site-packages/tensorflow/python/keras/callbacks.py in on_epoch_end(self, epoch, logs)
    363     logs = self._process_logs(logs)
    364     for callback in self.callbacks:
--> 365       callback.on_epoch_end(epoch, logs)

AppData/Local/Continuum/anaconda3/lib/site-packages/tensorflow/python/keras/callbacks.py in on_epoch_end(self, epoch, logs)
    893   def on_epoch_end(self, epoch, logs=None):
--> 894     self._finalize_progbar(logs)

AppData/Local/Continuum/anaconda3/lib/site-packages/tensorflow/python/keras/callbacks.py in _finalize_progbar(self, logs)
    933       self.progbar.target = self.seen
    934     logs = logs or {}
--> 935     self.progbar.update(self.seen, list(logs.items()), finalize=True)

AppData/Local/Continuum/anaconda3/lib/site-packages/tensorflow/python/keras/utils/generic_utils.py in update(self, current, values, finalize)
    568         value_base = max(current - self._seen_so_far, 1)
    569         if k not in self._values:
--> 570           self._values[k] = [v * value_base, value_base]
    571         else:
    572           self._values[k][0] += v * value_base

TypeError: unsupported operand type(s) for -: 'ExponentialDecay' and 'int'
```

**Describe the expected behavior**: The model should fit without any error. Here is what happens if I don't use the callbacks:

```
Epoch 1/8
4/4 - 0s - 20ms/step - loss: 6.3209 - val_loss: 4.6678
Epoch 2/8
4/4 - 0s - 10ms/step - loss: 4.4087 - val_loss: 4.0870
Epoch 3/8
4/4 - 0s - 9ms/step - loss: 4.0231 - val_loss: 3.9547
Epoch 4/8
4/4 - 0s - 10ms/step - loss: 3.9385 - val_loss: 3.9224
Epoch 5/8
4/4 - 0s - 9ms/step - loss: 3.9185 - val_loss: 3.9143
Epoch 6/8
4/4 - 0s - 9ms/step - loss: 3.9131 - val_loss: 3.9123
Epoch 7/8
4/4 - 0s - 9ms/step - loss: 3.9121 - val_loss: 3.9118
Epoch 8/8
4/4 - 0s - 11ms/step - loss: 3.9117 - val_loss: 3.9117
```

**Standalone code to reproduce the issue**: See above.

**Other info / logs**: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
CudnnLSTM with variable sequence length sometimes fails with CUDNN_STATUS_EXECUTION_FAILED
Bug

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Debian sid (2020-07-01), Ubuntu 18.04
- Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A
- TensorFlow installed from (source or binary): source and binary
- TensorFlow version (use command below): 1.15
- Python version: 3.6, 3.7.8
- Bazel version (if compiling from source): 0.26.1
- GCC/compiler version (if compiling from source): 9.0
- CUDA/cuDNN versions: 10.0/7.4.1, 10.0/7.4.2.1, 10.0/7.5.1.10, 10.0/7.6.5.32
- GPU model and memory: 2x RTX 2080 Ti, 4x GTX 1080 Ti

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: (1. TF 1.0) `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"` → v1.15.3-0-g4386a6640c

**Describe the current behavior**: Training with some datasets triggers:

```
2020-07-22 16:15:42.108252: E tensorflow/stream_executor/dnn.cc:588] CUDNN_STATUS_EXECUTION_FAILED
in tensorflow/stream_executor/cuda/cuda_dnn.cc(1778): 'cudnnRNNForwardTrainingEx(cudnn.handle(), rnn_desc.handle(), input_desc.data_handle(), input_data.opaque(), input_h_desc.handle(), input_h_data.opaque(), input_c_desc.handle(), input_c_data.opaque(), rnn_desc.params_handle(), params.opaque(), output_desc.data_handle(), output_data.opaque(), output_h_desc.handle(), output_h_data.opaque(), output_c_desc.handle(), output_c_data.opaque(), nullptr, nullptr, nullptr, nullptr, nullptr, nullptr, nullptr, nullptr, workspace.opaque(), workspace.size(), reserve_space.opaque(), reserve_space.size())'
2020-07-22 16:15:42.108385: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at cudnn_rnn_ops.cc:1527 : Internal: Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0, [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 2048, 2048, 1, 75, 2, 2048]
```

**Describe the expected behavior**: Training should succeed, or TensorFlow or cuDNN should expose a more actionable error.

**Standalone code to reproduce the issue**: Will be provided after …

**Other info / logs**: Will be provided after … Some noisy debugging sessions can be seen at …
tensorflow/tensorflow
Where is the TensorFlow Lite Support Library?
Bug

Here is the guide to the TensorFlow Lite Support Library. An excerpt: "The TensorFlow Lite Android Support Library is designed to help process the input and output of TensorFlow Lite models and make the TensorFlow Lite interpreter easier to use." It links to [link], which is a 404 (page not found).
tensorflow/tensorflow
Converted TFLite model produces wrong results, while the pb model produces correct results
Bug

**System information**
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.3 LTS
- TensorFlow installed from (source or binary): pip install tf-nightly
- TensorFlow version (or github SHA if from source): 2.4.0-dev20200721

**Command used to run the converter or code if you're using the Python API** (Colab, GPU is disabled):

```
toco --graph_def_file=model_f46da743.pb \
     --output_file=model_f46da743.tflite \
     --output_format=TFLITE \
     --inference_type=FLOAT \
     --inference_input_type=FLOAT \
     --input_arrays=0 \
     --output_arrays=1195 \
     --enable_v1_converter \
     --target_ops=SELECT_TF_OPS
```

**The output from the converter invocation**:

```
2020-07-21 19:15:30.657556: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-07-21 19:15:32.252326: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-07-21 19:15:32.255365: E tensorflow/stream_executor/cuda/cuda_driver.cc:314] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2020-07-21 19:15:32.255412: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (71fb61714d13): /proc/driver/nvidia/version does not exist
2020-07-21 19:15:32.255749: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-07-21 19:15:32.261434: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2200000000 Hz
2020-07-21 19:15:32.261660: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1aeef40 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-21 19:15:32.261724: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
I0721 19:15:35.806807 139688388913024 lite.py:1321] Using experimental converter: If you encounter a problem please file a bug. You can opt out by setting experimental_new_converter=False
2020-07-21 19:15:36.648004: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:315] Ignored output_format.
2020-07-21 19:15:36.648067: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:318] Ignored drop_control_dependency.
```

**Also, please include a link to the saved model or GraphDef**: model_f46da743.pb, model_f46da743.tflite

**Failure details**: The conversion was successful, but the generated model is wrong (produces wrong results).

**Any other info / logs**: I converted the model from ONNX to pb, and it works well with the pb model (TensorFlow); it shows the desired result (good). But when I convert pb to tflite, it works with the tflite model (TensorFlow Lite), but it shows an unexpected result (bad). Reproducible command and code: GPU is disabled.
tensorflow/tensorflow
label_map_util
Bug

It was opened by mistake; I don't have a problem.
tensorflow/tensorflow
JPEG decoding (for example, when loading TFRecords from file) causes an error on TPU when trying to fit a model
Bug

**System information**
- TensorFlow version (use command below): 2.2.0 (v2.2.0-0-g2b96f3662b)
- Python version: 3.6.9
- GPU model and memory: Google Colab TPU

I'm not sure that this is a bug, but I've encountered this weird behaviour with my tfrec dataset and made simple code to reproduce it. This problem only exists on TPU.

Firstly, I initialize the TPU:

```python
import os
import tensorflow as tf
import numpy as np

tf.get_logger().propagate = False
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
```

```
INFO:tensorflow:Initializing the TPU system: grpc://10.26.115.226:8470
INFO:tensorflow:Clearing out eager caches
INFO:tensorflow:Finished initializing TPU system.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
(followed by INFO lines listing the available devices: the localhost CPU:0 and XLA_CPU:0, and the worker's CPU:0, TPU:0 through TPU:7, TPU_SYSTEM:0, and XLA_CPU:0)
```

Then I run the following code, which creates a tf.data.Dataset of dummy images, encodes them to JPEG and back, then normalizes to float32 and makes batches:

```python
with strategy.scope():
    def encode_jpg(image, class_idx):
        return tf.io.encode_jpeg(image, quality=95, optimize_size=True,
                                 chroma_downsampling=False), class_idx

    def decode_jpg(image, class_idx):
        return tf.image.decode_jpeg(image, channels=3), class_idx

    def normalize_img(image, class_idx):
        return image / 255 - 0.5, class_idx

    dataset = tf.data.Dataset.from_tensor_slices((
        [tf.cast(np.zeros((256, 256, 3)), dtype=tf.uint8) for _ in range(300)],
        [0 for _ in range(300)]))
    dataset = dataset.map(encode_jpg)
    dataset = dataset.map(decode_jpg)
    dataset = dataset.map(normalize_img)
    dataset = dataset.batch(8)

    print("\nHow does our dataset look like?")
    for i, (image, label) in enumerate(dataset):
        print(image.shape, label.shape)
        if i == 2:
            break

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(256, 256, 3)),
        tf.keras.layers.Dense(100, activation='relu'),
        tf.keras.layers.Dense(10)])

    print("\nHow does our model look like?")
    model.summary()

    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
    model.fit(dataset, epochs=1)
```

I receive the following output, which ends with an exception:

```
How does our dataset look like?
(8, 256, 256, 3) (8,)
(8, 256, 256, 3) (8,)
(8, 256, 256, 3) (8,)

How does our model look like?
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
flatten (Flatten)            (None, 196608)            0
dense (Dense)                (None, 100)               19660900
dense_1 (Dense)              (None, 10)                1010
=================================================================
Total params: 19,661,910
Trainable params: 19,661,910
Non-trainable params: 0

UnimplementedError                        Traceback (most recent call last)
<ipython-input> in <module>
     36
     37 model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
---> 38 model.fit(dataset, epochs=1)

10 frames
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

UnimplementedError: {{function_node __inference_train_function_5323}} Compilation failure: Asked to propagate a dynamic dimension from hlo %dot.472 to hlo %all-reduce.477 = f32[196608,100]{1,0} all-reduce(f32[196608,100]{1,0} %dot.472), replica_groups={{0,1,2,3,4,5,6,7}}, to_apply=%sum.473, metadata={op_type="CrossReplicaSum" op_name="CrossReplicaSum_2"}, which is not implemented.
TPU compilation failed [[node tpu_compile_succeeded_assert_4970277850434216321/_5]]
```

When I remove these two lines:

```python
dataset = dataset.map(encode_jpg)
dataset = dataset.map(decode_jpg)
```

then it works: `38/38 - 1s 16ms/step - loss: 16.8563`. However, the shapes and types of the dataset batches remain the same:

```
How does our dataset look like?
(8, 256, 256, 3) (8,)
(8, 256, 256, 3) (8,)
(8, 256, 256, 3) (8,)
```

To fix this error, I tried to cast the labels to tf.int64, but the error still occurred. I tried to run this code on the CPU version of Colab (removing `with strategy.scope()`), and then it works perfectly, so I guess the problem is in TPU and JPEG encoding/decoding.
tensorflow/tensorflow
When using MirroredStrategy for multi-GPU and fitting with multiple workers, there is an error: task_done() called too many times
Bug

**System information**
- OS: Ubuntu 18.04
- TensorFlow: 2.2.0 from pip install
- Python version: 3.7.7
- CUDA version: 10.2
- cuDNN version: release 10.2, V10.2.89
- GPU: 2070 x 2

**Describe the current behavior**:

Code:

```python
gpus = tf.config.experimental.list_physical_devices('GPU')
for g in gpus:
    tf.config.experimental.set_virtual_device_configuration(
        g, [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=7000)])
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])

# Generator for our training data
train_image_generator = ImageDataGenerator(rescale=1./255, horizontal_flip=True,
                                           zoom_range=0.1, rotation_range=45)
# Generator for our validation data
validation_image_generator = ImageDataGenerator(rescale=1./255)

train_data_gen = train_image_generator.flow_from_directory(
    batch_size=batch_size,
    directory=train_dir,
    shuffle=True,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(
    batch_size=batch_size,
    directory=validation_dir,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    class_mode='binary')
sample_training_images, _ = next(train_data_gen)

with mirrored_strategy.scope():
    tinydarknet = keras.Sequential([
        keras.layers.Conv2D(16, (3, 3), strides=(1, 1), padding='same',
                            input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
        keras.layers.BatchNormalization(),
        keras.layers.LeakyReLU(alpha=0.1),
        keras.layers.MaxPooling2D((2, 2), strides=(2, 2)),
        keras.layers.BatchNormalization(),
        keras.layers.AveragePooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(1)
    ])

tinydarknet.compile(optimizer='adam',
                    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                    metrics=['accuracy'])

history = tinydarknet.fit_generator(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size,
    workers=num_workers)

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
tinydarknet.save('keras_model')
```

**Describe the expected behavior**: When I set `workers=1` in fit, it works normally, but when workers is more than one, I get an error:

```
tensorflow/core/framework/op_kernel.cc:1741] Invalid argument: ValueError: task_done() called too many times
Traceback (most recent call last):
  File "/data/ssd/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/ops/script_ops.py", line 243, in __call__
    ret = func(*args)
  File "/data/ssd/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py", line 309, in wrapper
    return func(*args, **kwargs)
  File "/data/ssd/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 785, in generator_py_func
    values = next(generator_state.get_iterator(iterator_id))
  File "/data/ssd/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 801, in wrapped_generator
    for data in generator_fn():
  File "/data/ssd/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/utils/data_utils.py", line 880, in get
    six.reraise(*sys.exc_info())
  File "/data/ssd/anaconda3/envs/tf2/lib/python3.7/site-packages/six.py", line 703, in reraise
    raise value
  File "/data/ssd/anaconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/utils/data_utils.py", line 875, in get
    self.queue.task_done()
  File "/data/ssd/anaconda3/envs/tf2/lib/python3.7/queue.py", line 74, in task_done
    raise ValueError('task_done() called too many times')
ValueError: task_done() called too many times
```
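The final frames of the traceback above bottom out in Python's standard-library `queue` module, not in TensorFlow itself: `Queue.task_done()` raises exactly this `ValueError` whenever it is called more times than items were put on the queue. A minimal stdlib-only sketch of the underlying error (unrelated to Keras; it only illustrates where the message comes from):

```python
import queue

q = queue.Queue()
q.put("batch")
q.get()
q.task_done()  # balances the single put(): fine

try:
    q.task_done()  # one call too many, mirrors the failure in data_utils.py
except ValueError as e:
    print(e)  # task_done() called too many times
```

In the Keras generator path, multiple workers draining the same queue can produce this imbalance, which is consistent with the error appearing only when `workers > 1`.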
tensorflowtensorflow
Setting the name parameter in Keras layers results in accuracy degradation
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: GCP VM, Debian GNU/Linux 9 (stretch)
- TensorFlow installed from: binary
- TensorFlow version: 1.15.3 (v1.15.2-30-g4386a66)
- Python version: 3.5.3
- Accelerator: TPUv2-8

Describe the current behavior: When I set the name parameter in keras.layers.Conv2D, the training behavior changes — accuracy goes down consistently.

Describe the expected behavior: The training behavior should be similar; accuracy should be similar.

Standalone code to reproduce the issue:
0. Clone the official MnasNet code:

    git clone <repo URL elided in source>
    cd tpu/models/official/mnasnet
    export PYTHONPATH="$PYTHONPATH:$(pwd):$(pwd)/efficientnet"

1. Train MnasNet for 50 epochs:

    python mnasnet_main.py --data_dir=$IMAGENET_DIR --tpu=$TPU_NAME \
        --train_steps=62550 --steps_per_eval=6255 \
        --train_batch_size=1024 --eval_batch_size=1024 \
        --model_dir=$SAVE_DIR --model_name='mnasnet-a1'

2. Add names to the Keras layers using the diff below. The diff adds a name parameter to the _get_conv2d helper in mnasnet_model.py and passes explicit names ('expand_conv', 'expand_bn', 'dw_conv', 'dw_bn', 'se_reduce', 'se_expand', 'proj_conv', 'proj_bn') to the corresponding Conv2D / DepthwiseConv2D / BatchNormalization layers in MnasBlock. No other change is made.

3. Train again with the same command but a different --model_dir.

Other info / logs: When I add the names, the accuracy consistently degrades. I repeated this 3 times. [accuracy comparison image attached]
tensorflowtensorflow
Unable to generate a pbtxt file for a TensorFlow 2.0 model
Bug
System information: running the code on Google Colab using TensorFlow 2.2.

Problem: I'm trying to generate a pbtxt file from a pb file that I trained using the TF Object Detection 2 API, but I'm getting a "parsing message" error. In fact, I even get this error when I try to use a trained ssd_mobilenet_v2 model from the TensorFlow model zoo. This is the code used to get the pbtxt — you can just paste it into Google Colab and the error will be reproduced:

    !wget <model zoo URL elided in source>
    !tar -xf ssd_mobilenet_v2_320x320_coco17_tpu-8.tar.gz

    model_path = 'ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model/saved_model.pb'

    import tensorflow as tf
    with tf.io.gfile.GFile(model_path, 'rb') as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())
    for i in reversed(range(len(graph_def.node))):
        if graph_def.node[i].op == 'Const':
            del graph_def.node[i]
    tf.io.write_graph(graph_def, '.', 'graph.pbtxt', as_text=True)

Running the code fails with:

    graph_def.ParseFromString(f.read())
    DecodeError: Error parsing message

How can I generate a pbtxt file from a TensorFlow Object Detection API 2.x model? The end goal is to use these models inside OpenCV's DNN module.
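A TF2 saved_model.pb is not a GraphDef proto, which is why ParseFromString fails. A sketch of one way to obtain a GraphDef from a TF2 SavedModel via a frozen concrete function — here a tiny Keras model stands in for the detection model, and the paths and signature wiring are illustrative assumptions:

```python
import os
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

# Stand-in model; the real SSD model would already exist on disk.
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
model.build((None, 4))

@tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
def serve(x):
    return model(x)

export_dir = "/tmp/demo_saved_model"
tf.saved_model.save(model, export_dir, signatures={"serving_default": serve})

# Load the SavedModel, freeze its serving signature, and dump a GraphDef.
loaded = tf.saved_model.load(export_dir)
concrete = loaded.signatures["serving_default"]
frozen = convert_variables_to_constants_v2(concrete)
graph_def = frozen.graph.as_graph_def()
tf.io.write_graph(graph_def, "/tmp", "frozen_graph.pbtxt", as_text=True)
```

Whether OpenCV's DNN module accepts the resulting graph depends on the ops it contains; this only addresses the DecodeError step.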
tensorflowtensorflow
tfdbg doesn't display any tensors
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: Ubuntu 18.04
- TensorFlow installed from: binary
- TensorFlow version: 2.3.0rc2
- Python version: 3.6.8

Describe the current behavior: [Screenshot from 2020-07-21 10:49:27 attached — the offline analyzer shows no tensors.]

Describe the expected behavior: Display the recorded tensors, as tfdbg did in TF 1.x.

Standalone code to reproduce the issue:

    python -m tensorflow.python.debug.examples.v2.debug_mnist_v2 \
        --dump_dir /tmp/tfdbg2_logdir --dump_tensor_debug_mode FULL_HEALTH
    python -m tensorflow.python.debug.cli.offline_analyzer \
        --dump_dir /tmp/tfdbg2_logdir
tensorflowtensorflow
TFLite interpreter.invoke() crashes on GPU despite successful TFLite conversion
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: Linux Ubuntu 18.04 (Google Colab notebook)
- Mobile device: N/A
- TensorFlow installed from: pip
- TensorFlow version: 2.4.0-dev20200720
- Python version: 3.6.9
- CUDA/cuDNN version: 10.1 / 7 (Google Colab default)
- GPU model and memory: Google Colab default GPU

Describe the current behavior: I converted a simple 3D-CNN Keras model to TFLite. Upon creating an interpreter from that TFLite model and calling interpreter.invoke(), the Google Colab notebook crashes. This only happens when using a GPU runtime; interpreter.invoke() works fine on a CPU. The converted TFLite model uses both tf.lite.OpsSet.TFLITE_BUILTINS and tf.lite.OpsSet.SELECT_TF_OPS, and I was converting to TFLite in an attempt to apply quantization-aware training and post-training quantization according to the guide.

Describe the expected behavior: I expect to be able to call interpreter.invoke() successfully on a GPU without any crash.

Standalone code to reproduce the issue:
1. Add the linked Google Colab notebook and dataset to your Google Drive.
2. Open the Colab notebook and set the runtime to GPU (Runtime > Change runtime type > GPU).
3. Mount your Drive and change '/content/drive/My Drive/full_dataset_vectors.h5' to wherever you're storing the dataset.
4. Run all cells of the notebook. The actual crash only happens in the last cell, with interpreter.invoke().

Other info / logs: The logs in the Google Colab notebook didn't provide any information about the cause of the crash. To get more info I tried running the code on an identical local environment and managed to obtain a gdb backtrace. It seems likely to be related to an existing issue, but the backtrace looks different enough to possibly be a separate bug, so I decided to create a new issue.

    Thread 1 "python3" received signal SIGSEGV, Segmentation fault.
    __memmove_sse2_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:370
    (gdb) bt
    #0  __memmove_sse2_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:370
    #1  tflite::FlexDelegate::CopyFromBufferHandle(TfLiteContext*, int, TfLiteTensor*)
        from /usr/local/lib/python3.6/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so
    #2  tflite::impl::Subgraph::Invoke()
        from /usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter_wrapper/_pywrap_tensorflow_interpreter_wrapper.so
    #3  tflite::impl::Interpreter::Invoke() from .../_pywrap_tensorflow_interpreter_wrapper.so
    #4  tflite::interpreter_wrapper::InterpreterWrapper::Invoke() from .../_pywrap_tensorflow_interpreter_wrapper.so
    #5  pybind11 method dispatch for InterpreterWrapper::Invoke from .../_pywrap_tensorflow_interpreter_wrapper.so
    #6  pybind11::cpp_function::dispatcher(...) from .../_pywrap_tensorflow_interpreter_wrapper.so
    (the remaining frames are CPython interpreter internals for
    tensorflow/lite/python/interpreter.py:524 invoke(), called from
    inference.py tflite_inference()/run_inference())
tensorflowtensorflow
"Introduction to Tensors" guide has an incorrect example image
Bug
URL(s) with the issue: the "multi-axis indexing" section (URL elided in source).

Description of issue (what needs to change): In the figure captioned "Selecting the last feature across all locations in each example in the batch", the left-side image has batch indices 1 (blue) and 2 (green) swapped. The right-side image and the surrounding code example show the correct order; it is the left image that needs an edit.

Are you submitting a pull request? No — I do not have a way to edit the image.
tensorflowtensorflow
AutoGraph fails due to an end-of-line between parentheses
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: uname -a: Darwin daniels-macbook-pro.local 18.7.0 Darwin Kernel Version 18.7.0; Tue Aug 20 16:57:14 PDT 2019; root:xnu-4903.271.2~2/RELEASE_X86_64 x86_64
- TensorFlow installed from: binary
- TensorFlow version: v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: Python 3.6.8 :: Anaconda, Inc.

Describe the current behavior: AutoGraph fails if we introduce an end-of-line in the definition of a lambda function.

Describe the expected behavior: AutoGraph should work the same way as if there were no EOL. An EOL between parentheses is legitimate Python syntax; in my own code, yapf introduced the EOL that caused the problem, and it was difficult to figure out that this was the problem.

Standalone code to reproduce the issue (Colab):

    f = tf.function(lambda a,
                    b: a + b)
    print(f(1, 2))

AutoGraph works, however, if one removes the EOL between a and b.

Other info / logs:

    WARNING:tensorflow:AutoGraph could not transform <function <lambda> at 0x7f1132d6a268> and will run it as-is.
    Cause: could not parse the source code:

    lambda a,

    This error may be avoided by creating the lambda in a standalone statement.
    To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
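As the warning itself suggests, the failure can be sidestepped (rather than fixed) by giving AutoGraph a named function instead of an inline lambda. A minimal sketch; the function name is illustrative:

```python
import tensorflow as tf

# Workaround sketch: a named def avoids AutoGraph's lambda source parsing,
# so an end-of-line between the parameters is no longer a problem.
@tf.function
def add(a,
        b):
    return a + b

result = add(tf.constant(1), tf.constant(2))
```

This is only a workaround; the report's point is that the lambda form itself should work.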
tensorflowtensorflow
Incorrect processing in tf.image.decode_gif for multi-frame images
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): N/A
- OS platform and distribution: macOS or Linux
- TensorFlow installed from: binary
- TensorFlow version: 2.2.0 and 2.3.0rc1
- Python version: 3.7

Describe the current behavior: While working with tf.image.decode_gif on a multi-frame image, I noticed the returned values are incorrect after the first image.

Describe the expected behavior: All frames should be handled correctly.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import matplotlib.pyplot as plt
    print(tf.version.VERSION)
    !curl -OL <image URL elided in source>
    images = tf.image.decode_gif(tf.io.read_file('440px-Animated_gif_chelovechek.gif'))
    for i in range(images.shape[0]):
        plt.imshow(images[i])
        plt.figure()

Other info / logs: The image was downloaded from the Wikipedia "Animated GIF" page. Attached: original picture (GIF, "cradle"), first frame extracted (test-0), second frame extracted (test-1).
tensorflowtensorflow
Training a TensorFlow model using a 1-terabyte swap file
Bug
Hi there. I want to train a TensorFlow model that is huge and requires a lot of memory, so I increased the swap file size and swappiness, but during training it also uses the RAM, so it fails. Is there a way to train TensorFlow using the swap file as memory?
tensorflowtensorflow
nccl_ops.all_sum does not correctly reduce gradients
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: Ubuntu 18.04
- TensorFlow installed from: binary
- TensorFlow version: v2.2.0-rc3-33-g70087ab4f4 2.2.0-rc4
- Python version: 3.7
- CUDA/cuDNN version: 10.1 / 7.6.5
- GPU model and memory: P100, V100

Describe the current behavior: The allreduce operation nccl_ops.all_sum does not correctly sum gradients; the results are incorrect.

Standalone code to reproduce the issue:

    #!/usr/bin/env python
    import argparse

    from tensorflow.compat import v1 as tf
    import tqdm


    def split_grad_list(grad_list):
        g = []
        v = []
        for tower in grad_list:
            g.append([x[0] for x in tower])
            v.append([x[1] for x in tower])
        return g, v


    def allreduce_grads(all_grads):
        """All-reduce gradients for N variables on K devices."""
        from tensorflow.python.ops import nccl_ops as nccl
        nr_tower = len(all_grads)
        assert nr_tower > 1
        new_all_grads = []  # N x K
        for grads in zip(*all_grads):  # K grads
            summed = nccl.all_sum(grads)
            grads_for_devices = []  # K
            true_sum = tf.add_n(grads)
            for g in summed:
                diff = tf.abs(true_sum - g)
                eql = diff < 1e-4
                nccl_res_correct = tf.reduce_all(eql, name='corr-' + grads[0].op.name)

                def flat(x):
                    x = tf.reshape(x, [-1])
                    x = tf.slice(x, [0], [tf.minimum(tf.size(x), 200)])
                    return x

                assert_op = tf.debugging.Assert(
                    nccl_res_correct,
                    [tf.reduce_max(diff), flat(true_sum), flat(g)],
                    summarize=1000, name='assert-' + grads[0].op.name)
                with tf.control_dependencies([assert_op]):
                    g = tf.identity(g)
                grads_for_devices.append(g)
            new_all_grads.append(grads_for_devices)
        # transpose to K x N
        ret = list(zip(*new_all_grads))
        return ret


    def build_graph(image, label, idx):
        v1 = tf.get_variable('aaa/W', shape=[3, 3, 3, 64], trainable=True)
        v2 = tf.get_variable('bbb/W', shape=[3, 3, 3, 64], trainable=True)
        v = v1 if idx == 0 else v2
        image = tf.nn.conv2d(image, v, 1, padding='SAME', data_format='NCHW')

        def conv(name, x, chan, stride=1):
            with tf.variable_scope(name):
                in_chan = x.shape[1]
                W = tf.get_variable('W', [3, 3, in_chan, chan])
                ret = tf.nn.conv2d(x, W, strides=stride, padding='SAME', data_format='NCHW')
                return tf.nn.relu(ret)

        x = conv('conv1', image, 64)
        x = conv('conv2', x, 64)
        x = conv('conv3', x, 1280, stride=2)
        x = conv('conv4', x, 1280, stride=2)
        x = conv('conv5', x, 10)
        logits = tf.reduce_mean(x, axis=[2, 3])
        cost = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=label)
        cost = tf.reduce_mean(cost, name='cross_entropy_loss')
        return cost


    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('--gpus', type=int)
        args = parser.parse_args()
        num_gpus = args.gpus

        with tf.Graph().as_default():
            opt = tf.train.GradientDescentOptimizer(0.001)
            grad_list = []
            for k in range(num_gpus):
                with tf.device('/gpu:{}'.format(k)), tf.variable_scope('tower{}'.format(k)):
                    print("Building {} ...".format(k))
                    image = tf.random_uniform([32, 3, 30, 30])
                    label = tf.random_uniform([32], maxval=9, dtype=tf.int32)
                    cost = build_graph(image, label, k)
                    varlist = [x for x in tf.trainable_variables()
                               if x.name.startswith('tower{}'.format(k))]
                    print("Varlist for tower {}:".format(k), [x.name for x in varlist])
                    wd_cost = [tf.reduce_sum(x) * 1e-3 for x in varlist]
                    cost = tf.add_n([cost] + wd_cost)
                    grads = opt.compute_gradients(cost, var_list=varlist)
                    grad_list.append(grads)

            all_grads, all_vars = split_grad_list(grad_list)
            all_grads = allreduce_grads(all_grads)
            grad_list = [list(zip(gs, vs)) for gs, vs in zip(all_grads, all_vars)]

            train_ops = []
            for idx, grad_and_vars in enumerate(grad_list):
                with tf.device('/gpu:{}'.format(idx)):
                    train_ops.append(opt.apply_gradients(
                        grad_and_vars, name='apply_grad_{}'.format(idx)))
            train_op = tf.group(*train_ops)

            sess = tf.Session()
            sess.run(tf.global_variables_initializer())
            print("Training ...")
            for k in tqdm.trange(5000):
                sess.run(train_op)

The above code trains a toy network on random data and allreduces the gradients using nccl_ops.all_sum. It checks the allreduce results against the sum of gradients computed by a naive add_n and asserts that the difference is reasonably small. However, the difference can be quite large sometimes, and the assertion usually fails within 100 steps of training. The code above (written in TF1 style) can be run on a machine with 2 GPUs using TF2 behavior:

    $ TF2_BEHAVIOR=0 python a.py --gpus 2
    Building 0 ...
    Varlist for tower 0: ['tower0/aaa/W:0', 'tower0/bbb/W:0', 'tower0/conv1/W:0', 'tower0/conv2/W:0', 'tower0/conv3/W:0', 'tower0/conv4/W:0', 'tower0/conv5/W:0']
    Building 1 ...
    Varlist for tower 1: ['tower1/aaa/W:0', 'tower1/bbb/W:0', 'tower1/conv1/W:0', 'tower1/conv2/W:0', 'tower1/conv3/W:0', 'tower1/conv4/W:0', 'tower1/conv5/W:0']
    1/5000 [00:06<07:39, 10.73it/s]
    Traceback (most recent call last):
      File ".../tensorflow/python/client/session.py", line 1365, in _do_call
        return fn(*args)
      File ".../tensorflow/python/client/session.py", line 1350, in _run_fn
        target_list, run_metadata)
      File ".../tensorflow/python/client/session.py", line 1443, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
      (0) Invalid argument: assertion failed: [0.00100000016] [0.00234295963 0.00230941921 0.00176228327 0.00197261758 0.00213356828 0.00188576151 0.00211580051 0.00221353304 ...]

My initial investigation suggests (no proof, just a guess) that the bug might appear because the gradients are computed on each GPU in a different order. The bug exists in TF 1.15 as well (I have not tested earlier versions). The bug rarely triggers if I revert a PR (link elided in source) that makes allreduce ops scheduled as early as possible. collective_ops.all_reduce with the ring implementation does not seem to have a similar issue, but it significantly slows down my training. cc @dubey @yuefengz @chsigg who may have context on this issue.
tensorflowtensorflow
tf.ragged.stack return type issue
Bug
tf.ragged.stack(inputs) returns a tf.Tensor if the first dim of the inputs is of length 1, and a tf.RaggedTensor if not. This is a problem, as those types don't share the same interface, and if the length of the inputs is not known in advance — as is often the case when using ragged tensors — one must have an if-condition inside the graph to check the output type.

    >>> tf.ragged.stack([[1, 2, 3]])
    <tf.Tensor: [[1, 2, 3]], shape=(1, 3), dtype=int32>
    >>> tf.ragged.stack([[1, 2, 3], [1, 2, 3]])
    <tf.RaggedTensor ...>
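Until the inconsistency is resolved, one can normalize the result so downstream code always sees one type. A small sketch; the wrapper name is hypothetical, not part of the TensorFlow API:

```python
import tensorflow as tf

def ragged_stack(values):
    """Hypothetical wrapper around tf.ragged.stack that always
    returns a tf.RaggedTensor, regardless of the input length."""
    out = tf.ragged.stack(values)
    if isinstance(out, tf.Tensor):
        # Length-1 inputs may come back dense; convert to ragged.
        out = tf.RaggedTensor.from_tensor(out)
    return out

single = ragged_stack([[1, 2, 3]])
multi = ragged_stack([[1, 2], [1, 2, 3]])
```

Note the isinstance check runs at trace time under tf.function, so no in-graph branching is needed.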
tensorflowtensorflow
Dataset does not repeat when ignoring errors and shuffling before indefinite repetition
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: Linux
- TensorFlow installed from: binary
- TensorFlow version: 2.2.0
- Python version: 3.x
- CUDA/cuDNN, GPU: not using GPU

Describe the current behavior: When using tf.data.experimental.ignore_errors, indefinite repetition does not work if an error occurs while getting the elements to fill the shuffle buffer initially (I think — it's easiest to see in code):

    def assert_greater_0(x):
        tf.debugging.assert_greater(x, tf.convert_to_tensor(0))
        return x

    dataset = tf.data.Dataset.from_tensor_slices([1, 2, 0, 3, 4])
    dataset = dataset.map(assert_greater_0)
    dataset = dataset.shuffle(buffer_size=3)
    dataset = dataset.repeat()
    dataset = dataset.apply(tf.data.experimental.ignore_errors())

yields a dataset that has 4 elements, whereas adjusting the numbers in the initial list slightly:

    dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 0, 4])
    dataset = dataset.map(assert_greater_0)
    dataset = dataset.shuffle(buffer_size=3)
    dataset = dataset.repeat()
    dataset = dataset.apply(tf.data.experimental.ignore_errors())

yields an infinitely repeating dataset. More data: removing the shuffle produces an infinite dataset in both cases; specifying the number of repetitions as 2 produces the same result (8 elements) in both cases.

Describe the expected behavior: I would expect the two code snippets to both produce infinite datasets.
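One way to avoid the reported interaction (a workaround, not a fix — it assumes dropping the failing elements before shuffling is acceptable for the pipeline) is to apply ignore_errors before the shuffle buffer is filled. A finite repeat is used here only so the sketch terminates:

```python
import tensorflow as tf

def assert_greater_0(x):
    # Raises InvalidArgumentError at runtime for non-positive elements.
    tf.debugging.assert_greater(x, tf.convert_to_tensor(0))
    return x

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 0, 3, 4])
dataset = dataset.map(assert_greater_0)
# Drop failing elements *before* they can poison the shuffle buffer fill.
dataset = dataset.apply(tf.data.experimental.ignore_errors())
dataset = dataset.shuffle(buffer_size=3)
dataset = dataset.repeat(2)  # finite so the example can be iterated fully

elements = [int(x) for x in dataset]
```

With the error filter first, both orderings of the input list behave the same.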
tensorflowtensorflow
Generator not compatible with ragged tensors when using model.fit
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: Linux Ubuntu 20
- TensorFlow installed from: source
- TensorFlow version: 2.2
- Python version: 3.8
- CUDA/cuDNN version: CUDA 10
- GPU model and memory: V100, 24 GB

Describe the current behavior: When providing model.fit with a Python generator that creates ragged inputs for a Keras model, the output of the generator will allow model.fit to train when iterated manually, but if one provides the generator itself to model.fit, it will not. Below is a standalone script showing, on synthetic data, how the model fits on the output of the generator when it is used in a for loop, but does not train when the generator is passed to model.fit.

Describe the expected behavior: The model should train given the Python generator, according to the docs.

Standalone code to reproduce the issue:

    import tensorflow as tf
    import numpy as np

    def turn_into_ragged(x):
        return tf.RaggedTensor.from_row_lengths(
            tf.concat([tf.RaggedTensor.from_row_lengths(np.concatenate(sample, axis=0),
                                                        list(map(len, sample)))
                       for sample in x], axis=0),
            list(map(len, x)))

    def get_batches_m(x, y, batch_size=10, random=False):
        """Return a generator that yields batches from vars."""
        if len(x[0]) % batch_size == 0:
            n_batches = len(x[0]) // batch_size
        else:
            n_batches = len(x[0]) // batch_size + 1
        sel = np.asarray(list(range(x[0].shape[0])))
        if random is True:
            np.random.shuffle(sel)
        for ii in range(0, n_batches * batch_size, batch_size):
            # if we're not on the last batch, grab data with size batch_size
            if ii < (n_batches - 1) * batch_size:
                sel_ind = sel[ii:ii + batch_size]
            else:
                sel_ind = sel[ii:]
            x_out = [turn_into_ragged(var[sel_ind]) for var in x]
            y_out = [var[sel_ind] for var in y]
            yield tuple(x_out), tuple(y_out)

    def generate_syn_data(n_i=2000, n_s=30, n_t=200, shape=(10, 10, 3)):
        values = np.random.uniform(0, 1, (n_i,) + shape).astype(np.float32)
        idx_t = np.random.choice(n_t, n_i)
        _, l = np.unique(idx_t, return_counts=True)
        rt0 = tf.RaggedTensor.from_row_lengths(values, l)
        idx_s = np.random.choice(n_s, n_t)
        _, l = np.unique(idx_s, return_counts=True)
        rt1 = tf.RaggedTensor.from_row_lengths(rt0, l)
        y = tf.constant(np.eye(2)[np.random.choice(2, n_s)].astype(np.float32))
        return rt1, y

    def basic_ragged_graph(*input_shapes):
        ragged_inputs = [tf.keras.layers.Input(shape=(None, None) + shape,
                                               dtype=tf.float32, ragged=True)
                         for shape in input_shapes]
        sample_aggregation = tf.concat(
            [tf.keras.layers.Lambda(lambda x: tf.reduce_sum(x, axis=[1, 2]))(ragged_input)
             for ragged_input in ragged_inputs], axis=1)
        logits = tf.keras.layers.Dense(units=2, activation=None)(
            tf.keras.layers.Flatten()(sample_aggregation))
        return tf.keras.Model(inputs=ragged_inputs, outputs=logits)

    # create a simple model that takes 2 ragged inputs and returns 1 output
    tile_shape = (10, 10, 3)
    model = basic_ragged_graph(tile_shape, tile_shape)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True))

    # generate synthetic data for inputs and output
    x1, x2 = generate_syn_data()[0].numpy(), generate_syn_data()[0].numpy()
    y = generate_syn_data()[1].numpy()

    # create a generator object to batch and convert data to tf.Ragged
    train_gen = get_batches_m([x1, x2], [y], batch_size=5, random=True)

    # this method, using the generator's output x_train, y_train, works:
    for x_train, y_train in train_gen:
        model.fit(x_train, y_train)

    # however, when one provides this generator to model.fit, the model will not train:
    train_gen = get_batches_m([x1, x2], [y], batch_size=5, random=True)
    model.fit(train_gen)

Other info / logs: include any logs or source code that would be helpful to diagnose the problem.
tensorflowtensorflow
Misleading error message in tf.broadcast_to
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: Linux Ubuntu 18.04
- TensorFlow installed from: binary
- TensorFlow version: v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: 3.7.6

Describe the current behavior: When running tf.broadcast_to, the input shown in the error message for an invalid argument is different from the given input: [110, 53, 104, 147, 157, 123, 5, 24, 188, 40, 5, 2] (given) vs. [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2] (error message).

Describe the expected behavior: A correct error message.

Standalone code to reproduce the issue:

    import tensorflow as tf
    x = tf.constant([1, 2, 3])
    tf.broadcast_to(x, [110, 53, 104, 147, 157, 123, 5, 24, 188, 40, 5, 2])

Other info / logs:

    InvalidArgumentError: Shape [2,2,2,2,2,2,2,2,2,2,2,2] would have more than 2**63 - 1 elements [Op:BroadcastTo]
tensorflowtensorflow
smart_resize in keras.preprocessing is not compatible with datasets from tf.data
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- TensorFlow installed from: binary
- TensorFlow version: 2.4.0-dev20200717
- Python version: 3.6

Describe the current behavior: Using smart_resize on a Dataset object with size=(200, 200):

    ds = ds.map(lambda img: smart_resize(img, size))

throws an error:

    OperatorNotAllowedInGraphError: in user code:
        <lambda>: lambda image: tf.keras.preprocessing.image.smart_resize(image, size)
        .../tensorflow/python/keras/preprocessing/image.py:126 smart_resize
            if target_ratio > img_ratio:
        .../tensorflow/python/framework/ops.py:878 __bool__
            self._disallow_bool_casting()
        ...
        OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not
        allowed in Graph execution. Use Eager execution or decorate this function
        with @tf.function.

Describe the expected behavior: This should work according to the documentation.

Standalone code to reproduce the issue:

    import tensorflow as tf
    from tensorflow import keras
    import numpy as np

    img_size = 224
    size = (img_size, img_size)
    np_images = np.random.rand(32, size[0], size[1], 3)
    ds_train = tf.data.Dataset.from_tensor_slices(np_images)
    ds_train = ds_train.map(lambda image: tf.keras.preprocessing.image.smart_resize(image, size))
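Until smart_resize works in graph mode, a graph-safe substitute inside Dataset.map is tf.image.resize — a workaround sketch only, and note the assumption that plain resizing is acceptable (unlike smart_resize, it does not crop to preserve the aspect ratio):

```python
import numpy as np
import tensorflow as tf

# Workaround sketch: tf.image.resize is graph-safe inside Dataset.map.
size = (224, 224)
images = np.random.rand(8, 200, 300, 3).astype("float32")
ds = tf.data.Dataset.from_tensor_slices(images)
ds = ds.map(lambda img: tf.image.resize(img, size))
batch = next(iter(ds.batch(4)))
```

If the aspect-ratio-preserving crop matters, smart_resize would have to be called eagerly before building the dataset.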
tensorflowtensorflow
Fitting the model with features on GPU: AssertionError: Could not compute output Tensor("dense_3/Identity:0", shape=(None, 1), dtype=float32)
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

TF version: 2.2

The issue occurs when we call model.fit after creating a Keras model. We are using the TF Hub BERT layer for its word embeddings, with trainable set to False, as below:

    bert_layer = hub.KerasLayer(..., trainable=False)
    pooled_output, sequence_output = bert_layer([input_word_ids, input_mask, segment_ids])
    bert_model = Model(inputs=[input_word_ids, input_mask, segment_ids], outputs=[pooled_output])

x_train_pooled_output shape: (50000, 768) and x_test_pooled_output shape: (20000, 768); batch_size = 32.

    model3.fit(x_train_pooled_output, y_train, epochs=nb_epochs, batch_size=batch_size,
               verbose=1, validation_data=(x_test_pooled_output, y_test),
               callbacks=callbacks_list)

[image]
tensorflowtensorflow
rgb_to_yuv: conflicting information between the example and the description
Bug
The documentation says this function is only well defined if the values are between 0 and 1, but the example uses an input with values greater than 1.
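The usual way to satisfy the documented [0, 1] precondition is to scale integer pixel data before calling the conversion op. This sketch uses NumPy only to show the scaling step itself (the conversion call is TensorFlow's tf.image.rgb_to_yuv, not reproduced here):

```python
import numpy as np

# rgb_to_yuv is documented as well defined only for values in [0, 1],
# so a uint8 image should be scaled before conversion.
img_uint8 = np.array([[[255, 128, 0]]], dtype=np.uint8)
img_float = img_uint8.astype(np.float32) / 255.0
print(img_float.min() >= 0.0 and img_float.max() <= 1.0)  # True
```

An example that feeds unscaled values (as the one in the docs does) silently violates this precondition, which is exactly the inconsistency this issue reports.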
tensorflowtensorflow
tf.parallel_stack doesn't support eager execution
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: (not given)
- Mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.2
- Python version: 3.6.8
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Buggy example:

    x = tf.constant([1, 4])
    y = tf.constant([2, 5])
    z = tf.constant([3, 6])
    tf.parallel_stack([x, y, z])

Working example:

    x = tf.constant([1, 4])
    y = tf.constant([2, 5])
    z = tf.constant([3, 6])
    with tf.compat.v1.Session() as sess:
        print(sess.run(tf.parallel_stack([x, y, z])))
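For readers unfamiliar with the op: tf.parallel_stack, like tf.stack, packs a list of rank-R tensors into one rank-(R+1) tensor along a new leading axis. The shape semantics can be sketched with NumPy's np.stack (a stand-in here, not the op under test):

```python
import numpy as np

# np.stack mirrors the shape semantics of tf.parallel_stack / tf.stack:
# three shape-(2,) inputs become one shape-(3, 2) output.
x, y, z = np.array([1, 4]), np.array([2, 5]), np.array([3, 6])
stacked = np.stack([x, y, z])
print(stacked.shape)     # (3, 2)
print(stacked.tolist())  # [[1, 4], [2, 5], [3, 6]]
```

In eager mode, tf.stack is the drop-in replacement that produces this result; the bug is that tf.parallel_stack only works inside a v1 Session as shown above.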
tensorflowtensorflow
Running a float16-quantized model on GPU
Bug
System information:
- Host OS platform and distribution: Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (commit SHA if source): 1.15
- Target platform: Android 9 (API 28), Mali-T864 GPU

Describe the problem:
We tried to run a model post-quantized to float16 on a robot with the GPU delegate, according to [...] and [...], but it fails to run on the GPU even after we graph-transformed the operators that are not supported on GPU. The log is attached. The interesting thing is that if we do not quantize the model to float16, all operators of the model can successfully run on the GPU. Netron shows there are lots of Dequantize operators added to the graph after we use the TFLite converter to quantize the model to float16. So what should we do to let the quantized float16 model run on the GPU entirely?

One more question: we found a parameter SetAllowFp16PrecisionForFp32 in the TFLite C++ API. What is the difference between: 1) setting this to true and using a float32 model; 2) setting this to true and using a float16 model; 3) setting this to false and using a float32 model; 4) setting this to false and using a float16 model? Many thanks.

The model is uploaded in [...]. The input is an image of size 193x321x3.

Please provide the exact sequence of commands/steps when you ran into the problem:

    INFO: Initialized TensorFlow Lite runtime.
    INFO: Created TensorFlow Lite delegate for GPU.
    ERROR: Next operations are not supported by GPU delegate:
    CONV_2D: Expected 1 input tensor(s), but node has 3 runtime input(s).
    DEPTHWISE_CONV_2D: Expected 1 input tensor(s), but node has 3 runtime input(s).
    DEQUANTIZE: Operation is not supported.
    First 0 operations will run on the GPU, and the remaining 198 on the CPU.
tensorflowtensorflow
Model Maker outputs a single file, no labels.txt
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: [...]

Description of issue (what needs changing):
The tutorial for TensorFlow Model Maker says that on export there will be a model.tflite file and then a labels.txt file. However, when I export the model using the instructions, it only outputs a single model.tflite. The console output says that labels.txt is stored in a temp directory (which appears to be subsequently deleted) and that the labels are merged into the model.tflite file. Would I still be able to use this on mobile, or is there any way I can extract labels.txt?

(Template prompts: Clear description — for example, why should someone use this method, how is it useful? Correct links — is the link to the source code correct? Parameters defined — are all parameters defined and formatted correctly? Returns defined — are return values defined? Raises listed and defined — are the errors defined? For example, see the API guide on how to write testable usage examples. Usage example — is there a usage example? Request visuals, if applicable — are there currently visuals? If not, will they clarify the content? Submit a pull request? — are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide.)
tensorflowtensorflow
InvalidArgumentError: Only ranks up to 5 supported
Bug
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: CentOS Linux release 7.6.1810
- Mobile device: No
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 1.14.0
- Python version: 3.7.6
- CUDA/cuDNN version: V9.1.85
- GPU model and memory: Tesla K80, 24 GB

I was working with Keras to build the following simplified model:

    class MyModel:
        def __init__(self):
            pass

        def build_model(self):
            inp = Input(shape=self.outputs.shape)
            # We'll start exploring with one hidden layer
            x = Dense(128, activation='elu')(inp)
            outp = Dense(1, activation='elu')(x)
            self.model = Model(inputs=inp, outputs=outp)
            self.model.compile(loss=dice_loss, optimizer='adam', metrics=[dice_coef])

        def run_model(self):
            # For the sake of simplicity
            self.outputs = np.random.rand(8, 64, 12, 86, 98, 1)
            # labels shape (64, 12, 86, 98) -> (1, 64, 12, 86, 98)
            self.labels = np.expand_dims(self.labels, axis=0)
            # outputs shape (8, 64, 12, 86, 98, 1) -> (1, 64, 12, 86, 98, 8)
            self.outputs = np.swapaxes(self.outputs, 0, -1)
            self.build_model()
            self.model.fit(self.outputs, self.labels, batch_size=1, epochs=200)

However, upon trying to fit the model as:

    model = MyModel()
    model.run_model()

the following error appears:

    File "/home/kdqm927/miniconda3/envs/segmentation/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_v1.py", line 785, in fit
        use_multiprocessing=use_multiprocessing)
    File ".../tensorflow/python/keras/engine/training_arrays.py", line 634, in fit
        shuffle=shuffle)
    File ".../tensorflow/python/keras/engine/training_v1.py", line 2308, in _standardize_user_data
        batch_size=batch_size)
    File ".../tensorflow/python/keras/engine/training_v1.py", line 2335, in _standardize_tensors
        exception_prefix='input')
    File ".../tensorflow/python/keras/engine/training_utils.py", line 573, in standardize_input_data
        'with shape ' + str(data_shape))
    ValueError: Error when checking input: expected input_1 to have 7 dimensions, but got array with shape (1, 64, 12, 86, 98, 8)

If I tweak the code by adding another dimension to self.outputs, following the exception from the previous error:

    self.model.fit(np.expand_dims(self.outputs, axis=0), self.labels, batch_size=1, epochs=200)

the error I get is:

    Traceback (most recent call last):
      File "file.py", line 257, in <module>
        s.fit_metamodel(outputs, validation_arr)
      File "file.py", line 157, in fit_metamodel
        callbacks=callbacks)
      File "/home/.../tensorflow/python/keras/engine/training_v1.py", line 785, in fit
        use_multiprocessing=use_multiprocessing)
      File ".../tensorflow/python/keras/engine/training_arrays.py", line 666, in fit
        steps_name='steps_per_epoch')
      File ".../tensorflow/python/keras/engine/training_arrays.py", line 386, in model_iteration
        batch_outs = f(ins_batch)
      File ".../tensorflow/python/keras/backend.py", line 3632, in __call__
        run_metadata=self.run_metadata)
      File ".../tensorflow/python/client/session.py", line 1472, in __call__
        run_metadata_ptr)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Only ranks up to 5 supported: [1, 1, 64, 12, 86, 98, 128] [[node dense/BiasAdd]]

Edit: TensorFlow version: 2.2.0
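A common workaround for ops limited to rank <= 5 is to fold adjacent axes together before the op and unfold them afterwards; reshape preserves element order (and copies nothing for contiguous arrays). This is a generic NumPy sketch of the idea, using small dimensions in place of the report's (1, 64, 12, 86, 98, 8) batch:

```python
import numpy as np

# Fold the two leading axes of a rank-6 array into one, dropping to rank 5,
# then restore the original shape afterwards.
x = np.zeros((2, 3, 4, 5, 6, 7))          # rank 6
folded = x.reshape(2 * 3, 4, 5, 6, 7)     # rank 5
restored = folded.reshape(x.shape)        # rank 6 again
print(folded.ndim)                        # 5
print(restored.shape == x.shape)          # True
```

Whether this helps here depends on which axes the Dense layer is meant to mix; it addresses the "Only ranks up to 5 supported" constraint, not the model design.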
tensorflowtensorflow
ReadVariableOp:value:0 is not found when converting a SavedModel to TFLite
Bug
System information:
- OS platform and distribution: OS X 10.14.6
- TensorFlow installed from: binary (pip3)
- TensorFlow version: 2.2.0

My initial issue was very similar to this [...], so I have followed the advice there and at issue #34350. I am running the following code:

    saved_model_obj = tf.saved_model.load(export_dir=args.model_dir)
    concrete_func = saved_model_obj.signatures['serving_default']
    converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.experimental_new_converter = True
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                           tf.lite.OpsSet.SELECT_TF_OPS]
    tflite_model = converter.convert()

The output from the converter invocation:

    Traceback (most recent call last):
      File "converter.py", line 32, in <module>
        main()
      File "converter.py", line 26, in main
        tflite_model = converter.convert()
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 472, in convert
        graph=frozen_func.graph)
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/lite/python/util.py", line 218, in run_graph_optimizations
        return tf_optimizer.OptimizeGraph(config, meta_graph)
      File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/grappler/tf_optimizer.py", line 58, in OptimizeGraph
        graph_id, strip_default_attributes)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: 'Return' ReadVariableOp:ReadVariableOp:value:0 is not found in function outputs (ReadVariableOp: float)

The full output, including the function_optimizer lines, and the saved model are attached. The saved model was exported from an Estimator using estimator.export_saved_model.

full_output.txt, saved_model.pb.zip

Many thanks in advance.
tensorflowtensorflow
Error in tf.Session.run or tf.reduce_max
Bug
Question:

    opt = sess.run(pred_dfs, feed_dict={x: clinic_factor_val, keep_prob: 1.0, treatment: treat})
    opt1, pat_pre = sess.run([output1, pred_dfs], feed_dict={x: clinic_factor_val, keep_prob: 1.0, treatment: treat})
    print(opt1)
    print(opt)
    print(pat_pre)

The relationship between pred_dfs and output1 is pred_dfs = tf.reduce_max(output1, axis=1); output1 is an intermediate variable. When pred_dfs is run alone, the result is wrong, but when pred_dfs and output1 are run together, the result of pred_dfs is correct. Why?

The results of a run:

    opt1: [[0.23930925, 0.36091098, 0.61886203]]
    opt: [0.72315866]
    pat_pre: [0.61886203]

Note that the results are consistent when output1 is run alone and when pred_dfs and output1 are run together.
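To make the reported inconsistency concrete: given the printed opt1, the row-wise maximum is fully determined, as this NumPy sketch of the reduce_max relationship shows. Any run of pred_dfs that returns a different value (like the 0.72315866 above) implies output1 itself differed between the two sess.run calls, e.g. due to randomness such as dropout controlled by keep_prob:

```python
import numpy as np

# pred_dfs = tf.reduce_max(output1, axis=1) is just the row-wise maximum;
# np.max reproduces that relationship on the values printed in the report.
output1 = np.array([[0.23930925, 0.36091098, 0.61886203]])
pred = np.max(output1, axis=1)
print(pred)  # [0.61886203]
```

So the deterministic reduction itself cannot produce two answers; the two sess.run calls must be evaluating the graph (and hence output1) independently.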
tensorflowtensorflow
Cannot run a TensorFlow 2.0 saved model with Java
Bug
In Java, I cannot feed the input tensor to the loaded model (a model which was saved in TF 2.0 in .pb format).

My model (TF 2.0):

    Model: sequential
    Layer (type)          Output Shape    Param #
    nn_input (Flatten)    (None, 5)       0
    nn_output (Dense)     (None, 1)       513

My Java code:

    final int NUM_PREDICTIONS = 1;
    final int INPUT_SIZE = 5;
    try (SavedModelBundle b = SavedModelBundle.load("nn_visualization/NNRegressor/saved_model", "serve")) {
        Session sess = b.session();
        Tensor x = Tensor.create(new long[] {INPUT_SIZE}, FloatBuffer.wrap(new float[] {10, 9, 23, 7, 1}));
        float[] y = sess.runner()
            .feed("nn_input", x)
            .fetch("nn_output")
            .run()
            .get(0)
            .copyTo(new float[NUM_PREDICTIONS]);
        System.out.println(y[0]);
    }

Error:

    Exception in thread "main" java.lang.IllegalArgumentException: No Operation named [nn_input] in the Graph
        at org.tensorflow.Session$Runner.operationByName(Session.java:380)
        at org.tensorflow.Session$Runner.parseOutput(Session.java:389)
        at org.tensorflow.Session$Runner.feed(Session.java:131)
        at HelloTensorFlow.main(HelloTensorFlow.java:25)

Installation:
- Java JDK version: 1.8
- TF model saved with version: python3 -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)" -> v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Java TensorFlow version: 1.14.0

Assumption: I assume that the problem arises due to the difference between the TensorFlow version in which the model was saved (TF 2.0) and the one in which it is loaded (TF 1.14). I guess I'm wrong with the first argument of the feed method. Any help? Thanks in advance, Milan
tensorflowtensorflow
Docs: incomplete description of callbacks.ModelCheckpoint's save_best_only parameter
Bug
URL(s) with the issue: [...]

Description of issue (what needs changing):
1. "mode: auto min max" should be "mode: one of {auto, min, max}", I guess (minor).
2. save_best_only: I believe the description is incomplete. With save_best_only=True, not only will the latest best model not be overwritten, but also the current model is not written at all, even if it has another filename than the latest best model. This is kind of implied by the name of the parameter, but the description should include that too.
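The behavior the docs should spell out can be modeled with a toy loop. This is NOT the Keras implementation, just a minimal sketch of the save_best_only=True semantics being described (mode "min", hypothetical per-epoch filename pattern):

```python
# Toy model of ModelCheckpoint(save_best_only=True) semantics: epochs that
# do not improve on the best monitored value write nothing at all, even
# though each epoch would have used a distinct filename.
def checkpoint_run(val_losses):
    best, written = float("inf"), []
    for epoch, loss in enumerate(val_losses):
        if loss < best:       # only improvements are saved
            best = loss
            written.append(f"model-epoch{epoch}.h5")  # hypothetical pattern
    return written

print(checkpoint_run([0.9, 0.7, 0.8, 0.6]))
# ['model-epoch0.h5', 'model-epoch1.h5', 'model-epoch3.h5']
```

Note epoch 2 produces no file at all, which is exactly the point the current docstring leaves implicit.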
tensorflowtensorflow
Threshold needs to be set to 0 when using binary accuracy with raw prediction values (from_logits=True)
Bug
This is mainly a documentation bug (official TensorFlow tutorial), but it is a dangerous trap and might also happen in general to users, so (see my last sentence below) this could also be fixed in TensorFlow so that it detects the situation automatically.

In this tutorial, raw prediction values (from_logits=True) are used, so we have negative values and positive values, while "prediction will be treated as a logit, or a raw prediction value. Positive numbers predict class 1, negative numbers predict class 0." However, the model.compile statement is as follows:

    base_learning_rate = 0.0001
    model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=base_learning_rate),
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])

This is wrong, as per default the threshold value to classify is 0.5:

    tf.keras.metrics.binary_accuracy(y_true, y_pred, threshold=0.5)
    tf.keras.metrics.BinaryAccuracy(name='binary_accuracy', dtype=None, threshold=0.5)

threshold: (Optional) Float representing the threshold for deciding whether prediction values are 1 or 0.

This leads to wrong classifications; model.evaluate will also give false accuracy measures. The reason is that predicted values in the range [0, 0.49999...] are wrongly classified as 0 (I am not sure what happens to a value of exactly 0.5), whereas they actually should be classified as 1. So it needs to be corrected to:

    model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=base_learning_rate),
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=[tf.keras.metrics.BinaryAccuracy(threshold=0.0)])

It would be even better if this were corrected inside TensorFlow, so that it automatically detects that from_logits=True is set and then assumes that the default threshold is not 0.5 anymore but 0.0, and maybe outputs an additional warning.
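The trap can be demonstrated numerically without Keras. This is a plain NumPy re-implementation of thresholded binary accuracy (a sketch, not the Keras metric) applied to logits, showing how the default 0.5 threshold misclassifies a confidently positive logit in (0, 0.5):

```python
import numpy as np

# With from_logits=True, predictions are raw logits: positive => class 1,
# negative => class 0. Thresholding logits at 0.5 (the Keras default) wrongly
# assigns logits in (0, 0.5) to class 0; thresholding at 0.0 matches the
# intended logit semantics.
def binary_accuracy(y_true, logits, threshold):
    preds = (logits > threshold).astype(int)
    return (preds == y_true).mean()

y_true = np.array([1, 1, 0])
logits = np.array([0.3, 2.0, -1.0])  # 0.3 is a positive (class-1) logit

print(binary_accuracy(y_true, logits, 0.5))  # 0.666... (0.3 counted as class 0)
print(binary_accuracy(y_true, logits, 0.0))  # 1.0
```

The same data yields two different accuracies purely because of the threshold, which is why the tutorial's metrics=['accuracy'] silently under-reports.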
tensorflowtensorflow
Running into this issue while importing the tflearn library
Bug
    D:\software\anaconda\lib\site-packages\tflearn\__init__.py in <module>
          2
          3 # Config
          4 from . import config
          5 from .config import is_training, get_training_mode, init_graph
          6

    D:\software\anaconda\lib\site-packages\tflearn\config.py in <module>
          3 import tensorflow as tf
          4
          5 from .variables import variable
          6
          7

    D:\software\anaconda\lib\site-packages\tflearn\variables.py in <module>
          5 import tflearn
          6
          7 from tensorflow.contrib.framework.python.ops import add_arg_scope as contrib_add_arg_scope
          8 from tensorflow.python.framework import ops
          9 from tensorflow.python.ops import variable_scope

    ModuleNotFoundError: No module named 'tensorflow.contrib'

I have installed the library.
tensorflowtensorflow
RuntimeError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
Bug
I want to run TensorFlow with CUDA 10.0 or 10.1 on Windows.

Abstract error message:

    RuntimeError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version

System information:
- tensorflow-gpu 2.2.0
- Python 3.7.1
- Windows 10 Pro
- CUDA 10.1.105
- cuDNN 7.6.4
- NVIDIA driver 451.48
- Visual Studio 2017
- GeForce GTX 1080 Ti

Details:

NVIDIA driver (nvidia-smi):

    Wed Jul 15 15:14:29 2020
    NVIDIA-SMI 451.48        Driver Version: 451.48        CUDA Version: 11.0
    GPU  Name              TCC/WDDM   Bus-Id          Disp.A   Volatile Uncorr. ECC
    Fan  Temp  Perf  Pwr:Usage/Cap           Memory-Usage   GPU-Util   Compute M.
    0    GeForce GTX 108...  WDDM    00000000:01:00.0  Off                    N/A
    25%  30C   P8    9W / 250W       1310MiB / 11264MiB       1%          Default

    Processes:
    GPU   GI/CI PID    Type  Process name                      GPU Memory Usage
    0     N/A   2228   C+G   ...es\TextInput\InputApp.exe      N/A
    0     N/A   9272   C+G   Insufficient Permissions          N/A
    0     N/A   9372   C+G   Insufficient Permissions          N/A
    0     N/A   11260  C+G   C:\Windows\explorer.exe           N/A
    0     N/A   11740  C+G   ...artMenuExperienceHost.exe      N/A
    0     N/A   12340  C+G   ...w5n1h2txyewy\SearchUI.exe      N/A
    0     N/A   12952  C+G   ...zf8qxf38zg5c\SkypeApp.exe      N/A
    0     N/A   13272  C+G   ...y\ShellExperienceHost.exe      N/A
    0     N/A   15784  C+G   ...i\Application\chrome.exe       N/A
    0     N/A   16300  C+G   ...bbwe\Microsoft.Photos.exe      N/A

CUDA (nvcc -V):

    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2019 NVIDIA Corporation
    Built on Fri_Feb__8_19:08:26_Pacific_Standard_Time_2019
    Cuda compilation tools, release 10.1, V10.1.105

cuDNN (C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include\cudnn.h):

    #define CUDNN_MAJOR 7
    #define CUDNN_MINOR 6
    #define CUDNN_PATCHLEVEL 4

Environment variables:

    CUDA_PATH       = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
    CUDA_PATH_V10_1 = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
    CUDNN_PATH      = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
    PATH contains:    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin;
                      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\libnvvp;
                      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\extras\CUPTI\libx64;
                      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include

Python (python -V): Python 3.7.1

pip list (excerpt):

    Keras                     2.4.3
    Keras-Preprocessing       1.1.2
    tensorboard               2.2.2
    tensorboard-plugin-wit    1.7.0
    tensorflow-estimator      2.2.0
    tensorflow-gpu            2.2.0
    tensorflow-gpu-estimator  2.2.0

Error message (python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"):

    2020-07-15 15:00:36.136510: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
    2020-07-15 15:00:37.931078: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    2020-07-15 15:00:37.946142: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x148edb04360 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2020-07-15 15:00:37.957182: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
    2020-07-15 15:00:37.965143: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
    2020-07-15 15:00:37.996562: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 11.00GiB deviceMemoryBandwidth: 451.17GiB/s
    2020-07-15 15:00:38.013444: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
    2020-07-15 15:00:38.026126: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
    2020-07-15 15:00:38.038540: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
    2020-07-15 15:00:38.049231: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
    2020-07-15 15:00:38.061932: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
    2020-07-15 15:00:38.074267: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
    2020-07-15 15:00:38.091257: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
    2020-07-15 15:00:38.100188: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "D:\...\lib\site-packages\tensorflow\python\client\device_lib.py", line 43, in list_local_devices
        for s in pywrap_device_lib.list_devices(serialized_config)
    RuntimeError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
tensorflowtensorflow
Pylint errors on the master branch
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: macOS
- Mobile device: N/A
- TensorFlow installed from (source or binary): N/A
- TensorFlow version: master branch
- Python version: 3.8
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior:
When I run pylint on TF Python files, I get a lot of failures. The following is an example using the master branch. Is this expected, or is it a bug? Fixing pylint issues in our PRs is hard given all the existing failures. What do you suggest to make contributing easier?

    (from tensorflow/python) $ pylint --rcfile=../tools/ci_build/pylintrc keras/layers/pooling.py
    ************* Module keras.layers.pooling
    keras/layers/pooling.py:167:71: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:168:74: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:213:71: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:214:74: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:377:0: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:379:0: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:383:60: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:385:41: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:387:62: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:426:71: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:427:74: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:483:71: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:484:74: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:628:71: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:629:74: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:681:71: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:682:74: C0303: Trailing whitespace (trailing-whitespace)
    keras/layers/pooling.py:72:2: W0221: Parameters differ from overridden 'call' method (arguments-differ)
    keras/layers/pooling.py:95:4: R1705: Unnecessary "else" after "return" (no-else-return)
    keras/layers/pooling.py:288:2: W0221: Parameters differ from overridden 'call' method (arguments-differ)
    keras/layers/pooling.py:315:4: R1705: Unnecessary "else" after "return" (no-else-return)
    keras/layers/pooling.py:564:2: W0221: Parameters differ from overridden 'call' method (arguments-differ)
    keras/layers/pooling.py:600:4: R1705: Unnecessary "else" after "return" (no-else-return)
    keras/layers/pooling.py:734:4: R1705: Unnecessary "else" after "return" (no-else-return)
    keras/layers/pooling.py:739:2: W0221: Parameters differ from overridden 'call' method (arguments-differ)
    keras/layers/pooling.py:792:2: W0221: Parameters differ from overridden 'call' method (arguments-differ)
    keras/layers/pooling.py:794:4: R1705: Unnecessary "else" after "return" (no-else-return)
    keras/layers/pooling.py:868:4: R1705: Unnecessary "else" after "return" (no-else-return)
    keras/layers/pooling.py:873:2: W0221: Parameters differ from overridden 'call' method (arguments-differ)
    keras/layers/pooling.py:918:4: R1705: Unnecessary "else" after "return" (no-else-return)
    keras/layers/pooling.py:959:4: R1705: Unnecessary "else" after "return" (no-else-return)
    keras/layers/pooling.py:975:4: R1705: Unnecessary "else" after "return" (no-else-return)
    keras/layers/pooling.py:980:2: W0221: Parameters differ from overridden 'call' method (arguments-differ)
    keras/layers/pooling.py:1019:4: R1705: Unnecessary "else" after "return" (no-else-return)
    keras/layers/pooling.py:1054:4: R1705: Unnecessary "else" after "return" (no-else-return)

    Your code has been rated at 8.54/10 (previous run: 8.81/10, -0.27)
tensorflowtensorflow
Segmentation fault in subgraph.h when running TensorFlow Lite in ROS Foxy
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): I used example code to write a ROS node
- OS platform and distribution: ROS Foxy Docker container (Ubuntu 20.04); currently testing on an x86_64 system, but will deploy it to an aarch64 system
- Mobile device: No
- TensorFlow installed from (source or binary): source
- TensorFlow version: git tag d855adfc5a0195788bf5f92c3c7352e638aa1109
- Python version: none, I'm using C++
- Bazel version (if compiling from source): none, I'm using CMake 3.17
- GCC/compiler version (if compiling from source): 4:9.3.0-1ubuntu2
- CUDA/cuDNN version: none
- GPU model and memory: Google Coral chip

Exact commands to reproduce / describe the problem:
I already described the problem here: [...]. To make things short: I tried using these examples in ROS but get a segmentation fault error when I try to call interpreter->inputs()[index]. I've been told to ask here, since it seems to be a TensorFlow API issue.

Source code / logs:
- Start a Docker container: docker run -it ros:foxy
- Execute: source /opt/ros/foxy/setup.bash
- Run: apt update; apt install unzip; apt install curl; apt install libusb-1.0-0
- Update CMake like this:
      apt install -y wget
      apt-get install libssl-dev
      wget [...]
      tar -xvf cmake-3.17.0.tar.gz
      rm cmake-3.17.0.tar.gz
      cd cmake-3.17.0
      ./configure
      make
      make install
- Create a dev_ws directory in the home folder, cd into ~/dev_ws, and create a src folder. Copy the zipped folder nn_tflite into the src folder. The directory structure should be: ~/dev_ws/src/nn_tflite
- cd into ~/dev_ws and execute: colcon build --packages-select nn_tflite --symlink-install
- Execute: . install/setup.bash
- Run the ROS node with: ros2 run nn_tflite tflite_nn
- Run gdb: cd install/nn_tflite/lib/nn_tflite && gdb --args tflite_nn

nn_tflite.zip
tensorflowtensorflow
TensorFlow 2.2.0 doesn't detect GPU with CUDA version 10.2
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution: Linux
- Mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version: 2.2.0
- Python version: 3.6.10
- Bazel version (if compiling from source): (not given)
- GCC/compiler version (if compiling from source): (not given)
- CUDA/cuDNN version: 10.2
- GPU model and memory: Tesla K40m

Describe the current behavior:
Keras version 2.4.3 with TensorFlow as backend doesn't detect the GPU. This was verified by running tf.config.experimental.list_physical_devices('GPU'), which returns an empty list. Also checked:

    from tensorflow.python.client import device_lib
    print(device_lib.list_local_devices())

which returns:

    [name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 11064916497553704899,
     name: "/device:XLA_CPU:0" device_type: "XLA_CPU" memory_limit: 17179869184 locality { } incarnation: 5592130336569042773 physical_device_desc: "device: XLA_CPU device"]

Describe the expected behavior:
It should detect the GPU, as detected by PyTorch:

    torch.cuda.is_available()       # output: True
    torch.cuda.current_device()     # output: 0
    torch.cuda.get_device_name(0)   # output: 'Tesla K40m'

Standalone code to reproduce the issue: not required in this case, as it seems to be some version-mismatch issue.

Other info / logs: attached the tf_env.txt output file for the environment settings. tf_env.txt. Thanks.
tensorflowtensorflow
tf.math.acos raises UnimplementedError for complex tensors
Bug
System information:
- OS platform and distribution: Windows 10 1909
- TensorFlow installed from: binary
- TensorFlow version: 2.2.0rc2
- Python version: 3.8.0
- CUDA/cuDNN version: none

Describe the current behavior:
The documentation says complex inputs are allowed, but tf.math.acos and tf.math.asin raise UnimplementedError for complex64 or complex128 inputs. I found that tf.math.acos and tf.math.asin use atan2, which does not support complex inputs, so these ops may need a new implementation without atan2 when the input is complex.

Standalone code to reproduce the issue (Python 3):

    import tensorflow as tf
    a = tf.constant([1j], dtype=tf.complex64)
    print(tf.math.acos(a))

Describe the expected behavior:

    tf.Tensor([1.5708-0.88137j], shape=(1,), dtype=complex64)

Output:

    2020-07-14 15:50:38.372937: W tensorflow/core/framework/op_kernel.cc:1752] OP_REQUIRES failed at xla_compile_on_demand_op.cc:216 : Unimplemented: Binary complex op 'atan2'
    Traceback (most recent call last):
      File "acos_err.py", line 15, in <module>
        tf.math.acos(a)
      File "C:\Users\root\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 193, in acos
        _ops.raise_from_not_ok_status(e, name)
      File "C:\Users\root\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\framework\ops.py", line 6653, in raise_from_not_ok_status
        six.raise_from(core._status_to_exception(e.code, message), None)
      File "<string>", line 3, in raise_from
    tensorflow.python.framework.errors_impl.UnimplementedError: Binary complex op 'atan2' [Op:Acos]
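Until the op supports complex inputs, the expected value can be checked against the standard principal-branch identity acos(z) = -i * ln(z + i * sqrt(1 - z^2)), which Python's cmath.acos also implements. This is a pure-Python sketch (a workaround/reference, not the TF kernel), and like any such formula it should be used with care near the branch cuts on the real axis with |z| > 1:

```python
import cmath

# Principal branch: acos(z) = -i * ln(z + i * sqrt(1 - z**2))
def complex_acos(z):
    return -1j * cmath.log(z + 1j * cmath.sqrt(1 - z * z))

z = 1j
print(complex_acos(z))  # approx (1.5708 - 0.88137j), the value expected above
print(cmath.acos(z))    # cmath's principal value agrees
```

This confirms the expected output quoted in the report: acos(1j) is approximately 1.5708 - 0.88137j.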
tensorflowtensorflow
TensorFlow Lite runtime check doesn't account for bytecode
Bug
This line in TensorFlow Lite doesn't consider the possibility that the name of the file could end with .pyc instead of .py if the module is compiled to bytecode (e.g. when used by PyInstaller). I can file a PR, but I want to check first if there's a reason why this is the way it is right now. This causes the TensorFlow Lite runtime to try to import full TensorFlow when it's compiled to bytecode, which of course fails.

L28: I would plan to change this to:

    import os
    if not os.path.splitext(__file__)[0].endswith('tflite_runtime/interpreter'):

Feedback welcome!
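The proposed fix works because os.path.splitext strips whichever extension is present, so the same check covers both source (.py) and bytecode (.pyc) module files. A small stdlib-only demonstration (the file names are illustrative):

```python
import os

# splitext removes ".py" and ".pyc" alike, so a check on the extensionless
# path matches both the source and the compiled-bytecode form of the module.
names = ("tflite_runtime/interpreter.py", "tflite_runtime/interpreter.pyc")
results = [os.path.splitext(n)[0].endswith("tflite_runtime/interpreter") for n in names]
print(results)  # [True, True]
```

By contrast, a check that hard-codes the ".py" suffix fails for the ".pyc" case, which is the bug being reported.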
tensorflowtensorflow
TensorFlow 2.2: No gradients provided for any variable
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution: Ubuntu 18.04
- Mobile device: none
- TensorFlow installed from: pip install
- TensorFlow version: 2.2
- Python version: 3.6.9
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: 10.1 / 7.6.5
- GPU model and memory: NVIDIA 1080 Ti, 12 GB

Describe the current behavior:
I built a custom U-Net with a custom DenseNet encoder. After I compile the model and use fit_generator, I get "No gradients provided for any variable".

Describe the expected behavior:
The model runs the training.

Standalone code to reproduce the issue: link: Colab

Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflow/tensorflow
Doc error for tf.linalg.trace
Bug
I think the docstring here (#L2758), which reads

```python
output[i, j, k, ..., l] = trace(x[i, j, i, ..., l, :, :])
```

should be

```python
output[i, j, k, ..., l] = trace(x[i, j, k, ..., l, :, :])
```
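For context, a small pure-Python sketch of what that docstring describes — the trace taken over the last two dimensions of a batched input (illustrative only, not the TF implementation):

```python
def trace2d(m):
    # Sum of the main diagonal of one 2-D matrix (given as a list of rows).
    return sum(m[i][i] for i in range(min(len(m), len(m[0]))))

def batched_trace(x):
    # x: a list of 2-D matrices; output[i] = trace(x[i, :, :]).
    return [trace2d(m) for m in x]

x = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
print(batched_trace(x))  # [5, 13]
```

Each output index corresponds to the leading (batch) index, never to a diagonal index — which is why the repeated `i` in the current docstring is wrong.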
tensorflow/tensorflow
Create a simple speech recognition example using TensorFlow 2
Bug
A tracking bug for migrating the TF1 speech recognition example to TF 2.
tensorflow/tensorflow
TFLite: two consecutive Dequantize ops are generated in the structure of the model after integer quantization
Bug
1. System information

- OS platform and distribution: Ubuntu 18.04 x86_64
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or GitHub SHA if from source): TensorFlow 2.3.0-rc1

2. Command used to run the converter and code

Used the Python API. Here's a minimal Google Colaboratory notebook that can reproduce the situation.

(1) Integer quantization. Performs integer quantization; the resources used for quantization were obtained in Google Colaboratory. This step successfully generates an integer-quantized .tflite file.

```python
# TensorFlow 2.3.0-rc1
import tensorflow as tf
import numpy as np

def representative_dataset_gen():
    for image in raw_test_data:
        image = tf.image.resize(image, (512, 512))
        image = image[np.newaxis, :, :, :]
        image = image - 127.5
        image = image * 0.007843
        yield [image]

raw_test_data = np.load('calibration_data_img_coco_512x512.npy', allow_pickle=True)

# Integer quantization - input/output float32
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()
with open('efficientdet_d0_512x512_integer_quant.tflite', 'wb') as w:
    w.write(tflite_quant_model)
print('Integer quantization complete! - efficientdet_d0_512x512_integer_quant.tflite')
```

Console:

```
WARNING:tensorflow:Importing a function (...) with ops with custom gradients. Will likely fail if a gradient is requested.
WARNING:tensorflow:Importing a function (...) with ops with custom gradients. Will likely fail if a gradient is requested.
WARNING:tensorflow:Issue encountered when serializing global_step. Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore. 'to_proto' is not supported in EAGER mode.
Integer quantization complete! - efficientdet_d0_512x512_integer_quant.tflite
```

(2) Checking the operation of the model. The next step is to check the execution of the model with a minimal amount of test code. When you run this test code, an error occurs in which the input geometry mismatch for the Dequantize op is reported:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='efficientdet_d0_512x512_integer_quant.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print('input:', input_details)
print('')
print('output:', output_details)

input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print('output_data.shape:', output_data.shape)
```

Console:

```
RuntimeError                              Traceback (most recent call last)
      3
      4 interpreter = tf.lite.Interpreter(model_path='efficientdet_d0_512x512_integer_quant.tflite')
----> 5 interpreter.allocate_tensors()
      6 input_details = interpreter.get_input_details()
      7 output_details = interpreter.get_output_details()

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter.py in allocate_tensors(self)
    241   def allocate_tensors(self):
    242     self._ensure_safe()
--> 243     return self._interpreter.AllocateTensors()
    244
    245   def _safe_to_run(self):

RuntimeError: tensorflow/lite/kernels/dequantize.cc:61 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 || op_context.input->type == kTfLiteInt16 || op_context.input->type == kTfLiteFloat16 was not true. Node number 782 (DEQUANTIZE) failed to prepare.
```

3. Link to the SavedModel, GraphDef and test dataset

The links below contain EfficientDet-D0's GraphDef and SavedModel, and the test dataset. The integer-quantized .tflite file can be obtained from the link below.

4. Failure details

In some places a double Dequantize op is generated in duplicate. Due to this issue, running the Python API `interpreter.allocate_tensors()` causes an error indicating that the structure of the model is flawed:

```
RuntimeError: tensorflow/lite/kernels/dequantize.cc:61 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 || op_context.input->type == kTfLiteInt16 || op_context.input->type == kTfLiteFloat16 was not true. Node number 782 (DEQUANTIZE) failed to prepare.
```

(Screenshot: 2020-07-11 21:57:04.) The content of the error message and the index number of the op indicate that it is the second Dequantize op that is causing this problem. (Screenshot: 2020-07-12 00:38:44.)

5. Related issues

- TFLite: Slice isn't compatible with quantisation (#29571)
- tflite interpreter allocate_tensors failed to prepare, not kTfLiteInt8/UInt8 (#31053)
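As a purely illustrative sketch (not using the real TFLite flatbuffer schema), the structural flaw amounts to the same op appearing twice in a row in the model's op list; scanning a list of op names for consecutive duplicates is enough to spot such a pattern:

```python
def find_consecutive_duplicates(ops):
    # Return the indices i where ops[i] == ops[i + 1]
    # (e.g. a back-to-back DEQUANTIZE pair).
    return [i for i in range(len(ops) - 1) if ops[i] == ops[i + 1]]

ops = ["CONV_2D", "DEQUANTIZE", "DEQUANTIZE", "RESHAPE"]
print(find_consecutive_duplicates(ops))  # [1]
```

In the real model one would walk the subgraph's operators (e.g. with the flatbuffer schema or a visualizer such as Netron) and apply the same check to the builtin op codes.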
tensorflow/tensorflow
DLPack doesn't work with int32 on GPU
Bug
Reproducing code: (Colab, #scrollTo=ui_Luv7EuIck). Error message on my machine:

```
2020-07-11 14:42:25.598880: E tensorflow/stream_executor/cuda/cuda_driver.cc:1037] failed to enqueue async memcpy from device to host: CUDA_ERROR_INVALID_VALUE: invalid argument; host dst: 0x1abfd2c0; GPU src: 0x1a027e80; size: 12=0xc
2020-07-11 14:42:25.598954: F tensorflow/core/common_runtime/gpu/gpu_util.cc:291] GPU->CPU Memcpy failed
```

It seems the int32 tensor's data pointer is on the CPU instead of on the GPU, although its device is GPU. Previously, as far as I know, there was special-case handling for int32 tensors; I'm not sure whether this is related. I also just found that I used a constant op (tf.constant) to test DLPack, which means it currently only tests the case on CPU, since tf.constant always places data in host memory. Do you have any ideas on this issue? @sanjoy @alextp
tensorflow/tensorflow
Confusing documentation for tf.image.rgb_to_yuv
Bug
The documentation for tf.image.rgb_to_yuv says: "Outputs a tensor of the same shape as the images tensor, containing the YUV value of the pixels. The output is only well defined if the value in images are in [0, 1]." Does that mean the RGB values should be in [0, 1]? If so, the usage example adds confusion:

```python
x = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
     [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]]
tf.image.rgb_to_yuv(x)
```

Clearly x does not lie in [0, 1].
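For context, here is a rough pure-Python sketch of the RGB-to-YUV transform using approximate BT.601 coefficients (the exact constants TensorFlow uses may differ slightly). For inputs in [0, 1] the output is well behaved — for instance, a gray pixel maps to Y equal to its intensity with U = V = 0:

```python
def rgb_to_yuv(r, g, b):
    # Approximate BT.601 luma/chroma transform; assumes r, g, b in [0, 1].
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

print(rgb_to_yuv(0.5, 0.5, 0.5))  # approximately (0.5, 0.0, 0.0)
```

Feeding values such as 12.0 through the same linear transform produces Y, U, V far outside any standard range, which is why the example in the docs is confusing.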
tensorflow/tensorflow
tf.keras model.evaluate not working as expected
Bug
System information: code is run locally on a MacBook Pro, 2.8 GHz quad-core Intel Core i7, Radeon Pro 555 2 GB, Intel HD Graphics 630 1536 MB.

Current behavior (in Jupyter):

```python
import tensorflow as tf

mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images = training_images / 255.0
test_images = test_images / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss = model.evaluate(test_images, test_labels)
```

The output of the evaluate method currently starts with `10000/1`, followed by thousands of `=`. I also believe the fraction should be `10000/10000`.

Expected behavior: something like `10000/10000`.

Code to reproduce: paste the above exactly as is into a Jupyter notebook.
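For what it's worth, the count a Keras progress bar would normally advance through is the number of batches, i.e. ceil(samples / batch_size); the default batch size of 32 for evaluate is an assumption here, and the helper name is illustrative:

```python
import math

def progress_steps(num_samples, batch_size=32):
    # Number of batches a progress bar should count up to
    # when iterating num_samples examples in chunks of batch_size.
    return math.ceil(num_samples / batch_size)

print(progress_steps(10000))  # 313
```

Either `10000/10000` (samples) or `313/313` (batches) would be a sensible display; `10000/1` is clearly a formatting bug.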
tensorflow/tensorflow
TypeError: Failed to convert object of type <...> to Tensor
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Colab
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.4.0-dev20200709

Describe the problem

I am referring to this official Keras example that shows how to train an OCR model using the CTC loss. I am trying to extend it to the IAM dataset, which is rawer in terms of its quality, and the images are of handwritten characters. I am able to construct the dataset in the way expected by the model used in the example. However, the labels in this case are of variable length; this is why they get converted to RaggedTensors. This is producing the following error:

```
TypeError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function  *
        return step_function(self, iterator)
    <ipython-input-10-...> call  *
        batch_len = tf.cast(tf.shape(y_true)[0], dtype='int64')
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
        return target(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py:617 shape_v2
        return shape(input, name, out_type)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py:644 shape
        return shape_internal(input, name, optimize=True, out_type=out_type)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py:668 shape_internal
        input = ops.convert_to_tensor(input)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:1525 convert_to_tensor
        ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:338 _constant_tensor_conversion_function
        return constant(v, dtype=dtype, name=name)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:264 constant
        allow_broadcast=True)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/constant_op.py:282 _constant_impl
        allow_broadcast=allow_broadcast)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_util.py:553 make_tensor_proto
        "supported type." % (type(values), values))

    TypeError: Failed to convert object of type <...> to Tensor. Contents: tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("ocr_model_v1/Cast_1:0", shape=(None,), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(None,), dtype=int64)). Consider casting elements to a supported type.
```

What I am looking for is a way to bypass this so that the ground-truth labels and the predicted labels can be sent to the CTC loss.

Source code / logs

Here's the Colab notebook. The commands that download the dataset are as follows:

```
wget --user=<user> --password=<password> ...
wget --user=<user> --password=<password> ...
```

Please register here first and then replace the username and password accordingly (this is a requirement).
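One generic way around errors like this (a sketch of my own, not taken from the linked example) is to pad the variable-length labels to a fixed length before they reach the loss — the same effect `tf.RaggedTensor.to_tensor()` gives, shown here in pure Python:

```python
def pad_labels(labels, pad_value=-1):
    # Pad variable-length label sequences to the length of the longest one,
    # so the batch becomes a dense rectangular array.
    max_len = max(len(seq) for seq in labels)
    return [seq + [pad_value] * (max_len - len(seq)) for seq in labels]

print(pad_labels([[3, 1], [7, 2, 9]]))  # [[3, 1, -1], [7, 2, 9]]
```

The CTC loss can then be given the dense batch together with the true (unpadded) sequence lengths.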
tensorflow/tensorflow
GPU delegate library is libtensorflowlite_gpu_delegate.so, not libtensorflowlite_gpu_gl.so
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

Description of issue (what needs changing):

```
bazel build -c opt --config android_arm64 tensorflow/lite/delegates/gpu:gl_delegate                  # for static library
bazel build -c opt --config android_arm64 tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_gl.so  # for dynamic library
```

should be changed to

```
bazel build -c opt --config android_arm64 tensorflow/lite/delegates/gpu:delegate                           # for static library
bazel build -c opt --config android_arm64 tensorflow/lite/delegates/gpu:libtensorflowlite_gpu_delegate.so  # for dynamic library
```

because gl_delegate is not the GPU delegate runtime library — it is for the OpenGL delegate. Right?

Submit a pull request? Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, doc API guide, and the docs style guide.
tensorflow/tensorflow
Configuring TF_CONFIG in practice + a small bug in the "Multi-worker training with Keras" tutorial
Bug
Hey @lamberta and team — a couple of things here related to the "Multi-worker training with Keras" tutorial with tf.distribute.experimental.MultiWorkerMirroredStrategy.

1. Configuring the TF_CONFIG environment variable for multi-worker training in practice

URL: (multi-worker configuration section)

Description: There is an important section dedicated to multi-worker configuration, which states that outside of the free Colab environment, in practice users would create multiple workers on external IP addresses/ports and set TF_CONFIG on each worker appropriately. The current example snippet shows that in the 'task' component of TF_CONFIG, the 'index' value of the worker is set to 0 ('task' is different on each worker):

```python
# Import necessary libraries (not currently included in the tutorial)
import json
import os

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ['localhost:12345', 'localhost:23456']
    },
    'task': {'type': 'worker', 'index': 0}
})
```

However, this may create some confusion, as demonstrated in a linked issue: the user tried setting the TF_CONFIG environment variable with index 0, as shown in the tutorial, but that wouldn't let the rest of the script proceed with execution. If the user runs the experiment with index 1, all is OK, as mentioned there. You should be following the TensorFlow "Distributed training" guide ("Setting up the TF_CONFIG environment variable"), where one example of TF_CONFIG is:

```python
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ['host1:port', 'host2:port', 'host3:port'],
        'ps': ['host4:port', 'host5:port']
    },
    'task': {'type': 'worker', 'index': 1}
})
```

Proposed solution: 1) borrow the sample code from the "Setting up the TF_CONFIG environment variable" section of the "Distributed training with TensorFlow" guide, and 2) add this code sample to the multi-worker configuration section of the "Multi-worker training with Keras" tutorial. For example: "In this example we set the task type to 'worker' and the task index to 0. This means the machine that has such a setting is the first worker. (Original code snippet.) In your use case, you may also set the task type to 'worker' and the task index to 1 instead of 0":

```python
# Import necessary libraries (not currently included in the tutorial)
import json
import os

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ['host1:port', 'host2:port', 'host3:port'],
        'ps': ['host4:port', 'host5:port']
    },
    'task': {'type': 'worker', 'index': 1}
})
```

2. A small bug in the "Dataset sharding and batch size" section

URL: (dataset sharding and batch size section)

Description: The following sentence in "Dataset sharding and batch size" looks a bit odd: "If you prefer manual sharding for your training, automatic sharding can be turned off via tf.data.experimental.DistributeOptions api. Concretely," — this is then followed by a code snippet.

Submit a pull request: yes, for #1 once this has been clarified; regarding #2, I think it's up to the team.
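A small helper along these lines (the function name is hypothetical) would make the per-worker difference explicit — every machine shares the same cluster spec, and only task.index changes:

```python
import json

def make_tf_config(workers, index):
    # Same 'cluster' on every machine; only 'task.index' differs per worker.
    return json.dumps({
        'cluster': {'worker': workers},
        'task': {'type': 'worker', 'index': index},
    })

cfg = json.loads(make_tf_config(['host1:12345', 'host2:23456'], index=1))
print(cfg['task']['index'])  # 1
```

Worker 0 would call `make_tf_config(..., index=0)`, worker 1 `make_tf_config(..., index=1)`, and so on, then export the result as the TF_CONFIG environment variable.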
tensorflow/tensorflow
MLIR converter crashes because tf.DecodeJpeg is not an op
Bug
System information:
- OS platform and distribution: Linux Ubuntu 20.04
- TensorFlow installed from (source or binary): source
- TensorFlow version (or GitHub SHA if from source): 2.4.0-dev20200709

Command used to run the converter (via the Python API):

```
# Convert to TFLite
mkdir -p tflite
SAVED_MODEL_DIR=${REPO_PATH}/tmp
OUTPUT_DIR=${REPO_PATH}/tflite
python ${TPU_REPO_PATH}/models/official/detection/export_tflite_model.py \
  --saved_model_dir=${SAVED_MODEL_DIR} --output_dir=${OUTPUT_DIR}
```

The output from the converter invocation:

```
loc(callsite("DecodeJpeg"(".../site-packages/tensorflow/python/saved_model/loader_impl.py":299:0)
  at callsite(".../site-packages/tensorflow/python/util/deprecation.py":324:0
  at callsite(".../site-packages/tensorflow/lite/python/convert_saved_model.py":198:0
  at callsite(".../site-packages/tensorflow/lite/python/lite.py":1923:0
  at callsite(".../tpu/models/official/detection/export_tflite_model.py":35:0
  at callsite(".../tpu/models/official/detection/export_tflite_model.py":49:0
  at callsite(".../site-packages/absl/app.py":250:0
  at callsite(".../site-packages/absl/app.py":299:0
  at callsite(".../site-packages/tensorflow/python/platform/app.py":40:0
  at ".../tpu/models/official/detection/export_tflite_model.py":55:0)))))))))):
error: 'tf.DecodeJpeg' op is neither a custom op nor a flex op
error: failed while converting: 'main': Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
        tf.DecodeJpeg {acceptable_fraction = 1.000000e+00 : f32, channels = 0 : i64, dct_method = "", device = "", fancy_upscaling = true, ratio = 1 : i64, try_recover_truncated = false}
Traceback (most recent call last):
  File ".../site-packages/tensorflow/lite/python/convert.py", line 199, in toco_convert_protos
    enable_mlir_converter)
  File ".../site-packages/tensorflow/lite/python/wrap_toco.py", line 38, in wrapped_toco_convert
    enable_mlir_converter)
%cst_9 = "std.constant"() {value = dense<"0x0000024200000242000002420000024200000242..."> : ...}
(many thousands of hex characters elided)
```

Note: the output was an endless string of random hex characters. Unfortunately I missed the last few stack frames, as I was unable to perfectly time pausing the process; the hex characters filled the terminal's buffer more quickly than I could react.

Also, please include a link to the saved model or GraphDef: model.zip

Failure details: no model was produced.
tensorflow/tensorflow
target_tensors argument of tf.keras Model.compile method has disappeared
Bug
I am not entirely sure whether this is a bug or not. I have a piece of code which relies on the target_tensors argument of the compile method. This code works fine in TF 2.1; however, when updating to TF 2.2 it stops working. Looking at the documentation, I've noticed that indeed the target_tensors argument has disappeared (link: compile); however, I haven't seen any mention of this in the changelog. Is this a bug, or is it intentional? The error message just says `ValueError: target_tensors argument is not supported when executing eagerly.`, but looking at the source code (#L2496) this argument is clearly not accepted anymore. If it is intentional, is there a recommended workaround? I can think of several not-very-intrusive ways around it, but they all feel a bit hacky.
tensorflow/tensorflow
Keras: 'Model' object has no attribute '_callable_losses'
Bug
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: CentOS Linux release 7.6.1810
- Mobile device (if the issue happens on a mobile device): no
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 1.14.0
- Python version: 3.7.6
- CUDA/cuDNN version: V9.1.85
- GPU model and memory: Tesla K80, 24 GB

I have a class with a model, and I want to add a custom loss with three arguments. Upon building the model, the following error is raised:

```
      in build(self)
     50
     51         self.get_optimiser()
---> 52         model = self.get_loss(input, x)
     53
     54         return model

      in get_loss(self, input, output)
    151         model = Model(inputs=[input, y_true, is_weight], outputs=[output])
    152
--> 153         model.add_loss(weighted_dice_loss(y_true, output, is_weight))
    154         self.model.compile(optimizer=self.optimiser, loss=None, metrics=[dice_coef])
    155

~/miniconda3/envs/segment/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in add_loss(self, losses, inputs)
    899             eager_losses.append(...)  # tag: unconditional loss
    900
--> 901         self._callable_losses.append(callable_loss)
    902
    903     call_context = base_layer_utils.is_in_call_context()

AttributeError: 'Model' object has no attribute '_callable_losses'
```

The expected behaviour is thus that the model accepts this custom loss, since it has three arguments instead of two. When calling `unet.build()` (without `get_...`), it throws the error:

```python
unet = Unet()
unet = unet.build()
```

The minimum reproducible code is as follows:

```python
class Unet:
    def __init__(self, **kwargs):
        self.input_shape = kwargs.get('input_shape', (12, 86, 98, 1))
        self.blocks = kwargs.get('blocks', 2)
        self.layers = kwargs.get('layers', 8)
        self.n_filters = kwargs.get('n_filters', 16)
        self.patch = kwargs.get('patch', (3, 3, 3))
        self.activation = kwargs.get('activation', 'elu')
        self.activation_last = kwargs.get('activation_last', 'sigmoid')
        self.kernel_initializer = kwargs.get('kernel_initializer', 'glorot_normal')
        self.padding = kwargs.get('padding', 'same')
        self.learnrate = kwargs.get('learnrate', 0.001)
        self.momentum = kwargs.get('momentum', 0.99)
        self.decay = kwargs.get('decay', 0.0)
        self.mode = kwargs.get('mode', 'train')

    def build(self):
        # Initialise array to keep skip connections
        self.skip = []
        input = Input(shape=self.input_shape)
        x = self.first_layer(input)
        x = self.contractive_path(x)
        x = self.middle_path(x)
        x = self.expansive_path(x)
        self.get_optimiser()
        model = self.get_loss(input, x)
        return model

    def first_layer(self, input):
        layer = Conv3D(filters=self.n_filters, kernel_size=self.patch,
                       activation=self.activation,
                       kernel_initializer=self.kernel_initializer,
                       padding=self.padding)(input)
        return layer

    def contractive_path(self, layer):
        for b in range(0, self.blocks):
            for i in range(0, self.layers):
                layer = Conv3D(filters=self.n_filters, kernel_size=self.patch,
                               activation=self.activation,
                               kernel_initializer=self.kernel_initializer,
                               padding=self.padding)(layer)
            # Append for later use in upsampling
            self.skip.append(layer)
            # Downsample using patch (2, 2, 2) and strides of 2;
            # similar to MaxPooling3D but uses less parameters
            layer = Conv3D(filters=self.n_filters, kernel_size=(2, 2, 2),
                           strides=(2, 2, 2), activation=self.activation,
                           kernel_initializer=self.kernel_initializer,
                           padding=self.padding)(layer)
            # Post pooling: double the number of filters
            self.n_filters = int(self.n_filters * 2)
        return layer

    def middle_path(self, layer):
        for i in range(0, self.layers):
            layer = Conv3D(filters=self.n_filters, kernel_size=self.patch,
                           activation=self.activation,
                           kernel_initializer=self.kernel_initializer,
                           padding=self.padding)(layer)
        return layer

    def expansive_path(self, layer):
        for u in range(0, self.blocks):
            layer = UpSampling3D(size=(2, 2, 2), data_format=None)(layer)
            # Skip connection from the down path
            concat_lr = self.skip[-1]
            concat_lr = crop_tensor(layer, concat_lr)
            layer = concatenate([layer, concat_lr])
            for i in range(0, self.layers):
                layer = Conv3D(filters=self.n_filters, kernel_size=self.patch,
                               activation=self.activation,
                               kernel_initializer=self.kernel_initializer,
                               padding=self.padding)(layer)
            self.n_filters = int(self.n_filters / 2)
            print('upblock ' + str(u), end=' ')
            print('n_filters: ' + str(self.n_filters))
            self.skip = self.skip[:-1]  # get rid of the last skip connection
        output_layer = y_pred = Conv3D(1, (1, 1, 1), activation=self.activation_last)(layer)
        return y_pred

    def get_optimiser(self):
        self.optimiser = SGD(lr=self.learnrate, momentum=self.momentum,
                             decay=self.decay, nesterov=False)

    def get_loss(self, input, output):
        y_true = Input(self.input_shape, name='y_true')
        is_weight = Input(self.input_shape, name='is_weight')
        model = Model(inputs=[input, y_true, is_weight], outputs=[output])
        model.add_loss(weighted_dice_loss(y_true, output, is_weight))
        self.model.compile(optimizer=self.optimiser, loss=None, metrics=[dice_coef])
        return self.model
```

The weighted_dice_loss function is defined as follows:

```python
def weighted_dice_loss(y_true, y_pred, w):
    return weighted_dice_coef(y_true, y_pred, w)
```
tensorflow/tensorflow
Bad input tensor parameters in model
Bug
TensorFlow Micro system information:
- Host OS platform and distribution: FreeRTOS
- TensorFlow installed from (source or binary): TensorFlow Lite
- TensorFlow version (commit SHA if source):
- Target platform (e.g. Arm Mbed OS, Arduino Nano 33, etc.): ESP32 with 2 MB external flash

Describe the problem: following the steps in "Deploy to ESP32" gets the following error:

```
Bad input tensor parameters in model
Guru Meditation Error: Core 1 panic'ed (LoadProhibited). Exception was unhandled.
Core 1 register dump:
PC      : 0x400d389b  PS      : 0x00060730  A0      : 0x800d35cf  A1      : 0x3ffc2080
0x400d389b: FeatureProvider::PopulateFeatureData(tflite::ErrorReporter*, int, int, int*)
  at .../micro_speech/esp-idf/build/../main/feature_provider.cc:37
A2      : 0x00000000  A3      : 0x3ffb1200  A4      : 0x00000000  A5      : 0x00000000
A6      : 0x3ffc20c4  A7      : 0x00000004  A8      : 0x800f1610  A9      : 0x3ffc1f20
A10     : 0x00000000  A11     : 0x3f402bac  A12     : 0x3ffc2080  A13     : 0x3ffc2060
A14     : 0x00000008  A15     : 0x3ffb56a0  SAR     : 0x00000001  EXCCAUSE: 0x0000001c
EXCVADDR: 0x00000000  LBEG    : 0x400014fd  LEND    : 0x4000150d  LCOUNT  : 0xffffffff

Backtrace: 0x400d3898:0x3ffc2080 0x400d35cc:0x3ffc20c0 0x400d2fe2:0x3ffc20f0
0x400d3898: FeatureProvider::PopulateFeatureData(tflite::ErrorReporter*, int, int, int*)
  at .../micro_speech/esp-idf/build/../main/feature_provider.cc:36
0x400d35cc: loop() at .../micro_speech/esp-idf/build/../main/main_functions.cc:132
0x400d2fe2: tf_main(int, char**) at .../micro_speech/esp-idf/build/../main/esp_main.cc:29 (discriminator 1)
```

Please provide the exact sequence of commands/steps when you ran into the problem.
tensorflowtensorflow
IndexError found in tf.keras.backend.dot function
Bug
Sorry for bothering; this error is found both in TF 1.15 and TF 2.1.

System information:
- OS: Windows 10 1904
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tensorflow 1.15 and tensorflow 2.1
- Python version: Python 3.6 for tensorflow 1.15; Python 3.6 for tensorflow 2.1
- CUDA/cuDNN version: 10
- GPU model and memory: NVIDIA RTX 2080 Ti, 11 GB

Describe the current behavior: here is the source code of tensorflow.keras.backend.dot:

```python
@keras_export('keras.backend.dot')
def dot(x, y):
  """Multiplies 2 tensors (and/or variables) and returns a tensor.

  When attempting to multiply a nD tensor with a nD tensor, it
  reproduces the Theano behavior (e.g. (2, 3) * (4, 3, 5) -> (2, 4, 5)).

  Arguments:
      x: Tensor or variable.
      y: Tensor or variable.

  Returns:
      A tensor, dot product of x and y.
  """
  if ndim(x) is not None and (ndim(x) > 2 or ndim(y) > 2):
    x_shape = []
    for i, s in zip(int_shape(x), tf.unstack(tf.shape(x))):
      if i is not None:
        x_shape.append(i)
      else:
        x_shape.append(s)
    x_shape = tuple(x_shape)
    y_shape = []
    for i, s in zip(int_shape(y), tf.unstack(tf.shape(y))):
      if i is not None:
        y_shape.append(i)
      else:
        y_shape.append(s)
    y_shape = tuple(y_shape)
    y_permute_dim = list(range(ndim(y)))
    y_permute_dim = [y_permute_dim.pop(-2)] + y_permute_dim
    xt = tf.reshape(x, [-1, x_shape[-1]])
    yt = tf.reshape(tf.transpose(y, perm=y_permute_dim), [y_shape[-2], -1])
    return tf.reshape(tf.matmul(xt, yt),
                      x_shape[:-1] + y_shape[:-2] + y_shape[-1:])
  if is_sparse(x):
    out = tf.sparse.sparse_dense_matmul(x, y)
  else:
    out = tf.matmul(x, y)
  return out
```

The issue is that I have a tensor x of more than two dimensions whose last dimension is 128, and a 1-D tensor y with shape (128,), thus satisfying the if clause; the variable y_permute_dim will then be [0]. Next, an "index out of range" error is raised from the line `y_permute_dim = [y_permute_dim.pop(-2)] + y_permute_dim`. This is obvious, for there is only one element in y_permute_dim. The error is caught both in tensorflow_backend.py (TensorFlow 1.15) and in backend.py (TensorFlow 2.1); the source code is just the same. Looking forward to your reply.

Other info / logs:

```
File "d:\anaconda3\envs\tf1.15\lib\site-packages\keras\backend\tensorflow_backend.py", line 1365, in dot
    y_permute_dim = [y_permute_dim.pop(-2)] + y_permute_dim
IndexError: pop index out of range
```
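The failure can be reproduced without TensorFlow at all, since the permutation step is plain list manipulation (the function name below is mine, for illustration):

```python
def theano_style_permutation(y_ndim):
    # Mirrors the two lines inside keras.backend.dot:
    #   y_permute_dim = list(range(ndim(y)))
    #   y_permute_dim = [y_permute_dim.pop(-2)] + y_permute_dim
    y_permute_dim = list(range(y_ndim))
    y_permute_dim = [y_permute_dim.pop(-2)] + y_permute_dim
    return y_permute_dim

theano_style_permutation(3)  # [1, 0, 2] -- fine for a 3-D y

try:
    theano_style_permutation(1)  # a 1-D y has only one element to pop
except IndexError as e:
    print(e)  # pop index out of range
```

Any `ndim(y) >= 2` is safe; the guard in `dot` lets a 1-D `y` through whenever `ndim(x) > 2`, which is exactly the reported case.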
tensorflowtensorflow
TypeError: __init__() got an unexpected keyword argument 'lambda'
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): N/A (Google Colab)
- Mobile device: N/A
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 1.15.2
- Python version: 3.6.9
- Bazel version (if compiling from source): N/A

Describe the current behavior: I cannot load a saved model containing a subclass of keras.regularizers.Regularizer. Error: `TypeError: __init__() got an unexpected keyword argument 'lambda'`.

Describe the expected behavior: I expect it to load my model, since the model works well and was saved correctly.

Standalone code to reproduce the issue:

```python
class my_regularizer(Regularizer):
    def __init__(self, lambd, matrix_x, matrix_y, matrix_z):
        self.lambd = lambd
        self.matrix_x = matrix_x
        self.matrix_y = matrix_y
        self.matrix_z = matrix_z

    def __call__(self, x):
        return tf.linalg.tensor_diag(tf.diag_part(
            self.lambd * K.dot(K.transpose(K.square(x)),
                               K.variable(self.matrix_x, dtype='float32') *
                               K.variable(self.matrix_y, dtype='float32') *
                               K.variable(self.matrix_z, dtype='float32'))))

    def get_config(self):
        return {'lambd': float(self.lambd),
                'matrix_x': self.matrix_x,
                'matrix_y': self.matrix_y,
                'matrix_z': self.matrix_z}
```

Other info / logs:

```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 test = load_model('/content/drive/My Drive/memoire_wolfsky/models/final_regularized_model.h5')

(13 frames)

/usr/local/lib/python3.6/dist-packages/keras/regularizers.py in from_config(cls, config)
     24     @classmethod
     25     def from_config(cls, config):
---> 26         return cls(**config)
     27
     28

TypeError: __init__() got an unexpected keyword argument 'lambda'
```
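The root cause is visible without Keras: `lambda` is a reserved word in Python, so a saved config that recorded the key as `'lambda'` can never be splatted back into `__init__` via `cls(**config)`. A minimal sketch (class and values are illustrative, not the reporter's real code):

```python
class MyRegularizer:
    def __init__(self, lambd):  # 'lambda' itself cannot be a parameter name
        self.lambd = lambd

    def get_config(self):
        return {"lambd": float(self.lambd)}

good = {"lambd": 0.01}
MyRegularizer(**good)  # fine

bad = {"lambda": 0.01}  # a config that was written with the reserved-word key
try:
    MyRegularizer(**bad)
except TypeError as e:
    print(e)  # ... unexpected keyword argument 'lambda'
```

When loading, passing the class via `custom_objects` (e.g. `load_model(path, custom_objects={'my_regularizer': my_regularizer})`) is the usual route for custom regularizers, but it cannot help if the saved config already contains the reserved-word key.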
tensorflowtensorflow
TimeDistributed does not infer output batch size when timesteps=None
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Colab
- Mobile device: N/A
- TensorFlow installed from (source or binary): N/A
- TensorFlow version (use command below): v2.2.0-0-g2b96f3662b 2.2.0
- Python version: 3.6.9
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`
2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`

Describe the current behavior: I am trying to create a convolutional LSTM model which makes use of ConvLSTM2D as well as other layers such as Conv2D and MaxPool2D wrapped in TimeDistributed layers. I'm using the functional API. This model should be stateful, therefore I pass the batch size to the Input layer, but I would like to keep the timestep dimension flexible, so I set that to None. The problem arises when using TimeDistributed layers, because their output batch size is always set to None even if their input batch size is fixed. See the following summary:

```
Layer (type)                  Output Shape               Param #
================================================================
input_18 (InputLayer)         [(4, None, 64, 64, 1)]     0
time_distributed_27 (TimeDi.. (None, None, 64, 64, 1)    257
```

No LSTMs appear after this point, because the batch size is None. As a result, stateful LSTM layers cannot be used after TimeDistributed, as they will fail with "ValueError: If a RNN is stateful, it needs to know its batch size".

Describe the expected behavior: I certainly could be missing something, but I don't see any reason why the TimeDistributed layer shouldn't be able to infer its output batch size correctly; isn't it always the same as the input batch size? Even if that's not true, shouldn't it be possible to have a mechanism to help the layer figure it out?

Standalone code to reproduce the issue: link to a simple example in Colab.

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
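The inference being asked for is only shape bookkeeping. Here is a framework-free sketch of the *expected* behavior (the function and data model are my illustration, not current Keras code):

```python
def expected_td_output_shape(input_shape, inner_output_shape):
    # Keep the incoming batch size and timestep dimension, then append the
    # wrapped layer's per-step output dims (everything after its own batch
    # axis). None stands for an unknown dimension.
    batch, timesteps = input_shape[0], input_shape[1]
    return (batch, timesteps) + tuple(inner_output_shape[1:])

# For input (4, None, 64, 64, 1) the reported summary shows the batch
# coming back as None, but 4 is recoverable:
expected_td_output_shape((4, None, 64, 64, 1), (4, 64, 64, 1))
# -> (4, None, 64, 64, 1)
```

With the batch dimension preserved this way, a downstream stateful LSTM would have the fixed batch size it needs.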
tensorflowtensorflow
tensorflow/tensorflow:2.1.1-gpu has Python 3 instead of Python 2
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS platform and distribution (e.g., Linux Ubuntu 16.04): N/A
- Mobile device: N/A
- TensorFlow installed from (source or binary): N/A
- TensorFlow version (use command below): v2.1.0-33-g3ffdb91 2.1.1
- Python version: see details
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: N/A
- GPU model and memory: N/A

Describe the current behavior: Docker image tensorflow/tensorflow:2.1.1-gpu contains Python 3.6.9.

Describe the expected behavior: Docker image tensorflow/tensorflow:2.1.1-gpu should contain Python 2.7.17, as is expected for 2.1 docker images.

Standalone code to reproduce the issue:

```
$ docker run tensorflow/tensorflow:2.1.1-gpu python --version
Python 3.6.9
```

Other info / logs: note that tensorflow/tensorflow:2.1.0-gpu contains Python 2:

```
$ docker run tensorflow/tensorflow:2.1.0-gpu python --version
Python 2.7.17
```
tensorflowtensorflow
uint8 model: runtime input(s) num is 2
Bug
System information:
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or github SHA if from source): 2.1

Command used to run the converter, or code if you're using the Python API:

```python
import tensorflow as tf
from tensorflow.keras.models import load_model

keras_file = "keras_0127127.h5"
model = load_model(keras_file)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.inference_input_type = tf.uint8
converter.inference_type = tf.uint8  # tf.lite.constants.QUANTIZED_UINT8
converter.inference_output_type = tf.uint8
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_uint8_model = converter.convert()
open("uint8.tflite", "wb").write(tflite_uint8_model)
```

The output from the converter invocation: I used the Netron software to visualize the uint8 tflite model. It shows that some DEPTHWISE_CONV_2D operations have 2 inputs. Also, please see the link to the Keras model I used above.

Failure details (the conversion was successful but the generated model is wrong): it produces a model that the C++ TFLite GPU delegate library cannot process. When I use the C++ TFLite GPU delegate library to run the model, I get this log:

```
INFO: Initialized TensorFlow Lite runtime.
INFO: Created TensorFlow Lite delegate for GPU.
ERROR: Following operations are not supported by GPU delegate:
DEPTHWISE_CONV_2D: Expected 1 runtime input tensor(s), but node has 2 runtime input(s).
DEQUANTIZE: Expected 1 runtime input tensor(s), but node has 0 runtime input(s).
20 operations will run on the GPU, and the remaining 41 operations will run on the CPU.
INFO: Initialized OpenCL-based API.
INFO: Created 1 GPU delegate kernel.
```
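For context, the uint8 affine quantization scheme the converter applies maps real values through a per-tensor scale and zero point. A framework-free sketch of that arithmetic (the parameter values below are illustrative, not taken from this model):

```python
def quantize(x, scale, zero_point):
    # real -> uint8: q = round(x / scale) + zero_point, clamped to [0, 255]
    q = int(round(x / scale)) + zero_point
    return max(0, min(255, q))

def dequantize(q, scale, zero_point):
    # uint8 -> real: x = scale * (q - zero_point)
    return scale * (q - zero_point)

q = quantize(0.4, 1.0 / 255.0, 0)   # 102
dequantize(q, 1.0 / 255.0, 0)       # ~0.4
```

Every quantized tensor in the flatbuffer carries its own (scale, zero_point) pair; the GPU delegate complaint above is about which of a node's input tensors are runtime-supplied versus constant, not about these parameters themselves.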
tensorflowtensorflow
Bug in "Create an op" of TensorFlow 1.15
Bug
In the tensorflow/tensorflow "Create an op" guide (GPU kernels section), there is an example GPU kernel implementation consisting of three files: kernel_example.h, kernel_example.cc, kernel_example.cu.cc. When I compile kernel_example.cu.cc with the command

```
nvcc -std=c++11 -c -o kernel_example.cu.o kernel_example.cu.cc \
  ${TF_CFLAGS[@]} -D GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC
```

there are some errors:

```
kernel_example.h: error: a class or namespace qualified name is required
kernel_example.h: warning: nonstandard qualified name in global-scope declaration
kernel_example.h: error: class template "ExampleFunctor" has already been defined
```

How to deal with that?
tensorflowtensorflow
TF MLIR graph pruning / canonicalization passes don't eliminate some dead constant nodes
Bug
This is with the TensorFlow trunk at e13ff2da43ab00dba94dd00d87a876b2a03f48e7 (Jul 6). This is a reduced test case where both tf.Const operations in the graph are supposed to be dead (results unused and not side-effecting), but neither the tf-executor-graph-pruning nor the canonicalize pass gets rid of them. However, if one of them is removed, canonicalize is able to eliminate the other. Similar patterns are often DCE'd by tf-executor-graph-pruning in the presence of other larger islands.

```mlir
module {
  func @main() attributes {tf.entry_function = {control_outputs = "", inputs = "", outputs = ""}} {
    tf_executor.graph {
      %outputs, %control = tf_executor.island wraps "tf.Const"() {value = dense<1> : tensor<1xi32>} : () -> tensor<1xi32>
      %outputs_0, %control_1 = tf_executor.island wraps "tf.Const"() {value = dense<100> : tensor<1xi32>} : () -> tensor<1xi32>
      tf_executor.fetch
    }
    return
  }
}
```

To reproduce, run: `tf-opt -tf-executor-graph-pruning -canonicalize test_case.mlir`

OS: CentOS 8 x86-64. TF built from source with gcc 8.3.1 and with linkopts=-fuse-ld=lld.
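The elimination being asked for can be sketched framework-free: drop ops whose results are unused and that are side-effect-free, iterating to a fixed point since removing one dead op can make another dead. The data model below is my illustration, not the MLIR API:

```python
def prune_dead_ops(ops):
    # ops: list of (result_name, operand_names, has_side_effects).
    ops = list(ops)
    while True:
        # Every value consumed by some remaining op.
        used = {name for _, operands, _ in ops for name in operands}
        # Keep ops that are side-effecting or whose result is still used.
        kept = [op for op in ops if op[2] or op[0] in used]
        if len(kept) == len(ops):
            return ops  # fixed point reached
        ops = kept

graph = [("c1", [], False), ("c2", [], False)]  # the two dead tf.Const ops
prune_dead_ops(graph)  # -> []
```

In this model the two constants are removable in a single iteration, which is why it is surprising that the real passes only succeed once one of them is deleted by hand.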
tensorflowtensorflow
Docs of RMSprop optimizer seem not consistent with the code
Bug
null
tensorflowtensorflow
SyncBatchNormalization layer segfaults on multi-worker with NCCL
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 14.04, in a Docker container on an 18.04 host
- Mobile device: N/A
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 2.2.0
- Python version: 3.6.8
- Bazel version (if compiling from source): 0.24.1 / 3.0
- GCC/compiler version (if compiling from source): 4.8.5
- CUDA/cuDNN version: 10.0
- GPU model and memory: NVIDIA Titan X, 11 GB

Describe the current behavior: when training models with the tf.keras.layers.experimental.SyncBatchNormalization layer and using tf.distribute.experimental.MultiWorkerMirroredStrategy to train across multiple workers with tf.distribute.experimental.CollectiveCommunication.NCCL communication, the model trains for some amount of time (e.g. several thousand steps), then crashes with a segfault.

Describe the expected behavior: the model should train without segfaulting.

Standalone code to reproduce the issue: an example is below. Please note that this code must run on multiple workers; the TF_CONFIG environment variable must be set appropriately for your specific multi-worker configuration.

```python
import tensorflow as tf
from tensorflow import keras


def get_dataset():
    xs = tf.zeros([10], dtype=tf.float32)
    xs = tf.data.Dataset.from_tensors(xs)
    ys = tf.constant([5])
    ys = tf.data.Dataset.from_tensor_slices(ys)
    dataset = tf.data.Dataset.zip((xs, ys))
    dataset = dataset.batch(1)
    dataset = dataset.repeat()
    return dataset


def main():
    # NOTE: you must set os.environ['TF_CONFIG'] as appropriate.
    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
        tf.distribute.experimental.CollectiveCommunication.NCCL)
    assert strategy.num_replicas_in_sync == 2

    # Create dataset.
    dataset = get_dataset()

    with strategy.scope():
        # Construct model.
        model = keras.Sequential([
            tf.keras.layers.experimental.SyncBatchNormalization(),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer=keras.optimizers.Adam(),
                      loss=keras.losses.MeanSquaredError())

    model.fit(x=dataset, steps_per_epoch=10**6, epochs=10**3)


if __name__ == '__main__':
    main()
```

This is reproducible across a wide array of contexts, for example a Keras model, an Estimator model, different GPU types, etc.

Other info / logs: I used gdb to inspect a core dump from the crashed process. The backtrace is:

```
#0  0x00007f1d68cef711 in tensorflow::NcclReducer::Run(std::function<...>) (this=0x7f18cc011af0, done=...)
    at external/org_tensorflow/tensorflow/core/kernels/collective_nccl_reducer.cc:185
#1  0x00007f1d71797ca6 in tensorflow::BaseCollectiveExecutor::<lambda(void)>::operator()() const (__closure=...)
    at external/org_tensorflow/tensorflow/core/common_runtime/base_collective_executor.cc:276
#2  0x00007f1d71797efe in std::_Function_handler<...>::_M_invoke(const std::_Any_data&) (__functor=...)
    at external/gcc-7-4/usr/include/c++/7/bits/std_function.h:316
#3  0x00007f1d669a0e08 in std::function<...>::operator()() const (this=0x7f185e7fbe60)
    at external/gcc-7-4/usr/include/c++/7/bits/std_function.h:706
#4  0x00007f1d71cc4f44 in tensorflow::UnboundedWorkQueue::PooledThreadFunc() (this=0x20758660)
    at external/org_tensorflow/tensorflow/core/platform/default/unbounded_work_queue.cc:99
#5  0x00007f1d71cc5004 in tensorflow::UnboundedWorkQueue::<lambda()>::operator()() (__closure=...)
    at external/org_tensorflow/tensorflow/core/platform/default/unbounded_work_queue.cc:68
#6  std::_Function_handler<...>::_M_invoke(const std::_Any_data&) (__functor=...)
    at external/gcc-7-4/usr/include/c++/7/bits/std_function.h:316
#7  0x00007f1d669a0e08 in std::function<...>::operator()() const (this=...)
    at external/gcc-7-4/usr/include/c++/7/bits/std_function.h:706
#8  0x00007f1d71d136dd in std::__invoke_impl<...>(std::__invoke_other, std::function<...>&&)
    at external/gcc-7-4/usr/include/c++/7/bits/invoke.h:60
#9  std::__invoke<std::function<...>>(...) at external/gcc-7-4/usr/include/c++/7/bits/invoke.h:95
#10 std::thread::_Invoker<...>::_M_invoke<0ul>(std::_Index_tuple<0ul>) (this=...)
    at external/gcc-7-4/usr/include/c++/7/thread:234
#11 std::thread::_Invoker<...>::operator()() (this=...) at external/gcc-7-4/usr/include/c++/7/thread:243
#12 std::thread::_State_impl<...>::_M_run() (this=...) at external/gcc-7-4/usr/include/c++/7/thread:186
#13 0x00007f1d2eb72ae0 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#14 0x00007f1da0c12184 in start_thread (arg=0x7f185e7fc700) at pthread_create.c:312
#15 0x00007f1d9fdef03d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
```

Disassembling the function shows that this is the offending instruction:

```
   0x00007f1d68cef707 <+2651>: jg     0x7f1d68cf0642 <+6550>
   0x00007f1d68cef70d <+2657>: mov    0x18(%rbx),%rax
=> 0x00007f1d68cef711 <+2661>: mov    0x8(%rax),%rdi
   0x00007f1d68cef715 <+2665>: mov    %rdi,%rax
   0x00007f1d68cef718 <+2668>: mov    0x20(%rbx),%rsi
```

Printing out the registers shows rax = 0x0, so some sort of pointer is set to 0. It therefore looks like there is some sort of null pointer dereference on line 185 of collective_nccl_reducer.cc, which I believe is the line `col_ctx_->col_exec->UnblockDependencies(*col_params_);`. I don't have any idea why it would segfault there; however, the same line appears shortly above on line 176, so it's strange that it would segfault the second time. Also, a log is attached here; however, it is not very interesting, as it just runs for a while and then segfaults.
tensorflowtensorflow
Socket closed when LayerNormalization is used for tensors of 3 or more dimensions with TPU in Colab
Bug
When I use LayerNormalization on tensors of 3 or more dimensions in Colab with a TPU, I encounter the following error:

```
UnavailableError: Socket closed
Additional GRPC error information:
{"created":"@1593962551.753581454","description":"Error received from peer ipv4:10.10.67.202:8470","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}
```

Please find the gist here. Thanks.
tensorflowtensorflow
MultiWorkerMirroredStrategy: multi workers don't train once synced
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No, I used the Keras tutorial "Multi-worker training with Keras"
- OS platform and distribution (e.g., Linux Ubuntu 16.04): chief: Ubuntu 20.04; worker: Ubuntu 20.04
- TensorFlow installed from (source or binary): from a Docker image, tensorflow/tensorflow:latest-gpu
- TensorFlow version (use command below): v2.2.0-rc4-8-g2b96f3662b 2.2.0
- Python version: 3.6.9
- Bazel version (if compiling from source): N/A
- GCC/compiler version (if compiling from source): N/A
- CUDA/cuDNN version: V10.1.243
- GPU model and memory: chief: Titan RTX, 24 GB; worker: 4x GTX 1060, 6 GB

You can collect some of this information using our environment capture script: tf_env.txt

Describe the current behavior: I'm running the "Multi-worker training with Keras" tutorial on 2 computers, 1 chief and 1 worker, each equipped with different GPUs. I run the tutorial inside a tensorflow-gpu Docker image on each computer. Each computer has been configured to communicate with an SSH key, with no password needed.

1. The single-worker model part of the tutorial works fine on both computers; each computer goes through the epochs individually.
2. Regarding the MultiWorkerMirroredStrategy part: if I omit setting TF_CONFIG as an environment variable, both nodes train correctly on their own. Note: on the worker, the strategy distributes the work across the 4 local GPUs as expected.
3. When I run the final script (TF_CONFIG + MultiWorkerMirroredStrategy) by starting the chief first, then the worker: the chief waits for the worker, then they sync, they finally both start epoch 1, and hang.

Describe the expected behavior:
1. Have both workers train after logging "Epoch 1/60".
2. Have both workers sync.

Standalone code to reproduce the issue. On the chief:

```
docker run --gpus all -it -p 12345:12345 --rm tensorflow/tensorflow:latest-gpu
export TF_CONFIG='{"cluster": {"worker": ["192.168.1.31:12345", "192.168.1.46:12345"]}, "task": {"index": 0, "type": "worker"}}'
# copy the script to the docker container
python multi-worker-with-keras.py
```

On the worker:

```
docker run --gpus all -it -p 12345:12345 --rm tensorflow/tensorflow:latest-gpu
export TF_CONFIG='{"cluster": {"worker": ["192.168.1.31:12345", "192.168.1.46:12345"]}, "task": {"index": 1, "type": "worker"}}'
# copy the script to the docker container
python multi-worker-with-keras.py
```

Note: only the index changes. The script:

```python
import tensorflow as tf
import numpy as np


def mnist_dataset(batch_size):
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train / np.float32(255)
    y_train = y_train.astype(np.int64)
    train_dataset = tf.data.Dataset.from_tensor_slices(
        (x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
    return train_dataset


def build_and_compile_cnn_model():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28)),
        tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation='relu'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10)
    ])
    model.compile(
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        metrics=['accuracy'])
    return model


strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

per_worker_batch_size = 512
num_workers = 2
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist_dataset(global_batch_size)

with strategy.scope():
    multi_worker_model = build_and_compile_cnn_model()

multi_worker_model.fit(multi_worker_dataset, epochs=60, steps_per_epoch=60)
```

Note: I have raised the number of epochs, steps_per_epoch and per_worker_batch_size, and I have changed num_workers from 4 to 2. Note: I haven't done the ModelCheckpoint callback part, as I want to validate the training sync first. Note: I have tested with the auto-sharding policy OFF (code from the tutorial), and I don't get the "Found an unshardable source dataset" error, but the workers don't train either:

```python
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
```

Other info / logs. On the chief (chief log):

2020 07 05 12 52 14 867922 I tensorflow stream executor platform default dso loader cc 44
successfully open dynamic library libcudart so 10 1 2020 07 05 12 52 14 867929 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2020 07 05 12 52 14 867935 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 2020 07 05 12 52 14 867941 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 2020 07 05 12 52 14 867948 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 2020 07 05 12 52 14 867954 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 2020 07 05 12 52 14 867960 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 07 05 12 52 14 868007 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 07 05 12 52 14 868671 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 07 05 12 52 14 869292 I tensorflow core common runtime gpu gpu device cc 1703 add visible gpu device 0 2020 07 05 12 52 14 869303 I tensorflow core common runtime gpu gpu device cc 1102 device interconnect streamexecutor with strength 1 edge matrix 2020 07 05 12 52 14 869307 I tensorflow core common runtime gpu gpu device cc 1108 0 2020 07 05 12 52 14 869311 I tensorflow core common runtime gpu gpu device cc 1121 0 n 2020 07 05 12 52 14 869374 I tensorflow stream executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 07 05 12 52 14 870033 I tensorflow stream 
executor cuda cuda gpu executor cc 981 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero 2020 07 05 12 52 14 870662 I tensorflow core common runtime gpu gpu device cc 1247 create tensorflow device job worker replica 0 task 0 device gpu 0 with 21960 mb memory physical gpu device 0 name titan rtx pci bus i d 0000 09 00 0 compute capability 7 5 2020 07 05 12 52 14 873278 I tensorflow core distribute runtime rpc grpc channel cc 301 initialize grpcchannelcache for job worker 0 localhost 12345 1 192 168 1 46 12345 2020 07 05 12 52 14 873849 I tensorflow core distribute runtime rpc grpc server lib cc 390 start server with target grpc localhost 12345 warn tensorflow eval fn be not pass in the worker fn will be use if an evaluator task exist in the cluster warn tensorflow eval strategy be not pass in no distribution strategy will be use for evaluation 2020 07 05 12 52 22 818158 w tensorflow core grappler optimizer datum auto shard cc 434 in auto mode and switch to datum base sharding instead of file base sharding as we can not find appropriate reader dataset op s to shard error find an unshardable source dataset name tensorslicedataset 2 op tensorslicedataset input placeholder 0 input placeholder 1 attr key toutput type value list type dt float type dt int64 attr key output shape value list shape dim size 28 dim size 28 shape epoch 1 60 2020 07 05 12 52 25 479341 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2020 07 05 12 52 25 731883 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 on the worker worker log 2020 07 05 12 52 20 891307 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudart so 10 1 2020 07 05 12 52 20 891332 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library 
libcubla so 10 2020 07 05 12 52 20 891358 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcufft so 10 2020 07 05 12 52 20 891383 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcurand so 10 2020 07 05 12 52 20 891407 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusolver so 10 2020 07 05 12 52 20 891431 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcusparse so 10 2020 07 05 12 52 20 891455 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7 2020 07 05 12 52 20 898699 I tensorflow core common runtime gpu gpu device cc 1703 add visible gpu device 0 1 2 3 2020 07 05 12 52 20 898900 I tensorflow core common runtime gpu gpu device cc 1102 device interconnect streamexecutor with strength 1 edge matrix 2020 07 05 12 52 20 898916 I tensorflow core common runtime gpu gpu device cc 1108 0 1 2 3 2020 07 05 12 52 20 898928 I tensorflow core common runtime gpu gpu device cc 1121 0 n y y y 2020 07 05 12 52 20 898938 I tensorflow core common runtime gpu gpu device cc 1121 1 y n y y 2020 07 05 12 52 20 898948 I tensorflow core common runtime gpu gpu device cc 1121 2 y y n y 2020 07 05 12 52 20 898958 I tensorflow core common runtime gpu gpu device cc 1121 3 y y y n 2020 07 05 12 52 20 904332 I tensorflow core common runtime gpu gpu device cc 1247 create tensorflow device job worker replica 0 task 1 device gpu 0 with 5644 mb memory physical gpu device 0 name geforce gtx 1060 6 gb pci bus i d 0000 19 00 0 compute capability 6 1 2020 07 05 12 52 20 905569 I tensorflow core common runtime gpu gpu device cc 1247 create tensorflow device job worker replica 0 task 1 device gpu 1 with 5644 mb memory physical gpu device 1 name geforce gtx 1060 6 gb pci bus i d 0000 1a 00 0 compute capability 6 1 2020 07 05 12 52 20 906769 I 
tensorflow core common runtime gpu gpu device cc 1247 create tensorflow device job worker replica 0 task 1 device gpu 2 with 5644 mb memory physical gpu device 2 name geforce gtx 1060 6 gb pci bus i d 0000 67 00 0 compute capability 6 1 2020 07 05 12 52 20 907967 I tensorflow core common runtime gpu gpu device cc 1247 create tensorflow device job worker replica 0 task 1 device gpu 3 with 5631 mb memory physical gpu device 3 name geforce gtx 1060 6 gb pci bus i d 0000 68 00 0 compute capability 6 1 2020 07 05 12 52 20 915563 I tensorflow core distribute runtime rpc grpc channel cc 301 initialize grpcchannelcache for job worker 0 192 168 1 31 12345 1 localhost 12345 2020 07 05 12 52 20 917163 I tensorflow core distribute runtime rpc grpc server lib cc 390 start server with target grpc localhost 12345 warn tensorflow eval fn be not pass in the worker fn will be use if an evaluator task exist in the cluster warn tensorflow eval strategy be not pass in no distribution strategy will be use for evaluation 2020 07 05 12 52 22 833446 w tensorflow core grappler optimizer datum auto shard cc 434 in auto mode and switch to datum base sharding instead of file base sharding as we can not find appropriate reader dataset op s to shard error find an unshardable source dataset name tensorslicedataset 2 op tensorslicedataset input placeholder 0 input placeholder 1 attr key toutput type value list type dt float type dt int64 attr key output shape value list shape dim size 28 dim size 28 shape epoch 1 60 2020 07 05 12 52 25 979992 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcubla so 10 2020 07 05 12 52 26 242036 I tensorflow stream executor platform default dso loader cc 44 successfully open dynamic library libcudnn so 7
tensorflowtensorflow
Broken links in www.tensorflow.org/resources/learn-ml
Bug
In "The four areas of machine learning education", the "Build your own projects" Colab links give me "Notebook not found" / fetch-failed messages: "No commit found for the ref r2.0rc". Documentation URL:
tensorflowtensorflow
Unexpected crash when loading a modified saved_model.pb file
Bug
System information:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device: No
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): v2.0.2-0-g2c2fdd3205 2.0.2
- Bazel version (if compiling from source): 0.26.1
- GCC/compiler version (if compiling from source): clang 10.0
- CUDA/cuDNN version: no
- GPU model and memory: no

Describe the current behavior: I used libFuzzer to mutate an intact saved_model.pb file and used the LoadSavedModel C++ API to load it. Instead of returning a Status whose ok() is false, the program crashes directly with the following error:

```
2020-06-20 15:29:05.816403: I tensorflow/cc/saved_model/reader.cc:31 Reading SavedModel from: /home/xxx/playground/tensorflow/saved_model_crash
2020-06-20 15:29:05.817167: I tensorflow/cc/saved_model/reader.cc:54 Reading meta graph with tags { serve }
2020-06-20 15:29:05.829727: I tensorflow/cc/saved_model/loader.cc:202 Restoring SavedModel bundle.
[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/map.h:1059] CHECK failed: it != end(): key not found: value
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what(): CHECK failed: it != end(): key not found: value
[1] 26722 abort (core dumped) ./loader_test
```

Describe the expected behavior: since LoadSavedModel returns a Status object, status.ok() should return false rather than the program crashing directly.

Standalone code to reproduce the issue:
```cpp
#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/constants.h"
#include "tensorflow/cc/saved_model/tag_constants.h"
#include <iostream>

using namespace tensorflow;

int main() {
  const string export_dir = "/home/xxx/playground/saved_model";        // intact original pb file, status.ok() == true
  const string export_dir2 = "/home/xxx/playground/saved_model_test";  // modified pb file, status.ok() == false
  const string export_dir3 = "/home/xxx/playground/saved_model_crash"; // modified pb file, crashes
  SavedModelBundle bundle;
  SessionOptions session_options;
  RunOptions run_options;
  Status status = LoadSavedModel(session_options, run_options, export_dir3,
                                 {kSavedModelTagServe}, &bundle);
  std::cout << "hello" << std::endl;
}
```

The crash comes from the inference-context / constant handling in the loader, roughly: if the node is fed ... if IsConstant(node):

```cpp
c->output_tensors_as_shapes.resize(1);
const TensorProto& tensor_proto = node.attr().at("value");
```

`at(xx)`: if xx does not exist, it will bring a crash; once the "value" attribute cannot be obtained, an exception is thrown.
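The failure pattern can be illustrated with a plain Python dict standing in for the protobuf attr map (function names here are mine, not the real TensorFlow API):

```python
def read_const_unchecked(node_attrs):
    # Moral equivalent of node.attr().at("value"): throws on a corrupted
    # node instead of reporting an error through a status value.
    return node_attrs["value"]

def read_const_checked(node_attrs):
    # What a loader returning a non-OK status would do instead: probe the
    # map first and surface the problem to the caller.
    if "value" not in node_attrs:
        return None, "invalid node: missing 'value' attr"
    return node_attrs["value"], "OK"

read_const_checked({"value": 3})  # (3, 'OK')
read_const_checked({})            # (None, "invalid node: missing 'value' attr")
```

The fix the report is asking for amounts to converting the unchecked lookup into the checked form, so a fuzzed proto yields `status.ok() == false` rather than a `FatalException`.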
tensorflowtensorflow
categorical crossentropy with label smoothing support on tf.bfloat16
Bug
It seems that when label smoothing is enabled, categorical crossentropy only supports inputs in float32, so it is not usable with tf.bfloat16 on TPU. I changed it to the following to make it work:

```python
from tensorflow.python.framework import ops
from tensorflow.python.framework import smart_cond
from tensorflow.python.keras import backend as K
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import math_ops

def categorical_crossentropy(y_true, y_pred, from_logits=False, label_smoothing=0):
    y_pred = ops.convert_to_tensor_v2(y_pred)
    y_true = math_ops.cast(y_true, y_pred.dtype)
    label_smoothing = ops.convert_to_tensor_v2(label_smoothing, dtype=tf.bfloat16)

    def _smooth_labels():
        num_classes = math_ops.cast(array_ops.shape(y_true)[1], y_pred.dtype)
        return y_true * (1.0 - label_smoothing) + (label_smoothing / num_classes)

    y_true = smart_cond.smart_cond(label_smoothing, _smooth_labels, lambda: y_true)
    return K.categorical_crossentropy(y_true, y_pred, from_logits=from_logits)
```
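The smoothing arithmetic itself (casts aside) is just `y_true * (1 - s) + s / num_classes`; a minimal pure-Python check of that formula, with illustrative names:

```python
def smooth_labels(y_true, label_smoothing):
    # y_true * (1 - s) + s / num_classes, as in the patched loss above
    num_classes = len(y_true)
    return [y * (1.0 - label_smoothing) + label_smoothing / num_classes
            for y in y_true]

smoothed = smooth_labels([0.0, 1.0, 0.0], 0.1)
```

Note the smoothed vector still sums to 1, which is why the transform is safe to apply before the cross-entropy.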
tensorflowtensorflow
tf keras layer depthwiseconv2d and tf keras layers separableconv2d do not handle kernel regularizer correctly
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: CentOS 7.3
- Mobile device: n/a
- TensorFlow installed from: binary (from Anaconda)
- TensorFlow version: 2.2.0
- Python version: 3.7.7
- Bazel version: n/a
- GCC/compiler version: n/a
- CUDA/cuDNN version: 10.1 / 7.6.5
- GPU model and memory: NVIDIA Tesla V100, 32 GB
- Exact command to reproduce: source code below

**Describe the problem**
The kernel_regularizer parameter on tf.keras.layers.DepthwiseConv2D and tf.keras.layers.SeparableConv2D does not register its loss in model.losses, although bias_regularizer works. Full test code below.

**Source code / logs**

Expected case 1: Dense layers

```python
m = tf.keras.models.Sequential()
m.add(tf.keras.layers.Dense(3, input_dim=5,
                            kernel_regularizer=tf.keras.regularizers.l2(1.0e-1),
                            bias_regularizer=tf.keras.regularizers.l2(1.0e-1)))
m.add(tf.keras.layers.Dense(3, input_dim=5,
                            kernel_regularizer=tf.keras.regularizers.l2(1.0e-1)))
print(m.losses)
```

Expected case 2: Conv2D layers

```python
m = tf.keras.models.Sequential()
m.add(tf.keras.layers.Input((32, 32, 3)))
m.add(tf.keras.layers.Conv2D(3, 3,
                             kernel_regularizer=tf.keras.regularizers.l2(1.0e-1),
                             bias_regularizer=tf.keras.regularizers.l2(1.0e-1)))
m.add(tf.keras.layers.Conv2D(3, 3,
                             kernel_regularizer=tf.keras.regularizers.l2(1.0e-1)))
print(m.losses)
```

Expected case 3: ConvLSTM2D layers

```python
m = tf.keras.models.Sequential()
m.add(tf.keras.layers.Input((3, 32, 32, 3)))
m.add(tf.keras.layers.ConvLSTM2D(3, 3, return_sequences=True,
                                 kernel_regularizer=tf.keras.regularizers.l2(1.0e-1),
                                 bias_regularizer=tf.keras.regularizers.l2(1.0e-1)))
m.add(tf.keras.layers.ConvLSTM2D(3, 3,
                                 kernel_regularizer=tf.keras.regularizers.l2(1.0e-1)))
print(m.losses)
```

Not expected, case 1: DepthwiseConv2D layers

```python
m = tf.keras.models.Sequential()
m.add(tf.keras.layers.Input((32, 32, 3)))
m.add(tf.keras.layers.DepthwiseConv2D((3, 3),
                                      kernel_regularizer=tf.keras.regularizers.l2(1.0e-1),
                                      bias_regularizer=tf.keras.regularizers.l2(1.0e-1)))
m.add(tf.keras.layers.DepthwiseConv2D((3, 3),
                                      kernel_regularizer=tf.keras.regularizers.l2(1.0e-1)))
print(m.losses)
```

Not expected, case 2: SeparableConv2D layers

```python
m = tf.keras.models.Sequential()
m.add(tf.keras.layers.Input((32, 32, 3)))
m.add(tf.keras.layers.SeparableConv2D(3, 3,
                                      kernel_regularizer=tf.keras.regularizers.l2(1.0e-1),
                                      bias_regularizer=tf.keras.regularizers.l2(1.0e-1)))
m.add(tf.keras.layers.SeparableConv2D(3, 3,
                                      kernel_regularizer=tf.keras.regularizers.l2(1.0e-1)))
print(m.losses)
```

Am I making a mistake in using regularizers with these layers?
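For reference, each entry that should appear in model.losses is just the l2 coefficient times the sum of squared weights of the corresponding kernel or bias. A pure-Python sketch of that arithmetic, with illustrative weight values:

```python
def l2_loss(weights, l2=1.0e-1):
    # tf.keras.regularizers.l2 computes l2 * sum(w ** 2) over the variable
    return l2 * sum(w * w for w in weights)

kernel = [1.0, -2.0, 0.5]
bias = [0.1, 0.2]
losses = [l2_loss(kernel), l2_loss(bias)]  # one entry per regularized variable
```

In the expected cases above model.losses holds one such term per regularized variable; in the depthwise/separable cases the kernel term is missing.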
tensorflowtensorflow
issue use segment prod in custom keras layer
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Colab
- TensorFlow installed from: nightly
- TensorFlow version: nightly
- Python version: Colab

I'm trying to write a custom Keras layer that uses segment_prod on model features (i.e. not on the batch dimension). To do that, I've been using tf.transpose to put the feature dimension first and then transposing the result of the calculation. The forward calculation seems to work with this approach, but there appears to be an issue with gradients: with segment_prod the following code fails with `LookupError: gradient registry has no entry for: SegmentProd`.

```python
class MyLayer1(layers.Layer):
    def call(self, inputs):
        segments = tf.constant([0, 0, 0, 1, 1])
        return tf.transpose(tf.math.segment_prod(tf.transpose(inputs), segments))

inputs = layers.Input((10,))
embed = layers.Embedding(20, 5)(inputs)
output = MyLayer1()(embed)
output = layers.GlobalAveragePooling1D()(output)
model = tf.keras.Model(inputs, output)

x = np.random.randint(20, size=(100, 10))
y = np.random.randn(100, 2)
model.compile(loss='mae')
model.fit(x, y)
```

Replacing tf.math.segment_prod with other operations, like tf.math.unsorted_segment_prod (with num_segments=2) or tf.math.segment_sum, seems to work. Colab gist: (link)
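For clarity on what the forward pass computes, segment_prod multiplies together the entries that share a segment id. A pure-Python sketch of those semantics (the op, not the missing gradient):

```python
def segment_prod(data, segment_ids):
    # Product of the data entries sharing a segment id; ids must be
    # sorted, matching tf.math.segment_prod's contract.
    out = [1.0] * (segment_ids[-1] + 1)
    for value, seg in zip(data, segment_ids):
        out[seg] *= value
    return out

result = segment_prod([2.0, 3.0, 4.0, 5.0, 6.0], [0, 0, 0, 1, 1])
```

With the segment ids from the snippet above, the five features collapse to two: the product of the first three and the product of the last two.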
tensorflowtensorflow
invalid link in documentation to code
Bug
TensorFlow documentation issue.

**URL(s) with the issue:** the linked page ("Image Object Detection API" under the Object Detection API docs) is not found (404).

**Description of issue (what needs changing):** the material has been moved.
tensorflowtensorflow
doc code style say to install clang tidy then example use clang format
Bug
**URL(s) with the issue:** (link)

**Description of issue (what needs changing):** The docs say to install clang-tidy, but the example given says to run clang-format. Is this intended? I would have expected it to run clang-tidy.
tensorflowtensorflow
TPU error: InvalidArgumentError: NodeDef expected inputs do not match the 0 inputs specified
Bug
TPU is not working with any version greater than TF 2.2.0; even on TF 2.2.0 it gets stuck indefinitely.
tensorflowtensorflow
fail launch resizenearestneighbor
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
- OS platform and distribution: Linux Ubuntu 16.04 / 18.04
- TensorFlow installed from: binary
- TensorFlow version: 2.3.0-rc0
- Python version: 3.6.7
- CUDA/cuDNN version: CUDA 10.2, cuDNN 7.6.5.32-1
- GPU model and memory: MX150

**Describe the current behavior**

```
tensorflow.python.framework.errors_impl.InternalError: Failed launching ResizeNearestNeighbor [Op:ResizeNearestNeighbor]
```

(Screenshot from 2020-07-03 13-07-02 and tf_env.txt attached.)

**Standalone code to reproduce the issue**

```python
tf.image.resize(tf.ones((1, 416, 416, 3)), (200, 100),
                tf.image.ResizeMethod.NEAREST_NEIGHBOR,
                preserve_aspect_ratio=True)
```

**Other info / logs**
Related: I am using a GPU, and I can't hit any bug when I run the GPU kernel tests.
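For context on what the failing op computes, nearest-neighbor resizing just copies, for each output pixel, the source pixel whose index scales to it. A minimal pure-Python sketch on a single-channel grid (index rounding here is one simple convention, not necessarily the kernel's exact `half_pixel_centers` behavior):

```python
def resize_nearest(image, new_h, new_w):
    # image: list of rows (one channel); each output pixel copies the
    # nearest source pixel under simple floor scaling.
    old_h, old_w = len(image), len(image[0])
    return [[image[min(old_h - 1, y * old_h // new_h)]
                  [min(old_w - 1, x * old_w // new_w)]
             for x in range(new_w)]
            for y in range(new_h)]

small = resize_nearest([[1, 2], [3, 4]], 4, 4)
```

Upscaling a 2x2 grid to 4x4 duplicates each pixel into a 2x2 block.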
tensorflowtensorflow
tf keras layers lstm tf function fail to compute jacobian with pfor on gpu
Bug
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Ubuntu 16.04
- TensorFlow installed from: binary
- TensorFlow version: v1.12.1-34938-g99fea8da0d, 2.3.0-rc0
- Python version: 3.7
- CUDA/cuDNN version (`conda list | grep cud`): cudatoolkit 10.1.243 h6bb024c_0, cudnn 7.6.5 cuda10.1_0
- GPU model and memory: NVIDIA GeForce GTX 1080 Ti

**Describe the current behavior**
TensorFlow crashes when computing GradientTape.jacobian for an output of tf.keras.layers.LSTM within a tf.function when run on GPU.

**Describe the expected behavior**
The graph compiles correctly and efficiently computes the Jacobian.

**Standalone code to reproduce the issue**

```python
import tensorflow as tf

batch_size, sequence_length = 2, 3
x_input = tf.keras.layers.Input(shape=(sequence_length, 1), name="input", dtype=tf.float32)
mask_input = tf.keras.layers.Input(shape=(sequence_length,), name="mask", dtype=tf.bool)
out = tf.keras.layers.LSTM(units=8, return_sequences=True, return_state=False)(x_input, mask=mask_input)
out = tf.keras.layers.Dense(1, activation="linear")(out)
model = tf.keras.Model([x_input, mask_input], out)

x = tf.random.uniform((batch_size, sequence_length, x_input.shape[-1]), dtype=x_input.dtype)
mask = tf.sequence_mask(
    tf.random.uniform((batch_size,), minval=0, maxval=sequence_length, dtype=tf.int32),
    maxlen=sequence_length + 1)

@tf.function(experimental_relax_shapes=True)
def compute_jacobian():
    y_true = tf.zeros((batch_size,))
    with tf.GradientTape() as tape:
        y = model([x, mask])
        y = tf.reduce_sum(y, axis=1)
        loss = tf.losses.mse(y_pred=y, y_true=y_true)
    jacobian = tape.jacobian(loss, model.trainable_variables, experimental_use_pfor=True)
    return jacobian

jacobian = compute_jacobian()
```

**Other info / logs**
Running the above code results in a huge error trace, ending with:

```
NotImplementedError: Vectorization tried to stack variant tensor
Tensor("gradients/while_grad/gradients/grad_ys_4/pfor/Identity:0", shape=..., dtype=variant).
This is likely because vectorization of that variant is not fully supported yet.
```

I know that the pfor flag is experimental and that setting experimental_use_pfor=False would make the code run. However, in that case the resulting graph runs so slowly that it is effectively unusable, even for a simple 2-element Jacobian (with parallel_iterations=10, experimental_use_pfor=False). That run also produces the following warnings, which might have something to do with the slowness:

```
2020-07-03 08:31:26.889383: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:581] function_optimizer failed: Invalid argument: Input 0 of node while/enter_15 was passed bool from functional_1/lstm/PartitionedCall:5 incompatible with expected int32.
2020-07-03 08:31:26.933046: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:581] layout failed: Out of range: src_output = 26, but num_outputs is only 26
2020-07-03 08:31:26.978710: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:581] function_optimizer failed: Invalid argument: Input 0 of node while/enter_15 was passed bool from functional_1/lstm/PartitionedCall:5 incompatible with expected int32.
2020-07-03 08:31:27.036554: W tensorflow/core/common_runtime/process_function_library_runtime.cc:773] Ignoring multi-device function optimization failure: Invalid argument: Input 0 of node while/enter_15 was passed bool from functional_1/lstm/PartitionedCall:5 incompatible with expected int32.
```

Any workaround would also be much appreciated, and I'd even be happy to contribute a fix for this if one would be doable without much C++ experience.
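As a pfor-free sanity check, any Jacobian can be cross-validated with central finite differences. A minimal pure-Python version on a toy function (illustrative only; this is a numerical check, not a workaround for the pfor bug):

```python
def numeric_jacobian(f, x, eps=1e-6):
    # J[i][j] = d f_i / d x_j, estimated with central differences.
    f0 = f(x)
    jac = [[0.0] * len(x) for _ in range(len(f0))]
    for j in range(len(x)):
        xp = list(x); xp[j] += eps
        xm = list(x); xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(len(f0)):
            jac[i][j] = (fp[i] - fm[i]) / (2.0 * eps)
    return jac

# f(v) = [v0 * v1, v0 + v1] has Jacobian [[v1, v0], [1, 1]].
jac = numeric_jacobian(lambda v: [v[0] * v[1], v[0] + v[1]], [2.0, 3.0])
```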
tensorflowtensorflow
tensor object have no attribute numpy for concat
Bug
Hi, I'm working on a custom implementation of the physics-informed deep learning model from this paper, and while rewriting a snippet with custom gradients it turned out not to work for me on later TensorFlow versions. I've rewritten this snippet (L105):

```python
self.x_u_tf = tf.placeholder(tf.float32, shape=[None, self.x_u.shape[1]])
self.t_u_tf = tf.placeholder(tf.float32, shape=[None, self.t_u.shape[1]])
self.net_f(self.x_f_tf, self.t_f_tf)

def net_u(self, x, t):
    u = self.neural_net(tf.concat([x, t], 1), self.weights, self.biases)
    return u

def net_f(self, x, t):
    u = self.net_u(x, t)
    u_t = tf.gradients(u, t)[0]
    u_x = tf.gradients(u, x)[0]
    u_xx = tf.gradients(u_x, x)[0]
    f = u_t + u * u_x - self.nu * u_xx
    return f
```

to the latest API with GradientTape (link to Colab notebook with the error below), and it throws `AttributeError: 'Tensor' object has no attribute 'numpy'` for `g.gradient(u_r, x)`. It turns out that cases without concat work, so it looks like concat changes the behavior. The example from the paper works on older versions, so I would be thankful for help figuring out this concat issue.

**System information**
- OS platform and distribution: macOS 10.15.5
- TensorFlow installed from: binary (pip install tensorflow)
- TensorFlow version: v2.2.0-rc4-8-g2b96f3662b, 2.2.0
- Python version: 3.7.7

**Describe the current behavior**
GradientTape.gradient throws `AttributeError: 'Tensor' object has no attribute 'numpy'` for a case that older TensorFlow versions handled.

**Standalone code to reproduce the issue**
Error reproduced in Colab (link).
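The residual that net_f builds, f = u_t + u·u_x − ν·u_xx (the Burgers'-equation residual), can be checked numerically on a known closed-form solution. A pure-Python sketch using finite differences instead of tf.gradients (function and test values are illustrative):

```python
def burgers_residual(u, x, t, nu, eps=1e-5):
    # f = u_t + u * u_x - nu * u_xx, the same residual net_f builds,
    # with central finite differences standing in for tf.gradients.
    u_t = (u(x, t + eps) - u(x, t - eps)) / (2.0 * eps)
    u_x = (u(x + eps, t) - u(x - eps, t)) / (2.0 * eps)
    u_xx = (u(x + eps, t) - 2.0 * u(x, t) + u(x - eps, t)) / (eps * eps)
    return u_t + u(x, t) * u_x - nu * u_xx

# u(x, t) = x / (1 + t) solves the equation exactly (u_xx = 0 and
# u_t + u * u_x = 0), so the residual should be ~0 at any point.
r = burgers_residual(lambda x, t: x / (1.0 + t), 0.7, 0.3, nu=0.01)
```

A check like this is handy for validating a GradientTape rewrite of net_f against a known solution before training.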
tensorflowtensorflow
error: 'tensorflow::error' has not been declared — Status(tensorflow::error::Code code, tensorflow::StringPiece msg)
Bug
**System information**
- OS platform and distribution: Linux Ubuntu 16.04
- TensorFlow installed from: source
- TensorFlow version: 1.14
- Python version: 3.5
- Bazel version: 0.24.1
- GCC/compiler version: 5.4.0 20160609
- CUDA/cuDNN version: 10.0 / 7.6.5
- GPU model and memory: GTX 1080 Ti

**Describe the current behavior**
I am new to C++. I want to build a CNN model with the C++ TensorFlow API. I followed a manual on the internet, finished compiling TensorFlow, and generated the libtensorflow_cc.so file. Now I want to build a test using CMake 3.13.0. I created a CMakeLists.txt and ran cmake and make; make errors out:

```
make[2]: Warning: File '/home/john/tensorflow/tensorflow/contrib/makefile/downloads/eigen/Eigen/src/Core/arch/NEON/Complex.h' has modification time 11068 s in the future
[ 50%] Building CXX object CMakeFiles/tensorflow_cc_test.dir/hello.cpp.o
In file included from /home/john/tensorflow/tensorflow/core/lib/core/errors.h:21:0,
                 from /home/john/tensorflow/tensorflow/core/platform/env.h:24,
                 from /home/john/tensorflow_cc_test/hello.cpp:1:
/home/john/tensorflow/tensorflow/core/lib/core/status.h:45:22: error: 'tensorflow::error' has not been declared
   Status(tensorflow::error::Code code, tensorflow::StringPiece msg);
/home/john/tensorflow/tensorflow/core/lib/core/status.h:45:34: error: expected ')' before 'code'
/home/john/tensorflow/tensorflow/core/lib/core/status.h:56:15: error: 'error' in namespace 'tensorflow' does not name a type
/home/john/tensorflow/tensorflow/core/lib/core/status.h:90:17: error: 'error' in namespace 'tensorflow' does not name a type
/home/john/tensorflow/tensorflow/core/lib/core/errors.h:30:23: error: 'error' in namespace 'tensorflow' does not name a type
   typedef ::tensorflow::error::Code Code;
/home/john/tensorflow/tensorflow/core/lib/core/errors.h:65:15: error: 'class tensorflow::Status' has no member named 'code'
```

After that, the same trio of errors — "'error' is not a member of 'tensorflow'", "'class tensorflow::Status' has no member named 'code'", and "'tensorflow::error' has not been declared" (each at errors.h:95, :100, and the corresponding DECLARE_ERROR expansion line) — repeats once per DECLARE_ERROR macro: Cancelled, InvalidArgument, NotFound, AlreadyExists, ResourceExhausted, Unavailable, FailedPrecondition, OutOfRange, Unimplemented, Internal, Aborted, DeadlineExceeded, DataLoss, Unknown, PermissionDenied, Unauthenticated. The build then fails:

```
/home/john/tensorflow/tensorflow/core/lib/core/errors.h:158:21: error: 'tensorflow::error' has not been declared
   using tensorflow::error::OK;
make[2]: *** [CMakeFiles/tensorflow_cc_test.dir/hello.cpp.o] Error 1
make[1]: *** [CMakeFiles/tensorflow_cc_test.dir/all] Error 2
make: *** [all] Error 2
```

I cannot find anything on the internet about this. Can somebody help?
tensorflowtensorflow
tf.estimator predict cannot run consecutively on Colab TPU
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colab TPU
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
TensorFlow installed from (source or binary): Colab Jupyter notebook
TensorFlow version (use command below): 1.15.2
Python version: 3.6
bert-tensorflow version: 1.0.1

Describe the current behavior
I don't know if this is a good place to report a tf.estimator predict API bug. I was trying to use bert-tensorflow to do fine-tuning. After the estimator trained, I used tf.estimator predict to predict on the test dataset twice in sequence with 2 checkpoints (both different and the same checkpoint). The two predict calls are in 2 separate Jupyter cells. The first predict runs OK: the prediction probabilities are returned as expected. The second predict gets stuck at the log line "Shutting down InfeedController thread" and does not move further. I've tried several methods to make the 2nd predict run like the 1st one, but nothing has worked out. To provide more information, I paste the logs where the 2nd predict gets stuck:

INFO:tensorflow:name = bert/encoder/layer_23/output/LayerNorm/beta:0, shape = (1024,), *INIT_FROM_CKPT*
INFO:tensorflow:name = bert/encoder/layer_23/output/LayerNorm/gamma:0, shape = (1024,), *INIT_FROM_CKPT*
INFO:tensorflow:name = bert/pooler/dense/kernel:0, shape = (1024, 1024), *INIT_FROM_CKPT*
INFO:tensorflow:name = bert/pooler/dense/bias:0, shape = (1024,), *INIT_FROM_CKPT*
INFO:tensorflow:name = output_weights:0, shape = (2, 1024)
INFO:tensorflow:name = output_bias:0, shape = (2,)
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:TPU job name worker
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from gs://sr2pr/liuyi_test/early_stop/early_stop_ckpt/model.ckpt-853
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Init TPU system
INFO:tensorflow:Initialized TPU in 0 seconds
INFO:tensorflow:Starting infeed thread controller.
INFO:tensorflow:Starting outfeed thread controller.
INFO:tensorflow:Initialized dataset iterators in 0 seconds
INFO:tensorflow:Enqueue next (1) batch(es) of data to infeed.
INFO:tensorflow:Dequeue next (1) batch(es) of data from outfeed.
INFO:tensorflow:Outfeed finished for iteration (0, 0)
INFO:tensorflow:Stop infeed thread controller
INFO:tensorflow:Shutting down InfeedController thread.

I do know that estimator predict returns an iterator/generator, which prevents interleaving calls, as indicated in the official TensorFlow documentation, but I think my case is not an interleaved operation. I don't know whether this is caused by Colab or by the predict API, or whether there is any way to work around this bug. Currently I have no way to run predict twice unless I shut down the Colab runtime, close the browser tab, upload the Jupyter notebook, and reinitialize and reconfigure everything again. Any discussion that may help is appreciated.
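As the report notes, predict returns a lazy generator. As a pure-Python analogy only (hypothetical names; this models the generator contract the second call should satisfy, not the TPU infeed machinery), each predict call is expected to hand back a fresh generator that can be consumed in full, independently of earlier calls:

```python
def predict(checkpoint):
    """Stand-in for an Estimator-style predict: lazily yields one dict per example."""
    for i in range(3):
        yield {"checkpoint": checkpoint, "probability": 0.1 * i}

first = list(predict("model.ckpt-853"))   # first call: fully consumed, works
second = list(predict("model.ckpt-853"))  # second call returns a fresh generator,
assert len(first) == len(second) == 3     # so it should also yield all results

gen = predict("model.ckpt-853")
list(gen)                # exhaust the generator
assert list(gen) == []   # a consumed generator yields nothing more
```

The bug reported here is that the second real call hangs during TPU infeed shutdown rather than producing a fresh, consumable generator as above.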
tensorflowtensorflow
tf.where docs on Args: x seem incorrect
Bug
URL(s) with the issue
Please provide a link to the documentation entry, for example: [1]

Description of issue (what needs changing)
For Args: x.
Original: "If provided, a Tensor which is of the same type as y, and has a shape broadcastable with condition and y."
Should be: "If provided, a Tensor which is of the same type as x, and has a shape broadcastable with condition and y."
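For context, tf.where(condition, x, y) selects elementwise from x where the condition holds and from y elsewhere, which is why the docs require the two value arguments to share a type. A pure-Python sketch of that selection (hypothetical helper; broadcasting is omitted for brevity):

```python
def where_like(condition, x, y):
    """Elementwise select: x[i] where condition[i] is true, else y[i] (no broadcasting)."""
    if not (len(condition) == len(x) == len(y)):
        raise ValueError("condition, x and y must have matching lengths")
    return [xv if c else yv for c, xv, yv in zip(condition, x, y)]

print(where_like([True, False, True], [1, 2, 3], [10, 20, 30]))  # -> [1, 20, 3]
```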
tensorflowtensorflow
A function wrapped with tf.function gives wrong gradients on loss functions that are themselves wrapped by the same decorator
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: No
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): 2.2.0
Python version: 3.6.9
Bazel version (if compiling from source):
GCC/Compiler version (if compiling from source):
CUDA/cuDNN version:
GPU model and memory:

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
I have two versions of a loss function, one decorated with tf.function and the other not, and I have two functions that compute the gradient using these two loss functions, one decorated with tf.function and the other not. When I use the gradient function that is not decorated, I get the same result for both loss functions. However, when I use the gradient function that is decorated, I get different results, and in particular I get a wrong result for the loss function that is decorated.

Describe the expected behavior
I would expect all of the above to give the same result.

Standalone code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
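Since the report includes no standalone repro, one generic way to decide which of two disagreeing gradient paths is wrong is a central finite-difference check against the loss itself. A pure-Python sketch with a toy scalar loss (all names hypothetical, not the reporter's code):

```python
def loss(w):
    # toy scalar loss standing in for the reporter's loss function (hypothetical)
    return (w - 3.0) ** 2

def analytic_grad(w):
    # the gradient an autodiff path should report: d/dw (w - 3)^2 = 2 * (w - 3)
    return 2.0 * (w - 3.0)

def numeric_grad(f, w, eps=1e-5):
    # central finite difference: (f(w + eps) - f(w - eps)) / (2 * eps)
    return (f(w + eps) - f(w - eps)) / (2 * eps)

w = 1.25
assert abs(analytic_grad(w) - numeric_grad(loss, w)) < 1e-6
```

Whichever of the decorated or undecorated gradient paths disagrees with the finite-difference estimate is the buggy one.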
tensorflowtensorflow
Query results do not show aliased functions
Bug
If we search for tf.argmax in the TensorFlow r1.15 documentation, no result is returned for TF 1.x. However, if we search for tf.math.argmax, we do get it, even though both names are in the annotation:

# pylint: disable=redefined-builtin
@tf_export(v1=["math.argmax", "argmax"])
@deprecation.deprecated_args(None, "Use the `axis` argument instead", "dimension")
@_set_doc(
    gen_math_ops.arg_max.__doc__.replace("dimensions", "axes").replace(
        "dimension", "axis"))
def argmax(input,
           axis=None,
           name=None,
           dimension=None,
           output_type=dtypes.int64):
  axis = deprecation.deprecated_argument_lookup("axis", axis, "dimension",
                                                dimension)
  return argmax_v2(input, axis, output_type, name)

Why is only the first alias in the annotation shown in the documentation?
tensorflowtensorflow
Distributed Keras sparse feature training error
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 16.04.6 LTS (Xenial Xerus)
TensorFlow installed from: binary
TensorFlow version (use command below): 2.2.0
Python version: 3.7.3
CUDA/cuDNN version:
GPU model and memory: Tesla P40, 22919MiB

Standalone code to reproduce the issue (only part of the code):

import tensorflow as tf
import tensorflow.keras.backend as K
import sys
from datetime import date, timedelta
from tensorflow.keras.layers import *
from tensorflow.keras import Model, Sequential
from meta import ModelMeta
from dataset import train_input_fn
from layers import EmbedLayerForSparse
from deepctr.layers import InnerProductLayer, PredictionLayer

BATCH_THREAD_NUMBER = 15
BATCH_SIZE = 8192
DIMENSION = 16

strategy = tf.distribute.experimental.CentralStorageStrategy()
with strategy.scope():
    inputs, outputs = {}, {}
    for k, v in CAPACITY_MAP.items():
        inputs[k] = Input(shape=(v,), sparse=True, name=k)
        outputs[k] = EmbedLayerForSparse(vocabulary_size=v,
                                         embed_size=DIMENSION,
                                         embed_initializer=tf.keras.initializers.he_normal(),
                                         name='sparselayer_' + k)(inputs[k])
    linear = list(outputs.values())
    inner = InnerProductLayer()([Reshape((1, DIMENSION))(f) for f in outputs.values()])
    inner_reshaped = Reshape((inner.get_shape().as_list()[1],))(inner)
    concat = Concatenate(axis=1)(linear + [inner_reshaped])
    output = ReLU()(BatchNormalization()(concat))
    output = Dense(512)(output)
    output = ReLU()(BatchNormalization()(output))
    output = Dense(128)(output)
    output = ReLU()(BatchNormalization()(output))
    output = Dense(1)(output)
    output = PredictionLayer(use_bias=False)(output)
    model = tf.keras.Model(inputs=list(inputs.values()), outputs=output)
    model_meta = ModelMeta(model, CANDIDATE_META)
    initial_learning_rate = 0.001
    lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate, decay_steps=1000, decay_rate=0.96, staircase=True)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
                  loss=tf.keras.losses.BinaryCrossentropy(),
                  metrics=[tf.keras.metrics.AUC(num_thresholds=10000)])
    print('compile complete')

with strategy.scope():
    while start_date <= end_date:
        dataset = train_input_fn(PATH_TO % start_date.isoformat(),
                                 BATCH_THREAD_NUMBER, BATCH_SIZE, CAPACITY_MAP)
        model.fit(dataset, batch_size=BATCH_SIZE,
                  callbacks=[tf.keras.callbacks.TensorBoard(
                      log_dir='./logs/%s' % start_date.isoformat(), profile_batch=0)])
        model.save('model_%s' % start_date.isoformat())
        start_date += timedelta(days=1)
        break

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

Worker log:

(from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating: No similar op available at this time.
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/parsing_config.py:719: sparse_merge (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating: No similar op available at this time.
2020-07-01 16:16:14.067956: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[6689] = [527,217950] is repeated
2020-07-01 16:16:14.123837: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[8964] = [527,217950] is repeated
2020-07-01 16:16:14.556197: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[6689] = [527,217950] is repeated
2020-07-01 16:16:14.606631: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[20296] = [446,1263545] is repeated
2020-07-01 16:16:14.605999: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[20296] = [446,1263545] is repeated
2020-07-01 16:16:14.670875: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[8964] = [527,217950] is repeated
2020-07-01 16:16:14.806522: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[2997] = [104,246925] is repeated
2020-07-01 16:16:15.025014: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[27417] = [1409,577748] is repeated
2020-07-01 16:16:15.616607: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[121] = [1,827513] is repeated
2020-07-01 16:16:34.125064: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[115717] = [4149,1289703] is repeated
2020-07-01 16:16:34.184952: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument:
2020-07-01 16:16:34.187837: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[183021] = [4086,3891] is repeated
Traceback (most recent call last):
  File "pnn.py", line 67, in <module>
    model.fit(dataset, batch_size=BATCH_SIZE, callbacks=[tf.keras.callbacks.TensorBoard(log_dir='./logs/%s' % start_date.isoformat(), profile_batch=0)])
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 848, in fit
    tmp_logs = train_function(iterator)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
    result = self._call(*args, **kwds)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 644, in _call
    return self._stateless_fn(*args, **kwds)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2420, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1665, in _filtered_call
    self.captured_inputs)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 598, in call
    ctx=ctx)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[6689] = [527,217950] is repeated
	 [[node SerializeManySparse_10]]
	 [[MultiDeviceIteratorGetNextFromShard]]
	 [[RemoteCall]]
	 [[IteratorGetNext]] [Op:__inference_train_function_25126]
Function call stack:
train_function

PS log:

2020-07-01 16:16:34.184952: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[183021] = [4086,3891] is repeated
2020-07-01 16:16:34.187837: W tensorflow/core/framework/op_kernel.cc:1753] OP_REQUIRES failed at serialize_sparse_op.cc:382 : Invalid argument: indices[183021] = [4086,3891] is repeated
Traceback (most recent call last):
  File "pnn.py", line 67, in <module>
    model.fit(dataset, batch_size=BATCH_SIZE, callbacks=[tf.keras.callbacks.TensorBoard(log_dir='./logs/%s' % start_date.isoformat(), profile_batch=0)])
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 848, in fit
    tmp_logs = train_function(iterator)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
    result = self._call(*args, **kwds)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 644, in _call
    return self._stateless_fn(*args, **kwds)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2420, in __call__
    return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1665, in _filtered_call
    self.captured_inputs)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1746, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 598, in call
    ctx=ctx)
  File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[6689] = [527,217950] is repeated
	 [[node SerializeManySparse_10]]
	 [[MultiDeviceIteratorGetNextFromShard]]
	 [[RemoteCall]]
	 [[IteratorGetNext]] [Op:__inference_train_function_25126]
Function call stack:
train_function
tensorflowtensorflow
How to verify TensorFlow installation
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue: [1] (Windows)

Description of issue (what needs changing)
The documentation gives a command to verify the installation, but no clear indication of what the result should be. I get a list of 9 warnings (all CUDA related) interspersed with 8 information messages. Finally there is tf.Tensor(72.93745, shape=(), dtype=float32) at the end. Is this correct? How would I know?

Clear description: see above.
Correct links: is the link to the source code correct? N/A
Parameters defined: are all parameters defined and formatted correctly? N/A
Returns defined: are return values defined? N/A
Raises listed and defined: N/A
Usage example: is there a usage example? Half of one: it lacks any output to check against, or instructions on how to interpret the results.
Request visuals, if applicable: N/A
Submit a pull request? No, I am far too new to GitHub/TensorFlow to ask sensible questions, let alone give sensible answers.
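For reference, the quoted final line does have a predictable shape: the install-check command reduces random values, so the number varies from run to run but the surrounding structure does not. A small pure-Python sketch (hypothetical helper) of checking the captured last line for that structure:

```python
import re

# The final line this reporter's run printed; only the structure around the
# (random) number is meaningful for verifying the installation.
last_line = "tf.Tensor(72.93745, shape=(), dtype=float32)"

# A scalar float32 tensor repr: tf.Tensor(<number>, shape=(), dtype=float32)
pattern = r"^tf\.Tensor\(-?\d+(\.\d+)?, shape=\(\), dtype=float32\)$"
assert re.match(pattern, last_line)  # if this line appears, the check succeeded
```

Under that assumption, the CUDA warnings above it would not by themselves indicate a broken install, since the command still produced a tensor.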
tensorflowtensorflow
tf.lite.TFLiteConverter.from_keras_model().convert() raises error with tf.keras.mixed_precision.experimental.Policy
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): v2.2.0-rc4-8-g2b96f3662b 2.2.0
Python version: 3.8.2
Bazel version (if compiling from source):
GCC/Compiler version (if compiling from source):
CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.5
GPU model and memory: NVIDIA GeForce GTX 970, 4 GB
Exact command to reproduce:

Describe the problem
Steps to reproduce the bug: execute the code provided with the mixed_float16 mixed-precision policy. In this situation we want to build a TFLite model from a model already quantized using tf.keras.mixed_precision.experimental.Policy('mixed_float16'). But when executing the following, TensorFlow crashes and raises an error ("non-broadcastable operands") with a memory error:

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

If we set the mixed-precision policy arg to None, and thus deactivate the mixed-precision policy, everything goes fine. Thank you and have a good day.

Source code / logs

Source code:

import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda, Dense, Dropout
from tensorflow.keras.optimizers import Optimizer
from tensorflow.keras import backend as K
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.models import Model

# Activate mixed precision policy
mix_precision_policy_arg = str('mixed_float16')
if mix_precision_policy_arg is not None:
    mix_precision_policy = tf.keras.mixed_precision.experimental.Policy(mix_precision_policy_arg)
    tf.keras.mixed_precision.experimental.set_policy(mix_precision_policy)

# Size parameters for the model
img_height, img_width, img_depth = 224, 224, 3

# Model definition
inp = Input(shape=(int(img_height), int(img_width), int(img_depth)))
mobilenet_model = MobileNetV2(input_shape=(int(img_height), int(img_width), int(img_depth)),
                              alpha=0.35,
                              include_top=False,
                              weights='imagenet',
                              input_tensor=inp,
                              pooling='avg')
out = Dense(1, activation='tanh')(mobilenet_model.output)

# Build the whole model
model = Model(inputs=inp, outputs=out)

# Convert to TFLite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the TF Lite model
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)

Output log:

2020-06-30 08:37:15.257384: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2020-06-30 08:37:15.262414: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-06-30 08:37:15.267142: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:08:00.0 name: GeForce GTX 970 computeCapability: 5.2 coreClock: 1.253GHz coreCount: 13 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 208.91GiB/s
2020-06-30 08:37:15.276114: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-06-30 08:37:15.287313: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-06-30 08:37:15.292024: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-06-30 08:37:15.305367: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-06-30 08:37:15.310125: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-06-30 08:37:15.325054: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-06-30 08:37:15.338802: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-06-30 08:37:15.343460: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-06-30 08:37:15.356369: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-30 08:37:15.361262: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0
2020-06-30 08:37:15.374944: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N
2020-06-30 08:37:15.378461: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2991 MB memory) -> physical GPU (device: 0, name: GeForce GTX 970, pci bus id: 0000:08:00.0, compute capability: 5.2)
2020-06-30 08:37:15.525109: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: graph_to_optimize
2020-06-30 08:37:15.530484: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.003ms.
2020-06-30 08:37:15.536242: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0ms.
2020-06-30 08:37:17.175576: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2020-06-30 08:37:17.180393: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-06-30 08:37:17.184777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:08:00.0 name: GeForce GTX 970 computeCapability: 5.2 coreClock: 1.253GHz coreCount: 13 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 208.91GiB/s
2020-06-30 08:37:17.193569: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-06-30 08:37:17.207166: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-06-30 08:37:17.211906: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-06-30 08:37:17.224565: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-06-30 08:37:17.240773: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-06-30 08:37:17.245702: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-06-30 08:37:17.260413: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-06-30 08:37:17.274603: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-06-30 08:37:17.278648: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-30 08:37:17.292135: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0
2020-06-30 08:37:17.295110: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N
2020-06-30 08:37:17.308571: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2991 MB memory) -> physical GPU (device: 0, name: GeForce GTX 970, pci bus id: 0000:08:00.0, compute capability: 5.2)
2020-06-30 08:37:17.530109: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: graph_to_optimize
2020-06-30 08:37:17.535068: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   constant_folding: Graph size after: 433 nodes (-350), 442 edges (-316), time = 31.294ms.
2020-06-30 08:37:17.540858: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   constant_folding: Graph size after: 433 nodes (0), 442 edges (0), time = 7.549ms.
Traceback (most recent call last):
  File "c:\users\mathieu\code\tfissue\tflite_issue.py", line 49, in <module>
    tflite_model = converter.convert()
  File "c:\program files\python\lib\site-packages\tensorflow\lite\python\lite.py", line 514, in convert
    result = _toco_convert_impl(
  File "c:\program files\python\lib\site-packages\tensorflow\lite\python\convert.py", line 491, in toco_convert_impl
    data = toco_convert_protos(
  File "c:\program files\python\lib\site-packages\tensorflow\lite\python\convert.py", line 227, in toco_convert_protos
    raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: See console for info.
2020-06-30 08:37:18.218076: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-06-30 08:37:20.341895: W tensorflow/compiler/mlir/lite/python/graphdef_to_tfl_flatbuffer.cc:144] Ignored output_format.
2020-06-30 08:37:20.342069: W tensorflow/compiler/mlir/lite/python/graphdef_to_tfl_flatbuffer.cc:147] Ignored drop_control_dependency.
2020-06-30 08:37:20.420203: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-06-30 08:37:20.428378: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1803e4c72d0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-30 08:37:20.428713: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-06-30 08:37:20.429873: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-06-30 08:37:20.453669: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:08:00.0 name: GeForce GTX 970 computeCapability: 5.2 coreClock: 1.253GHz coreCount: 13 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 208.91GiB/s
2020-06-30 08:37:20.454062: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-06-30 08:37:20.457145: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-06-30 08:37:20.459818: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-06-30 08:37:20.460738: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-06-30 08:37:20.464328: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-06-30 08:37:20.466349: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-06-30 08:37:20.472167: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-06-30 08:37:20.472361: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-06-30 08:37:21.033060: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-30 08:37:21.033405: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0
2020-06-30 08:37:21.033543: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N
2020-06-30 08:37:21.033777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2854 MB memory) -> physical GPU (device: 0, name: GeForce GTX 970, pci bus id: 0000:08:00.0, compute capability: 5.2)
2020-06-30 08:37:21.037082: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1805bff0530 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-06-30 08:37:21.037386: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 970, Compute Capability 5.2
loc(callsite("model/Conv_1_bn/FusedBatchNormV3"("c:\program files\python\lib\site-packages\tensorflow\python\eager\def_function.py":865:0) at callsite("c:\program files\python\lib\site-packages\tensorflow\python\eager\def_function.py":959:0 at callsite("c:\program files\python\lib\site-packages\tensorflow\lite\python\lite.py":435:0 at "c:\users\mathieu\code\tfissue\tflite_issue.py":48:0)))): error: non-broadcastable operands
Windows fatal exception: access violation
Current thread 0x00001e54 (most recent call first):
  File "c:\program files\python\lib\site-packages\tensorflow\lite\toco\python\toco_from_protos.py", line 50 in execute
  File "c:\program files\python\lib\site-packages\absl\app.py", line 250 in _run_main
  File "c:\program files\python\lib\site-packages\absl\app.py", line 299 in run
  File "c:\program files\python\lib\site-packages\tensorflow\python\platform\app.py", line 40 in run
  File "c:\program files\python\lib\site-packages\tensorflow\lite\toco\python\toco_from_protos.py", line 93 in main
  File "c:\program files\python\scripts\toco_from_protos.exe\__main__.py", line 7 in <module>
  File "c:\program files\python\lib\runpy.py", line 86 in _run_code
  File "c:\program files\python\lib\runpy.py", line 193 in _run_module_as_main
tensorflowtensorflow
Using zeros_like in TensorFlow with Keras add_loss leads to error
Bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu, 4.4.0-109-generic
TensorFlow installed from (source or binary): binary
Python version: 3.6
CUDA/cuDNN version:
GPU model and memory:
Git version: v2.0.0-rc2-26-g64c3d38, version 2.0.0

Issue
If I run the following code, I get the error "InternalError: Invalid tape state." However, if I switch tf.keras.backend.zeros_like(x) to ones_like(x), the issue disappears. It appears that the issue arises from the zeros_like.

imgIn = tf.keras.Input((112, 112, 3))
x = tf.keras.layers.GlobalAvgPool2D()(imgIn)
x = tf.keras.layers.Dense(1, activation='sigmoid')(x)
x = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(imgIn, x)
model.add_loss(tf.keras.losses.binary_crossentropy(tf.keras.backend.zeros_like(x), x))
model.compile('adam', 'binary_crossentropy')
model.train_on_batch(tf.ones((1, 112, 112, 3)), [1])
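Note that binary cross-entropy against an all-zeros target is mathematically well defined (per element it reduces to -log(1 - p)), so the zeros_like failure looks like a tape/graph bookkeeping issue rather than a numerical one. A pure-Python sketch of that reduction (hypothetical helper mirroring the clipped cross-entropy formula, not the Keras implementation):

```python
import math

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean of -[y*log(p) + (1-y)*log(1-p)] over the elements, with clipping."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip for numerical stability
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# With zeros_like targets (y = 0), the loss reduces to -log(1 - p):
p = 0.3
assert abs(binary_crossentropy([0.0], [p]) - (-math.log(1 - p))) < 1e-9
```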
tensorflowtensorflow
Error: tf.function + tf.data.Dataset + tensorflow-gpu incompatible
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
TensorFlow version (use command below): 2.3.0 (affects all TF >= 2.0.0)
Python version: 3.7

Describe the current behavior:
I encounter an error when a tf.data.Dataset object is created or modified inside a tf.function graph while using tensorflow-gpu. Based on other issues, it seems this is caused by TF improperly placing the VariantWrapper on a GPU (a VariantWrapper results whenever a Dataset object is created or modified inside a tf.function graph). Here is a small Colab gist highlighting code snippets of this issue and the related issues (issue1, issue2). This error has persisted since TF 2.0.0. To put it plainly, does this mean we should avoid wrapping any Dataset operations in a tf.function? E.g.:

    # Error
    @tf.function
    def f():
        dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3], [4, 5, 6]))
        for e in tf.range(3):
            for x, y in dataset:
                tf.print(x, y, e)

    f()

    # Succeeds: the dataset is created outside the graph
    dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3], [4, 5, 6]))

    @tf.function
    def f():
        for e in tf.range(3):
            # dataset is transformed into a VariantWrapper rather than
            # staying a tf.data.Dataset object
            for x, y in dataset:
                tf.print(x, y, e)

    f()

Describe the expected behavior: -
Standalone code to reproduce the issue: Colab
Other info / logs (include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached): the complete error message is:

    No unary variant device copy function found for direction: 1 and Variant type_index: tensorflow::data::(anonymous namespace)::DatasetVariantWrapper
tensorflowtensorflow
TimeDistributed layer does not support multiple outputs
Bug
System information:
Have I written custom code: yes
OS Platform and Distribution: Linux Ubuntu 18.04
TensorFlow installed from (source or binary): binary
TensorFlow version: 2.2.0
Python version: 3.6.9
CUDA/cuDNN version: 10.1 / 7.6.4
GPU model and memory: Tesla V100, 16 GB

Describe the current behavior:
The TimeDistributed layer does not support layers with multiple outputs. This issue is also related to #35824, where the missing support for multiple inputs is mentioned, which leads to the overall missing support for nested structures of inputs and outputs.

Describe the expected behavior:
The TimeDistributed wrapper should support wrapping layers with multiple inputs and multiple outputs.

Standalone code to reproduce the issue:

    import tensorflow as tf

    class CustomLayer(tf.keras.layers.Layer):
        def __init__(self, name=None, **kwargs):
            super(CustomLayer, self).__init__(name=name, **kwargs)
            self.conv_1 = tf.keras.layers.Conv2D(filters=1, kernel_size=(1, 1))
            self.conv_2 = tf.keras.layers.Conv2D(filters=1, kernel_size=(1, 1))

        def call(self, inputs):
            output_1 = self.conv_1(inputs)
            output_2 = self.conv_2(inputs)
            return output_1, output_2

        def compute_output_shape(self, input_shape):
            output_shape_1 = self.conv_1.compute_output_shape(input_shape)
            output_shape_2 = self.conv_2.compute_output_shape(input_shape)
            return output_shape_1, output_shape_2

    if __name__ == '__main__':
        inputs = tf.keras.Input(shape=(None, None, None, 1))
        custom_layer = CustomLayer()
        output_1, output_2 = tf.keras.layers.TimeDistributed(custom_layer)(inputs)

Other info / logs:

      File "reproduce_template.py", line 29, in <module>
        output_1, output_2 = tf.keras.layers.TimeDistributed(custom_layer)(inputs)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 922, in __call__
        outputs = call_fn(cast_inputs, *args, **kwargs)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/wrappers.py", line 246, in call
        output_shape = self.compute_output_shape(input_shape).as_list()
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/wrappers.py", line 192, in compute_output_shape
        child_output_shape = tensor_shape.TensorShape(child_output_shape)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py", line 771, in __init__
        self._dims = [as_dimension(d) for d in dims_iter]
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py", line 771, in <listcomp>
        self._dims = [as_dimension(d) for d in dims_iter]
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py", line 716, in as_dimension
        return Dimension(value)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py", line 200, in __init__
        None
      File "<string>", line 3, in raise_from
    TypeError: Dimension value must be integer or None or have an __index__ method, got TensorShape([None, None, None, 1])
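TimeDistributed essentially maps a layer over the time axis; for a layer with several outputs, the same effect can be obtained manually by applying the layer per timestep and stacking each output separately. A framework-free NumPy sketch of that idea, where per_step is a hypothetical stand-in for the two-output custom layer (this is not the Keras API, just an illustration of the mapping):

```python
import numpy as np

def per_step(frame):
    # Hypothetical stand-in for a layer with two outputs.
    return frame * 2.0, frame + 1.0

def time_distributed_multi(batch, fn):
    # Apply fn to every timestep of a (batch, time, ...) array, then
    # stack each of fn's outputs back along the time axis separately.
    step_outputs = [fn(batch[:, t]) for t in range(batch.shape[1])]
    # step_outputs: list over time of tuples over outputs; transpose it
    # so each output gets its own time-stacked array.
    return tuple(np.stack(out, axis=1) for out in zip(*step_outputs))

x = np.ones((2, 5, 3))                       # batch=2, time=5, features=3
y1, y2 = time_distributed_multi(x, per_step)  # each of shape (2, 5, 3)
```

This per-timestep loop is what one would expect TimeDistributed to do for each element of a nested output structure.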
tensorflowtensorflow
model._set_inputs does not work for Keras model
Bug
Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template

System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux, CentOS Linux release 7.7.1908 (Core)
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): v2.2.0-rc4-8-g2b96f3662b 2.2.0
Python version: Python 3.6.10 (Anaconda, Inc.)
Bazel version (if compiling from source): no
GCC/compiler version (if compiling from source): no
CUDA/cuDNN version: no
GPU model and memory: no

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior:
model._set_inputs does not work for Keras models. After calling model._set_inputs, model.inputs and model.outputs are still both None. The bug happens for TensorFlow 2.2; it works fine for TensorFlow before 2.2.

Describe the expected behavior:
model.inputs will tell the shape (1, 64), and model.outputs will tell the shape (1, 10).

Standalone code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to Colab/Jupyter or any notebook):

    import tensorflow as tf

    class MyModel(tf.keras.Model):
        def __init__(self):
            super(MyModel, self).__init__()
            self.dense1 = tf.keras.layers.Dense(10)

        def call(self, inputs):
            output = self.dense1(inputs)
            return output

    model_keras = MyModel()
    input_spec = tf.TensorSpec([1, 64], tf.int32)
    model_keras._set_inputs(input_spec, training=False)
    # keras_input = tf.keras.Input([64], batch_size=1, dtype=tf.int32)
    # keras_output = model_keras(keras_input, training=False)
    # model_keras = tf.keras.Model(keras_input, keras_output)
    print(model_keras.inputs)
    print(model_keras.outputs)

Other info / logs:
Although I can set the inputs by adding an additional input layer in front of the Keras model, as in the commented-out source code above, I still want to know whether this is a bug.
tensorflowtensorflow
Check failed: cudnnSetRNNMatrixMathType(rnn_desc.get(), math_type) == CUDNN_STATUS_SUCCESS (3 vs. 0)
Bug
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a
TensorFlow installed from (source or binary): source
TensorFlow version (use command below): latest from git
Python version: 3.8.1
Bazel version (if compiling from source): 3.1.0
GCC/compiler version (if compiling from source): 9.3.0
CUDA/cuDNN version: 11 / 8.0.1
GPU model and memory: GeForce RTX 2070

Describe the current behavior:
The run errors out with this line when I run a Sequential model:

    F tensorflow/stream_executor/cuda/cuda_dnn.cc:1186] Check failed: cudnnSetRNNMatrixMathType(rnn_desc.get(), math_type) == CUDNN_STATUS_SUCCESS (3 vs. 0)

Standalone code to reproduce the issue:

    model = keras.Sequential()
    model.add(keras.layers.Bidirectional(keras.layers.LSTM(units=128),
                                         input_shape=(x_train.shape[1], x_train.shape[2])))
    model.add(keras.layers.Dropout(rate=0.2))
    model.add(keras.layers.Dense(units=1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    # the error happens at this line
    history = model.fit(x_train, y_train, epochs=30, batch_size=32,
                        validation_split=0.1, shuffle=False)
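Not part of the original report: cuDNN initialization failures on RTX-class GPUs are frequently mitigated by letting TensorFlow allocate GPU memory on demand rather than pre-allocating it all. This is only a commonly suggested first step to try, not a confirmed fix for this particular math-type check failure:

```shell
# Assumption: on-demand GPU memory allocation sometimes avoids cuDNN
# initialization failures; set this before launching the training script.
export TF_FORCE_GPU_ALLOW_GROWTH=true
```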
tensorflowtensorflow
Broken link on tutorial: images / Object Detection API
Bug
Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide.

URL(s) with the issue:
This link is broken (image). The correct link should be:
Is the link to the source code correct? Yes, this one:
Submit a pull request? No.
tensorflowtensorflow
TF-TRT CombinedNMS fails with TRT 6 in TF 1.15
Bug
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): 1.15
Python version: -
Bazel version (if compiling from source): -
GCC/compiler version (if compiling from source): -
CUDA/cuDNN version: CUDA 10.0, cuDNN 7.6.3
GPU model and memory: -

Describe the current behavior:
TensorRT 6 support was added by a change which included three parts: the header file, the fallback to FP16, and the CombinedNMS WAR (last output dim). However, in the 1.15 release only the header-file change was cherry-picked. As a result, the two other changes are missing, and TF-TRT CombinedNMS is no longer workable with TRT 6.

Describe the expected behavior:
The other two changes should also be cherry-picked.
tensorflowtensorflow
tf.keras.backend.repeat_elements does not support negative indexing on tensors with dynamic shape
Bug
System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 and Google Colab
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): v2.2.0-0-g2b96f3662b 2.2.0
Python version: 3.7
CUDA/cuDNN version: 10.1
GPU model and memory: Colab GPU

Describe the problem:
The implementation of repeat_elements behaves differently depending on whether the input tensor has a static or dynamic shape. For tensors with dynamic shape it does not accept negative indexing for the axis parameter; for tensors with static shape it does accept negative indexing for the axis parameter. TensorFlow follows standard Python indexing rules, and there is a workaround using positive indexing.

How to reproduce: run repeat_elements with axis=-1 and tf.config.experimental_run_functions_eagerly(False) and note the resulting array; run with tf.config.experimental_run_functions_eagerly(True) and note the resulting array; set axis=1 and repeat steps 1-4. Note that the input_signature parameter in tf.function is there to reproduce the scenario of a graph in which the tensor x has a dynamic shape.

Source code / logs:

    import tensorflow as tf
    import tensorflow.keras.backend as K
    import numpy as np

    @tf.function(input_signature=[tf.TensorSpec(shape=[None, None], dtype=tf.int32)])
    def f(x):
        x = K.repeat_elements(x, rep=3, axis=-1)
        return x

    tf.config.experimental_run_functions_eagerly(True)
    v = tf.Variable([[0, 1, 2, 3]])
    f(v)
    # array([[0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]], dtype=int32)

    @tf.function(input_signature=[tf.TensorSpec(shape=[None, None], dtype=tf.int32)])
    def f(x):
        x = K.repeat_elements(x, rep=3, axis=-1)
        return x

    tf.config.experimental_run_functions_eagerly(False)
    v = tf.Variable([[0, 1, 2, 3]])
    f(v)
    # array([[0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]], dtype=int32)
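The positive-indexing workaround mentioned above amounts to converting a negative axis to its positive equivalent before calling repeat_elements. A minimal sketch of that normalization, shown here with NumPy's repeat standing in for the TF op (the normalize_axis helper is an illustration I introduce here, not a TF API):

```python
import numpy as np

def normalize_axis(axis, rank):
    # Map a possibly negative axis to its positive equivalent,
    # following standard Python indexing rules.
    if not -rank <= axis < rank:
        raise ValueError(f"axis {axis} out of range for rank {rank}")
    return axis % rank

x = np.array([[0, 1, 2, 3]])
axis = normalize_axis(-1, x.ndim)      # -1 -> 1 for a rank-2 tensor
repeated = np.repeat(x, 3, axis=axis)  # [[0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]]
```

Passing the normalized (positive) axis to K.repeat_elements sidesteps the dynamic-shape issue the report describes.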
tensorflowtensorflow
tf.io.decode_image(img, channels=3) outputs 4 channels when reading a 4-channel BMP
Bug
Edit: attached a sample BMP file (rgb32.zip).

System information:
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
OS Platform and Distribution: Windows 10
TensorFlow installed from (source or binary): binary (pip)
TensorFlow version: 2.2.0
Python version: 3.7.7

Describe the current behavior:
When reading in a 4-channel BMP, tf.io.decode_image(img, channels=3) gives shape (..., 4) instead of (..., 3), and tf.io.decode_bmp(img, channels=3) gives the following error:

    Traceback (most recent call last):
      File "channels.py", line 44, in <module>
        loop()
      File "channels.py", line 14, in loop
        img = tf.io.decode_bmp(img, channels=3)
      File "C:\Users\mattchee\miniconda3\lib\site-packages\tensorflow\python\ops\gen_image_ops.py", line 899, in decode_bmp
        _ops.raise_from_not_ok_status(e, name)
      File "C:\Users\mattchee\miniconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 6653, in raise_from_not_ok_status
        six.raise_from(core._status_to_exception(e.code, message), None)
      File "<string>", line 3, in raise_from
    tensorflow.python.framework.errors_impl.InvalidArgumentError: channels attribute 3 does not match bits per pixel from file 4 [Op:DecodeBmp]

I'm following this guide (loading images efficiently with tf.data), so tf.keras.preprocessing.image.load_img(img_path, color_mode='rgb') is not an option.

Describe the expected behavior:
This is inconsistent with tf.io.decode_image(img, channels=3) and tf.io.decode_png(img, channels=3), which give shape (..., 3) when reading a 4-channel PNG. Both tf.io.decode_image(img, channels=3) and tf.io.decode_bmp(img, channels=3) would be expected to give shape (..., 3) when reading in a 4-channel BMP.

Standalone code to reproduce the issue:

    img = tf.io.read_file(img_path)
    img = tf.io.decode_image(img, channels=3)
    print(img.shape)  # prints (64, 127, 4)

or

    img = tf.io.read_file(img_path)
    img = tf.io.decode_bmp(img, channels=3)  # error
    print(img.shape)
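Until decode_image honors channels=3 for BMP input, one possible workaround is to decode with the default channels and slice off the alpha channel afterwards. The slicing step is plain array indexing, sketched here with a NumPy array standing in for the decoded tensor (shape taken from the report's printout):

```python
import numpy as np

# Stand-in for the 4-channel array decode_image currently returns for this BMP.
decoded = np.zeros((64, 127, 4), dtype=np.uint8)

# Workaround: keep only the first three channels, dropping alpha.
rgb = decoded[..., :3]
```

The same slice works on a TensorFlow tensor inside a tf.data pipeline, e.g. img[..., :3] after decoding.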