| repository | issue title | labels | body |
|---|---|---|---|
tensorflow/tensorflow | TF 1.15 distribute mode error | Bug | **System information**: Have I written custom code: no; OS platform and distribution: CentOS 7.6; TensorFlow installed from: binary; TensorFlow version: 1.15; Python version: 3.6; CUDA/cuDNN version: 10.0 / 7.6.2. **Code**:

```python
mirror_strategy = tf.distribute.MirroredStrategy()
with mirror_strategy.scope():
    model = test_net()
    sgd = tf.keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=1e-4)
    model.compile(optimizer=sgd, loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=256, epochs=10,
          validation_data=(x_test, y_test), verbose=2)
```

**The error information**: `No registered 'MultiDeviceIteratorGetNextFromShard' OpKernel for GPU devices compatible with node {{node MultiDeviceIteratorGetNextFromShard}}. Registered: device='CPU'` |
tensorflow/tensorflow | Cannot export a Keras model to SavedModel if a mixed precision policy is enabled | Bug | **System information**: Have I written custom code: no; OS platform and distribution: Linux Debian 9; TensorFlow installed from: binary; TensorFlow version: tested from 2.0 up to the nightly version; Python version: 3.5. **Describe the current behavior**: when the Keras mixed precision policy `mixed_float16` is in use, we can't save the Keras model in SavedModel format with `keras.Model.save` without a specific signature. It seems there is a mismatch between the input signature inferred by the model itself and the auto-cast inputs: `ValueError: Python inputs incompatible with input_signature: ... TensorSpec(shape=(None, None, None, None), dtype=tf.float32, name=None)`. We can use graph rewrite as the mixed precision training method to bypass this autocast issue, but graph rewrite does not work in some cases (e.g. training a subclassed model with `tf.GradientTape`), and thus is not recommended by the TensorFlow official guide. For flexibility, we do hope to use the mixed precision policy for mixed precision training, and directly exporting a mixed-precision-trained model to SavedModel for deployment is straightforward in a production pipeline. **Code to reproduce the issue** (this bug can be reproduced by saving the model from the official image classification training example with the mixed precision policy set):

```python
import logging
import os

from absl import app as absl_app
import tensorflow as tf
from official.vision.image_classification.resnet_model import resnet50


def main(argv):
    tf.compat.v1.enable_eager_execution()
    # Setting up the mixed precision policy enables the autocast
    # behavior in Keras layers.
    policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16',
                                                          loss_scale=128)
    tf.keras.mixed_precision.experimental.set_policy(policy)
    model = resnet50(1000)
    model_dir = '/tmp/save_model_test'
    if not os.path.isdir(model_dir):
        os.makedirs(model_dir)
    model.save(model_dir, save_format='tf')
    logging.info('Exported trained model to directory %s', model_dir)


if __name__ == '__main__':
    absl_app.run(main)
```
 |
tensorflow/tensorflow | Reading data from a TFRecordDataset throws a TensorShape error | Bug | **Describe the current behavior**: I'm trying to save/load a numpy dataset into a `TFRecordDataset` in TF 2.0 for training on TPU. Saving succeeds, but when reading the file back and passing the data through a model I get an error about the shape of the tensor. I compared the tensors resulting from reading the `TFRecordDataset` and they are equal. The dataset I get reading back from the file reader is a `MapDataset` instead of a `DatasetV1Adapter`. I have also opened an issue on Stack Overflow, here. **Describe the expected behavior**: reading the file should result in a dataset identical to the one that was written; running the dataset through a model should produce similar results. **Code to reproduce the issue**: a minimal reproducible example is available as a Python notebook, here. **System information** (`tf_env_collect.sh` output): Python 3.7.4 (CPython, built Sep 29 2019, compiled with Clang 10.0.1 (clang-1001.0.46.4)); OS: macOS 10.14.6, Darwin 18.7.0 x86_64 (Darwin Kernel Version 18.7.0, Tue Aug 20 16:57:14 PDT 2019; root:xnu-4903.271.2~2/RELEASE_X86_64); not in Docker; compiler: Apple LLVM version 10.0.1 (clang-1001.0.46.4), target x86_64-apple-darwin18.7.0, thread model posix, InstalledDir /Library/Developer/CommandLineTools/usr/bin; pip packages: numpy 1.17.2, protobuf 3.9.2, tensorflow 2.0.0, tensorflow-datasets 1.2.0, tensorflow-estimator 2.0.0, tensorflow-metadata 0.14.0; virtualenv: false; `tf.version.VERSION`: 2.0.0; `tf.version.GIT_VERSION`: v2.0.0-rc2-26-g64c3d382ca; `tf.version.COMPILER_VERSION`: 4.2.1 compatible Apple LLVM 10.0.0 (clang-1000.11.45.5). A pandas `UserWarning` appears on import ("Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError.") but is unrelated. `LD_LIBRARY_PATH` and `DYLD_LIBRARY_PATH` are unset; `nvidia-smi`: command not found; no CUDA libs; Bazel version: n/a. |
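The expectation in the report — serialize a dataset, read it back, and get identical values *and* shapes — can be illustrated without TensorFlow. The sketch below (hypothetical `encode`/`decode` helpers, not TFRecord's actual wire format) stores the shape alongside the raw float32 values, so a reader can restore it; when shape metadata is not stored, as with raw serialized tensors, the reader has to set the shape explicitly, which is one plausible source of TensorShape mismatches (an assumption, not a confirmed diagnosis of this issue):

```python
import struct

def encode(values, shape):
    """Pack a flat list of float32 values plus its shape into bytes."""
    header = struct.pack("<I", len(shape)) + struct.pack(f"<{len(shape)}I", *shape)
    return header + struct.pack(f"<{len(values)}f", *values)

def decode(blob):
    """Recover (values, shape) from bytes produced by encode()."""
    (ndim,) = struct.unpack_from("<I", blob, 0)
    shape = list(struct.unpack_from(f"<{ndim}I", blob, 4))
    count = (len(blob) - 4 - 4 * ndim) // 4
    values = list(struct.unpack_from(f"<{count}f", blob, 4 + 4 * ndim))
    return values, shape

vals, shp = decode(encode([1.0, 2.0, 3.0, 4.0], (2, 2)))
print(vals, shp)  # [1.0, 2.0, 3.0, 4.0] [2, 2]
```

Round-tripping through this format preserves both the data and the shape, which is the property the report says is lost after reading the TFRecord back.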
tensorflow/tensorflow | TF 2.0: a string-dtype dataset from MirroredStrategy.experimental_distribute_dataset raises RuntimeError when using GPU | Bug | **System information**: Have I written custom code: yes; OS platform and distribution: CentOS Linux release 7.4; mobile device: n/a; TensorFlow installed from: `pip install tf-nightly-gpu`; TensorFlow version: v1.12.1-20829-ga3bf777 2.1.0-dev20191218; Python version: 3.6; CUDA/cuDNN version: cudatoolkit 10.0.130, cudnn 7.6.4. **Describe the current behavior**: when iterating a dataset which is returned by `MirroredStrategy.experimental_distribute_dataset` and contains `tf.dtypes.string` elements, on GPU a `RuntimeError` is raised after the last iteration, saying `Can't copy Tensor with type string to device /job:localhost/replica:0/task:0/device:GPU:0.` When changing to `OneDeviceStrategy`, everything is fine. **Describe the expected behavior**: iteration over the dataset should end successfully, no matter which kind of distribute strategy is used and no matter what dtype its elements have. **Code to reproduce the issue**:

```python
import tensorflow as tf

distribute_strategy = tf.distribute.MirroredStrategy(['gpu:0'])
# OneDeviceStrategy is fine:
# distribute_strategy = tf.distribute.OneDeviceStrategy('gpu:0')
ds = tf.data.Dataset.from_tensor_slices(['a', 'c', 'f', 'd', 'a', 'b', 'a'])
ds = ds.batch(1)
ds = distribute_strategy.experimental_distribute_dataset(ds)
for i, inputs in enumerate(ds):
    print('step {}: input {}'.format(i, inputs))
```

**Other info / logs** (CUDA library loading and XLA service lines trimmed):

```
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:83:00.0 name: TITAN X (Pascal) computeCapability: 6.1
coreClock: 1.531GHz coreCount: 28 deviceMemorySize: 11.91GiB deviceMemoryBandwidth: 447.48GiB/s
...
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device
(/job:localhost/replica:0/task:0/device:GPU:0 with 11448 MB memory) -> physical GPU
(device: 0, name: TITAN X (Pascal), pci bus id: 0000:83:00.0, compute capability: 6.1)
step 0: input ...
step 1: input ...
step 2: input ...
Traceback (most recent call last):
  File "test.py", line 46, in <module>
    for i, inputs in enumerate(ds):
  File ".../site-packages/tensorflow_core/python/distribute/input_lib.py", line 249, in __next__
    return self.get_next()
  File ".../site-packages/tensorflow_core/python/distribute/input_lib.py", line 281, in get_next
    global_has_value, replicas = _get_next_as_optional(self, self._strategy)
  File ".../site-packages/tensorflow_core/python/distribute/input_lib.py", line 177, in _get_next_as_optional
    iterator._iterators[i].get_next_as_list(new_name))  # pylint: disable=protected-access
  File ".../site-packages/tensorflow_core/python/distribute/input_lib.py", line 905, in get_next_as_list
    strict=True,
  File ".../site-packages/tensorflow_core/python/distribute/input_lib.py", line 904, in <lambda>
    lambda: _dummy_tensor_fn(data.value_structure),
  File ".../site-packages/tensorflow_core/python/distribute/input_lib.py", line 818, in _dummy_tensor_fn
    return nest.map_structure(create_dummy_tensor, value_structure)
  File ".../site-packages/tensorflow_core/python/distribute/input_lib.py", line 808, in create_dummy_tensor
    dummy_tensor = array_ops.zeros(tensor_shape.TensorShape(dims), feature_type)
  File ".../site-packages/tensorflow_core/python/ops/array_ops.py", line 2716, in zeros
    output = fill(shape, constant(zero, dtype=dtype), name=name)
  File ".../site-packages/tensorflow_core/python/framework/constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
RuntimeError: Can't copy Tensor with type string to device /job:localhost/replica:0/task:0/device:GPU:0.
```
 |
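The traceback shows the error fires in `create_dummy_tensor`, i.e. when the distributed iterator fabricates a placeholder tensor (via `tf.zeros`) after the real data runs out — and a string tensor cannot be placed on GPU. A possible workaround (an assumption, not verified against this report) is to pass `drop_remainder=True` to `Dataset.batch` so no partial final batch needs padding. In plain Python, drop-remainder batching looks like this:

```python
def batch(items, batch_size, drop_remainder=False):
    """Group items into consecutive batches, optionally dropping a short final batch."""
    batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

print(batch(list("acfdaba"), 2))                       # [['a','c'], ['f','d'], ['a','b'], ['a']]
print(batch(list("acfdaba"), 2, drop_remainder=True))  # [['a','c'], ['f','d'], ['a','b']]
```

The short trailing batch `['a']` is exactly the case that forces the strategy to build a dummy tensor; dropping it sidesteps the string-on-GPU copy entirely, at the cost of discarding up to `batch_size - 1` elements.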
tensorflow/tensorflow | TensorFlow Lite GPU output is corrupted when using the OpenCL backend | Bug | **System information**: Have I written custom code: no; OS platform and distribution: Android 9 / Ubuntu 16.04; mobile device: OnePlus 3 (GPU: Adreno 530); TensorFlow installed from: binary (tensorflow-lite-gpu 1.15 and tensorflow-lite-gpu 0.0.0-nightly); TensorFlow version: 1.14 / 1.15. **Describe the current behavior**: I'm using the TFLite GPU delegate in my Android application for semantic segmentation with my TFLite model (all ops are GPU-supported). I am able to get proper output with the CPU version and with tensorflow-lite-gpu 1.14, but when I use the nightly or 1.15, it loads the OpenCL backend and gives corrupted output. This backend seems to take a long time to start up (5–10 s); however, it seems to be faster than the corresponding OpenGL version. When I run the model to get an image output (float, 0–1), there are random rectangular blanks within the output. The input is 256x256x3 float and the output is 256x256x1 float. However, I am not facing this issue with a different model with input size 128, even though I use the same backend. **Describe the expected behavior**: the TFLite model should produce correct output with the OpenCL backend, like the OpenGL version, regardless of input size. **Other info / logs**: I get correct output for the model with 128 input size regardless of backend and device, but for the model with 256 input size I am not getting proper output with the OpenCL backend (gpu-nightly and gpu 1.15). Model attached (model.zip). Only the OpenCL GPU delegate with this 256-input-sized model produces this corrupted output; the other variants (CPU, OpenGL GPU, the 128-input model with OpenCL, etc.) seem to produce correct results without the rectangular blanks. tmap54 |
tensorflow/tensorflow | No clear document explaining how to use the pretrained models | Bug | **System information**: Have I written custom code: yes; OS platform and distribution: Linux Ubuntu 16.04 (in Docker); TensorFlow installed from: binary (pip); TensorFlow version: v2.0.0-rc2-26-g64c3d38; Python version: 3.5; CUDA/cuDNN version: 10.0 / 7; GPU model and memory: GTX 1080 Ti, 11175 MiB. **Describe the current behavior**: Hi authors and developers, I noticed that TensorFlow doesn't provide a clear document explaining how to use the pretrained models, so I wrote a benchmark which shows the accuracy of the pretrained models on the ImageNet validation set. The results:

- pixel values in [0, 255]: ResNet50 loss 2.711 / accuracy 0.457; DenseNet121 loss 39.000 / accuracy 0.006; MobileNetV2 loss 9.979 / accuracy 0.003
- pixel values in [0, 1]: ResNet50 loss 8.535 / accuracy 0.001; DenseNet121 loss 1.895 / accuracy 0.599; MobileNetV2 loss 2.283 / accuracy 0.523
- pixel values standardized (mean/std normalization, roughly [-1, 1]): ResNet50 loss 8.313 / accuracy 0.001; DenseNet121 loss 1.896 / accuracy 0.599; MobileNetV2 loss 2.287 / accuracy 0.524

First, we can see the accuracy is not comparable with the original results (top-1 accuracy is 70%+). I think this is because I'm not sure which crop and pad method was applied in the original results; I defined a custom function `center_crop` to fit each model's input size, but we can skip this issue here. What I want to highlight is the normalization issue. If I don't apply any normalization (`run_aug=1` in the code, pixel values in [0, 255]), all models' accuracy is near 0.001, except for ResNet50, which achieves a meaningful accuracy. If I normalize (`run_aug=2`, pixel values in [0, 1]), this time DenseNet121 and MobileNetV2 have a meaningful accuracy. If I do standard normalization (`run_aug=3`), the results are similar to the previous case, but I'm not sure why those two cases have the same accuracy. This behavior leaves me confused about which normalization method should be applied before using a pretrained model. After reading the source code, I found that those applications are imported from keras_applications (the tensorflow.keras.applications weights are downloaded). I didn't test other models such as ResNet50V2, InceptionV3 and Xception because their input size is 299 instead of 224, and this is a time-consuming task; however, anyone can modify the test case and run the benchmark. Because of the ImageNet license, I can't provide ImageNet publicly, but the following is the minimal test case:

```python
# pip install tensorflow-gpu==1.14.0 pandas
import os
import time
import numpy as np
import pandas as pd
import tensorflow as tf
from glob import glob

# input image dimensions
img_h, img_w, channel = 224, 224, 3
# information for the dataset
dataset_path = 'dataset/imagenet'
num_class = 1000
num_testing = 50000


class DataGenerator:
    def __init__(self, dataframe, batch_size, run_aug=1):
        self.total_len = len(dataframe.index)
        self.batch_size = batch_size
        self.run_aug = run_aug
        self.dataframe = dataframe
        self.on_epoch_end()

    def build_pipeline(self, file_path, labely):
        # mapping function in tf.data
        def preprocess_fn(file_path, labely):
            def fn_x(img_array):
                image = img_array.numpy()
                if self.run_aug == 2:  # pixel range [0, 1]
                    image = image / 255.0
                if self.run_aug == 3:  # standard (mean/std) normalization
                    image = image / 255.0
                    image[..., 0] = (image[..., 0] - 0.485) / 0.229
                    image[..., 1] = (image[..., 1] - 0.456) / 0.224
                    image[..., 2] = (image[..., 2] - 0.406) / 0.225
                return image           # run_aug == 1: pixel range [0, 255]

            def fn_y(label):
                return tf.keras.utils.to_categorical(label, num_class)

            # read the image from file
            image = tf.io.read_file(file_path)
            image = tf.image.decode_image(image, channels=channel)
            aug_size = 256
            imagex = tf.compat.v1.image.resize_image_with_pad(image, aug_size, aug_size)
            imagex = tf.image.resize_with_crop_or_pad(imagex, img_h, img_w)
            # do normalization
            imagex = tf.py_function(fn_x, [imagex], tf.float32)
            imagex.set_shape((img_h, img_w, channel))
            imagex = tf.image.random_flip_left_right(imagex)
            labely = tf.py_function(fn_y, [labely], tf.float32)
            labely.set_shape((num_class,))
            return imagex, labely

        dataset = tf.data.Dataset.from_tensor_slices((file_path, labely))
        dataset = dataset.shuffle(self.batch_size * 8).repeat()
        dataset = dataset.map(preprocess_fn,
                              num_parallel_calls=tf.data.experimental.AUTOTUNE)
        dataset = dataset.batch(self.batch_size)
        dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
        self.dataset = dataset

    def __len__(self):
        return self.total_len // self.batch_size

    def on_epoch_end(self):
        cleanx = np.array(self.dataframe['file'])
        totaly = np.array(self.dataframe['one_hot'])
        # run a permutation
        rand_idx = np.random.permutation(self.total_len)
        self.build_pipeline(cleanx[rand_idx], totaly[rand_idx])


def build_clf(model_name):
    if model_name == 'resnet50':
        clf_model = tf.keras.applications.ResNet50(include_top=True, pooling='max',
                                                   weights='imagenet')
    if model_name == 'densenet121':
        clf_model = tf.keras.applications.DenseNet121(include_top=True, pooling='max',
                                                      weights='imagenet')
    if model_name == 'mobilenetv2':
        clf_model = tf.keras.applications.MobileNetV2(include_top=True, pooling='max',
                                                      weights='imagenet')
    if model_name == 'inceptionv3':
        clf_model = tf.keras.applications.InceptionV3(include_top=True,
                                                      weights='imagenet')
    clf_model.compile(loss='categorical_crossentropy', optimizer='adam',
                      metrics=['accuracy'])
    return clf_model


def list_testing_data(class_file_path, onehot_map):
    try:
        testing_data = pd.read_pickle('imagenet_test_list.pkl')
        print('successful: testing data loaded from pickle')
    except Exception:
        # test image info
        testing_image_info = []
        for iter_class in os.listdir(class_file_path):
            files = glob(os.path.join(class_file_path, iter_class, '*.JPEG'))
            for iter_img in files:
                testing_image_info.append((iter_img, iter_class))
        testing_data = pd.DataFrame(testing_image_info, columns=['file', 'class'])
        testing_data['one_hot'] = testing_data['class'].replace(onehot_map, inplace=False)
        testing_data.to_pickle('imagenet_test_list.pkl')
    assert testing_data.shape[0] == num_testing, 'fatal: mismatched total length of testing data'
    return testing_data


if __name__ == '__main__':
    # set the GPU
    if os.environ.get('CUDA_VISIBLE_DEVICES') is None:
        os.environ['CUDA_VISIBLE_DEVICES'] = '0'
    # hyperparameters
    batch_size = 100
    # load the one-hot labels
    file_path = os.path.join(dataset_path, 'val')
    list_class = sorted(set(os.listdir(file_path)))
    onehot_map = dict(zip(list_class, range(num_class)))
    # the validation data are treated as testing data
    testing_data = list_testing_data(file_path, onehot_map)
    # build the data generators and run the benchmark
    model_list = ['resnet50', 'densenet121', 'mobilenetv2']
    descriptions = ['from [0, 255]', 'from [0, 1]', 'normalized from [-1, 1]']
    for run_aug, desc in enumerate(descriptions, start=1):
        test_gen = DataGenerator(testing_data, batch_size, run_aug=run_aug)
        for model_name in model_list:
            model = build_clf(model_name)
            output = model.evaluate(test_gen.dataset, steps=len(test_gen))
            meta = 'testing: pixel values are {}, model {}:'.format(desc, model_name)
            for ii, metric in enumerate(model.metrics_names):
                meta += ' {} {:.3f}'.format(metric, output[ii])
            print(meta)
```
 |
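The three pixel ranges the benchmark sweeps over correspond to the preprocessing "modes" used by keras_applications' `preprocess_input`; which model expects which mode should be checked against the Keras source, so the mode names below are illustrative assumptions, not the library's API. A pure-Python per-pixel sketch of the candidate normalizations:

```python
# Commonly quoted per-channel ImageNet RGB statistics (assumed values):
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb, mode):
    """Map one 8-bit RGB pixel into the range a given pretrained model expects."""
    if mode == "raw":      # [0, 255], no normalization
        return tuple(float(c) for c in rgb)
    if mode == "scale":    # [0, 1]
        return tuple(c / 255.0 for c in rgb)
    if mode == "torch":    # [0, 1], then per-channel standardization
        return tuple((c / 255.0 - m) / s
                     for c, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD))
    if mode == "tf":       # [-1, 1]
        return tuple(c / 127.5 - 1.0 for c in rgb)
    raise ValueError(mode)

print(normalize_pixel((255, 0, 255), "tf"))  # (1.0, -1.0, 1.0)
```

Note the report's `run_aug=3` branch is the "torch"-style mean/std standardization; the "tf" mode (scale to [-1, 1]) is yet another convention, which is part of why the benchmark's three ranges cannot all match every model's expected input.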
tensorflow/tensorflow | Dataset scan loses variable modification | Bug | **System information**: Have I written custom code: yes (source provided); OS platform and distribution: macOS 10.15.2 (most likely irrelevant); TensorFlow installed from: binary, from pip; TensorFlow version: v1.12.1-21171-g9798f84fa9 2.1.0-dev20191221, installed via `pip install tf-nightly==2.1.0.dev20191221`; Python version: 3.7.2; CUDA/cuDNN version: using CPU only. **Describe the current behavior**: while writing a unit test, I created a function that iterates a tf.data Dataset and accumulates the values in a local variable. This works fine in eager mode, but then I noticed that the returned result is zero when using `tf.function`. I've produced a small, simple piece of code that reproduces the problem. In particular, returning the accumulator variable produces a result of 0, but accessing the variable directly works fine. Also, using `tf.print` on the accumulator while iterating the dataset shows the correct values, but printing it after the iteration (still within the method) shows 0, suggesting perhaps some kind of scoping problem. Please see the attached source to understand exactly what I mean. **Describe the expected behavior**: the result should be the same in eager mode and with `tf.function`; also, when using `tf.function`, the result should be the same when returning the variable and when accessing it directly. **Code to reproduce the issue**: tf_function_variable.py.txt (attached). |
tensorflow/tensorflow | Keras backend functions not working as intended | Bug | **System information**: Have I written custom code: yes; OS platform and distribution: 4.4.0-18362-Microsoft; TensorFlow installed from: the Anaconda default source; TensorFlow version: 1.15; Python version: 3.7.5; CUDA/cuDNN version: 10.0; GPU model and memory: GeForce RTX 2060. **Describe the current behavior**: I'm trying to implement a custom loss function based on a custom accuracy function that I'm already using to evaluate my model's predictions on the test dataset. The conversion can't be 1:1, because I use numpy's greater-and-equal function, which is not differentiable. I therefore created custom functions that approximate the latter, but their behavior has some problems. **Describe the expected behavior**: I can test whether everything is fine by comparing the results obtained from my original custom accuracy function and the new loss function given the same input. My inputs are TensorFlow predictions; I just wrap them inside `K.constant` to convert them to tensors. What I noticed is that this line of code is problematic:

```python
eps = sys.float_info.epsilon
return 0.5 * (y + 5 + K.sqrt(K.pow(y - 5, 2) + eps))
```

In particular, `y` is an array of float32 values in the [1, 10] range, and the returned array (call it `ret`) should have `ret[i] = max(5, y[i])`, but sometimes the value 5 becomes 4.9999995 instead. The next portion of my code is based on how many 5s are present, and therefore I can't ignore this problem. Say a problematic index is `w`, so that `ret[w] == 4.9999995` instead of 5. If I use the same code with `y` now equal to only `y[w]`, the returned array is correctly 5. This means that somehow, if `y` is a batch of predictions and not just one, something isn't working. This should not be the case, because both `K.sqrt` and `K.pow` work elementwise; it should not matter whether `y` is an array of one or of multiple values. Out of almost 20k predictions, around 1k have this same problem, and it is deterministic (always the same ones are problematic). I also tried

```python
eps = sys.float_info.epsilon
return 0.5 * (y + 5 + np.sqrt(np.power(y - 5, 2) + eps))
```

and the problem is gone; therefore it is related to the Keras backend. Last info: I also tried

```python
eps = sys.float_info.epsilon
return tf.math.ceil(0.5 * (y + 5 + K.sqrt(K.pow(y - 5, 2) + eps)))
```

but this completely ruins the returned values; sometimes real numbers such as 4.5 are rounded to 6 instead of 5. If more information is needed, I can provide it. |
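The problematic expression is a smooth approximation of `max(y, 5)`, built on the identity `max(a, b) = (a + b + |a - b|) / 2` with `|t|` replaced by `sqrt(t*t + eps)` to keep it differentiable. A pure-Python sketch of that intent (the function name and the `threshold` parameter are mine, for illustration):

```python
import math
import sys

def smooth_max(y, threshold=5.0, eps=sys.float_info.epsilon):
    """Differentiable approximation of max(y, threshold):
    max(a, b) == (a + b + |a - b|) / 2, with |t| ~= sqrt(t*t + eps)."""
    return 0.5 * (y + threshold + math.sqrt((y - threshold) ** 2 + eps))

print(smooth_max(3.0))  # 5.0 (values below the threshold clamp up to it)
print(smooth_max(8.0))  # 8.0 (values above pass through)
```

In float64 the approximation is exact to machine precision, which is consistent with the report's numpy version behaving correctly; in float32 (the Keras backend default) the `sqrt` can round down by one ulp, which is exactly the kind of 4.9999995-instead-of-5 artifact the report describes.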
tensorflow/tensorflow | Error: cannot convert 'auto' to EagerTensor of dtype float | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source; to get involved, read the documentation contributor guide. **Description of issue**: I intend to build a custom loss function as follows:

```python
from __future__ import absolute_import, division, print_function, unicode_literals
import functools
import numpy as np
import tensorflow as tf


class GeneralDiceLoss(tf.keras.losses.Loss):
    def __init__(self, reduction=tf.keras.losses.Reduction.AUTO,
                 name='general_dice_loss'):
        super(GeneralDiceLoss, self).__init__(reduction=reduction, name=name)
        self.epsilon = 1e-16

    def get_config(self):
        config = super(GeneralDiceLoss, self).get_config()
        return config

    def call(self, ypred, ytrue):
        ytrue = tf.dtypes.cast(ytrue, dtype=ypred.dtype)
        # dot product of ypred and ytrue, summed over each datum and class
        crossprod = tf.multiply(ypred, ytrue)
        crossprodsum = tf.math.reduce_sum(crossprod, axis=np.arange(2, ytrue.ndim))
        # calculate the weight for each datum and class
        weights = tf.math.reduce_sum(ytrue, axis=np.arange(2, ytrue.ndim))
        weights = tf.math.divide(1, tf.math.square(weights) + self.epsilon)
        # weighted sum over classes (numerator)
        numerator = 2 * tf.math.reduce_sum(tf.multiply(crossprodsum, weights), axis=1)
        # squared summation
        yysum = tf.math.reduce_sum(tf.math.square(ypred) + tf.math.square(ytrue),
                                   axis=np.arange(2, ytrue.ndim))
        # weighted sum over classes (denominator)
        denominator = tf.math.reduce_sum(tf.multiply(weights, yysum), axis=1)
        loss = tf.math.reduce_mean(
            1 - tf.math.divide(numerator, denominator + self.epsilon))
        return loss
```

Then I created variables to test it:

```python
loss_fn = GeneralDiceLoss()
ypred = tf.random.uniform(shape=(16, 3, 4, 4, 4))
ytrue = tf.round(tf.random.uniform(shape=(16, 3, 4, 4, 4)))
loss = loss_fn(ypred, ytrue)
```

But I get an error:

```
File "keras-gpu/lib/site-packages/tensorflow_core/python/framework/constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
TypeError: Cannot convert 'auto' to EagerTensor of dtype float
```

In the docs referenced above: (1) there is no clear indication or warning about this conversion issue, not to mention there is no dtype conversion in my code at all; (2) there is no clear example indicating which option, `AUTO` or `SUM_OVER_BATCH_SIZE`, should be adopted when one's minibatch size is greater than 1. In my case, assuming my batch is 16 (as exhibited in `ypred` and `ytrue` above), shall I use `loss = 1 - tf.math.divide(numerator, denominator + self.epsilon)` or `loss = tf.math.reduce_mean(1 - tf.math.divide(numerator, denominator + self.epsilon))`, and for which option? Building up a custom layer/loss function is already a tough task for many practitioners, so could the docs provide more detailed explanations and examples to make users' lives a little bit easier? Many thanks. |
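The loss above follows the generalized Dice formulation: per-class weights `w_c = 1 / ((sum of ytrue)^2 + eps)`, then a weighted ratio of intersection to squared sums. A minimal pure-Python version for flat per-class masks (function name and list-of-lists layout are mine, independent of TensorFlow) is handy for sanity-checking the values a correct implementation should produce:

```python
def generalized_dice_loss(ypred, ytrue, epsilon=1e-16):
    """ypred, ytrue: lists of classes, each a flat list of values for one sample."""
    num = 0.0
    den = 0.0
    for pred_c, true_c in zip(ypred, ytrue):
        inter = sum(p * t for p, t in zip(pred_c, true_c))
        w = 1.0 / (sum(true_c) ** 2 + epsilon)  # weight inversely to class volume
        num += w * inter
        den += w * sum(p * p + t * t for p, t in zip(pred_c, true_c))
    return 1.0 - 2.0 * num / (den + epsilon)

# A perfect binary prediction drives the loss toward 0:
print(generalized_dice_loss([[1, 0, 1]], [[1, 0, 1]]))
```

Checking boundary cases like these (perfect match near 0, disjoint masks near 1) is a quick way to decide between the `AUTO` and `SUM_OVER_BATCH_SIZE` reduction questions empirically, since the per-sample values are known in advance.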
tensorflow/tensorflow | tf.math.sigmoid precision issue on GPU | Bug | **System information**: Have I written custom code: yes; OS platform and distribution: Ubuntu 16.04; TensorFlow installed from: binary; TensorFlow version: 2.1.0-dev20191219; Python version: 3.6.8; CUDA/cuDNN version: CUDA 10.1, cuDNN 7.6.3; GPU model and memory: GTX 1060, 6 GB. **Describe the current behavior**: we compared TensorFlow versions 2.1.0-dev20191203 and 2.1.0-dev20191219 and found some precision differences when using `tf.math.sigmoid`. Is that expected, and what is the related commit? Some results improved (see the last section), but we also found inconsistent values on GPU when the tensor size changes. **Describe the expected behavior**: the sigmoid result should not depend on the tensor size. **Code to reproduce the issue**: on GPU, going from 3 to 4 elements changes the result:

```python
tf.sigmoid([-34.0, 0.0, 0.0])
tf.sigmoid([-34.0, 0.0, 0.0, 0.0])
```

**Other info / logs**: here is an example of improved precision between 2.1.0-dev20191203 and 2.1.0-dev20191219: `tf.sigmoid(-20.0)`. |
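Independent of TensorFlow, the reference value in the tail can be checked with a numerically stable scalar sigmoid; whether a framework evaluates it elementwise like this or through a size-dependent vectorized kernel (a plausible cause of the reported size sensitivity, stated as an assumption) is exactly what the report is probing:

```python
import math

def sigmoid(x):
    """Numerically stable logistic function: keeps exp()'s argument non-positive
    so it never overflows for large |x|."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

print(sigmoid(0.0))    # 0.5
print(sigmoid(-34.0))  # ~1.7e-15: tiny but representable, so it should not collapse to 0
```

A correct elementwise kernel would give the same value for `sigmoid(-34.0)` regardless of how many other elements share the tensor, which is the invariant the "3 vs 4 elements" repro violates.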
tensorflowtensorflow | A possible bug in ConvRNN2D.__call__ | Bug | Referring to ConvRNN2D.__call__ (L294-L341): L308 (kwargs['initial_state'] = initial_state) and L317 (kwargs['constants'] = constants) should be added in the else block at L340. The current situation contradicts the use of full_input at L337. I am not sure if we can simply replicate the code from its parent, RNN.__call__ (L640-L700). I went ahead and tried it, but after loading the saved model weights in a completely new Python session, the results on validation data don't match at all. I would appreciate a quick fix (for a local edit, at least). Thanks!
tensorflowtensorflow | Training fails when a multi-output Keras model has one output without a loss function | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, see minimal example. OS platform and distribution: Ubuntu 18.04.3 LTS. Mobile device: n/a. TensorFlow installed from: binary (specifically the tensorflow/tensorflow:nightly-py3 Docker image). TensorFlow version: 2.1.0-dev20191216. Python version: 3.6.9. Bazel version: n/a. GCC/compiler version: n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: a multi-output Keras model compiled so that one output doesn't have a loss function raises an exception when calling fit. Describe the expected behavior: training should minimise the losses defined for the other output(s). Code to reproduce the issue:

    import numpy as np
    import tensorflow as tf
    import tensorflow.keras as keras

    input_a = keras.layers.Input(shape=(10,), name='input_a')
    input_b = keras.layers.Input(shape=(20,), name='input_b')
    output_a = keras.layers.Dense(1, name='output_a')(input_a)
    output_b = keras.layers.Dense(1, name='output_b')(input_b)
    model = keras.Model(inputs=[input_a, input_b], outputs=[output_a, output_b])
    model.compile(optimizer='sgd', loss={'output_a': None, 'output_b': 'mse'})

    n = 128
    input_a = np.ones((n, 10))
    input_b = np.ones((n, 20))
    output_a = np.ones((n, 1))
    output_b = np.ones((n, 1))
    dataset = tf.data.Dataset.from_tensor_slices(
        ((input_a, input_b), (output_a, output_b))).batch(64)
    model.fit(dataset)

This raises:

    ValueError: Error when checking model target: the list of Numpy arrays that you are passing
    to your model is not the size the model expected. Expected to see 1 array(s), for inputs
    ['output_b'] but instead got the following list of 2 arrays
tensorflowtensorflow | TensorFlow 2.0 tf.linalg.normalize yields NaN | Bug | tf.linalg.normalize(np.zeros((10, 4)), ord=1, axis=1) yields NaNs, as below. I know this is caused by division by 0, so in the future TensorFlow should make this operation more numerically stable.
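A common workaround for this class of problem is to guard the division so that zero-norm rows stay zero instead of becoming NaN. Below is a plain-Python sketch of the idea (illustrative only; TensorFlow users would typically reach for `tf.math.divide_no_nan` or add an epsilon to the norm):

```python
def safe_l1_normalize(rows, eps=1e-12):
    # Normalize each row by its L1 norm; a row whose norm is (near) zero is
    # returned unchanged (all zeros) instead of producing NaNs.
    out = []
    for row in rows:
        norm = sum(abs(v) for v in row)
        denom = norm if norm > eps else 1.0  # guard against division by zero
        out.append([v / denom for v in row])
    return out

print(safe_l1_normalize([[0.0, 0.0], [1.0, 3.0]]))
# [[0.0, 0.0], [0.25, 0.75]]
```

The zero row passes through untouched; only rows with a meaningful norm are rescaled.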
tensorflowtensorflow | Converting to TFLite: Invalid quantization params for op MAXIMUM at index 4 in subgraph 0 | Bug | System information: OS platform and distribution: Ubuntu 18.04. TensorFlow installed from: binary. TensorFlow version (or github SHA if from source): 1.15.0 (GPU). Command used to run the converter (Python API):

    import os
    import numpy as np
    import tensorflow as tf
    from PIL import Image

    dataset = []
    directory_image_data = 'test_directory'
    directory_save = 'saved_model'
    for img in os.listdir(directory_image_data):
        image = Image.open(os.path.join(directory_image_data, img))
        data = np.asarray(image, dtype=np.float32)[np.newaxis]
        dataset.append(data)

    def representative_dataset_gen():
        for input_value in dataset:
            yield [input_value]

    converter = tf.lite.TFLiteConverter.from_saved_model(directory_save)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset_gen
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    tflite_model = converter.convert()
    name = os.path.join(directory_save, 'tflite_model')
    open(name + '.tflite', 'wb').write(tflite_model)

The output from the converter invocation:

    Traceback (most recent call last):
      File "saved2lite.py", line 34, in <module>
        tflite_model = converter.convert()
      File "/home/ds017/.pyenv/versions/takehome/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 993, in convert
        inference_output_type)
      File "/home/ds017/.pyenv/versions/takehome/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 239, in _calibrate_quantize_model
        inference_output_type, allow_float)
      File "/home/ds017/.pyenv/versions/takehome/lib/python3.6/site-packages/tensorflow_core/lite/python/optimize/calibrator.py", line 78, in calibrate_and_quantize
        np.dtype(output_type.as_numpy_dtype()).num, allow_float)
      File "/home/ds017/.pyenv/versions/takehome/lib/python3.6/site-packages/tensorflow_core/lite/python/optimize/tensorflow_lite_wrap_calibration_wrapper.py", line 115, in QuantizeModel
        return _tensorflow_lite_wrap_calibration_wrapper.CalibrationWrapper_QuantizeModel(self, input_py_type, output_py_type, allow_float)
    RuntimeError: Invalid quantization params for op MAXIMUM at index 4 in subgraph 0

Also, please include a link to the saved model or GraphDef. Failure details: conversion fails. Any other info / logs (full log):

    2019-12-19 11:49:10.789835: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
    2019-12-19 11:49:10.810583: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-12-19 11:49:10.811068: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.392 pciBusID: 0000:01:00.0
    2019-12-19 11:49:10.811245: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
    (further "Successfully opened dynamic library" messages for libcublas, libcufft, libcurand, libcusolver, libcusparse and libcudnn, and repeated NUMA-node messages, omitted here and below)
    2019-12-19 11:49:10.819072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
    2019-12-19 11:49:10.819465: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2019-12-19 11:49:10.843971: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55724cd4c080 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2019-12-19 11:49:10.844031: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
    2019-12-19 11:49:10.927095: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55724cdae1a0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
    2019-12-19 11:49:10.927112: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 1050 Ti, Compute Capability 6.1
    2019-12-19 11:49:10.930708: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2934 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
    WARNING:tensorflow: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version. Instructions for updating: This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in TensorFlow 2.0.
    2019-12-19 11:49:16.750076: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
    2019-12-19 11:49:16.750135: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
    2019-12-19 11:49:17.182263: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:786] Optimization results for grappler item: graph_to_optimize
    2019-12-19 11:49:17.182289: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: function_optimizer did nothing. time = 0.003ms.
    2019-12-19 11:49:17.182737: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: function_optimizer did nothing. time = 0ms.
    WARNING:tensorflow: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.convert_variables_to_constants
    WARNING:tensorflow: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version. Instructions for updating: Use tf.compat.v1.graph_util.extract_sub_graph
    (a second grappler session then repeats the device-discovery messages above)
    2019-12-19 11:49:20.750755: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:786] Optimization results for grappler item: graph_to_optimize
    2019-12-19 11:49:20.750831: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   constant_folding: Graph size after: 1230 nodes (-2206), 1329 edges (-2360), time = 1332.40198ms.
    2019-12-19 11:49:20.750850: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   constant_folding: Graph size after: 1230 nodes (0), 1329 edges (0), time = 370.552ms.
    Traceback (most recent call last):
      File "saved2lite.py", line 34, in <module>
        tflite_model = converter.convert()
      File "/home/ds017/.pyenv/versions/takehome/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 993, in convert
        inference_output_type)
      File "/home/ds017/.pyenv/versions/takehome/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 239, in _calibrate_quantize_model
        inference_output_type, allow_float)
      File "/home/ds017/.pyenv/versions/takehome/lib/python3.6/site-packages/tensorflow_core/lite/python/optimize/calibrator.py", line 78, in calibrate_and_quantize
        np.dtype(output_type.as_numpy_dtype()).num, allow_float)
      File "/home/ds017/.pyenv/versions/takehome/lib/python3.6/site-packages/tensorflow_core/lite/python/optimize/tensorflow_lite_wrap_calibration_wrapper.py", line 115, in QuantizeModel
        return _tensorflow_lite_wrap_calibration_wrapper.CalibrationWrapper_QuantizeModel(self, input_py_type, output_py_type, allow_float)
    RuntimeError: Invalid quantization params for op MAXIMUM at index 4 in subgraph 0
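For context on what "quantization params" means in the error above: full-integer quantization maps each float tensor to uint8 via a scale and zero point derived from the tensor's observed min/max, which is what the representative dataset calibrates. The error suggests one of the tensors feeding the MAXIMUM op ended up without valid parameters. Below is a generic affine-quantization sketch in plain Python (illustrative of the scheme, not TFLite's implementation):

```python
def affine_quant_params(min_val, max_val, num_bits=8):
    # Derive (scale, zero_point) mapping [min_val, max_val] onto [0, 255],
    # extending the range to contain 0 so zero is exactly representable.
    qmin, qmax = 0, 2 ** num_bits - 1
    min_val = min(min_val, 0.0)
    max_val = max(max_val, 0.0)
    scale = (max_val - min_val) / (qmax - qmin)
    zero_point = qmin if scale == 0 else round(qmin - min_val / scale)
    zero_point = max(qmin, min(qmax, zero_point))
    return scale, int(zero_point)

def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to the uint8 range

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

scale, zp = affine_quant_params(-1.0, 1.0)
print(scale, zp)  # scale = 2/255, zero point near the middle of the range
```

A tensor whose min/max were never observed during calibration cannot produce a valid (scale, zero_point) pair, which is consistent with the calibration-time failure reported here.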
tensorflowtensorflow | Wrong accuracy value for training data in tutorial | Bug | URL(s) with the issue: Description of issue (what needs changing): under "Train the model" in "Build the model", the accuracy of the model on the training data after 10 epochs is 0.91 (91%), while it is stated as 0.88 (88%). Clear description: since it is already mentioned in the tutorial that the model overfits the training data, the accuracy on the training data should be higher than that on the testing data (88.3%). Submit a pull request: if this issue is alright, I'll be glad to submit a PR right away. Thanks for the help!
tensorflowtensorflow | Crash on Hexagon delegate | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Android. Mobile device: Mi A2, Pixel 3. TensorFlow version: 1.15.0. Bazel version (if compiling from source): 1.1.0. Describe the current behavior: I've built the DSP delegate AAR, but on my Mi A2 I always get the following crash in native code. Is there anything I can do to debug with the .cc files? Also, I tried on a Pixel 3 and it returned "This device does not support Hexagon delegate"; that doesn't seem normal for a Snapdragon 845 device.

    2019-12-18 18:30:38.484 18124-18510/com.ivuu I/tflite: Created TensorFlow Lite delegate for Hexagon.
    2019-12-18 18:30:38.491 18124-18510/com.ivuu I/tflite: Initialized TensorFlow Lite runtime.
    2019-12-18 18:30:38.952 18124-18510/com.ivuu A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x2 in tid 18510 (Thread-130), pid 18124
tensorflowtensorflow | tf.load_op_library bug | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub (tag: bug_template). System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 16.04. Mobile device: n/a. TensorFlow installed from: pip. TensorFlow version: 1.13.2. Python version: 3.7. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: 10.0. GPU model and memory: 1060, 6 GB. Describe the current behavior: in one process, calling

    tf.load_op_library(os.path.join(dname, 'build/libcompute_depth.so'))

twice with the full path, the second time the OP_LIST is empty; but calling tf.load_op_library twice with a relative path, it is not empty. Describe the expected behavior: with the full path, calling tf.load_op_library twice, the OP_LIST should not be empty. Code to reproduce the issue: I met this problem while compiling my own op, but I think it is a general problem. To reproduce, just load the library twice in a Python file, with the full path and with a relative path. Other info / logs: the compute_depth_grad module __dict__ after loading with a relative path:

    {'__name__': 'a11935c229913616b7b14d8da52f01ac',
     '__doc__': 'Python wrapper around TensorFlow ops.\n\nThis file is MACHINE GENERATED! Do not edit.\n',
     '__package__': None,
     'LIB_HANDLE': <lib handle>,
     'OP_LIST': op { name: "ComputeDepth"
                     input_arg { name: "input" type: DT_FLOAT }
                     input_arg { name: "focal" type: DT_FLOAT }
                     output_arg { name: "depth" type: DT_FLOAT }
                     attr { name: "upratio" type: "int" } }
                op { name: "ComputeDepthGrad"
                     input_arg { name: "depth_grad" type: DT_FLOAT }
                     input_arg { name: "input" type: DT_FLOAT }
                     input_arg { name: "focal" type: DT_FLOAT }
                     output_arg { name: "grad_input" type: DT_FLOAT }
                     output_arg { name: "grad_focal" type: DT_FLOAT } }}

But with the full name, the second load will print:

    {'__name__': '670cc8cfec5b6d3b8635f39bd583d769',
     '__doc__': 'Python wrapper around TensorFlow ops.\n\nThis file is MACHINE GENERATED! Do not edit.\n',
     '__package__': None,
     'LIB_HANDLE': <lib handle>,
     'OP_LIST': }
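The symptom is consistent with the loader caching generated wrapper modules under a key derived from the library path. A toy sketch of path-keyed caching (purely illustrative; this is not TensorFlow's actual implementation, and the module contents here are made up) shows why key normalization matters for repeated loads:

```python
import os

_module_cache = {}

def load_op_library_sketch(path):
    # Toy stand-in for a loader that caches one wrapper module per library.
    # Keying on the raw string would treat 'build/lib.so' and its absolute
    # spelling as two different libraries; normalizing the key avoids that.
    key = os.path.abspath(path)
    if key not in _module_cache:
        _module_cache[key] = {"OP_LIST": ["ComputeDepth", "ComputeDepthGrad"]}
    return _module_cache[key]

m1 = load_op_library_sketch("build/libcompute_depth.so")
m2 = load_op_library_sketch(os.path.abspath("build/libcompute_depth.so"))
assert m1 is m2  # with a normalized key, both spellings hit the same entry
```

With an un-normalized key, the second lookup would miss the cache and regenerate a fresh (and, in the bug above, empty) wrapper module instead.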
tensorflowtensorflow | Softmax activation doesn't get converted to a SOFTMAX TFLite operator if ndims > 2 | Bug | System information: OS platform and distribution: macOS 10.15.1. TensorFlow installed from: binary. TensorFlow version (or github SHA if from source): tf-nightly 2.1.0-dev20191203. Command used to run the converter (Python API):

    import pathlib
    import tensorflow as tf

    inpt = tf.keras.layers.Input(shape=(256, 256, 3))
    out = tf.keras.layers.Lambda(lambda x: tf.keras.activations.softmax(x))(inpt)
    out = tf.keras.layers.Lambda(lambda x: tf.nn.softmax(x))(out)
    model = tf.keras.Model(inpt, out)

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    pathlib.Path('out.tflite').write_bytes(tflite_model)

Failure details: (attached image) this graph shows the difference between the different softmax methods. When using tf.keras.activations.softmax, there is code (L43-L79) with a workaround for multiple dimensions; it looks like this was written before the TensorFlow op had multi-dimension support.
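The multi-dimension workaround the reporter points at boils down to treating the input as a batch of rows and applying a 2-D softmax row by row (higher-rank inputs are first reshaped to (k, n), then restored). A plain-Python sketch of the idea, not the Keras source:

```python
import math

def softmax_1d(row):
    # Subtract the row max before exponentiating, for numerical stability.
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_last_axis(rows):
    # rows has shape (k, n): every leading dimension has been flattened into k,
    # and softmax is applied independently along the last axis.
    return [softmax_1d(row) for row in rows]

rows = [[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]]
for r in softmax_last_axis(rows):
    print(r, sum(r))  # each row sums to 1
```

Since the underlying op now supports the multi-dimensional case directly, this reshape dance is what makes the converter emit a non-SOFTMAX subgraph for ndims > 2.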
tensorflowtensorflow | Automatic mixed precision and XLA not working with model.fit_generator | Bug | System information: OS platform and distribution: tensorflow/tensorflow:nightly-gpu-py3 nvidia-docker image (ec33d38d1b43). TensorFlow version: 2.1.0-dev20191106. Python version: 3.6.8. CUDA/cuDNN version: 10.0. GPU model and memory: Titan RTX, 24 GB. Describe the problem: when using model.fit_generator, automatic mixed precision (AMP) and compilation with XLA don't seem to work. However, it all works fine with model.fit. Please see below for a complete code sample to reproduce this. Source code / logs:

    import tensorflow as tf
    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Dense, Flatten
    from tensorflow.keras.optimizers import Adam
    from tensorflow.keras.utils import Sequence
    import numpy as np
    import random

    def mixed_precision_test(use_generator):
        # use XLA
        tf.config.optimizer.set_jit(True)

        input_shape = (100, 100, 100)
        n_samples = 1000

        # build the model
        model = Sequential()
        model.add(Dense(16, input_shape=input_shape, activation='relu'))
        model.add(Flatten())
        model.add(Dense(8, activation='relu'))
        model.add(Dense(1, activation='sigmoid'))
        optimiser = Adam(lr=0.001)
        # use AMP
        optimiser = tf.train.experimental.enable_mixed_precision_graph_rewrite(optimiser)
        model.compile(optimizer=optimiser, loss='binary_crossentropy', metrics=['accuracy'])

        if use_generator:
            class DataGen(Sequence):
                def __len__(self):
                    return n_samples
                def __getitem__(self, index):
                    x = np.random.rand(1, *input_shape)
                    x = np.array(x, dtype=np.uint8)
                    y = np.array(random.choice([0, 1]), dtype=np.uint8)
                    return x, y
            model.fit_generator(generator=DataGen())
        else:
            x = np.random.rand(n_samples, *input_shape)
            x = np.array(x, dtype=np.uint8)
            y = np.array([random.choice([0, 1]) for _ in range(len(x))], dtype=np.uint8)
            model.fit(x, y)

    if __name__ == '__main__':
        mixed_precision_test(use_generator=False)
        mixed_precision_test(use_generator=True)

When setting use_generator=False, TensorFlow prints out the following logs:

    I tensorflow/core/grappler/optimizers/auto_mixed_precision.cc:1857] Converted 26/409 nodes to float16 precision using 2 cast(s) to float16 (excluding Const and Variable casts)
    I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
    I tensorflow/compiler/jit/xla_compilation_cache.cc:242] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.

indicating that AMP and XLA are working as intended. When setting use_generator=True, those logs are not present and GPU memory consumption is higher, suggesting that no casting to FP16 is performed.
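As background on why it matters whether AMP actually kicks in (a plain-Python illustration of float16 underflow and loss scaling in general, not specific to the bug above): gradient values that are fine in float32 can round to zero in float16, which is what AMP's loss scaling guards against. Python's struct module can round-trip through IEEE half precision:

```python
import struct

def to_fp16(x):
    # Round-trip a Python float through IEEE half precision ('e' format).
    return struct.unpack('e', struct.pack('e', x))[0]

grad = 1e-8                 # a small gradient value, fine in float32
print(to_fp16(grad))        # 0.0: underflows in float16

loss_scale = 128.0          # scaling the loss scales the gradients too
scaled = to_fp16(grad * loss_scale)
print(scaled)               # non-zero: representable in float16
print(scaled / loss_scale)  # unscale afterwards to recover roughly 1e-8
```

This is the mechanism behind the loss_scale argument seen in mixed-precision setups: scale before the fp16 backward pass, unscale before applying the update.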
tensorflow/tensorflow | DropoutWrapper and exploding-gradient behaviour for recurrent neural networks | Bug | Dear all, I have a point about `DropoutWrapper` and its use with recurrent neural networks. Because the dropout can be applied to the state or the output (`state_keep_prob` and `output_keep_prob`), I find that during the recurrent process the state propagated through time can take values not bounded in the interval [-1, 1]. This is probably due to the way dropout is implemented: at training time with a scaling, instead of at test time with the expectation. Since the dropout is applied after the activation (i.e. tanh), the feature values will range between -inf and +inf. This point is a bit strange to me, since the current implementation can induce exploding-gradient issues in the GRU/LSTM process, while such cells were introduced to deal with vanishing as well as exploding gradients. Could you please give me some feedback about my issue? In practice it can impact people who commonly employ such wrappers, inducing behaviour that is divergent w.r.t. the theoretical behaviour of RNN/GRU/LSTM. All the best. |
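The unbounded-state observation above can be reproduced with plain numbers: inverted dropout rescales surviving activations by 1/keep_prob at training time, so tanh outputs leave [-1, 1]. A small sketch (an illustrative helper, not the TensorFlow implementation, with a fixed mask instead of random sampling):

```python
def inverted_dropout(values, mask, keep_prob):
    # Inverted dropout as applied at training time: surviving activations
    # are scaled by 1/keep_prob so the *expected* value is unchanged, but a
    # tanh output bounded in [-1, 1] can now leave that interval.
    return [v / keep_prob if keep else 0.0 for v, keep in zip(values, mask)]

tanh_out = [0.9, -0.8, 0.5]                                 # bounded in [-1, 1]
kept = inverted_dropout(tanh_out, mask=[True, True, False], keep_prob=0.5)
print(kept)  # [1.8, -1.6, 0.0] -> values escape [-1, 1]
```

When such a scaled value is fed back as the recurrent state, repeated rescaling across time steps is what the report identifies as a potential source of exploding gradients.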
tensorflow/tensorflow | I do not get the file or the definition of gen_nn_ops in the place tensorflow/python/ops | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. **URL(s) with the issue:** Please provide a link to the documentation entry, for example: ... **Description of issue (what needs changing):** Clear description. For example, why should someone use this method? How is it useful? **Correct links:** Is the link to the source code correct? **Parameters defined:** Are all parameters defined and formatted correctly? **Returns defined:** Are return values defined? **Raises listed and defined:** Are the errors defined? **Usage example:** Is there a usage example? See the API guide on how to write testable usage examples. **Request visuals, if applicable:** Are there currently visuals? If not, will it clarify the content? **Submit a pull request?** Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide. |
tensorflow/tensorflow | Provided LSTM example doesn't work | Bug | **TensorFlow Micro system information** - Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04 - TensorFlow installed from (source or binary): pip install - TensorFlow version (commit SHA if source): 1.15.0

**Describe the problem** The example provided here doesn't work.

**Please provide the exact sequence of commands/steps when you ran into the problem** If I put together the example, it gives this:

```python
# Note: this needs to happen before importing tensorflow.
import os
os.environ['TF_ENABLE_CONTROL_FLOW_V2'] = '1'
import sys
from absl import app
import argparse
import tensorflow as tf


class MnistLstmModel(object):
  """Build a simple LSTM-based MNIST model.

  Attributes:
    time_steps: The maximum length of the time steps, but since we're just
      using the 'width' dimension as time steps, it's actually a fixed number.
    input_size: The LSTM layer input size.
    num_lstm_layer: Number of LSTM layers for the stacked LSTM cell case.
    num_lstm_units: Number of units in the LSTM cell.
    units: The units for the last layer.
    num_class: Number of classes to predict.
  """

  def __init__(self, time_steps, input_size, num_lstm_layer, num_lstm_units,
               units, num_class):
    self.time_steps = time_steps
    self.input_size = input_size
    self.num_lstm_layer = num_lstm_layer
    self.num_lstm_units = num_lstm_units
    self.units = units
    self.num_class = num_class

  def build_model(self):
    """Build the model using the given configs.

    Returns:
      x: The input placeholder tensor.
      logits: The logits of the output.
      output_class: The prediction.
    """
    x = tf.placeholder('float32', [None, self.time_steps, self.input_size],
                       name='INPUT')
    lstm_layers = []
    for _ in range(self.num_lstm_layer):
      lstm_layers.append(
          # Important: here we use tf.lite.experimental.nn.TFLiteLSTMCell
          # (OpHinted LSTMCell).
          tf.lite.experimental.nn.TFLiteLSTMCell(
              self.num_lstm_units, forget_bias=0))
    # Weights and biases for the output softmax layer.
    out_weights = tf.Variable(tf.random_normal([self.units, self.num_class]))
    out_bias = tf.Variable(tf.zeros([self.num_class]))

    # Transpose input x to make it time major.
    lstm_inputs = tf.transpose(x, perm=[1, 0, 2])
    lstm_cells = tf.keras.layers.StackedRNNCells(lstm_layers)
    # Important: here we use tf.lite.experimental.nn.dynamic_rnn and
    # time_major is set to True.
    outputs, _ = tf.lite.experimental.nn.dynamic_rnn(
        lstm_cells, lstm_inputs, dtype='float32', time_major=True)

    # Transpose the outputs back to [batch, time, output].
    outputs = tf.transpose(outputs, perm=[1, 0, 2])
    outputs = tf.unstack(outputs, axis=1)
    logits = tf.matmul(outputs[-1], out_weights) + out_bias
    output_class = tf.nn.softmax(logits, name='OUTPUT_CLASS')
    return x, logits, output_class


def train(model, model_dir, batch_size=20, learning_rate=0.001,
          train_steps=200, eval_steps=50, save_every_n_steps=100):
  """Train & save the MNIST recognition model."""
  # Train & test datasets.
  (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
  train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
  train_iterator = train_dataset.shuffle(
      buffer_size=1000).batch(batch_size).repeat().make_one_shot_iterator()
  x, logits, output_class = model.build_model()
  test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
  test_iterator = test_dataset.batch(batch_size).repeat().make_one_shot_iterator()
  # Input label placeholder.
  y = tf.placeholder(tf.int32, [None])
  one_hot_labels = tf.one_hot(y, depth=model.num_class)
  # Loss function.
  loss = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(logits=logits,
                                              labels=one_hot_labels))
  correct = tf.nn.in_top_k(output_class, y, 1)
  accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
  # Optimization.
  opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)

  # Initialize variables.
  init = tf.global_variables_initializer()
  saver = tf.train.Saver()
  batch_x, batch_y = train_iterator.get_next()
  batch_test_x, batch_test_y = test_iterator.get_next()
  with tf.Session() as sess:
    sess.run([init])
    for i in range(train_steps):
      batch_x_value, batch_y_value = sess.run([batch_x, batch_y])
      _, loss_value = sess.run([opt, loss],
                               feed_dict={x: batch_x_value, y: batch_y_value})
      if i % 100 == 0:
        tf.logging.info('Training step %d, loss is %f' % (i, loss_value))
      if i > 0 and i % save_every_n_steps == 0:
        accuracy_sum = 0.0
        for _ in range(eval_steps):
          test_x_value, test_y_value = sess.run([batch_test_x, batch_test_y])
          accuracy_value = sess.run(
              accuracy, feed_dict={x: test_x_value, y: test_y_value})
          accuracy_sum += accuracy_value
        tf.logging.info('Training step %d, accuracy is %f' %
                        (i, accuracy_sum / (eval_steps * 1.0)))
        saver.save(sess, model_dir)


def export(model, model_dir, tflite_model_file,
           use_post_training_quantize=True):
  """Export the trained model to a tflite model."""
  tf.reset_default_graph()
  x, _, output_class = model.build_model()
  saver = tf.train.Saver()
  sess = tf.Session()
  saver.restore(sess, model_dir)
  # Convert to a tflite model.
  converter = tf.lite.TFLiteConverter.from_session(sess, [x], [output_class])
  converter.post_training_quantize = use_post_training_quantize
  tflite = converter.convert()
  with open(tflite_model_file, 'wb') as f:
    f.write(tflite)


def train_and_export(parsed_flags):
  """Train the MNIST LSTM model and export to TfLite."""
  model = MnistLstmModel(
      time_steps=28, input_size=28, num_lstm_layer=2, num_lstm_units=64,
      units=64, num_class=10)
  tf.logging.info('Starts training...')
  train(model, parsed_flags.model_dir)
  tf.logging.info('Finished training, starts exporting to tflite to %s ...' %
                  parsed_flags.tflite_model_file)
  export(model, parsed_flags.model_dir, parsed_flags.tflite_model_file,
         parsed_flags.use_post_training_quantize)
  tf.logging.info('Finished exporting, model is %s' %
                  parsed_flags.tflite_model_file)


def run_main(_):
  """Main in the TfLite LSTM tutorial."""
  parser = argparse.ArgumentParser(
      description='Train a MNIST recognition model then export to TfLite.')
  parser.add_argument('--model_dir', type=str,
                      help='Directory where the models will be stored.',
                      required=True)
  parser.add_argument('--tflite_model_file', type=str,
                      help='Full filepath to the exported tflite model file.',
                      required=True)
  parser.add_argument('--use_post_training_quantize', action='store_true',
                      default=True,
                      help='Whether or not to use post_training_quantize.')
  parsed_flags, _ = parser.parse_known_args()
  train_and_export(parsed_flags)


def main():
  app.run(main=run_main, argv=sys.argv[:1])


if __name__ == '__main__':
  main()
```

When run simply like `python example.py --model_dir lstms --tflite_model_file lstms/model.tflite`, I get the following error message:

```
INFO:tensorflow:Starts training...
I1216 23:55:41.555022 140529868711744 doc_example.py:159] Starts training...
INFO:tensorflow:Training step 0, loss is 2.657418
I1216 23:55:45.383375 140529868711744 doc_example.py:120] Training step 0, loss is 2.657418
INFO:tensorflow:Training step 100, loss is 0.867711
I1216 23:55:47.319205 140529868711744 doc_example.py:120] Training step 100, loss is 0.867711
INFO:tensorflow:Training step 100, accuracy is 0.540000
I1216 23:55:47.966933 140529868711744 doc_example.py:132] Training step 100, accuracy is 0.540000
INFO:tensorflow:Finished training, starts exporting to tflite to lstm_doc/model.tflite ...
I1216 23:55:50.603394 140529868711744 doc_example.py:162] Finished training, starts exporting to tflite to lstm_doc/model.tflite ...
INFO:tensorflow:Restoring parameters from lstm_doc
I1216 23:55:50.832539 140529868711744 saver.py:1284] Restoring parameters from lstm_doc
2019-12-16 23:55:50.880481: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2019-12-16 23:55:50.880555: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2019-12-16 23:55:50.900838: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:786] Optimization results for grappler item: graph_to_optimize
2019-12-16 23:55:50.900867: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: Graph size after: 412 nodes (0), 507 edges (0), time = 3.325ms.
2019-12-16 23:55:50.900872: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: Graph size after: 412 nodes (0), 507 edges (0), time = 5.794ms.
2019-12-16 23:55:50.900875: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:786] Optimization results for grappler item: Hey_rnn_while_body_8743
2019-12-16 23:55:50.900878: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: function_optimizer did nothing. time = 0.002ms.
2019-12-16 23:55:50.900882: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: function_optimizer did nothing. time = 0ms.
2019-12-16 23:55:50.900884: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:786] Optimization results for grappler item: Hey_rnn_while_cond_8742
2019-12-16 23:55:50.900888: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: function_optimizer did nothing. time = 0ms.
2019-12-16 23:55:50.900891: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: function_optimizer did nothing. time = 0ms.
INFO:tensorflow:Froze 26 variables.
I1216 23:55:50.942811 140529868711744 graph_util_impl.py:334] Froze 26 variables.
INFO:tensorflow:Converted 26 variables to const ops.
I1216 23:55:50.946878 140529868711744 graph_util_impl.py:394] Converted 26 variables to const ops.
/home/mparient/virtualenvs/decibel/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py:846: UserWarning: Property post_training_quantize is deprecated, please use optimizations=[Optimize.DEFAULT] instead.
```

```
ConverterError                            Traceback (most recent call last)
.../doc_example.py in <module>
    196 if __name__ == '__main__':
    197     main()

.../doc_example.py in main()
    193     app.run(main=run_main, argv=sys.argv[:1])

.../absl/app.py in run(main, argv, flags_parser)
    299     _run_main(main, args)

.../absl/app.py in _run_main(main, argv)
    250     sys.exit(main(argv))

.../doc_example.py in run_main(_)
    189     train_and_export(parsed_flags)

.../doc_example.py in train_and_export(parsed_flags)
    163     export(model, parsed_flags.model_dir, parsed_flags.tflite_model_file,
    164            parsed_flags.use_post_training_quantize)

.../doc_example.py in export(model, model_dir, tflite_model_file, use_post_training_quantize)
    144     converter = tf.lite.TFLiteConverter.from_session(sess, [x], [output_class])
    145     converter.post_training_quantize = use_post_training_quantize
--> 146     tflite = converter.convert()

.../tensorflow_core/lite/python/lite.py in convert(self)
    981         input_tensors=self._input_tensors,
    982         output_tensors=self._output_tensors,
--> 983         **converter_kwargs)

.../tensorflow_core/lite/python/convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, enable_mlir_converter, *args, **kwargs)
    447       input_data.SerializeToString(),
    448       debug_info_str=debug_info_str,
--> 449       enable_mlir_converter=enable_mlir_converter)

.../tensorflow_core/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    198       stdout = _try_convert_to_unicode(stdout)
    199       stderr = _try_convert_to_unicode(stderr)
--> 200       raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))

ConverterError: See console for info.
2019-12-16 23:55:52.418001: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
[the line above is repeated about 24 times with successive timestamps]
2019-12-16 23:55:52.418227: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: TensorListReserve
2019-12-16 23:55:52.418236: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op -- 21
2019-12-16 23:55:52.418479: F tensorflow/lite/toco/tooling_util.cc:1074] Check failed: name.substr(colon_pos + 1).find_first_not_of("0123456789") == string::npos (0 vs. 18446744073709551615) Array 'stacked_rnn_cells/...InputHint-UnidirectionalSequenceLstm-34aa74ee205711ea831bad6e4148a879-12-None-input_bias/ReadVariableOp:value' has non-digit characters after colon.
Fatal Python error: Aborted

Current thread 0x00007f7ffc1c8740 (most recent call first):
  File "/home/mparient/virtualenvs/decibel/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 52 in execute
  File "/home/mparient/virtualenvs/decibel/lib/python3.6/site-packages/absl/app.py", line 250 in _run_main
  File "/home/mparient/virtualenvs/decibel/lib/python3.6/site-packages/absl/app.py", line 299 in run
  File "/home/mparient/virtualenvs/decibel/lib/python3.6/site-packages/tensorflow_core/python/platform/app.py", line 40 in run
  File "/home/mparient/virtualenvs/decibel/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 89 in main
  File "/home/mparient/virtualenvs/decibel/bin/toco_from_protos", line 8 in <module>
Aborted
``` |
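The fatal check at the end of the log is a tensor-name validation in the converter: everything after the final colon in an array name must be a numeric output index, and the OpHinted graph produces a name ending in `:value`. A rough Python sketch of that check (the function name is illustrative, not the actual toco code):

```python
def has_valid_output_index(name):
    # Mirrors the failing CHECK in tooling_util.cc: after the last ':'
    # a tensor name may contain only digits (the op's output index).
    colon = name.rfind(':')
    return colon == -1 or name[colon + 1:].isdigit()

print(has_valid_output_index('dense/BiasAdd:0'))                   # True
print(has_valid_output_index('input_bias/ReadVariableOp:value'))   # False -> CHECK failure
```

This suggests the converter is seeing un-lowered `ReadVariableOp` outputs (resource variables) rather than the frozen constants it expects.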
tensorflow/tensorflow | Low performance in TF2.x distributed mirrored strategy with 4 V100 GPUs | Bug | **System information** - Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.3 LTS - TensorFlow installed from (source or binary): binary - TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38 2.0.0 - Python version: 3.6.8 - CUDA/cuDNN version: driver version 440.33.01, CUDA version 10.2, cuDNN 7.6.2 - GPU model and memory: Tesla V100-SXM2, 16 GB

**Describe the current behavior** With 4 V100 GPUs under the distributed mirrored GPU strategy, a single training step is around 3x slower than with a single V100 GPU.

**Describe the expected behavior** A single step should be less than 2x slower.

**Code to reproduce the issue** Training loop: hierarchical VAE in the current configuration (L1121). The code is adapted from a TF1.x repository (L1045) and is compiled using the TF2.x `tf.function` annotation. It uses a dry run of the model to pre-create variables using `tf.compat.v1.variable_scope(scope, reuse=tf.compat.v1.AUTO_REUSE)` (L1144) and then runs the actual training step(s) (L1180). The total size of the mirrored parameters is around 500 MB. With 4 V100 GPUs, a training step is around 3x slower than with a single V100 GPU.

Command: `nohup python3 main.py --dataset CelebAMask-HQ --img_height 256 --img_width 256 --ch 16 --img_ch 3 --phase train --save_freq 10000 --batch_size 18 --gan_type hinge --code_gan_type gan --n_critic 1 --code_num_layers 4 --code_dist_num_layers 0 --sn False --train_main True --train_nondet False --lr 0.0002 --print_freq 100 > train_celebamask_hq.log`

**Other info / logs** With `CUDA_VISIBLE_DEVICES=0`: startup time 9 min, GPU utilization ~90%, training step ~0.7 s (log file: train_celebamask_hq_1xgpu.log). With `CUDA_VISIBLE_DEVICES=0,1,2,3`: startup time ~30 min (building variables via dry run ~10 min, building the model ~20 min), GPU utilization ~50%, training step ~2 s (log file: train_celebamask_hq_4xgpu.log). |
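For reference, the step times reported above translate into a per-step slowdown as follows; this is a back-of-the-envelope sketch assuming a fixed global batch size, where an ideal data-parallel step on N GPUs would take roughly the single-GPU step time:

```python
def per_step_slowdown(single_gpu_step_s, multi_gpu_step_s):
    # With a fixed global batch size, each mirrored replica processes 1/N of
    # the batch, so an ideal N-GPU step would take about the single-GPU step
    # time; this ratio measures how far the observed run is from that ideal.
    return multi_gpu_step_s / single_gpu_step_s

# Numbers from the report: ~0.7 s/step on 1 GPU, ~2 s/step on 4 GPUs.
print(per_step_slowdown(0.7, 2.0))  # ~2.86x, matching the "around 3x" observed
```

A slowdown this large with ~500 MB of mirrored parameters typically points at all-reduce communication or host-side overhead dominating the step.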
tensorflow/tensorflow | Replacement for experimental_run_tf_function after removal from tf.keras Model.compile | Bug | It looks like `experimental_run_tf_function` was removed from tf.keras `Model.compile` in this commit a few days ago (diff ...R1159). In Horovod, this flag (set for graph mode) was necessary in order for `optimizer.get_gradients` to be called, which aggregates gradients across workers. Since this flag has been removed, distributed training in Horovod with tf.keras is not working in our nightly builds. Is there a workaround to achieve the same behavior with the latest changes on master? Note that we cannot perform the allreduce aggregation in `apply_gradients` due to interactions with gradient clipping and loss scaling (see the linked issue). |
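What the `get_gradients` hook enables in Horovod is cross-worker gradient averaging before the weights are updated. A minimal pure-Python sketch of that aggregation step (scalars stand in for gradient tensors; the helper name is illustrative, not a Horovod API):

```python
def allreduce_mean(grads_per_worker):
    # Average each gradient position across workers, which is the effect of
    # an allreduce-mean over the per-worker gradient lists.
    num_workers = len(grads_per_worker)
    return [sum(g) / num_workers for g in zip(*grads_per_worker)]

# Two workers, each with two gradient "tensors".
print(allreduce_mean([[1.0, 4.0], [3.0, 8.0]]))  # [2.0, 6.0]
```

The ordering constraint in the report follows from this: the averaged gradients must exist *before* gradient clipping and loss-scale unscaling are applied, which is why doing the allreduce inside `apply_gradients` is not equivalent.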
tensorflow/tensorflow | Typo on docs | Bug | Line 194 is missing the square brackets. Wrong: "Use distribution to create a linear combination of value with shape batch_size, Tq, dim." Correct: "Use distribution to create a linear combination of value with shape [batch_size, Tq, dim]." Link to the line: (code, #L194). |
tensorflow/tensorflow | Lite Micro micro_speech example: input tensor lifetime assumption invalid | Bug | **TensorFlow Micro system information** - Host OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04 - TensorFlow installed from (source or binary): source - TensorFlow version (commit SHA if source): e12ba3de80d9315b7174037081adb482689bc6d6 - Target platform (e.g. Arm, Mbed OS, Arduino Nano 33, etc.): all

**Describe the problem** The feature provider accumulates feature slices using the input tensor in the arena as a buffer. However, the lifetime of the input buffer is only the first operation of the model. As such, the feature buffer may be overwritten when the memory is reused for tensors with different lifetimes. This is the case with the current model and the current greedy memory planner; as only the front of the feature buffer is currently overwritten and the front feature slices are never reused, this does not currently impact the example. However, using the example as a basis for more complex models would trigger this problem. I will submit a PR to add a buffer to the feature provider; alternatively, the model could be changed to pass the input through to the output to keep the tensor alive.

**Please provide the exact sequence of commands/steps when you ran into the problem** Code review. |
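The hazard described above can be illustrated with a toy arena: once a tensor's lifetime is assumed over, a planner may hand its bytes to another tensor, clobbering data that a component (here, the feature provider) still relies on. A small sketch (not the actual TFLite Micro memory planner):

```python
# A tiny stand-in for the tensor arena.
arena = bytearray(8)

def write_tensor(offset, data):
    # Write a tensor's payload at its planned offset in the arena.
    arena[offset:offset + len(data)] = data

# The feature provider accumulates slices in the "input" tensor's region...
write_tensor(0, b'\x01\x02\x03\x04')
# ...but the planner, believing the input is dead after the first op,
# places an intermediate tensor at the same front offset.
write_tensor(0, b'\xff\xff')
print(arena[:4])  # the first two accumulated feature bytes are gone
```

Both fixes proposed in the report address this: a dedicated buffer takes the feature data out of the arena entirely, while routing the input through to the output extends its planned lifetime so the region is never reused.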
tensorflowtensorflow | tf keras layers batchnormalization may not work in tf 2 0 and eager model be disable | Bug | system information have I write custom code as oppose to use a stock example script provide in tensorflow yes os platform and distribution e g linux ubuntu 16 04 linux ubuntu 16 04 in docker tensorflow instal from source or binary pip install tensorflow version use command below v2 0 0 rc2 26 g64c3d38 python version 3 5 cuda cudnn version 10 0 7 gpu model and memory gtx 1080ti 11175mib describe the current behavior hi author and developer I be develop our project in tf 2 0 0 and eager mode be disable the main reason be tf 1 x will not be maintain but third party library have not be ready for tf 2 0 yet this issue be a separate issue from 35050 issuecomment 565395512 this potential issue be somethine wrong if user do custom training with level api which include tf keras layer batchnormalization in tf 2 0 and eager model be disable I summary the testcaset as the follow python import tensorflow as tf tf compat v1 disable eager execution tf compat v1 disable v2 behavior import numpy as np batch size 100 def download datum get raw data trainx trainy testx testy tf keras datasets cifar10 load datum trainx trainx astype np float32 testx testx astype np float32 ont hot trainy tf keras util to categorical trainy 10 testy tf keras util to categorical testy 10 get validation set training size 45000 validx trainx training size validy trainy training size trainx trainx training size trainy trainy training size return trainx trainy validx validy testx testy def datum pipeline datax datay dataset tf datum dataset from tensor slice datax datay dataset dataset shuffle batch size 8 dataset dataset repeat dataset dataset batch batch size dataset dataset prefetch tf datum experimental autotune return dataset class custom model def init self def acc acc tf keras metric categorical accuracy label ref clf out return tf math reduce mean acc def c loss loss tf keras loss 
categorical crossentropy label ref clf out loss tf math reduce mean loss return loss create model clf input tf keras layers input shape 32 32 3 name model input model tf keras application resnet v2 resnet50v2 include top true weight none input tensor clf input pool max class 10 model tf keras application vgg16 vgg16 include top true weight none input tensor clf input pool max class 10 model compile loss categorical crossentropy optimizer sgd metric accuracy label ref tf keras layers input shape 10 name label ref clf out model clf input use tf keras optimizer nadam would get error optimizer tf keras optimizers nadam lr 0 0005 optimizer tf compat v1 train adamoptimizer learning rate 0 01 self train op optimizer minimize c loss var list model trainable variable self clf model model self clf input clf input self label ref label ref self op acc acc self c loss c loss if name main set gpu import os if os environ get cuda visible device be none os environ cuda visible device 0 reset tf session tf compat v1 kera backend clear session gpu option tf compat v1 gpuoption allow growth true sess tf compat v1 session config tf compat v1 configproto gpu option gpu option tf compat v1 keras backend set session sess prepare datum trainx trainy validx validy testx testy download datum train gen datum pipeline trainx trainy valid gen datum pipeline validx validy test gen datum pipeline testx testy build targeted model model tf keras application resnet v2 resnet50v2 include top true weight none input shape 32 32 3 pool max class 10 model tf keras application vgg16 vgg16 include top true weight none input shape 32 32 3 pool none class 10 model compile loss categorical crossentropy optimizer sgd metric accuracy fit and evalutate model fit train gen step per epoch trainy shape 0 batch size validation datum valid gen validation step validy shape 0 batch size epoch 5 verbose 2 model evaluate testx testy verbose 2 batch size batch size create a new model print make sure that we create a new 
model model custom model sess run tf compat v1 global variable initializer model clf model evaluate testx testy verbose 2 batch size batch size train model num epoch 5 total len trainy shape 0 batch size tf iter tf compat v1 datum make initializable iterator train gen tf next tf iter get next sess run tf iter initializer for epoch in range num epoch c loss acc 0 0 0 0 for ii in range total len x y sess run tf next b c loss b acc sess run model c loss model op acc model train op feed dict model clf input x model label ref y tf keras backend learning phase 1 c loss c loss b c loss acc acc b acc c loss c loss total len acc acc total len print training epoch d d loss 3f acc 3f format epoch 1 num epoch c loss acc print show loss and accuracy with keras api model clf model evaluate trainx trainy verbose 2 batch size batch size model clf model evaluate validx validy verbose 2 batch size batch size model clf model evaluate testx testy verbose 2 batch size batch size print show loss and accuracy with low level api evaluate num epoch 1 total len validy shape 0 batch size tf iter tf compat v1 datum make initializable iterator valid gen tf next tf iter get next sess run tf iter initializer for epoch in range num epoch c loss t acc t c loss f acc f 0 0 0 0 0 0 0 0 for ii in range total len x y sess run tf next b c loss b acc sess run model c loss model op acc feed dict model clf input x model label ref y tf keras backend learning phase 1 c loss t c loss t b c loss acc t acc t b acc b c loss b acc sess run model c loss model op acc feed dict model clf input x model label ref y tf keras backend learning phase 0 c loss f c loss f b c loss acc f acc f b acc c loss t c loss t total len c loss f c loss f total len acc t acc t total len acc f acc f total len print validation learning phase 1 epoch d d loss 3f acc 3f format epoch 1 num epoch c loss t acc t print validation learning phase 0 epoch d d loss 3f acc 3f format epoch 1 num epoch c loss f acc f evaluate num epoch 1 total len 
testy shape 0 batch size tf iter tf compat v1 datum make initializable iterator test gen tf next tf iter get next sess run tf iter initializer for epoch in range num epoch c loss t acc t c loss f acc f 0 0 0 0 0 0 0 0 for ii in range total len x y sess run tf next b c loss b acc sess run model c loss model op acc feed dict model clf input x model label ref y tf keras backend learning phase 1 c loss t c loss t b c loss acc t acc t b acc b c loss b acc sess run model c loss model op acc feed dict model clf input x model label ref y tf keras backend learning phase 0 c loss f c loss f b c loss acc f acc f b acc c loss t c loss t total len c loss f c loss f total len acc t acc t total len acc f acc f total len print testing learning phase 1 epoch d d loss 3f acc 3f format epoch 1 num epoch c loss t acc t print testing learning phase 0 epoch d d loss 3f acc 3f format epoch 1 num epoch c loss f acc f the first part of testing case be training model with high leval api and the result be as expect 450 450 39 loss 1 9658 accuracy 0 2993 val loss 1 7215 val accuracy 0 3738 epoch 2 5 450 450 28 loss 1 5722 accuracy 0 4334 val loss 1 5897 val accuracy 0 4152 epoch 3 5 450 450 27 loss 1 3876 accuracy 0 4993 val loss 1 4867 val accuracy 0 4770 epoch 4 5 450 450 28 loss 1 2564 accuracy 0 5477 val loss 1 3498 val accuracy 0 5060 epoch 5 5 450 450 27 loss 1 1488 accuracy 0 5888 val loss 1 3380 val accuracy 0 5232 10000 10000 3s loss 1 3523 accuracy 0 5289 I get a strange loss and the ourput can be see the follow make sure that we create a new model 10000 10000 3s loss 10 2004 accuracy 0 1048 training epoch 1 5 loss 2 288 acc 0 268 training epoch 2 5 loss 1 513 acc 0 448 training epoch 3 5 loss 1 285 acc 0 537 training epoch 4 5 loss 1 426 acc 0 487 training epoch 5 5 loss 1 306 acc 0 535 show loss and accuracy with keras api 45000 45000 9s loss nan accuracy 0 1002 5000 5000 1s loss nan accuracy 0 0986 10000 10000 2s loss nan accuracy 0 1000 show loss and accuracy with low level api 
```
validation (learning_phase=1): epoch 1/1 loss=1.163 acc=0.585
validation (learning_phase=0): epoch 1/1 loss=nan acc=0.099
testing (learning_phase=1): epoch 1/1 loss=1.179 acc=0.587
testing (learning_phase=0): epoch 1/1 loss=nan acc=0.100
```

Obviously, after training the custom model with the low-level API, the results are wrong when `tf.keras.backend.learning_phase()` is set to 0. The results from the Keras API are wrong too. `tf.keras.backend.learning_phase() == 0` may affect the behavior of `tf.keras.layers.BatchNormalization`, but I'm not sure whether this is the root cause. I have tried a small custom model *without* `tf.keras.layers.BatchNormalization` on the MNIST dataset and the results are normal. The testcase for MNIST is shown in the following:

```python
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
tf.compat.v1.disable_v2_behavior()
import numpy as np

batch_size = 100


def download_data():
    # get raw data
    (trainX, trainY), (testX, testY) = tf.keras.datasets.mnist.load_data()
    trainX = trainX.astype(np.float32)
    testX = testX.astype(np.float32)
    # one-hot
    trainY = tf.keras.utils.to_categorical(trainY, 10)
    testY = tf.keras.utils.to_categorical(testY, 10)
    # get validation set
    training_size = 55000
    validX = trainX[training_size:]
    validY = trainY[training_size:]
    trainX = trainX[:training_size]
    trainY = trainY[:training_size]
    # expand dimension
    trainX = np.expand_dims(trainX, axis=3)
    validX = np.expand_dims(validX, axis=3)
    testX = np.expand_dims(testX, axis=3)
    return trainX, trainY, validX, validY, testX, testY


def data_pipeline(dataX, dataY):
    dataset = tf.data.Dataset.from_tensor_slices((dataX, dataY))
    dataset = dataset.shuffle(batch_size * 8)
    dataset = dataset.repeat()
    dataset = dataset.batch(batch_size)
    dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
    return dataset


class CustomModel:
    def __init__(self):
        def acc():
            acc = tf.keras.metrics.categorical_accuracy(label_ref, clf_out)
            return tf.math.reduce_mean(acc)

        def c_loss():
            loss = tf.keras.losses.categorical_crossentropy(label_ref, clf_out)
            loss = tf.math.reduce_mean(loss)
            return loss

        # declare variables
        self.init_op = tf.compat.v1.keras.initializers.he_normal()
        model_layers = [
            tf.keras.layers.Conv2D(16, (3, 3), padding='same', activation='relu',
                                   kernel_initializer=self.init_op, name='clf_c1'),
            tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu',
                                   kernel_initializer=self.init_op, name='clf_c2'),
            tf.keras.layers.MaxPooling2D(pool_size=(2, 2), name='clf_p1'),
            tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu',
                                   kernel_initializer=self.init_op, name='clf_c3'),
            tf.keras.layers.Conv2D(64, (3, 3), padding='same', activation='relu',
                                   kernel_initializer=self.init_op, name='clf_c4'),
            tf.keras.layers.MaxPooling2D(pool_size=(2, 2), name='clf_p2'),
            tf.keras.layers.Flatten(name='clf_f1'),
            tf.keras.layers.Dense(256, activation='relu',
                                  kernel_initializer=self.init_op, name='clf_d1'),
            tf.keras.layers.Dense(10, activation=None,
                                  kernel_initializer=self.init_op, name='clf_d2'),
            tf.keras.layers.Activation('softmax', name='clf_a1'),
        ]
        # clf model
        clf_input = tf.keras.layers.Input(shape=(28, 28, 1), name='model_input')
        clf_out = clf_input
        for ii in model_layers:
            clf_out = ii(clf_out)
        clf_model = tf.keras.models.Model(inputs=[clf_input], outputs=[clf_out],
                                          name='clf_model')
        clf_model.compile(loss='categorical_crossentropy', optimizer='nadam',
                          metrics=['accuracy'])
        label_ref = tf.keras.layers.Input(shape=(10,), name='label_ref')
        clf_out = clf_model(clf_input)
        # using tf.keras.optimizers.Nadam would get an error
        # optimizer = tf.keras.optimizers.Nadam(lr=0.0005)
        optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=0.01)
        self.train_op = optimizer.minimize(c_loss(),
                                           var_list=clf_model.trainable_variables)
        self.clf_model = clf_model
        self.clf_input = clf_input
        self.label_ref = label_ref
        self.op_acc = acc()
        self.c_loss = c_loss()


if __name__ == '__main__':
    # set gpu
    import os
    if os.environ.get('CUDA_VISIBLE_DEVICES') is None:
        os.environ['CUDA_VISIBLE_DEVICES'] = '0'
    # reset tf session
    tf.compat.v1.keras.backend.clear_session()
    gpu_options = tf.compat.v1.GPUOptions(allow_growth=True)
    sess = tf.compat.v1.Session(
        config=tf.compat.v1.ConfigProto(gpu_options=gpu_options))
    tf.compat.v1.keras.backend.set_session(sess)

    # prepare data
    trainX, trainY, validX, validY, testX, testY = download_data()
    train_gen = data_pipeline(trainX, trainY)
    valid_gen = data_pipeline(validX, validY)
    test_gen = data_pipeline(testX, testY)

    # create a new model
    print('Make sure that we create a new model')
    model = CustomModel()
    sess.run(tf.compat.v1.global_variables_initializer())
    model.clf_model.evaluate(testX, testY, verbose=2, batch_size=batch_size)

    # train model
    num_epochs = 5
    total_len = trainY.shape[0] // batch_size
    tf_iter = tf.compat.v1.data.make_initializable_iterator(train_gen)
    tf_next = tf_iter.get_next()
    sess.run(tf_iter.initializer)
    for epoch in range(num_epochs):
        c_loss, acc = 0.0, 0.0
        for ii in range(total_len):
            x, y = sess.run(tf_next)
            b_closs, b_acc, _ = sess.run(
                [model.c_loss, model.op_acc, model.train_op],
                feed_dict={model.clf_input: x, model.label_ref: y,
                           tf.keras.backend.learning_phase(): 1})
            c_loss += b_closs
            acc += b_acc
        c_loss /= total_len
        acc /= total_len
        print('training: epoch {:d}/{:d} loss={:.3f} acc={:.3f}'.format(
            epoch + 1, num_epochs, c_loss, acc))

    print('Show the loss and accuracy with the Keras API')
    model.clf_model.evaluate(trainX, trainY, verbose=2, batch_size=batch_size)
    model.clf_model.evaluate(validX, validY, verbose=2, batch_size=batch_size)
    model.clf_model.evaluate(testX, testY, verbose=2, batch_size=batch_size)

    print('Show the loss and accuracy with the low-level API')
    # evaluate (validation)
    num_epochs = 1
    total_len = validY.shape[0] // batch_size
    tf_iter = tf.compat.v1.data.make_initializable_iterator(valid_gen)
    tf_next = tf_iter.get_next()
    sess.run(tf_iter.initializer)
    for epoch in range(num_epochs):
        c_loss_t, acc_t, c_loss_f, acc_f = 0.0, 0.0, 0.0, 0.0
        for ii in range(total_len):
            x, y = sess.run(tf_next)
            b_closs, b_acc = sess.run(
                [model.c_loss, model.op_acc],
                feed_dict={model.clf_input: x, model.label_ref: y,
                           tf.keras.backend.learning_phase(): 1})
            c_loss_t += b_closs
            acc_t += b_acc
            b_closs, b_acc = sess.run(
                [model.c_loss, model.op_acc],
                feed_dict={model.clf_input: x, model.label_ref: y,
                           tf.keras.backend.learning_phase(): 0})
            c_loss_f += b_closs
            acc_f += b_acc
        c_loss_t /= total_len
        c_loss_f /= total_len
        acc_t /= total_len
        acc_f /= total_len
        print('validation (learning_phase=1): epoch {:d}/{:d} loss={:.3f} acc={:.3f}'
              .format(epoch + 1, num_epochs, c_loss_t, acc_t))
        print('validation (learning_phase=0): epoch {:d}/{:d} loss={:.3f} acc={:.3f}'
              .format(epoch + 1, num_epochs, c_loss_f, acc_f))

    # evaluate (testing)
    num_epochs = 1
    total_len = testY.shape[0] // batch_size
    tf_iter = tf.compat.v1.data.make_initializable_iterator(test_gen)
    tf_next = tf_iter.get_next()
    sess.run(tf_iter.initializer)
    for epoch in range(num_epochs):
        c_loss_t, acc_t, c_loss_f, acc_f = 0.0, 0.0, 0.0, 0.0
        for ii in range(total_len):
            x, y = sess.run(tf_next)
            b_closs, b_acc = sess.run(
                [model.c_loss, model.op_acc],
                feed_dict={model.clf_input: x, model.label_ref: y,
                           tf.keras.backend.learning_phase(): 1})
            c_loss_t += b_closs
            acc_t += b_acc
            b_closs, b_acc = sess.run(
                [model.c_loss, model.op_acc],
                feed_dict={model.clf_input: x, model.label_ref: y,
                           tf.keras.backend.learning_phase(): 0})
            c_loss_f += b_closs
            acc_f += b_acc
        c_loss_t /= total_len
        c_loss_f /= total_len
        acc_t /= total_len
        acc_f /= total_len
        print('testing (learning_phase=1): epoch {:d}/{:d} loss={:.3f} acc={:.3f}'
              .format(epoch + 1, num_epochs, c_loss_t, acc_t))
        print('testing (learning_phase=0): epoch {:d}/{:d} loss={:.3f} acc={:.3f}'
              .format(epoch + 1, num_epochs, c_loss_f, acc_f))
```

Definitely, we get a very normal output:

```
Make sure that we create a new model
10000/10000 - 1s - loss: 398.0696 - acc: 0.1151
training: epoch 1/5 loss=11.997 acc=0.558
training: epoch 2/5 loss=0.474 acc=0.849
training: epoch 3/5 loss=0.282 acc=0.914
training: epoch 4/5 loss=0.213 acc=0.935
training: epoch 5/5 loss=0.181 acc=0.945
Show the loss and accuracy with the Keras API
55000/55000 - 1s - loss: 0.1555 - acc: 0.9535
5000/5000 - 0s - loss: 0.1501 - acc: 0.9584
10000/10000 - 0s - loss: 0.1687 - acc: 0.9539
Show the loss and accuracy with the low-level API
validation (learning_phase=1): epoch 1/1 loss=0.150 acc=0.958
validation (learning_phase=0): epoch 1/1 loss=0.150 acc=0.958
testing (learning_phase=1): epoch 1/1 loss=0.169 acc=0.954
testing (learning_phase=0): epoch 1/1 loss=0.169 acc=0.954
```

**Describe the expected behavior**
It should work properly.

**Code to reproduce the issue**
Please see the section "describe the current behavior".

**Other info / logs**
Skipped. |
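A pattern consistent with the nan-at-`learning_phase=0` numbers (a guess, not something the issue confirms) is `BatchNormalization` normalizing with moving statistics that the hand-written `train_op` never updated. A minimal NumPy sketch of the two normalization modes; the `batch_norm` helper and the "stale statistics" scenario are illustrative assumptions, not TensorFlow's implementation:

```python
import numpy as np

def batch_norm(x, moving_mean, moving_var, training, eps=1e-3):
    """Normalize with batch statistics (training) or stored statistics (inference)."""
    if training:                                  # learning_phase = 1
        mean, var = x.mean(axis=0), x.var(axis=0)
    else:                                         # learning_phase = 0
        mean, var = moving_mean, moving_var
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=(1000, 4))

# Stale moving statistics: still at their initial values because the
# custom training loop never ran the layer's update ops.
stale_mean, stale_var = np.zeros(4), np.ones(4)

train_out = batch_norm(x, stale_mean, stale_var, training=True)
infer_out = batch_norm(x, stale_mean, stale_var, training=False)

print(train_out.mean())  # close to 0: batch statistics normalize correctly
print(infer_out.mean())  # close to 5: stale statistics leave the data unnormalized
```

The point is only that the two phases can diverge arbitrarily once the stored statistics drift from the batch statistics, which matches a model that evaluates fine at `learning_phase=1` and breaks at `learning_phase=0`.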
tensorflowtensorflow | AutoGraph could not transform bound method TopLevelFeature.decode_example of FeaturesDict | Bug |

**System information**
- OS Platform and Distribution: Arch Linux 5.4.2-arch1-1
- TensorFlow installed from: binary
- TensorFlow version: 2.1.0rc0-1
- Keras version: 2.2.4-tf
- Python version: 3.8
- GPU model and memory: 2x GTX 1080 Ti, 11 GB

**Describe the current behavior**
Executing TensorFlow's MNIST handwriting example produces a warning:

```
WARNING:tensorflow:AutoGraph could not transform <bound method TopLevelFeature.decode_example of FeaturesDict({'image': Image(shape=(28, 28, 1), dtype=tf.uint8), 'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=10)})> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Argument object has no attribute 'defaults'
```

**Code to reproduce the issue**

```python
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.optimizers import Adam


def build_model(filters=48, units=24, kernel_size=7, learning_rate=1e-4):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters=filters,
                               kernel_size=(kernel_size, kernel_size),
                               activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(units, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer=Adam(learning_rate),
                  metrics=['accuracy'])
    return model


datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples

BUFFER_SIZE = 10000
BATCH_SIZE = 32


def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label


train_dataset = (mnist_train.map(scale).shuffle(BUFFER_SIZE).repeat()
                 .batch(BATCH_SIZE)
                 .prefetch(buffer_size=tf.data.experimental.AUTOTUNE))
eval_dataset = (mnist_test.map(scale).repeat().batch(BATCH_SIZE)
                .prefetch(buffer_size=tf.data.experimental.AUTOTUNE))

model = build_model()

EPOCHS = 2
model.fit(train_dataset,
          validation_data=eval_dataset,
          steps_per_epoch=num_train_examples // EPOCHS,
          validation_steps=num_test_examples // EPOCHS,
          epochs=EPOCHS)
``` |
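The warning above asks reporters to raise AutoGraph's verbosity before attaching output. Besides the `export AUTOGRAPH_VERBOSITY=10` it suggests, the variable can be set from Python; setting it before TensorFlow is imported is the safe option. The sketch below only touches the environment variable and does not exercise TensorFlow itself:

```python
import os

# AutoGraph consults AUTOGRAPH_VERBOSITY for its logging level, so set it
# before `import tensorflow` (equivalent to `export AUTOGRAPH_VERBOSITY=10`).
os.environ["AUTOGRAPH_VERBOSITY"] = "10"

# import tensorflow as tf   # imported afterwards, AutoGraph logs verbosely

print(os.environ["AUTOGRAPH_VERBOSITY"])  # prints "10"
```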
tensorflowtensorflow | Error occurred when finalizing GeneratorDataset iterator | Bug |

**System information**
- OS Platform and Distribution: Arch Linux 5.4.2-arch1-1
- TensorFlow installed from: binary
- TensorFlow version: 2.1.0rc0-1
- Keras version: 2.2.4-tf
- Python version: 3.8
- GPU model and memory: 2x GTX 1080 Ti, 11 GB

**Describe the current behavior**
Executing TensorFlow's MNIST handwriting example produces an error. The error disappears if the code doesn't use `OneDeviceStrategy` or `MirroredStrategy`.

```
W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Cancelled: Operation was cancelled
```

**Code to reproduce the issue**

```python
import time

import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.optimizers import Adam


def build_model(filters=48, units=24, kernel_size=7, learning_rate=1e-4):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters=filters,
                               kernel_size=(kernel_size, kernel_size),
                               activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(units, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer=Adam(learning_rate),
                  metrics=['accuracy'])
    return model


datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples

strategy = tf.distribute.OneDeviceStrategy(device='/gpu:0')

BUFFER_SIZE = 10000
BATCH_SIZE = 32


def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label


train_dataset = (mnist_train.map(scale).shuffle(BUFFER_SIZE).repeat()
                 .batch(BATCH_SIZE)
                 .prefetch(buffer_size=tf.data.experimental.AUTOTUNE))
eval_dataset = (mnist_test.map(scale).repeat().batch(BATCH_SIZE)
                .prefetch(buffer_size=tf.data.experimental.AUTOTUNE))

with strategy.scope():
    model = build_model()

EPOCHS = 5
start = time.perf_counter()
model.fit(train_dataset,
          validation_data=eval_dataset,
          steps_per_epoch=num_train_examples // EPOCHS,
          validation_steps=num_test_examples // EPOCHS,
          epochs=EPOCHS)
elapsed = time.perf_counter() - start
print('elapsed: {:0.3f}'.format(elapsed))
``` |
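The "Cancelled" message above appears to be emitted while the `GeneratorDataset` iterator is torn down with its underlying generator still live. As a loose analogy only (plain Python, not TensorFlow internals), closing an unfinished Python generator triggers its cleanup path in the same way:

```python
events = []

def infinite_batches():
    """Yields items forever, like the generator backing a repeated dataset."""
    try:
        i = 0
        while True:
            yield i
            i += 1
    finally:
        # Cleanup path: runs when the consumer closes the generator early,
        # analogous to the iterator being finalized with a Cancelled status.
        events.append("finalized")

g = infinite_batches()
next(g)     # consume one element
g.close()   # tear the generator down mid-stream (raises GeneratorExit inside it)
print(events)  # ['finalized']
```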
tensorflowtensorflow | Custom loss may not work when running a Keras model with tf.distribute.MirroredStrategy | Bug |

**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04 (in Docker)
- TensorFlow installed from (source or binary): pip install
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38
- Python version: 3.5
- CUDA/cuDNN version: 10.0 / 7
- GPU model and memory: GTX 1080 Ti, 11175 MiB

**Describe the current behavior**
Hi authors and developers, I am developing our project in TF 2.0.0 with eager mode disabled. The main reason is that TF 1.x will not be maintained, but third-party libraries are not ready for TF 2.0 yet. I get a bug when running a custom-loss Keras model with `tf.distribute.MirroredStrategy`. The bug can be reproduced by the following minimal testcase:

```python
from distutils.version import LooseVersion

import numpy as np
import tensorflow as tf

# disable eager mode for TF 2.x
tf.compat.v1.disable_eager_execution()

batch_size = 100
img_h = 32
img_w = 32
img_min = 0
img_max = 1
channel = 3
num_class = 10
strategy = tf.distribute.MirroredStrategy()


def download_data():
    # get raw data
    (trainX, trainY), (testX, testY) = tf.keras.datasets.cifar10.load_data()
    trainX = trainX.astype(np.float32)
    testX = testX.astype(np.float32)
    # one-hot
    trainY = tf.keras.utils.to_categorical(trainY, 10)
    testY = tf.keras.utils.to_categorical(testY, 10)
    # get validation set
    training_size = 45000
    validX = trainX[training_size:]
    validY = trainY[training_size:]
    trainX = trainX[:training_size]
    trainY = trainY[:training_size]
    return trainX, trainY, validX, validY, testX, testY


class DataGenerator:
    def __init__(self, sess, dataX, dataY, total_len, batch_size):
        super().__init__()
        self.total_len = total_len
        self.batch_size = batch_size
        self.cleanX = dataX
        self.totalY = dataY
        self.sess = sess
        self.on_epoch_end()

    def build_pipeline(self, dataX, dataY):
        # create dataset API
        def preprocess_fn(dataX, dataY):
            dataX = tf.image.random_flip_left_right(dataX)
            # workaround solution
            if LooseVersion(tf.__version__) > LooseVersion('1.14.0'):
                outputX = dataX
            else:
                outputX = [dataX, dataY]
            return outputX, dataY

        dataset = tf.data.Dataset.from_tensor_slices((dataX, dataY))
        dataset = dataset.shuffle(batch_size * 8)
        dataset = dataset.repeat()
        dataset = dataset.batch(batch_size)
        dataset = dataset.map(preprocess_fn,
                              num_parallel_calls=tf.data.experimental.AUTOTUNE)
        dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
        self.dataset = dataset

    def __len__(self):
        return self.total_len // self.batch_size

    def on_epoch_end(self):
        # run permutation
        rand_idx = np.random.permutation(self.total_len)
        cleanX = self.cleanX[rand_idx]
        totalY = self.totalY[rand_idx]
        self.build_pipeline(cleanX, totalY)


# ref
def build_clf():
    # with strategy.scope():
        with tf.compat.v1.variable_scope('optimizer'):

            def resnet_layer(inputs, num_filters=16, kernel_size=3, strides=1,
                             activation='relu', batch_normalization=True,
                             conv_first=True):
                """2D Convolution-Batch Normalization-Activation stack builder.

                Arguments:
                    inputs (tensor): input tensor from input image or previous layer
                    num_filters (int): Conv2D number of filters
                    kernel_size (int): Conv2D square kernel dimensions
                    strides (int): Conv2D square stride dimensions
                    activation (string): activation name
                    batch_normalization (bool): whether to include batch normalization
                    conv_first (bool): conv-bn-activation (True) or
                        bn-activation-conv (False)

                Returns:
                    x (tensor): tensor as input to the next layer
                """
                conv = tf.keras.layers.Conv2D(
                    num_filters, kernel_size=kernel_size, strides=strides,
                    padding='same', kernel_initializer='he_normal',
                    kernel_regularizer=tf.keras.regularizers.l2(1e-4))
                x = inputs
                if conv_first:
                    x = conv(x)
                    if batch_normalization:
                        x = tf.keras.layers.BatchNormalization()(x)
                    if activation is not None:
                        x = tf.keras.layers.Activation(activation)(x)
                else:
                    if batch_normalization:
                        x = tf.keras.layers.BatchNormalization()(x)
                    if activation is not None:
                        x = tf.keras.layers.Activation(activation)(x)
                    x = conv(x)
                return x

            def cw_loss(y_true, y_pred):
                label_mask = label_ref
                pre_softmax = x
                if LooseVersion(tf.__version__) <= LooseVersion('1.14.0'):
                    correct_logit = tf.reduce_sum(label_mask * pre_softmax,
                                                  axis=1, keep_dims=True)
                else:
                    correct_logit = tf.reduce_sum(label_mask * pre_softmax,
                                                  axis=1, keepdims=True)
                distance = tf.nn.relu(pre_softmax - correct_logit +
                                      (1 - label_mask) * 10)
                inactivate = tf.cast(tf.less_equal(distance, 1e-9),
                                     dtype=tf.float32)
                weights = tf.keras.layers.Activation('softmax')(
                    -1e9 * inactivate + distance)
                loss = tf.reduce_sum((1 - label_mask) * distance * weights,
                                     axis=1)
                loss = tf.math.reduce_mean(loss)
                return loss

            # set model's parameters (depth)
            n = 6  # 2n + 8
            num_filters = 16
            clf_input = tf.keras.layers.Input(shape=(img_h, img_w, channel),
                                              name='model_input')
            label_ref = tf.keras.layers.Input(shape=(num_class,),
                                              name='label_ref')
            input_list = [clf_input, label_ref]
            x = resnet_layer(inputs=clf_input)
            for stack in range(3):
                for res_block in range(n):
                    strides = 1
                    if stack > 0 and res_block == 0:
                        # first layer but not first stack
                        strides = 2  # downsample
                    y = resnet_layer(inputs=x, num_filters=num_filters,
                                     strides=strides)
                    y = resnet_layer(inputs=y, num_filters=num_filters,
                                     activation=None)
                    if stack > 0 and res_block == 0:
                        # first layer but not first stack: linear projection
                        # residual shortcut connection to match changed dims
                        x = resnet_layer(inputs=x, num_filters=num_filters,
                                         kernel_size=1, strides=strides,
                                         activation=None,
                                         batch_normalization=False)
                    x = tf.keras.layers.add([x, y])
                    x = tf.keras.layers.Activation('relu')(x)
                num_filters *= 2
            x = tf.keras.layers.AveragePooling2D(pool_size=8)(x)
            x = tf.keras.layers.Flatten()(x)
            x = tf.keras.layers.Dense(num_class,
                                      kernel_initializer='he_normal',
                                      activation=None)(x)
            y = tf.keras.layers.Activation('softmax')(x)
            optimizer = tf.keras.optimizers.Adam(lr=0.001)
            clf_model = tf.keras.models.Model(inputs=input_list, outputs=[y],
                                              name='clf_model')
            clf_model.compile(loss='categorical_crossentropy',
                              optimizer=optimizer,
                              metrics=['accuracy', cw_loss])
            clf_model.summary()
            return clf_model


if __name__ == '__main__':
    # set gpu
    import os
    if os.environ.get('CUDA_VISIBLE_DEVICES') is None:
        os.environ['CUDA_VISIBLE_DEVICES'] = '0'
    # reset tf session
    tf.compat.v1.keras.backend.clear_session()
    gpu_options = tf.compat.v1.GPUOptions(allow_growth=True)
    sess = tf.compat.v1.Session(
        config=tf.compat.v1.ConfigProto(gpu_options=gpu_options))
    tf.compat.v1.keras.backend.set_session(sess)

    # hyperparameters
    batch_size = 100
    epochs = 1

    # prepare data
    trainX, trainY, validX, validY, testX, testY = download_data()
    train_gen = DataGenerator(sess, trainX, trainY, trainY.shape[0], batch_size)
    valid_gen = DataGenerator(sess, validX, validY, validY.shape[0], batch_size)
    test_gen = DataGenerator(sess, testX, testY, testY.shape[0], batch_size)

    # build model
    model = build_clf()

    # train model
    model.fit(train_gen.dataset,
              epochs=epochs,
              steps_per_epoch=train_gen.__len__(),
              validation_data=valid_gen.dataset,
              validation_steps=valid_gen.__len__(),
              verbose=1)

    # print results
    meta_string = 'testing'
    prefix_string = ' '
    outputs = model.evaluate(test_gen.dataset, steps=test_gen.__len__())
    for ii in range(len(model.metrics_names)):
        meta_string = '{:s}{:s}{:s}={:.3f}'.format(
            meta_string, prefix_string, model.metrics_names[ii], outputs[ii])
    print(meta_string)
```

First, this testcase looks good *without* enabling `tf.distribute.MirroredStrategy`. This is the output for the normal case:

```
Train on 450 steps, validate on 50 steps
2019-12-13 16:20:30.625379: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-12-13 16:20:31.217430: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-12-13 16:20:33.007150: W tensorflow/stream_executor/cuda/redzone_allocator.cc:312] Not found: ./bin/ptxas not found. Relying on driver to perform ptx compilation. This message will be only logged once.
450/450 - 40s 88ms/step - loss: 1.8299 - accuracy: 0.4744 - cw_loss: 9.5022 - val_loss: 1.9870 - val_accuracy: 0.4528 - val_cw_loss: 9.6570
100/100 - 3s 26ms/step - loss: 2.0089 - accuracy: 0.4511 - cw_loss: 9.6708
testing loss=2.009 accuracy=0.451 cw_loss=9.671
```

Next, we try to enable `tf.distribute.MirroredStrategy`, so we modify the testcase with the following patch:

```diff
 def build_clf():
-    # with strategy.scope():
+    with strategy.scope():
         with tf.compat.v1.variable_scope('optimizer'):
```

And we get the error message:

```
Traceback (most recent call last):
  File "bug.py", line 233, in <module>
    verbose=1)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/training.py", line 717, in fit
    use_multiprocessing=use_multiprocessing)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/training_distributed.py", line 685, in fit
    steps_name='steps_per_epoch')
  File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 299, in model_iteration
    batch_outs = f(actual_inputs)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/backend.py", line 3580, in __call__
    run_metadata=self.run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/client/session.py", line 1472, in __call__
    run_metadata_ptr)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: You must feed a value for placeholder tensor 'model_input' with dtype float and shape [?,32,32,3]
	 [[{{node model_input}}]]
	 [[batch_normalization_9/cond/else/_325/FusedBatchNormV3/ReadVariableOp/_2529]]
  (1) Invalid argument: You must feed a value for placeholder tensor 'model_input' with dtype float and shape [?,32,32,3]
	 [[{{node model_input}}]]
0 successful operations.
0 derived errors ignored.
```

This error is very similar to the previous issue #34866. I guess those two issues may have some strong connection.

**Describe the expected behavior**
It should work properly.

**Code to reproduce the issue**
Please see the section "describe the current behavior".

**Other info / logs**
The following messages are the result generated by tf_env_collect.sh:

```
== check python ==
python version: 3.5.2
python branch:
python build version: ('default', 'Oct  8 2019 13:06:37')
python compiler version: GCC 5.4.0 20160609
python implementation: CPython

== check os platform ==
os: Linux
os kernel version: #40~18.04.1-Ubuntu SMP Thu Nov 14 12:06:39 UTC 2019
os release version: 5.0.0-37-generic
os platform: Linux-5.0.0-37-generic-x86_64-with-Ubuntu-16.04-xenial
linux distribution: ('Ubuntu', '16.04', 'xenial')
linux os distribution: ('Ubuntu', '16.04', 'xenial')
mac version:
uname: uname_result(system='Linux', node='f7f509f1dacf', release='5.0.0-37-generic', version='#40~18.04.1-Ubuntu SMP Thu Nov 14 12:06:39 UTC 2019', machine='x86_64', processor='x86_64')
architecture: ('64bit', 'ELF')
machine: x86_64
are we in docker: yes
```
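The `cw_loss` in the testcase above hinges each wrong-class logit against the correct-class logit. The flattened dump makes the exact operators hard to recover, so the following NumPy sketch shows only the general multiclass-margin idea: the `margin=10.0` default and the plain mean over wrong classes are assumptions, and the softmax weighting from the issue's version is omitted.

```python
import numpy as np

def margin_loss(logits, label_onehot, margin=10.0):
    """Mean hinge penalty of each wrong-class logit against the correct-class logit."""
    # correct_logit: the logit of the true class, per example
    correct = (logits * label_onehot).sum(axis=1, keepdims=True)
    # relu(wrong_logit - correct_logit + margin); the true class is masked out
    distance = np.maximum(logits - correct + margin, 0.0) * (1.0 - label_onehot)
    num_wrong = label_onehot.shape[1] - 1
    return (distance.sum(axis=1) / num_wrong).mean()

labels = np.array([[1.0, 0.0, 0.0]])
well_separated = np.array([[20.0, 0.0, 0.0]])   # correct logit wins by > margin
misclassified  = np.array([[0.0, 20.0, 0.0]])   # a wrong logit dominates

print(margin_loss(well_separated, labels))  # 0.0: no wrong class within the margin
print(margin_loss(misclassified, labels))   # 20.0: large penalty for the violation
```

Because the loss closes over graph tensors (`label_ref`, the pre-softmax `x`) rather than its `y_true`/`y_pred` arguments, it is also the kind of metric that a distribution strategy may fail to rebuild per replica, which fits the placeholder-feed error reported above.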
```
== compiler ==
c++ (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

== check pip ==
numpy                   1.17.4
protobuf                3.11.1
tensorflow-estimator    2.0.1
tensorflow-gpu          2.0.0
tensorflow-probability  0.8.0
check for virtualenv: False

== tensorflow import ==
tf.version.VERSION = 2.0.0
tf.version.GIT_VERSION = v2.0.0-rc2-26-g64c3d38
tf.version.COMPILER_VERSION = 7.3.1 20180303
Sanity check: array([1], dtype=int32)
```

The remainder of the collected log is an ld.so LD_DEBUG trace (process 443) that only records the library search paths probed and the init calls for the shared objects loaded by the Python process (libpthread, libtensorflow_framework.so.2, NumPy's bundled OpenBLAS, h5py's bundled HDF5 libraries, etc.); libhdfs.so was searched for but not found.
443 call init usr local lib python3 5 dist package h5py lib libhdf5 49599f4e so 103 0 0 443 443 443 call init usr local lib python3 5 dist package h5py lib libhdf5 hl db841637 so 100 1 1 443 443 443 call init usr local lib python3 5 dist package h5py error cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5 cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py def cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py object cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py conv cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5r cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5 t cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py util cpython 35 m x86 64 linux gnu so 443 452 find library libc so 6 0 search 452 search path usr local nvidia lib tls x86 64 usr local nvidia lib tls usr local nvidia lib x86 64 usr local nvidia lib usr local nvidia lib64 tls x86 64 usr local nvidia lib64 tls usr local nvidia lib64 x86 64 usr local nvidia lib64 ld library path 452 try file usr local nvidia lib tls x86 64 libc so 6 452 try file usr local nvidia lib tls libc so 6 452 try file usr local nvidia lib x86 64 libc so 6 452 try file usr local nvidia lib libc so 6 452 try file usr local nvidia lib64 tls x86 64 libc so 6 452 try file usr local nvidia lib64 tls libc so 6 452 try file usr local nvidia lib64 x86 64 libc so 6 452 try file usr local nvidia lib64 libc so 6 452 search cache etc ld so cache 452 try file lib x86 64 linux gnu libc so 6 452 452 452 call init lib x86 64 linux gnu libc so 6 452 452 452 initialize program bin sh 452 452 452 transfer control bin sh 452 443 443 call init usr local lib python3 5 dist package h5py h5z cpython 35 m x86 64 linux gnu 
so 443 443 443 call init usr local lib python3 5 dist package h5py h5a cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5s cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5p cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5ac cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py proxy cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5d cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5ds cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5f cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5 g cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5i cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5fd cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5pl cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5o cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package h5py h5l cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy lib ccallback c cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy sparse sparsetool cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy sparse csparsetool cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy sparse csgraph short path cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy sparse csgraph tool cpython 35 m x86 64 
linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy sparse csgraph traversal cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy sparse csgraph min span tree cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy sparse csgraph reorder cpython 35 m x86 64 linux gnu so 443 443 find library libjpeg 3b10b538 so 9 3 0 0 search 443 search path usr local lib python3 5 dist package pil libs tls x86 64 usr local lib python3 5 dist package pil libs tls usr local lib python3 5 dist package pil lib x86 64 usr local lib python3 5 dist package pil libs rpath from file usr local lib python3 5 dist package pil image cpython 35 m x86 64 linux gnu so 443 try file usr local lib python3 5 dist package pil libs tls x86 64 libjpeg 3b10b538 so 9 3 0 443 try file usr local lib python3 5 dist package pil lib tls libjpeg 3b10b538 so 9 3 0 443 try file usr local lib python3 5 dist package pil lib x86 64 libjpeg 3b10b538 so 9 3 0 443 try file usr local lib python3 5 dist package pil libs libjpeg 3b10b538 so 9 3 0 443 443 find library libopenjp2 b3d7668a so 2 3 1 0 search 443 search path usr local lib python3 5 dist package pil libs rpath from file usr local lib python3 5 dist package pil image cpython 35 m x86 64 linux gnu so 443 try file usr local lib python3 5 dist package pil libs libopenjp2 b3d7668a so 2 3 1 443 443 find library libtiff 8267adfe so 5 4 0 0 search 443 search path usr local lib python3 5 dist package pil libs rpath from file usr local lib python3 5 dist package pil image cpython 35 m x86 64 linux gnu so 443 try file usr local lib python3 5 dist package pil libs libtiff 8267adfe so 5 4 0 443 443 find library liblzma 6cd627ed so 5 2 4 0 search 443 search path usr local lib python3 5 dist package pil libs tls x86 64 usr local lib python3 5 dist package pil libs tls usr local lib python3 5 dist package pil lib x86 64 usr local lib python3 5 dist package pil libs 
rpath from file usr local lib python3 5 dist package pil libs libtiff 8267adfe so 5 4 0 443 try file usr local lib python3 5 dist package pil libs tls x86 64 liblzma 6cd627ed so 5 2 4 443 try file usr local lib python3 5 dist package pil libs tls liblzma 6cd627ed so 5 2 4 443 try file usr local lib python3 5 dist package pil lib x86 64 liblzma 6cd627ed so 5 2 4 443 try file usr local lib python3 5 dist package pil lib liblzma 6cd627ed so 5 2 4 443 443 443 call init usr local lib python3 5 dist package pil lib liblzma 6cd627ed so 5 2 4 443 443 443 call init usr local lib python3 5 dist package pil libs libjpeg 3b10b538 so 9 3 0 443 443 443 call init usr local lib python3 5 dist package pil libs libtiff 8267adfe so 5 4 0 443 443 443 call init usr local lib python3 5 dist package pil libs libopenjp2 b3d7668a so 2 3 1 443 443 443 call init usr local lib python3 5 dist package pil image cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy ndimage nd image cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy ndimage ni label cpython 35 m x86 64 linux gnu so 443 443 find library libopenblasp r0 2ecf47d5 3 7 dev so 0 search 443 search path usr local lib python3 5 dist package scipy linalg lib tls x86 64 usr local lib python3 5 dist package scipy linalg lib tls usr local lib python3 5 dist package scipy linalg lib x86 64 usr local lib python3 5 dist package scipy linalg lib rpath from file usr local lib python3 5 dist package scipy linalg fblas cpython 35 m x86 64 linux gnu so 443 try file usr local lib python3 5 dist package scipy linalg lib tls x86 64 libopenblasp r0 2ecf47d5 3 7 dev so 443 try file usr local lib python3 5 dist package scipy linalg lib tls libopenblasp r0 2ecf47d5 3 7 dev so 443 try file usr local lib python3 5 dist package scipy linalg lib x86 64 libopenblasp r0 2ecf47d5 3 7 dev so 443 try file usr local lib python3 5 dist package scipy linalg lib libopenblasp r0 
2ecf47d5 3 7 dev so 443 443 443 call init usr local lib python3 5 dist package scipy linalg lib libopenblasp r0 2ecf47d5 3 7 dev so 443 443 443 call init usr local lib python3 5 dist package scipy linalg fblas cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy linalg flapack cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy linalg flinalg cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy linalg solve toeplitz cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy linalg decomp update cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy linalg cython blas cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package scipy linalg cython lapack cpython 35 m x86 64 linux gnu so 443 443 443 call init usr local lib python3 5 dist package tensorflow core lite experimental microfrontend python op audio microfrontend op so 443 443 443 call fini usr local bin python 0 443 443 443 call fini lib x86 64 linux gnu libutil so 1 0 443 443 443 call fini lib x86 64 linux gnu libexpat so 1 0 443 443 443 call fini lib x86 64 linux gnu libz so 1 0 443 443 443 call fini usr lib python3 5 lib dynload opcode cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib python3 5 lib dynload ctype cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy core multiarray umath cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy core multiarray test cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy linalg lapack lite cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy linalg umath linalg cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr 
local lib python3 5 dist package numpy core lib libopenblasp r0 34a18dc3 3 7 so 0 443 443 443 call fini usr lib python3 5 lib dynload bz2 cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini lib x86 64 linux gnu libbz2 so 1 0 0 443 443 443 call fini usr lib python3 5 lib dynload lzma cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini lib x86 64 linux gnu liblzma so 5 0 443 443 443 call fini usr lib python3 5 lib dynload decimal cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib x86 64 linux gnu libmpdec so 2 0 443 443 443 call fini usr local lib python3 5 dist package numpy fft pocketfft internal cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random mtrand cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random common cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random bound integer cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random mt19937 cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random bit generator cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib python3 5 lib dynload hashlib cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random philox cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random pcg64 cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random sfc64 cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random generator cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package tensorflow core python pywrap tensorflow internal so 0 443 443 443 call fini usr local lib python3 5 dist package google protobuf internal 
api implementation cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package google protobuf pyext message cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib python3 5 lib dynload csv cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib python3 5 lib dynload termio cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package tensorflow core python framework fast tensor util so 0 443 443 443 call fini lib x86 64 linux gnu libuuid so 1 0 443 443 443 call fini usr local lib python3 5 dist package wrapt wrapper cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib python3 5 lib dynload json cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib python3 5 lib dynload ssl cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini lib x86 64 linux gnu libssl so 1 0 0 0 443 443 443 call fini lib x86 64 linux gnu libcrypto so 1 0 0 0 443 443 443 call fini usr local lib python3 5 dist package h5py error cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5 cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py def cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py object cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py conv cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5r cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5 t cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py util cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5z cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5a cpython 35 m x86 64 linux gnu so 0 443 443 443 
call fini usr local lib python3 5 dist package h5py h5s cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5p cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5ac cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py proxy cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5d cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5ds cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5f cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5 g cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5i cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5fd cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5pl cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5o cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5l cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py lib libhdf5 hl db841637 so 100 1 1 0 443 443 443 call fini usr local lib python3 5 dist package h5py lib libhdf5 49599f4e so 103 0 0 0 443 443 443 call fini usr local lib python3 5 dist package h5py lib libsz 1c7dd0cf so 2 0 1 0 443 443 443 call fini usr local lib python3 5 dist package h5py lib libaec 2147abcd so 0 0 4 0 443 443 443 call fini usr local lib python3 5 dist package scipy lib ccallback c cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy sparse sparsetool cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 
5 dist package scipy sparse csparsetool cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy sparse csgraph short path cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy sparse csgraph tool cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy sparse csgraph traversal cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy sparse csgraph min span tree cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy sparse csgraph reorder cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package pil image cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package pil libs libopenjp2 b3d7668a so 2 3 1 0 443 443 443 call fini usr local lib python3 5 dist package pil libs libtiff 8267adfe so 5 4 0 0 443 443 443 call fini usr local lib python3 5 dist package pil libs libjpeg 3b10b538 so 9 3 0 0 443 443 443 call fini usr local lib python3 5 dist package h5py lib libz a147dcb0 so 1 2 3 0 443 443 443 call fini usr local lib python3 5 dist package pil lib liblzma 6cd627ed so 5 2 4 0 443 443 443 call fini usr local lib python3 5 dist package scipy ndimage nd image cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy ndimage ni label cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg fblas cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg flapack cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg flinalg cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg solve toeplitz cpython 35 m x86 64 linux gnu so 0 443 443 443 call 
fini usr local lib python3 5 dist package scipy linalg decomp update cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg cython blas cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg cython lapack cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg lib libopenblasp r0 2ecf47d5 3 7 dev so 0 443 443 443 call fini usr local lib python3 5 dist package numpy core lib libgfortran ed201abd so 3 0 0 0 443 443 443 call fini usr local lib python3 5 dist package tensorflow core lite experimental microfrontend python op audio microfrontend op so 0 443 443 443 call fini usr local lib python3 5 dist package tensorflow core python libtensorflow framework so 2 0 443 443 443 call fini usr lib x86 64 linux gnu libstdc so 6 0 443 443 443 call fini lib x86 64 linux gnu libgcc s so 1 0 443 443 443 call fini lib x86 64 linux gnu librt so 1 0 443 443 443 call fini lib x86 64 linux gnu libm so 6 0 443 443 443 call fini lib x86 64 linux gnu libdl so 2 0 443 443 443 call fini lib x86 64 linux gnu libpthread so 0 0 443 |
tensorflowtensorflow | subsequent calls to tf.data.Dataset.map with seeded random operations give the same random sequence | Bug | system information: have I written custom code: yes; OS platform and distribution: CentOS 7; TensorFlow installed from (source or binary): binary; TensorFlow version: 1.14.0; Python version: 2.7.17; CUDA/cuDNN version: 10.0. describe the current behavior: in eager mode, when a random seed is set, two calls to the same mapping function in tf.data.Dataset.map produce the same generated random values twice. describe the expected behavior: as I haven't explicitly closed or restarted a session, I would expect both calls to output different generated random values. does tf.data.Dataset.map create its own session and run the mapping-function subgraph in it? if so, is there a way to force the map to take the current session as input? code to reproduce the issue: import numpy as np; import tensorflow as tf; tf.compat.v1.enable_eager_execution(); seed = 88; tf.compat.v1.random.set_random_seed(seed); ds_train = tf.data.Dataset.range(0, 4); ds_val = tf.data.Dataset.range(0, 4); ds_train = ds_train.map(lambda x: tf.random.uniform([1], 0, 2.5)); ds_val = ds_val.map(lambda x: tf.random.uniform([1], 0, 2.5)); for el in ds_train: print(el.numpy()) # 0.44027787, 1.7892183, 2.8793733, 3.3438706; for el in ds_val: print(el.numpy()) # why the same here? 0.44027787, 1.7892183, 2.8793733, 3.3438706
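The repetition the reporter sees follows from how graph-level seeding works: when only a graph-level seed is set, each random op derives its op-level seed deterministically from the graph seed and the op's position in the graph, so two pipelines built identically receive identical op seeds and therefore identical streams. A toy model of that derivation in plain Python (the mixing function and op index here are illustrative, not TensorFlow's actual algorithm):

```python
import random

def op_level_seed(graph_seed, op_index):
    # Toy stand-in for TF's deterministic (graph_seed, op_id) -> op_seed
    # derivation; the real mixing function differs, the determinism does not.
    return (graph_seed * 1_000_003 + op_index) & 0x7FFFFFFF

def build_pipeline(graph_seed, n=4):
    # Both datasets construct their map op at the same position in a fresh
    # "graph", so each receives the same op-level seed -> the same stream.
    rng = random.Random(op_level_seed(graph_seed, op_index=1))
    return [rng.uniform(0.0, 2.5) for _ in range(n)]

ds_train = build_pipeline(88)
ds_val = build_pipeline(88)
print(ds_train == ds_val)  # True: identical construction, identical values
```

Passing an explicit, different per-op seed to each random op (rather than relying only on the graph-level seed) is what breaks this symmetry.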
tensorflowtensorflow | no attribute 'op' in keras.losses.SparseCategoricalCrossentropy | Bug | system information: OS platform and distribution: Jupyter on Windows 10; TensorFlow installed from: Anaconda, in a venv; TensorFlow version: 2.0.0; Python version: 3.7. describe the current behavior: I'm running the example from the TensorFlow documentation for keras.losses.SparseCategoricalCrossentropy and get the error: 'list' object has no attribute 'op'. if we set from_logits to True, it works. code to reproduce the issue: cce = tf.keras.losses.SparseCategoricalCrossentropy(); loss = cce([0, 1, 2], [[.9, .05, .05], [.5, .89, .6], [.05, .01, .94]])
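For reference, the value that documentation example is meant to produce can be checked by hand: without from_logits=True, Keras treats y_pred as probabilities and normalizes each row to sum to 1 before averaging -log p of the true class. A plain-Python sketch of that arithmetic (not the Keras implementation, which additionally clips probabilities for numerical stability):

```python
import math

def sparse_categorical_crossentropy(y_true, y_pred):
    # Normalize each row into a distribution, then average -log p[true class].
    losses = []
    for label, row in zip(y_true, y_pred):
        p = row[label] / sum(row)
        losses.append(-math.log(p))
    return sum(losses) / len(losses)

loss = sparse_categorical_crossentropy(
    [0, 1, 2],
    [[.9, .05, .05], [.5, .89, .6], [.05, .01, .94]])
print(round(loss, 4))  # 0.324, matching the documented result
```

Note the second row sums to 1.99, which is why normalization matters: without it the middle term would be -log(0.89) instead of -log(0.89/1.99).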
tensorflowtensorflow | doc: ">>>" used in documentation is incorrectly formatted on website | Bug | URL(s) with the issue: ... description of issue (what needs changing): the example code (L152) uses ">>>", but on the website this is incorrectly converted to "&gt;&gt;&gt;" (tf-docs link). submit a pull request: this is more of a problem with how documentation is generated from comments in the Python file. I don't mind taking a look if someone can point out where to get started.
tensorflowtensorflow | add usage examples for tf.audio APIs | Bug | thank you for submitting a TensorFlow documentation issue. per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. the TensorFlow docs are open source! to get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example: ... description of issue (what needs changing): currently there are no usage examples for the tf.audio APIs, which makes it difficult for new users to implement them. clear description: for example, why should someone use this method? how is it useful? audio is an area not really explored in machine learning to the extent images and text have been; while TensorFlow does provide a good amount of documentation for the general args and returns of the various functions under tf.audio, most new users will have very little experience with audio as compared to tf.image. correct links: is the link to the source code correct? yes. parameters defined: are all parameters defined and formatted correctly? yes. returns defined: are return values defined? yes. raises listed and defined: are the errors defined (for example, raises)? no. usage example: is there a usage example? no (see the API guide on how to write testable usage examples). request visuals, if applicable: are there currently visuals? if not, will they clarify the content? format: are code blocks present where satisfactory? submit a pull request: are you planning to also submit a pull request to fix the issue (see the docs contributor guide, docs API guide, and the docs style guide)? yes, I think I can provide a detailed usage example.
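The kind of example being requested is easy to sketch: tf.audio.decode_wav is documented to return float32 samples scaled to [-1.0, 1.0] together with the sample rate. A pure-standard-library version of what such a usage example would demonstrate (no TensorFlow; the sine-wave fixture, 440 Hz tone, and int16 scaling constant are illustrative choices):

```python
import io
import math
import struct
import wave

# Build a one-second, 16 kHz, mono, 16-bit WAV entirely in memory.
sample_rate, n = 16000, 16000
pcm = b"".join(
    struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * 440 * i / sample_rate)))
    for i in range(n))
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)   # 16-bit PCM
    w.setframerate(sample_rate)
    w.writeframes(pcm)

# Decode it back the way tf.audio.decode_wav presents audio:
# float samples scaled into [-1.0, 1.0], plus the sample rate.
buf.seek(0)
with wave.open(buf, "rb") as w:
    rate = w.getframerate()
    raw = w.readframes(w.getnframes())
samples = [s / 32768.0 for s in struct.unpack("<%dh" % (len(raw) // 2), raw)]
print(rate, len(samples))  # 16000 16000
```

In actual TensorFlow code the middle section would be replaced by `audio, rate = tf.audio.decode_wav(wav_bytes)`, with `audio` shaped [samples, channels].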
tensorflowtensorflow | explain how int8 input and output quantization conversion works in TensorFlow Lite | Bug | we've had feedback from multiple developers that it's hard to figure out how to calculate the right int8 values for quantized inputs, and to understand what int8 values mean as outputs. for example, when feeding an image to a uint8-quantized input, the values can be left as-is in their source 0-to-255 range; for an int8 input, the developer will typically need to subtract 128 from each value, but this knowledge, and how the offset value is calculated, is not documented. in the same way, users will need to map the -128-to-127 output values to the actual real-number range of their output, but this process is unclear. tagging the TensorFlow Micro team.
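The conversion this issue asks to have documented is the standard affine quantization scheme, real_value = scale * (quantized_value - zero_point). A uint8 image input quantized with scale 1/255 and zero_point 0 can be fed raw bytes; the int8 equivalent shifts the zero_point to -128, which is where the "subtract 128" rule comes from. A small sketch of the arithmetic (the scale/zero_point values here are illustrative; a real model carries its own in the .tflite tensor metadata):

```python
def quantize(real, scale, zero_point, lo=-128, hi=127):
    q = round(real / scale) + zero_point
    return max(lo, min(hi, q))  # clamp into the int8 range

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)

# Image pixels normalized to [0, 1] with scale = 1/255:
# uint8 uses zero_point = 0, the int8 equivalent uses zero_point = -128,
# so an input pixel value of 200 becomes 200 - 128 = 72.
scale, zp_int8 = 1 / 255.0, -128
pixel = 200 / 255.0
q = quantize(pixel, scale, zp_int8)
print(q)  # 72
print(dequantize(q, scale, zp_int8))  # ~0.784, the original pixel value
```

Interpreting an int8 output works the same way in reverse: apply the output tensor's scale and zero_point via dequantize to recover real numbers.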
tensorflowtensorflow | bug in documentation of tf.while_loop; parallel_iterations doesn't seem to affect performance | Bug | system information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes and no; OS platform and distribution (e.g. Linux Ubuntu 16.04): Ubuntu 18.04.3 LTS; TensorFlow installed from (source or binary): pip install tensorflow-gpu==2.0.0; TensorFlow version (use command below): 2.0.0; Python version: 3.7.5; CUDA/cuDNN version: 10.1; GPU model and memory: 1080 Ti, 11170 MiB. describe the current behavior: first, as discussed in this issue, there is a bug in the first example of the documentation of tf.while_loop. second, the parallel_iterations argument doesn't seem to parallelize the loop: there is no difference between the run times with parallel_iterations=1 and parallel_iterations=10. I have a question open on Stack Overflow. describe the expected behavior: if the function in iteration n doesn't depend on previous iterations, then I expect that setting parallel_iterations=10 should make the loop finish about 10 times faster than parallel_iterations=1. code to reproduce the issue: the code is posted on Stack Overflow.
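What the reporter expects from parallel_iterations can be modeled outside TensorFlow: when iteration n does not consume iteration n-1's output, the iterations are independent work items that a scheduler may dispatch concurrently. A plain-Python sketch of that contract (a thread pool stands in for TF's dataflow scheduling here, and Python threads only speed up I/O-bound bodies, so this illustrates the correctness guarantee, not the hoped-for 10x speed-up):

```python
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # Depends only on i, not on any other iteration's result,
    # so it is safe to run iterations in parallel.
    return i * i

def while_loop(n, parallel_iterations=1):
    # Dispatch up to parallel_iterations loop bodies at once;
    # pool.map preserves the original iteration order of results.
    with ThreadPoolExecutor(max_workers=parallel_iterations) as pool:
        return list(pool.map(body, range(n)))

sequential = while_loop(10, parallel_iterations=1)
parallel = while_loop(10, parallel_iterations=10)
print(sequential == parallel)  # True: same results regardless of dispatch
```

In TensorFlow itself, bodies with a true cross-iteration dependency (the usual loop-carried variables) serialize regardless of parallel_iterations, which is one reason the knob often shows no measurable effect.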
tensorflowtensorflow | no example provided for using tf.nn.ctc_loss | Bug | URL with the issue: ... description of issue: there's no example provided for using this loss, and I cannot make it work by following the parameter definitions. I created this toy example in TF 2.0.0:

import functools
import tensorflow as tf
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv2D, Lambda
from tensorflow.keras.optimizers import Adam

inputs = Input(shape=(128, 64, 1), batch_size=32)  # frame_num, label, channel
labels = Input(shape=(128,), batch_size=32, dtype=tf.int32)
label_length = tf.constant(np.ones(32), dtype=tf.int32)
logit_length = tf.constant(np.ones(32), dtype=tf.int32)
x = Conv2D(1, kernel_size=(5, 5), padding='same')(inputs)
logits = Lambda(lambda z: tf.squeeze(z, -1))(x)
model = Model(inputs, logits)
model.compile(optimizer=Adam(lr=0.001),
              loss=tf.nn.ctc_loss(labels=labels, logits=logits,
                                  label_length=label_length,
                                  logit_length=logit_length,
                                  logits_time_major=False, blank_index=-1))

which raises (from tensorflow/python/framework/ops.py, reached via op_def_library.py _apply_op_helper -> internal_convert_n_to_tensor -> internal_convert_to_tensor): RuntimeError: Attempting to capture an EagerTensor without building a function.

then I tried to use it as a handle:

ctc_loss = functools.partial(tf.nn.ctc_loss, labels=labels, logits=logits,
                             label_length=label_length,
                             logit_length=logit_length,
                             logits_time_major=False, blank_index=-1)
model.compile(optimizer=Adam(lr=0.001), loss=ctc_loss)

which raises (from tensorflow/python/keras/engine/training.py, in _prepare_total_loss, at the line output_loss = loss_fn(y_true, y_pred, sample_weight=sample_weight)): TypeError: ctc_loss_v2() got an unexpected keyword argument 'sample_weight'.

then I tried to embed it:

def my_ctc_loss(labels, logits, label_length, logit_length,
                logits_time_major, blank_index, sample_weight=None):
    return tf.nn.ctc_loss(labels=labels, logits=logits,
                          label_length=label_length,
                          logit_length=logit_length,
                          logits_time_major=logits_time_major,
                          blank_index=blank_index)

ctc_loss_emb = functools.partial(my_ctc_loss, labels=labels, logits=logits,
                                 label_length=label_length,
                                 logit_length=logit_length,
                                 logits_time_major=False, blank_index=-1,
                                 sample_weight=None)
model.compile(optimizer=Adam(lr=0.001), loss=ctc_loss_emb)

which raises another TypeError from the same compile -> _compile_weights_loss_and_weighted_metrics -> _prepare_total_loss path in training.py (traceback truncated in the original report).
target mask self virtualenvs phd lib python3 7 site package tensorflow core python keras engine training py in prepare total loss self mask 1732 differentiate between use case where a custom optimizer 1733 expect a vector loss value vs unreduced per sample loss value 1734 output loss loss fn y true y pre sample weight sample weight 1735 loss reduction loss util reductionv2 sum over batch size 1736 typeerror my ctc loss get multiple value for argument sample weight usage example not provide since it seem that ctc loss have to be use differently from other loss it will helpful to have an example that show how to use it raise list and define not define thank |
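For what it's worth, a minimal sketch of a wrapper that fits the `(y_true, y_pred)` loss signature Keras expects. This is my own suggestion, not from the report: the `make_ctc_loss` name, the zero-padding convention for dense labels, and the length inference are all assumptions.

```python
import numpy as np
import tensorflow as tf  # sketch assumes TF 2.x with eager execution

def make_ctc_loss(blank_index=-1):
    """Close over the extra ctc_loss arguments instead of partially applying
    tf.nn.ctc_loss itself, so Keras's sample_weight handling never collides."""
    def loss(y_true, y_pred):
        # y_true: dense int labels [batch, max_label_len], zero-padded (assumed)
        # y_pred: logits [batch, time, num_classes] (logits_time_major=False)
        y_true = tf.cast(y_true, tf.int32)
        # infer per-example label length from the zero padding (assumption)
        label_length = tf.math.count_nonzero(y_true, axis=-1, dtype=tf.int32)
        # every example uses the full time dimension of the logits
        logit_length = tf.fill([tf.shape(y_pred)[0]], tf.shape(y_pred)[1])
        return tf.nn.ctc_loss(labels=y_true, logits=y_pred,
                              label_length=label_length,
                              logit_length=logit_length,
                              logits_time_major=False,
                              blank_index=blank_index)
    return loss

ctc = make_ctc_loss()
labels = np.array([[1, 2, 0, 0]], dtype=np.int32)     # one sequence: [1, 2]
logits = np.random.randn(1, 6, 4).astype(np.float32)  # 6 time steps, 4 classes
per_example = ctc(labels, logits)                     # per-example loss, shape [1]
```

A plain `def` function like the returned `loss` (unlike a `functools.partial`, which lacks `__name__` and is treated by this Keras version as a custom callable and called with `sample_weight=` directly) appears to get wrapped by Keras's standard loss machinery, so `model.compile(optimizer='adam', loss=make_ctc_loss())` should not hit either `TypeError` above.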
tensorflow/tensorflow | tf.data dataset may run out of memory | Bug | **System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS platform and distribution: Linux Ubuntu 16.04 (in Docker)
- TensorFlow installed from (source or binary): pip install
- TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d38 2.0.0
- Python version: 3.5
- CUDA/cuDNN version: 10.0 / 7
- GPU model and memory: GTX 1080 Ti, 11175 MiB

**Describe the current behavior**
Hi authors and developers, I am developing our project in TF 2.0.0 with eager mode disabled. The main reason is that TF 1.x will no longer be maintained, but third-party libraries are not ready for TF 2.0 yet. For some reason we have to re-generate `trainX` at the end of each epoch in our custom model. In TF 1.x, TensorFlow provided the placeholder API, so we could feed the new `trainX` to `tf.data` and it worked very well. However, the placeholder API is deprecated in TF 2.0 and above, so I have to re-generate the `tf.data` dataset again and again at the end of each epoch. Eventually our program is killed because it runs out of memory.

**Describe the expected behavior**
It should work properly.

**Code to reproduce the issue**

```python
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
tf.compat.v1.disable_v2_behavior()
import numpy as np

BATCH_SIZE = 100

def download_data():
    # get raw data
    (trainX, trainY), (testX, testY) = tf.keras.datasets.cifar10.load_data()
    trainX = trainX.astype(np.float32)
    testX = testX.astype(np.float32)
    # one-hot
    trainY = tf.keras.utils.to_categorical(trainY, 10)
    testY = tf.keras.utils.to_categorical(testY, 10)
    # get validation set
    training_size = 45000
    validX = trainX[training_size:]
    validY = trainY[training_size:]
    trainX = trainX[:training_size]
    trainY = trainY[:training_size]
    return trainX, trainY, validX, validY, testX, testY

def data_pipeline(dataX, dataY):
    # create dataset with the tf.data API
    def preprocess_fn(dataX, dataY):
        dataX = tf.image.random_flip_left_right(dataX)
        return dataX, dataY
    dataset = tf.data.Dataset.from_tensor_slices((dataX, dataY))
    dataset = dataset.shuffle(BATCH_SIZE * 8)
    dataset = dataset.repeat()
    dataset = dataset.batch(BATCH_SIZE)
    dataset = dataset.map(preprocess_fn,
                          num_parallel_calls=tf.data.experimental.AUTOTUNE)
    dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
    return dataset

if __name__ == '__main__':
    # set GPU
    import os
    if os.environ.get('CUDA_VISIBLE_DEVICES') is None:
        os.environ['CUDA_VISIBLE_DEVICES'] = '0'
    # reset tf session
    tf.compat.v1.keras.backend.clear_session()
    gpu_options = tf.compat.v1.GPUOptions(allow_growth=True)
    sess = tf.compat.v1.Session(
        config=tf.compat.v1.ConfigProto(gpu_options=gpu_options))
    tf.compat.v1.keras.backend.set_session(sess)
    # prepare data
    trainX, trainY, validX, validY, testX, testY = download_data()
    train_gen = data_pipeline(trainX, trainY)
    valid_gen = data_pipeline(validX, validY)
    test_gen = data_pipeline(testX, testY)
    # build targeted model
    model = tf.keras.applications.resnet_v2.ResNet50V2(
        include_top=True, weights=None, input_shape=(32, 32, 3),
        pooling='max', classes=10)
    model.compile(loss='categorical_crossentropy', optimizer='sgd',
                  metrics=['accuracy'])
    # fit and evaluate
    num_epochs = 20
    for ii in range(num_epochs):
        model.fit(train_gen,
                  steps_per_epoch=trainY.shape[0] // BATCH_SIZE,
                  validation_data=valid_gen,
                  validation_steps=validY.shape[0] // BATCH_SIZE,
                  epochs=1, verbose=2)
        model.evaluate(testX, testY, verbose=2, batch_size=BATCH_SIZE)
        # update trainX and re-generate train_gen
        trainX = trainX + 0
        train_gen = data_pipeline(trainX, trainY)
```

The following is the output:

```
450/450 - 37s - loss: 1.9472 - accuracy: 0.3077 - val_loss: 1.7661 - val_accuracy: 0.3764
10000/10000 - 3s - loss: 1.7696 - accuracy: 0.3729
Train on 450 steps, validate on 50 steps
450/450 - 37s - loss: 1.5704 - accuracy: 0.4347 - val_loss: 1.6101 - val_accuracy: 0.4224
10000/10000 - 3s - loss: 1.6036 - accuracy: 0.4274
Train on 450 steps, validate on 50 steps
450/450 - 37s - loss: 1.4119 - accuracy: 0.4903 - val_loss: 1.4621 - val_accuracy: 0.4728
10000/10000 - 3s - loss: 1.4667 - accuracy: 0.4759
Train on 450 steps, validate on 50 steps
450/450 - 38s - loss: 1.3042 - accuracy: 0.5313 - val_loss: 1.3688 - val_accuracy: 0.5060
10000/10000 - 3s - loss: 1.3773 - accuracy: 0.5024
Train on 450 steps, validate on 50 steps
450/450 - 36s - loss: 1.2168 - accuracy: 0.5671 - val_loss: 1.3069 - val_accuracy: 0.5330
10000/10000 - 3s - loss: 1.3197 - accuracy: 0.5284
Train on 450 steps, validate on 50 steps
450/450 - 36s - loss: 1.1384 - accuracy: 0.5935 - val_loss: 1.2692 - val_accuracy: 0.5462
10000/10000 - 3s - loss: 1.2831 - accuracy: 0.5437
Train on 450 steps, validate on 50 steps
450/450 - 36s - loss: 1.0762 - accuracy: 0.6156 - val_loss: 1.3297 - val_accuracy: 0.5320
10000/10000 - 3s - loss: 1.3435 - accuracy: 0.5324
Train on 450 steps, validate on 50 steps
450/450 - 38s - loss: 1.0080 - accuracy: 0.6396 - val_loss: 1.3039 - val_accuracy: 0.5404
10000/10000 - 3s - loss: 1.3260 - accuracy: 0.5351
Train on 450 steps, validate on 50 steps
450/450 - 37s - loss: 0.9562 - accuracy: 0.6609 - val_loss: 1.1603 - val_accuracy: 0.5926
10000/10000 - 3s - loss: 1.1833 - accuracy: 0.5848
Train on 450 steps, validate on 50 steps
450/450 - 38s - loss: 0.8957 - accuracy: 0.6823 - val_loss: 1.2314 - val_accuracy: 0.5728
10000/10000 - 3s - loss: 1.2559 - accuracy: 0.5720
Killed
```

**Other info / logs**
The following messages are the result generated by `tf_env_collect.sh`:

```
== check python ==
python version: 3.5.2
python build version: ('default', 'Oct 8 2019 13:06:37')
python compiler version: GCC 5.4.0 20160609
python implementation: CPython

== check os platform ==
os: Linux
os kernel version: #40~18.04.1-Ubuntu SMP Thu Nov 14 12:06:39 UTC 2019
os release version: 5.0.0-37-generic
os platform: Linux-5.0.0-37-generic-x86_64-with-Ubuntu-16.04-xenial
linux distribution: Ubuntu 16.04 xenial
uname: system=Linux, node=f7f509f1dacf, machine=x86_64, architecture=64bit ELF
are we in docker: yes
c++ compiler: c++ (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609

== check pip ==
numpy                  1.17.4
protobuf               3.11.1
tensorflow-estimator   2.0.1
tensorflow-gpu         2.0.0
tensorflow-probability 0.8.0
check for virtualenv: False
```
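A minimal sketch of one possible workaround (my own suggestion, not from the report): with eager execution disabled, every `tf.data.Dataset.from_tensor_slices` call embeds the arrays as fresh constants in the default graph, so rebuilding the pipeline each epoch grows the graph until the process is killed. Building the dataset once from a generator that reads a mutable container lets the data change between epochs without touching the graph. The `state` container and variable names below are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf  # sketch assumes TF 2.x with eager execution enabled

# Build the dataset ONCE from a generator over a mutable container; mutating
# the container between epochs changes the data without adding graph nodes.
state = {"x": np.zeros((2, 4), np.float32),
         "y": np.zeros((2,), np.int64)}

def gen():
    # called anew each time the dataset is iterated, so it always sees the
    # current contents of `state`
    for x, y in zip(state["x"], state["y"]):
        yield x, y

ds = tf.data.Dataset.from_generator(
    gen, output_types=(tf.float32, tf.int64), output_shapes=((4,), ()))

first_epoch = [x.numpy() for x, _ in ds]   # reads the original zeros
state["x"] = state["x"] + 1.0              # "re-generate" trainX between epochs
second_epoch = [x.numpy() for x, _ in ds]  # same dataset object, new data
```

The same dataset object can then be passed to `model.fit` every epoch, replacing the `train_gen = data_pipeline(trainX, trainY)` call inside the loop.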
```
== tensorflow import ==
tf.version.VERSION = 2.0.0
tf.version.GIT_VERSION = v2.0.0-rc2-26-g64c3d38
tf.version.COMPILER_VERSION = 7.3.1 20180303
Sanity check: array([1], dtype=int32)

[remainder of the log is an LD_DEBUG dynamic-loader trace of the shared
libraries searched and initialized at import time (libpthread, OpenBLAS,
libtensorflow_framework.so.2, libhdfs.so not found, numpy/scipy/h5py/PIL
extension modules, etc.); it contains no further diagnostic information]
```
443 443 443 call fini usr lib python3 5 lib dynload decimal cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib x86 64 linux gnu libmpdec so 2 0 443 443 443 call fini usr local lib python3 5 dist package numpy fft pocketfft internal cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random mtrand cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random common cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random bound integer cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random mt19937 cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random bit generator cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib python3 5 lib dynload hashlib cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random philox cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random pcg64 cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random sfc64 cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package numpy random generator cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package tensorflow core python pywrap tensorflow internal so 0 443 443 443 call fini usr local lib python3 5 dist package google protobuf internal api implementation cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package google protobuf pyext message cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib python3 5 lib dynload csv cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib python3 5 lib dynload termio cpython 35 m x86 64 linux gnu so 0 443 443 443 call 
fini usr local lib python3 5 dist package tensorflow core python framework fast tensor util so 0 443 443 443 call fini lib x86 64 linux gnu libuuid so 1 0 443 443 443 call fini usr local lib python3 5 dist package wrapt wrapper cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib python3 5 lib dynload json cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr lib python3 5 lib dynload ssl cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini lib x86 64 linux gnu libssl so 1 0 0 0 443 443 443 call fini lib x86 64 linux gnu libcrypto so 1 0 0 0 443 443 443 call fini usr local lib python3 5 dist package h5py error cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5 cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py def cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py object cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py conv cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5r cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5 t cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py util cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5z cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5a cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5s cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5p cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5ac cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py proxy cpython 35 
m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5d cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5ds cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5f cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5 g cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5i cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5fd cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5pl cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5o cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py h5l cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package h5py lib libhdf5 hl db841637 so 100 1 1 0 443 443 443 call fini usr local lib python3 5 dist package h5py lib libhdf5 49599f4e so 103 0 0 0 443 443 443 call fini usr local lib python3 5 dist package h5py lib libsz 1c7dd0cf so 2 0 1 0 443 443 443 call fini usr local lib python3 5 dist package h5py lib libaec 2147abcd so 0 0 4 0 443 443 443 call fini usr local lib python3 5 dist package scipy lib ccallback c cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy sparse sparsetool cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy sparse csparsetool cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy sparse csgraph short path cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy sparse csgraph tool cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist 
package scipy sparse csgraph traversal cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy sparse csgraph min span tree cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy sparse csgraph reorder cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package pil image cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package pil libs libopenjp2 b3d7668a so 2 3 1 0 443 443 443 call fini usr local lib python3 5 dist package pil libs libtiff 8267adfe so 5 4 0 0 443 443 443 call fini usr local lib python3 5 dist package pil libs libjpeg 3b10b538 so 9 3 0 0 443 443 443 call fini usr local lib python3 5 dist package h5py lib libz a147dcb0 so 1 2 3 0 443 443 443 call fini usr local lib python3 5 dist package pil lib liblzma 6cd627ed so 5 2 4 0 443 443 443 call fini usr local lib python3 5 dist package scipy ndimage nd image cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy ndimage ni label cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg fblas cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg flapack cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg flinalg cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg solve toeplitz cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg decomp update cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg cython blas cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr local lib python3 5 dist package scipy linalg cython lapack cpython 35 m x86 64 linux gnu so 0 443 443 443 call fini usr 
local lib python3 5 dist package scipy linalg lib libopenblasp r0 2ecf47d5 3 7 dev so 0 443 443 443 call fini usr local lib python3 5 dist package numpy core lib libgfortran ed201abd so 3 0 0 0 443 443 443 call fini usr local lib python3 5 dist package tensorflow core lite experimental microfrontend python op audio microfrontend op so 0 443 443 443 call fini usr local lib python3 5 dist package tensorflow core python libtensorflow framework so 2 0 443 443 443 call fini usr lib x86 64 linux gnu libstdc so 6 0 443 443 443 call fini lib x86 64 linux gnu libgcc s so 1 0 443 443 443 call fini lib x86 64 linux gnu librt so 1 0 443 443 443 call fini lib x86 64 linux gnu libm so 6 0 443 443 443 call fini lib x86 64 linux gnu libdl so 2 0 443 443 443 call fini lib x86 64 linux gnu libpthread so 0 0 443 |
tensorflowtensorflow | cuBLAS failure for large convolutions on V100 GPUs | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Linux Ubuntu 16.04; TensorFlow installed from (source or binary): binary; TensorFlow version: 1.15.0 or 2.0.0; Python version: 3.7.3; CUDA/cuDNN version: 10.0 and 7.6.4; GPU model and memory: Tesla V100 with either 16 GB or 32 GB; exact command to reproduce: python tf_v100_cublas_crash_tf1.py (from the gist below). Describe the problem: if I run a large 1x1 convolution with fp16 on a Tesla V100, cuBLAS crashes with the following kind of message:

    tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
      (0) Internal: Blas SGEMM launch failed : m=9584640, n=17, k=17
        [[node graph/final_part/conv/Conv2D (defined at .../tensorflow/python/framework/ops.py:1748)]]
        [[ExpandDims_9/_3631]]
      (1) Internal: Blas SGEMM launch failed : m=9584640, n=17, k=17
        [[node graph/final_part/conv/Conv2D (defined at .../tensorflow/python/framework/ops.py:1748)]]

(This comes from this place in the TF code, around line 698.) The root cause is TensorFlow code such as:

    conv_op = tf.layers.conv2d(in_t, filters=17, kernel_size=(1, 1))
    in_t = tf.ones(shape=(10, 1152, 832, 17), dtype=tf.float16)

It works again if I: make the input tensor smaller; use fp32 instead of fp16; run on a CPU instead of the V100 GPU; use a GeForce Titan X GPU instead of a V100; or use TF 1.14 on a V100 GPU (but not TF 1.15.0 or 2.0.0). Source code / logs: a small gist with a fully working example of only ~10 lines can be found here (file tf_v100_cublas_crash_tf1.py, TF 1.15.0 version; file tf_v100_cublas_crash_tf2.py, TF 2.0.0 version). I even tried to mimic the cuBLAS call directly, but that didn't crash, so it seems to be a TensorFlow bug rather than a cuBLAS one (unless I made a mistake, of course) — file tf_v100_cublas_crash_mimic.cu. |
tensorflowtensorflow | logging breaks with Python 3.8: findCaller() takes from 1 to 2 positional arguments but 3 were given | Bug | System information: OS platform and distribution: Arch Linux 5.4.2-arch1-1; TensorFlow installed from: binary; TensorFlow version: 2.1.0rc0-1; Keras version: 2.2.4-tf; Python version: 3.8; GPU model and memory: 2x GTX 1080 Ti, 11 GB. Describe the current behavior: execution of the MNIST example fails with the error TypeError: findCaller() takes from 1 to 2 positional arguments but 3 were given. Code to reproduce the issue:

    import tensorflow as tf
    import tensorflow_datasets as tfds
    from tensorflow.keras.optimizers import Adam

    def scale(image, label):
        image = tf.cast(image, tf.float32)
        image /= 255
        return image, label

    def build_model(filters=56, units=24, kernel_size=5, learning_rate=1e-2):
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(filters=filters,
                                   kernel_size=(kernel_size, kernel_size),
                                   activation='relu', input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(units, activation='relu'),
            tf.keras.layers.Dense(10, activation='softmax'),
        ])
        model.compile(loss='sparse_categorical_crossentropy',
                      optimizer=Adam(learning_rate),
                      metrics=['accuracy'])
        return model

    datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
    mnist_train, mnist_test = datasets['train'], datasets['test']
    num_train_examples = info.splits['train'].num_examples
    num_test_examples = info.splits['test'].num_examples
    BUFFER_SIZE = 10000
    BATCH_SIZE = 128
    train_dataset = mnist_train.map(scale).shuffle(BUFFER_SIZE).repeat() \
        .batch(BATCH_SIZE).prefetch(tf.data.experimental.AUTOTUNE)
    eval_dataset = mnist_test.map(scale).shuffle(BUFFER_SIZE).repeat() \
        .batch(BATCH_SIZE).prefetch(tf.data.experimental.AUTOTUNE)
    model = build_model()
    EPOCHS = 5
    model.fit(train_dataset, validation_data=eval_dataset,
              steps_per_epoch=num_train_examples // EPOCHS,
              validation_steps=num_test_examples // EPOCHS,
              epochs=EPOCHS)

Other info / logs (traceback):

    Downloading and preparing dataset mnist (11.06 MiB) to /home/graemer/tensorflow_datasets/mnist/1.0.0...
    Traceback (most recent call last):
      File "train.py", line 23, in <module>
        datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
      File ".../tensorflow_datasets/core/registered.py", line 302, in load
        dbuilder.download_and_prepare(**download_and_prepare_kwargs)
      File ".../tensorflow_datasets/core/dataset_builder.py", line 316, in download_and_prepare
        logging.warning(GCS_HOSTED_MSG, self.name)
      File "/usr/lib/python3.8/site-packages/absl/logging/__init__.py", line 1047, in log
        super(ABSLLogger, self).log(level, msg, *args, **kwargs)
      File "/usr/lib/python3.8/logging/__init__.py", line 1500, in log
        self._log(level, msg, args, **kwargs)
      File "/usr/lib/python3.8/logging/__init__.py", line 1565, in _log
        fn, lno, func, sinfo = self.findCaller(stack_info, stacklevel)
    TypeError: findCaller() takes from 1 to 2 positional arguments but 3 were given |
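The failure above can be reproduced without TensorFlow at all: it is a pure-stdlib incompatibility. Python 3.8 added a `stacklevel` parameter to `logging.Logger.findCaller`, and `Logger._log` now calls it with two positional arguments; any subclass that still overrides the pre-3.8 two-argument signature (as absl-py's logger did at the time) fails exactly as in the traceback. A minimal sketch (the `OldStyleLogger` name is illustrative):

```python
import logging

class OldStyleLogger(logging.Logger):
    def findCaller(self, stack_info=False):  # pre-3.8 signature, no stacklevel
        return "(unknown file)", 0, "(unknown function)", None

logger = OldStyleLogger("demo")
logger.addHandler(logging.NullHandler())

err_msg = ""
try:
    # On Python 3.8+, _log() passes (stack_info, stacklevel) -> TypeError here.
    logger.warning("hello")
except TypeError as exc:
    err_msg = str(exc)
print(err_msg)
```

Adding `stacklevel=1` to the override's signature makes it compatible with both old and new Python versions.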
tensorflowtensorflow | MLIR: tf-opt cannot run with the --debug option | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: Linux Ubuntu 18.04; mobile device: no; TensorFlow installed from (source or binary): source; TensorFlow version: 2.0.0; Python version: 3.7; Bazel version (if compiling from source): 1.0.1; GCC/compiler version (if compiling from source): 6.4.0; CUDA/cuDNN version: 10; GPU model and memory: V100. Describe the current behavior: in the TensorFlow dialect implemented in tensorflow/compiler/mlir, we use LLVM_DEBUG in various places to print debug information during compilation, and running tf-opt with the --debug option is expected to print this information; this is also the behavior of mlir-opt. But on the current master branch, passing --debug to tf-opt produces an error:

    $ bazel-bin/tensorflow/compiler/mlir/tf-opt --debug
    tf-opt: Unknown command line argument '--debug'.  Try: 'bazel-bin/tensorflow/compiler/mlir/tf-opt --help'
    tf-opt: Did you mean '--help'?

Running mlir-opt with --debug works: ./bin/mlir-opt --debug <args>. I have also compared the main functions of tf-opt and mlir-opt and see no apparent difference between the two files. Describe the expected behavior: tf-opt can run with --debug, so I can use the debug info to debug the compilation process. |
tensorflowtensorflow | Attention / AdditiveAttention issue | Bug | I have the following functions to return two models:

    def get_question_model(self, embed):
        question_input = Input(shape=(None,), name='question_input')
        question_embed = embed(question_input)
        cnn_1d = Conv1D(128, 4, padding='same', activation='relu',
                        strides=1)(question_embed)
        cnn_1d = AveragePooling1D(pool_size=3)(cnn_1d)
        model = Model(question_input, cnn_1d)
        return model

    def get_sentence_model(self, embed, question_model):
        sentence_input = Input(shape=(None,), name='sentence_input')
        sentence_embed = embed(sentence_input)
        cnn_1d = Conv1D(128, 4, padding='same', activation='relu',
                        strides=1)(sentence_embed)
        cnn_1d = AveragePooling1D(pool_size=3)(cnn_1d)
        sentence_attention = AdditiveAttention()([cnn_1d, question_model])
        model = Model(sentence_input, sentence_attention)
        model_shape = model.output_shape
        return model

and the functions are called as:

    self.question_model = self.get_question_model(embed)
    self.sentence_model = self.get_sentence_model(embed, self.question_model)

I get the following issue:

    2019-12-10 18:17:01.848676: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304]
      Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9371 MB memory)
      -> physical GPU (device: 0, name: TITAN V, pci bus id: 0000:83:00.0, compute capability: 7.0)
    Traceback (most recent call last):
      File "...QuestionAnswerContextAttention.py", line 84, in get_sentence_model
        sentence_attention = AdditiveAttention()([cnn_1d, question_model])
      File ".../tensorflow/python/keras/engine/base_layer.py", line 887, in __call__
        self._maybe_build(inputs)
      File ".../tensorflow/python/keras/engine/base_layer.py", line 2141, in _maybe_build
        self.build(input_shapes)
      File ".../tensorflow/python/keras/layers/dense_attention.py", line 406, in build
        v_shape = tensor_shape.TensorShape(input_shape[1])
    TypeError: 'NoneType' object is not subscriptable |
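A hedged sketch of the likely cause: `AdditiveAttention` expects a list of tensors `[query, value]`, but the snippet above passes the question `Model` object itself, so `build()` cannot derive a `TensorShape` for the second input and `input_shape[1]` fails. Passing the model's output tensor (shapes and names below are illustrative, not from the report) avoids the TypeError:

```python
import tensorflow as tf

# query plays the role of the sentence branch (cnn_1d); value plays the role
# of question_model.output -- i.e. a tensor, not the Model object.
query = tf.random.normal((2, 7, 16))
value = tf.random.normal((2, 9, 16))
attention = tf.keras.layers.AdditiveAttention()
out = attention([query, value])  # shape (batch, query_len, dim)
```

In the original code this would mean something like `AdditiveAttention()([cnn_1d, question_model.output])`, though that exact fix is an assumption on my part.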
tensorflowtensorflow | dataset from TFRecord has unknown shape | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Debian GNU/Linux 8; TensorFlow installed from (source or binary): pip; TensorFlow version: 2.0.0; Python version: 3.7.4; CUDA/cuDNN version: 10.0; GPU model and memory: 1080 Ti, 12 GB. Describe the current behavior: I followed the TF 2.0 tutorial to generate TFRecords and read them into a tf.data dataset. However, the parsed serialized records have unknown shape, and I am unable to call strategy.experimental_distribute_dataset to use MirroredStrategy. Note that I can load the data normally using `for f0, f1, f2, f3 in train_dataset:` for single-GPU training. Describe the expected behavior: the loaded tf.data dataset should have the correct shapes and be distributable via strategy.experimental_distribute_dataset. The features all have fixed shapes, but I don't know how to define them. Code to reproduce the issue:

    raw_train_dataset = tf.data.TFRecordDataset('path/to/the/records')

    def read_tfrecord(serialized_example):
        feature_description = {
            'feature0': tf.io.FixedLenFeature([], tf.string),
            'feature1': tf.io.FixedLenFeature([], tf.string),
            'feature2': tf.io.FixedLenFeature([], tf.string),
            'feature3': tf.io.FixedLenFeature([], tf.string),
        }
        example = tf.io.parse_single_example(serialized_example, feature_description)
        f0 = tf.io.parse_tensor(example['feature0'], tf.uint8)
        f1 = tf.io.parse_tensor(example['feature1'], tf.float32)
        f2 = tf.io.parse_tensor(example['feature2'], tf.int16)
        f3 = tf.io.parse_tensor(example['feature3'], tf.uint8)
        return f0, f1, f2, f3

    train_dataset = raw_train_dataset.map(read_tfrecord)
    train_dataset = strategy.experimental_distribute_dataset(train_dataset)

Other info / logs: before running the last line, printing train_dataset shows element types (tf.uint8, tf.float32, tf.int16, tf.uint8); running the last line to distribute the dataset for MirroredStrategy raises: ValueError: Cannot take the length of shape with unknown rank. |
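A sketch of one possible workaround (the shapes below are placeholders; the report does not state the real ones): `tf.io.parse_tensor` yields a tensor of unknown rank, which is why distribution fails, so re-attaching the known static shape with `tf.ensure_shape` inside the map function restores a fully-defined element spec:

```python
import tensorflow as tf

# Placeholder (name, dtype, shape) triples -- substitute the real shapes.
FEATURES = [
    ("feature0", tf.uint8,   [64, 64, 3]),
    ("feature1", tf.float32, [64, 64, 1]),
    ("feature2", tf.int16,   [64, 64, 1]),
    ("feature3", tf.uint8,   [64, 64, 3]),
]

def read_tfrecord(serialized_example):
    description = {name: tf.io.FixedLenFeature([], tf.string)
                   for name, _, _ in FEATURES}
    example = tf.io.parse_single_example(serialized_example, description)
    # parse_tensor returns unknown rank; ensure_shape pins the static shape
    # (and also validates it at runtime).
    return tuple(
        tf.ensure_shape(tf.io.parse_tensor(example[name], dtype), shape)
        for name, dtype, shape in FEATURES)
```

With the shapes pinned this way, `dataset.map(read_tfrecord)` produces elements with known rank, which is what `experimental_distribute_dataset` needs.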
tensorflowtensorflow | the quantized model has low accuracy on Android | Bug | I used these codes to train a MobileNet v2 model, then I froze the graph (mobilenet_v2.pb), and finally ran this command to get a TensorFlow Lite model:

    bazel build tensorflow/lite/toco:toco
    bazel-bin/tensorflow/lite/toco/toco \
      --input_file=/mnt/d/tmp/mobilenet_v2.pb \
      --output_file=/mnt/d/tmp/mobilenet_v2.tflite \
      --inference_type=QUANTIZED_UINT8 \
      --input_arrays=input \
      --output_arrays=MobilenetV2/Predictions/Reshape_1 \
      --input_shapes=1,224,224,3 \
      --mean_values=128 --std_values=128 \
      --change_concat_input_ranges=false \
      --allow_custom_ops

Java inference code:

    imgData = ByteBuffer.allocateDirect(ddims[2] * ddims[3] * 3);
    Interpreter.Options options = new Interpreter.Options();
    options.setNumThreads(numThreads);
    tflite = new Interpreter(file, options);

    private ByteBuffer getScaledMatrix(Bitmap bitmap) {
        imgData.rewind();
        bitmap.getPixels(intValues, 0, ddims[2], 0, 0, ddims[2], ddims[3]);
        for (int i = 0; i < ddims[2]; ++i) {
            for (int j = 0; j < ddims[3]; ++j) {
                int pixelValue = intValues[i * ddims[2] + j];
                imgData.put((byte) ((pixelValue >> 16) & 0xFF));
                imgData.put((byte) ((pixelValue >> 8) & 0xFF));
                imgData.put((byte) (pixelValue & 0xFF));
            }
        }
        if (!bitmap.isRecycled()) bitmap.recycle();
        return imgData;
    }

    ByteBuffer inputData = getScaledMatrix(bmp);
    byte[][] labelProbArray = new byte[1][numClasses];
    tflite.run(inputData, labelProbArray);

But the accuracy of the results is very low; the accuracy of the non-quantized model does not decrease. |
tensorflowtensorflow | missing symbols from C++ API | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): stock example; OS platform and distribution: Arch Linux x86_64; mobile device: n/a; TensorFlow installed from (source or binary): source; TensorFlow version: 2.1.0-rc0; Python version: 3.8; Bazel version (if compiling from source): 1.2.0; GCC/compiler version (if compiling from source): 9.2; CUDA/cuDNN version: 10.2.89 / 7.6.5.32; GPU model and memory: NVIDIA GTX 980. Describe the current behavior: trying to compile an example results in broken linking. basic.cpp (from tensorflow/cc/example/example.cc):

    #include "tensorflow/cc/client/client_session.h"
    #include "tensorflow/cc/ops/standard_ops.h"
    #include "tensorflow/core/framework/tensor.h"

    int main() {
      namespace tf = tensorflow;
      namespace tfo = tensorflow::ops;
      tf::Scope root = tf::Scope::NewRootScope();
      // Matrix A = [3 2; -1 0]
      auto A = tfo::Const(root, {{3.f, 2.f}, {-1.f, 0.f}});
      // Vector b = [3 5]
      auto b = tfo::Const(root, {{3.f, 5.f}});
      // v = Ab^T
      auto v = tfo::MatMul(root.WithOpName("v"), A, b,
                           tfo::MatMul::TransposeB(true));
      std::vector<tf::Tensor> outputs;
      tf::ClientSession session(root);
      // Run and fetch v
      TF_CHECK_OK(session.Run({v}, &outputs));
      // Expect outputs[0] == [19; -3]
      LOG(INFO) << outputs[0].matrix<float>();
      return 0;
    }

Trying to compile with `g++ basic.cpp -I/usr/include/tensorflow -ltensorflow_cc -ltensorflow_framework -o basic` fails:

    /usr/bin/ld: /tmp/ccIDgpWX.o: in function `main':
    basic.cpp:(.text+0x363): undefined reference to `tensorflow::ClientSession::ClientSession(tensorflow::Scope const&)'
    /usr/bin/ld: basic.cpp:(.text+0x3ea): undefined reference to `tensorflow::ClientSession::Run(std::vector<...> const&, std::vector<...>*) const'
    /usr/bin/ld: basic.cpp:(.text+0x52b): undefined reference to `tensorflow::ClientSession::~ClientSession()'
    /usr/bin/ld: basic.cpp:(.text+0x70d): undefined reference to `tensorflow::ClientSession::~ClientSession()'
    collect2: error: ld returned 1 exit status

Describe the expected behavior: the correct symbols should be exposed in libtensorflow_cc.so so that linking can work properly. Code to reproduce the issue: source included above; Makefile provided below:

    CXX       = g++
    BIN       = basic
    INCL_DIRS = $(shell pkg-config tensorflow_cc --cflags)
    LIBS      = $(shell pkg-config tensorflow_cc --libs)

    $(BIN): $(BIN).cpp
    	$(CXX) $< $(INCL_DIRS) $(LIBS) -o $@

    .PHONY: clean run
    run: $(BIN)
    	./$(BIN)
    clean:
    	rm -rf $(BIN)

Other info / logs: attached are nm dumps with all symbols exposed by libtensorflow.so, libtensorflow_cc.so and libtensorflow_framework.so (libtensorflow.so.2.1.0rc0-nm-symbol-dump.log, libtensorflow_cc.so.2.1.0rc0-nm-symbol-dump.log, libtensorflow_framework.so.2.1.0rc0-nm-symbol-dump.log). |
tensorflowtensorflow | TF 2.0 API docs: tf.keras.callbacks.LearningRateScheduler (very small update) | Bug | URL(s) with the issue: please provide a link to the documentation entry, for example: [doc link], [code link, L1311-L1358]. Description of issue (what needs changing): in the new API, the `schedule` parameter of LearningRateScheduler takes in two parameters, epoch and lr (learning rate), instead of just epoch. This is evident in the `on_epoch_begin` method of LearningRateScheduler. The documentation for this method is still outdated: the docs and the example code still show the schedule function taking in only epoch instead of both epoch and lr. I think the docs should be updated to reflect the new API. Proposed changes to the docs: 1. update the description of `schedule`: "schedule: a function that takes an epoch index as input (integer, indexed from 0) and the current learning rate, and returns a new learning rate as output (float)" (copied from the docs at keras.io); 2. update the example usage to include a schedule that utilizes the current learning rate as well. I hope this is helpful; happy to contribute if needed. Are you willing to submit a pull request? Yes; if this should be updated, I am planning to also submit a pull request to fix the issue (see the docs contributor guide, the docs API guide, and the docs style guide). |
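A sketch of what the updated example in the docs could look like, using the new two-argument signature (the schedule itself is illustrative, not from the report):

```python
import math

def scheduler(epoch, lr):
    """Hold the initial rate for 10 epochs, then decay from the current lr."""
    if epoch < 10:
        return lr
    return lr * math.exp(-0.1)

# tf.keras usage (sketch, assuming a compiled `model` and data x, y):
# model.fit(x, y, epochs=15,
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(scheduler)])
```

Because the callback passes the current learning rate, the schedule can decay multiplicatively from whatever the rate currently is, which the old single-argument signature could not express.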
tensorflowtensorflow | classification signature on TF Serving | Bug | Please make sure that this is a bug; as per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: Ubuntu 18.04; mobile device: no; TensorFlow installed from (source or binary): binary; TensorFlow version: 2.0.0; Python version: 3.7.3. Describe the current behavior: it currently seems not possible to export a classify signature for use with TensorFlow Serving. Describe the expected behavior: it should be possible to export a classify signature to use within TensorFlow Serving. I was able to reproduce the issue also by exporting a tf.Module directly and a subclassed tf.keras model (see the linked issue comment). I was also wondering whether both the inference and classify signatures are deprecated, since in _generate_signatures (around line 466) the method name is always assigned signature_constants.PREDICT_METHOD_NAME. Code to reproduce the issue:

    import tensorflow as tf
    import numpy as np

    model = tf.keras.Sequential([tf.keras.layers.Dense(2, activation='softmax')])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    x = [[0.42]]
    y = [[0, 1]]
    model.fit(x, y, epochs=1)
    pred_before = model.predict(x)
    print(f'{pred_before}')
    model.save('mymodel/1', save_format='tf')
    model = tf.keras.models.load_model('mymodel/1')
    model.predict(x)
    print(f'{model.signatures}')
    pred_after = model(x)
    np.testing.assert_almost_equal(pred_before, pred_after)
    print(model.predict(x))

    docker run -p 8501:8501 -v /path/to/mymodel:/models/mymodel \
      -e MODEL_NAME=mymodel --name serving tensorflow/serving
    curl -XPOST ... -d '{"examples": ...}'
    { "error": "Expected classification signature method_name to be tensorflow/serving/classify. Was: tensorflow/serving/predict" }

Other info / logs: if I modify tensorflow/python/saved_model/save.py (line ~469) from `method_name = signature_constants.PREDICT_METHOD_NAME` to `method_name = signature_constants.CLASSIFY_METHOD_NAME`, I am then able to get what I expected:

    curl -XPOST ... -d '{"examples": ...}'
    { "error": "No classification inputs found in SignatureDef: inputs {
        key: \"text\" value { name: \"serving_default_text:0\" dtype: DT_FLOAT
          tensor_shape { dim { size: -1 } dim { size: 1 } } } }
      outputs { key: \"probabilities\" value { name: \"StatefulPartitionedCall_2:0\"
          dtype: DT_FLOAT tensor_shape { dim { size: -1 } dim { size: 99 } } } }
      method_name: \"tensorflow/serving/classify\" " } |
tensorflowtensorflow | Documentation about masking does not cover previous-mask information | Bug | Greetings. While the documentation about masking is super good, I find it misses an important point: how the mask relates to the previous masks in compute_mask(inputs, previous_mask). Specifically, let us assume we have two inputs A and B, and I write a custom adding layer:

```python
class CustomAddingWithMasked(tf.keras.layers.Layer):
    def __init__(self, mask_boolean, **kwargs):
        super(CustomAddingWithMasked, self).__init__(**kwargs)

    def call(self, inputs):
        return inputs[0] + inputs[1]

    def compute_mask(self, inputs, mask=None):
        return mask
```

Here I want to compute the sum of two tensors. Let us also assume that A and B have their own masks, which could be different from each other. Because we technically have two previous masks (from A and B separately), I don't know how the mask parameter in compute_mask is received: is it the OR or the AND operation between the mask of A and the mask of B? Those things are not clear as well as not documented well.
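For illustration, a plain-Python sketch (no TensorFlow; `merge_masks` is a hypothetical helper, not a Keras API): Keras's built-in merge layers such as `Add` combine the per-input masks with an elementwise AND (via `K.all` over the concatenated masks), so if the same convention applied here, a timestep would stay unmasked only when it is valid in both inputs.

```python
# Hypothetical illustration of merging the per-input masks of A and B
# with an elementwise AND, the convention used by Keras merge layers.
def merge_masks(mask_a, mask_b):
    """Combine two boolean masks elementwise with AND."""
    return [a and b for a, b in zip(mask_a, mask_b)]

mask_a = [True, True, False, True]   # timesteps valid in A
mask_b = [True, False, False, True]  # timesteps valid in B

# A timestep survives only if it is valid in both inputs.
print(merge_masks(mask_a, mask_b))  # [True, False, False, True]
```
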
tensorflowtensorflow | Keras callback log entries wrongly documented | Bug | URL(s) with the issue: [documentation page]. The docs state: "on_epoch_end: logs include `acc` and `loss`, and optionally include `val_loss` (if validation is enabled in `fit`), and `val_acc` (if validation and accuracy monitoring are enabled)", and "on_batch_end: logs include `loss`, and optionally `acc`". This is correct for the original Keras implementation; however, in TF2 Keras callbacks get `accuracy` and `val_accuracy` instead of the short documented versions `acc`/`val_acc`. Either the implementation is wrong, or the documentation.
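A small workaround sketch (plain Python; `get_metric` is an illustrative helper, not a Keras API): a callback can look up whichever accuracy key the `logs` dict actually contains, so it works under both the documented spelling and the TF2 spelling.

```python
# Read the first matching metric key from a Keras-style logs dict,
# tolerating both 'acc'/'val_acc' and 'accuracy'/'val_accuracy'.
def get_metric(logs, *names, default=None):
    for name in names:
        if name in logs:
            return logs[name]
    return default

tf2_logs = {'loss': 0.3, 'accuracy': 0.91, 'val_accuracy': 0.88}
old_logs = {'loss': 0.3, 'acc': 0.91, 'val_acc': 0.88}
print(get_metric(tf2_logs, 'acc', 'accuracy'))  # 0.91
print(get_metric(old_logs, 'acc', 'accuracy'))  # 0.91
```
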
tensorflowtensorflow | A question about the PSNR implementation in TensorFlow 2.0 | Bug | This template is for miscellaneous issues not covered by the other issue categories. For questions on how to work with TensorFlow, or support for problems that are not verified bugs in TensorFlow, please go to StackOverflow. If you are reporting a vulnerability, please use the dedicated reporting process. For high-level discussions about TensorFlow, or for questions about the development or internal workings of TensorFlow, or if you would like to know how to contribute to TensorFlow, please post to the relevant lists. I was working on some image processing using PSNR (tf.image.psnr), but I always get results larger than what I expect. Then I checked the source code and found:

```python
psnr_val = math_ops.subtract(
    20 * math_ops.log(max_val) / math_ops.log(10.0),
    np.float32(10 / np.log(10)) * math_ops.log(mse),
    name='psnr')
```

Compared to the standard PSNR algorithm, whose formula is as follows (figure omitted in the original): PSNR = 20·log10(MAX) − 10·log10(MSE). I wonder why there is `np.log(10)` in the TensorFlow implementation rather than `np.log10(10)`, or is there anything I got wrong? Thanks for any help.
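A plain-Python sanity check (an addition to the report, using only the standard `math` module): by the change-of-base identity log10(x) = ln(x)/ln(10), the factor `10 / ln(10)` times `ln(mse)` is exactly `10 * log10(mse)`, so TF's natural-log expression matches the textbook formula; TF writes it this way because `tf.math.log` computes the natural logarithm only.

```python
import math

# TF-style PSNR, written entirely with natural logs.
def psnr_tf_style(max_val, mse):
    return 20 * math.log(max_val) / math.log(10) - (10 / math.log(10)) * math.log(mse)

# Textbook PSNR, written with base-10 logs.
def psnr_textbook(max_val, mse):
    return 20 * math.log10(max_val) - 10 * math.log10(mse)

print(psnr_tf_style(255.0, 100.0))
print(psnr_textbook(255.0, 100.0))  # identical to the line above
```
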
tensorflowtensorflow | _SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found ... | Bug | Hi, I am writing an encoder-decoder architecture with Bahdanau attention using tf.keras with TensorFlow 2.0. Below is my code. This works with TensorFlow 1.15, but I get the error in 2.0. You can check the code in a Colab notebook here.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Input, Dense, Conv2D, BatchNormalization, Activation, Dropout, GRU, Embedding
from tensorflow.keras.models import Model
from tensorflow.keras import activations
from tensorflow.keras.layers import Layer
from tensorflow.keras import layers
import tensorflow as tf
from tensorflow.keras.layers import GRU, Concatenate, Lambda

encoder_seq_len = 30
decoder_seq_len = 20
vocab_size = 500
units = 16

tf.keras.backend.clear_session()

class Encoder(Model):
    def __init__(self, vocab_size, embed_dim, input_length, units):
        super(Encoder, self).__init__()
        self.vocab_size = vocab_size
        self.embed_dim = embed_dim
        self.input_length = input_length
        self.units = units
        self.embed = Embedding(input_dim=vocab_size, output_dim=50,
                               input_length=self.input_length, mask_zero=False,
                               name='embedding_layer_encoder')
        self.gru = GRU(self.units, return_state=True, return_sequences=True, name='encoder_gru')

    @tf.function
    def call(self, inputs, training=True):
        x_embedd = self.embed(inputs)
        gru_output, gru_state = self.gru(x_embedd)
        return gru_output, gru_state

class BahdanauAttention(tf.keras.layers.Layer):
    def __init__(self, units):
        super(BahdanauAttention, self).__init__()
        self.W1 = tf.keras.layers.Dense(units)
        self.W2 = tf.keras.layers.Dense(units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, query, values):
        # hidden shape == (batch_size, hidden_size)
        # hidden_with_time_axis shape == (batch_size, 1, hidden_size)
        # we are doing this to perform addition to calculate the score
        hidden_with_time_axis = tf.expand_dims(query, 1)
        # score shape == (batch_size, max_length, 1)
        # we get 1 at the last axis because we are applying score to self.V;
        # the shape of the tensor before applying self.V is (batch_size, max_length, units)
        score = self.V(tf.nn.tanh(self.W1(values) + self.W2(hidden_with_time_axis)))
        # attention_weights shape == (batch_size, max_length, 1)
        attention_weights = tf.nn.softmax(score, axis=1)
        # context_vector shape after sum == (batch_size, hidden_size)
        context_vector = attention_weights * values
        context_vector = tf.reduce_sum(context_vector, axis=1)
        return context_vector

class OneStepDecoder(Model):
    def __init__(self, vocab_size, embed_dim, dec_units, att_units):
        super(OneStepDecoder, self).__init__()
        self.vocab_size = vocab_size
        self.embed_dim = embed_dim
        self.dec_units = dec_units
        self.att_units = att_units
        self.embedd = Embedding(input_dim=self.vocab_size, output_dim=self.embed_dim,
                                input_length=1, mask_zero=False,
                                name='decoder_embedding_layer')
        self.att_layer = BahdanauAttention(units=self.att_units)  # name='attention'
        self.dense = Dense(self.vocab_size, activation='softmax', name='dense_out')
        self.gru = GRU(units=self.dec_units, return_state=True, name='dec_gru')

    @tf.function
    def call(self, input_decoder, input_state, encoder_outputs, training=True):
        x_embedd = self.embedd(input_decoder)
        context_vector = self.att_layer(input_state, encoder_outputs)
        concat = tf.concat([tf.expand_dims(context_vector, 1), x_embedd], axis=-1)
        decoder_output, decoder_state = self.gru(concat, initial_state=input_state)
        output = self.dense(decoder_output)
        return output, decoder_state

class Decoder(Model):
    def __init__(self, vocab_size, embed_dim, dec_units, att_units):
        super(Decoder, self).__init__()
        self.vocab_size = vocab_size
        self.embed_dim = embed_dim
        self.dec_units = dec_units
        self.att_units = att_units
        self.stepdec = OneStepDecoder(self.vocab_size, self.embed_dim, self.dec_units, self.att_units)

    @tf.function
    def call(self, input_decoder, input_state, encoder_outputs):
        all_outputs = tf.TensorArray(tf.float32, size=input_decoder.shape[1], name='output_arrays')
        for timestep in range(input_decoder.shape[1]):
            output, input_state = self.stepdec(input_decoder[:, timestep:timestep + 1],
                                               input_state, encoder_outputs)
            all_outputs = all_outputs.write(timestep, output)
        all_outputs = tf.transpose(all_outputs.stack(), (1, 0, 2))
        return all_outputs

encoder_inputs = Input(shape=(encoder_seq_len,), name='encoder_input_final')
decoder_inputs = Input(shape=(decoder_seq_len,), name='decoder_inout_final')
encoder = Encoder(vocab_size=vocab_size, embed_dim=50, input_length=encoder_seq_len, units=16)
x_gru_out, x_gru_state = encoder(encoder_inputs)
decoder = Decoder(vocab_size=vocab_size, embed_dim=50, dec_units=16, att_units=20)
all_outputs = decoder(decoder_inputs, x_gru_state, x_gru_out)
encoder_decoder = Model([encoder_inputs, decoder_inputs], outputs=all_outputs)
encoder_decoder.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
x = np.random.randint(0, 499, size=(2000, encoder_seq_len))
y = np.random.randint(0, 499, size=(2000, decoder_seq_len))
encoder_decoder.fit(x=[x, y], y=y, epochs=1, verbose=1, batch_size=32)
```

Error:

```
TypeError                                 Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     60                                         op_name, inputs, attrs,
     61                                         num_outputs)
---> 62   except core._NotOkStatusException as e:

TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: keras_learning_phase:0

During handling of the above exception, another exception occurred:

_SymbolicException                        Traceback (most recent call last)
11 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     73       raise core._SymbolicException(
     74           "Inputs to eager execution function cannot be Keras symbolic "
---> 75           "tensors, but found {}".format(keras_symbolic_tensors))
     76     raise e
     77   # pylint: enable=protected-access

_SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [...]
```
tensorflowtensorflow | Tutorial for data augmentation using tf.image | Bug | Hi, my name is Rachin Kalakheti and I am a participant of Google Code-in 2019. I feel overwhelmed to know TensorFlow is also one of the organizations this year. There is a task to create a notebook tutorial on data augmentation using tf.image. I see that currently there is no tutorial regarding this topic, so I would like to contribute to the community by adding my tutorial to the TensorFlow repo. Therefore I am seeking guidance on how to discuss this further. Link to my notebook tutorial: (link). Thank you.
tensorflowtensorflow | Severe TPU/CPU behaviour discrepancy | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): -; OS platform and distribution (e.g. Linux Ubuntu 16.04): no; mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no; TensorFlow installed from (source or binary): pip; TensorFlow version (use command below): 2.1.0-dev20191203; Python version: 3.5; GPU model and memory: TPU, nightly 2.x. Describe the current behavior: when training using a TPU backend, if a tf.function-decorated function is defined before connecting to a TPU cluster, calling the function as a dataset map function results in a Python kernel crash without any error or other info. This is especially severe IMHO, since as the training code base grows, more autograph functions are defined in modules instead of in the main program. As it is natural to import modules at the start of the program, if the TPU connection is initiated after the imports, using the tf.function code defined in the modules results in a kernel crash. If the same code is run just on the CPU, however, it works as expected. This leads to a somewhat frustrating experience of everything working in a CPU dev environment and then crashing inexplicably when connected to a TPU. The unintuitive solution is to run the TPU connection boilerplate before any imports. Describe the expected behavior: 1. Connecting to a TPU shouldn't create an implicit scope; if a scope is required, it should be a `with` idiom. 2. It should be well documented that all autograph function definitions should be defined after connecting to a TPU. 3. If code executed within a TPU scope depends on code defined outside, it should fail gracefully and informatively, not crash the kernel. Code to reproduce the issue, failing case:

```python
import tensorflow as tf

@tf.function
def test_func(a):
    return a * 3

train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images / 255
ds = tf.data.Dataset.from_tensor_slices(images)

tpu_ip = '10.0.3.2'  # this requires a working nightly (2.x) v2-8 TPU cluster
tpu_address = 'grpc://' + tpu_ip + ':8470'
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu_address)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

dsf = ds.map(test_func)
```

Working case:

```python
import tensorflow as tf

tpu_ip = '10.0.3.2'  # this requires a working nightly (2.x) v2-8 TPU cluster
tpu_address = 'grpc://' + tpu_ip + ':8470'
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu_address)
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images / 255
ds = tf.data.Dataset.from_tensor_slices(images)

@tf.function
def test_func(a):
    return a * 3

dsf = ds.map(test_func)
```

The reproduction code doesn't use imports, but `test_func` would typically be run as part of the imported code and not the main program. Other info/logs: no error logs produced.
tensorflowtensorflow | Bad example in SparseCategoricalCrossentropy | Bug | URL(s) with the issue: please provide a link to the documentation entry, for example [...#L493]. Description of issue (what needs to change): the example will cause an error:

```python
cce = tf.keras.losses.SparseCategoricalCrossentropy()
loss = cce(
    [0, 1, 2],
    [[.9, .05, .05], [.5, .89, .6], [.05, .01, .94]])
print('Loss: ', loss.numpy())  # Loss: 0.3239
```

It needs to be changed to:

```python
cce = tf.keras.losses.SparseCategoricalCrossentropy()
loss = cce(
    [0, 1, 2],
    tf.constant([[.9, .05, .05], [.5, .89, .6], [.05, .01, .94]]))
print('Loss: ', loss.numpy())  # Loss: 0.3239
```

In addition, `[.5, .89, .6]` should be `[0.05, 0.89, 0.06]` to be consistent with similar examples; thus the loss should be updated to 0.0945.
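A hand-check of the corrected example (an addition to the report, plain Python only): with proper probability rows, sparse categorical crossentropy is the mean of −log(p[true_class]) over the samples, which reproduces the ~0.0945 figure above up to rounding.

```python
import math

# Mean negative log-likelihood of the true class per sample.
probs = [[.9, .05, .05], [.05, .89, .06], [.05, .01, .94]]
labels = [0, 1, 2]
loss = sum(-math.log(row[lbl]) for row, lbl in zip(probs, labels)) / len(labels)
print(round(loss, 4))  # 0.0946, i.e. the report's 0.0945 up to rounding
```
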
tensorflowtensorflow | TensorFlow Lite build issue on Windows | Bug | System information: OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10 (10.0.18362 SP0); TensorFlow installed from (source or binary): source; TensorFlow version: 2.0.0; Python version: 3.6.8; Bazel version (if compiling from source): 1.1.0; GCC/compiler version (if compiling from source): 8.1.0. Describe the problem: I am trying to build TensorFlow Lite from source with `bazel build //tensorflow/lite:libtensorflowlite.so`. After the build error, I've tried to add the `windows_export_all_symbols` feature to build_def.bzl, but nothing changed. Any other info/logs: error (without verbose) with this command (`bazel build //tensorflow/lite:libtensorflowlite.so`):

```
LINK : warning LNK4044: unrecognized option; ignored
ERROR: C:/users/hell/documents/tensorflow/tensorflow/lite/BUILD:452:1: output 'tensorflow/lite/libtensorflowlite.so.if.lib' was not created
ERROR: C:/users/hell/documents/tensorflow/tensorflow/lite/BUILD:452:1: not all outputs were created or valid
Target //tensorflow/lite:libtensorflowlite.so failed to build
INFO: Elapsed time: 0.943s, Critical Path: 0.38s
INFO: 1 process: 1 local.
FAILED: Build did NOT complete successfully
```

And the error running `bazel build -c opt --verbose_failures //tensorflow/lite:libtensorflowlite.so`:

```
Skipping: Bad target pattern: package names may contain A-Z, a-z, 0-9, or any of ... (most 7-bit ASCII characters except 0-31, 127, ...)
ERROR: Bad target pattern: package names may contain A-Z, a-z, 0-9, or any of ... (most 7-bit ASCII characters except 0-31, 127, ...)
INFO: Elapsed time: 0.272s
INFO: 0 processes.
FAILED: Build did NOT complete successfully
```

Thank you.
tensorflowtensorflow | Can't transform tf EagerTensor to a Python datatype in map_func | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution (e.g. Linux Ubuntu 16.04): Win10 x64; mobile device: -; TensorFlow installed from (source or binary): binary; TensorFlow version (use command below): 2.0.0; Python version: 3.6.8; Bazel version (if compiling from source): -; GCC/compiler version (if compiling from source): -; CUDA/cuDNN version: 10.0; GPU model and memory: GTX 1060, 6 GB. You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`. Describe the current behavior: I want to use `dataset = tf.data.Dataset.list_files('*.mat')` to create a dataset. When I use `next(iter(dataset))` and `map_func` to parse the data, it is OK. The map_func is as below:

```python
def map_func(filename):
    filename = filename.numpy()
    data = sio.loadmat(filename)
    x = tf.cast(data['x'], dtype=tf.float32)
    label = tf.cast(data['label'].reshape(-1), dtype=tf.int8)
    return x, label
```

Because TensorFlow doesn't support reading .mat files directly, I need to use scipy.io.loadmat to load the .mat files. But when I use the map method to process lots of .mat files, such as `db = dataset.map(map_func)`, it does not work, because numpy methods shouldn't be used outside eager mode. Describe the expected behavior: being able to use tf.data.Dataset.map to process a lot of .mat files so that a dataset can be created. I think I should transform the tf EagerTensor to a Python data type (in my case, str) so that scipy.io.loadmat can work. Code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. Other info/logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.
tensorflowtensorflow | Non-deterministic access to random number generator in tf.data.Dataset.map with num_parallel_calls > 1 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 16.04; TensorFlow installed from (source or binary): source; TensorFlow version (use command below): 2.0.0; Python version: 3.6.7; CUDA/cuDNN version: 10.0; GPU model and memory: Tesla K80. Describe the current behavior: currently, when a tf.random.experimental.Generator is passed to an operator used in tf.data.Dataset.map with num_parallel_calls > 1, this random number generator is accessed in a non-deterministic order, which makes the output of the dataset non-reproducible (see the MVCE below). Describe the expected behavior: the expected behavior is that even when parallelized inside map, random number generators are called in a deterministic way, so that the overall tf.data.Dataset pipeline is fully reproducible. I do not know whether this is a bug, a desired behavior, or an unavoidable side effect of parallelizing the operator passed to map, but the overall consequence is that an operator which leverages random operations (like data augmentation, typically) cannot be parallelized over tf.data.Dataset pipelines if one wants to keep one's experiments reproducible. I am interested in any trick/workaround which would allow me to keep both a high-performance pipeline and experiment reproducibility. Code to reproduce the issue: see below a simple MVCE; it is self-explanatory. test_answer_sequential creates a dummy dataset of length 10 and maps an op which pulls 1 number from an RNG (not parallelized); the 10 resulting values are concatenated into a length-10 vector. It then repeats the overall process and compares the vector from the first draw to the one from the second draw. test_answer_parallel does exactly the same; however, the op pulling numbers from the RNG is parallelized over 4 threads. When comparing the two length-10 vectors, they are often non-equal. test_answer_parallel is run 10 times, which should be enough to highlight some discrepancies (shuffled values) between the two generated length-10 vectors. test_answer_sequential shows that the RNG is indeed reproducible when accessed sequentially.

```python
import tensorflow as tf
print(tf.__version__)
import numpy as np

def draw_samples(num_parallel_calls):
    def mapper(x, rng):
        return rng.uniform(shape=(), minval=0, maxval=10, dtype=tf.int32)

    seed = 12345
    algo = 1
    state = tf.random.experimental.create_rng_state(seed, algo)
    rng = tf.random.experimental.Generator(state=state, alg=algo)
    ds = (tf.data.Dataset.from_tensor_slices(tf.range(10))
          .map(lambda x: mapper(x, rng), num_parallel_calls=num_parallel_calls)
          .batch(10))
    return next(iter(ds)).numpy()

def test_answer_sequential():
    r = [draw_samples(num_parallel_calls=1) for _ in range(2)]
    print('\nSequential:')
    print('\n', r[0].tolist(), '\n', r[1].tolist())

def test_answer_parallel():
    r = [draw_samples(num_parallel_calls=4) for _ in range(2)]
    if not np.allclose(*r):
        print('\nNon-deterministic access when spread on 4 threads:')
        print('\tFirst draw:', r[0].tolist(), '\n\tSecond draw:', r[1].tolist())
    else:
        print('OK (by chance?)')

test_answer_sequential()
for _ in range(10):
    test_answer_parallel()
```
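A plain-Python sketch of the usual workaround (no TensorFlow; `sample_for_index` and `GLOBAL_SEED` are illustrative names): derive each element's randomness from a global seed plus the element's index rather than from one shared, stateful generator. The drawn value then depends only on the index, not on the order in which parallel workers reach it; this is the same idea behind TF's stateless random ops such as `tf.random.stateless_uniform`.

```python
import random

GLOBAL_SEED = 12345

def sample_for_index(i):
    # A fresh per-element generator, seeded deterministically from the index.
    rng = random.Random(GLOBAL_SEED * 1_000_003 + i)
    return rng.randint(0, 9)

sequential = [sample_for_index(i) for i in range(10)]
shuffled_order = [7, 2, 9, 0, 5, 1, 8, 3, 6, 4]  # simulate out-of-order workers
out_of_order = {i: sample_for_index(i) for i in shuffled_order}

# Per-index results are identical regardless of processing order.
assert sequential == [out_of_order[i] for i in range(10)]
```
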
tensorflowtensorflow | TF 2.1.0rc0: the predict/train/test_on_batch traced function has a fixed batch size | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution (e.g. Linux Ubuntu 16.04): Debian Stretch; mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: no; TensorFlow installed from (source or binary): binary; TensorFlow version (use command below): 2.1.0rc0; Python version: 3.5.3. Describe the current behavior: when using the updated predict_on_batch method (wrapped now in a tf.function), the traced call has a fixed batch size; therefore, if the batch size changes regularly, the function is retraced all the time. For example, the following:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.optimizers.Adam(), loss=tf.losses.MeanSquaredError())
for i in range(1, 300):
    model.predict_on_batch(np.ones((i, 1)))
```

gives:

```
WARNING:tensorflow:5 out of the last 5 calls to <function> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to [Python or Tensor args] and [...] for more details.
WARNING:tensorflow:6 out of the last 6 calls to <function> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to [Python or Tensor args] and [...] for more details.
```

The same issue happens with train_on_batch and test_on_batch. Describe the expected behavior: the predict/train/test_on_batch methods are traced with an undefined batch size. Alternatively, the behaviour could be configurable: each of predict/train/test_on_batch could take an undefined_batch_size=True/False argument. Personally, I vote for a default value of undefined_batch_size=True. And I just realized: if the on_batch methods are used with sequences (for example, sentences in NLP), then the retracing would happen on every call; so instead, it would be better to allow passing experimental_relax_shapes to the on_batch methods, or even to the compile call. Code to reproduce the issue: available above. Other info/logs: -
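A plain-Python analogy (not TensorFlow's actual implementation; `ShapeKeyedFunction` is an illustrative stand-in for a tf.function trace cache): keying the cache on the exact input shape retraces once per distinct batch size, while a relaxed, rank-only key traces once, which is the effect `experimental_relax_shapes` aims for.

```python
# Toy trace cache: 'tracing' happens on every cache miss.
class ShapeKeyedFunction:
    def __init__(self, relax_shapes=False):
        self.relax_shapes = relax_shapes
        self.traces = {}

    def __call__(self, shape):
        # Exact mode keys on the full shape; relaxed mode keys on rank only.
        key = len(shape) if self.relax_shapes else tuple(shape)
        if key not in self.traces:
            self.traces[key] = f"trace for {key}"  # stand-in for an expensive trace
        return self.traces[key]

exact = ShapeKeyedFunction(relax_shapes=False)
relaxed = ShapeKeyedFunction(relax_shapes=True)
for batch in range(1, 300):  # batch size changes on every call
    exact((batch, 1))
    relaxed((batch, 1))
print(len(exact.traces), len(relaxed.traces))  # 299 1
```
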
tensorflowtensorflow | Keras saved model cannot be loaded (no custom layers) | Bug | System information: OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10; TensorFlow installed from (source or binary): pip install tensorflow-gpu; TensorFlow version (use command below): 2.0.0; Python version: 3.6.2; CUDA/cuDNN version: 10.0.130 / 7.6.1; GPU model and memory: GTX 1060 Max-Q, 6 GB. Describe the current behavior: a Keras Sequential model, trained and saved, cannot be re-loaded by tf.keras.models.load_model. The complete error message:

```
TypeError                                 Traceback (most recent call last)
...\tensorflow_core\python\framework\tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape, allow_broadcast)
    540   try:
--> 541     str_values = [compat.as_bytes(x) for x in proto_values]
    542   except TypeError:

...\tensorflow_core\python\util\compat.py in as_bytes(bytes_or_text, encoding)
     70     raise TypeError('Expected binary or unicode string, got %r' %
---> 71                     (bytes_or_text,))
     72

TypeError: Expected binary or unicode string, got -1

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 tf.keras.models.load_model('model.h5')

...\tensorflow_core\python\keras\saving\save.py in load_model(filepath, custom_objects, compile)
    144   if (h5py is not None and
    145       (isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))):
--> 146     return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)

...\tensorflow_core\python\keras\saving\hdf5_format.py in load_model_from_hdf5(filepath, custom_objects, compile)
    166     model_config = json.loads(model_config.decode('utf-8'))
--> 167     model = model_config_lib.model_from_config(model_config,
    168                                                custom_objects=custom_objects)

...\tensorflow_core\python\keras\saving\model_config.py in model_from_config(config, custom_objects)
     54   from tensorflow.python.keras.layers import deserialize  # pylint: disable=g-import-not-at-top
---> 55   return deserialize(config, custom_objects=custom_objects)

...\tensorflow_core\python\keras\layers\serialization.py in deserialize(config, custom_objects)
--> 102       printable_module_name='layer')

...\tensorflow_core\python\keras\utils\generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
--> 193       return cls.from_config(config['config'])

...\tensorflow_core\python\keras\engine\sequential.py in from_config(cls, config, custom_objects)
    368       layer = layer_module.deserialize(layer_config,
    369                                        custom_objects=custom_objects)
--> 370       model.add(layer)

...\tensorflow_core\python\training\tracking\base.py in _method_wrapper(self, *args, **kwargs)
--> 457       result = method(self, *args, **kwargs)

...\tensorflow_core\python\keras\engine\sequential.py in add(self, layer)
    194       # If the model is being built continuously on top of an input layer:
    195       # refresh its output.
--> 196       output_tensor = layer(self.outputs[0])
    197       if len(nest.flatten(output_tensor)) != 1:
    198         raise TypeError('All layers in a Sequential model ...')

...\tensorflow_core\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs)
    841         with auto_control_deps.AutomaticControlDependencies() as acd:
--> 842           outputs = call_fn(cast_inputs, *args, **kwargs)

...\tensorflow_core\python\keras\layers\pooling.py in call(self, inputs, mask)
    641       input_shape = input_shape.as_list()
    642       broadcast_shape = [-1, input_shape[steps_axis], 1]
--> 643       mask = array_ops.reshape(mask, broadcast_shape)

...\tensorflow_core\python\ops\array_ops.py in reshape(tensor, shape, name)
--> 131   result = gen_array_ops.reshape(tensor, shape, name)

...\tensorflow_core\python\ops\gen_array_ops.py in reshape(tensor, shape, name)
   8116   _, _, _op = _op_def_lib._apply_op_helper(
-> 8117       "Reshape", tensor=tensor, shape=shape, name=name)

...\tensorflow_core\python\framework\op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
    527               preferred_dtype=default_dtype)
--> 528         except TypeError as err:

...\tensorflow_core\python\framework\ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx, accept_composite_tensors)
-> 1296       ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)

...\tensorflow_core\python\framework\constant_op.py in constant(value, dtype, shape, name)
--> 226   return _constant_impl(value, dtype, shape, name, verify_shape=False,
    227                         allow_broadcast=True)

...\tensorflow_core\python\framework\tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape, allow_broadcast)
    543       raise TypeError("Failed to convert object of type %s to Tensor. "
    544                       "Contents: %s. Consider casting elements to a "
--> 545                       "supported type." % (type(values), values))

TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [-1, None, 1]. Consider casting elements to a supported type.
```

Also, if I switch to the functional API, the result remains the same. Describe the expected behavior: a saved model should be re-loadable without errors. Code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences

(x_train, y_train), (x_test, y_test) = imdb.load_data(
    path='imdb.npz', num_words=None, skip_top=0, maxlen=None, seed=113,
    start_char=1, oov_char=2, index_from=3)
x_train = pad_sequences(x_train, padding='post')
maxlen = x_train.shape[1]
vocab_size = x_train.max() + 1

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(maxlen,), name='sequence'),
    tf.keras.layers.Embedding(vocab_size, 32, mask_zero=True, name='word_embedding'),
    tf.keras.layers.GlobalAveragePooling1D(name='doc_embedding'),
    tf.keras.layers.Dense(16, activation='relu', name='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid', name='sigmoid'),
], name='nn_classifier')
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x=x_train, y=y_train, batch_size=256, epochs=1)
model.save('model.h5')
tf.keras.models.load_model('model.h5')  # fails
```
tensorflowtensorflow | SavedModel format for the tf.estimator class in TensorFlow 2.0 | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution (e.g. Linux Ubuntu 16.04): macOS Mojave 10.14.6; mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: -; TensorFlow installed from (source or binary): source; TensorFlow version (use command below): 2.0.0; Python version: 2.7.10; Bazel version (if compiling from source): -; GCC/compiler version (if compiling from source): -; CUDA/cuDNN version: -; GPU model and memory: -. You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`. Describe the current behavior: I'm moving to TF 2.0 with its very nice Dataset functionality, but I got stuck when I wanted to save the model in the SavedModel format. I'm using the Estimator class to do a linear regression, and after training, this is how I'd set up the export in TF 1:

```python
columns = [('hour', tf.int64), ('domain', tf.string), ('device_type', tf.string)]
feature_placeholders = {
    name: tf.placeholder(dtype, [1], name=name + '_placeholder')
    for name, dtype in columns
}
```

I have three features with different datatypes, and I use the placeholder method to concatenate them into a dict that is then served using the tf.estimator.export.build_raw_serving_input_receiver_fn method, and finally exported using the estimator's export_saved_model to my model directory:

```python
export_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(feature_placeholders)
estimator.export_saved_model(model_dir, export_input_fn)
```

All tutorials online use this series of steps, but tf.placeholder doesn't exist in TF 2.0, so how can I do this?
tensorflowtensorflow | Cannot execute the subtraction op with the broadcast mechanism | Bug | import tensorflow as tf; import numpy as np; x = tf.constant(np.random.random((500, 6))); y = x - x[:, 0]. This code raises an error that shapes (500, 6) and (500,) are not supported in the Sub op. I suppose that it should compute correctly using broadcasting automatically; is it a bug? Version: TF 2.0. |
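The behavior reported above actually follows from NumPy/TF broadcasting rules: dimensions are aligned from the right, so (500, 6) against (500,) compares 6 with 500 and fails, while keeping the axis — x[:, 0:1], shape (500, 1) — broadcasts fine. A minimal pure-Python sketch of the rule (the helper name is my own, not a TensorFlow API):

```python
def broadcast_compatible(a, b):
    """Check NumPy/TF-style broadcast compatibility of two shapes.

    Dimensions are compared right-to-left; each pair must be equal
    or contain a 1. A missing leading dimension is treated as 1.
    """
    for x, y in zip(reversed(a), reversed(b)):
        if x != y and x != 1 and y != 1:
            return False
    return True

# x - x[:, 0] fails: (500, 6) vs (500,) aligns 6 against 500
assert not broadcast_compatible((500, 6), (500,))
# keeping the axis, x - x[:, 0:1], works: (500, 6) vs (500, 1)
assert broadcast_compatible((500, 6), (500, 1))
```

So in the reported snippet, y = x - x[:, 0:1] (or an explicit expand_dims on the sliced column) gives the intended per-row subtraction.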
tensorflowtensorflow | tf.keras.models.Sequential does not support run_eagerly | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No. OS platform and distribution (e.g. Linux Ubuntu 16.04): Ubuntu 18.04. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v1.12.1-16986-g6c32a22 2.1.0-dev20191029. Python version: 3.6.8. Describe the current behavior: tf.keras.models.Sequential doesn't support run_eagerly as mentioned in the docs (run_eagerly). Describe the expected behaviour: either the Sequential model accepts run_eagerly as a param and changes its behaviour, or we modify the docs. Code to reproduce the issue: import tensorflow as tf; model = tf.keras.models.Sequential(layers=[tf.keras.layers.Dense(input_shape=(3,), units=1)], run_eagerly=True). Other info/logs: Traceback (most recent call last): File "tst.py", line 5, in <module>: run_eagerly=True; File "/home/squadrick/.local/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper: result = method(self, *args, **kwargs); TypeError: __init__() got an unexpected keyword argument 'run_eagerly' |
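The TypeError above reflects a deliberate split in the Keras API: the Sequential constructor only wires up architecture (layers, name), while execution flags like run_eagerly belong to compile time (recent tf.keras versions accept run_eagerly in Model.compile; older ones expose it as a settable model property — check your version). A toy pure-Python sketch of that split (SketchModel is my own illustration, not the Keras implementation):

```python
class SketchModel:
    """Toy illustration of where execution flags live in a Keras-like API."""

    def __init__(self, layers, name=None):
        # Constructor only wires up architecture; no execution options here,
        # which is why Sequential(..., run_eagerly=True) raises TypeError.
        self.layers = list(layers)
        self.name = name
        self._run_eagerly = False

    def compile(self, optimizer, loss, run_eagerly=False):
        # Execution-time options are configured at compile time instead.
        self.optimizer = optimizer
        self.loss = loss
        self._run_eagerly = run_eagerly

    @property
    def run_eagerly(self):
        return self._run_eagerly

model = SketchModel(layers=["dense"], name="m")
model.compile(optimizer="sgd", loss="mse", run_eagerly=True)
assert model.run_eagerly
```

The same shape applies to the report: move run_eagerly=True out of the Sequential(...) call and into model.compile(...).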
tensorflowtensorflow | documentation | Bug | Port the original website from Bootstrap 3 to Bootstrap 4. Sections to change: alumni.html, events-participated.html, events.html, home.html, intro.html, open-source.html, team.html, webinar.html |
tensorflowtensorflow | Using the GPU delegate causes the app to crash | Bug | System information: OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10. TensorFlow installed from (source or binary): binary. TensorFlow version (or GitHub SHA if from source): org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly. Command used to run the converter, or code if you're using the Python API: model = tf.keras.models.load_model('my_conv.h5'); converter = tf.lite.TFLiteConverter.from_keras_model(model); converter.optimizations = [tf.lite.Optimize.DEFAULT]; converter.target_spec.supported_types = [tf.float16]; tflite_model = converter.convert(); open('custom_cnn_f16.tflite', 'wb').write(tflite_model). The output from the converter invocation: it successfully converts the model to TFLite f16. Also, please include a link to the saved model or GraphDef: [link]. The model is a simple CNN which takes a 50x50x1 grayscale image and outputs probabilities for 10 classes. Failure details: I want to run the float16 version of the model using TFLite on the GPU (Samsung S10). However, using the GPU delegate on this model causes the app to crash. I have tested on other devices, and this model crashes on all phones when run on the GPU. Any other info/logs: 2019 12 05 18 06 30 715 3830 3830 a debug 2019 12 05 18 06 30 716 3830 3830 a debug build fingerprint xxxxxxxxxxxxxxxx release key 2019 12 05 18 06 30 716 3830 3830 a debug revision 26 2019 12 05 18 06 30 716 3830 3830 a debug abi arm64 2019 12 05 18 06 30 716 3830 3830 a debug pid 2979 tid 3198 name inference com test app 2019 12 05 18 06 30 716 3830 3830 a debug signal 11 sigsegv code 1 segv maperr fault addr 0x0 2019 12 05 18 06 30 716 3830 3830 a debug cause null pointer dereference 2019 12 05 18 06 30 716 3830 3830 a debug x0 0000000000000000 x1 0000000000000000 x2 00000070516c44e0 x3 00000070516c43f0 2019 12 05 18 06 30 716 3830 3830 a debug x4 00000000000000ba x5 00000070546a5288 x6 000000704d0bb540 x7 000000704d0bb560 2019 12 05 18 06 30 716 3830 3830 a debug x8 185cb8064dde48fc x9 185cb8064dde48fc x10 0000000000000000 x11
00000070546a51d0 2019 12 05 18 06 30 716 3830 3830 a debug x12 000000704d0bb580 x13 000000704d0bb5a0 x14 00000000ffffffff x15 0000000000000000 2019 12 05 18 06 30 716 3830 3830 a debug x16 00000070f06f3bd0 x17 00000070f068898c x18 00000000ffffffff x19 000000706424f000 2019 12 05 18 06 30 716 3830 3830 a debug x20 00000070516c43b0 x21 00000070516c43f0 x22 00000070516c44e0 x23 00000070642a9fe0 2019 12 05 18 06 30 716 3830 3830 a debug x24 00000070516c4430 x25 00000070516c7588 x26 0000007064253b88 x27 00000070516c7588 2019 12 05 18 06 30 716 3830 3830 a debug x28 000000704d1803c0 x29 00000070516c4390 2019 12 05 18 06 30 716 3830 3830 a debug sp 00000070516c42c0 lr 0000007039a729f0 pc 0000007039a729f4 2019 12 05 18 06 30 739 3830 3830 a debug backtrace 2019 12 05 18 06 30 739 3830 3830 a debug 00 pc 00000000000c49f4 datum app com test app edfq7adoagblvng1917lag lib arm64 libtensorflowlite gpu jni so 2019 12 05 18 06 30 739 3830 3830 a debug 01 pc 000000000001ba8c datum app com test app edfq7adoagblvng1917lag lib arm64 libtensorflowlite gpu jni so 2019 12 05 18 06 30 739 3830 3830 a debug 02 pc 000000000017e52c datum app com test app edfq7adoagblvng1917lag lib arm64 libtensorflowlite jni so 2019 12 05 18 06 30 739 3830 3830 a debug 03 pc 000000000017e0a4 datum app com test app edfq7adoagblvng1917lag lib arm64 libtensorflowlite jni so 2019 12 05 18 06 30 739 3830 3830 a debug 04 pc 000000000017de68 datum app com test app edfq7adoagblvng1917lag lib arm64 libtensorflowlite jni so 2019 12 05 18 06 30 739 3830 3830 a debug 05 pc 000000000001b5d4 datum app com test app edfq7adoagblvng1917lag lib arm64 libtensorflowlite gpu jni so 2019 12 05 18 06 30 739 3830 3830 a debug 06 pc 000000000017fb08 datum app com test app edfq7adoagblvng1917lag lib arm64 libtensorflowlite jni so 2019 12 05 18 06 30 739 3830 3830 a debug 07 pc 0000000000182fa0 datum app com test app edfq7adoagblvng1917lag lib arm64 libtensorflowlite jni so 2019 12 05 18 06 30 739 3830 3830 a debug 08 pc 
000000000000f214 datum app com test app edfq7adoagblvng1917lag lib arm64 libtensorflowlite jni so java org tensorflow lite nativeinterpreterwrapper applydelegate 40 2019 12 05 18 06 30 739 3830 3830 a debug 09 pc 00000000005545e0 system lib64 libart so art quick generic jni trampoline 144 2019 12 05 18 06 30 739 3830 3830 a debug 10 pc 000000000054b84c system lib64 libart so art quick invoke static stub 604 2019 12 05 18 06 30 739 3830 3830 a debug 11 pc 00000000000d00b8 system lib64 libart so art artmethod invoke art thread unsigned int unsigned int art jvalue char const 232 2019 12 05 18 06 30 739 3830 3830 a debug 12 pc 000000000027ec54 system lib64 libart so art interpreter artinterpretertocompiledcodebridge art thread art artmethod art shadowframe unsigned short art jvalue 344 2019 12 05 18 06 30 739 3830 3830 a debug 13 pc 0000000000279da4 system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 752 2019 12 05 18 06 30 739 3830 3830 a debug 14 pc 000000000051d854 system lib64 libart so mterpinvokestaticrange 148 2019 12 05 18 06 30 739 3830 3830 a debug 15 pc 000000000053e014 system lib64 libart so executemterpimpl 15380 2019 12 05 18 06 30 739 3830 3830 a debug 16 pc 000000000046472e dev ashmem dalvik class dex extract in memory from data app com test app edfq7adoagblvng1917lag base apk 2979 2979 delete org tensorflow lite nativeinterpreterwrapper applydelegate 122 2019 12 05 18 06 30 739 3830 3830 a debug 17 pc 0000000000252c14 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 3612284370 488 2019 12 05 18 06 30 739 3830 3830 a debug 18 pc 0000000000258374 system lib64 libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 2019 12 05 18 06 30 739 3830 3830 a debug 19 pc 0000000000278c78 system lib64 libart so bool art 
interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 920 2019 12 05 18 06 30 739 3830 3830 a debug 20 pc 000000000051be90 system lib64 libart so mterpinvokedirect 296 2019 12 05 18 06 30 739 3830 3830 a debug 21 pc 000000000053dc94 system lib64 libart so executemterpimpl 14484 2019 12 05 18 06 30 739 3830 3830 a debug 22 pc 00000000004649a4 dev ashmem dalvik class dex extract in memory from data app com test app edfq7adoagblvng1917lag base apk 2979 2979 delete org tensorflow lite nativeinterpreterwrapper init 140 2019 12 05 18 06 30 739 3830 3830 a debug 23 pc 0000000000252c14 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 3612284370 488 2019 12 05 18 06 30 739 3830 3830 a debug 24 pc 0000000000258374 system lib64 libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 2019 12 05 18 06 30 739 3830 3830 a debug 25 pc 0000000000279d88 system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 724 2019 12 05 18 06 30 739 3830 3830 a debug 26 pc 000000000051d6b4 system lib64 libart so mterpinvokedirectrange 244 2019 12 05 18 06 30 739 3830 3830 a debug 27 pc 000000000053df94 system lib64 libart so executemterpimpl 15252 2019 12 05 18 06 30 739 3830 3830 a debug 28 pc 000000000046468c dev ashmem dalvik class dex extract in memory from data app com test app edfq7adoagblvng1917lag base apk 2979 2979 delete org tensorflow lite nativeinterpreterwrapper 128 2019 12 05 18 06 30 739 3830 3830 a debug 29 pc 0000000000252c14 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 3612284370 488 2019 12 05 18 06 30 739 3830 3830 a debug 30 pc 0000000000258374 system lib64 libart so art interpreter 
artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 2019 12 05 18 06 30 739 3830 3830 a debug 31 pc 0000000000278c78 system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 920 2019 12 05 18 06 30 739 3830 3830 a debug 32 pc 000000000051be90 system lib64 libart so mterpinvokedirect 296 2019 12 05 18 06 30 739 3830 3830 a debug 33 pc 000000000053dc94 system lib64 libart so executemterpimpl 14484 2019 12 05 18 06 30 739 3830 3830 a debug 34 pc 0000000000464002 dev ashmem dalvik class dex extract in memory from data app com test app edfq7adoagblvng1917lag base apk 2979 2979 delete org tensorflow lite interpreter 10 2019 12 05 18 06 30 739 3830 3830 a debug 35 pc 0000000000252c14 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 3612284370 488 2019 12 05 18 06 30 739 3830 3830 a debug 36 pc 0000000000258374 system lib64 libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 2019 12 05 18 06 30 739 3830 3830 a debug 37 pc 0000000000278c78 system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 920 2019 12 05 18 06 30 739 3830 3830 a debug 38 pc 000000000051be90 system lib64 libart so mterpinvokedirect 296 2019 12 05 18 06 30 739 3830 3830 a debug 39 pc 000000000053dc94 system lib64 libart so executemterpimpl 14484 2019 12 05 18 06 30 739 3830 3830 a debug 40 pc 0000000000044cf2 dev ashmem dalvik classes2 dex extract in memory from data app com test app edfq7adoagblvng1917lag base apk classes2 dex 2979 2979 delete com test app tflite classifier runmodel 302 2019 12 05 18 06 30 739 3830 3830 a debug 41 pc 0000000000252c14 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 
20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 3612284370 488 2019 12 05 18 06 30 739 3830 3830 a debug 42 pc 0000000000258374 system lib64 libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 2019 12 05 18 06 30 739 3830 3830 a debug 43 pc 0000000000278c78 system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 920 2019 12 05 18 06 30 739 3830 3830 a debug 44 pc 000000000051ab60 system lib64 libart so mterpinvokevirtual 584 2019 12 05 18 06 30 739 3830 3830 a debug 45 pc 000000000053db94 system lib64 libart so executemterpimpl 14228 2019 12 05 18 06 30 739 3830 3830 a debug 46 pc 000000000003fe80 dev ashmem dalvik classes2 dex extract in memory from data app com test app edfq7adoagblvng1917lag base apk classes2 dex 2979 2979 delete com test app home homefragment run 1 run 128 2019 12 05 18 06 30 739 3830 3830 a debug 47 pc 0000000000252c14 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 3612284370 488 2019 12 05 18 06 30 739 3830 3830 a debug 48 pc 0000000000258374 system lib64 libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 2019 12 05 18 06 30 739 3830 3830 a debug 49 pc 0000000000278c78 system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 920 2019 12 05 18 06 30 739 3830 3830 a debug 50 pc 000000000051bacc system lib64 libart so mterpinvokeinterface 1392 2019 12 05 18 06 30 739 3830 3830 a debug 51 pc 000000000053dd94 system lib64 libart so executemterpimpl 14740 2019 12 05 18 06 30 739 3830 3830 a debug 52 pc 0000000000dbd78c system framework boot framework vdex android os handler handlecallback 4 2019 12 05 18 06 30 739 3830 3830 a debug 53 
pc 0000000000252c14 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 3612284370 488 2019 12 05 18 06 30 739 3830 3830 a debug 54 pc 0000000000258374 system lib64 libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 2019 12 05 18 06 30 739 3830 3830 a debug 55 pc 0000000000278c78 system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 920 2019 12 05 18 06 30 739 3830 3830 a debug 56 pc 000000000051c054 system lib64 libart so mterpinvokestatic 204 2019 12 05 18 06 30 739 3830 3830 a debug 57 pc 000000000053dd14 system lib64 libart so executemterpimpl 14612 2019 12 05 18 06 30 739 3830 3830 a debug 58 pc 0000000000c658a8 system framework boot framework vdex android os handler dispatchmessage 8 2019 12 05 18 06 30 739 3830 3830 a debug 59 pc 0000000000252c14 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 3612284370 488 2019 12 05 18 06 30 739 3830 3830 a debug 60 pc 0000000000258374 system lib64 libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 2019 12 05 18 06 30 739 3830 3830 a debug 61 pc 0000000000278c78 system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 920 2019 12 05 18 06 30 739 3830 3830 a debug 62 pc 000000000051ab60 system lib64 libart so mterpinvokevirtual 584 2019 12 05 18 06 30 739 3830 3830 a debug 63 pc 000000000053db94 system lib64 libart so executemterpimpl 14228 2019 12 05 18 06 30 739 3830 3830 a debug 64 pc 0000000000c6e4ee system framework boot framework vdex android os looper loop 406 2019 12 05 18 06 30 739 3830 3830 a debug 65 pc 0000000000252c14 system lib64 
libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 3612284370 488 2019 12 05 18 06 30 740 3830 3830 a debug 66 pc 0000000000258374 system lib64 libart so art interpreter artinterpretertointerpreterbridge art thread art codeitemdataaccessor const art shadowframe art jvalue 216 2019 12 05 18 06 30 740 3830 3830 a debug 67 pc 0000000000278c78 system lib64 libart so bool art interpreter docall art artmethod art thread art shadowframe art instruction const unsigned short art jvalue 920 2019 12 05 18 06 30 740 3830 3830 a debug 68 pc 000000000051c054 system lib64 libart so mterpinvokestatic 204 2019 12 05 18 06 30 740 3830 3830 a debug 69 pc 000000000053dd14 system lib64 libart so executemterpimpl 14612 2019 12 05 18 06 30 740 3830 3830 a debug 70 pc 0000000000c6540c system framework boot framework vdex android os handlerthread run 56 2019 12 05 18 06 30 740 3830 3830 a debug 71 pc 0000000000252c14 system lib64 libart so zn3art11interpreterl7executeepns 6threaderkns 20codeitemdataaccessorerns 11shadowframeens 6jvalueeb llvm 3612284370 488 2019 12 05 18 06 30 740 3830 3830 a debug 72 pc 000000000050b850 system lib64 libart so artquicktointerpreterbridge 1032 2019 12 05 18 06 30 740 3830 3830 a debug 73 pc 00000000005546fc system lib64 libart so art quick to interpreter bridge 92 2019 12 05 18 06 30 740 3830 3830 a debug 74 pc 000000000054b588 system lib64 libart so art quick invoke stub 584 2019 12 05 18 06 30 740 3830 3830 a debug 75 pc 00000000000d0098 system lib64 libart so art artmethod invoke art thread unsigned int unsigned int art jvalue char const 200 2019 12 05 18 06 30 740 3830 3830 a debug 76 pc 0000000000454970 system lib64 libart so art anonymous namespace invokewithargarray art scopedobjectaccessalreadyrunnable const art artmethod art anonymous namespace argarray art jvalue char const 104 2019 12 05 18 06 30 740 3830 3830 a debug 77 pc 0000000000455a3c system lib64 libart so art 
invokevirtualorinterfacewithjvalue art scopedobjectaccessalreadyrunnable const jobject jmethodid jvalue 424 2019 12 05 18 06 30 740 3830 3830 a debug 78 pc 00000000004807f0 system lib64 libart so art thread createcallback void 1260 2019 12 05 18 06 30 740 3830 3830 a debug 79 pc 0000000000084148 system lib64 libc so pthread start void 64 2019 12 05 18 06 30 740 3830 3830 a debug 80 pc 0000000000023b28 system lib64 libc so start thread 68 include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach |
tensorflowtensorflow | Keras custom loss model compilation | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Windows 10 1909. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: No. TensorFlow installed from (source or binary): binary (pip). TensorFlow version (use command below): 1.14. Python version: 3.7.3. Bazel version (if compiling from source): -. GCC/compiler version (if compiling from source): -. CUDA/cuDNN version: 10 / 7.6.4. GPU model and memory: RTX 2080 with 8 GB VRAM, 16 GB DRAM DDR4. You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: (1) TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"; (2) TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)". For (1): unknown 1.14.0. Describe the current behavior: model compilation fails with the following error: Traceback (most recent call last): File "vgg_loss.py", line 103, in <module>: main(); File "vgg_loss.py", line 97, in main: model.compile(optimizer='adam', loss=some_loss, metrics=['accuracy']); File "C:\Users\intel\Anaconda3\lib\site-packages\tensorflow\python\training\tracking\base.py", line 457, in _method_wrapper: result = method(self, *args, **kwargs); File "C:\Users\intel\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 337, in compile: self._compile_weights_loss_and_weighted_metrics(); File "C:\Users\intel\Anaconda3\lib\site-packages\tensorflow\python\training\tracking\base.py", line 457, in _method_wrapper: result = method(self, *args, **kwargs); File "C:\Users\intel\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1710, in _compile_weights_loss_and_weighted_metrics: self.total_loss = self._prepare_total_loss(masks); [in _prepare_total_loss]: per_sample_losses = loss_fn.call(y_true, y_pred); File "C:\Users\intel\Anaconda3\lib\site-packages\tensorflow\python\keras\losses.py", line 215, in call: return self.fn(y_true, y_pred, **self._fn_kwargs); File "vgg_loss.py", line 86, in some_loss: return mse(vgg_model.predict(y_pred, steps=1), vgg_model.predict(y_true, steps=1)); File "C:\Users\intel\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1078, in predict: ... batch_outs = f(actual_inputs) ... run_metadata=self.run_metadata ...; tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: You must feed a value for placeholder tensor 'input_node' with dtype float and shape [1,512,512,3] [[node input_node]] [[block4_conv3/Relu/_217]] (1) Invalid argument: You must feed a value for placeholder tensor 'input_node' with dtype float and shape [1,512,512,3] [[node input_node]] 0 successful operations. 0 derived errors ignored. Describe the expected behavior: the code should compile properly. Code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. I am attaching the link to the code; also download the VGG model whose link is provided in the code (I am referencing it here again). Be sure to run it as: python vgg_loss.py -p. Other info/logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached. |
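The placeholder-feed error above comes from calling vgg_model.predict() inside the loss function: predict expects concrete NumPy arrays and launches a fresh execution, while y_true/y_pred are symbolic tensors, so the VGG input placeholder is never fed. The usual perceptual-loss pattern is to call the (frozen) feature model directly, vgg_model(y_pred), so it stays in the same graph. A stripped-down pure-Python sketch of that closure pattern (feature_fn and the toy mse stand in for the VGG forward pass and Keras MSE — assumptions, not the real API):

```python
def mse(a, b):
    # toy mean-squared-error over flat lists of floats
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def make_perceptual_loss(feature_fn, base_loss):
    """Build a loss that compares feature maps instead of raw outputs.

    feature_fn must be called directly (like vgg_model(t)), not via
    .predict(), so it composes with the symbolic y_true / y_pred.
    """
    def loss(y_true, y_pred):
        return base_loss(feature_fn(y_true), feature_fn(y_pred))
    return loss

double = lambda xs: [2 * x for x in xs]      # stand-in "feature extractor"
loss_fn = make_perceptual_loss(double, mse)
assert loss_fn([1.0, 2.0], [1.0, 2.0]) == 0.0
assert loss_fn([0.0], [1.0]) == 4.0          # (2*0 - 2*1)^2 / 1
```

In the reported code, the equivalent change would be returning mse(vgg_model(y_true), vgg_model(y_pred)) from some_loss instead of using predict.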
tensorflowtensorflow | tf.train.AdamOptimizer doesn't work with a custom TPU training loop | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Colab. Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: N/A. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 1.15. Python version: 3.x. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A. Describe the current behavior: run this Colab notebook with a TPU accelerator. When running the above notebook with tf.train.AdamOptimizer, we get a ValueError in converted code: :21 simple_model_fn: train_op = tf.train.AdamOptimizer().minimize(y); /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/optimizer.py:413 minimize: name=name; optimizer.py:569 apply_gradients: self._distributed_apply, args=(grads_and_vars, global_step, name); /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/distribute/distribute_lib.py:1940 merge_call: return self._merge_call(merge_fn, args, kwargs); distribute_lib.py:1947 _merge_call: return merge_fn(self._strategy, *args, **kwargs); optimizer.py:717 _distributed_apply: non_slot_devices, finish, args=(self, update_ops), group=False; distribute_lib.py:1577 update_non_slot: return self._update_non_slot(colocate_with, fn, args, kwargs, group); /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/distribute/tpu_strategy.py:580 _update_non_slot: result = fn(*args, **kwargs); optimizer.py:713 finish: return self._finish(update_ops, "update"); /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/adam.py:228 _finish: beta1_power, beta2_power = self._get_beta_accumulators(); adam.py:115 _get_beta_accumulators: return self._get_non_slot_variable("beta1_power", graph=graph); optimizer.py:868 _get_non_slot_variable: if hasattr(non_slot, "_distribute_container"); /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/distribute/values.py:827 __getattr__: return super(TPUVariableMixin, self).__getattr__(name); values.py:389 __getattr__: return getattr(self._get(), name); values.py:834 _get: return super(TPUVariableMixin, self)._get(device=device); values.py:324 _get: return self._device_map.select_for_device(self._values, device); values.py:219 select_for_device: device = device_util.current(); ValueError: Device /job:worker/replica:0/task:0/device:CPU:0 not found in [/job:worker/replica:0/task:0/device:TPU:0, /job:worker/replica:0/task:0/device:TPU:1, /job:worker/replica:0/task:0/device:TPU:2, /job:worker/replica:0/task:0/device:TPU:3, /job:worker/replica:0/task:0/device:TPU:4, /job:worker/replica:0/task:0/device:TPU:5, /job:worker/replica:0/task:0/device:TPU:6, /job:worker/replica:0/task:0/device:TPU:7] (current device /job:worker/replica:0/task:0/device:CPU:0). This code runs just fine with tf.train.MomentumOptimizer and tf.keras.optimizers.Adam (run the same code with the optimizer-type form variable set to 'kerasadam' or 'momentum'). Describe the expected behavior: code should run without errors using tf.train.AdamOptimizer, just like it does for the other optimizers. Code to reproduce the issue: |
tensorflowtensorflow | SparseCategoricalCrossentropy example contains a mistake in terms of inputs | Bug | URL(s) with the issue: [link]#L493. Description of the issue (clear description of what needs changing): entries 1.0 and 1.2 of a tensor [link] should be 10 times smaller in order for the second entry's row to sum up to 1. Submit a pull request? Are you planning to also submit a pull request to fix the issue? No, the change seems too small for an expensive TF CI run. |
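The fix matters because sparse categorical cross-entropy treats each row of y_pred as a probability distribution and returns -log of the probability assigned to the true class; entries like 1.0 and 1.2 in one row cannot be probabilities. A minimal sketch of the computation with the corrected, properly normalized rows (my own helper, not the Keras implementation):

```python
import math

def sparse_categorical_crossentropy(y_true, y_pred):
    """Per-example loss: -log p(true class). Rows of y_pred must sum to 1."""
    losses = []
    for label, probs in zip(y_true, y_pred):
        assert abs(sum(probs) - 1.0) < 1e-6, "each row must be a distribution"
        losses.append(-math.log(probs[label]))
    return losses

# Rows scaled down so they sum to 1, as the docs fix suggests:
losses = sparse_categorical_crossentropy([1, 2], [[0.05, 0.95, 0.0],
                                                  [0.1, 0.8, 0.1]])
assert abs(losses[0] - 0.0513) < 1e-3   # -log(0.95)
assert abs(losses[1] - 2.3026) < 1e-3   # -log(0.1)
```

With the original un-normalized entries, the row sums exceed 1 and the distribution assumption is violated, which is exactly the documentation mistake reported.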
tensorflowtensorflow | TF 2.0 warning: DenseFeatures is casting an input tensor from dtype float64 to the layer's dtype of float32 | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): see below. OS platform and distribution: Ubuntu Linux 18.04 x64. TensorFlow installed from (source or binary): installed from Anaconda. TensorFlow version (use command below): python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)" → unknown 2.0.0 (I am using TF 2.0.0). Python version: Python 3.7.4. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: CUDA release 10.1, V10.1.168; cuDNN 7.6.0. GPU model and memory: NVIDIA GTX 1080, 11 GB. You can collect some of this information using our environment capture script. Describe the current behavior: this is both a code issue and a documentation problem, but mostly a code problem. I was looking at the tutorial (numeric_column) and saw that the tutorial itself generates warnings, which suggests some problem in the code as well as the tutorial. Describe the expected behavior: I would expect the tutorial to generate no warnings and hence demonstrate proper code functionality. As it is, it is not clear whether the warning is generated by a bug in the code, is spurious, etc. Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem): from __future__ import absolute_import, division, print_function, unicode_literals; import numpy as np; import pandas as pd; import tensorflow as tf; from tensorflow import feature_column; from tensorflow.keras import layers; from sklearn.model_selection import train_test_split; URL = ...; dataframe = pd.read_csv(URL); train, test = train_test_split(dataframe, test_size=0.2); train, val = train_test_split(train, test_size=0.2); print(len(train), 'train examples'); print(len(val), 'validation examples'); print(len(test), 'test examples'); def df_to_dataset(dataframe, shuffle=True, batch_size=32): dataframe = dataframe.copy(); labels = dataframe.pop('target'); ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels)); if shuffle: ds = ds.shuffle(buffer_size=len(dataframe)); ds = ds.batch(batch_size); return ds; batch_size = 5  # a small batch size is used for demonstration purposes; train_ds = df_to_dataset(train, batch_size=batch_size); val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size); test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size); example_batch = next(iter(train_ds))[0]  # we will use this batch to demonstrate several types of feature columns; def demo(feature_column):  # a utility method to create a feature column and to transform a batch of data: feature_layer = layers.DenseFeatures(feature_column); print(feature_layer(example_batch).numpy()); age = feature_column.numeric_column('age'); demo(age)  # should trigger or display the warning. Other info/logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached. No other material provided. |
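The warning in that tutorial is benign but noisy: pandas parses numeric CSV columns as float64, while Keras layers (including DenseFeatures) default to float32, so every batch gets autocast. One explicit cast up front silences it. A sketch of the idea with plain NumPy ('age' is the tutorial's column; the exact DataFrame cast is my assumption about the fix, not part of the report):

```python
import numpy as np

# pandas reads numeric CSV columns as float64 by default ...
age_batch = np.array([61.0, 40.0, 51.0], dtype=np.float64)

# ... while Keras layers default to float32, triggering the cast warning
# on every batch. One explicit cast up front — in the tutorial, something
# like dataframe = dataframe.astype({'age': 'float32'}) — avoids it:
age_batch32 = age_batch.astype(np.float32)

assert age_batch32.dtype == np.float32
assert np.allclose(age_batch, age_batch32)
```

Alternatively, the layer's dtype can be widened to float64, but casting the input data once is the cheaper and more common choice.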
tensorflowtensorflow | dask + TF 2.0: ValueError: TypeError: len() of unsized object | Bug | Describe the current behavior: In [6]: tf.convert_to_tensor(value) → 2019-12-04 13:15:11.908174: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA. To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags. 2019-12-04 13:15:12.647383: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3299990000 Hz. 2019-12-04 13:15:12.648044: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5644649bc070 executing computations on platform Host. 2019-12-04 13:15:12.648118: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version. 2019-12-04 13:15:12.654239: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter-op setting: 2. Tune using inter_op_parallelism_threads for best performance. ValueError Traceback (most recent call last): <ipython-input>: tf.convert_to_tensor(value); /opt/anaconda/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py in convert_to_tensor_v2(value, dtype, dtype_hint, name), lines 1240-1243: name=name, preferred_dtype=dtype_hint, as_ref=False; ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx, accept_composite_tensors), lines 1294-1298: if ret is None: ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref); if ret is NotImplemented: ...; /opt/anaconda/envs/tf/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref), lines 284-286: return constant(v, dtype=dtype, name=name); constant_op.py in constant(value, dtype, shape, name), lines 225-227: return _constant_impl(value, dtype, shape, name, verify_shape=False, allow_broadcast=True); constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast), lines 233-237: ctx = context.context(); if ctx.executing_eagerly(): t = convert_to_eager_tensor(value, ctx, dtype); if shape is None: return t; constant_op.py in convert_to_eager_tensor(value, ctx, dtype), lines 94-96: dtype = dtypes.as_dtype(dtype).as_datatype_enum; ctx.ensure_initialized(); return ops.EagerTensor(value, ctx.device_name, dtype); ValueError: TypeError: len() of unsized object. Traceback (most recent call last): File "/opt/anaconda/envs/tf/lib/python3.7/site-packages/dask/array/core.py", line 1165, in __len__: raise TypeError("len() of unsized object"); TypeError: len() of unsized object. Describe the expected behavior: should just return a tf.EagerTensor. Code to reproduce the issue: I uploaded a test dask array which produces this error; please bunzip2 it before running the following code: import pickle; import tensorflow as tf; with open('test.pkl', 'rb') as handle: value = pickle.load(handle); tf.convert_to_tensor(value). The part where it fails is value[0][0], which is a dask scalar array. I have no idea why TensorFlow tries to fetch scalar values from a dask array instead of just calling np.asarray(value) on it. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): CentOS 7. TensorFlow installed from (source or binary): conda-forge. TensorFlow version (use command below): unknown 2.0.0. Python version: 3.7.3. |
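The doubly wrapped error above arises because tf.convert_to_tensor probes len() on the value while building an EagerTensor, and a 0-d dask array — like any unsized Python object — raises TypeError from __len__. Materializing first (value.compute() or np.asarray(value)) sidesteps the probe. A pure-Python sketch of the failure mode and the workaround shape (ZeroDimArray is my own stand-in, not dask's class; only the raising __len__ and a compute() hook mimic the dask interface):

```python
class ZeroDimArray:
    """Stand-in for a 0-d dask array: holds data but has no length."""

    def __init__(self, value):
        self._value = value

    def __len__(self):
        # dask raises exactly this for unsized (0-d) arrays
        raise TypeError("len() of unsized object")

    def compute(self):
        # dask-style materialization hook (assumed interface)
        return self._value

scalar = ZeroDimArray(3.14)

# probing len() — as tf.convert_to_tensor effectively does — blows up:
try:
    len(scalar)
    raised = False
except TypeError:
    raised = True
assert raised

# materializing first yields a plain float that converts cleanly:
assert scalar.compute() == 3.14
```

So until convert_to_tensor handles such objects, computing (or np.asarray-ing) the dask value before the call is the practical workaround.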
tensorflowtensorflow | Bug: Lambda wrapping multiple layers of different shapes raises ValueError: Dimensions must be equal | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Linux tfbug 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3+deb9u2 (2019-11-11) x86_64 GNU/Linux; mobile device: N/A; TensorFlow installed from (source or binary): binary (pip install --upgrade tf-nightly); TensorFlow version: v1.12.1-19580-gc397ed9 2.1.0-dev20191203 (also tried on 2.0.0 stable); Python version: 3.5.3; Bazel version: N/A; GCC version: N/A; CUDA/cuDNN version: N/A; GPU model and memory: N/A; exact command to reproduce: python lambda_bug.py

Describe the problem: When multiple layers are wrapped in a Lambda and the units of the first layer are not the same as the input, an error occurs: ValueError: Dimensions must be equal. If multiple layers are wrapped in a Lambda and the units of the first layer are the same as the input, no error occurs (refer to the # BUG / # fine lines). If only a single layer is wrapped in a Lambda, no error occurs (refer to model_lambda_single). If the layer(s) are not wrapped in a Lambda, no error occurs (refer to the model_function_* and model_bare_* variants). As a prototyping counterpart of subclassed layers, Lambda should be able to wrap multiple layers; hence it is convincing that Lambda is not relaying the shapes correctly.

Source code (lambda_bug.py):

    import tensorflow as tf
    from tensorflow.keras import Input, layers

    D = 123
    C = 4

    # bare
    def model_bare_single():
        model_in = Input((D,), name='model_in')
        model_out = layers.Dense(C, name='lyr_0')(model_in)
        return tf.keras.Model(inputs=model_in, outputs=model_out, name='model_bare_single')

    def model_bare_multiple():
        model_in = Input((D,), name='model_in')
        model_io = layers.Dense(C, name='lyr_0')(model_in)
        model_out = layers.Dense(C, name='lyr_1')(model_io)
        return tf.keras.Model(inputs=model_in, outputs=model_out, name='model_bare_multiple')

    # function
    def function_single(ins):
        out = layers.Dense(C, name='lyr_0')(ins)
        return out

    def model_function_single():
        model_in = Input((D,), name='model_in')
        model_out = function_single(model_in)
        return tf.keras.Model(inputs=model_in, outputs=model_out, name='model_function_single')

    def function_multiple(ins):
        ios = layers.Dense(C, name='lyr_0')(ins)
        out = layers.Dense(C, name='lyr_1')(ios)
        return out

    def model_function_multiple():
        model_in = Input((D,), name='model_in')
        model_out = function_multiple(model_in)
        return tf.keras.Model(inputs=model_in, outputs=model_out, name='model_function_multiple')

    # lambda
    def lambda_single(ins):
        out = layers.Dense(C, name='lyr_0')(ins)
        return out

    def model_lambda_single():
        model_in = Input((D,), name='model_in')
        model_out = layers.Lambda(lambda_single, name='lambda_single')(model_in)
        return tf.keras.Model(inputs=model_in, outputs=model_out, name='model_lambda_single')

    def lambda_multiple(ins):
        ios = layers.Dense(C, name='lyr_0', input_shape=(D,))(ins)  # BUG
        # ios = layers.Dense(D, name='lyr_0')(ins)  # fine
        out = layers.Dense(C, name='lyr_1')(ios)
        return out

    def model_lambda_multiple():  # BUG
        model_in = Input((D,), name='model_in')
        model_out = layers.Lambda(lambda_multiple, name='lambda_multiple', output_shape=(C,))(model_in)
        return tf.keras.Model(inputs=model_in, outputs=model_out, name='model_lambda_multiple')

    def main():
        model_bare_single().summary()
        model_bare_multiple().summary()
        model_function_single().summary()
        model_function_multiple().summary()
        model_lambda_single().summary()
        model_lambda_multiple().summary()
        print(tf.__version__)
        return

    if __name__ == '__main__':
        main()

Logs:

    Traceback (most recent call last):
      File "/home/johnght/venv/lib/python3.5/site-packages/tensorflow_core/python/framework/ops.py", line 1619, in _create_c_op: c_op = c_api.TF_FinishOperation(op_desc)
    tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 4 and 123 for 'lambda_multiple/lyr_1/MatMul' (op: 'MatMul') with input shapes: [?,4], [123,4].

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "lambda_bug.py", line 69, in <module>: main()
      File "lambda_bug.py", line 65, in main: model_lambda_multiple().summary()
      File "lambda_bug.py", line 56, in model_lambda_multiple: model_out = layers.Lambda(lambda_multiple, name='lambda_multiple', output_shape=(C,))(model_in)
      File "/home/johnght/venv/lib/python3.5/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 773, in __call__: outputs = call_fn(cast_inputs, *args, **kwargs)
      File "/home/johnght/venv/lib/python3.5/site-packages/tensorflow_core/python/keras/layers/core.py", line 827, in call: return self.function(inputs, **arguments)
      File "lambda_bug.py", line 52, in lambda_multiple: out = layers.Dense(C, name='lyr_1')(ios)
      File "/home/johnght/venv/lib/python3.5/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 773, in __call__: outputs = call_fn(cast_inputs, *args, **kwargs)
      File "/home/johnght/venv/lib/python3.5/site-packages/tensorflow_core/python/keras/layers/core.py", line 1089, in call: outputs = gen_math_ops.mat_mul(inputs, self.kernel)
      File "/home/johnght/venv/lib/python3.5/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 5626, in mat_mul: name=name)
      File "/home/johnght/venv/lib/python3.5/site-packages/tensorflow_core/python/framework/op_def_library.py", line 742, in _apply_op_helper: attrs=attr_protos, op_def=op_def)
      File "/home/johnght/venv/lib/python3.5/site-packages/tensorflow_core/python/framework/func_graph.py", line 595, in _create_op_internal: compute_device)
      File "/home/johnght/venv/lib/python3.5/site-packages/tensorflow_core/python/framework/ops.py", line 3314, in _create_op_internal: op_def=op_def)
      File "/home/johnght/venv/lib/python3.5/site-packages/tensorflow_core/python/framework/ops.py", line 1786, in __init__: control_input_ops)
      File "/home/johnght/venv/lib/python3.5/site-packages/tensorflow_core/python/framework/ops.py", line 1622, in _create_c_op: raise ValueError(str(e))
    ValueError: Dimensions must be equal, but are 4 and 123 for 'lambda_multiple/lyr_1/MatMul' (op: 'MatMul') with input shapes: [?,4], [123,4].
tensorflowtensorflow | Typo in tf.keras.layers.Attention doc example | Bug | URL(s) with the issue: (tf.keras.layers.Attention API documentation). Description of issue (what needs changing): the code example currently reads:

    # Variable-length int sequences.
    query_input = tf.keras.Input(shape=(None,), dtype='int32')
    value_input = tf.keras.Input(shape=(None,), dtype='int32')
    # Embedding lookup.
    token_embedding = tf.keras.layers.Embedding(max_tokens, dimension)
    # Query embeddings of shape [batch_size, Tq, dimension].
    query_embeddings = token_embedding(query_input)
    # Value embeddings of shape [batch_size, Tv, dimension].
    value_embeddings = token_embedding(query_input)

The last line should instead be:

    value_embeddings = token_embedding(value_input)
tensorflowtensorflow | grpc+verbs not being used | Bug | System information: OS platform and distribution: Linux Ubuntu 16.04 LTS; mobile device: N/A; TensorFlow installed from (source or binary): source; TensorFlow version: 1.15; Python version: 3.5.2 / 3.6.8 (different machines); installed using virtualenv/pip; Bazel version (if compiling from source): 0.26.1; GCC/compiler version (if compiling from source): gcc 7.4; CUDA/cuDNN version: N/A; GPU model and memory: N/A.

Describe the problem: Running code with tf.estimator.RunConfig(protocol='grpc+verbs') does not throw any errors, but does not appear to use RDMA: the logs show no mention of RDMA. Running the code without this protocol (i.e. just on grpc) appears to work fine, which leads me to think it's a build/installation issue. We also observe no speed-up from using RDMA, which suggests that RDMA is not being used. We are using a 1x parameter server, 1x chief, 1x evaluator, 15x worker setup. The parameter server, chief, and evaluator are on the same node and share the same filesystem; the workers are on separate machines and do not share that filesystem.

Provide the exact sequence of commands/steps that you executed before running into the problem: I followed the guide (compiling from source); however, running the ./configure step did not give an option to install with verbs support, hence I built TensorFlow with the following command:

    bazel build --config=verbs //tensorflow/tools/pip_package:build_pip_package

Everything else followed the given guide.

Any other info/logs: sample parameter server logs:

    2019-12-03 13:26:22.885842: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
    WARNING:tensorflow:The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see ... For IO-related ops ... please file an issue. (printed twice)
    WARNING:tensorflow:eval_strategy is not passed in. No distribution strategy will be used for evaluation. (printed twice)
    2019-12-03 13:26:22.899183: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:258] Initialize GrpcChannelCache for job chief -> {0 -> 192.168.9.20:1235}
    2019-12-03 13:26:22.899214: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:258] Initialize GrpcChannelCache for job ps -> {0 -> localhost:1236}
    2019-12-03 13:26:22.899232: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:258] Initialize GrpcChannelCache for job worker -> {0 -> 192.168.9.16:1234, 1 -> 192.168.9.17:1234, 2 -> 192.168.9.18:1234, 3 -> 192.168.9.23:1234, 4 -> 192.168.9.24:1234, 5 -> 192.168.9.25:1234, 6 -> 192.168.9.26:1234, 7 -> 192.168.9.27:1234, 8 -> 192.168.9.28:1234, 9 -> 192.168.9.29:1234, 10 -> 192.168.9.30:1234, 11 -> 192.168.9.31:1234, 12 -> 192.168.9.32:1234, 13 -> 192.168.9.105:1234, 14 -> 192.168.9.106:1234}
    2019-12-03 13:26:22.903725: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:365] Started server with target: grpc://localhost:1236
    2019-12-03 13:26:22.903796: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:369] Server already started (target: grpc://localhost:1236)
    2019-12-03 13:26:28.930441: I tensorflow/core/distributed_runtime/worker.cc:203] Cancellation requested for RunGraph.
    (the same "Cancellation requested for RunGraph" line repeats at 13:26:29.073805, 13:26:29.308595, 13:26:29.609207, 13:26:29.994561, 13:26:30.430739, 13:26:30.971334, 13:26:59.046948, and 13:26:59.278155)
tensorflowtensorflow | Please add tested build configurations for tensorflow-1.15 and tensorflow-1.15-gpu | Bug | URL(s) with the issue: N/A. Description of issue (what needs changing): please add tested build configurations for tensorflow-1.15 and tensorflow-1.15-gpu. I cannot find the tested build configuration for TensorFlow 1.15; it would be useful for anyone trying to build TensorFlow 1.15 from source.
tensorflowtensorflow | Object detection evaluation done on only a subset of test images | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: Google Colab; mobile device: N/A; TensorFlow installed from: default installation on Google Colab; TensorFlow version: v1.15.0-0-g590d6eef7e 1.15.0; Python version: 3.6.8; Bazel version: N/A; GCC version: N/A; CUDA/cuDNN version: N/A; GPU model and memory: Google Colab.

Describe the current behavior: evaluation is done on precisely 496 images. On every evaluation I get this:

    I1129 04:00:20.238537 139729717991168 coco_evaluation.py:205] Performing evaluation on 496 images.

Describe the expected behavior: evaluation is done on 1000 images.

Code to reproduce the issue: training ssd_mobilenet_v2 using object_detection/model_main.py, running the following:

    python3 object_detection/model_main.py --pipeline_config_path=pipeline.config --model_dir=ssd_mobilenet --num_train_steps=200000 --sample_1_of_n_eval_examples=1 --alsologtostderr

Inside pipeline.config: num_examples: 1000. Also, inside the test images directory there are precisely 1000 annotated images. I do not specify the number 496 anywhere, and it seems to change according to how many images there actually are inside the test directory.
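For context on the expected counts: assuming the --sample_1_of_n_eval_examples=n flag does what its name suggests and keeps every n-th example, the number of evaluated images should be roughly ceil(num_examples / n). A small sketch of that arithmetic (the interpretation of the flag is an assumption, not taken from the Object Detection API source), showing that n=1 should yield all 1000 images, while the observed 496 is much closer to what n=2 would produce:

```python
import math

def expected_eval_count(num_examples, sample_1_of_n):
    # Keeping indices 0, n, 2n, ... yields ceil(num_examples / n) examples.
    return math.ceil(num_examples / sample_1_of_n)

print(expected_eval_count(1000, 1))  # 1000: all images, as requested
print(expected_eval_count(1000, 2))  # 500: in the ballpark of the observed 496
```

If 496 rather than 500 shows up, the remaining gap would have to come from elsewhere (e.g. examples skipped during record generation), which is consistent with the count tracking the actual directory contents.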
tensorflowtensorflow | Dynamic scatter of TensorArray not working in tf.function + eager | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Ubuntu 19.10; TensorFlow installed from (source or binary): binary; TensorFlow version: 2.0.0; Python version: 3.7.

Describe the current behavior: this may be related to #34683, but since this is explicitly about the scatter method, I believe it is independent. The following code creates a TensorArray in a while loop; every iteration, a variable-length piece is added. By the rules, I need to use scatter and tf.range like this (the example is a simplification). The following works in: graph mode; graph mode with tf.function (no autograph); eager mode. It breaks in: eager mode with tf.function (no autograph).

Describe the expected behavior: it should run; instead it fails saying that the shape could not be determined, since scatter unstacks the value internally.

Code to reproduce the issue:

    import tensorflow as tf

    USE_EAGER = True  # switch to False to make it run
    if not USE_EAGER:
        tf.compat.v1.disable_eager_execution()

    STOP_AT = 1000
    empty_sample = tf.TensorArray(dtype=tf.float32, size=STOP_AT, dynamic_size=True,
                                  clear_after_read=True,  # we read only once at the end (to_tensor)
                                  element_shape=None)

    @tf.function(autograph=False)  # if removed, it works
    def body(sample, length):
        n_to_draw = tf.cast(tf.random.poisson(shape=(), lam=30), dtype=tf.int32)
        rnds = tf.random.uniform(shape=(n_to_draw,), dtype=tf.float32, maxval=1.3)
        new_length = length + n_to_draw
        indices = tf.range(length, new_length)
        new_sample = sample.scatter(indices=indices, value=rnds)
        return new_sample, new_length

    def cond(sample, length):
        return tf.less(length, STOP_AT)

    sample = tf.while_loop(cond=cond, body=body, loop_vars=[empty_sample, 0], back_prop=False)[0]
    reshaped_sample = tf.reshape(sample.stack(), shape=(-1,))  # make a read
    print(reshaped_sample)
    if not USE_EAGER:
        with tf.compat.v1.Session() as sess:
            print(sess.run(reshaped_sample))

Other info / logs (the actual error log):

    File "/home/jonas/.PyCharm2019.3/config/scratches/scratch_27.py", line 21, in body: new_sample = sample.scatter(indices=indices, value=rnds)
    File "/home/jonas/anaconda3/envs/zfit37tf2/lib/python3.7/site-packages/tensorflow_core/python/util/tf_should_use.py", line 198, in wrapped: return _add_should_use_warning(fn(*args, **kwargs))
    File "/home/jonas/anaconda3/envs/zfit37tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/tensor_array_ops.py", line 1168, in scatter: return self._implementation.scatter(indices, value, name=name)
    File "/home/jonas/anaconda3/envs/zfit37tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/tensor_array_ops.py", line 873, in scatter: for index, val in zip(indices, array_ops.unstack(value))
    File "/home/jonas/anaconda3/envs/zfit37tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/array_ops.py", line 1333, in unstack: raise ValueError("Cannot infer num from shape %s" % value_shape)
    ValueError: Cannot infer num from shape (None,)
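As a loose analogy (a pure-Python sketch, not TensorFlow's implementation), TensorArray.scatter behaves like the helper below: it unstacks value along axis 0 and writes element i to position indices[i]. That unstacking step is exactly where a statically known leading dimension is needed, which is why a value whose leading dimension is None fails with "Cannot infer num from shape" inside tf.function:

```python
def scatter(array, indices, values):
    """Write values[i] into array at position indices[i], growing the
    array as needed (loosely mirroring dynamic_size=True). Like
    TensorArray.scatter, it iterates over values along axis 0 -- which
    is only possible when the number of elements is known."""
    if len(indices) != len(values):
        raise ValueError("indices and values must have the same length")
    for i, v in zip(indices, values):
        while len(array) <= i:
            array.append(None)  # grow to cover the target index
        array[i] = v
    return array

buf = scatter([], [2, 3, 4], [10.0, 11.0, 12.0])
print(buf)  # [None, None, 10.0, 11.0, 12.0]
```

In the Python sketch len(values) is always available; in graph mode it is the static shape that plays this role, so a data-dependent n_to_draw leaves it unknown.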
tensorflowtensorflow | GRUCell is not compatible with its own initial state | Bug | System information: Have I written custom code: yes; OS platform and distribution: Ubuntu 16.04; TensorFlow installed from: binary; TensorFlow version: 2.1.0rc0; Python version: 3.5.2.

Describe the current behavior: the initial state returned by tf.keras.layers.GRUCell.get_initial_state cannot be passed to the first cell call without error; it raises an InvalidArgumentError.

Describe the expected behavior: RNN cells should accept their own initial state.

Code to reproduce the issue:

    import tensorflow as tf

    batch_size = 4
    cell = tf.keras.layers.GRUCell(20)
    initial_state = cell.get_initial_state(batch_size=batch_size, dtype=tf.float32)
    output, state = cell(tf.random.uniform((batch_size, 10)), initial_state)

Other info / logs:

    Traceback (most recent call last):
      File "test_gru_incompat.py", line 5, in <module>: output, state = cell(tf.random.uniform((batch_size, 10)), initial_state)
      File ".../lib/python3.5/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__: outputs = self.call(cast_inputs, *args, **kwargs)
      File ".../lib/python3.5/site-packages/tensorflow_core/python/keras/layers/recurrent.py", line 1846, in call: matrix_inner = K.dot(h_tm1, self.recurrent_kernel)
      File ".../lib/python3.5/site-packages/tensorflow_core/python/keras/backend.py", line 1678, in dot: out = math_ops.matmul(x, y)
      File ".../lib/python3.5/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper: return target(*args, **kwargs)
      File ".../lib/python3.5/site-packages/tensorflow_core/python/ops/math_ops.py", line 2797, in matmul: a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
      File ".../lib/python3.5/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 5631, in mat_mul: _ops.raise_from_not_ok_status(e, name)
      File ".../lib/python3.5/site-packages/tensorflow_core/python/framework/ops.py", line 6598, in raise_from_not_ok_status: six.raise_from(core._status_to_exception(e.code, message), None)
      File "<string>", line 3, in raise_from
    tensorflow.python.framework.errors_impl.InvalidArgumentError: In[0] is not a matrix. Instead it has shape [20] [Op:MatMul] name: gru_cell/MatMul/
tensorflowtensorflow | Inconsistent CPU/GPU results of gather_nd | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution: Linux Ubuntu 18.04; mobile device: N/A; TensorFlow installed from (source or binary): binary; TensorFlow version: 2.0.0; Python version: 3.6.8; CUDA/cuDNN version: 10.2; GPU model and memory: 11 GB.

Describe the current behavior: when I set the device to GPU, the code runs "correctly"; when I set the device to CPU, the code raises an error:

    tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[4] = [4, 1] does not index into param shape [5,1] [Op:GatherNd]

Describe the expected behavior: consistent results on CPU and GPU.

Code to reproduce the issue:

    import os
    import tensorflow as tf
    import numpy as np

    np.random.seed(2222222)
    with tf.device('/cpu:0'):
        a = np.random.rand(5, 1)
        print(a)
        b = tf.gather_nd(a, [[0, 1], [1, 1], [2, 1], [3, 1], [4, 1]])
        print(b.numpy())
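For reference, the NumPy equivalent of the same gather makes the out-of-range index explicit: for params of shape (5, 1), the only valid second coordinate is 0, so every pair [i, 1] is out of bounds. The sketch below (NumPy only; the tf.gather_nd correspondence is the standard one, but the claim about what the GPU kernel does is an inference, not verified source behavior) suggests the CPU is correct to raise and the GPU result is likely reading out of range silently:

```python
import numpy as np

np.random.seed(2222222)
a = np.random.rand(5, 1)

indices = [[0, 1], [1, 1], [2, 1], [3, 1], [4, 1]]

# NumPy's advanced indexing is the rough equivalent of tf.gather_nd:
rows = [i for i, _ in indices]
cols = [j for _, j in indices]
try:
    print(a[rows, cols])
except IndexError as exc:
    print("out of bounds, as the CPU kernel reports:", exc)

# With a valid column index the gather works everywhere:
print(a[rows, [0] * 5])  # equivalent to tf.gather_nd(a, [[i, 0] for i in range(5)])
```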
tensorflowtensorflow | tf.keras mixed precision Policy('mixed_bfloat16') not supported in Keras compile | Bug | System information: Have I written custom code: yes; OS platform and distribution: Colab; TensorFlow installed from (source or binary): binary; TensorFlow version: TF 2.1-rc0; Python version: 3.6.

Describe the current behavior: we are porting a GPU-based model to Cloud TPU. We use the Keras 'mixed_float16' mixed-precision policy to enable Tensor Cores on GPU without any code changes, and we are trying to use 'mixed_bfloat16' on Cloud TPU for maximal performance.

Describe the expected behavior: the model compiles with the 'mixed_bfloat16' policy, enabling mixed-precision training on Cloud TPU.

Code to reproduce the issue (Colab link):

    def compile_keras_model(dtype):
        policy = tf.keras.mixed_precision.experimental.Policy(dtype)
        tf.compat.v2.keras.mixed_precision.experimental.set_policy(policy)
        optimizer = tf.optimizers.SGD(learning_rate=0.1, momentum=0.9)
        model = tf.keras.applications.resnet50.ResNet50(weights=None)
        model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer,
                      metrics=['sparse_categorical_accuracy'])
        return model

    gpu_model = compile_keras_model('mixed_float16')
    tpu_model = compile_keras_model('mixed_bfloat16')

Other info / logs:

    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>
         13 gpu_model = compile_keras_model('mixed_float16')
    ---> 14 tpu_model = compile_keras_model('mixed_bfloat16')

    12 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/op_def_library.py in _SatisfiesTypeConstraint(dtype, attr_def, param_name)
         59           "allowed values: %s" %
         60           (param_name, dtypes.as_dtype(dtype).name,
    ---> 61            ", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
         62
    TypeError: Value passed to parameter 'x' has DataType bfloat16 not in list of allowed values: float16, float32, float64, complex64, complex128
tensorflowtensorflow | StackedRNNCells has an invalid example | Bug | URL(s) with the issue: (tf.keras.layers.StackedRNNCells API documentation). Description of issue (what needs changing): the documentation example does not actually use StackedRNNCells; there is no example for the class being documented. Ideally there would be both an example of the class and an example showing how the same behaviour would be implemented without the class.
tensorflowtensorflow | Docker install instructions page is broken now that v2.0 is the latest image | Bug | URL(s) with the issue: the Docker installation documentation page. Description of issue (what needs changing): since the latest Docker builds are now TensorFlow v2.x instead of v1.x, the Python example script on this page doesn't work out of the box. If users copy and paste the command from this page, they'll get an "AttributeError: module 'tensorflow' has no attribute 'enable_eager_execution'" error when trying to verify their Docker container. The Python example script can be changed to this to fix it:

    import tensorflow.compat.v1 as tf
    tf.enable_eager_execution()
    print(tf.reduce_sum(tf.random_normal([1000, 1000])))

A better alternative, however, would be to change it to a TF v2.0-native example.
tensorflowtensorflow | Multiple inputs for the iOS benchmark app | Bug | Hi, I'm trying to find a way in the benchmark params JSON to define multiple inputs to the network. Is it even possible?
tensorflowtensorflow | MultiWorkerMirroredStrategy performance is low: 2 GPUs / 2 nodes gives only a ~1.3x speed-up | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: Ubuntu 18.04; TensorFlow installed from (source or binary): pip install tensorflow-gpu; TensorFlow version: 2.0; Python version: 3.6.9; CUDA/cuDNN version: 10 / 7.6.4.38; GPU model and memory: Tesla P4, 8 GB.

Describe the current behavior: I ran the code described below.

Test 1, two machines:

    os.environ['TF_CONFIG'] = json.dumps({'cluster': {'worker': ['server1:12345', 'server2:12345']},
                                          'task': {'type': 'worker', 'index': 0}})

and on the other machine:

    os.environ['TF_CONFIG'] = json.dumps({'cluster': {'worker': ['server1:12345', 'server2:12345']},
                                          'task': {'type': 'worker', 'index': 1}})

When the script starts processing the first epoch, it crawls; 15 s/epoch is far too slow (see the expected behavior below).

Test 2, one machine:

    os.environ['TF_CONFIG'] = json.dumps({'cluster': {'worker': ['server1:12345']},
                                          'task': {'type': 'worker', 'index': 0}})

Describe the expected behavior: 5 s/epoch, the same as using strategy = tf.distribute.MirroredStrategy() on one GPU card.

Code:

    import ssl
    import os
    import json
    import argparse
    import time
    import numpy as np
    import tensorflow as tf
    import tensorflow_datasets as tfds

    ssl._create_default_https_context = ssl._create_unverified_context

    def configure_cluster(worker_hosts=None, task_index=-1):
        """Set the multi-worker cluster spec in the TF_CONFIG environment variable.
        Args: worker_hosts: comma-separated list of worker ip:port pairs.
        Returns: number of workers in the cluster."""
        tf_config = json.loads(os.environ.get('TF_CONFIG', '{}'))
        if tf_config:
            num_workers = len(tf_config['cluster'].get('worker', []))
        elif worker_hosts:
            workers = worker_hosts.split(',')
            num_workers = len(workers)
            if num_workers > 1 and task_index < 0:
                raise ValueError('Must specify task_index when number of workers > 1')
            task_index = 0 if num_workers == 1 else task_index
            os.environ['TF_CONFIG'] = json.dumps({
                'cluster': {'worker': workers},
                'task': {'type': 'worker', 'index': task_index}})
        else:
            num_workers = 1
        return num_workers

    parser = argparse.ArgumentParser(description='TensorFlow benchmark',
                                     formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--num_epochs', type=int, default=5, help='number of epochs')
    parser.add_argument('--batch_size_per_replica', type=int, default=32, help='input batch size')
    parser.add_argument('--worker_method', type=str, default='nccl')
    parser.add_argument('--worker_hosts', type=str, default='localhost:23456')
    parser.add_argument('--worker_index', type=int, default=0)
    args = parser.parse_args()

    worker_num = configure_cluster(args.worker_hosts, args.worker_index)
    batch_size = args.batch_size_per_replica * worker_num
    print('batch_size: %d' % batch_size)

    gpus = tf.config.experimental.list_physical_devices('GPU')
    print('physical gpu devices num:', len(gpus))
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)

    if args.worker_method == 'auto':
        communication = tf.distribute.experimental.CollectiveCommunication.AUTO
    elif args.worker_method == 'ring':
        communication = tf.distribute.experimental.CollectiveCommunication.RING
    else:
        communication = tf.distribute.experimental.CollectiveCommunication.NCCL

    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(communication=communication)

    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    print('logical gpu devices num:', len(logical_gpus))

    def resize(image, label):
        image = tf.image.resize(image, [128, 128]) / 255.0
        return image, label  # if as_supervised is True, returns image and label

    dataset, info = tfds.load('tf_flowers', split=tfds.Split.TRAIN, with_info=True, as_supervised=True)
    dataset = dataset.map(resize).repeat().shuffle(1024).batch(batch_size)
    options = tf.data.Options()
    options.experimental_distribute.auto_shard = False
    dataset = dataset.with_options(options)

    def build_and_compile_cnn_model():
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
            tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
            tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
            tf.keras.layers.Dropout(0.25),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation='relu'),
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.Dense(info.features['label'].num_classes, activation='softmax')
        ])
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
                      loss=tf.keras.losses.sparse_categorical_crossentropy,
                      metrics=[tf.keras.metrics.sparse_categorical_accuracy])
        return model

    with strategy.scope():
        multi_worker_model = build_and_compile_cnn_model()
    print('Now train the distributed model')

    class TimeHistory(tf.keras.callbacks.Callback):
        def on_train_begin(self, logs={}):
            self.times = []
            self.totaltime = time.time()
        def on_train_end(self, logs={}):
            self.totaltime = time.time() - self.totaltime
        def on_epoch_begin(self, batch, logs={}):
            self.epoch_time_start = time.time()
        def on_epoch_end(self, batch, logs={}):
            self.times.append(time.time() - self.epoch_time_start)

    time_callback = TimeHistory()
    steps_per_epoch = 100
    print('Running benchmark...')
    multi_worker_model.fit(dataset, steps_per_epoch=steps_per_epoch,
                           epochs=args.num_epochs, callbacks=[time_callback])
    per_epoch_time = np.mean(time_callback.times[1:])
    print('per_epoch_time:', per_epoch_time)
    img_sec = batch_size * steps_per_epoch / per_epoch_time
    print('Result: %.1f pic/sec' % img_sec)

In Test 2 (only 1 worker): 440 pic/sec (batch_size 128). In Test 1 (2 workers): 610 pic/sec (batch_size 128 * 2); expected 440 * 2 = 880. Question 1: with MultiWorkerMirroredStrategy and worker_num > 1, why is training so much slower than expected?
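To quantify the observed scaling (numbers from the runs above): ~440 img/s on one worker vs ~610 img/s on two workers is a 1.39x speedup, i.e. roughly 69% scaling efficiency, against the naive 2x (880 img/s) expectation. A small sketch of that arithmetic:

```python
def scaling_efficiency(single_rate, multi_rate, num_workers):
    """Speedup and parallel efficiency for a throughput benchmark."""
    speedup = multi_rate / single_rate
    efficiency = speedup / num_workers
    return speedup, efficiency

speedup, eff = scaling_efficiency(440.0, 610.0, 2)
print(f"speedup: {speedup:.2f}x, efficiency: {eff:.0%}")  # ~1.39x, ~69%
```

The ~31% loss is what the gradient all-reduce over the network has to account for, which is why the choice of CollectiveCommunication backend and network bandwidth matter here.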
tensorflowtensorflow | tensor_diag_part does not vectorize | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: macOS 10.15.1; TensorFlow installed from (source or binary): binary; TensorFlow version: 2.0.0; Python version: 3.7.

Describe the current behavior: this code throws an error, despite tf.linalg.tensor_diag_part working on each of the two matrices individually:

    k = tf.convert_to_tensor(np.arange(8).reshape(2, 2, 2))
    tf.vectorized_map(tf.linalg.tensor_diag_part, k)

    UnrecognizedFlagError: Unknown command line flag 'f'

It should return the diagonal of each of the two submatrices.
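The expected batched behavior can be checked against NumPy, whose np.diagonal with axis1=-2, axis2=-1 extracts the diagonal of each trailing sub-matrix; this is the result the tf.vectorized_map call above should produce:

```python
import numpy as np

k = np.arange(8).reshape(2, 2, 2)
# k[0] = [[0, 1], [2, 3]]  -> diagonal [0, 3]
# k[1] = [[4, 5], [6, 7]]  -> diagonal [4, 7]

batched_diag = np.diagonal(k, axis1=-2, axis2=-1)
print(batched_diag)
# [[0 3]
#  [4 7]]
```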
tensorflow/tensorflow | loss=tf.keras.backend.sparse_categorical_crossentropy differs from loss='sparse_categorical_crossentropy' | Bug | The two runs behave differently, and the version using the backend function performs significantly worse. This was run on a Google Colab notebook (reset each time) with GPU acceleration.

Code 1:

    %tensorflow_version 2.x
    import tensorflow as tf
    print(tf.__version__)

    mnist = tf.keras.datasets.fashion_mnist
    (training_images, training_labels), (test_images, test_labels) = mnist.load_data()
    training_images = training_images / 255.0
    training_images = tf.reshape(training_images, (*training_images.shape, 1))
    test_images = test_images / 255.0
    test_images = tf.reshape(test_images, (*test_images.shape, 1))

    model = tf.keras.models.Sequential([
        tf.keras.layers.InputLayer(input_shape=(28, 28)),
        tf.keras.layers.Reshape((28, 28, 1)),
        tf.keras.layers.Conv2D(64, (3, 3), activation=tf.nn.leaky_relu, input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Conv2D(64, (3, 3), activation=tf.nn.leaky_relu),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation=tf.nn.leaky_relu),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.backend.sparse_categorical_crossentropy,
                  metrics=['accuracy'])
    model.summary()
    model.fit(training_images, training_labels, epochs=5)
    test_loss = model.evaluate(test_images, test_labels)

Results 1 (TensorFlow 2.x selected, 2.0.0):

    Downloading data from ...-idx1-ubyte.gz 32768/29515 - 0s 0us/step
    Downloading data from ...-idx3-ubyte.gz 26427392/26421880 - 0s 0us/step
    Downloading data from ...-idx1-ubyte.gz 8192/5148 - 0s 0us/step
    Downloading data from ...-idx3-ubyte.gz 4423680/4422102 - 0s 0us/step
    Train on 60000 samples
    Epoch 1/5 60000/60000 - 24s 392us/sample - loss: 0.4204 - accuracy: 0.1043
    Epoch 2/5 60000/60000 - 18s 301us/sample - loss: 0.2878 - accuracy: 0.1026
    Epoch 3/5 60000/60000 - 18s 307us/sample - loss: 0.2474 - accuracy: 0.1016
    Epoch 4/5 60000/60000 - 18s 293us/sample - loss: 0.2156 - accuracy: 0.1011
    Epoch 5/5 60000/60000 - 18s 296us/sample - loss: 0.1936 - accuracy: 0.1011

Code 2 is identical except for the loss argument:

    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

Results 2 (TensorFlow 2.x selected, 2.0.0):

    Train on 60000 samples
    Epoch 1/5 60000/60000 - 11s 186us/sample - loss: 0.4279 - accuracy: 0.8448
    Epoch 2/5 60000/60000 - 6s 92us/sample - loss: 0.2905 - accuracy: 0.8934
    Epoch 3/5 60000/60000 - 6s 92us/sample - loss: 0.2466 - accuracy: 0.9096
    Epoch 4/5 60000/60000 - 6s 92us/sample - loss: 0.2177 - accuracy: 0.9183
    Epoch 5/5 60000/60000 - 5s 91us/sample - loss: 0.1942 - accuracy: 0.9271
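A plausible reading of these logs (an assumption, not confirmed in the thread): the loss curves are nearly identical in both runs, so the loss functions themselves agree, and the divergence is only in the reported accuracy. When the loss is the string 'sparse_categorical_crossentropy', Keras can resolve the string metric 'accuracy' to sparse_categorical_accuracy; with an arbitrary callable it cannot, so the metric compares the wrong representations. A quick check that the two loss functions compute the same values:

```python
import numpy as np
import tensorflow as tf

y_true = tf.constant([0, 2])
y_pred = tf.constant([[0.7, 0.2, 0.1],
                      [0.1, 0.1, 0.8]])

# The backend function and the built-in loss compute the same quantity.
backend_loss = tf.keras.backend.sparse_categorical_crossentropy(y_true, y_pred)
builtin_loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
print(np.allclose(backend_loss.numpy(), builtin_loss.numpy()))  # True
```

If that reading is right, passing metrics=['sparse_categorical_accuracy'] explicitly should make the first run report sensible accuracy as well.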
tensorflow/tensorflow | bug with embedding layer for sequence positions using tf.cumsum | Bug | Greetings. I want to create an embedding layer for token positions alongside an embedding layer for the tokens in the sequence. The first way I did it: I created a pos_ids input tensor and fed a NumPy array of positions into the input, then used an embedding layer on that input as usual:

    word_ids = Input(dtype='int32', batch_shape=(batch_size, max_seq_length), name='word_ids')
    pos_ids = Input(dtype='int32', batch_shape=(batch_size, max_seq_length), name='pos_ids')
    pos_embedding_layer = tensorflow.keras.layers.Embedding(
        input_dim=max_seq_length + 1, output_dim=word_embedding_size,
        input_length=max_seq_length, mask_zero=True, trainable=True)
    embedding_layer = tensorflow.keras.layers.Embedding(
        input_dim=len(d) + 1, output_dim=word_embedding_size,
        input_length=max_seq_length, mask_zero=True, trainable=True)
    pos_ids_embedding = pos_embedding_layer(pos_ids)

With this, the network behaves as expected, with 18432 parameters for the embedding layer; this is what I get when I print the model summary (see attached screenshot). However, I want to use tf.cumsum as another way to create the position indices; that way I can avoid having pos_ids as another input. My code is as follows, which I believe is correct:

    word_ids = Input(dtype='int32', batch_shape=(batch_size, max_seq_length), name='word_ids')
    pos_embedding_layer = tensorflow.keras.layers.Embedding(
        input_dim=max_seq_length + 1, output_dim=word_embedding_size,
        input_length=max_seq_length, mask_zero=True, trainable=True)
    embedding_layer = tensorflow.keras.layers.Embedding(
        input_dim=len(d) + 1, output_dim=word_embedding_size,
        input_length=max_seq_length, mask_zero=True, trainable=True)
    pos_tensor = tf.cumsum(tf.ones_like(word_ids, 'int32'), axis=1)
    pos_ids_embedding = pos_embedding_layer(pos_tensor)

However, with this, pos_embedding_layer has no parameters when I print model.summary(), and the layer is not connected to anything in the network (see attached screenshot). I guess this is a bug; please correct me if that is not the case. Also, how can I fix this problem? Thx.
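One way to get the cumsum-derived positions tracked by Keras (assuming the cause is that plain TF ops applied to symbolic inputs are not recorded as layers in the TF 2.0 functional API) is to wrap the op in a Lambda layer. The names and sizes below are placeholders, not the reporter's actual values:

```python
import tensorflow as tf

max_seq_length, vocab_size, embed_size = 16, 100, 8  # placeholder sizes

word_ids = tf.keras.Input(shape=(max_seq_length,), dtype="int32", name="word_ids")

# Wrapping tf.cumsum in a Lambda layer makes the position tensor part of the
# Keras graph, so the position embedding stays connected in model.summary().
pos_ids = tf.keras.layers.Lambda(
    lambda ids: tf.cumsum(tf.ones_like(ids), axis=1))(word_ids)

pos_embedding = tf.keras.layers.Embedding(max_seq_length + 1, embed_size)(pos_ids)
word_embedding = tf.keras.layers.Embedding(vocab_size + 1, embed_size)(word_ids)

outputs = tf.keras.layers.Add()([word_embedding, pos_embedding])
model = tf.keras.Model(word_ids, outputs)
model.summary()  # both Embedding layers now appear with parameters
```

With the Lambda in place, both embedding layers show up in the summary with trainable weights.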
tensorflow/tensorflow | missing information: embedding_lookup automatically returns 0 for an out-of-vocab index | Bug | Greetings. I expected the embedding lookup to raise an error when a word id is beyond the fixed, pre-determined vocab size. That is not the case: tf.nn.embedding_lookup automatically returns 0 for an out-of-vocab index. While this may be convenient, it is risky, because there is no mention of it on the website; I personally did not know this until digging into the code carefully, so I think the documentation should be updated. Example code showing what embedding_lookup returns:

    sess = tf.compat.v1.InteractiveSession()
    params = tf.constant([10, 20, 30, 40])
    ids = tf.constant([0, 1, 2, 3, 4, 5])
    tf.nn.embedding_lookup(params, ids).eval()
    # array([10, 20, 30, 40, 0, 0], dtype=int32)
tensorflow/tensorflow | bug in person detection TF Lite example | Bug | TF 1.15. I just followed the steps in the person detection example. After I got vww_96_grayscale_frozen.pb, when I tried to generate vww_96_grayscale_quantized.tflite, I got an error: ValueError: Cannot set tensor: Dimension mismatch. Can you see it, @dansitu?

    Traceback (most recent call last):
      File "get_tflite.py", line 32, in <module>
        tflite_quant_model = converter.convert()
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/lite.py", line 993, in convert
        inference_output_type)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/lite.py", line 239, in _calibrate_quantize_model
        inference_output_type, allow_float)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/optimize/calibrator.py", line 75, in calibrate_and_quantize
        self._calibrator.FeedTensor(calibration_sample)
      File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/optimize/tensorflow_lite_wrap_calibration_wrapper.py", line 112, in FeedTensor
        return _tensorflow_lite_wrap_calibration_wrapper.CalibrationWrapper_FeedTensor(self, input_value)
    ValueError: Cannot set tensor: Dimension mismatch
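This error typically means the samples yielded by the representative dataset used for calibration do not match the model's input shape. A hypothetical calibration generator for a 96x96 grayscale input (the shape is an assumption based on the model name; check the frozen graph's actual input):

```python
import numpy as np

# Hypothetical calibration generator: each yielded sample must match the
# converter's input shape exactly, here assumed to be [1, 96, 96, 1].
def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

sample = next(representative_dataset())
print(sample[0].shape)  # (1, 96, 96, 1)
```

In a real conversion this generator would be assigned to converter.representative_dataset and should iterate over actual preprocessed images rather than random noise.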
tensorflow/tensorflow | please fix the Examples section of tf.constant_initializer | Bug | Documentation issue. Description of what needs changing: please fix the Examples section of tf.constant_initializer. The Examples section of the page is not rendered properly, making it unreadable (for example, the part explaining why someone would use this method and how it is useful). The rest of the page (source link, parameters, returns, raises) should also be checked against the documentation contributor guide.
tensorflow/tensorflow | allow_growth not working when using Estimator and MirroredStrategy distribution | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu 18.04.3 LTS. TensorFlow installed from: binary. TensorFlow version: 1.14.0. Python version: 3.6.8. CUDA/cuDNN version: 10.0.130 / 7.6.4. GPU model and memory: 8x Tesla P40, 22919 MiB.

Describe the current behavior: when I use Estimator with tf.contrib.distribute.MirroredStrategy, it allocates all GPU memory even if I use only one GPU:

    import tensorflow as tf
    session_config = tf.ConfigProto()
    session_config.gpu_options.allow_growth = True
    session_config.allow_soft_placement = True
    strategy = tf.contrib.distribute.MirroredStrategy(num_gpus=num_gpus)
    config = tf.estimator.RunConfig(session_config=session_config, train_distribute=strategy)
    estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir=model_dir, config=config, params=params)
    estimator.train(...)

If I set os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true', GPU memory is allocated correctly. I don't know whether it's related to session_config.allow_soft_placement = True, but if I don't set that value, it raises the error: Cannot assign a device for operation ArgMax.

Describe the expected behavior: allow_growth should work when using Estimator with MirroredStrategy distribution.

Code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem):

    import os
    import tensorflow as tf
    import tensorflow.keras.layers as layers
    import numpy as np

    def model_fn(features, labels, mode):
        """A simple 2-class model."""
        model = tf.keras.Sequential([
            layers.Dense(80, activation='relu'),
            layers.Dense(2),
        ])
        logits = model(features)
        loss = tf.losses.softmax_cross_entropy(labels, logits)
        if mode == tf.estimator.ModeKeys.TRAIN:
            optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
            train_op = optimizer.minimize(loss, tf.train.get_global_step())
            return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    def input_fn():
        """Dataset that returns features and labels."""
        feature_mat = np.random.randn(10, 10)
        label_mat = np.random.randint(0, 2, size=(10, 2))
        dataset = tf.data.Dataset.from_tensor_slices((feature_mat, label_mat))
        return dataset.batch(1)

    os.environ['CUDA_VISIBLE_DEVICES'] = '0'  # set available GPU ids
    session_config = tf.ConfigProto()
    session_config.gpu_options.allow_growth = True
    session_config.allow_soft_placement = True
    strategy = tf.contrib.distribute.MirroredStrategy(num_gpus=1)
    config = tf.estimator.RunConfig(session_config=session_config, train_distribute=strategy)
    # If the three lines above are disabled and the following line is used
    # instead, GPU memory allocation is correct:
    # config = tf.estimator.RunConfig(session_config=session_config)
    estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)
    while True:
        estimator.train(input_fn)
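The workaround mentioned above can be sketched as follows; the key constraint is that the environment variable must be set before TensorFlow initializes any GPU devices:

```python
import os

# TF_FORCE_GPU_ALLOW_GROWTH must be set before TensorFlow touches the GPU,
# so set it at the very top of the script, before importing tensorflow.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# import tensorflow as tf  # import only after the variable is set
```

Setting the variable after the first session or strategy has been created has no effect, which is a common reason this workaround appears not to work.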
tensorflow/tensorflow | program aborts after using tf.io.gfile.makedirs | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. TensorFlow installed from: pip install tensorflow==1.14.0. TensorFlow version: 1.14.0. Python version: 3.6.8. LSB: CentOS Linux release 7.6.1810 (Core). Describe the current behavior: I followed this link to set up the Hadoop environment. After executing the snippet below, my terminal aborts. Describe the expected behavior: it should create the test directory in the HDFS file system.

Code to reproduce the issue:

    import tensorflow as tf
    tf.io.gfile.makedirs('hdfs://kevin0:8020/user/root/test')

Other info / logs:

    loadFileSystem failed: error: Unable to get root cause for java.lang.NoClassDefFoundError;
    unable to get stack trace for java.lang.NoClassDefFoundError
    # A fatal error has been detected by the Java Runtime Environment:
    #  SIGSEGV (0xb) at pc=0x00007f6be49c64f1, pid=61570, tid=0x00007f6ca07c0740
    # JRE version: Java(TM) SE Runtime Environment (8.0_221-b11) (build 1.8.0_221-b11)
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.221-b11 mixed mode linux-amd64 compressed oops)
    # Problematic frame: C [libhdfs.so+0xa4f1]
    # Failed to write core dump. Core dumps have been disabled. To enable core
    # dumping, try "ulimit -c unlimited" before starting Java again.
    # An error report file with more information is saved as /root/hs_err_pid61570.log
    # The crash happened outside the Java Virtual Machine in native code.
    # See the problematic frame for where to report the bug.
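One commonly cited cause of a NoClassDefFoundError from libhdfs is that the Hadoop jars are not actually on the classpath when the Python process starts. A hedged environment sketch (the paths and the script name are assumptions; adjust HADOOP_HOME and JAVA_HOME to the local install):

```shell
# Assumed paths -- adjust HADOOP_HOME and JAVA_HOME to the local install.
export HADOOP_HOME=/usr/local/hadoop
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${JAVA_HOME}/jre/lib/amd64/server"

# Expanding the classpath with --glob puts the actual Hadoop jars (not just
# wildcard entries) on the classpath, which libhdfs needs at runtime.
export CLASSPATH="$("${HADOOP_HOME}/bin/hadoop" classpath --glob)"

python your_script.py  # hypothetical script calling tf.io.gfile.makedirs(...)
```

If the classpath is correct and the crash persists, the hs_err_pid log referenced above is the next place to look.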
tensorflow/tensorflow | attention on the value embeddings input | Bug | This line (L241) and this one (L368) should be: value_embedding = token_embedding(value_input).
tensorflow/tensorflow | API documentation: incorrect formatting for the tanh activation function | Bug | Description of issue (what needs changing): the Arguments section of the tanh activation documentation is not formatted correctly. I am uncertain as to why, but I suspect it may be due to a lack of a newline after the example block. Code here: L201–L221.
tensorflow/tensorflow | TFRecordWriter default write mode | Bug | Description of issue: I've been using TFRecordWriter for a while now, and I would be interested in the mode it uses when writing to a file. If I create two TFRecord files with the exact same name, what is the default writing behavior, append or rewrite? It would be crucial to know this for my use case. Thanks in advance.
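For reference, tf.io.TFRecordWriter opens the target path for writing rather than appending, so reopening the same path truncates the existing file. A small sketch that checks this behavior directly:

```python
import os
import tempfile
import tensorflow as tf

path = os.path.join(tempfile.mkdtemp(), "data.tfrecord")

# First writer writes two records.
with tf.io.TFRecordWriter(path) as writer:
    writer.write(b"first")
    writer.write(b"second")

# Opening the same path again truncates the file instead of appending.
with tf.io.TFRecordWriter(path) as writer:
    writer.write(b"third")

records = [r.numpy() for r in tf.data.TFRecordDataset(path)]
print(records)  # [b'third'] -- the first two records are gone
```

To append, the usual pattern is to write to a new shard (e.g. a numbered filename) and read the shards together with tf.data.TFRecordDataset over a file list.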
tensorflow/tensorflow | contribution guidelines for interactive notebooks need to be modified | Bug | Description of issue: the documentation under "interactive notebooks" needs to be modified to account for issues created by directly editing a Jupyter notebook in Colab. Direct editing in Colab (or using VS Code's Jupyter notebook support) and committing, as mentioned here, introduces additional unintended changes like prettifying and escaping Unicode symbols (see here for a related commit). Usage example: one would be useful and is needed for any PR touching Jupyter notebooks. Submitting a pull request: I am planning to submit a PR to improve the documentation.
tensorflow/tensorflow | why does model.summary() show all layers of a sub-model? | Bug | In an old version (I forget which one), a sub-model was shown in the summary only by its name, not with all of its layers, but now all layers are shown. My version: TF 2.0 GPU. Demo code:

    class Swish(keras.layers.Layer):
        def __init__(self):
            super(Swish, self).__init__()
            self.weight = self.add_weight(initializer='uniform', trainable=True)

        def call(self, inputs):
            return inputs * tf.sigmoid(self.weight * inputs)
tensorflow/tensorflow | tutorial on basic classification needs some documentation improvements | Bug | Description of issue: one of the first step-by-step tutorials explaining a neural network is the basic classification tutorial (here). However, it could benefit from additional explanation of a few terms like overfitting, optimizer, etc. We added a one-line description of overfitting to give users a first-hand idea of what it means, and later an extra line linking to TensorFlow's definition of overfitting, where users can find more information. The suggested change should make it easier for users to build an intuitive understanding of the term. Correct links: yes. Submitting a pull request: yes.
tensorflow/tensorflow | rewrite SECURITY.md with technical writing | Bug | Documentation issue: rewrite SECURITY.md with technical writing for better understanding.
tensorflow/tensorflow | description in README.md could be more descriptive | Bug | The description of TensorFlow in README.md could be more descriptive.
tensorflow/tensorflow | TensorFlow SavedModel export fails with AttributeError | Bug | I'm following the tutorial exactly as it is (here). Finally, when I want to export the trained model from this tutorial using model.save, I get this error message:

    AttributeError                            Traceback (most recent call last)
    <ipython-input> in <module>()
          3
          4 export_path = "/tmp/saved_models/{}".format(int(t))
    ----> 5 model.save(export_path, save_format='tf')

    4 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options)
        984
    --> 985     save.save_model(self, filepath, overwrite, include_optimizer, save_format,
        986                     signatures, options)
        987
        988   def save_weights(self, filepath, overwrite=True, save_format=None):

    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options)
        113   else:
    --> 114     saved_model_save.save(model, filepath, overwrite, include_optimizer,
        115                           signatures, options)

    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options)
         72   # default learning phase placeholder
         73   with K.learning_phase_scope(0):
    ---> 74     save_lib.save(model, filepath, signatures, options)

    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/save.py in save(obj, export_dir, signatures, options)
        905   # Note: we run this twice, since while constructing the view the first
        906   # time there can be side effects of creating variables.
    --> 907   _ = _SaveableView(checkpoint_graph_view)
        908   saveable_view = _SaveableView(checkpoint_graph_view)

    /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/save.py in __init__(self, checkpoint_view)
        189       concrete_functions = function.concrete_functions
        190       for concrete_function in concrete_functions:
    --> 191         if concrete_function.name not in seen_function_names:
        192           seen_function_names.add(concrete_function.name)
        193           self.concrete_functions.append(concrete_function)

    AttributeError: 'NoneType' object has no attribute 'name'

What's going on? Shouldn't it be possible to simply export this model to the SavedModel format? I tried with and without the save_format='tf' parameter.
tensorflow/tensorflow | documentation error on nn.ctc_loss | Bug | Description of issue (what needs changing): the logits and logits_time_major parameters are ill-defined. Currently the docs say:

    logits: tensor of shape [frames, batch_size, num_labels]; if logits_time_major == False, shape is [batch_size, frames, num_labels].
    logits_time_major: (optional) if True (default), logits is shaped [time, batch, logits]; if False, shape is [batch, time, logits].

Clearer description:

    logits: tensor of shape [frames, batch_size, num_labels] if logits_time_major == True; shape is [batch_size, frames, num_labels] if logits_time_major == False.
    logits_time_major: (optional) if True (default), logits is shaped [time, batch_size, num_labels]; if False, shape is [batch_size, time, num_labels].

Submitting a pull request: I'm preparing a PR.
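To illustrate the corrected convention, the two layouts should yield identical losses when one is the transpose of the other. A sketch with made-up shapes and labels, assuming the TF 2.x tf.nn.ctc_loss signature with dense labels:

```python
import tensorflow as tf

batch_size, frames, num_labels = 2, 5, 4
labels = tf.constant([[1, 2, 0], [2, 1, 1]], dtype=tf.int32)  # 0 is padding
label_length = tf.constant([2, 3])
logit_length = tf.constant([frames, frames])

# logits_time_major=False -> logits shaped [batch_size, frames, num_labels]
logits_bm = tf.random.normal([batch_size, frames, num_labels])
loss_bm = tf.nn.ctc_loss(labels, logits_bm, label_length, logit_length,
                         logits_time_major=False, blank_index=0)

# logits_time_major=True (default) -> logits shaped [frames, batch_size, num_labels]
logits_tm = tf.transpose(logits_bm, [1, 0, 2])
loss_tm = tf.nn.ctc_loss(labels, logits_tm, label_length, logit_length,
                         logits_time_major=True, blank_index=0)

# The same data in either layout gives the same per-example loss.
```

The only difference between the two calls is the axis order of the logits, which is exactly what the corrected docstring should convey.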