repository | issue title | labels | body |
|---|---|---|---|
tensorflowtensorflow | Build of current master fails due to missing files | Bug | Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:build_template. System information: OS platform and distribution (e.g., Linux Ubuntu 16.04): Debian 11; Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a; TensorFlow installed from (source or binary): source; TensorFlow version: git master; Python version: 3.9, installed using virtualenv; Bazel version (if compiling from source): 4.1.0; GCC/compiler version (if compiling from source): 11; CUDA/cuDNN version: n/a; GPU model and memory: n/a. Describe the problem: building TF from the current source on GitHub fails due to missing files, all from LLVM OpenMP: kmp.h, kmp_platform.h, kmp_os.h. Provide the exact sequence of commands/steps that you executed before running into the problem: my build command is as follows: bazel build --config=mkl --config=nogcp --config=nonccl -c opt --copt=-march=native --copt=-O3 -s //tensorflow/tools/pip_package:build_pip_package. Any other info/logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached. |
tensorflowtensorflow | Re-initialize backend on demand | Bug | Describe the current behavior: while using JAX, I have a use case where I wish to control the CPU devices being used based on a num_devices input, but this can't be changed once the backend has been initialized: import os; import jax; from jax.lib import xla_bridge; from jaxlib import xla_client; print(jax.devices("cpu")) gives [CpuDevice(id=0)]; then os.environ["XLA_FLAGS"] = "--xla_force_host_platform_device_count=8"; print(xla_client.get_local_backend("cpu").devices()) still gives [CpuDevice(id=0)]. Describe the expected behavior: I wish to re-initialize the backend so that the new backend picks up the environment variable and returns 8 CPU devices in the above example. |
tensorflowtensorflow | What is the reason for very low GPU utilization? | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution: CentOS Linux release 7.9.2009 (Core) (lsb_release -a: LSB Version :core-4.1-amd64:core-4.1-noarch, Distributor ID: CentOS, Release: 7.9.2009, Codename: Core); Mobile device: n/a; TensorFlow installed from (source or binary): binary; TensorFlow version (pip freeze | grep tensorflow): tensorflow-estimator 2.2.0, tensorflow-gpu 2.2.0; Python version: 3.8.5 (default, Mar 31 2021, 02:37:07) [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)] on Linux; Bazel version (if compiling from source): n/a; GCC/compiler version (if compiling from source): gcc 7.3.0, Copyright (C) 2017 Free Software Foundation, Inc.; CUDA/cuDNN version: stat /usr/local/cuda shows /usr/local/cuda -> /usr/local/cuda-10.2 (symbolic link, Size 20, IO Block 4096, Access 0777/lrwxrwxrwx, Uid/Gid 0/root, accessed 2021-05-20 10:43:06, modified/changed 2020-09-21 09:39:18); GPU model and memory: sudo lshw -C display reports two VGA-compatible controllers, both GP102 [GeForce GTX 1080 Ti] (vendor: NVIDIA Corporation, version a1, width 64 bits, clock 33MHz, capabilities pm msi pciexpress vga_controller bus_master cap_list rom, driver nvidia, latency 0), one at bus info pci@0000:06:00.0 (IRQ 89, memory f8000000-f8ffffff, a0000000-afffffff, b0000000-b1ffffff, ioport d000 size 128, memory f9000000-f907ffff) and one at bus info pci@0000:05:00.0 (IRQ 88, memory fa000000-faffffff, c0000000-cfffffff, d0000000-d1ffffff, ioport e000 size 128, memory fb000000-fb07ffff). You can also obtain the TensorFlow version with python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)", which gives v2.2.0-rc4-8-g2b96f3662b 2.2.0. Describe the current/expected behavior: I am not sure why this code is barely using GPU memory and why the GPU utilization is very low; it shows it is using both of my GPUs. How can I fix this? Also, for 1000 images, how long does this step take, approximately?
python feature_extraction.py --input_list vertex_path_GT.txt --model pointnet_hico --model_path feature_extraction/model_10000.ckpt
I run the code from this directory: scratch3/research/code/DJ-RN/DawnLight/pointnet (screenshots from 2021-06-08 02:13:21 and 02:13:26 attached). Here's a copy of the code:
import argparse
import math
import h5py
import numpy as np
import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import socket
import importlib
import os
import sys
# import cPickle as pickle
import pickle

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(BASE_DIR)
sys.path.append(os.path.join(BASE_DIR, 'models'))
sys.path.append(os.path.join(BASE_DIR, 'utils'))
import tf_util

parser = argparse.ArgumentParser()
parser.add_argument('--gpu', type=int, default=0, help='GPU to use [default: GPU 0]')
parser.add_argument('--model', default='pointnet_hico', help='Model name: pointnet_cls or pointnet_cls_basic [default: pointnet_cls]')
parser.add_argument('--num_point', type=int, default=1228, help='Point number: 256/512/1024/2048 [default: 1024]')
parser.add_argument('--model_path', default='log/model.ckpt', help='Model checkpoint file path [default: log/model.ckpt]')
parser.add_argument('--input_list', default='', help='Path list of your point cloud files [default: pc_list.txt]')
FLAGS = parser.parse_args()

NUM_POINT = FLAGS.num_point
GPU_INDEX = FLAGS.gpu
GPU_INDEX = 1
print('GPU INDEX:', GPU_INDEX)
MODEL_PATH = FLAGS.model_path
BATCH_SIZE = 1
MODEL = importlib.import_module(FLAGS.model)  # import network module
MODEL_FILE = os.path.join(BASE_DIR, 'models', FLAGS.model + '.py')
MAX_NUM_POINT = 1228
NUM_CLASSES = 600
HOSTNAME = socket.gethostname()
print('hostname:', HOSTNAME)

def evaluate():
    with tf.device('/gpu:' + str(GPU_INDEX)):
        with tf.device('/device:GPU:1'):
            pointclouds_pl = MODEL.placeholder_inputs(BATCH_SIZE, NUM_POINT)
            is_training_pl = tf.placeholder(tf.bool, shape=())
            # Simple model
            feat = MODEL.get_model(pointclouds_pl, is_training_pl)
    # Add ops to save and restore all the variables
    saver = tf.train.Saver()
    # Create a session
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    config.allow_soft_placement = True
    config.log_device_placement = True
    sess = tf.Session(config=config)
    # Restore variables from disk
    saver.restore(sess, MODEL_PATH)
    ops = {'pointclouds_pl': pointclouds_pl, 'is_training_pl': is_training_pl, 'feat': feat}
    eval_one_epoch(sess, ops)

def eval_one_epoch(sess, ops, is_training=False, input_list=None):
    with open(FLAGS.input_list, 'r') as f:
        input_list = f.readlines()
    for fn in input_list:
        current_data = pickle.load(open(fn, 'rb'))
        current_data = current_data[None, :NUM_POINT]
        feed_dict = {ops['pointclouds_pl']: current_data, ops['is_training_pl']: is_training}
        feat = sess.run(ops['feat'], feed_dict=feed_dict)
        print('filename:', fn)
        pickle.dump(feat, open(fn[:-4] + '_feature.pkl', 'wb'))

with tf.Graph().as_default():
    evaluate()
Here's the nvtop output (screenshot from 2021-06-08 02:24:06 attached). |
tensorflowtensorflow | How to get any TensorFlow version to work with CUDA 10.2 in CentOS 7 | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): -; OS platform and distribution (e.g., Linux Ubuntu 16.04): CentOS 7; Mobile device: n/a; TensorFlow installed from (source or binary): binary; TensorFlow version (use command below): see logs; Python version: 3.8.5; Bazel version (if compiling from source): n/a; GCC/compiler version (if compiling from source): n/a; CUDA/cuDNN version: CUDA 10.2; GPU model and memory: two GeForce GTX 1080 Ti. You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with: (TF 1.0) python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"; (TF 2.0) python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)". Describe the current/expected behavior: how should I fix this in CentOS 7?
[jalal@goku ~]$ pip freeze | grep tensorflow
tensorflow-estimator==2.2.0
tensorflow-gpu==2.2.0
[jalal@goku ~]$ python
Python 3.8.5 (default, Mar 31 2021, 02:37:07) [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> print("Num GPUs Available:", len(tf.config.list_physical_devices('GPU')))
2021-06-07 23:50:07.811271: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2021-06-07 23:50:07.867796: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: pciBusID: 0000:05:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.6705GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
2021-06-07 23:50:07.869403: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 1 with properties: pciBusID: 0000:06:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1 coreClock: 1.6705GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
2021-06-07 23:50:07.870136: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-10.0/lib64
2021-06-07 23:50:07.874249: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2021-06-07 23:50:07.877819: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2021-06-07 23:50:07.878745: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2021-06-07 23:50:07.882687: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2021-06-07 23:50:07.884788: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2021-06-07 23:50:07.890952: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2021-06-07 23:50:07.891011: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1598] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at ... for how to download and set up the required libraries for your platform. Skipping registering GPU devices...
Num GPUs Available: 0
This is despite having two GPUs. Also:
[jalal@goku ~]$ lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.9.2009 (Core)
Release: 7.9.2009
Codename: Core
[jalal@goku ~]$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
I tried the following, as suggested by issuecomment-629801937, and it didn't work:
[jalal@goku DJRN]$ ls -l /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudart.so.10.2
lrwxrwxrwx 1 root root 20 Sep 21 2020 /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudart.so.10.2 -> libcudart.so.10.2.89
[jalal@goku DJRN]$ sudo ln -s /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudart.so.10.2 /usr/lib/x86_64-linux-gnu/libcudart.so.10.1
[sudo] password for jalal:
ln: failed to create symbolic link '/usr/lib/x86_64-linux-gnu/libcudart.so.10.1': No such file or directory
To be specific, I need TensorFlow to work with CUDA 10.2. I am fine with any version of TensorFlow (my preference is TensorFlow 2); however, I couldn't find a version that works with CUDA 10.2 in the tested build configurations. Also, based on this, my CUDA version is 10.2, which is different from both nvidia-smi and nvcc --version:
stat /usr/local/cuda
File: /usr/local/cuda -> /usr/local/cuda-10.2
Size: 20, Blocks: 0, IO Block: 4096, symbolic link
Access: (0777/lrwxrwxrwx), Uid: (0/root), Gid: (0/root)
Access: 2021-05-20 10:43:06; Modify: 2020-09-21 09:39:18; Change: 2020-09-21 09:39:18
P.S. I have made my virtual environment using the python venv command and don't want to use conda or pyenv. |
tensorflowtensorflow | ptxas exited with non-zero error code 256 | Bug | Version: tensorflow 2.3.0 (GPU). Shell output:
2021-06-06 03:21:05.290270: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-06-06 03:21:07.202513: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-06-06 03:21:07.231310: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-06 03:21:07.232673: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:00:08.0 name: Tesla P40 computeCapability: 6.1 coreClock: 1.531GHz coreCount: 30 deviceMemorySize: 22.38GiB deviceMemoryBandwidth: 323.21GiB/s
2021-06-06 03:21:07.232823: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-06-06 03:21:07.234815: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-06-06 03:21:07.236731: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-06-06 03:21:07.237019: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-06-06 03:21:07.239001: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-06-06 03:21:07.240099: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-06-06 03:21:07.244241: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-06-06 03:21:07.244420: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-06 03:21:07.245793: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-06 03:21:07.247102: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-06-06 03:21:07.247458: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-06-06 03:21:07.254386: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2399995000 Hz
2021-06-06 03:21:07.254792: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f7f42b809f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-06-06 03:21:07.254818: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-06-06 03:21:07.353171: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-06 03:21:07.354886: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f7f42a982f0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-06-06 03:21:07.354922: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla P40, Compute Capability 6.1
2021-06-06 03:21:07.355126: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-06 03:21:07.356205: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:00:08.0 name: Tesla P40 computeCapability: 6.1 coreClock: 1.531GHz coreCount: 30 deviceMemorySize: 22.38GiB deviceMemoryBandwidth: 323.21GiB/s
2021-06-06 03:21:07.356253: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-06-06 03:21:07.356289: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-06-06 03:21:07.356307: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-06-06 03:21:07.356322: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-06-06 03:21:07.356337: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-06-06 03:21:07.356352: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-06-06 03:21:07.356367: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-06-06 03:21:07.356421: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-06 03:21:07.357489: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-06 03:21:07.358524: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-06-06 03:21:07.358576: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-06-06 03:21:08.072108: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-06-06 03:21:08.072157: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2021-06-06 03:21:08.072167: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2021-06-06 03:21:08.072387: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-06 03:21:08.073502: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-06-06 03:21:08.074578: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 21292 MB memory) -> physical GPU (device: 0, name: Tesla P40, pci bus id: 0000:00:08.0, compute capability: 6.1)
Model: "movinet_classifier"
Layer (type)                          Output Shape                    Param #
image (InputLayer)                    [(None, None, None, None)]     0
movinet (Movinet)                     (None, None, No...             911583
classifier_head (ClassifierH...)      (None, 600)                    2214488
Total params: 3,126,071
Trainable params: 3,111,799
Non-trainable params: 14,272
2021-06-06 03:21:17.983060: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-06-06 03:21:18.261529: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-06-06 03:21:19.360108: W tensorflow/stream_executor/gpu/asm_compiler.cc:81] Running ptxas --version returned 256
2021-06-06 03:21:19.405225: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: ptxas exited with non-zero error code 256, output: Relying on driver to perform ptx compilation. Modify $PATH to customize ptxas location. This message will be only logged once.
2021-06-06 03:21:19.592207: W tensorflow/compiler/xla/service/gpu/buffer_comparator.cc:592] Internal: ptxas exited with non-zero error code 256, output: Relying on driver to perform ptx compilation. Setting XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda or modifying $PATH can be used to set the location of ptxas. This message will only be logged once.
2021-06-06 03:21:19.671090: F tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:419] ptxas returned an error during compilation of ptx to sass: 'Internal: ptxas exited with non-zero error code 256, output:' If the error message indicates that a file could not be written, please verify that sufficient filesystem space is provided.
Aborted (core dumped) |
tensorflowtensorflow | Incorrect figure on word2vec tutorial | Bug | URL(s) with the issue: (word2vec tutorial). Description of issue (what needs changing): the summary figure seems incorrect. The figure shows the index of "shimmered" as 7, while the code above it says the index is 5. The red part of the figure has the words "temperature" and "code", which don't appear in the input sentence at all. The figure also has indices like 784 and 589; I don't know what's going on there. As a result, the figure is confusing for learners. Please fix the figure. Submit a pull request? No. |
tensorflowtensorflow | Ambiguous example on word2vec tutorial | Bug | URL(s) with the issue: (word2vec tutorial). Description of issue (what needs changing): the code in the tutorial outputs the following: target_index: 3; target_word: road; context_indices: [1 2 1 4 3]; context_words: ['the', 'wide', 'the', 'shimmered', 'road']; label: [1 0 0 0 0]. The context word "the" has both label 1 and label 0. I feel this example is ambiguous: when generating negative samples, should the word in the positive sample be excluded? Submit a pull request? No; I'm learning word2vec and am not capable of writing a correct algorithm. |
tensorflowtensorflow | tf.linalg.eigh yields invalid eigensystem when used inside a decorated tf.function with jit_compile=True flag | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04 host using docker image tensorflow/tensorflow:2.5.0-gpu; Mobile device: n/a; TensorFlow installed from (source or binary): docker image, tag 2.5.0-gpu; TensorFlow version (use command below): v2.5.0-rc3-213-ga4dfb8d1a71 2.5.0; Python version: 3.6.9; Bazel version (if compiling from source): n/a; GCC/compiler version (if compiling from source): n/a; CUDA/cuDNN version: /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudart.so.11.2.152, /usr/local/cuda-11.2/targets/x86_64-linux/lib/libcudart_static.a; GPU model and memory: NVIDIA RTX 2070, 8 GB. Describe the current behavior: using the jit_compile=True flag for a decorated function that uses tf.linalg.eigh returns an incorrect solution; removing the flag yields a correct solution. Note that this does work in TF 2.4.1 using the same code with experimental_compile=True. It also fails when CUDA_VISIBLE_DEVICES=-1 is set to disable GPU evaluation. Describe the expected behavior: the code snippet pasted below, which tests that the eigensystem is a valid solution, should not raise an error. Contributing: Do you want to contribute a PR? (yes/no): possibly, if I can get some guidance on where the issue might be; as it stands, I wouldn't know where to begin. Standalone code to reproduce the issue (fails both with and without this environment variable set to disable GPU evaluation):
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import numpy as np
import tensorflow as tf

a_mat = np.array([[0, 1, 1, 1],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [1, 1, 1, 0]], dtype=np.float32)

@tf.function(input_signature=[tf.TensorSpec(None, dtype=tf.float32)], jit_compile=False)
def eigh_uncompiled(arg):
    return tf.linalg.eigh(arg)

@tf.function(input_signature=[tf.TensorSpec(None, dtype=tf.float32)], jit_compile=True)
def eigh_compiled(arg):
    return tf.linalg.eigh(arg)

for eigh in [np.linalg.eigh, eigh_uncompiled, eigh_compiled]:
    vals, vecs = eigh(a_mat)
    # Assert A x = lambda x for the eigensystem
    print(tf.linalg.matmul(a_mat, vecs), vals[tf.newaxis] * vecs)
    if not np.allclose(tf.linalg.matmul(a_mat, vecs), vals[tf.newaxis] * vecs, atol=1e-6):
        raise AssertionError(f"Test failed for function {eigh.__name__}")
Other info / logs: n/a; should be easy to reproduce with the above code snippet. |
tensorflowtensorflow | Modify tf.math.reduce_std so that it is compatible with ragged tensors | Bug | Same issue as reduce_variance, addressed in #37000, but for reduce_std. |
tensorflowtensorflow | Output from TensorFlow Lite doesn't match a Python script generating output from a layer if I train the model | Bug | (Image from "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference".) Hello, I am trying to track down an error. Simply put, I take the input, weights, zero points, and scales, add the bias to the result, and apply the equation in the first image using Python, then compare the result with the output from TensorFlow Lite. A weird thing happens: if I leave the model untrained, just initialized with random values, the two outputs from Python and TensorFlow Lite match; but if I train the model and do the same thing, I don't get the same output. In detail: to get the output of a pointwise layer (Conv2D), I extract the weights, zero points, and input scale from TensorFlow Lite and apply the equation in the image, getting the same result if I initialize the model with random values; the moment I train the model, the results differ. Note that I do extract the new parameters after training, to be sure. I implement MobileNet and use normal ReLU, not ReLU6; can this cause the problem? I uploaded my Netron model. |
tensorflowtensorflow | DepthwiseConv2D documentation: a different filter per channel vs. the same filter per channel | Bug | The documentation for tf.keras.layers.DepthwiseConv2D describes this operation as: split the input into individual channels, convolve each with the layer's kernel, and finally stack the results. This gives the impression that the same kernel is applied to all channels, but I believe the actual implementation applies a different kernel per channel. Relevant code sections: backend implementation (L760-L766), Keras layer (L2260-L2269). Also see the MobileNet paper, page 3, formula 3. |
tensorflowtensorflow | Incorrect variance in normalization layer of EfficientNet | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no; OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04.5 LTS (Google Colab); Mobile device: n/a; TensorFlow installed from (source or binary): binary; TensorFlow version (use command below): v2.5.0-0-ga4dfb8d1a71 2.5.0; Python version: 3.7.10; Bazel version (if compiling from source): n/a; GCC/compiler version (if compiling from source): n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a. Describe the current behavior: EfficientNet includes a Normalization layer within its model definition (L321), but it seems like the variance is incorrect. The variance is [0.229, 0.224, 0.225], but those values are the standard deviation of the ImageNet dataset (L196). When the Normalization layer is called on new input, the input is normalized using the mean (which looks correct) and the square root of the variance. So in the current EfficientNet implementation, inputs are normalized using the square root of the standard deviation of ImageNet. Describe the expected behavior: I expect inputs to EfficientNet to be normalized according to the standard deviation of the ImageNet dataset. Contributing: Do you want to contribute a PR? (yes/no): no, because I don't know where I would make this change; the change would have to be within the saved model. Standalone code to reproduce the issue:
import numpy as np
import tensorflow as tf

model = tf.keras.applications.EfficientNetB0(weights='imagenet')
norm_layer = model.layers[2]
assert 'normalization' in norm_layer.name
print(norm_layer.mean.numpy())      # [0.485, 0.456, 0.406]
print(norm_layer.variance.numpy())  # [0.229, 0.224, 0.225]

# Generate sample input
tf.random.set_seed(42)
x = tf.random.uniform((1, 224, 224, 3), 0, 255, dtype=tf.int32, seed=42)

def get_reference(inputs):
    """Get the reference normalized output."""
    x = np.asarray(inputs).astype('float32')
    x /= 255.0
    x[..., 0] -= 0.485
    x[..., 1] -= 0.456
    x[..., 2] -= 0.406
    x[..., 0] /= 0.229
    x[..., 1] /= 0.224
    x[..., 2] /= 0.225
    return x

def get_current_tf_efficientnet_norm_output(inputs):
    """Get the normalized output from the current implementation."""
    x = np.asarray(inputs).astype('float32')
    x /= 255.0
    x[..., 0] -= 0.485
    x[..., 1] -= 0.456
    x[..., 2] -= 0.406
    x[..., 0] /= np.sqrt(0.229)
    x[..., 1] /= np.sqrt(0.224)
    x[..., 2] /= np.sqrt(0.225)
    return x

model_normalizer = tf.keras.Model(model.input, norm_layer.output)
# Below is True: they are the same
np.allclose(get_current_tf_efficientnet_norm_output(x), model_normalizer(x), atol=1e-07)
# Below is False: they are different
np.allclose(get_reference(x), model_normalizer(x), atol=1e-07)
Other info / logs: the Normalization layer normalizes using the square root of the variance (L242-L243), which equals the standard deviation. |
tensorflowtensorflow | Debugging issue | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. tag:bug_template. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes; OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04; Mobile device: n/a; TensorFlow installed from (source or binary): source; TensorFlow version (use command below): 2.4.1; Python version: 3.6.9; Bazel version (if compiling from source): -; GCC/compiler version (if compiling from source): -; CUDA/cuDNN version: -; GPU model and memory: Quadro RTX 6000. Describe the current behavior: InvalidArgumentError: condition [24], then [23] and else must be broadcastable [[node SelectV2_7]] [Op:IteratorGetNext]. What does this mean? I tried debugging using tf.debugging; all my cases are passing and I am still not able to figure out how to debug this. Is it possible to print the graph and nodes of the function? Any help will be appreciated. Contributing: Do you want to contribute a PR? (yes/no): yes. |
tensorflow/tensorflow | Failure when training with tf.GradientTape for a regression problem | Bug | System information: OS platform and distribution: Windows 10 (the same thing happens on Linux); TensorFlow version: 2.1.0 (the same thing happens on 2.5.0); Python version: 3.6.6. Describe the current behavior: for exactly the same model, training via `tf.GradientTape` either (a) does not converge or (b) converges to a worse score than training via `tf.keras` `Model.fit()`. Describe the expected behavior: similar training outcomes when using `tf.GradientTape` and `tf.keras` `Model.fit()`. Standalone code to reproduce the issue:

```python
import numpy as np
from sklearn.datasets import load_boston
import tensorflow as tf

tf.random.set_seed(0)

def make_model():
    input_layer = tf.keras.layers.Input(shape=(np.shape(x)[1],))
    inner_layer_1 = tf.keras.layers.Dense(
        units=10, activation='selu',
        kernel_initializer=tf.keras.initializers.lecun_normal())(input_layer)
    inner_layer_2 = tf.keras.layers.Dense(
        units=10, activation='selu',
        kernel_initializer=tf.keras.initializers.lecun_normal())(inner_layer_1)
    output_layer = tf.keras.layers.Dense(units=1, activation='linear')(inner_layer_2)
    return tf.keras.Model(input_layer, output_layer)

# Get raw data
raw_features, y = load_boston(return_X_y=True)
# Standardize features
x = (raw_features - raw_features.mean(axis=0)) / raw_features.std(axis=0)
print(np.std(x, axis=0))
# Reshape target
y = np.array(np.reshape(y, newshape=(-1, 1)), dtype=np.float32)

# Training parameters
optimizer = tf.keras.optimizers.Adam()
optimizer_mp = tf.keras.mixed_precision.experimental.LossScaleOptimizer(optimizer, 'dynamic')
objective = tf.keras.losses.MeanSquaredError()
batch_size = 4
epochs = 5

# Fit via the fit() method
train_fit = make_model()
train_fit.summary()
train_fit.compile(optimizer=optimizer, loss=objective)
train_fit.fit(x=x, y=y, batch_size=batch_size, epochs=epochs)

# Fit via GradientTape
gradient_tape_fit = make_model()
gradient_tape_fit.summary()
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(batch_size)
for epoch in range(0, epochs):
    for step, (x_batch, y_batch) in enumerate(dataset):
        with tf.GradientTape() as tape:
            predictions = gradient_tape_fit(x_batch, training=True)
            loss = objective(y_batch, predictions)
            scaled_loss = optimizer_mp.get_scaled_loss(loss)
        scaled_grads = tape.gradient(scaled_loss, gradient_tape_fit.trainable_weights)
        gradients = optimizer_mp.get_unscaled_gradients(scaled_grads)
        optimizer.apply_gradients(zip(gradients, gradient_tape_fit.trainable_weights))

predictions_via_tape = gradient_tape_fit.predict(x)
print('tape mse: %s' % np.mean(np.power(y - predictions_via_tape, 2)))
predictions_via_train = train_fit.predict(x)
print('fit mse: %s' % np.mean(np.power(y - predictions_via_train, 2)))
```
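As a sanity check on one piece of the tape loop above, the loss-scaling arithmetic itself is lossless: scaling the loss by a constant scales every gradient by the same constant, so unscaling recovers the original gradient exactly. This toy sketch (the quadratic loss and scale value are mine, not from the issue) verifies that without TensorFlow:

```python
def loss(w):
    return (w - 3.0) ** 2       # toy quadratic loss

def grad(w):
    return 2.0 * (w - 3.0)      # its analytic gradient

scale = 1024.0                  # stand-in for the dynamic loss scale
w = 5.0
scaled_grad = scale * grad(w)   # gradient of scale * loss(w)
unscaled = scaled_grad / scale  # what get_unscaled_gradients undoes
print(unscaled == grad(w))      # True
```

So any fit()/tape divergence is unlikely to come from the scaling arithmetic itself, which narrows the search to the rest of the loop (e.g. which optimizer applies the gradients).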
tensorflow/tensorflow | Eager execution documentation includes sections not describing their relation or relevance to eager execution | Bug | URL(s) with the issue: (the eager execution guide). Description of issue (what needs changing): the following sections are included, but no detail is provided as to why they relate to eager execution. A description is required to indicate whether these capabilities/operations are only available under eager execution, operate differently under it, or relate to it in some other way. Right now it is entirely unclear why these sections are located on the eager execution page: Object-based saving; Summaries and TensorBoard; Advanced automatic differentiation topics; Performance.
tensorflow/tensorflow | Failed to load weights of Keras MobileNetV2 | Bug | System information: custom code: no; OS platform and distribution: Linux (Google Colab); TensorFlow version: 2.5.0; Python version: 3.7.10. Describe the current behavior: `tensorflow.keras.applications.MobileNetV2` fails to return a model, raising `Exception: URL fetch failure on ... 404 -- Not Found`. Describe the expected behavior: expected to get a model with loaded weights. Contributing: do you want to contribute a PR? no. Standalone code to reproduce the issue:

```python
import tensorflow

model = tensorflow.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    alpha=1,
    include_top=False,
    weights='imagenet',
    input_tensor=tensorflow.keras.layers.Input((224, 224, 3)),
    pooling=None,
    classes=1000,
    classifier_activation='softmax')
```

(Colab to reproduce.) Other info / logs:

```
HTTPError                                 Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/data_utils.py in get_file(fname, origin, untar, md5_hash, file_hash, cache_subdir, hash_algorithm, extract, archive_format, cache_dir)
    257   try:
    258     urlretrieve(origin, fpath, dl_progress)
    259   except urllib.error.HTTPError as e:

9 frames
HTTPError: HTTP Error 404: Not Found

During handling of the above exception, another exception occurred:

Exception                                 Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/data_utils.py in get_file(fname, origin, untar, md5_hash, file_hash, cache_subdir, hash_algorithm, extract, archive_format, cache_dir)
    258     urlretrieve(origin, fpath, dl_progress)
    259   except urllib.error.HTTPError as e:
--> 260     raise Exception(error_msg.format(origin, e.code, e.msg))
    261   except urllib.error.URLError as e:
    262     raise Exception(error_msg.format(origin, e.errno, e.reason))

Exception: URL fetch failure on ... : 404 -- Not Found
```
tensorflow/tensorflow | No consideration of activation functions in the implementation of initializers | Bug | System information: custom code: yes; OS platform and distribution: everything; mobile device: n/a; TensorFlow installed from: n/a; TensorFlow version: 2.5; Bazel/GCC/CUDA/cuDNN versions: n/a; GPU model and memory: n/a. Describe the current behavior: in the TensorFlow source code of neither the He initializer (lines 849-883) nor the Glorot initializer (lines 711-752), nor in the code of the parent class `VarianceScaling` initializer (lines 430-531), is there any mention of a dependency on the activation function. Describe the expected behavior: as per the screenshots shown below (the first from Table 11-1 of the second edition of the book "Hands-On Machine Learning with Scikit-Learn and TensorFlow", the bottom one from the first edition of the same book), the formulas for the Glorot and He initializers should depend on the activation function. [image] [image] Standalone code to reproduce the issue: source code of the He initializer (lines 849-883), the Glorot initializer (lines 711-752), and the parent class `VarianceScaling` initializer (lines 430-531).
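For reference, a minimal sketch of the variance formulas these initializers implement in their "normal" variants. Note they depend only on fan-in/fan-out, never on the activation function, which is exactly the gap the issue points at. The helper names are mine, not TensorFlow API:

```python
import math

def glorot_stddev(fan_in, fan_out):
    # Glorot/Xavier normal: stddev = sqrt(2 / (fan_in + fan_out))
    return math.sqrt(2.0 / (fan_in + fan_out))

def he_stddev(fan_in):
    # He normal: stddev = sqrt(2 / fan_in)
    return math.sqrt(2.0 / fan_in)

print(glorot_stddev(100, 100))  # 0.1
print(he_stddev(50))            # 0.2
```

An activation-aware scheme, as in the book's table, would multiply these by an activation-dependent gain (e.g. PyTorch exposes this as a `gain`/`nonlinearity` argument), which is what the issue argues is missing here.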
tensorflow/tensorflow | List of initializers is specified twice in the Classes section of the documentation | Bug | Please provide a link to the documentation entry. Description of issue (what needs changing): every initializer is specified twice, once as `Constant`, `GlorotUniform`, etc. and once as `constant`, `glorot_uniform`, etc. However, each pair (for example `GlorotUniform` and `glorot_uniform`) points to the same link. Is there any specific reason for this redundancy?
tensorflow/tensorflow | Documentation for model.trainable_weights is missing | Bug | URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): `model.trainable_weights` returns the weights and biases of all the layers of the model, but documentation for that function is missing on tensorflow.org.
tensorflow/tensorflow | Update image_dataset.py | Bug | Updates the documented return value of `tf.keras.preprocessing.image_dataset_from_directory`. Fixes #49829.
tensorflow/tensorflow | Update GPU support page to reflect the correct libraries required for TF 2.5.0 | Bug | URL(s) with the issue: (the GPU support page). Description of issue (what needs changing): the documentation has not been updated to state the correct versions of CUDA, cuDNN and nvinfer that the current stable TensorFlow version (2.5.0) requires. For example, in the Ubuntu 18.04 configuration, instead of

```bash
# Install development and runtime libraries (~4 GB)
sudo apt-get install --no-install-recommends \
    cuda-11-0 \
    libcudnn8=8.0.4.30-1+cuda11.0 \
    libcudnn8-dev=8.0.4.30-1+cuda11.0
```

it should be something like

```bash
# Install development and runtime libraries (~4 GB)
sudo apt-get install --no-install-recommends \
    cuda-11-2 \
    libcudnn8=8.1.0.77-1+cuda11.2 \
    libcudnn8-dev=8.1.0.77-1+cuda11.2
```

It is also particularly unclear which version of the nvinfer library is required to enable TF-TRT optimizations, since that is also not stated on the build-from-source page, and it is not obvious how to check that TRT is enabled.
tensorflow/tensorflow | image_dataset_from_directory returns a dtype uint8 dataset if interpolation='nearest' | Bug | URL(s) with the issue: (the API docs). Clear description: `tf.keras.preprocessing.image_dataset_from_directory` returns a dtype `uint8` dataset when `interpolation='nearest'`; however, the function docstring reports a misleading return definition: "Returns a `tf.data.Dataset` object. If `label_mode` is None, it yields `float32` tensors of shape `(batch_size, image_size[0], image_size[1], num_channels)`, encoding images (see below for rules regarding `num_channels`). Otherwise, it yields a tuple `(images, labels)`, where `images` has shape `(batch_size, image_size[0], image_size[1], num_channels)`, and `labels` follows the format described below." Description of issue (what needs changing): the `uint8` return dtype for `interpolation='nearest'` should be specified, e.g.: "Returns a `tf.data.Dataset` object. If `label_mode` is None, it yields `float32` tensors of shape `(batch_size, image_size[0], image_size[1], num_channels)`, encoding images (see below for rules regarding `num_channels`). If `label_mode` is None and `interpolation` is not `'bilinear'`, it yields `uint8` tensors. Otherwise, it yields a tuple `(images, labels)`, where `images` has shape `(batch_size, image_size[0], image_size[1], num_channels)`, and `labels` follows the format described below."
tensorflow/tensorflow | Add docs about forced zero output for masked steps with Bidirectional | Bug | #49738. Adds a description that the output of masked timesteps is zeroed when `return_sequences=True` on a layer wrapped with `Bidirectional`.
tensorflow/tensorflow | No description about forced zero output for masked steps in Bidirectional | Bug | URL(s) with the issue: (the `Bidirectional` documentation). Description of issue (what needs changing):

```python
import tensorflow as tf

lstm = tf.keras.layers.LSTM(4, return_sequences=True)
bi = tf.keras.layers.Bidirectional(lstm)
x = tf.random.normal((1, 4, 16))
mask = tf.constant([[True, True, True, False]])
print(lstm(x, mask=mask))
print(bi(x, mask=mask))
```

```
tf.Tensor(
[[[0.01245601 0.38689056 0.01844893 0.0718843 ]
  [0.14610071 0.23905458 0.39626616 0.17714866]
  [0.00543382 0.06880241 0.04203304 0.08996341]
  [0.00543382 0.06880241 0.04203304 0.08996341]]], shape=(1, 4, 4), dtype=float32)
tf.Tensor(
[[[0.24271122 0.05120631 0.06832076 0.5101022  0.3812662  0.44380718 0.07919203 0.07195219]
  [0.16884208 0.18173794 0.0029141  0.13847476 0.08567893 0.2971174  0.15979137 0.01258926]
  [0.11614764 0.17977336 0.20486659 0.0677676  0.40112934 0.27802816 0.20526664 0.1625048 ]
  [0.         0.         0.         0.         0.         0.         0.         0.        ]]], shape=(1, 4, 8), dtype=float32)
```

(See lines 492-498.) When we use `tf.keras.layers.Bidirectional` with `return_sequences=True`, the output for masked timesteps becomes zero, because `zero_output_for_mask` is forced internally. However, there is no description in the `Bidirectional` layer documentation that masked timesteps are zeroed when `return_sequences=True`. Without `Bidirectional`, `zero_output_for_mask` is False by default, so hardly anyone expects the output of masked timesteps to be zero. I think this is very confusing and easy to misunderstand, and the fact that zero output is forced for masked steps internally should be added to the `Bidirectional` documentation. Submitting a pull request: are you planning to also submit a pull request to fix the issue? Yes, I may be able to add documentation about this.
tensorflow/tensorflow | Error on Google Colab but not on local machine | Bug | Nine hours ago I ran some code on Google Colab which ran totally fine. This morning I got the following error trying to run the exact same code:

```
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
     11 # model on top
     12 x = base_model.output
     13 x = Flatten()(x)
     14 x = Dense(128, activation='relu')(x)
     15 x = Dropout(0.5)(x)

5 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
     96     dtype = dtypes.as_dtype(dtype).as_datatype_enum
     97   ctx.ensure_initialized()
---> 98   return ops.EagerTensor(value, ctx.device_name, dtype)
     99
    100

ValueError: Attempt to convert a value (None) with an unsupported type to a Tensor.
```

The exact same code runs totally fine on a local runtime.
tensorflow/tensorflow | 2.4.0 docs | Bug | When clicking on r2.4 here, it directs to v2.5.0 instead.
tensorflow/tensorflow | "AutoGraph could not transform <function> and will run it as-is" for an example from a guide | Bug | System information: custom code: no; OS platform and distribution: Windows 10 Pro 19041.985; mobile device: none; TensorFlow installed from: binary; TensorFlow version: v2.4.0-49-g85c8b2a817f 2.4.1; Python version: 3.8.8; Bazel/GCC/CUDA/cuDNN versions: none; GPU model and memory: none. Describe the current behavior: running this code from the guide

```python
def train_one_step():
    pass

@tf.function
def train(num_steps):
    print("Tracing with num_steps =", num_steps)
    tf.print("Executing with num_steps =", num_steps)
    for _ in tf.range(num_steps):
        train_one_step()

train(num_steps=10)
```

produces a warning (see attachment: warning.log). Describe the expected behavior: an example from the guide should work. Contributing: do you want to contribute a PR? no.
tensorflow/tensorflow | reduce_variance gives an error for RaggedTensor when axis=0 | Bug | As part of #37014, `reduce_variance` was added for ragged tensors. Although it works fine for `axis=1`, it gives an error for the same input when changed to `axis=0`. I am attaching the gist. From observing the stack trace, what I think the issue is: 1. When I explicitly pass the input to `ragged.math_ops.reduce_variance`, it gives the correct result, so the problem is not with the `reduce_variance` function itself. 2. When I call `tf.math.reduce_variance`, it internally calls the global op dispatcher, which iterates over all the dispatchers to see where the op is supported. 3. When the op runs through `BinaryRaggedElementwiseDispatcher`, it fails for some reason, and that is why execution fails: it never reaches the dispatcher associated with `reduce_variance`. 4. That is why I think the problem is with the dispatch module. @mihaimaruseac @edloper, I was not able to find exactly why the error occurs in the dispatcher. Please help by pointing me in a specific direction and I will contribute the fix.
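To make the axis semantics concrete, here is a plain-Python sketch of what a variance reduction over a ragged input has to compute for each axis (values are hypothetical, not from the gist). It shows why `axis=0` is the structurally harder case: the "columns" of a ragged tensor have different lengths.

```python
# Ragged input: rows of different lengths.
rows = [[1.0, 2.0, 3.0], [4.0, 5.0]]

def variance(values):
    # Population variance of a plain list.
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

# axis=1: variance within each row; well defined row by row.
per_row = [variance(r) for r in rows]
print(per_row)   # [0.666..., 0.25]

# axis=0: variance down each column; column 2 has only one entry,
# so the reduction must handle unequal column lengths.
columns = [[1.0, 4.0], [2.0, 5.0], [3.0]]
per_col = [variance(c) for c in columns]
print(per_col)   # [2.25, 2.25, 0.0]
```

Both reductions are mathematically well defined, which supports the issue's diagnosis that the `axis=0` failure is a dispatch problem rather than an undefined operation.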
tensorflow/tensorflow | Update variable names within anomaly detection example | Bug | URL(s) with the issue: (the autoencoder tutorial, anomaly detection section, "Build the model"). Description of issue (what needs changing): some variable names are misleading with regard to the data being handled when using the encoder and decoder after training. The data here are ECG time series, no longer images as in the previous examples with MNIST and Fashion-MNIST. Clear description:

```python
encoded_imgs = autoencoder.encoder(normal_test_data).numpy()
decoded_imgs = autoencoder.decoder(encoded_imgs).numpy()
```

should be replaced by something like

```python
encoded_data = autoencoder.encoder(normal_test_data).numpy()
decoded_data = autoencoder.decoder(encoded_data).numpy()
```

Links, parameter/return/raises definitions, usage examples and visuals are otherwise unaffected. Submitting a pull request: are you planning to also submit a pull request to fix the issue?
tensorflow/tensorflow | Tutorial returns 404 on small part | Bug | URL(s) with the issue: (the tutorial). Description of issue (what needs changing): under the part

```python
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
```

part of the tutorial returns an embedded site which is a 404 page. Correct links: no (this bug report). Parameters/returns/raises/usage example/visuals: not relevant. Submitting a pull request: not relevant.
tensorflow/tensorflow | Is is_tf_type being deprecated in tensor_util.py? What else may I use? Thanks | Bug | System information: Google Colab; TensorFlow v2.4.1; Python 3.7.10; runtime without hardware acceleration. Describe the current behavior: I am trying to do inference with a model I trained for object detection. I successfully exported the model, and I am using the Python code below to load this new model and do inference, but my code keeps crashing at this part:

```python
from object_detection.utils import label_map_util
```

and this is the error:

```
    192 from tensorflow.python.framework.tensor_util import MakeNdarray as make_ndarray
    193 from tensorflow.python.framework.tensor_util import constant_value as get_static_value
    194 from tensorflow.python.framework.tensor_util import is_tf_type as is_tensor
    195 from tensorflow.python.framework.tensor_util import make_tensor_proto
    196 from tensorflow.python.framework.type_spec import TypeSpec

ImportError: cannot import name 'is_tf_type' from 'tensorflow.python.framework.tensor_util'
```

So I checked `tensorflow/python/framework/tensor_util.py` and found that `is_tf_type` is to be deprecated (the function is still there, however). Is there any other way to load, and do inference with, an exported SavedModel without using `is_tf_type`? Thanks in advance. Here is the code I am using, by the way:

```python
# Import the required libraries for object detection inference
import time
import os
import cv2
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
%matplotlib inline

# Set minimum confidence threshold
MIN_CONF_THRESH = .6

# Load the exported model from the saved_model directory
PATH_TO_SAVED_MODEL = r'custom_od/workspace/exported_model/saved_model'

print('Loading model...', end='')
start_time = time.time()
# Load saved model and build the detection function
detect_fn = tf.saved_model.load(PATH_TO_SAVED_MODEL)
end_time = time.time()
elapsed_time = end_time - start_time
print('Done! Took {} seconds'.format(elapsed_time))

# Load label map data
PATH_TO_LABELS = r'custom_od/workspace/annotations/label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(
    PATH_TO_LABELS, use_display_name=True)

# Image file for inference
IMAGE_PATH = r'custom_od/workspace/images/test/00178.jpg'

def load_image_into_numpy_array(path):
    """Load an image from file into a numpy array.

    Puts the image into a numpy array of shape (height, width, channels),
    where channels=3 for RGB, to feed into the TensorFlow graph.

    Args:
        path: the file path to the image.
    Returns:
        uint8 numpy array with shape (img_height, img_width, 3).
    """
    return np.array(cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB))

image_np = load_image_into_numpy_array(IMAGE_PATH)

# Run inference on the image specified in IMAGE_PATH.
# The input needs to be a tensor; convert it using tf.convert_to_tensor.
input_tensor = tf.convert_to_tensor(image_np)
# The model expects a batch of images, so add an axis with tf.newaxis.
input_tensor = input_tensor[tf.newaxis, ...]
detections = detect_fn(input_tensor)

# All outputs are batch tensors: convert to numpy arrays and take index [0]
# to remove the batch dimension. We are only interested in the first
# num_detections.
num_detections = int(detections.pop('num_detections'))
detections = {key: value[0, :num_detections].numpy()
              for key, value in detections.items()}
detections['num_detections'] = num_detections
# detection_classes should be ints
detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
print(detections['detection_classes'])

image_np_with_detections = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
    image_np_with_detections,
    detections['detection_boxes'],
    detections['detection_classes'],
    detections['detection_scores'],
    category_index,
    use_normalized_coordinates=True,
    max_boxes_to_draw=200,
    min_score_thresh=MIN_CONF_THRESH,
    agnostic_mode=False)

plt.figure()
plt.imshow(image_np_with_detections)
print('Done')
plt.show()
```

which I found in an online tutorial. I went to the TensorFlow API docs, but they use `from object_detection.utils import label_map_util` as well and give me the same error. Any advice would be much appreciated. Thanks in advance.
tensorflow/tensorflow | Docs for custom_gradient: compute gradients for trainable params and compute over a batch | Bug | Fixes #26270. Thanks to @tsbertalan for the gist; I took help from it, and soon I will be raising a PR to the tensorflow/examples repo adding the custom-gradient example for polynomials. cc @alextp @dynamicwebpaige @tsbertalan
tensorflow/tensorflow | Incorrect requirements.txt in magic_wand example | Bug | TensorFlow Micro system information: host OS platform and distribution: Linux Ubuntu 18.04; TensorFlow installed from: source; TensorFlow version (commit SHA): f26800a1e5b1199cfc1a5aca916edc836b541687; TensorFlow Python package version attempted: 2.4.0; Python version: Python 3.8 (64-bit); pip version: pip 21.1.1.

Describe the problem: for the TensorFlow Lite Micro magic_wand example, attempting to install requirements following the instructions in README.md fails:

```
Collecting numpy==1.16.2
  Downloading numpy-1.16.2.zip (5.1 MB)
     |#######| 5.1 MB 477 kB/s
ERROR: Cannot install numpy==1.16.2 and tensorflow 2.4.0 because these package versions have conflicting dependencies.

The conflict is caused by:
    The user requested numpy==1.16.2
    tensorflow 2.4.0 depends on numpy~=1.19.2
```

Even when numpy is manually updated to 1.19.2 and the model is trained, I think tensorflow 2.4.0 doesn't recognize the operations:

```
Didn't find op for builtin opcode 'RESHAPE' version '1'
Failed to get registration from op code RESHAPE
Failed starting model allocation.
```

Caveats: to get the above result, I converted the Lite model with `xxd -i model.tflite > magic_wand_model_data.cc`, copied the model data and length from the tensorflow tree into the zephyr tree (the magic_wand sample's `magic_wand_model_data.cc`), and ran it on a Renode-emulated LiteX/VexRiscv board. Maybe there is a factor in that process that could have introduced an issue, and in that case my apologies for filing this issue; but I also redid that process after changing requirements.txt back to the previous commit's numpy==1.16.2 and tensorflow==2.0.0-beta1:

```
cd tensorflow/lite/micro/examples/magic_wand/train
python3.7 -m venv venv   # following README.md
source venv/bin/activate
pip install --upgrade pip   # if necessary
pip install -r requirements.txt
```

and I did not have the same issue during operation, although the model (which I trained) did not recognize the sample input correctly this time; it is supposed to go ring, circle, ring, circle:

```
*** Booting Zephyr OS build v2.6.0-rc1-310-g2a9b32d43f31 ***
Got accelerometer, label: accel-0
WING
WING
WING
```

Please provide the exact sequence of commands/steps when you ran into the problem:

```
cd tensorflow/lite/micro/examples/magic_wand/train
python3.8 -m venv venv
source venv/bin/activate
pip install --upgrade pip   # if necessary
pip install -r requirements.txt
```

Describe the expected behavior: `pip install -r requirements.txt` should work, and the scripts should work correctly with the TensorFlow version in requirements.txt. Possible solutions: downgrade requirements.txt to tensorflow==2.0.0-beta1, or upgrade numpy to ~=1.19.2 to match tensorflow 2.4.0 and rewrite the scripts to accommodate TensorFlow 2.4.0. This is my first time filing an issue on TensorFlow, so my apologies if this isn't a bug, and please let me know if you'd like me to remove or add anything.
tensorflow/tensorflow | Removal of PersistentTensor breaks custom ops (Horovod) | Bug | The recent commit (diff b35354f62819de66aaa049a9498cccc261c108a7c488d39e04882110bdee65b5) removed `PersistentTensor` from the TensorFlow C++ API. In Horovod, we rely on this functionality to allocate the fusion buffer that packs multiple small tensors into a single buffer during allreduce (here, L39), which ultimately calls into here (L198). The docs recommend using a `Tensor` with `allocate_temp` instead, but this does not seem like a viable workaround: my understanding is that such tensors will not survive past the life of a single op. Other frameworks, including PyTorch and MXNet, provide similar mechanisms for allocating long-lived tensor buffers; it seems reasonable that TensorFlow should continue to do the same. Or is there a workaround I am missing? Thanks. cc @reedwm @romerojosh @DEKHTIARJonathan
tensorflow/tensorflow | 404 from documentation link | Bug | This is the link from the page to "open documentation" (the circle with a question mark inside), and it returns a 404.
tensorflow/tensorflow | Distinction between CategoricalCrossentropy and SparseCategoricalCrossentropy is not clear in the documentation | Bug | URL(s) with the issue: please provide a link to the documentation entry. Description of issue (what needs changing): there is documentation for these two losses, but there is no clear explanation distinguishing them, or of when exactly each specific loss should be used.
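For illustration, the practical distinction can be shown in a few lines: both losses compute the same cross-entropy and differ only in the label format they accept (integer class indices versus one-hot vectors). This is a plain-Python sketch with made-up numbers, not the Keras implementation:

```python
import math

# Hypothetical example: 3 classes, model predicts these probabilities.
probs = [0.05, 0.90, 0.05]
true_class = 1                 # sparse label: an integer class index
one_hot = [0.0, 1.0, 0.0]      # categorical label: a one-hot vector

# CategoricalCrossentropy: -sum(y_true * log(y_pred)) over the one-hot vector.
cce = -sum(t * math.log(p) for t, p in zip(one_hot, probs))

# SparseCategoricalCrossentropy: -log(y_pred[true_class]) from the integer label.
scce = -math.log(probs[true_class])

print(abs(cce - scce) < 1e-12)   # True: same loss value, different label format
```

So the choice between the two losses is purely about how the labels are stored, which is the clarification the documentation could state explicitly.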
tensorflow/tensorflow | Setting a fixed seed in a PreprocessingLayer leads to different results | Bug | System information: custom code: yes; OS platform and distribution: Ubuntu 16.04; TensorFlow installed from: binary; TensorFlow version: 2.4.1; Python version: 3.8; CUDA/cuDNN version: driver 450.102.04, CUDA 11.0; GPU model and memory: V100. Describe the current behavior: I am solving an image-to-image translation task and would like to apply an identical augmentation to both the input and output images; for example, flip both images, or rotate both to the same degree. I can use transforms from `tf.image.stateless_random_*` and provide the same seed value to get identical results, but there are no `RandomTranslation` or `RandomRotation` equivalents there, so I am trying to use layers from `tf.keras.layers.experimental.preprocessing` with a fixed seed. However, two layers initialized with the same random seed generate different transforms (see the example below). Is that expected? Describe the expected behavior: initialized from the same seed, the two layers should apply identical transformations. Standalone code to reproduce the issue:

```python
from tensorflow.keras import layers
import tensorflow as tf

transform_x = layers.experimental.preprocessing.RandomTranslation(
    height_factor=(0.2, 0.3), width_factor=(0.2, 0.3), seed=2)
transform_y = layers.experimental.preprocessing.RandomTranslation(
    height_factor=(0.2, 0.3), width_factor=(0.2, 0.3), seed=2)

def is_equal(x, y):
    return tf.reduce_sum(tf.cast(tf.math.not_equal(x, y), tf.float32)).numpy() == 0

x = tf.random.uniform((1, 32, 32, 3)) * 255
for _ in range(15):
    x_t = transform_x(x, training=True)
    y_t = transform_y(x, training=True)
    print(is_equal(x_t, y_t))
```

Running this code leads to the following result: False False False False False False False False False False
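A common workaround for paired augmentation, sketched here without TensorFlow: sample the random transform parameters once per example, then apply that single transform to both the input and the target. The "translation" below is a stand-in for a real image op, not the Keras layer:

```python
import random

def sample_shift(rng):
    # Draw one (dy, dx) offset; hypothetical range mirroring the issue's factors.
    return rng.uniform(-0.3, 0.3), rng.uniform(-0.3, 0.3)

def translate(image_id, dy, dx):
    # Stand-in for an actual image translation: record which offsets were applied.
    return (image_id, round(dy, 6), round(dx, 6))

rng = random.Random(2)
dy, dx = sample_shift(rng)        # ONE draw drives both transforms

x_t = translate("input", dy, dx)
y_t = translate("target", dy, dx)
print(x_t[1:] == y_t[1:])         # True: identical offsets for input and target
```

Because both images share the same sampled parameters, this stays correct regardless of how any particular layer derives per-call randomness from its seed.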
tensorflow/tensorflow | Superfluous keras-nightly dependency | Bug | Describe the problem: TF 2.5.0 added a dependency on keras-nightly for some reason. However, nowhere is there an actual `import keras` or similar in the TF code, hence that package seems completely optional at least, or superfluous. What is the reasoning for including it in the required dependencies of TF? Can it be removed? I would guess people usually use `tf.keras`, or install TF and then Keras (i.e. Keras depends on TF, more or less), but now there is a dependency cycle.
tensorflow/tensorflow | TensorFlow 2.5.0 CUDA and cuDNN versions | Bug | URL(s) with the issue: (the tested build configurations page). Description of issue (what needs changing): the document shows

| Version | Python version | Compiler | Build tools | cuDNN | CUDA |
| tensorflow-2.5.0 | 3.6-3.9 | GCC 7.3.1 | Bazel 3.1.0 | 8.0 | 11.0 |

but since TensorFlow pip packages are now built with CUDA 11.2 and cuDNN 8.1.0, this table should be changed to

| Version | Python version | Compiler | Build tools | cuDNN | CUDA |
| tensorflow-2.5.0 | 3.6-3.9 | GCC 7.3.1 | Bazel 3.1.0 | 8.1 | 11.2 |
tensorflow/tensorflow | Update metrics.py: fix SparseCategoricalAccuracy update_state docstring | Bug | Updates metrics.py to fix the `SparseCategoricalAccuracy.update_state` docstring; tries to fix issue #49252. For `SparseCategoricalAccuracy`, `y_true` should be integer labels and `y_pred` should be probabilities:

```python
m = tf.keras.metrics.SparseCategoricalAccuracy()
m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]])
# Correct usage: y_true as integer labels and y_pred as probabilities
print(m.result().numpy())  # 0.5
m.update_state([[2], [1]], [[1], [1]])
# Wrong usage: both as integer labels
print(m.result().numpy())  # 0.25
m.update_state([[0, 1, 0], [0, 1, 0]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]])
# Wrong usage: both as probabilities
print(m.result().numpy())  # error
```
tensorflow/tensorflow | Hello, for days I have had a problem with TensorFlow; I have tried everything and read every guide, and I always seem to reach the same point | Bug | As seen in the following image, my problem is that apparently [image]
tensorflow/tensorflow | TFLite GPU OpenGL delegate: wrong results with NewConvolution1x1NodeShader | Bug | TensorFlow version: master (happens with all releases I tried, at least since 2.3); GPU model and memory: Jetson TX2 and also Titan RTX. I am getting wrong results using SENet 1x1 convolutions in TFLite with the GPU OpenGL delegate. Results are good with the CPU delegate and with the GPU OpenCL delegate. I believe there is a bug in the `NewConvolution1x1NodeShader` function (L166). I am sorry, but I do not understand the syntax of the test code well enough to create an official test case myself. However, my sample input is BHWC (1, 1, 1, 72) and my sample output of a Keras `Conv2D` layer is (1, 1, 1, 24), with bias; thus the convolution weights are 24x1x1x72. I believe the exact values are not important and the bug will make itself apparent with any values (I have not tested yet with smaller inputs). The code seems to break the convolution into 4-float chunks. If there is someone brave enough to create such a test case in the test code, it would be much appreciated. The main difference of my problem with regard to the existing test cases is that both the convolution kernel and the input are WxH 1x1, whereas in the present test cases the input is WxH 2x1.
tensorflow/tensorflow | SparseCategoricalAccuracy y_true and y_pred shapes | Bug | URL(s) with the issue: (the API docs). Description of issue (what needs changing): it seems that the usage example conflicts with the documentation; I suppose the usage example is correct (L3474-L3478). In the usage example, `y_true` takes ground-truth labels as input and `y_pred` takes logits/probabilities:

```python
m = tf.keras.metrics.SparseCategoricalAccuracy()
m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]])
m.result().numpy()
```

In the documentation, it states that `y_true` and `y_pred` should have the same shape: "y_true: Ground truth values. shape = [batch_size, d0, .., dN]; y_pred: The predicted values. shape = [batch_size, d0, .., dN]". The documentation may need updating.
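To show why the two shapes legitimately differ, here is what the metric computes, sketched in plain Python (the sample values mirror the usage example above; the helper name is mine, not Keras API):

```python
def sparse_categorical_accuracy(y_true, y_pred):
    # y_true: integer class indices, shape (batch,)
    # y_pred: per-class scores, shape (batch, num_classes)
    matches = [int(t == max(range(len(p)), key=p.__getitem__))  # argmax over classes
               for t, p in zip(y_true, y_pred)]
    return sum(matches) / len(matches)

y_true = [2, 1]
y_pred = [[0.1, 0.6, 0.3], [0.05, 0.95, 0.0]]
print(sparse_categorical_accuracy(y_true, y_pred))  # 0.5
```

The argmax collapses the class dimension of `y_pred` before comparing against `y_true`, so a shared `[batch_size, d0, .., dN]` shape, as the docstring currently claims, cannot be right for this metric.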
tensorflowtensorflow | ValueError: as_list() is not defined on an unknown TensorShape | Bug | Hi, I am trying to implement AlexNet with the COCO dataset. I want to do multi-label classification, but TF throws the error. The full error is:

    ValueError: in user code:
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function — return step_function(self, iterator)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function — outputs = model.distribute_strategy.run(run_step, args=(data,))
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run — return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica — return self._call_for_each_replica(fn, args, kwargs)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica — return fn(*args, **kwargs)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step — outputs = model.train_step(data)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:758 train_step — self.compiled_metrics.update_state(y, y_pred, sample_weight)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:387 update_state — self.build(y_pred, y_true)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:318 build — self._metrics, y_true, y_pred)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:1163 map_structure_up_to — **kwargs)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:1258 map_structure_with_tuple_paths_up_to — func(*args, **kwargs) for args in zip(flat_path_gen, flat_value_gen)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:1161 <lambda> — lambda _, *values: func(*values)  # Discards the path arg.
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:418 _get_metric_objects — return [self._get_metric_object(m, y_t, y_p) for m in metrics]
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:418 <listcomp> — return [self._get_metric_object(m, y_t, y_p) for m in metrics]
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:439 _get_metric_object — y_t_rank = len(y_t.shape.as_list())
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_shape.py:1190 as_list — raise ValueError("as_list() is not defined on an unknown TensorShape.")

        ValueError: as_list() is not defined on an unknown TensorShape.

Here is the Colab link. I don't understand where my mistake is. System information: Google Colab (you can collect some of this information using our environment capture; you can also obtain the TensorFlow version with tf 2.0: v2.4.1-0-g85c8b2a817f 2.4.1). Describe the current behavior / Describe the expected behavior: see above. Contributing — do you want to contribute a PR (yes/no), briefly describing your candidate solution if contributing: no. Standalone code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to Colab/Jupyter/any notebook). Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.
tensorflowtensorflow | MultiHeadAttention padding mask example | Bug | URL(s) with the issue: please provide a link to the documentation entry, for example. Description of issue: I am trying to implement a transformer layer, but there is no example of using this MultiHeadAttention layer with a padding mask. Would it be possible to get one?
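In the absence of such an example, here is a minimal sketch (plain Python, not taken from the TF docs) of how the boolean padding mask accepted by the layer's `attention_mask` argument can be built. It assumes token id 0 is the padding token; the constant and helper names are made up for illustration.

```python
# Hypothetical sketch: build a padding mask for an attention layer.
# Assumes 0 is the padding token id.
PAD_ID = 0

def padding_mask(token_ids):
    """token_ids: (batch, seq_len) ints -> (batch, 1, seq_len) booleans.

    True means "attend to this key position". The singleton middle axis is
    intended to broadcast over query positions, matching the documented
    (batch, T_query, T_key) attention_mask shape.
    """
    return [[[tok != PAD_ID for tok in sequence]] for sequence in token_ids]

tokens = [[5, 7, 0, 0],
          [3, 0, 0, 0]]
mask = padding_mask(tokens)
print(mask[0])  # [[True, True, False, False]]
```

In an actual layer call this would look something like `layer(query, value, attention_mask=tf.constant(mask))`; whether the singleton query axis broadcasts may depend on the TF version, so tiling the mask to the full (batch, T_query, T_key) shape is the safe option.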
tensorflowtensorflow | Error when benchmarking a TFLite model: ERROR: Given shapes, [1,40,23,256] and [1,23,40,256], are not broadcastable | Bug | System information. Have I written custom code: yes. OS platform and distribution: Ubuntu 20.04 / Android 10. Mobile device: Xiaomi Redmi Note 7. TensorFlow installed from: pip. TensorFlow version: v2.4.0-49-g85c8b2a817f 2.4.1. Python version: Python 3.8.8.

Describe the current behavior: I trained a model and converted it to TFLite, but when benchmarking it on TFLite I get an unexpected error, shown below:

    ERROR: Given shapes, [1,40,23,256] and [1,23,40,256], are not broadcastable.
    ERROR: Node number 80 (ADD) failed to prepare.

Describe the expected behavior: since the training works and the shapes are good when I visualize the model in Netron, I expected it to work. [image]

Standalone code to reproduce the issue: I am running the benchmark using the native tool, which includes TF ops and the Flex delegate, that can be downloaded here; my model can be downloaded here. I run the benchmark on an Android device using the following lines:

    adb push android_arm_benchmark_model_plus_flex /data/local/tmp/benchmark
    adb shell chmod +x /data/local/tmp/benchmark
    adb push model.tflite /data/local/tmp/model.tflite
    adb shell /data/local/tmp/benchmark --graph=/data/local/tmp/model.tflite --use_gpu=true --input_layer=input_scale,input --input_layer_shape=1,640,360,3:1,4

I do not understand where the error could be coming from, since I can train and run inference on the TensorFlow model, and the graph in Netron is good. Thanks in advance.
tensorflowtensorflow | model.fit + LearningRateSchedule ignores initial_epoch * steps_per_epoch, giving the wrong learning rate when resuming from a checkpoint | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes, see below. OS platform and distribution (e.g. Linux Ubuntu 16.04): Google Colab, Ubuntu 20.04, Windows 10. TensorFlow installed from (source or binary): preinstalled / custom package (binary, pip). TensorFlow version (use command below): v2.4.1-0-g85c8b2a817f 2.4.1; unknown 2.6.0; v2.4.0-49-g85c8b2a817f 2.4.1. Python version: Python 3.7.10 / Python 3.8.5 / Python 3.7.9.

Describe the current behavior: when training using a LearningRateSchedule optimizer and model.fit, the learning-rate schedule ignores initial_epoch and steps_per_epoch, so when starting a new session to resume training from a checkpoint, OptimizerV2.iterations — and the resulting learning rate — will be incorrect.

Describe the expected behavior: model.fit sees that initial_epoch and steps_per_epoch have been specified and assigns initial_epoch * steps_per_epoch to OptimizerV2.iterations, so that training can resume correctly.

Standalone code to reproduce the issue:

    import tensorflow as tf

    inputs = tf.keras.layers.Input(shape=(2,), dtype=tf.float32)
    output = tf.keras.layers.Dense(1, activation='relu')(inputs)
    model = tf.keras.Model(inputs, output)
    learning_rate_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay([10], [0.5, 0.25])
    optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate_schedule)
    model.compile(optimizer=optimizer, loss='mse')

    class ValidationCallback(tf.keras.callbacks.Callback):
        def __init__(self, expected):
            self.expected = expected
            self.actual = []

        def on_train_batch_end(self, batch, logs=None):
            self.actual.append(tf.keras.backend.get_value(
                self.model.optimizer.learning_rate(self.model.optimizer.iterations)).item())

        def on_train_end(self, logs=None):
            if self.actual == self.expected:
                print('GOOD')
            else:
                print(f'BUG: expected {self.expected}, actual {self.actual}')

    values = tf.range(20, dtype=tf.float32)
    inputs = tf.stack([values, values], axis=1)
    outputs = values * 2.0
    dataset = tf.data.Dataset.from_tensor_slices((inputs, outputs))
    dataset = dataset.batch(1)

    # Call fit() for steps 1-20.
    print('Test 1')
    model.fit(dataset, epochs=2, initial_epoch=0, steps_per_epoch=10,
              callbacks=[ValidationCallback([0.5] * 10 + [0.25] * 10)], verbose=0)

    # Call fit() for steps 1-20 again. BUG: the optimizer just retains its
    # iterations rather than calculating them from epochs/initial_epoch/steps_per_epoch.
    print('Test 2')
    model.fit(dataset, epochs=2, initial_epoch=0, steps_per_epoch=10,
              callbacks=[ValidationCallback([0.5] * 10 + [0.25] * 10)], verbose=0)

    # Call fit() for steps 1-20 a third time; work around the bug by resetting iterations.
    print('Test 3')
    model.optimizer.iterations.assign(0, read_value=False)
    model.fit(dataset, epochs=2, initial_epoch=0, steps_per_epoch=10,
              callbacks=[ValidationCallback([0.5] * 10 + [0.25] * 10)], verbose=0)

    # Call fit() for steps 11-20. BUG: should calculate iterations from
    # initial_epoch * steps_per_epoch (very important when resuming training from a checkpoint).
    print('Test 4')
    model.optimizer.iterations.assign(0, read_value=False)
    model.fit(dataset, epochs=2, initial_epoch=1, steps_per_epoch=10,
              callbacks=[ValidationCallback([0.25] * 10)], verbose=0)

    # Work around the bug by manually initializing iterations.
    print('Test 5')
    initial_epoch, steps_per_epoch = 1, 10
    model.optimizer.iterations.assign(initial_epoch * steps_per_epoch, read_value=False)
    model.fit(dataset, epochs=2, initial_epoch=initial_epoch, steps_per_epoch=steps_per_epoch,
              callbacks=[ValidationCallback([0.25] * 10)], verbose=0)

Other info / logs:

    Test 1
    GOOD
    Test 2
    BUG: expected [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25], actual [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25]
    Test 3
    GOOD
    Test 4
    BUG: expected [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25], actual [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
    Test 5
    GOOD
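The arithmetic behind the bug report above can be sketched without TensorFlow. A piecewise-constant schedule is keyed on the optimizer's global step, so a resumed optimizer whose step counter was re-created at 0 reads the wrong segment. The sketch below is hypothetical and simplifies the boundary handling (step here counts completed batches, 0-based).

```python
# Hypothetical pure-Python model of PiecewiseConstantDecay([10], [0.5, 0.25]):
# 0.5 for batch indices before the boundary, 0.25 afterwards.
BOUNDARIES = [10]
VALUES = [0.5, 0.25]  # one more value than boundaries

def piecewise_lr(step):
    for boundary, value in zip(BOUNDARIES, VALUES):
        if step < boundary:
            return value
    return VALUES[-1]

initial_epoch, steps_per_epoch = 1, 10
fresh_step = 0                                   # what a re-created optimizer holds
resumed_step = initial_epoch * steps_per_epoch   # what it *should* hold when resuming

print(piecewise_lr(fresh_step))    # 0.5  -> the buggy rate after resuming
print(piecewise_lr(resumed_step))  # 0.25 -> the rate the schedule should give
```

This is why the Test 5 workaround in the report — assigning `initial_epoch * steps_per_epoch` to `optimizer.iterations` before calling fit — restores the correct schedule position.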
tensorflowtensorflow | SSIM performance degradation in TF v2.5.0 | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. Tag: bug_template. System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): custom. OS platform and distribution: Linux (Colab environment). Mobile device: no. TensorFlow installed from (source or binary): pip. TensorFlow version (use command below): 2.4.0 / 2.5.0. Python version: 3.7. Bazel version (if compiling from source): none. GCC/compiler version (if compiling from source): none. CUDA/cuDNN version: —. GPU model and memory: Tesla K80.

Describe the current behavior: ssim has become extremely slow in TF 2.5.0 as compared to TF 2.4.x. Here are some tests I ran on Colab:

    import tensorflow as tf
    print(tf.__version__)  # 2.5.0
    %timeit ssim = tf.image.ssim(tf.random.normal([30, 500, 500, 1]), tf.random.normal([30, 500, 500, 1]), 1)
    # Output: 1 loop, best of 5: 30.1 s per loop

    import tensorflow as tf
    print(tf.__version__)  # 2.4.0
    %timeit ssim = tf.image.ssim(tf.random.normal([30, 500, 500, 1]), tf.random.normal([30, 500, 500, 1]), 1)
    # Output: 10 loops, best of 5: 93.5 ms per loop

The %timeit test warns clearly that there might be some caching involved for the 2.4.0 test which isn't available for 2.5.0, I assume; I don't know why caching shouldn't be enabled for 2.5.0 as well, if it's the default behaviour.

Describe the expected behavior: the ssim function clearly has a performance degradation in 2.5.0 and should not take so much time on a GPU.

Contributing — do you want to contribute a PR (yes/no), briefly describing your candidate solution if contributing: yes, but I will have to go through the history of the tensorflow/python/ops/image_ops_impl.py file. Standalone code to reproduce the issue (provide a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to Colab/Jupyter/any notebook): mentioned above. Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.
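For reproducing such measurements outside a notebook, the `%timeit` magic above can be replaced with the stdlib `timeit` module; the sketch below uses a stand-in workload (not `tf.image.ssim`) purely to show the measurement pattern.

```python
import timeit

# Hypothetical stand-in workload; replace with the call being benchmarked.
def workload():
    return sum(i * i for i in range(10_000))

# repeat() returns one total time per run; report the minimum, as %timeit does,
# since the fastest run is the least perturbed by background noise.
runs = timeit.repeat(workload, number=10, repeat=5)
best_ms = min(runs) / 10 * 1e3
print(f"best of 5: {best_ms:.2f} ms per loop")
```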
tensorflowtensorflow | Model re-compilation/update under tf.distribute MirroredStrategy issue | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. Tag: bug_template. System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 18.04. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.4.0. Python version: 3.6. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: 11.0. GPU model and memory: RTX 2080 Ti x4. (You can collect some of this information using our environment capture script; you can also obtain the TensorFlow version with: TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"; TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)".)

Describe the current behavior: I am facing an issue with TensorFlow with a model trained on multiple GPUs (x4) with the distribute MirroredStrategy. The first training works perfectly with the MirroredStrategy. Afterwards, I want to update some parameters in the same run, such as the learning rate and optimizer. The model compilation works perfectly, but when the new training sequence is started I get the error below. Here is my code; do you have any advice on how to avoid this error? The training stops; when I reload the model and run the next step of the training it works, but not in the pipeline. Do you have any idea?

    mirrored_strategy = tf.distribute.MirroredStrategy()
    with mirrored_strategy.scope():
        # Everything that creates variables should be under the strategy scope.
        # In general this is only model construction and compile().
        print('[INFO] compiling model...')
        baseModel, headModel, model = get_model(lastlayerneurons=20, dropout=0.5)
        for layer in baseModel.layers:
            layer.trainable = False
        # Construct the set of metrics.
        metrics = ['accuracy']
        opt = SGD(lr=1e-3)
        model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=metrics)

    print('[INFO] training head...')
    model.fit(train_dataset, steps_per_epoch=steps_per_epoch,
              validation_data=val_dataset, validation_steps=validation_steps,
              epochs=10, max_queue_size=500, callbacks=callbacks, verbose=1,
              use_multiprocessing=True, workers=20)

    with mirrored_strategy.scope():
        print('[INFO] re-compiling model...')
        for layer in model.layers[-5:]:  # dernier (last) block, i.e. 5 layers
            layer.trainable = True
        # Construct the set of metrics.
        metrics = ['accuracy']
        opt = SGD(lr=1e-4)
        model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=metrics)

    print('[INFO] fine-tuning v2 model...')
    model.fit(train_dataset, steps_per_epoch=steps_per_epoch,
              validation_data=val_dataset, validation_steps=validation_steps,
              epochs=100, max_queue_size=500, callbacks=callbacks, verbose=1,
              use_multiprocessing=True, workers=20)

Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.

    Exception ignored in: Traceback (most recent call last):
      File "/usr/lib/python3.6/tkinter/__init__.py", line 3507, in __del__
        self.tk.call('image', 'delete', self.name)
    RuntimeError: main thread is not in main loop
tensorflowtensorflow | Error in tf.keras.metrics.AUC | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. Tag: bug_template. System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: macOS Catalina. Mobile device: no. TensorFlow installed from (source or binary): no. TensorFlow version (use command below): 2.4.1. Python version: 3.6.6. Bazel version (if compiling from source): —. GCC/compiler version (if compiling from source): —. CUDA/cuDNN version: —. GPU model and memory: —.

Describe the current behavior: tf.keras.metrics.AUC(from_logits=True) and tf.keras.metrics.AUC(from_logits=False) output:

    TypeError: __init__() got an unexpected keyword argument 'from_logits'

Describe the expected behavior. Contributing — do you want to contribute a PR (yes/no), briefly describing your candidate solution if contributing. Standalone code to reproduce the issue: reproducible error. Other info / logs:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: __init__() got an unexpected keyword argument 'from_logits'
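A hypothetical workaround sketch for versions whose AUC metric has no `from_logits` argument: apply a sigmoid to the logits before passing them to the metric. Because the sigmoid is strictly increasing, the ranking of scores — and therefore the AUC — is unchanged. The rank-based `auc` helper below is illustrative, not the Keras implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def auc(labels, scores):
    """Rank-based AUC: P(score of a random positive > score of a random negative)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]
logits = [-1.2, 0.3, 0.8, 2.0]
probs = [sigmoid(z) for z in logits]
print(auc(labels, logits), auc(labels, probs))  # identical: 1.0 1.0
```

On versions where AUC does accept the argument, `tf.keras.metrics.AUC(from_logits=True)` performs this transformation internally.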
tensorflowtensorflow | Can't save checkpoint when using a multi-cell RNN | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g. Linux Ubuntu 16.04): Linux Ubuntu 18.04. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): v2.4.0-49-g85c8b2a817f 2.4.1. Python version: Python 3.7.6. CUDA/cuDNN version: 11.3. GPU model and memory: NVIDIA Corporation GV100GL Tesla V100 PCIe, 32 GB.

Describe the current behavior: I'm using a multi-cell RNN with the following code:

    self.rnn_cells = [tf.keras.layers.LSTMCell(self.cfg.hidden_units) for _ in range(self.cfg.depth)]
    self.encoder = tf.keras.layers.RNN(self.rnn_cells, return_sequences=True, return_state=True)

and I'm using tf.train.Checkpoint and tf.train.CheckpointManager to save checkpoints of my model. However, self.manager.save() throws the error pasted below:

    ValueError: Unable to save the object ListWrapper (a list wrapper constructed to track trackable TensorFlow objects). A list element was replaced (__setitem__, __setslice__), deleted (__delitem__, __delslice__), or moved (sort). In order to support restoration on object creation, tracking is exclusively for append-only data structures. If you don't need this list checkpointed, wrap it in a non-trackable object; it will be subsequently ignored.

Describe the expected behavior: self.manager.save() saves the model correctly.

Contributing — do you want to contribute a PR (yes/no): no, I don't know how to solve this problem.
tensorflowtensorflow | Installed tensorflow-gpu and upgraded grpcio | Bug | I have never had this problem before today, but now when I run code I have these errors.
tensorflowtensorflow | Problems with using tf.data.Dataset with MirroredStrategy with eager execution disabled | Bug | System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution (e.g. Linux Ubuntu 16.04): Mac. Mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.4. Python version: 3.8.

Describe the current behavior: when training a model that uses tf.data.Dataset with MirroredStrategy while eager execution is disabled, the function under backend.GraphExecutionFunction below does not know how to handle inputs that do not include NumPy arrays:

    def __call__(self, inputs):
        inputs = nest.flatten(inputs, expand_composites=True)
        session = get_session(inputs)
        feed_arrays = []
        array_vals = []
        feed_symbols = []
        symbol_vals = []
        for tensor, value in zip(self.inputs, inputs):
            if value is None:
                continue
            if tensor_util.is_tensor(value):
                # Case: feeding symbolic tensor.
                feed_symbols.append(tensor)
                symbol_vals.append(value)
            else:
                # Case: feeding Numpy array.
                feed_arrays.append(tensor)
                # We need to do array conversion and type casting at this level,
                # since `callable_fn` only supports exact matches.
                tensor_type = dtypes_module.as_dtype(tensor.dtype)
                array_vals.append(np.asarray(value, dtype=tensor_type.as_numpy_dtype))

        if self.feed_dict:
            for key in sorted(self.feed_dict.keys()):
                array_vals.append(
                    np.asarray(self.feed_dict[key], dtype=key.dtype.base_dtype.name))

        # Refresh callable if anything has changed.
        if (self._callable_fn is None or feed_arrays != self._feed_arrays or
            symbol_vals != self._symbol_vals or
            feed_symbols != self._feed_symbols or self.fetches != self._fetches or
            session != self._session):
            self._make_callable(feed_arrays, feed_symbols, symbol_vals, session)

        fetched = self._callable_fn(*array_vals, run_metadata=self.run_metadata)
        self._call_fetch_callbacks(fetched[-len(self._fetches):])
        output_structure = nest.pack_sequence_as(
            self._outputs_structure,
            fetched[:len(self.outputs)],
            expand_composites=True)
        # We need to evaluate any composite tensor objects that have been
        # reconstructed in 'pack_sequence_as', since otherwise they'll be output as
        # actual CompositeTensor objects instead of the value(s) contained in the
        # CompositeTensors. E.g., if output_structure contains a SparseTensor, then
        # this ensures that we return its value as a SparseTensorValue rather than
        # a SparseTensor.
        return nest.map_structure(self._eval_if_composite, output_structure)

In particular, these lines fail when the inputs are only symbolic tensors with symbolic values:

    fetched = self._callable_fn(*array_vals, run_metadata=self.run_metadata)
    self._call_fetch_callbacks(fetched[-len(self._fetches):])
    output_structure = nest.pack_sequence_as(
        self._outputs_structure,
        fetched[:len(self.outputs)],
        expand_composites=True)

Describe the expected behavior: this function should be able to run when there are only feed_symbols and symbol_vals, unless I am missing something. Does this function need to have feed_arrays / array_vals?
tensorflowtensorflow | TensorFlow GPU instructions should be updated for TF 2.5 and CUDA 11.2 | Bug | URL(s) with the issue: Windows setup. Description of issue (what needs changing): TensorFlow 2.5 is built against CUDA 11.2 and cuDNN 8.1.0; however, the GPU instructions still refer to CUDA 11.0 and cuDNN 8.0.4. I believe that, for the sake of clarity, these should be updated to match TF 2.5. This includes both the installation commands for Linux and the PATH commands for Windows. Submit a pull request? I'd be glad to submit a PR for the Windows instructions, but I don't currently have access to a machine I can use to verify the Linux steps.
tensorflowtensorflow | Error importing from keras.models: AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'tf2' | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. Tag: bug_template. System information. Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Linux Ubuntu 20.04. Mobile device: —. Keras version: 2.4.3. TensorFlow installed from (source or binary): source. TensorFlow version (use command below): 2.5.0. Python version: 3.7.10. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a.

Describe the current behavior: the following error is raised when importing models from keras.models:

    dosma/models/oaiunet2d.py:14: in <module>
        from keras.models import Model
    /opt/hostedtoolcache/Python/3.7.10/x64/lib/python3.7/site-packages/keras/__init__.py:20: in <module>
        from . import initializers
    /opt/hostedtoolcache/Python/3.7.10/x64/lib/python3.7/site-packages/keras/initializers/__init__.py:124: in <module>
        populate_deserializable_objects()
    /opt/hostedtoolcache/Python/3.7.10/x64/lib/python3.7/site-packages/keras/initializers/__init__.py:49: in populate_deserializable_objects
        LOCAL.GENERATED_WITH_V2 = tf.__internal__.tf2.enabled()
    E   AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'tf2'

Describe the expected behavior: no error should be thrown.

Contributing — do you want to contribute a PR (yes/no): if this requires a fix, yes. Standalone code to reproduce the issue:

    from keras.models import Model

Other info / logs: include any logs or source code that would be helpful to diagnose the problem; if including tracebacks, please include the full traceback; large logs and files should be attached.
tensorflowtensorflow | Checkpoint was expecting a trackable object | Bug | System information. OS platform and distribution: Linux Ubuntu 18.04.5. Python version: 3.7. tensorflow-gpu: 2.4.1. CUDA/cuDNN version: 11.3 / 7.6.5.

Description: all models are written in tf.keras. I want to do hyperparameter tuning for the models and vary the number of convolutional layers / bottleneck layers, but now the discriminator model seems to be no longer trackable. Can anyone help?

Output:

    ValueError                                Traceback (most recent call last)
    <ipython-input> in <module>
         21     print('--- Starting trial: %s' % run_name)
         22     print({h.name: hparams[h] for h in hparams})
    ---> 23     run('hparam_tuning/' + run_name, hparams, train_256 (train256 dataset), test_256 (test256 dataset), epochs=5)
         24     session_num += 1

    <ipython-input> in run(run_dir, hparams, train_256, test_256, epochs)
          3     with tf.summary.create_file_writer(run_dir).as_default():
          4         hp.hparams(hparams)
          5         gen_l1_loss, disc_loss = train_test(hparams, train_256, test_256, epochs=epochs)
          6         tf.summary.scalar('gen_l1_loss', gen_l1_loss, step=epochs)
          7         tf.summary.scalar('disc_loss', disc_loss, step=epochs)

    <ipython-input> in train_test(hparams, train_256, test_256, epochs)
        129         discriminator_optimizer=discriminator_optimizer,
        130         generator=generator,
    --> 131         discriminator=discriminator)
        132     log_dir = 'logs/'
        133     sum_log = f'{log_dir}{model_name}' + datetime.datetime.now().strftime('%Y%m%d-%H%M%S')

    ~/anaconda3/envs/tf_gpu/lib/python3.7/site-packages/tensorflow/python/training/tracking/util.py in __init__(self, root, **kwargs)
       1927       # v to a trackable data structure when v is a list/dict/tuple.
       1928       converted_v = getattr(self, k)
    -> 1929       _assert_trackable(converted_v)
       1930
       1931     if root:

    ~/anaconda3/envs/tf_gpu/lib/python3.7/site-packages/tensorflow/python/training/tracking/util.py in _assert_trackable(obj)
       1413         "object should be trackable (i.e. it is part of the "
       1414         "TensorFlow Python API and manages state), please open an issue.")
    -> 1415         .format(obj))
       1416
       1417

    ValueError: `Checkpoint` was expecting a trackable object (an object derived from `TrackableBase`), got <function get_discriminator at 0x7fa5a8174f80>. If you believe this object should be trackable (i.e. it is part of the TensorFlow Python API and manages state), please open an issue.
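The traceback above shows a function object (`<function get_discriminator ...>`) being handed to the checkpoint where a built model was expected. A hypothetical plain-Python illustration of that distinction (the names mirror the traceback and are made up; no TF API is used here):

```python
# tf.train.Checkpoint tracks *objects*, not callables: passing a model-builder
# function instead of the model it builds triggers the "not trackable" error.
def get_discriminator():
    """Stand-in for a function that builds and returns a tf.keras.Model."""
    return {"name": "discriminator", "layers": []}

passed_by_mistake = get_discriminator    # the function object itself
passed_correctly = get_discriminator()   # the constructed (trackable) model

print(callable(passed_by_mistake), callable(passed_correctly))  # True False
```

Assuming the reporter's `get_discriminator` is such a builder, the fix this suggests is `tf.train.Checkpoint(discriminator=get_discriminator())` rather than `tf.train.Checkpoint(discriminator=get_discriminator)`.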
tensorflowtensorflow | tf.gradients with variably shaped input returns an error (inverse of extract_volume_patches) | Bug | System information: Google Colab, TF v2.4.1-0-g85c8b2a817f. Using the gradient as suggested in #6743 (issuecomment-271969125) and on StackOverflow to compute the inverse of tf.extract_volume_patches works for statically known input shapes; however, it returns an error when used with variably sized input shapes. A minimal example for a synthetic 3D MNIST dataset: it works when the input is (28, 28, 28, 1); however, I need the extraction on variably sized data inputs (large biomedical images).

    import tensorflow as tf
    import numpy as np

    def gaussian(x, amp=1, mu=None, sig=None):
        """Gaussian function over the d dimensions of x."""
        if mu is None:
            mu = np.zeros_like(x)
        if sig is None:
            sig = np.ones_like(x)
        return amp * np.exp(-np.sum(np.square(x - mu) / (2 * np.square(sig))))

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train.astype(np.float32) / 255
    x_test = x_test.astype(np.float32) / 255

    multiplier = np.zeros(28, dtype=np.float32)
    for i in range(4):
        multiplier[13 - i] = round(gaussian(i), 2)
        multiplier[14 + i] = round(gaussian(i), 2)
    print(multiplier)
    x_train = np.einsum('bhw,d->bdhw', x_train, multiplier)[..., np.newaxis]

    class ExtractPatches(tf.keras.layers.Layer):
        def __init__(self, ksizes, strides, shape):
            super(ExtractPatches, self).__init__()
            self.ksizes = ksizes
            self.strides = strides
            self.shape = shape

        def call(self, inputs):
            patches = tf.extract_volume_patches(inputs, ksizes=self.ksizes,
                                                strides=self.strides, padding='VALID')
            return tf.reshape(patches, self.shape)

    class CombinePatches(tf.keras.layers.Layer):
        def __init__(self, ksizes, strides):
            super(CombinePatches, self).__init__()
            self.ksizes = ksizes
            self.strides = strides

        def call(self, patches, inputs):
            target_volume = tf.zeros_like(inputs)
            target_patches = tf.extract_volume_patches(target_volume, ksizes=self.ksizes,
                                                       strides=self.strides, padding='VALID')
            # Create the list of gradients mapping from patches to target shape:
            # patches without overlap get 1; elements that overlap receive 1 times
            # the number of overlaps.
            target_grad_mapping = tf.gradients(target_patches, target_volume)[0]
            # Compute gradients again, dividing by grad (otherwise it's just a sum).
            return tf.gradients(target_patches, target_volume, patches)[0] / target_grad_mapping

    def create_model():
        inputs = tf.keras.layers.Input(shape=(None, None, None, 1))
        patches = ExtractPatches(ksizes=[1, 14, 14, 14, 1], strides=[1, 14, 14, 14, 1],
                                 shape=[-1, 14, 14, 14, 1])(inputs)
        encoded = tf.keras.layers.Conv3D(filters=28, kernel_size=(14, 14, 14),
                                         strides=(14, 14, 14))(patches)
        decoded = tf.keras.layers.Conv3DTranspose(filters=1, kernel_size=(14, 14, 14),
                                                  strides=(14, 14, 14))(encoded)
        merged = CombinePatches(ksizes=[1, 14, 14, 14, 1], strides=[1, 14, 14, 14, 1])(decoded, inputs)
        return tf.keras.Model(inputs=inputs, outputs=merged)

    ae = create_model()
    ae.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
               loss=tf.keras.losses.MeanSquaredError(), metrics=['accuracy'])
    # Test
    history = ae.fit(x_train, x_train, batch_size=1, epochs=1, callbacks=None)

Stacktrace:

    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>
         55
         56
    ---> 57 ae = create_model()
         58
         59 ae.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),

    5 frames
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
        668     except Exception as e:  # pylint:disable=broad-except
        669       if hasattr(e, 'ag_error_metadata'):
    --> 670         raise e.ag_error_metadata.to_exception(e)
        671       else:
        672         raise

    TypeError: in user code:

        <ipython-input>:41 call
            target_grad_mapping = tf.gradients(target_patches, target_volume)[0]
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gradients_impl.py:318 gradients_v2
            unconnected_gradients)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gradients_util.py:684 _GradientsHelper
            lambda: grad_fn(op, *out_grads))
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gradients_util.py:340 _MaybeCompile
            return grad_fn()  # Exit early
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gradients_util.py:684 <lambda>
            lambda: grad_fn(op, *out_grads))
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/array_grad.py:1067 _ExtractVolumePatchesGrad
            input_indices_num = 1 + planes_in * rows_in * cols_in

        TypeError: unsupported operand type(s) for *: 'NoneType' and 'NoneType'
tensorflowtensorflow | tensorflow.python.framework.errors_impl.NotFoundError: Key BeamSearchDecoderStep/multi_rnn_cell/cell_0_attention/attention_wrapper/lstm_cell_9/bias not found in checkpoint | Bug | System information: TensorFlow version 2.4.0; Python version 3.6.2.

Problem: I am trying to upgrade the LAS model from TensorFlow version 1.8.0 to 2.4.0. There is no problem in training the model, but in the testing phase, loading the model shows that a parameter cannot be found. I printed the saved model file; there is a parameter named with a 1 in it. I don't understand where the problem is. I would be very grateful if you could answer my question.

Error message:

    2021-05-11 17:09:25.582423: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
    2021-05-11 17:09:25.582893: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
    INFO:tensorflow:Using config: {'_model_dir': 'data/kss_kspon_dataset/model_test', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true, graph_options { rewrite_options { meta_optimizer_iterations: ONE } }, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
    INFO:tensorflow:Calling model_fn.
    INFO:tensorflow:Building listener
    2021-05-11 17:09:33.837384: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
    2021-05-11 17:09:33.841199: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
    2021-05-11 17:09:33.841721: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
    2021-05-11 17:09:33.850617: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-I630CDV
    2021-05-11 17:09:33.851304: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-I630CDV
    2021-05-11 17:09:33.852078: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2021-05-11 17:09:33.853485: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
    INFO:tensorflow:Building speller
    WARNING:tensorflow:From C:\Users\yangrui\Desktop\pythonProject\korean_speech\las\model.py:346: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Deprecated in favor of operator or tf.math.divide.
    INFO:tensorflow:Done calling model_fn.
    INFO:tensorflow:Starting evaluation at 2021-05-11T17:09:47Z
    INFO:tensorflow:Graph was finalized.
    2021-05-11 17:09:48.052221: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
    INFO:tensorflow:Restoring parameters from data/kss_kspon_dataset/model_test/model.ckpt-0
    2021-05-11 17:09:48.135902: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
    2021-05-11 17:09:48.321843: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at save_restore_v2_ops.cc:205 : Not found: Key BeamSearchDecoderStep/multi_rnn_cell/cell_0_attention/attention_wrapper/lstm_cell_9/bias not found in checkpoint

    Traceback (most recent call last):
      File "C:\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
        return fn(*args)
      File "C:\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
        target_list, run_metadata)
      File "C:\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
        run_metadata)
    tensorflow.python.framework.errors_impl.NotFoundError: Key BeamSearchDecoderStep/multi_rnn_cell/cell_0_attention/attention_wrapper/lstm_cell_9/bias not found in checkpoint
         [[node save/RestoreV2]]

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\training\saver.py", line 1298, in restore
        {self.saver_def.filename_tensor_name: save_path})
      File "C:\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
        run_metadata_ptr)
      File "C:\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
        feed_dict_tensor, options, run_metadata)
      File "C:\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
        run_metadata)
      File "C:\Anaconda3\envs\tf2\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
        raise type(e)(node_def, op, message)
    tensorflow.python.framework.errors_impl.NotFoundError: Key BeamSearchDecoderStep/multi_rnn_cell/cell_0_attention/attention_wrapper/lstm_cell_9/bias not found in checkpoint
         [[node save/RestoreV2]]
    (defined at anaconda3\envs\tf2\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py:1647)

    Original stack trace for 'save/RestoreV2':
      File "Users\yangrui\Desktop\pythonProject\korean_speech\eval.py", line 114, in <module>
        main(args)
      File "Users\yangrui\Desktop\pythonProject\korean_speech\eval.py", line 85, in main
        input_fn=lambda: input_fn(
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 467, in evaluate
        name=name)
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 510, in _actual_eval
        return _evaluate()
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 499, in _evaluate
        output_dir=self.eval_dir(name))
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1647, in _evaluate_run
        config=self._session_config)
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow\python\training\evaluation.py", line 269, in _evaluate_once
        session_creator=session_creator, hooks=hooks) as session:
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1038, in __init__
        stop_grace_period_secs=stop_grace_period_secs)
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow\python\training\monitored_session.py", line 749, in __init__
        self._sess = _RecoverableSession(self._coordinated_creator)
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1231, in __init__
        _WrappedSession.__init__(self, self._create_session())
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1236, in _create_session
        return self._sess_creator.create_session()
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow\python\training\monitored_session.py", line 902, in create_session
        self.tf_sess = self._session_creator.create_session()
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow\python\training\monitored_session.py", line 660, in create_session
        self._scaffold.finalize()
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow\python\training\monitored_session.py", line 235, in finalize
        self._saver = training_saver._get_saver_or_default()  # pylint: disable=protected-access
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow\python\training\saver.py", line 606, in _get_saver_or_default
        saver = Saver(sharded=True, allow_empty=True)
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow\python\training\saver.py", line 835, in __init__
        self.build()
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow\python\training\saver.py", line 847, in build
        self._build(self._filename, build_save=True, build_restore=True)
      File "anaconda3\envs\tf2\lib\site-packages\tensorflow\python
training saver py line 885 in build build restore build restore file anaconda3 envs tf2 lib site package tensorflow python training saver py line 509 in build internal restore sequentially reshape file anaconda3 envs tf2 lib site package tensorflow python training saver py line 388 in addshardedrestoreop name restore shard file anaconda3 envs tf2 lib site package tensorflow python training saver py line 335 in addrestoreop restore sequentially file anaconda3 envs tf2 lib site package tensorflow python training saver py line 582 in bulk restore return io op restore v2 filename tensor name slice dtype file anaconda3 envs tf2 lib site package tensorflow python ops gen io op py line 1510 in restore v2 name name file anaconda3 envs tf2 lib site package tensorflow python framework op def library py line 750 in apply op helper attrs attr proto op def op def file anaconda3 envs tf2 lib site package tensorflow python framework op py line 3536 in create op internal op def op def file anaconda3 envs tf2 lib site package tensorflow python framework op py line 1990 in init self traceback tf stack extract stack during handling of the above exception another exception occur traceback most recent call last file c anaconda3 envs tf2 lib site package tensorflow python train py checkpoint reader py line 70 in get tensor self compat as bytes tensor str runtimeerror key checkpointable object graph not find in checkpoint during handling of the above exception another exception occur traceback most recent call last file c anaconda3 envs tf2 lib site package tensorflow python training saver py line 1308 in restore name to key object graph key mapping save path file c anaconda3 envs tf2 lib site package tensorflow python training saver py line 1626 in object graph key mapping object graph string reader get tensor trackable object graph proto key file c anaconda3 envs tf2 lib site package tensorflow python train py checkpoint reader py line 74 in get tensor error translator e file c 
anaconda3 envs tf2 lib site package tensorflow python train py checkpoint reader py line 35 in error translator raise error impl notfounderror none none error message tensorflow python framework error impl notfounderror key checkpointable object graph not find in checkpoint during handling of the above exception another exception occur traceback most recent call last file c user yangrui desktop pythonproject korean speech eval py line 114 in main args file c user yangrui desktop pythonproject korean speech eval py line 85 in main input fn lambda input fn file c anaconda3 envs tf2 lib site package tensorflow estimator python estimator estimator py line 467 in evaluate name name file c anaconda3 envs tf2 lib site package tensorflow estimator python estimator estimator py line 510 in actual eval return evaluate file c anaconda3 envs tf2 lib site package tensorflow estimator python estimator estimator py line 499 in evaluate output dir self eval dir name file c anaconda3 envs tf2 lib site package tensorflow estimator python estimator estimator py line 1647 in evaluate run config self session config file c anaconda3 envs tf2 lib site package tensorflow python training evaluation py line 269 in evaluate once session creator session creator hook hook as session file c anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 1038 in init stop grace period sec stop grace period sec file c anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 749 in init self sess recoverablesession self coordinate creator file c anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 1231 in init wrappedsession init self self create session file c anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 1236 in create session return self sess creator create session file c anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 902 in 
create session self tf sess self session creator create session file c anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 669 in create session init fn self scaffold init fn file c anaconda3 envs tf2 lib site package tensorflow python training session manager py line 295 in prepare session config config file c anaconda3 envs tf2 lib site package tensorflow python training session manager py line 209 in restore checkpoint saver restore sess checkpoint filename with path file c anaconda3 envs tf2 lib site package tensorflow python training saver py line 1314 in restore err a variable name or other graph key that be miss tensorflow python framework error impl notfounderror restore from checkpoint fail this be most likely due to a variable name or other graph key that be miss from the checkpoint please ensure that you have not alter the graph expect base on the checkpoint original error key beamsearchdecoderstep multi rnn cell cell 0 attention attention wrapper lstm cell 9 bias not find in checkpoint node save restorev2 define at anaconda3 envs tf2 lib site package tensorflow estimator python estimator estimator py 1647 original stack trace for save restorev2 file user yangrui desktop pythonproject korean speech eval py line 114 in main args file user yangrui desktop pythonproject korean speech eval py line 85 in main input fn lambda input fn file anaconda3 envs tf2 lib site package tensorflow estimator python estimator estimator py line 467 in evaluate name name file anaconda3 envs tf2 lib site package tensorflow estimator python estimator estimator py line 510 in actual eval return evaluate file anaconda3 envs tf2 lib site package tensorflow estimator python estimator estimator py line 499 in evaluate output dir self eval dir name file anaconda3 envs tf2 lib site package tensorflow estimator python estimator estimator py line 1647 in evaluate run config self session config file anaconda3 envs tf2 lib site package tensorflow python 
training evaluation py line 269 in evaluate once session creator session creator hook hook as session file anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 1038 in init stop grace period sec stop grace period sec file anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 749 in init self sess recoverablesession self coordinate creator file anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 1231 in init wrappedsession init self self create session file anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 1236 in create session return self sess creator create session file anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 902 in create session self tf sess self session creator create session file anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 660 in create session self scaffold finalize file anaconda3 envs tf2 lib site package tensorflow python training monitor session py line 235 in finalize self saver training saver get saver or default pylint disable protect access file anaconda3 envs tf2 lib site package tensorflow python training saver py line 606 in get saver or default saver saver sharde true allow empty true file anaconda3 envs tf2 lib site package tensorflow python training saver py line 835 in init self build file anaconda3 envs tf2 lib site package tensorflow python training saver py line 847 in build self build self filename build save true build restore true file anaconda3 envs tf2 lib site package tensorflow python training saver py line 885 in build build restore build restore file anaconda3 envs tf2 lib site package tensorflow python training saver py line 509 in build internal restore sequentially reshape file anaconda3 envs tf2 lib site package tensorflow python training saver py line 388 in addshardedrestoreop name restore shard file 
anaconda3 envs tf2 lib site package tensorflow python training saver py line 335 in addrestoreop restore sequentially file anaconda3 envs tf2 lib site package tensorflow python training saver py line 582 in bulk restore return io op restore v2 filename tensor name slice dtype file anaconda3 envs tf2 lib site package tensorflow python ops gen io op py line 1510 in restore v2 name name file anaconda3 envs tf2 lib site package tensorflow python framework op def library py line 750 in apply op helper attrs attr proto op def op def file anaconda3 envs tf2 lib site package tensorflow python framework op py line 3536 in create op internal op def op def file anaconda3 envs tf2 lib site package tensorflow python framework op py line 1990 in init self traceback tf stack extract stack process finish with exit code 1 part of my code import tensorflow as tf import numpy as np from tensorflow python util import nest import tensorflow addon as tfa from las ops import lstm cell from las ops import pyramidal bilstm assert tf executing eagerly all listener speller reference class attentionmulticell tf keras layers stackedrnncell class attentionmulticell tf compat v1 nn rnn cell multirnncell a multicell with attention style def init self attention cell cell use new attention false create a attentionmulticell args attention cell an instance of attentionwrapper cell a list of rnncell wrap with attentioninputwrapper use new attention whether to use the attention generate from current step bottom layer s output default be false cell attention cell cell self use new attention use new attention super attentionmulticell self init cell def call self input state training false scope none run the cell with bottom layer s attention copy to all upper layer if not nest be sequence state raise valueerror expect state to be a tuple of length d but receive s len self state size state with tf compat v1 variable scope scope or multi rnn cell new state with tf compat v1 variable scope cell 0 attention 
attention cell self cell 0 attention state state 0 cur inp new attention state attention cell input attention state new state append new attention state for I in range 1 len self cell with tf compat v1 variable scope cell d I cell self cell I cur state state I if self use new attention cur inp tf concat cur inp new attention state attention 1 else cur inp tf concat cur inp attention state attention 1 cur inp new state cell cur inp cur state new state append new state return cur inp new state class customattention tfa seq2seq luongattention def init self num unit memory memory sequence length none scale false probability fn none score mask value none dtype none name customattention super customattention self init num unit num unit memory memory memory sequence length memory sequence length scale scale probability fn probability fn score mask value score mask value dtype dtype name name self query layer tf compat v1 layer dense num unit name query layer use bias false dtype dtype self key tf nn relu self key def call self query state process query tf nn relu self query layer query return super customattention self call process query state def listener encoder input source sequence length mode hparam if hparam use pyramidal return pyramidal bilstm encoder input source sequence length mode hparam else forward cell list backward cell list for layer in range hparam num layer with tf compat v1 variable scope fw cell format layer cell lstm cell hparam num unit hparam dropout mode forward cell list append cell with tf compat v1 variable scope bw cell format layer cell lstm cell hparam num unit hparam dropout mode backward cell list append cell forward cell tf keras layers stackedrnncell forward cell list backward cell tf keras layers stackedrnncell backward cell list encoder output encoder state tf keras layers bidirectional forward cell backward cell encoder input sequence length source sequence length dtype tf float32 output batch size max time forward cell output size 
batch size max time hide size encoder output tf concat encoder output 1 return encoder output source sequence length encoder state def attend encoder output source sequence length mode hparam memory encoder output if hparam attention type luong attention fn tfa seq2seq luongattention elif hparam attention type bahdanau attention fn tfa seq2seq bahdanauattention elif hparam attention type custom attention fn customattention attention mechanism attention fn hparam num unit memory source sequence length cell list for layer in range hparam num layer with tf compat v1 variable scope decoder cell format layer cell lstm cell hparam num unit hparam dropout mode cell lstm cell hparam num unit hparam dropout mode cell list append cell alignment history mode tf estimator modekeys train if hparam bottom only false only wrap the bottom layer with the attention mechanism attention cell cell list pop 0 attention cell tf cast attention cell dtype float32 attention mechanism tf cast attention mechanism dtype float32 attention cell tfa seq2seq attentionwrapper attention cell attention mechanism attention layer size hparam attention layer size alignment history alignment history decoder cell attentionmulticell attention cell cell list else decoder cell tf keras layers stackedrnncells cell list decoder cell tfa seq2seq attentionwrapper decoder cell attention mechanism attention layer size hparam attention layer size alignment history alignment history return decoder cell def speller encoder output encoder state decoder input source sequence length target sequence length mode hparam batch size tf shape input encoder output 0 beam width hparam beam width if mode tf estimator modekey predict and beam width 0 encoder output tfa seq2seq tile batch encoder output multipli beam width source sequence length tfa seq2seq tile batch source sequence length multipli beam width encoder state tfa seq2seq tile batch encoder state multipli beam width batch size batch size beam width if mode tf 
estimator modekey eval and beam width 0 encoder output tfa seq2seq tile batch encoder output multipli beam width source sequence length tfa seq2seq tile batch source sequence length multipli beam width encoder state tfa seq2seq tile batch encoder state multipli beam width batch size batch size beam width def embed fn ids pass callable object to avoid oom when use one hot encoding if hparam embed size 0 target embed tf compat v1 get variable target embed hparam target vocab size hparam embed size dtype tf float32 initializer tf compat v1 kera initializer variancescale scale 1 0 mode fan avg distribution uniform return tf nn embed lookup param target embed id ids else return tf one hot ids hparam target vocab size decoder cell attend encoder output source sequence length mode hparam projection layer tf keras layer dense hparam target vocab size use bias true name projection layer if hparam pass hidden state and hparam bottom only initial state tuple zs clone cell state es if isinstance zs tfa seq2seq attentionwrapperstate else es for zs es in zip decoder cell get initial state batch size batch size dtype tf float32 encoder state else initial state decoder cell get initial state batch size batch size dtype tf float32 maximum iteration none if mode tf estimator modekeys train max source length tf reduce max input tensor source sequence length maximum iteration tf cast tf round tf cast max source length dtype tf float32 hparam decode length factor dtype tf int32 if mode tf estimator modekeys train decoder input embed fn decoder input decay step hparam decay step iter num tf compat v1 train get global step inverse probability tf compat v1 train polynomial decay 1 0 iter num decay step 0 6 sample probability 1 0 inverse probability if hparam sample probability helper tfa seq2seq scheduledembeddingtrainingsampl sample probability sample probability embed fn embed fn else helper tfa seq2seq trainingsampler decoder tfa seq2seq basicdecoder cell decoder cell sampler helper 
output layer projection layer maximum iteration maximum iteration decoder output final context state final sequence length tfa seq2seq dynamic decode decoder training true decoder init input decoder input decoder init kwargs initial state initial state sequence length target sequence length elif mode tf estimator modekey predict and beam width 0 start token tf fill tf compat v1 div batch size beam width hparam sos i d decoder tfa seq2seq beamsearchdecoder cell decoder cell embed fn embed fn beam width beam width output layer projection layer maximum iteration maximum iteration decoder output final context state final sequence length tfa seq2seq dynamic decode decoder decoder input embed fn decoder input training false decoder init kwargs start token start token end token hparam eos i d initial state initial state else start token tf fill batch size hparam sos i d helper tf contrib seq2seq greedyembeddinghelper embed fn start token hparam eos i d decoder tf contrib seq2seq basicdecoder decoder cell helper initial state output layer projection layer start token tf fill tf compat v1 div batch size beam width hparam sos i d decoder tfa seq2seq beamsearchdecoder cell decoder cell embed fn embed fn beam width beam width output layer projection layer maximum iteration maximum iteration decoder output final context state final sequence length tfa seq2seq dynamic decode decoder decoder input embed fn decoder input training false decoder init kwargs start token start token end token hparam eos i d initial state initial state return decoder output final context state final sequence length besides I use tf estimator estimator for training and evaluation |
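The NotFoundError in this report comes down to name-based restore: `tf.compat.v1.train.Saver` looks up each graph variable's exact scope name as a key in the checkpoint, so any difference in how scopes are built between versions (for example an extra uniquification suffix such as `_1`) breaks the lookup. Below is a minimal framework-free sketch of that behavior; the checkpoint contents, variable names, and helper function are all illustrative, not TensorFlow's actual Saver code:

```python
def restore_by_name(checkpoint: dict, graph_variables: list) -> dict:
    """Mimic name-based checkpoint restore: every variable in the graph
    must be present, under exactly the same key, in the checkpoint."""
    restored = {}
    for name in graph_variables:
        if name not in checkpoint:
            raise KeyError(f"Key {name} not found in checkpoint")
        restored[name] = checkpoint[name]
    return restored

# A checkpoint written by a graph whose scope got a "_1" uniquification suffix.
ckpt = {"decoder/lstm_cell_1/bias": [0.0], "decoder/lstm_cell_1/kernel": [1.0]}

# The evaluation graph builds the scope without the suffix, so restore fails.
try:
    restore_by_name(ckpt, ["decoder/lstm_cell/bias"])
except KeyError as e:
    print(e)  # 'Key decoder/lstm_cell/bias not found in checkpoint'
```

If this is what is happening here, comparing the names in the checkpoint (`tf.train.list_variables`) against the names the evaluation graph constructs should show exactly which scope acquired the extra suffix.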
tensorflowtensorflow | tf.keras.preprocessing.image.smart_resize stable docs do not render properly | Bug | URL(s) with the issue: (link to the documentation entry). Description of issue (what needs changing): in the "stable" tab of the tf.keras.preprocessing.image.smart_resize docs, the output docs look as if they are HTML-escaped and/or raw input markdown, as opposed to actually being rendered as HTML. For reference, I am reporting this bug from Chrome 90 on macOS; I get the same result on Firefox and Safari, also on macOS. Most likely the generated docs page is incorrect. Usage example: n/a. Request visuals (if applicable): here is a screenshot of what I see on Chrome 90 on macOS under the "stable" tab: [Screen Shot 2021-05-10 at 5:18:41 PM]. In contrast, here is the "nightly" tab, which works as intended: [Screen Shot 2021-05-10 at 5:19:21 PM]. Submit a pull request? I am not very familiar with the TensorFlow docs API, but if it is a simple change I wouldn't mind taking a crack at it.
tensorflowtensorflow | Unclear error message in keras backend bias_add | Bug | L6004-L6007: is line 6007 supposed to be `len(bias_shape) != ndim(x) - 1`?
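For context, the check being questioned validates the bias rank against the input rank. A standalone sketch of what the (suspected) intended condition looks like; the function name and error text here are illustrative, not Keras's actual code:

```python
def check_bias_shape(x_ndim: int, bias_shape: tuple):
    """Sketch of the rank check bias_add performs: the bias must be rank 1,
    or match the input rank minus the batch dimension (ndim(x) - 1)."""
    if len(bias_shape) != 1 and len(bias_shape) != x_ndim - 1:
        raise ValueError(
            f"Unexpected bias dimensions {len(bias_shape)}, "
            f"expected to have 1 or {x_ndim - 1} dimensions")

check_bias_shape(4, (3,))        # rank-1 bias: accepted
check_bias_shape(4, (5, 5, 3))   # rank ndim(x)-1 bias: accepted
try:
    check_bias_shape(4, (5, 3))  # rank-2 bias for a 4-D input: rejected
except ValueError as e:
    print(e)  # Unexpected bias dimensions 2, expected to have 1 or 3 dimensions
```

The unclear part the report points at is whether the message printed on failure states `ndim(x) - 1` (as in this sketch) or the raw `ndim(x)`.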
tensorflowtensorflow | Variable scopes are created repeatedly | Bug | TensorFlow version: 1.15.0; Python version: 3.6.12. Variable scopes are created repeatedly when using tf.variable_scope, as follows:

```python
with tf.variable_scope('my_variable_scope', reuse=tf.AUTO_REUSE):
    v = tf.get_variable('v', [1])
    print(v.name)
    w = tf.get_variable('w', [1])
    print(w.name)
    b = w + 1
    c = w + 1
    print(b.name, c.name)

with tf.variable_scope('my_variable_scope', reuse=tf.AUTO_REUSE) as scope:
    scope.reuse_variables()
    v1 = tf.get_variable('v')
    a = v1 + 1
    print(v1.name)
    print(a.name)

with tf.variable_scope('my_variable_scope', reuse=tf.AUTO_REUSE) as scope:
    scope.reuse_variables()
    v1 = tf.get_variable('v')
    a = v1 + 1
    print(v1.name)
    print(a.name)
```

Here comes the output:

```
my_variable_scope/v:0
my_variable_scope/w:0
my_variable_scope/add:0 my_variable_scope/add_1:0
my_variable_scope/v:0
my_variable_scope_1/add:0
my_variable_scope/v:0
my_variable_scope_2/add:0
```

So why are my_variable_scope, my_variable_scope_1, and my_variable_scope_2 created for `a`? Thanks for your help.
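What is likely happening here: `tf.variable_scope` reuses *variables* across the three blocks, but each re-entry with the scope's string name opens a fresh *name scope* for ops, and the graph uniquifies name scopes by appending `_1`, `_2`, and so on. So the add ops land in `my_variable_scope_1` and `my_variable_scope_2` even though `v` keeps its original name. A framework-free sketch of the uniquification logic (not TensorFlow's actual implementation):

```python
class NameScopeRegistry:
    """Sketch of graph-level name uniquification: the first use of a name
    is kept verbatim; later uses get _1, _2, ... suffixes."""
    def __init__(self):
        self._counts = {}

    def unique_name(self, name: str) -> str:
        n = self._counts.get(name, 0)
        self._counts[name] = n + 1
        return name if n == 0 else f"{name}_{n}"

g = NameScopeRegistry()
print(g.unique_name("my_variable_scope"))  # my_variable_scope
print(g.unique_name("my_variable_scope"))  # my_variable_scope_1
print(g.unique_name("my_variable_scope"))  # my_variable_scope_2
```

In TF 1.x, re-entering with the captured scope object rather than the string (`with tf.variable_scope(scope, auxiliary_name_scope=False)`) is the documented way to avoid spawning a new uniquified name scope.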
tensorflowtensorflow | Order of resampled dataset is wrong | Bug | System information: have I written custom code: yes; OS platform and distribution: Colab; TensorFlow installed from: binary; TensorFlow version: 2.4.1.

Describe the current behavior: I am using tf.data.experimental.sample_from_datasets to balance a dataset. Now, I am creating two data loaders out of the resampled dataset and then zipping them up. When I iterate through the zipped dataset, the images come in different orders. Let's walk through this step by step.

Data:

```python
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
sampled_idx = np.random.choice(len(x_train), 4000)
sampled_train, sampled_labels = x_train[sampled_idx], y_train[sampled_idx].squeeze()
support_ds = tf.data.Dataset.from_tensor_slices((sampled_train, sampled_labels))
```

Utilities:

```python
def support_sampler(ds):
    ds_list = []
    for i in np.arange(0, 10):
        ds_label = ds.filter(lambda image, label: label == i).repeat()
        ds_list.append(ds_label)
    return ds_list

def get_support_ds(ds, bs=640):
    list_ds = support_sampler(ds)
    balanced_ds = tf.data.experimental.sample_from_datasets(list_ds, [0.1] * 10)
    loader = tuple()
    for _ in range(2):
        loader += (balanced_ds,)
    final_ds = tf.data.Dataset.zip(loader)
    return final_ds.batch(bs)
```

Issue verification. Get a sample batch first:

```python
sampled_support_ds = get_support_ds(support_ds)
support_images_one, support_images_two = next(iter(sampled_support_ds))
```

Plot the images from support_images_one:

```python
plt.figure(figsize=(7, 7))
for i, image in enumerate(support_images_one[0][:9]):
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(image.numpy().astype("int"))
    plt.title(int(support_images_one[1][i]))
    plt.axis("off")
```

[image] Doing the same thing with support_images_two[0][:9] (with the labels printed accordingly) gives: [image]

Describe the expected behavior: the image batches should be the same.

Contributing: do you want to contribute a PR? No. Standalone code to reproduce the issue: [Colab notebook].
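One likely explanation worth ruling out here: `sample_from_datasets` is a stochastic pipeline, and zipping the same pipeline object with itself makes each branch of the zip re-execute that pipeline with its own random draws, so the two branches see different orders. The sketch below is a plain-Python analogy of that behavior, not tf.data itself; the generator and names are illustrative:

```python
import random

def stochastic_stream(pool, n):
    """A pipeline that draws fresh randomness every time it is iterated,
    much like sample_from_datasets without a fixed seed."""
    for _ in range(n):
        yield random.choice(pool)

pool = list(range(100))

# Two independent iterations of the same stochastic pipeline diverge:
branch_a = list(stochastic_stream(pool, 20))
branch_b = list(stochastic_stream(pool, 20))
print(branch_a == branch_b)  # False: each branch saw its own random draws

# Materialize once (the analogue of .cache()), then fan out:
cached = list(stochastic_stream(pool, 20))
branch_a, branch_b = cached[:], cached[:]
print(branch_a == branch_b)  # True: both branches read the same materialized data
```

If this is the cause, adding `.cache()` to `balanced_ds` before building the zip (or setting a seed and snapshotting) should make the two batches match.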
tensorflowtensorflow | GlobalAveragePooling1D does not work with a mask when a mixed-precision policy is set | Bug | System information: OS: Colab (Ubuntu 18.04.5); TensorFlow installed from: pre-installed in Colab; TensorFlow version: 2.4.1; Python version: 3.7.10; GPU model and memory: Tesla K80, 12 GB.

Describe the current behavior: when the mixed-precision policy is set and a GlobalAveragePooling1D layer is used with a mask, the mask is wrongly cast with backend.floatx() (float32 even with mixed precision), and this raises a dtype-mismatch error in the forward pass.

Describe the expected behavior: we can use GlobalAveragePooling1D with a mask and mixed precision set.

Contributing: do you want to contribute a PR? Yes. Candidate solution: in GlobalAveragePooling1D.call (tensorflow/python/keras/layers/pooling.py#L795), use `mask = math_ops.cast(mask, inputs.dtype)`.

Standalone code to reproduce the issue:

```python
from tensorflow.keras import mixed_precision
mixed_precision.set_global_policy("mixed_float16")

inputs1 = keras.Input(shape=(36, 512), name="digits", dtype="float16")
inputs2 = keras.Input(shape=(36,), name="digits", dtype="bool")
average_layer = layers.GlobalAveragePooling1D()
x = average_layer(inputs1, inputs2)  # this yields the issue
```

Setting `tf.keras.backend.set_floatx('float16')` solves the issue, but it might be better to directly cast the mask to inputs.dtype in GlobalAveragePooling1D.call:

```python
def call(self, inputs, mask=None):
    steps_axis = 1 if self.data_format == 'channels_last' else 2
    if mask is not None:
        mask = math_ops.cast(mask, backend.floatx())
        mask = array_ops.expand_dims(
            mask, 2 if self.data_format == 'channels_last' else 1)
        inputs *= mask
        return backend.sum(inputs, axis=steps_axis) / math_ops.reduce_sum(
            mask, axis=steps_axis)
    else:
        return backend.mean(inputs, axis=steps_axis)
```

Other info / logs. Error trace:

```
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/pooling.py in call(self, inputs, mask)
    796       mask = array_ops.expand_dims(
    797           mask, 2 if self.data_format == 'channels_last' else 1)
--> 798       inputs *= mask
    799       return backend.sum(inputs, axis=steps_axis) / math_ops.reduce_sum(
    800           mask, axis=steps_axis)

TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type float16 of argument 'x'.
```
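The proposed one-line fix, casting the mask to the inputs' dtype instead of a hard-coded `backend.floatx()`, can be illustrated with a NumPy mock of the masked mean. This is an analogy of the computation, not the Keras layer itself, and the function name is illustrative:

```python
import numpy as np

def masked_global_average_pool(inputs, mask):
    """Masked mean over the steps axis, with the mask cast to the
    *inputs'* dtype (the proposed fix) rather than float32."""
    m = mask.astype(inputs.dtype)[:, :, None]   # cast, then broadcast over features
    return (inputs * m).sum(axis=1) / m.sum(axis=1)

x = np.random.rand(2, 36, 512).astype(np.float16)  # mixed-precision compute dtype
mask = np.zeros((2, 36), dtype=bool)
mask[:, :10] = True  # only the first 10 steps are valid

pooled = masked_global_average_pool(x, mask)
print(pooled.dtype)  # float16: mask and inputs now share a dtype, no Mul mismatch
```

With the mask cast to float32 instead (the current behavior), the equivalent TF multiply mixes float16 inputs with a float32 mask, which is exactly the `Mul` dtype mismatch in the trace above.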
tensorflowtensorflow | TFLite Hexagon delegate performance drops when upgrading from TF 2.2 to 2.4 | Bug | System information: have I written custom code: yes; OS platform and distribution: Windows 10, Android; mobile device: Snapdragon 855 dev platform; TensorFlow installed from: source; TensorFlow version: 2.2 and 2.4; Python version: 3.8; Bazel version: 3.7; GCC version: n/a; CUDA/cuDNN version: n/a; GPU model and memory: n/a.

Describe the current behavior: a Keras model is converted to TFLite with full-integer quantization and executed on-device using the Hexagon delegate. When converted using TensorFlow 2.2, 97 of 99 nodes are executed on the DSP and good performance is achieved: 12 ms per inference. When the same model is converted using TensorFlow 2.4, the output TFLite network has 112 nodes, of which only 81 are executed on the DSP, and runtime is 40 ms, nearly 4x worse. With the TensorFlow 2.2 conversion, a runtime of 12 ms is observed in an offline test; when run online in an Android app, inference runtime degrades to 18 ms.

Describe the expected behavior: runtime in an offline test is expected to remain nearly the same when upgrading from TensorFlow 2.2 to TensorFlow 2.4. The runtime difference between online and offline execution is expected to be much less than 6 ms (18 ms vs 12 ms), as most of the network is offloaded to the DSP.

Standalone code to reproduce the issue: see the attached zip file with (1) the Keras model, (2) the conversion script, (3) a representative dataset for conversion, (4) the output TFLite model for TF 2.2, (5) the output TFLite model for TF 2.4, and (6) logs from the TFLite benchmark utility for both TFLite model versions. Usage: `python convertandquantize.py`; switch from TF 2.2 to TF 2.4 in your environment between runs.
tensorflowtensorflow | fail to initialize tensorflow context subgraph be null | Bug | I be try to run this program however I keep on get the error list below package com example se import android content context import android content intent import android content sharedpreference import android content re assetfiledescriptor import android content re assetmanag import android database cursor import android media audiomanager import android net uri import android net rtp audiostream import android os bundle import android os fileutil import android util log import android view layoutinflater import android view view import android view viewgroup import android widget button import android widget toast import androidx annotation nonnull import androidx annotation nullable import androidx fragment app fragment import androidx fragment app fragmentmanager import androidx fragment app fragmentpageradapter import androidx fragment app fragmenttransaction import androidx viewpager widget viewpager import com example se ui main sectionspageradapter import com google android material tabs tablayout import org tensorflow lite interpreter import org tensorflow lite flex flexdelegate import java io bufferedinputstream import java io bufferedreader import java io bytearrayoutputstream import java io file import java io fileinputstream import java io ioexception import java io inputstream import java io inputstreamreader import java lang object import java lang reflect array import java net urisyntaxexception import java nio bytebuffer import java nio floatbuffer import java nio mappedbytebuffer import java nio channel filechannel import java util arraylist import java util array import java util hashmap import java util list import java util map public class classify extend fragment private button choose file button public static final int pickfile result code 1 private uri fileuri private string filepath this be the final file path view classify view to load from asset folder 
private static final string label filename file android asset label txt private static final string model filename file android asset soundclassifier tflite private static final string log tag log tagge be here for label and modelfile private list label new arraylist private list displayedlabel new arraylist for the audio file bytearrayoutputstream out new bytearrayoutputstream inputstream in byte audiobyte float audiofile new float 1 44032 for machine learn private mappedbytebuffer tflitemodel private interpreter tflite private final interpreter option ftliteoption new interpreter option float output private recognizecommand recognizecommand null private int modelinputlength private int modelnumclasse private floatbuffer inputbuffer nullable override public view oncreateview nonnull layoutinflater inflater nullable viewgroup container nullable bundle savedinstancestate classify view inflater inflate r layout classify container false both find the classify and stop classify button choose file button button classify view findviewbyid r i d classify button for label file string actuallabelfilename label filename split file android asset 1 1 log I log tag read label from actuallabelfilename bufferedreader br null try br new bufferedreader new inputstreamreader classify view getcontext getasset open actuallabelfilename string line while line br readline null label add line if line charat 0 displayedlabel add line substre 0 1 touppercase line substre 1 catch ioexception e throw new runtimeexception problem read the label file e create the equal number of label on the output file log I log tag label file message be displayedlabel output new float displayedlabel size todo implement recognize command if not work open the model file string actualmodelfilename model filename split file android asset 1 1 try tflitemodel loadmodelfile classify view getcontext getasset actualmodelfilename catch exception e throw new runtimeexception e log I log tag the modal file be 
actualmodelfilename log I log tag the actual content be tflitemodel todo model file open here try ftliteoption setnumthread 1 flexdelegate flex new flexdelegate ftliteoption adddelegate flex file openthis new file model filename tflite new interpreter tflitemodel ftliteoption tflite new interpreter openthis catch exception e throw new runtimeexception e log I log tag tf lite file load to load the metadata and verify it int inputshape tflite getinputtensor 0 shape modelinputlength inputshape 1 int outputshape tflite getoutputtensor 0 shape modelnumclasse outputshape 1 log I log tag modelnumclasse if modelnumclasse displayedlabel size log e log tag the file s metadata be not the same else log I log tag the file s metadata be same log I log tag displayedlabel size inputbuffer floatbuffer allocate modelinputlength choose file button setonclicklistener new view onclicklistener override public void onclick view v intent choosefile new intent intent action get content choosefile settype choosefile intent createchooser choosefile choose a file startactivityforresult choosefile pickfile result code at this point we have the path of the file file path work for pie chart we can have return classify view this get the file path override public void onactivityresult int requestcode int resultcode intent datum switch requestcode case pickfile result code if resultcode 1 fileuri datum getdata filepath fileuri getpath system out println the select file path be filepath open audio file fileuri open main audio file try floatbuffer outputbuffer floatbuffer allocate modelnumclasse inputbuffer rewind outputbuffer rewind tflite run inputbuffer outputbuffer log I log tag the output be array tostre outputbuffer array sharedpreference sharedpref classify view getcontext getsharedpreference getstre r string ml value context mode private to open in private mode can only be see by our application sharedpreference editor editor sharedpref edit open the file to edit float arr outputbuffer array 
string str for int I 0 I 0 out write buff 0 read out flush catch exception e throw new runtimeexception e todo change the audio file to a float pointer audiobyte out tobytearray for int I 0 I audiobyte length I float val float audiobyte I inputbuffer put I val audiofile 0 I val this method load the tf lite file private static mappedbytebuffer loadmodelfile assetmanager asset string modelfilename throw ioexception assetfiledescriptor filedescriptor asset openfd modelfilename fileinputstream inputstream new fileinputstream filedescriptor getfiledescriptor filechannel filechannel inputstream getchannel long startoffset filedescriptor getstartoffset long declaredlength filedescriptor getdeclaredlength return filechannel map filechannel mapmode read only startoffset declaredlength to open history page public void openhistorypage viewpager viewpager getactivity findviewbyid r i d view pager viewpager setcurrentitem 2 e androidruntime fatal exception main process com example se pid 5777 java lang runtimeexception java lang illegalargumentexception internal error fail to apply delegate fail to initialize tensorflow context subgraph be null regular tensorflow op be not support by this interpreter make sure you apply link the flex delegate before inference node number 2 flexsize fail to prepare at com example se classify oncreateview classify java 138 at androidx fragment app fragment performcreateview fragment java 2600 at androidx fragment app fragmentmanagerimpl movetostate fragmentmanagerimpl java 881 at androidx fragment app fragmentmanagerimpl movefragmenttoexpectedstate fragmentmanagerimpl java 1238 at androidx fragment app fragmentmanagerimpl movetostate fragmentmanagerimpl java 1303 at androidx fragment app backstackrecord executeops backstackrecord java 439 at androidx fragment app fragmentmanagerimpl executeop fragmentmanagerimpl java 2079 at androidx fragment app fragmentmanagerimpl executeopstogether fragmentmanagerimpl java 1869 at androidx fragment app 
fragmentmanagerimpl removeredundantoperationsandexecute fragmentmanagerimpl java 1824 at androidx fragment app fragmentmanagerimpl execsingleaction fragmentmanagerimpl java 1696 at androidx fragment app backstackrecord commitnowallowingstateloss backstackrecord java 299 at androidx fragment app fragmentpageradapter finishupdate fragmentpageradapter java 235 at androidx viewpager widget viewpager populate viewpager java 1244 at androidx viewpager widget viewpager populate viewpager java 1092 at androidx viewpager widget viewpager onmeasure viewpager java 1622 at android view view measure view java 25466 at android view viewgroup measurechildwithmargin viewgroup java 6957 at androidx coordinatorlayout widget coordinatorlayout onmeasurechild coordinatorlayout java 760 at com google android material appbar headerscrollingviewbehavior onmeasurechild headerscrollingviewbehavior java 99 at com google android material appbar appbarlayout scrollingviewbehavior onmeasurechild appbarlayout java 2003 at androidx coordinatorlayout widget coordinatorlayout onmeasure coordinatorlayout java 831 at android view view measure view java 25466 at android view viewgroup measurechildwithmargin viewgroup java 6957 at android widget framelayout onmeasure framelayout java 194 at androidx appcompat widget contentframelayout onmeasure contentframelayout java 146 at android view view measure view java 25466 at android view viewgroup measurechildwithmargin viewgroup java 6957 at android widget linearlayout measurechildbeforelayout linearlayout java 1552 at android widget linearlayout measurevertical linearlayout java 842 at android widget linearlayout onmeasure linearlayout java 721 at android view view measure view java 25466 at android view viewgroup measurechildwithmargin viewgroup java 6957 at android widget framelayout onmeasure framelayout java 194 at android view view measure view java 25466 at android view viewgroup measurechildwithmargin viewgroup java 6957 at android widget 
linearlayout measurechildbeforelayout linearlayout java 1552 at android widget linearlayout measurevertical linearlayout java 842 at android widget linearlayout onmeasure linearlayout java 721 at android view view measure view java 25466 at android view viewgroup measurechildwithmargin viewgroup java 6957 at android widget framelayout onmeasure framelayout java 194 at com android internal policy decorview onmeasure decorview java 747 at android view view measure view java 25466 at android view viewrootimpl performmeasure viewrootimpl java 3397 at android view viewrootimpl measurehierarchy viewrootimpl java 2228 at android view viewrootimpl performtraversal viewrootimpl java 2486 at android view viewrootimpl dotraversal viewrootimpl java 1952 at android view viewrootimpl traversalrunnable run viewrootimpl java 8171 e androidruntime at android view choreographer callbackrecord run choreographer java 972 at android view choreographer docallback choreographer java 796 at android view choreographer doframe choreographer java 731 at android view choreographer framedisplayeventreceiver run choreographer java 957 at android os handler handlecallback handler java 938 at android os handler dispatchmessage handler java 99 at android os looper loop looper java 223 at android app activitythread main activitythread java 7656 at java lang reflect method invoke native method at com android internal os runtimeinit methodandargscaller run runtimeinit java 592 at com android internal os zygoteinit main zygoteinit java 947 cause by java lang illegalargumentexception internal error fail to apply delegate fail to initialize tensorflow context subgraph be null regular tensorflow op be not support by this interpreter make sure you apply link the flex delegate before inference node number 2 flexsize fail to prepare at org tensorflow lite nativeinterpreterwrapper applydelegate native method at org tensorflow lite nativeinterpreterwrapper applydelegate nativeinterpreterwrapper java 367 at 
org tensorflow lite nativeinterpreterwrapper init nativeinterpreterwrapper java 85 at org tensorflow lite nativeinterpreterwrapper nativeinterpreterwrapper java 63 at org tensorflow lite interpreter interpreter java 277 at com example se classify oncreateview classify java 135 58 more |
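Separately from the Flex-delegate failure above, the report's code feeds raw audio bytes to the model one byte at a time (`float val = (float) audioByte[i]`). Sound classifiers exported for 16-bit PCM generally expect each pair of little-endian bytes decoded to a signed sample and normalized to [-1, 1]. A minimal stdlib-only sketch of that decoding, shown in Python for brevity (the function name and the 32768 normalizer are illustrative assumptions, not from the report):

```python
import struct

def pcm16_bytes_to_floats(raw: bytes):
    # Interpret the buffer as little-endian signed 16-bit PCM samples
    # and normalize each sample to the range [-1.0, 1.0).
    n = len(raw) // 2
    samples = struct.unpack("<%dh" % n, raw[: n * 2])
    return [s / 32768.0 for s in samples]

# Two samples: 0x0000 -> 0.0 and 0x8000 -> -1.0 (most negative value).
print(pcm16_bytes_to_floats(b"\x00\x00\x00\x80"))  # → [0.0, -1.0]
```

The same two-bytes-per-sample decoding can be done in Java with `ByteBuffer.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer()` before filling the input `FloatBuffer`.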
tensorflowtensorflow | tf.data.experimental.snapshot segfaults when used with repeat and prefetch | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS Platform and Distribution: Linux CentOS 7. Mobile device: N/A. TensorFlow installed from (source or binary): binary, 2.4.0. TensorFlow version: 2.4.0. Python version: 3.7.7. CUDA/cuDNN version: N/A. Describe the current behavior: using the following simple script, we can see a segmentation fault:

```python
import tensorflow as tf
import numpy as np

dataset = tf.data.Dataset.from_tensor_slices(np.random.rand(16, 1024))
dataset = dataset.apply(tf.data.experimental.snapshot("snapshot"))
dataset = dataset.shuffle(buffer_size=16)
dataset = dataset.batch(16)
dataset = dataset.repeat()
dataset = dataset.prefetch(1)

def run(dataset):
    iterator = iter(dataset)
    for _ in range(30):
        next(iterator)

for _ in range(10):
    run(dataset)
```

If we run it with tensorflow 2.4.0 or tensorflow 2.4.1, the output is: 2021-05-04 11:04:17.989897: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2). 2021-05-04 11:04:17.990504: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2596985000 Hz. Segmentation fault (core dumped). If any one of snapshot, repeat or prefetch is removed, this does not occur. Describe the expected behavior: the expected behavior is that there would not be a segmentation fault. Contributing: Do you want to contribute a PR? (yes/no): yes. Briefly describe your candidate solution (if contributing). Standalone code to reproduce the issue: the script above. Other info / logs: analyzing the core dump, this is the truncated stack trace: 0 0x00007fa2236c08af in tensorflow data experimental snapshotdatasetv2op dataset iterator reader reader from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 1 0x00007fa2236c0971 in tensorflow data experimental snapshotdatasetv2op dataset iterator reader reader from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 2 0x00007fa2236c04aa in tensorflow data experimental snapshotdatasetv2op dataset iterator iterator from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 3 0x00007fa2222eefee in tensorflow data mapdatasetop dataset iterator iterator from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 4 0x00007fa222335867 in tensorflow datum shuffledatasetopbase shuffledatasetbase
iterator iterator from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 5 0x00007fa2222c13a9 in tensorflow data batchdatasetop dataset iterator iterator from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 6 0x00007fa22232b529 in tensorflow data repeatdatasetop dataset foreveriterator foreveriterator from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 7 0x00007fa223e7e385 in tensorflow data prefetchdatasetop dataset iterator iterator from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 8 0x00007fa223771615 in tensorflow data experimental anonymous namespace maxintraopparallelismdatasetop dataset iterator iterator from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 9 0x00007fa2222fb665 in tensorflow datum modeldatasetop dataset iterator iterator from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 10 0x00007fa223e441ab in std sp count ptr inplace gnu cxx lock policy 2 m dispose from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 11 0x00007fa21d44b1f6 in std sp count base gnu cxx lock policy 2 m release from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 12 0x00007fa223e4dc62 in tensorflow data iteratorresource iteratorresource from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 13 0x00007fa223e4dd51 in tensorflow data iteratorresource iteratorresource from home 
ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 14 0x00007fa2199ac086 in tensorflow resourcemgr resourceandname resourceandname from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python libtensorflow framework so 2 15 0x00007fa2199ae73f in tensorflow resourcemgr dodelete std string const unsigned long long std string const std string const from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python libtensorflow framework so 2 16 0x00007fa2199aeb89 in tensorflow resourcemgr delete tensorflow resourcehandle const from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python libtensorflow framework so 2 17 0x00007fa223e4f684 in tensorflow data deleteiteratorop docompute tensorflow opkernelcontext from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 18 0x00007fa223e444b1 in tensorflow datum hybridasyncopkernel compute tensorflow opkernelcontext from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 19 0x00007fa22396409b in tensorflow kernelanddeviceop run tensorflow scopedstepcontainer tensorflow eagerkernelarg const std vector std allocator tensorflow cancellationmanager absl lts 2020 02 25 optional const from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 20 0x00007fa22391f359 in tensorflow eagerkernelexecute tensorflow eagercontext absl lts 2020 02 25 inlinedvector const absl lts 2020 02 25 optional const std unique ptr const tensorflow graphcollector tensorflow cancellationmanager absl lts 2020 02 25 span from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 
21 0x00007fa2239202c0 in tensorflow executenode run from home ashahab dev tensorflow build trunk tmp tf venv lib python3 7 site package tensorflow python pywrap tensorflow internal so 22 0x00007fa22395d14f in tensorflow eagerexecutor syncexecute tensorflow eagernode |
tensorflowtensorflow | issue | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow); OS Platform and Distribution (e.g., Linux Ubuntu 16.04); Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device; TensorFlow installed from (source or binary); TensorFlow version (use command below); Python version; Bazel version (if compiling from source); GCC/compiler version (if compiling from source); CUDA/cuDNN version; GPU model and memory. You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`; 2. TF 2.0: `python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`. Describe the current behavior. Describe the expected behavior. Contributing: Do you want to contribute a PR? (yes/no); briefly describe your candidate solution (if contributing). Standalone code to reproduce the issue: provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter or any notebook. Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
tensorflowtensorflow | Download error for cars196 dataset when using tfds.load | Bug | It seems the cars196 dataset is not available: `train_ds, val_ds, test_ds, metadata = tfds.load('cars196', split=['train[:80%]', 'train[80%:]', 'test'], with_info=True)` fails with DownloadError: Failed to get url (HTTP code: 404).
tensorflowtensorflow | Error when loading TensorFlow SavedModel with tag set 'serve': FileFactory: local file not found | Bug | Please make sure that this is a build/installation issue. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. System information: OS Platform and Distribution: Google Colab (Linux 4.19.112 x86_64, Ubuntu 18.04 Bionic). Mobile device: N/A. TensorFlow installed from (source or binary): preinstalled. TensorFlow version: 2.4.1. Python version: 3.7.10. Installed using virtualenv/pip/conda: pip. Bazel version (if compiling from source): N/A. GCC/compiler version (if compiling from source): N/A. CUDA/cuDNN version: 11.2. GPU model and memory: Tesla P100, 16 GB. Describe the problem: I have created a sequence-classification model using the TFHub bert_en_uncased_preprocess/2 preprocessor and the small_bert/bert_en_uncased_L-6_H-512_A-8 encoder (I have provided the main components of the code below). After saving the SavedModel and uploading it to Google Cloud Storage, I followed the documentation for instantiating a BigQuery ML model from the SavedModel, but received the following error: Executing query with job ID: d6223072-8b7c-4672-a450-5ec8a23e7351. Query executing: 8.99s. Error: 400: Error when loading TensorFlow SavedModel with tag set 'serve': FileFactory: local file not found. The application has not been linked against the file/localfile library, or InitGoogle() has not been called yet (node: text_file_init/InitializeTableFromTextFileV2). Job ID: d6223072-8b7c-4672-a450-5ec8a23e7351. Query job SQL follows: 1: CREATE OR REPLACE MODEL adapt_ml.bigquery_ml_sentiment 2: OPTIONS (model_type='tensorflow', model_path='gs://adapt_ml_bucket/finetuned_model'). I am not sure if this is a TensorFlow, TFHub or BigQuery error, as I was unable to locate any similar issues via web search or GitHub search. Provide the exact sequence of commands/steps that you executed before running into the problem: command
step follow linearly below any other info log include any log or source code that would be helpful to diagnose the problem if include traceback please include the full traceback large log and file should be attach model information def build classifier model train dataset text input tf keras layers input shape dtype tf string name text preprocesse layer hub keraslayer tfhub handle preprocess name preprocesse encoder input preprocesse layer text input encoder hub keraslayer tfhub handle encoder trainable true name bert encoder output encoder encoder input pool output pool output net tf keras layer dense hide layer dim kernel initializer tf keras initializers truncatednormal stddev 0 002 activation relu name pre classifier pool net tf keras layer dropout dropout trainable true net net tf keras layer dense 2 activation sigmoid name classifier net model tf keras model text input net loss tf keras loss sparsecategoricalcrossentropy from logit true epoch epoch step per epoch tf datum experimental cardinality train dataset numpy num train step step per epoch epoch num warmup step int 0 1 num train step optimizer optimization create optimizer init lr learn rate num train step num train step num warmup step num warmup step optimizer type adamw model compile optimizer optimizer loss loss metric accuracy model summary return model train model model build classifi model train dataset checkpoint modelcheckpoint filepath save model path verbose 1 save freq epoch monitor val accuracy save well only true mode max save weight only true training model history model fit train dataset batch size batch size epoch epoch callback checkpoint validation datum valid dataset reload model and save as save model model build classifi model train dataset model load weight save model path model save finetune model include optimizer false save trace false model tf keras model load model finetune model rm r finetune model tf save model save model finetune model signature none option none check 
correct signature load tf save model load finetune model print list load signature key serve default infer load signature serve default print infer structured input signature print infer structure output serve default text tensorspec shape none dtype tf string name text classifi tensorspec shape none 2 dtype tf float32 name classifier after copy to gs bucket bigquery create or replace model adapt ml bigquery ml sentiment option model type tensorflow model path gs adapt ml bucket finetune model |
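For background on the failing node: the error above appears to come from the BERT preprocessing layer, which builds a vocabulary lookup table from a text asset file at load time (the InitializeTableFromTextFileV2 op named in the message), and the serving environment must be able to open that file. A hypothetical pure-Python sketch of what that initializer does (the function name is illustrative, not a TensorFlow API; this mirrors the mode where the key is the whole line and the value is the line number):

```python
import io

def init_table_from_text_file(f, default_value=-1):
    # Each line of the vocab file maps token -> its line number,
    # mirroring InitializeTableFromTextFileV2 with key = whole line,
    # value = line index; missing tokens fall back to default_value.
    table = {}
    for idx, line in enumerate(f):
        table[line.rstrip("\n")] = idx
    return lambda token: table.get(token, default_value)

vocab = io.StringIO("[PAD]\n[UNK]\nhello\nworld\n")
lookup = init_table_from_text_file(vocab)
print(lookup("hello"), lookup("missing"))  # → 2 -1
```

If the serving environment cannot resolve the asset path, this table initialization is exactly the step that fails, which matches the "local file not found" message.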
tensorflowtensorflow | Caching after augmentation, and BATCH_SIZE not used, in CycleGAN tutorial | Bug | URL(s) with the issue: … Description of issue (what needs changing): it seems the data pipeline for the CycleGAN tutorial uses cache() after augmenting the data. Is this intentional? Doesn't that make the augmentation useless, since the randomly augmented results are computed once and then replayed identically? Another problem in the data pipeline: a batch size of 1 is used, even though there is a BATCH_SIZE variable at the start of the tutorial.
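The effect described above can be illustrated without TensorFlow. In this minimal pure-Python sketch (the pipeline functions are hypothetical stand-ins for tf.data's map and cache, not its API), caching downstream of a random transform replays the same "augmented" values every epoch, while augmenting downstream of the cache produces fresh values:

```python
import random

def augment(x, rng):
    # Stand-in for a random augmentation such as a random crop/flip.
    return x + rng.random()

def epochs(pipeline, n):
    # Materialize the pipeline n times, like iterating a dataset per epoch.
    return [list(pipeline()) for _ in range(n)]

data = [1.0, 2.0, 3.0]
rng = random.Random(0)

# cache() AFTER augmentation: results are computed once and replayed.
cached = [augment(x, rng) for x in data]   # this is what cache() freezes
def cache_after():
    return iter(cached)

# cache() BEFORE augmentation: the random transform runs every epoch.
def cache_before():
    return (augment(x, rng) for x in data)

frozen = epochs(cache_after, 2)
fresh = epochs(cache_before, 2)
print(frozen[0] == frozen[1])  # True: identical "augmentations" each epoch
print(fresh[0] == fresh[1])    # False: new random values each epoch
```

This is why the usual tf.data ordering is load → cache → augment (map) → batch → prefetch when the augmentation is meant to vary across epochs.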
tensorflowtensorflow | TFDS link broken for cars196 | Bug | Trying to download cars196 using TFLite Model Maker: `train_data, validation_data, test_data = ImageClassifierDataLoader.from_tfds('cars196')` fails with DownloadError: Failed to get url (HTTP code: 404). It was working previously; how can this be resolved?
tensorflowtensorflow | Accessing the numpy array from a Tensor object used with Dataset.map | Bug | I am trying to access the numpy array from a Tensor object that is processed with map(). I get the error AttributeError: 'Tensor' object has no attribute 'numpy' when I try to access the tensor as an np array (tensor.numpy()), while if I use dataset.take(n) I am able to access the numpy array. For more clarity on the situation I am facing, here is a short reproducible example of the error in a Google Colab. TensorFlow version: 2.4.1.
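A minimal sketch of why this happens (all classes here are hypothetical stand-ins, not TensorFlow APIs): map-style transformations trace the mapping function with a symbolic placeholder to build a graph, so no concrete value, and hence no .numpy(), is available inside the function, whereas iterating the dataset (as take(n) does) yields eager tensors with concrete values:

```python
class EagerTensor:
    # Stand-in for an eager tensor: the concrete value is available.
    def __init__(self, value):
        self.value = value
    def numpy(self):
        return self.value

class SymbolicTensor:
    """Placeholder used while tracing; it carries no concrete value,
    mirroring the AttributeError in the report (no numpy() method)."""

def trace_map(fn):
    # map() calls fn once with a placeholder to build a graph.
    return fn(SymbolicTensor())

try:
    trace_map(lambda t: t.numpy())
except AttributeError as e:
    print("inside map:", e)

print("via take():", EagerTensor(42).numpy())  # → 42
```

In real TensorFlow the usual workarounds are to keep the computation in graph ops inside map, or to wrap the Python function with tf.py_function so it receives eager tensors.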
tensorflowtensorflow | Example of tf.scatter_nd contains Session, which is obsolete | Bug | URL(s) with the issue: please provide a link to the documentation entry, for example … Description of issue (what needs changing): the code `with tf.Session() as sess: print(sess.run(scatter))` should be replaced with `print(scatter)`, as the Session is no longer required in TensorFlow 2.x. Correct links: is the link to the source code correct? N/A. Parameters defined: are all parameters defined and formatted correctly? Yes. Returns defined: are return values defined? Yes. Raises listed and defined: are the errors defined? No. Usage example: is there a usage example? Yes.
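For context on the op whose docs are being fixed, here is a minimal pure-Python sketch of what tf.scatter_nd computes, simplified to scalar updates into a 1-D output (the helper name is illustrative, not a TensorFlow API):

```python
def scatter_nd_1d(indices, updates, length):
    # Simplified tf.scatter_nd: place each update at its index in a
    # zero-initialized output; duplicate indices accumulate.
    out = [0] * length
    for (i,), u in zip(indices, updates):
        out[i] += u
    return out

# Mirrors the tf.scatter_nd docs example:
# indices [[4], [3], [1], [7]], updates [9, 10, 11, 12], shape [8]
print(scatter_nd_1d([[4], [3], [1], [7]], [9, 10, 11, 12], 8))
# → [0, 11, 0, 10, 9, 0, 0, 12]
```

In TF 2.x the real op is evaluated eagerly, so `print(scatter)` shows this result directly with no Session.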
tensorflowtensorflow | Error with SIGSEGV while building for Teachable Machine output | Bug | I am trying to implement TensorFlow Lite in my application; the code is attached below.

```java
package com.example.se;

import android.content.Context;
import android.content.Intent;
import android.content.res.AssetFileDescriptor;
import android.content.res.AssetManager;
import android.database.Cursor;
import android.net.Uri;
import android.os.Bundle;
import android.os.FileUtils;
import android.util.Log;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.Button;
import android.widget.Toast;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.fragment.app.Fragment;
import org.tensorflow.lite.Interpreter;
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URISyntaxException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Classify extends Fragment {
    private Button chooseFileButton;
    private Button stopClassify;
    public static final int PICKFILE_RESULT_CODE = 1;
    private Uri fileUri;
    private String filePath; // this is the final file path
    View classifyView;
    // to load from the assets folder
    private static final String LABEL_FILENAME = "file:///android_asset/labels.txt";
    private static final String MODEL_FILENAME = "file:///android_asset/soundclassifier.tflite";
    private static final String LOG_TAG = "..."; // log tagging is here
    // for labels and model file
    private List<String> labels = new ArrayList<>();
    private List<String> displayedLabels = new ArrayList<>();
    // for the audio file
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BufferedInputStream in;
    byte[] audioBytes;
    // for machine learning
    private final Interpreter.Options tfliteOptions = new Interpreter.Options();
```

private mappedbytebuffer
tflitemodel private interpreter tflite private map outputmap new hashmap private final interpreter option ftliteoption new interpreter option private recognizecommand recognizecommand null todo remove this if not need nullable override public view oncreateview nonnull layoutinflater inflater nullable viewgroup container nullable bundle savedinstancestate classify view inflater inflate r layout classify container false both find the classify and stop classify button choose file button button classify view findviewbyid r i d classify button stop classify button classify view findviewbyid r i d stop classify for label file string actuallabelfilename label filename split file android asset 1 1 log I log tag read label from actuallabelfilename bufferedreader br null try br new bufferedreader new inputstreamreader classify view getcontext getasset open actuallabelfilename string line while line br readline null label add line if line charat 0 displayedlabel add line substre 0 1 touppercase line substre 1 catch ioexception e throw new runtimeexception problem read the label file e log I log tag label file message be displayedlabel todo implement recognize command if not work open the model file string actualmodelfilename model filename split file android asset 1 1 try tflitemodel loadmodelfile classify view getcontext getasset actualmodelfilename catch exception e throw new runtimeexception e log I log tag the modal file be actualmodelfilename log I log tag the actual content be tflitemodel todo model file open here try ftliteoption setnumthread 1 tflite new interpreter tflitemodel ftliteoption catch exception e throw new runtimeexception e log I log tag tf lite file load choose file button setonclicklistener new view onclicklistener override public void onclick view v intent choosefile new intent intent action get content choosefile settype choosefile intent createchooser choosefile choose a file startactivityforresult choosefile pickfile result code at this point we 
Having the path of the file (`filePath`) working for the pie chart, we can have it return the classify view. This gets the file path:

```java
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    switch (requestCode) {
        case PICKFILE_RESULT_CODE:
            if (resultCode == -1) {
                fileUri = data.getData();
                filePath = fileUri.getPath();
                System.out.println("The selected file path is " + filePath);
                openAudioFile(fileUri);  // open the main audio file
                try {
                    outputMap.put(0, outputScores);
                    // TODO: remove; the other loadModelFile is deprecated
                    tflite.run(audioBytes, outputMap);
                    Log.i(LOG_TAG, "The output is " + outputMap);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
            break;
    }
}

public void openAudioFile(Uri filePath) {
    try {
        in = new BufferedInputStream(getContext().getContentResolver().openInputStream(filePath));
        int read;
        byte[] buff = new byte[1024];
        while ((read = in.read(buff)) > 0) {
            out.write(buff, 0, read);
        }
        out.flush();
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
    audioBytes = out.toByteArray();
    Log.i(LOG_TAG, "The audio file is " + audioBytes.toString());
}

// This method loads the TF Lite model file.
private static MappedByteBuffer loadModelFile(AssetManager assets, String modelFilename) throws IOException {
    AssetFileDescriptor fileDescriptor = assets.openFd(modelFilename);
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}
```

The TFLite model was created using Teachable Machine. I have tried to use a deprecated version of the Interpreter constructor, but with no success, so I defined Options and ran the application. However, I constantly get the same error, attached below:

```
04/27 00:44:57: Launching app on 'Pixel 3a API 30'.
Install successfully finished in 13 s 670 ms.
adb shell am start -n "com.example.se/com.example.se.LoginPage" -a android.intent.action.MAIN -c android.intent.category.LAUNCHER
Connected to process 12447 on device 'emulator-5554'.
Capturing and displaying logcat messages from application. This behavior can be disabled in the "Logcat output" section of the "Debugger" settings page.
D/NetworkSecurityConfig: No Network Security Config specified, using platform default
D/NetworkSecurityConfig: No Network Security Config specified, using platform default
W/ComponentDiscovery: Class com.google.firebase.dynamicloading.DynamicLoadingRegistrar is not found
I/FirebaseApp: Device unlocked: initializing all Firebase APIs for app [DEFAULT]
I/FirebaseInitProvider: FirebaseApp initialization successful
D/libEGL: loaded /vendor/lib/egl/libEGL_emulation.so
D/libEGL: loaded /vendor/lib/egl/libGLESv1_CM_emulation.so
D/libEGL: loaded /vendor/lib/egl/libGLESv2_emulation.so
I/FirebaseAuth: [FirebaseAuth:] Preparing to create service connection to fallback implementation
W/com.example.se: Accessing hidden method Landroid/view/View;->computeFitSystemWindows(Landroid/graphics/Rect;Landroid/graphics/Rect;)Z (greylist, reflection, allowed)
W/com.example.se: Accessing hidden method Landroid/view/ViewGroup;->makeOptionalFitsSystemWindows()V (greylist, reflection, allowed)
D/FirebaseAuth: Notifying id token listeners about a sign-out event.
D/FirebaseAuth: Notifying auth state listeners about a sign-out event.
D/HostConnection: HostConnection::get() New Host Connection established 0xf542d1f0, tid 12502
D/HostConnection: HostComposition ext ANDROID_EMU_CHECKSUM_HELPER_v1 ANDROID_EMU_native_sync_v2 ANDROID_EMU_native_sync_v3 ANDROID_EMU_native_sync_v4 ANDROID_EMU_dma_v1 ANDROID_EMU_direct_mem ANDROID_EMU_host_composition_v1 ANDROID_EMU_host_composition_v2 ANDROID_EMU_YUV_Cache ANDROID_EMU_async_unmap_buffer GL_OES_EGL_image_external_essl3 GL_OES_vertex_array_object GL_KHR_texture_compression_astc_ldr ANDROID_EMU_host_side_tracing ANDROID_EMU_async_frame_commands ANDROID_EMU_gles_max_version_3_0
W/OpenGLRenderer: Failed to choose config with EGL_SWAP_BEHAVIOR_PRESERVED, retrying without...
D/EGL_emulation: eglCreateContext: 0xf542c9a0: maj 3 min 0 rcv 3
D/EGL_emulation: eglMakeCurrent: 0xf542c9a0: ver 3 0 (tinfo 0xf57788d0) (first time)
I/Gralloc4: mapper 4.x is not supported
D/HostConnection: createUnique: call
D/HostConnection: HostConnection::get() New Host Connection established 0xf542dd50, tid 12502
D/goldfish-address-space: allocate: Ask for block of size 0x100
D/goldfish-address-space: allocate: ioctl allocate returned offset 0x3fbdbc000 size 0x2000
D/HostConnection: HostComposition ext ANDROID_EMU_CHECKSUM_HELPER_v1 ANDROID_EMU_native_sync_v2 ANDROID_EMU_native_sync_v3 ANDROID_EMU_native_sync_v4 ANDROID_EMU_dma_v1 ANDROID_EMU_direct_mem ANDROID_EMU_host_composition_v1 ANDROID_EMU_host_composition_v2 ANDROID_EMU_YUV_Cache ANDROID_EMU_async_unmap_buffer GL_OES_EGL_image_external_essl3 GL_OES_vertex_array_object GL_KHR_texture_compression_astc_ldr ANDROID_EMU_host_side_tracing ANDROID_EMU_async_frame_commands ANDROID_EMU_gles_max_version_3_0
I/com.example.se: Background young concurrent copying GC freed 30675(2242KB) AllocSpace objects, 8(224KB) LOS objects, 90% free, 2531KB/26MB, paused 2.806ms total 524.254ms
I/OpenGLRenderer: Davey! duration=1175ms; Flags=1, IntendedVsync=73082886249371, Vsync=73082886249371, OldestInputEvent=9223372036854775807, NewestInputEvent=0, HandleInputStart=73082894115840, AnimationStart=73082894159840, PerformTraversalsStart=73082894249840, DrawStart=73083121417840, SyncQueued=73083147195840, SyncStart=73083154696840, IssueDrawCommandsStart=73083154893840, SwapBuffers=73084012695840, FrameCompleted=73084068849840, DequeueBufferDuration=570000, QueueBufferDuration=3404000, GpuCompleted=72904454231491230
W/com.example.se: Verification of java.lang.String com.google.android.gms.common.ConnectionResult.getErrorMessage() took 198.255ms (15.13 bytecodes/s) (800B approximate peak alloc)
W/com.example.se: Verification of android.app.PendingIntent com.google.android.gms.common.ConnectionResult.getResolution() took 144.927ms (20.70 bytecodes/s) (808B approximate peak alloc)
I/Choreographer: Skipped 97 frames! The application may be doing too much work on its main thread.
W/com.example.se: Verification of void com.google.android.gms.common.api.internal.zabo.run() took 101.075ms (999.26 bytecodes/s) (2312B approximate peak alloc)
I/AssistStructure: Flattened final assist data: 1548 bytes, containing 1 windows, 8 views
W/System: Ignoring header X-Firebase-Locale because its value was null.
W/System: Ignoring header X-Firebase-Locale because its value was null.
D/FirebaseAuth: Notifying id token listeners about user ( vJOkJtuiJUWgm96BBPRROs6skfx2 ).
D/FirebaseAuth: Notifying auth state listeners about user ( vJOkJtuiJUWgm96BBPRROs6skfx2 ).
D/CompatibilityChangeReporter: Compat change id reported: 147798919; UID 10154; state: ENABLED
I/LOG_TAG: Reading labels from labels.txt
I/LOG_TAG: Label file contents: 0 Background Noise, 1 Cow, 2 Dog, 3 Hen, 4 Sheep
I/LOG_TAG: The model file is soundclassifier.tflite; the actual content is java.nio.DirectByteBuffer[pos=0 lim=5780044 cap=5780044]
I/tflite: Initialized TensorFlow Lite runtime.
W/native: cpu_feature_guard.cc:36 The TensorFlow library was compiled to use SSE instructions, but these aren't available on your machine.
A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0xfffffff4 in tid 12447 (com.example.se), pid 12447 (com.example.se)
```

What would be a good way to resolve this error? The Gradle file is:

```gradle
plugins {
    id 'com.android.application'
    id 'com.google.gms.google-services'
}

android {
    compileSdkVersion 30
    buildToolsVersion "30.0.3"

    defaultConfig {
        applicationId "com.example.se"
        minSdkVersion 25
        targetSdkVersion 30
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

dependencies {
    implementation 'androidx.appcompat:appcompat:1.2.0'
    implementation 'com.google.android.material:material:1.3.0'
    implementation 'androidx.constraintlayout:constraintlayout:2.0.4'
    implementation 'com.google.firebase:firebase-auth:20.0.3'
    implementation 'com.google.firebase:firebase-database:19.7.0'
    implementation 'org.tensorflow:tensorflow-lite:2.3.0'
    implementation 'org.tensorflow:tensorflow-lite-gpu:2.3.0'
    implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:2.3.0'
    implementation 'org.tensorflow:tensorflow-lite-support:0.1.0-rc1'
    implementation 'org.tensorflow:tensorflow-lite-metadata:0.1.0-rc2'
    implementation 'androidx.lifecycle:lifecycle-livedata-ktx:2.2.0'
    implementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.2.0'
    testImplementation 'junit:junit:4.+'
    androidTestImplementation 'androidx.test.ext:junit:1.1.2'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'
}
``` |
tensorflowtensorflow | Clarify the label_weights parameter in tf.keras.metrics.AUC | Bug | Summary of documentation issue: in the tf.keras.metrics.AUC documentation it is not clear whether the `label_weights` parameter expects weights that sum to 1 or not. For example, with multi-label data you can take the sum of the occurrences of each label divided by the total number of samples to compute unscaled label weights; these weights reflect the prevalence of each label. You could also divide these weights by the sum of the weights array to put the weights in the range [0, 1]. Description of what to clarify: it would be helpful if the docs could be updated to clarify what these weights represent: the prevalence of each class, the scaled prevalence of each class on the range [0, 1], or something else. If the docs seem straightforward to others and I am just missing something, please let me know. I am interested in computing a micro-averaged PR AUC for my multi-label model, and it is not clear to me which of the following I should use for multi-hot encoded data:

```python
# Data sample
print(y_train)
# array([[1, 0, 1],
#        [0, 1, 1],
#        [0, 0, 0],
#        [0, 1, 1]])

# Unscaled label weights
n_samples = y_train.shape[0]
class_totals = y_train.sum(axis=0)
label_weights = class_totals / n_samples

# Scaled label weights
n_samples = y_train.shape[0]
class_totals = y_train.sum(axis=0)
weights = class_totals / n_samples
label_weights = weights / weights.sum()
``` |
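The two candidate weightings described above can be evaluated with plain NumPy; this is a minimal sketch using the small multi-hot array from the issue, just to make the difference between the two concrete:

```python
import numpy as np

# Multi-hot encoded labels: 4 samples, 3 classes (the example array from the issue)
y_train = np.array([[1, 0, 1],
                    [0, 1, 1],
                    [0, 0, 0],
                    [0, 1, 1]])

n_samples = y_train.shape[0]          # 4
class_totals = y_train.sum(axis=0)    # occurrences of each label: [1, 2, 3]

# Unscaled: the prevalence of each label
unscaled = class_totals / n_samples   # [0.25, 0.5, 0.75]

# Scaled: the same prevalences, normalized so the weights sum to 1
scaled = unscaled / unscaled.sum()

print(unscaled, scaled.sum())
```

Only the scaled variant sums to 1; the unscaled variant sums to the average number of positive labels per sample, which is exactly the ambiguity the issue asks the docs to resolve.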
tensorflowtensorflow | tf.keras.preprocessing.image_dataset_from_directory error | Bug | TensorFlow version: 2.4.1. Python version: 3.7.10. CUDA version: 11.2. CUDA driver version: 465.19.01. The documentation of tf.keras.preprocessing.image_dataset_from_directory says of the `directory` parameter: "Directory where the data is located. If `labels` is 'inferred', it should contain subdirectories, each containing images for a class. Otherwise, the directory structure is ignored." But the directory structure is not ignored. I read the code of `tf.keras.preprocessing.dataset_utils.index_directory` and found that it always cares about subdirectories, even if `labels` is already fed as a list. Sample code:

```python
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    image_path,
    labels=labels,
    label_mode="binary",
    validation_split=0.2,
    subset="training",
)
```

Here `len(labels) == 202599`, `len(os.listdir(image_path)) == 202599`, and every file in `image_path` is in JPG format. It raises an error: "Expected the lengths of `labels` to match the number of files in the target directory. len(labels) is 202599 while we found 0 files in image_path." |
tensorflowtensorflow | Loss calculation for multi-dimensional samples in a distributed strategy | Bug | Hi, I was reading the distributed strategy guide and found one issue in the official docs. They say: "If labels are multi-dimensional, then average the per_example_loss across the number of elements in each sample. For example, if the shape of predictions is (batch_size, H, W, n_classes) and labels is (batch_size, H, W), you will need to update per_example_loss like: `per_example_loss /= tf.cast(tf.reduce_prod(tf.shape(labels)[1:]), tf.float32)`". Since the last dimension is automatically reduced in `Reduction.NONE` mode, the manual reduction formula for the distributed strategy should be `per_example_loss /= tf.cast(tf.reduce_prod(tf.shape(labels)[1:-1]), tf.float32)`. Notice the addition of `-1` in the shape slice to remove the accumulation over the last axis. I don't know which place I should put this post to, but please notify the relevant people who are in charge of the docs. Thanks. |
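The averaging being discussed can be illustrated without TensorFlow; this is a NumPy sketch (shapes are illustrative) showing that, once a per-pixel loss of shape (batch, H, W) exists, dividing the per-sample sum by the product of the remaining (spatial) dimensions is just the per-sample mean:

```python
import numpy as np

batch, H, W = 2, 3, 4
rng = np.random.default_rng(0)

# Per-pixel loss, as a loss with reduction=NONE would return for
# (batch, H, W, n_classes) predictions vs (batch, H, W) labels
per_pixel_loss = rng.random((batch, H, W))

# Average over the number of elements in each sample:
# divide the per-sample sum by prod(shape[1:]) == H * W
n_elements = np.prod(per_pixel_loss.shape[1:])
per_example_loss = per_pixel_loss.sum(axis=(1, 2)) / n_elements

# Equivalent to a plain per-sample mean
reference = per_pixel_loss.mean(axis=(1, 2))
print(np.allclose(per_example_loss, reference))
```

The question raised in the issue is only about which shape (`labels` vs the already-reduced loss) the product should be taken over; the arithmetic of the averaging itself is as above.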
tensorflowtensorflow | dow | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests, and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: please provide a link to the documentation entry, for example. Description of issue (what needs changing): clear description, for example: why should someone use this method? How is it useful? Correct links: is the link to the source code correct? Parameters defined: are all parameters defined and formatted correctly? Returns defined: are return values defined? Raises listed and defined: are the errors defined? (For example, raises.) Usage example: is there a usage example? (See the API guide on how to write testable usage examples.) Request visuals, if applicable: are there currently visuals? If not, will they clarify the content? Submit a pull request: are you planning to also submit a pull request to fix the issue? See the docs contributor guide, docs API guide, and the docs style guide. |
tensorflowtensorflow | Add a utility to penalize majority-class pixels in the segmentation tutorial | Bug | URL(s) with the issue: please provide a link to the documentation entry, for example. Description of issue (what needs changing): it's not really an issue, rather a suggestion. Clear description: semantic segmentation datasets can be highly imbalanced, meaning that pixels of a particular class can be present in images far more often than those of other classes. Since segmentation problems can be treated as per-pixel classification problems, we can deal with the imbalance by weighting the loss function to account for it. It's a simple and elegant way to deal with this problem. Other solutions include always ensuring that a batch of samples during training contains some pre-fixed proportion of the positive classes. However, TensorFlow does not yet support the `class_weight` argument in `model.fit()` for targets that are 3D. For segmentation problems we are essentially predicting a map of shape (batch_size, height, width, nb_channels). One way to get around this is to use `sample_weight` instead, but then again it's not very clear how to do that properly, particularly with tf.data pipelines. Multiple folks have tried several hacks to get around this problem, but it keeps coming back (see here). Therefore, I think the tutorial under question is a perfect opportunity to demonstrate the use case. cc @MarkDaoust |
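One common way to express the per-pixel weighting this issue asks for is to index a class-weight vector by the integer label map, yielding a `sample_weight` map with the same spatial shape as the labels. A minimal NumPy sketch, with hypothetical class weights (the weight values are illustrative, not from the issue):

```python
import numpy as np

# Hypothetical class weights: background (class 0) down-weighted
class_weights = np.array([0.1, 1.0, 2.0])

# Integer label map for one image (values are class ids)
labels = np.array([[0, 0, 1],
                   [2, 0, 1]])

# Per-pixel sample weights, same shape as the label map
sample_weight = np.take(class_weights, labels)
print(sample_weight)
```

In a tf.data pipeline the same lookup (e.g. via a gather on a constant weight tensor) can be mapped over each (image, label) pair to produce the (image, label, sample_weight) triples that `model.fit` accepts.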
tensorflowtensorflow | Error printed when calling tf.data.experimental.load | Bug | System information: OS platform: WSL2, Linux Ubuntu 20.04. TensorFlow version: 2.5.0-rc1. Python version: 3.9. CUDA/cuDNN version: 11.2. GPU model and memory: RTX 3090. Describe the current behavior: when calling tf.data.experimental.load, the error below is printed; however, the dataset appears to load without issue: "UNIMPLEMENTED: Cannot merge options for dataset of type LoadDataset, because the dataset does not implement InputDatasets." Describe the expected behavior: no error message should be printed. Standalone code to reproduce the issue:

```python
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices([1, 2, 3])
for i in ds:
    print(i)

tf.data.experimental.save(ds, "/mnt/d/load_issue/dataset", compression="GZIP")
loaded_ds = tf.data.experimental.load("/mnt/d/load_issue/dataset", compression="GZIP")
for e in loaded_ds:
    print(e)
``` |
tensorflowtensorflow | Multi-output Keras model changes metric names when reloaded | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Linux Ubuntu 20.10. Mobile device: no. TensorFlow installed from: pip. TensorFlow version: 2.4.1. Python version: 3.7.9. Bazel version: N/A. GCC/compiler version: N/A. CUDA/cuDNN version: N/A. GPU model and memory: N/A. Describe the current behavior: when reloading a multi-output model, the names of the outputs get repeated in the metric names reported by `tf.keras.Model.evaluate`:

```python
>>> import tensorflow as tf
>>> tf.__version__
'2.4.1'
>>> input_a = tf.keras.Input(shape=(1,))
>>> ifoo = tf.keras.layers.Dense(1, name="foo")(input_a)
>>> ibar = tf.keras.layers.Dense(1, name="bar")(input_a)
>>> imodel = tf.keras.Model(input_a, {"foo": ifoo, "bar": ibar})
>>> imodel.compile(loss="mse", optimizer="adam", metrics={"foo": "mae", "bar": "mae"})
>>> imodel.evaluate(tf.constant([1]), tf.constant([1]), return_dict=True)
1/1 [==============================] - 0s 236ms/step - loss: 4.0179 - bar_loss: 3.9153 - foo_loss: 0.1026 - bar_mae: 1.9787 - foo_mae: 0.3203
{'loss': 4.0179123878479, 'bar_loss': 3.915294885635376, 'foo_loss': 0.10261743515729904,
 'bar_mae': 1.978710412979126, 'foo_mae': 0.320339560508728}
>>> imodel.save("/tmp/model")
>>> model_reloaded = tf.keras.models.load_model("/tmp/model")
>>> model_reloaded.evaluate(tf.constant([1]), tf.constant([1]), return_dict=True)
1/1 [==============================] - 0s 87ms/step - loss: 4.0179 - bar_loss: 3.9153 - foo_loss: 0.1026 - bar_bar_mae: 1.9787 - foo_foo_mae: 0.3203
{'loss': 4.0179123878479, 'bar_loss': 3.915294885635376, 'foo_loss': 0.10261743515729904,
 'bar_bar_mae': 1.978710412979126, 'foo_foo_mae': 0.320339560508728}
```

Describe the expected behavior: I would expect the metric names to remain unchanged when reloading:

```python
>>> imodel.save("/tmp/model")
>>> model_reloaded = tf.keras.models.load_model("/tmp/model")
>>> model_reloaded.evaluate(tf.constant([1]), tf.constant([1]), return_dict=True)
1/1 [==============================] - 0s 87ms/step - loss: 4.0179 - bar_loss: 3.9153 - foo_loss: 0.1026 - bar_mae: 1.9787 - foo_mae: 0.3203
{'loss': 4.0179123878479, 'bar_loss': 3.915294885635376, 'foo_loss': 0.10261743515729904,
 'bar_mae': 1.978710412979126, 'foo_mae': 0.320339560508728}
```

Standalone code to reproduce the issue: here is a Colab with the above code. Other info/logs: N/A. P.S. I have never contributed to TensorFlow, but I would be glad to help resolve this issue. |
tensorflowtensorflow | Change "Simple Audio Recognition" to the right link | Bug | TensorFlow Micro system information: host OS platform and distribution (e.g., Linux Ubuntu 16.04); TensorFlow installed from (source or binary); TensorFlow version (commit SHA if from source); target platform (e.g., Arm Mbed OS, Arduino Nano 33, etc.). Describe the problem: the "Simple Audio Recognition" link in tensorflow/lite/micro/examples/micro_speech/train/README.md is no longer correct. It jumps to the old location of the page, which has since moved, so the link needs to be updated. Please provide the exact sequence of commands/steps when you ran into the problem. |
tensorflowtensorflow | Change "Simple Audio Recognition" to the right link | Bug | 1. Change the link of "Simple Audio Recognition" to the new location. Resolves #48627. |
tensorflowtensorflow | Old Makefile target in person_detection example README | Bug | TensorFlow Micro system information: host OS platform and distribution: Debian Linux. TensorFlow installed from: source. TensorFlow version: 7e55a20. Target platform: host. Describe the problem: the README documentation for the "Run the tests on a development machine" section of the person_detection example specifies a wrong make target. This PR shows the incorrect target and what it ought to be corrected to. #48594. Please provide the exact sequence of commands/steps when you ran into the problem: I was attempting to follow the instructions at tensorflow/lite/micro/examples/person_detection/README.md ("Run the tests on a development machine") for running the person detection test on a development machine. The second step is:

```shell
make -f tensorflow/lite/micro/tools/make/Makefile test_person_detection_test
```

This fails with the message:

```
make: *** No rule to make target 'test_person_detection_test'.  Stop.
``` |
tensorflowtensorflow | Enhance GreedyMemoryPlanner::PrintMemoryPlan format | Bug | TensorFlow Micro system information: host OS platform and distribution: all. TensorFlow installed from: source. TensorFlow version: all. Target platform: all. Describe the problem: while using the output of GreedyMemoryPlanner::PrintMemoryPlan to better understand model memory use, I found myself tweaking the output to present information in a more readily understandable way. I have a PR that collects these changes: #48595. As noted there, the changes are: per-buffer info line improvements: (a) reduced text quantity to make it easier to scan for information; (b) moved the size near the front of the line to make it simpler for the eye to find this information; (c) included the ordinal letter used in the per-time display below, to make it simpler to cross-reference. Per-time display improvements: (a) included the tick number, which is useful in large models for cross-referencing to the per-buffer information and also helps determine the actual operation being executed at that time; (b) included the total memory use of the buffers, which helps to more clearly identify memory bottlenecks. Please provide the exact sequence of commands/steps when you ran into the problem. |
tensorflowtensorflow | ResNet50 pretrained model for fine-tuning does not converge | Bug | Hi, I'm using the pretrained ResNet50 model for training on my own data. The model does not converge: even though the training accuracy looks good (as shown by the log), the validation loss and accuracy do not improve during the training phase. I also tested the trained model on the training and validation sets; the accuracy is very poor (see below). I tried TensorFlow 1.15.0 and 2.4.0; the problem is the same in both versions. When I just change to a VGG model it works fine, with no convergence problem. Could you help with this issue? My code is:

```python
base_model = tf.keras.applications.ResNet50(include_top=False)
base_model.trainable = False

model = tf.keras.models.Sequential([
    base_model,
    tf.keras.layers.Conv2D(filters=num_cat, kernel_size=1),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(units=num_cat),
])
model.summary()

optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001)
loss_func = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
eval_func = tf.keras.metrics.CategoricalAccuracy()
model.compile(optimizer=optimizer, loss=loss_func, metrics=[eval_func])

history = model.fit(train_ds, epochs=10, validation_data=val_ds)
model.save_weights("checkpoint/final")
```

And the training log is:

```
Train on 156 steps, validate on 39 steps
Epoch 1/10
156/156 - 183s 1s/step - loss: 1.8214 - categorical_accuracy: 0.5048 - val_loss: 3.5411 - val_categorical_accuracy: 0.0386
Epoch 2/10
156/156 - 33s 211ms/step - loss: 0.4409 - categorical_accuracy: 0.8931 - val_loss: 3.6663 - val_categorical_accuracy: 0.0386
Epoch 3/10
156/156 - 34s 219ms/step - loss: 0.2582 - categorical_accuracy: 0.9365 - val_loss: 3.8821 - val_categorical_accuracy: 0.0386
Epoch 4/10
156/156 - 32s 203ms/step - loss: 0.1666 - categorical_accuracy: 0.9550 - val_loss: 3.9013 - val_categorical_accuracy: 0.0386
Epoch 5/10
156/156 - 31s 201ms/step - loss: 0.1212 - categorical_accuracy: 0.9630 - val_loss: 4.2440 - val_categorical_accuracy: 0.0386
Epoch 6/10
156/156 - 31s 201ms/step - loss: 0.0826 - categorical_accuracy: 0.9759 - val_loss: 4.2431 - val_categorical_accuracy: 0.0386
Epoch 7/10
156/156 - 31s 198ms/step - loss: 0.0648 - categorical_accuracy: 0.9807 - val_loss: 4.3009 - val_categorical_accuracy: 0.0514
Epoch 8/10
156/156 - 32s 205ms/step - loss: 0.0573 - categorical_accuracy: 0.9823 - val_loss: 4.3420 - val_categorical_accuracy: 0.0386
Epoch 9/10
156/156 - 31s 196ms/step - loss: 0.0548 - categorical_accuracy: 0.9839 - val_loss: 4.4843 - val_categorical_accuracy: 0.0386
Epoch 10/10
156/156 - 31s 200ms/step - loss: 0.0478 - categorical_accuracy: 0.9887 - val_loss: 4.7390 - val_categorical_accuracy: 0.0386

Run inference on training data and validation data:
156/156 - 35s 227ms/step - loss: 4.7357 - categorical_accuracy: 0.0386
39/39 - 11s 279ms/step - loss: 4.7326 - categorical_accuracy: 0.0386
train loss: 4.735676199961931, train acc: 0.03858520835638046
val loss: 4.732603843395527, val acc: 0.03858520835638046
``` |
tensorflowtensorflow | TensorFlow Reddit dataset to DataFrame throws error | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): no. OS platform and distribution: Linux Ubuntu 18.04. TensorFlow installed from: source. Python version: 3.6.9. TensorFlow version: `tf.__git_version__` is v2.4.0-49-g85c8b2a817f; `tf.__version__` is 2.4.1. Describe the current behavior: I want to load the TFDS Reddit dataset and convert it to a DataFrame. The standalone code below throws the following error:

```
  File "/home/rylan/Documents/fietelab/rcrp/exp_03_language_model/main.py", line 36, in load_dataset
    reddit_dataframe = tfds.as_dataframe(reddit_dataset)
  File "/home/rylan/Documents/fietelab/rcrp/rcrp/lib/python3.6/site-packages/tensorflow_datasets/core/as_dataframe.py", line 218, in as_dataframe
    df = StyledDataFrame(rows)
  File "/home/rylan/Documents/fietelab/rcrp/rcrp/lib/python3.6/site-packages/tensorflow_datasets/core/as_dataframe.py", line 144, in __init__
    super().__init__(*args, **kwargs)
TypeError: object.__init__() takes no parameters
```

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import tensorflow_datasets as tfds

reddit_dataset = tfds.load("reddit", split="train", shuffle_files=False,
                           download=True, data_dir=data_dir)
assert isinstance(reddit_dataset, tf.data.Dataset)
reddit_dataframe = tfds.as_dataframe(reddit_dataset)
``` |
tensorflowtensorflow | Conv2D didn't raise an exception for invalid input arguments | Bug | System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: Ubuntu. Mobile device: N/A. TensorFlow installed from: binary. TensorFlow version: 2.4.1. Python version: 3.7. Standalone code to reproduce the issue: when Conv2D with kernel_size=(2, 2) and padding="valid" receives an invalid input, it does not raise any exception; instead, it outputs a tensor with zero-sized dimensions. This can lead to future crashes in other APIs that take the 0-dim tensor as input.

```python
import tensorflow as tf
import numpy as np

filters, kernel_size, strides, padding = 3, (2, 2), (2, 2), "valid"
data = np.random.rand(1, 1, 1, 1)
layer = tf.keras.layers.Conv2D(filters=filters, kernel_size=kernel_size,
                               strides=strides, padding=padding)
print(layer(data).shape)
# Output: (1, 0, 0, 3)
```

Describe the current behavior: no exception is raised for invalid input arguments. Describe the expected behavior: expected a ValueError to be raised. |
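The zero-sized output follows from the standard "valid" output-size formula, out = floor((in - kernel) / stride) + 1: with in = 1 and kernel = 2 the numerator is negative and the result is 0 rather than an error. A quick sketch of that arithmetic (the helper name is illustrative, not a TensorFlow API):

```python
def valid_out_size(in_size, kernel, stride):
    """Output length of a 'valid' convolution along one spatial dimension."""
    return (in_size - kernel) // stride + 1

# The case from the issue: 1x1 input, 2x2 kernel, stride 2
h = valid_out_size(1, 2, 2)   # (1 - 2) // 2 + 1 == 0
w = valid_out_size(1, 2, 2)
print(h, w)  # prints "0 0", matching the (1, 0, 0, 3) output shape
```

An explicit check that each spatial output size is positive (raising ValueError otherwise) would give the behavior the reporter expects.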
tensorflowtensorflow | Update custom_gradient.py | Bug | Add error raising when executing eagerly. Fixes |
tensorflowtensorflow | While creating VGGSegnet, getting the following error | Bug | Below is the VGGSegnet code. The function call `model = VGGSegnet(n_classes=1, input_height=224, input_width=224)` gives the same error on the following versions of TF, Keras, and Python. System information: TF versions: 2.4.1 / 2.2.0. Keras versions: 2.4.3 / 2.4.3. Python 3.7 and Python 3.8. The following code defines VGGSegnet:

```python
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Flatten, Dense,
                                     ZeroPadding2D, BatchNormalization, UpSampling2D,
                                     Reshape, Permute, Activation)
from tensorflow.keras.models import Model


def VGGSegnet(n_classes, input_height, input_width, vgg_level=3, pretrained_weights=None):
    img_input = Input(shape=(input_height, input_width, 3))

    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1', data_format='channels_last')(img_input)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2', data_format='channels_last')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool1', data_format='channels_last')(x)
    f1 = x

    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1', data_format='channels_last')(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2', data_format='channels_last')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool', data_format='channels_last')(x)
    f2 = x

    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1', data_format='channels_last')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2', data_format='channels_last')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3', data_format='channels_last')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool1', data_format='channels_last')(x)
    f3 = x

    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1', data_format='channels_last')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2', data_format='channels_last')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3', data_format='channels_last')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool1', data_format='channels_last')(x)
    f4 = x

    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1', data_format='channels_last')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2', data_format='channels_last')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3', data_format='channels_last')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool1', data_format='channels_last')(x)
    f5 = x

    x = Flatten(name='flatten')(x)
    x = Dense(4096, activation='relu', name='fc1')(x)
    x = Dense(4096, activation='relu', name='fc2')(x)
    x = Dense(1000, activation='softmax', name='predictions')(x)

    vgg = Model(img_input, x)
    vgg.load_weights('image-segmentation-keras-py3-master/models/vgg16_weights_th_dim_ordering_th_kernels.hdf5')

    levels = [f1, f2, f3, f4, f5]
    o = levels[vgg_level]

    o = ZeroPadding2D((1, 1), data_format='channels_last')(o)
    o = Conv2D(512, (3, 3), padding='valid', data_format='channels_last')(o)
    o = BatchNormalization()(o)
    o = UpSampling2D((2, 2), data_format='channels_last')(o)

    o = ZeroPadding2D((1, 1), data_format='channels_last')(o)
    o = Conv2D(256, (3, 3), padding='valid', data_format='channels_last')(o)
    o = BatchNormalization()(o)
    o = UpSampling2D((2, 2), data_format='channels_last')(o)

    o = ZeroPadding2D((1, 1), data_format='channels_last')(o)
    o = Conv2D(128, (3, 3), padding='valid', data_format='channels_last')(o)
    o = BatchNormalization()(o)
    o = UpSampling2D((2, 2), data_format='channels_last')(o)

    o = ZeroPadding2D((1, 1), data_format='channels_last')(o)
    o = Conv2D(64, (3, 3), padding='valid', data_format='channels_last')(o)
    o = BatchNormalization()(o)
    o = UpSampling2D((2, 2), data_format='channels_last')(o)

    o = ZeroPadding2D((1, 1), data_format='channels_last')(o)
    o = Conv2D(32, (3, 3), padding='valid', data_format='channels_last')(o)
    o = BatchNormalization()(o)
    o = UpSampling2D((2, 2), data_format='channels_last')(o)

    o = ZeroPadding2D((1, 1), data_format='channels_last')(o)
    o = Conv2D(n_classes, (3, 3), padding='same', data_format='channels_last')(o)
    o = BatchNormalization()(o)

    o_shape = Model(img_input, o).output_shape
    outputHeight = o_shape[2]
    outputWidth = o_shape[3]
    outputHeight = o_shape[2]
    outputWidth = o_shape[1]

    o = Reshape((outputHeight * outputWidth, -1))(o)
    o = Permute((1, 2))(o)
    o = Activation('sigmoid')(o)

    model = Model(img_input, o)
    model.outputWidth = outputWidth
    model.outputHeight = outputHeight

    if pretrained_weights:
        model.load_weights(pretrained_weights)
    return model
```

The error is as below:

```
Traceback (most recent call last):
  File "/scratch/pkasar/dbatu/training/vggsegnet_224_224_work_on_20_03_21_on_augmented_images_of_size_256_by_256.py", line 248, in <module>
    model = VGGSegnet(n_classes=1, input_height=224, input_width=224)
  File "/scratch/pkasar/dbatu/training/vggsegnet_224_224_work_on_20_03_21_on_augmented_images_of_size_256_by_256.py", line 56, in VGGSegnet
    vgg.load_weights('image-segmentation-keras-py3-master/models/vgg16_weights_th_dim_ordering_th_kernels.h5')
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 2234, in load_weights
    hdf5_format.load_weights_from_hdf5_group(f, self.layers)
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 710, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/keras/backend.py", line 3706, in batch_set_value
    x.assign(np.asarray(value, dtype=dtype(x)))
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/distribute/values.py", line 781, in assign
    return values_util.on_write_assign(
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/distribute/values_util.py", line 140, in on_write_assign
    return var._update(  # pylint: disable=protected-access
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/distribute/values.py", line 940, in _update
    return self._update_cross_replica(update_fn, value, **kwargs)
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/distribute/values.py", line 893, in _update_cross_replica
    return self.distribute_strategy.extended.update(
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py", line 2494, in update
    return self._update(var, fn, args, kwargs, group)
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/distribute/mirrored_strategy.py", line 710, in _update
    fn(v, *distribute_utils.select_replica_mirrored(i, args),
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py", line 572, in wrapper
    return func(*args, **kwargs)
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/distribute/values_util.py", line 139, in on_write_assign
    assign_fn = lambda var, *a, **kw: var.assign(*a, **kw)
  File "/home/pkasar/dbatu/conda_envs/dl_new/lib/python3.8/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 888, in assign
    raise ValueError(
ValueError: Cannot assign to variable block1_conv1/kernel:0 due to variable shape (3, 3, 3, 64) and value shape (3, 3, 64, 3) are incompatible
```

I am doing a segmentation task using IoU as the performance metric. The link for vgg16_weights_th_dim_ordering_th_kernels.h5 is this. Help me out. Thank you in advance. |
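Errors of this kind typically mean the stored kernels use a different axis ordering than TensorFlow's (kernel_h, kernel_w, in_channels, out_channels). A NumPy sketch of the axis swap implied by the shapes in the error message (the axis order is inferred from that message alone, not verified against the actual Theano-ordered weight file, so treat it as illustrative):

```python
import numpy as np

# Shapes from the error message: the layer variable expects (3, 3, 3, 64)
# but the stored value is (3, 3, 64, 3)
stored = np.zeros((3, 3, 64, 3))

# Swap the last two axes to obtain (kh, kw, in_channels, out_channels)
converted = np.transpose(stored, (0, 1, 3, 2))
print(converted.shape)
```

Whether a transpose alone is sufficient depends on how the weight file was produced; using a weight file saved in TensorFlow ("tf") dim ordering avoids the conversion entirely.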
tensorflowtensorflow | WARNING:tensorflow:Gradients do not exist for variables ['dense_2/kernel:0', 'dense_2/bias:0'] when minimizing the loss | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template. System information: have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution: macOS Big Sur 11.2.3 and Ubuntu 18 (Google Colab). TensorFlow installed from: binary (pip). TensorFlow version: Colab: v2.4.1-0-g85c8b2a817f 2.4.1; macOS: v2.4.0-49-g85c8b2a817f 2.4.1. Python version: 3.8.8 (macOS) and 3.7 (Colab). Describe the current behavior: I'm using the following actor-critic model:

```python
def create_a2c(input_shape, actor_units):
    initializer = Orthogonal(tf.math.sqrt(2.0))
    x0 = Input(input_shape)
    x = Dense(64, activation="tanh", kernel_initializer=initializer)(x0)
    x = Dense(64, activation="tanh", kernel_initializer=initializer)(x)
    actor_out = Dense(actor_units, kernel_initializer=Orthogonal(0.01))(x)
    critic_out = Dense(1, kernel_initializer=Orthogonal(1.0))(x)
    return Model(x0, [actor_out, critic_out])
```

I keep getting the following warning, printed repeatedly:

```
WARNING:tensorflow:Gradients do not exist for variables ['dense_2/kernel:0', 'dense_2/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['dense_2/kernel:0', 'dense_2/bias:0'] when minimizing the loss.
```

Describe the expected behavior: I expect the gradients to exist. Standalone code to reproduce the issue: here's a Colab notebook. Other info/logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback; large logs and files should be attached. |
tensorflowtensorflow | Wrong warning message about variables being used in a Lambda layer's call | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): TF 2.4.1 and tf-nightly. Python version: 3.7. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Describe the current behavior: I want to create a trainable variable, so I use a subclassed layer and make this variable a weight of the layer, instead of creating a variable directly and using it in a Lambda layer, as recommended. However, there is a warning message saying: "WARNING:tensorflow: The following Variables were used in a Lambda layer's call (tf.linalg.matmul_7), but are not present in its tracked objects. It is possible that this is intended behavior, but it is more likely an omission. This is a strong indication that this layer should be formulated as a subclassed Layer rather than a Lambda layer." The model builds, compiles and trains successfully, and the variable is recognized as a trainable weight. Describe the expected behavior: no warning expected. Standalone code to reproduce the issue (tested on Colab in both TensorFlow 2.4.1 and tf-nightly, receiving the same warning in both):

import tensorflow as tf
import numpy as np

class VariableLayer(tf.keras.layers.Layer):
    def __init__(self, shape, initializer, trainable, v_name):
        super(VariableLayer, self).__init__()
        self.shape = shape
        self.initializer = initializer
        self.trainable = trainable
        self.v_name = v_name

    def build(self, input_shape):
        self.v = self.add_weight(shape=self.shape,
                                 initializer=self.initializer,
                                 trainable=self.trainable,
                                 name=self.v_name)

    def call(self, inputs):
        return self.v

def build_subclass_functional():
    x = tf.keras.Input(shape=(8,))
    layer = VariableLayer(shape=(8, 3), initializer='glorot_normal',
                          trainable=True, v_name='v')
    z = layer(x)
    o = tf.matmul(x, z)
    model = tf.keras.Model(inputs=x, outputs=o)
    return model

if __name__ == '__main__':
    data = np.random.rand(2, 8)
    model = build_subclass_functional()
    print(model(data))
    x = np.random.rand(5, 8)
    y = np.random.rand(5, 3)
    model.compile(optimizer='adam', loss='mse')
    model.fit(x, y)
    print(model.trainable_weights)

Output: WARNING:tensorflow: The following Variables were used in a Lambda layer's call (tf.linalg.matmul_7), but are not present in its tracked objects. It is possible that this is intended behavior, but it is more likely an omission. This is a strong indication that this layer should be formulated as a subclassed Layer rather than a Lambda layer. |
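The false positive above can be modeled with a toy sketch (this is not Keras's actual implementation; all names here are illustrative): the warning fires when a variable consumed while tracing an op-as-layer call is not among the objects that layer itself tracks, even if another layer legitimately owns it.

```python
# Toy model of the tracking check behind the warning (illustrative only,
# not the real Keras logic).
class FakeVariable:
    def __init__(self, name):
        self.name = name

class LambdaTrackingCheck:
    """Compares variables used during a call against tracked objects."""
    def __init__(self):
        self.tracked = []       # objects the lambda-like layer tracks
        self.used_in_call = []  # variables touched while calling the fn

    def untracked_variables(self):
        tracked_ids = {id(v) for v in self.tracked}
        return [v for v in self.used_in_call if id(v) not in tracked_ids]

# 'v' is a weight owned by VariableLayer; the tf.matmul op-layer merely
# consumes it, so the matmul's tracking check sees it as untracked.
v = FakeVariable("v")
check = LambdaTrackingCheck()
check.used_in_call.append(v)
print([u.name for u in check.untracked_variables()])  # ['v']: warning fires
```

This illustrates why the warning is arguably wrong here: ownership by a different, properly subclassed layer is not considered by the per-layer check.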
tensorflowtensorflow | TFLite: potential out-of-bounds array access in log_softmax optimized kernel | Bug | Hello, the optimized log_softmax operator seems to have a bug that may lead to an out-of-bounds memory access. The following access (optimized_ops.h, L4143) is equivalent to params.table[input_data[j] + max_q8 - max_val], which can be negative with int8 input data. Example: input_data = {-109, 9, 0, 12, 78}, max_q8 = 127, max_val = 78. The access for j = 0 is params.table[-109 + 127 - 78], which results in a params.table[-60] access. The tests currently don't highlight the bug, by luck, as we end up accessing other fields in LogSoftmaxOpData, but replacing LogSoftmaxOpData (L87) to remove all the unused parameters, or even just putting SoftmaxParams as the first member:

struct LogSoftmaxOpData {
  SoftmaxParams params;
  float f_table[256];
};

makes the tests fail when run in "bazel test -c opt" mode (though it may vary, as the behavior is undefined). Valgrind output on the test binary:

[ RUN ] LogSoftmaxOpt/LogSoftmaxOpt.LogSoftmaxInt8/0
==8559== Invalid read of size 4
==8559==    at 0x697D124: void tflite::optimized_ops::LogSoftmax(tflite::SoftmaxParams const&, float, tflite::RuntimeShape const&, signed char const*, tflite::RuntimeShape const&, signed char*) (optimized_ops.h:4143)
==8559==    by 0x6976EF9: TfLiteStatus tflite::ops::builtin::activations::LogSoftmaxEval<(tflite::ops::builtin::activations::KernelType)1>(TfLiteContext*, TfLiteNode*) (activations.cc:1250)
==8559==    by 0x13626C77: tflite::Subgraph::OpInvoke(TfLiteRegistration const&, TfLiteNode*) (subgraph.h:430)
==8559==    by 0x13623ED3: tflite::Subgraph::Invoke() (subgraph.cc:1062)

Thibaut |
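The arithmetic in the report can be checked with a short sketch (the names mirror the report, not the actual TFLite source): for the given int8 inputs, the computed table index for the first element is negative, which is out of bounds for a 256-entry table.

```python
# Sketch of the index computation described in the report:
# params.table[input_data[j] + max_q8 - max_val]
def table_index(x, max_q8, max_val):
    return x + max_q8 - max_val

input_data = [-109, 9, 0, 12, 78]
max_q8 = 127                  # int8 maximum
max_val = max(input_data)     # 78, the row maximum

indices = [table_index(x, max_q8, max_val) for x in input_data]
print(indices)  # [-60, 58, 49, 61, 127]: index -60 underflows table[256]
```

Note that the largest element always maps to index max_q8 (127, in bounds), while elements far below the row maximum can map below zero.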
tensorflowtensorflow | Invalid (404) link in the tf.data guide | Bug | URL(s) with the issue: (the tf.data guide). Description of issue (what needs changing): in the sentence "See Loading TFRecords for an end-to-end example", the link points to a page that returns a 404. It needs to point to the correct URL. |
tensorflowtensorflow | Model converts to TFLite but invocation fails | Bug | System information: OS platform: Windows / Linux. TensorFlow: 2.4.1. Standalone code to reproduce the issue (or, if you like, the same code is available as a Google Colab reproducing the problem):

from tensorflow.keras.layers import concatenate, Input, LSTM, Bidirectional, Embedding, Dense, TimeDistributed, SpatialDropout1D
from tensorflow.keras.models import Model
import tensorflow as tf
import numpy as np

print(tf.__version__)

# Create TensorFlow model
word_in = Input(shape=(300,), name='input_wor')
emb_wor = Embedding(input_dim=1834, output_dim=16, input_length=300,
                    mask_zero=True, name='emb_wor')(word_in)
char_in = Input(shape=(300, 20), name='input_char')
emb_char = TimeDistributed(Embedding(input_dim=132, output_dim=32,
                                     input_length=20, mask_zero=True,
                                     name='emb_char'))(char_in)
char_enc = TimeDistributed(LSTM(units=32, return_sequences=False,
                                recurrent_dropout=0.15,
                                name='char_enc'))(emb_char)
input_pos = Input(shape=(300, 4), name='input_pos')
input_par = Input(shape=(300, 3), name='input_par')
x = concatenate([emb_wor, char_enc, input_pos, input_par])
x = SpatialDropout1D(0.1)(x)
main_lstm = Bidirectional(LSTM(units=64, return_sequences=True, dropout=0,
                               recurrent_dropout=0.1, name='main_lstm'))(x)
inputs = [word_in, char_in, input_pos, input_par]
outputs = TimeDistributed(Dense(4, activation='softmax'), name='out')(main_lstm)
model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
print(model.summary())

# Convert model to TensorFlow Lite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops
    tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops
]
converter.experimental_new_converter = True
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

# Install tflite-runtime: pip3 install --extra-index-url ... tflite_runtime
use_tflite_runtime = False  # if True, first restart the runtime before running this code
if use_tflite_runtime:
    import tflite_runtime.interpreter as tflite
    interpreter = tflite.Interpreter(model_path='model.tflite')
else:
    interpreter = tf.lite.Interpreter(model_path='model.tflite')

interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print('input details:', input_details)
print('output details:', output_details)

# Set random values
for i in range(len(input_details)):
    x = np.random.random(input_details[i]['shape'])
    interpreter.set_tensor(input_details[i]['index'],
                           x.astype(input_details[i]['dtype']))
    print(i, input_details[i]['name'], input_details[i]['shape'],
          input_details[i]['dtype'], x.shape)

# Invoke
interpreter.invoke()

The text output from the TFLite conversion:

2021-04-12 13:24:02.507539: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2021-04-12 13:24:07.691525: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-04-12 13:24:07.691860: I tensorflow/core/grappler/clusters/single_machine.cc:357] Starting new session
2021-04-12 13:24:07.730306: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1144] Optimization results for grappler item: graph_to_optimize
  function_optimizer: Graph size after: 447 nodes (0), 564 edges (0), time = 6.973ms, then 6.713ms
Optimization results for the while_cond/while_body grappler items of model/bidirectional/forward_main_lstm (19624/19625), model/bidirectional/backward_main_lstm (19891/19892) and model/time_distributed_1/char_enc (19339/19340): function_optimizer did nothing (time = 0ms to 0.001ms each).
2021-04-12 13:24:08.058077: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:345] Ignored output_format.
2021-04-12 13:24:08.058217: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:348] Ignored drop_control_dependency.
2021-04-12 13:24:08.086587: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:210] Disabled MLIR crash reproducer; set env var MLIR_CRASH_REPRODUCER_DIRECTORY to enable.
2021-04-12 13:24:08.172281: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1782] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Flex op(s): FlexAll. Details: tf.All {device = "", keep_dims = false}

When I run the invocation using tflite_runtime, I get this error:

RuntimeError: Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference. Node number 16 (FlexAll) failed to prepare.

When I run the invocation using tensorflow, I get the following error:

RuntimeError: external/org_tensorflow/tensorflow/lite/kernels/concatenation.cc:76 t->dims->data[d] != t0->dims->data[d] (300 != 1). Node number 37 (CONCATENATION) failed to prepare. Node number 49 (WHILE) failed to invoke.

Either way, the invocation fails, and it seems to have something to do with the concatenate layer. I would highly appreciate an answer, or eventually a solution. As you can see, the model does convert, but the invocation doesn't run. I tested it on both Windows and Linux: same problem and same errors. |
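The concatenation error above (t->dims->data[d] != t0->dims->data[d], 300 vs 1) comes from a per-dimension shape check during kernel preparation. A simplified sketch of that check (illustrative, not the actual TFLite C++ code): every input to CONCATENATION must match the first input's shape in all dimensions except the concat axis.

```python
# Simplified sketch of the shape check in TFLite's concatenation kernel:
# for every dimension d other than the concat axis, each input tensor's
# dim must equal the first input's dim.
def check_concat_shapes(shapes, axis):
    t0 = shapes[0]
    for t in shapes[1:]:
        for d in range(len(t0)):
            if d != axis and t[d] != t0[d]:
                raise ValueError(
                    f"t->dims->data[{d}] ({t[d]}) != t0->dims->data[{d}] ({t0[d]})")
    return True

check_concat_shapes([(1, 300, 16), (1, 300, 32)], axis=2)  # OK: concat on channels
try:
    # A time dimension of 1 vs 300 (as in the reported 300 != 1) fails:
    check_concat_shapes([(1, 300, 16), (1, 1, 32)], axis=2)
except ValueError as e:
    print(e)
```

This suggests one of the tensors reaching the concatenate (likely an output of the TimeDistributed/while loop after conversion) lost its time dimension of 300.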
tensorflowtensorflow | Documentation for tf.tensor_scatter_nd_add mentions non-existent method | Bug | URL(s) with the issue: (the tensor_scatter_nd_add API page). Description of issue (what needs changing): the documentation for tensor_scatter_nd_add mentions tf.scatter_nd_add, but that method no longer exists. |
tensorflowtensorflow | Conv2DTranspose crashes with filters=0 | Bug | System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Windows. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.4.1. Python version: 3.7. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. Standalone code to reproduce the issue (sample code):

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
import numpy as np

inputs = np.random.rand(2, 8, 8, 8)
x = tf.keras.Input((None, None, 8))
y = tf.keras.layers.Conv2DTranspose(filters=0, kernel_size=3, padding='same',
                                    dilation_rate=(1, 1))(x)
model = tf.keras.Model(x, y)
z = model(inputs).numpy()
print(z.mean())

Describe the current behavior: the process dies after calling model(inputs). Describe the expected behavior: expected a ValueError to be raised if filters=0 is not supported. It seems that Conv2D supports this, for example:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
import numpy as np

inputs = np.random.rand(2, 8, 8, 8)
x = tf.keras.Input((None, None, 8))
y = tf.keras.layers.Conv2D(0, kernel_size=3)(x)
model = tf.keras.Model(x, y)
z = model(inputs).numpy()
print(z.shape)

Output: (2, 6, 6, 0) |
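For context, the expected result with filters=0 would be an empty (zero-channel) output tensor, as Conv2D demonstrates. A minimal sketch of the output-shape arithmetic (the helper name is hypothetical, not a TF API):

```python
# Hypothetical output-shape helper for a stride-1, 'same'-padding
# transposed convolution: spatial dims are preserved, channels = filters.
def conv2d_transpose_output_shape(batch, h, w, filters, stride=1):
    return (batch, h * stride, w * stride, filters)

print(conv2d_transpose_output_shape(2, 8, 8, 0))
# (2, 8, 8, 0): a perfectly well-defined empty tensor, so a crash (rather
# than either this result or a ValueError) is surprising.
```

The shape is well defined for filters=0, which supports the report's expectation that Conv2DTranspose should either produce an empty tensor like Conv2D does, or reject filters=0 with a clear ValueError.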
tensorflowtensorflow | keras.layers.Dense should not accept float units | Bug | Please make sure that this is a bug. As per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. (tag: bug_template). System information: Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes. OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04.3. Mobile device (e.g., iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device: n/a. TensorFlow installed from (source or binary): binary. TensorFlow version (use command below): 2.4.1. Python version: 3.7. Bazel version (if compiling from source): n/a. GCC/compiler version (if compiling from source): n/a. CUDA/cuDNN version: n/a. GPU model and memory: n/a. You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: 1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)" 2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)". Describe the current behavior: tf.keras.layers.Dense accepts a float units and successfully initializes. Describe the expected behavior: expected a ValueError, because units should be a positive integer. Standalone code to reproduce the issue (a reproducible test case that is the bare minimum necessary to generate the problem; if possible, please share a link to a Colab/Jupyter or any notebook):

import tensorflow as tf
layer = tf.keras.layers.Dense(units=3.3)
layer(tf.ones((1, 3)))

Other info / logs: include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. |
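A sketch of the validation the report expects Dense to perform (this is a hypothetical check, not the actual Keras implementation): reject anything that is not a positive integer before constructing the layer.

```python
# Hypothetical argument validation matching the report's expectation:
# units must be a positive integer; floats like 3.3 should raise.
def validate_units(units):
    if not isinstance(units, int) or isinstance(units, bool) or units <= 0:
        raise ValueError(f"units must be a positive integer, got {units!r}")
    return units

print(validate_units(3))  # 3: accepted
try:
    validate_units(3.3)   # the report's repro value
except ValueError as e:
    print(e)
```

The explicit bool exclusion matters because isinstance(True, int) is True in Python, so a bare int check would silently accept units=True.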
tensorflowtensorflow | Broken link at tf.keras.callbacks.TensorBoard in docs page | Bug | Thank you for submitting a TensorFlow documentation issue. Per our GitHub policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. The TensorFlow docs are open source! To get involved, read the documentation contributor guide. URL(s) with the issue: (the tf.keras.callbacks.TensorBoard docs page). Description of issue (what needs changing): the link needs to be changed or removed. Submit a pull request? Yes, here it is: #48457. Are you planning to also submit a pull request to fix the issue? See the docs contributor guide, the docs API guide, and the docs style guide. |
tensorflowtensorflow | Cannot convert a TF model to TF Lite format | Bug | TF version: 2.4.1. Here is my model construction:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_hub as hub

class TransformerBlock(layers.Layer):
    def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1):
        super(TransformerBlock, self).__init__()
        self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
        self.ffn = keras.Sequential(
            [layers.Dense(ff_dim, activation='relu'), layers.Dense(embed_dim)])
        self.layernorm1 = layers.LayerNormalization(epsilon=1e-6)
        self.layernorm2 = layers.LayerNormalization(epsilon=1e-6)
        self.dropout1 = layers.Dropout(rate)
        self.dropout2 = layers.Dropout(rate)

    def call(self, inputs, training):
        attn_output = self.att(inputs, inputs)
        attn_output = self.dropout1(attn_output, training=training)
        out1 = self.layernorm1(inputs + attn_output)
        ffn_output = self.ffn(out1)
        ffn_output = self.dropout2(ffn_output, training=training)
        return self.layernorm2(out1 + ffn_output)

class TokenAndPositionEmbedding(layers.Layer):
    def __init__(self, maxlen, vocab_size, embed_dim):
        super(TokenAndPositionEmbedding, self).__init__()
        self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
        self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)

    def call(self, x):
        maxlen = tf.shape(x)[-1]
        positions = tf.range(start=0, limit=maxlen, delta=1)
        positions = self.pos_emb(positions)
        x = self.token_emb(x)
        return x + positions

preprocessor_file = "albert_en_preprocess/3"  # full URL elided in the report
preprocessor_layer = hub.KerasLayer(preprocessor_file)

def get_model_transormer(num_classes):
    embed_dim = 32  # embedding size for each token
    num_heads = 2   # number of attention heads
    ff_dim = 32     # hidden layer size in feed-forward network inside transformer
    preprocessor = hub.load(preprocessor_file)
    vocab_size = preprocessor.tokenize.get_special_tokens_dict()['vocab_size'].numpy()
    text_input = tf.keras.layers.Input(shape=(), dtype=tf.string)
    encoder_inputs = preprocessor_layer(text_input)['input_word_ids']
    embedding_layer = TokenAndPositionEmbedding(encoder_inputs.shape[-1],
                                                vocab_size, embed_dim)
    x = embedding_layer(encoder_inputs)
    transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim)
    x = transformer_block(x)
    x = layers.GlobalAveragePooling1D()(x)
    x = layers.Dropout(0.1)(x)
    x = layers.Dense(20, activation='relu')(x)
    x = layers.Dropout(0.1)(x)
    outputs = layers.Dense(num_classes, activation='softmax')(x)
    outputs = layers.Dense(1, activation='sigmoid')(x)
    model = keras.Model(inputs=text_input, outputs=outputs)
    model.compile('adam', 'categorical_crossentropy', metrics=['acc'])
    model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
    return model

model = get_model_transormer(4)
model.save('model_charl')

After saving the model, I want to convert it to a TF Lite model:

converter = tf.lite.TFLiteConverter.from_saved_model('model_charl')
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()

This is the error message:

2021-04-10 04:20:30.488392: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0
2021-04-10 04:20:30.488415: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N

Each error is reported at a loc(...) of nested callsites: "map/TensorArrayV2_2" at __inference_bert_pack_inputs_layer_call_and_return_conditional_losses_4831 (bert_pack_inputs/PartitionedCall), called from __inference_model_layer_call_and_return_conditional_losses_4887 (StatefulPartitionedCall), __inference_model_layer_call_fn_4897 (StatefulPartitionedCall), __inference_restored_function_body_12076 (model/keras_layer/StatefulPartitionedCall), __inference_wrapped_model_12243 (StatefulPartitionedCall), __inference_signature_wrapper_13370 (StatefulPartitionedCall_1). The errors are:

error: 'tf.TensorListReserve' op requires element_dtype to be 1-bit/8-bit/16-bit/32-bit/64-bit integer or 16-bit/32-bit/64-bit float type during TF Lite transformation pass
error: failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal
note: loc("StatefulPartitionedCall_1"): called from

Exception traceback (the same loc(...) callsite chain repeats in each message):

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py", line 214, in toco_convert_protos
    return model_str
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/wrap_toco.py", line 39, in wrapped_toco_convert
    enable_mlir_converter)
Exception: <the two errors above>

During handling of the above exception, another exception occurred:

ConverterError                            Traceback (most recent call last)
<ipython-input> in <module>
      2 converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
      3 converter.optimizations = [tf.lite.Optimize.DEFAULT]
----> 4 tflite_quant_model = converter.convert()

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py in convert(self)
    737       converter_kwargs.update(quant_mode.converter_flags())
    738
--> 739     result = _convert_saved_model(**converter_kwargs)
    740
    741     if calibrate_and_quantize:

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in convert_saved_model(...)
    635       input_data=None,  # unused
    636       debug_info_str=None,  # unused
--> 637       enable_mlir_converter=True)
    638   return data
    639

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(...)
    214     return model_str
    215   except Exception as e:
--> 216     raise ConverterError(str(e))
    217
    218   if distutils.spawn.find_executable(_toco_from_proto_bin) is None:

ConverterError: error: 'tf.TensorListReserve' op requires element_dtype to be 1-bit/8-bit/16-bit/32-bit/64-bit integer or 16-bit/32-bit/64-bit float type during TF Lite transformation pass; error: failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal (at loc "map/TensorArrayV2_2", see the callsite chain above). |
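The element_dtype constraint in the error can be summarized with a small sketch (the dtype names and whitelist here are illustrative shorthand for the widths the error message lists, not the converter's real API): TensorList ops inside the hub preprocessor likely carry string elements, which are not on the allowed list.

```python
# Illustrative whitelist of TensorList element types accepted by the
# TF Lite transformation pass, per the error message.
ALLOWED_ELEMENT_DTYPES = {
    "i1", "i8", "i16", "i32", "i64",  # 1/8/16/32/64-bit integers
    "f16", "f32", "f64",              # 16/32/64-bit floats
}

def is_legal_element_dtype(dtype):
    return dtype in ALLOWED_ELEMENT_DTYPES

print(is_legal_element_dtype("f32"))     # True: plain float tensors convert
print(is_legal_element_dtype("string"))  # False: string TensorLists, as
# produced by text preprocessing (bert_pack_inputs), trip the error
```

This is consistent with the loc(...) chain pointing into bert_pack_inputs: the failure originates in the string-handling preprocessor graph, not in the transformer layers themselves.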