Dataset columns:
- repository: string (156 distinct values)
- issue title: string (length 1 to 1.01k)
- labels: string (8 distinct values)
- body: string (length 1 to 270k)
vuejs/core
Mixin props are not merged into the component
Bug
Version: 3.0.0-beta.14. Reproduction link. Steps to reproduce: execute the reproduction. What is expected: the mixin prop should be rendered. What is actually happening: nothing is rendered.
vuejs/core
A full page reload is needed when changing attributes/props of an imported component
Bug
Describe the bug: This may or may not be related to vuejs/vue-next#1156. In short, if you import and use any Vue component, HMR will not work when you change an attribute/prop of that component; you will need to do a full page reload to see the change.
System info (required): Vite version: 0.13.2; operating system: macOS 10.15.4; Node version: 12.16.1. (Optional) npm/yarn version: 6.13.4 / 1.22.4; installed vue version (from yarn.lock or package-lock.json): 3.0.0-beta.10; installed vue-compiler-sfc version: 3.0.0-beta.10.
Logs:
vite server ready in 187ms
vite:history redirecting to /index.html +0ms
vite:hmr serving hmr client +0ms
vite:resolve vue -> node_modules/@vue/runtime-dom/dist/runtime-dom.esm-bundler.js +0ms
vite:sfc /Users/damian/Desktop/vite-app/App.vue parsed in 6ms +0ms
vite:rewrite /@modules/vue: rewritten +0ms
vite:rewrite @vue/runtime-core -> /@modules/@vue/runtime-core +2ms
vite:hmr /@modules/vue imports /@modules/@vue/runtime-core +20ms
vite:rewrite @vue/runtime-core -> /@modules/@vue/runtime-core +0ms
vite:hmr /@modules/vue imports /@modules/@vue/runtime-core +0ms
vite:rewrite @vue/shared -> /@modules/@vue/shared +0ms
vite:hmr /@modules/vue imports /@modules/@vue/shared +0ms
vite:sfc /Users/damian/Desktop/vite-app/App.vue parse cache hit +15ms
vite:sfc App.vue template compiled in 15ms +16ms
vite:resolve @vue/runtime-core -> node_modules/@vue/runtime-core/dist/runtime-core.esm-bundler.js +41ms
vite:hmr ws client connected +27ms
vite:resolve @vue/shared -> node_modules/@vue/shared/dist/shared.esm-bundler.js +2ms
vite:sfc /Users/damian/Desktop/vite-app/Comp.vue parsed in 0ms +4ms
vite:rewrite /@modules/@vue/shared: no imports found +31ms
vite:rewrite /@modules/@vue/runtime-core: rewritten +4ms
vite:rewrite @vue/reactivity -> /@modules/@vue/reactivity +4ms
vite:hmr /@modules/@vue/runtime-core imports /@modules/@vue/reactivity +12ms
vite:rewrite @vue/reactivity -> /@modules/@vue/reactivity +0ms
vite:hmr /@modules/@vue/runtime-core imports /@modules/@vue/reactivity +0ms
vite:rewrite @vue/shared -> /@modules/@vue/shared +0ms
vite:hmr /@modules/@vue/runtime-core imports /@modules/@vue/shared +0ms
vite:sfc /Users/damian/Desktop/vite-app/Comp.vue parse cache hit +12ms
vite:sfc Comp.vue template compiled in 1ms +1ms
vite:resolve @vue/reactivity -> node_modules/@vue/reactivity/dist/reactivity.esm-bundler.js +26ms
vite:rewrite /@modules/@vue/reactivity: rewritten +17ms
vite:rewrite @vue/shared -> /@modules/@vue/shared +0ms
vite:hmr /@modules/@vue/reactivity imports /@modules/@vue/shared +18ms
vite:sfc /Users/damian/Desktop/vite-app/App.vue parse cache hit +28ms
vite:sfc App.vue style compiled in 35ms +35ms
vite:hmr busting Vue cache for /Users/damian/Desktop/vite-app/App.vue +2m
vite:rewrite App.vue: cache busted +2m
vite:sfc /Users/damian/Desktop/vite-app/App.vue parsed in 3ms +2m
vite:hmr update: type vue-rerender, path /App.vue, timestamp 1589118158817 +5ms
vite:sfc /Users/damian/Desktop/vite-app/App.vue parse cache hit +5ms
vite:sfc App.vue template compiled in 4ms +4ms
vite:rewrite skipping App.vue type=template t=1589118158817 +13ms
vite:history redirecting to /index.html +2m
vite:sfc /Users/damian/Desktop/vite-app/Comp.vue parse cache hit +6s
vite:rewrite Comp.vue: rewritten +6s
vite:rewrite /vite/hmr: vite-hmr +0ms
vite:rewrite /index.html: rewritten +2ms
vite:rewrite vue -> /@modules/vue +0ms
vite:hmr /index.html imports /@modules/vue +6s
vite:rewrite App.vue -> /App.vue +0ms
vite:hmr /index.html imports /App.vue +1ms
Reproduction:
1. Run the create-vite-app command.
2. Create a simple SFC called, for example, Comp.vue.
3. Import the previously created component in App.vue and use it in the template.
vuejs/core
HMR crashes on attribute change
Bug
Describe the bug: In a freshly created app (created using create-vite-app) I stumbled upon a weird bug that is triggered when you update the state (for example by clicking on the "increase" button) and then add/remove/change an attribute of any native HTML element that listens to any event. This should trigger one of two possible errors: 1. TypeError: el is null (if a style or class attribute is changed); 2. TypeError: right-hand side of 'in' should be an object, got null (if any other attribute is changed).
System info (required): Vite version: 0.13.2; operating system: macOS 10.15.4; Node version: 12.16.1. (Optional) npm/yarn version: 6.13.4 / 1.22.4; installed vue version (from yarn.lock or package-lock.json): 3.0.0-beta.10; installed vue-compiler-sfc version: 3.0.0-beta.10.
Logs:
vite server ready in 170ms
vite:history redirecting to /index.html +0ms
vite:hmr serving hmr client +0ms
vite:resolve vue -> node_modules/@vue/runtime-dom/dist/runtime-dom.esm-bundler.js +0ms
vite:sfc /Users/damian/Desktop/vite-app/App.vue parsed in 8ms +0ms
vite:rewrite App.vue: rewritten +0ms
vite:rewrite Comp.vue -> /Comp.vue +1ms
vite:hmr /App.vue imports /Comp.vue +18ms
vite:rewrite vue -> /@modules/vue +1ms
vite:hmr /App.vue imports /@modules/vue +0ms
vite:rewrite /vite/hmr: vite-hmr +0ms
vite:rewrite /@modules/vue: rewritten +3ms
vite:rewrite @vue/runtime-core -> /@modules/@vue/runtime-core +1ms
vite:hmr /@modules/vue imports /@modules/@vue/runtime-core +4ms
vite:rewrite @vue/runtime-core -> /@modules/@vue/runtime-core +0ms
vite:hmr /@modules/vue imports /@modules/@vue/runtime-core +0ms
vite:rewrite @vue/shared -> /@modules/@vue/shared +0ms
vite:hmr /@modules/vue imports /@modules/@vue/shared +0ms
vite:sfc /Users/damian/Desktop/vite-app/App.vue parse cache hit +28ms
vite:sfc App.vue template compiled in 15ms +15ms
vite:rewrite skipping App.vue type=template +35ms
vite:resolve @vue/shared -> node_modules/@vue/shared/dist/shared.esm-bundler.js +54ms
vite:resolve @vue/runtime-core -> node_modules/@vue/runtime-core/dist/runtime-core.esm-bundler.js +1ms
vite:sfc /Users/damian/Desktop/vite-app/Comp.vue parsed in 1ms +2ms
vite:rewrite /@modules/@vue/shared: no imports found +4ms
vite:rewrite /@modules/@vue/runtime-core: rewritten +3ms
vite:rewrite @vue/reactivity -> /@modules/@vue/reactivity +4ms
vite:hmr /@modules/@vue/runtime-core imports /@modules/@vue/reactivity +46ms
vite:rewrite @vue/reactivity -> /@modules/@vue/reactivity +0ms
vite:hmr /@modules/@vue/runtime-core imports /@modules/@vue/reactivity +0ms
vite:rewrite @vue/shared -> /@modules/@vue/shared +0ms
vite:hmr /@modules/@vue/runtime-core imports /@modules/@vue/shared +0ms
vite:sfc /Users/damian/Desktop/vite-app/Comp.vue parse cache hit +10ms
vite:sfc Comp.vue template compiled in 1ms +1ms
vite:resolve @vue/reactivity -> node_modules/@vue/reactivity/dist/reactivity.esm-bundler.js +26ms
vite:rewrite /@modules/@vue/reactivity: rewritten +19ms
vite:rewrite @vue/shared -> /@modules/@vue/shared +0ms
vite:hmr /@modules/@vue/reactivity imports /@modules/@vue/shared +19ms
vite:sfc /Users/damian/Desktop/vite-app/App.vue parse cache hit +26ms
vite:sfc App.vue style compiled in 34ms +34ms
vite:hmr ws client connected +21
vite:hmr busting Vue cache for /Users/damian/Desktop/vite-app/App.vue +4s
vite:rewrite App.vue: cache busted +25
vite:sfc /Users/damian/Desktop/vite-app/App.vue parsed in 3ms +25
vite:hmr update: type vue-rerender, path /App.vue, timestamp 1589114690075 +6ms
vite:sfc /Users/damian/Desktop/vite-app/App.vue parse cache hit +6ms
vite:sfc App.vue template compiled in 6ms +6ms
vite:rewrite skipping App.vue type=template t=1589114690075 +17ms
Reproduction:
1. Run the create-vite-app command and start the app.
2. Click on the "increase" button.
3. Add/change/remove an attribute of the same button you clicked.
vuejs/core
TransitionGroup children with the same key but rendered in different v-fors are not treated as the same child
Bug
Version: 3.0.0-beta.9. Reproduction link. Steps to reproduce: toggle the item in the example. In the example above, with item id 1 done: false, toggling item.done would trigger a v-move transition in Vue 2, as it'd see the item as being the same and as just having changed its position, despite moving from one v-for to the other. Vue 3 treats the item that's removed from the one v-for and the one that now is a part of the other v-for as separate, despite the equal key, and triggers v-leave and v-enter transitions respectively. This breaks the transitions in this todo app, for example, which works in Vue 2 but doesn't in Vue 3. What is expected: due to the same key, the item is moved instead of being removed only to reappear in its new position. What is actually happening: the item is removed from its old position and reappears in its new position. I guess this happens because the items actually aren't direct children of the transition-group anymore; instead there's a fragment in between.
vuejs/core
reactive and ref type inference is wrong
Bug
Version: 3.0.0-beta.7. Reproduction link. Steps to reproduce (TypeScript):

  const state = reactive({ foo: { value: 1, label: 'bar' } })
  console.log(state.foo.label)
  // Property 'label' does not exist on type 'number'

What is expected: no lint error. What is actually happening: Property 'label' does not exist on type 'number'. ref has the same problem.
vuejs/core
Teleport content is not removed
Bug
Version: 3.0.0-beta.4. Reproduction link. Steps to reproduce: create a component using teleport; add this component to the app via v-if; then make sure that the v-if evaluates to false, so the component will be unmounted. What is expected: the teleport content is unmounted when the component is unmounted. What is actually happening: the teleport content is not removed; it stays there. Comment: it has something to do with dynamicChildren. It's an empty array on the component, while in reality it contains the teleport as a child, so the children are not unmounted, and neither is the teleport.
vuejs/core
Props are casting nullish values to empty string
Bug
Version: 3.0.0-beta.4. Reproduction link. Steps to reproduce: 1. create the template; 2. see the error. What is expected: sets the srcObject of the video DOM element to null. What is actually happening: it is trying to set it to "". The error comes from here (L35).
vuejs/core
Functional components in dev mode do not work
Bug
Version: 3.0.0-beta.3. Reproduction link (L81). Steps to reproduce: create a Vue component:

  export default function (props) {}

What is expected: the component works. What is actually happening: an error in the console: TypeError: Cannot assign to read only property 'name' of function, at resolveAsset (link to a line of code in the reproduction link). In vue 3.0.0-beta.2 everything works.
vuejs/core
Component :is
Bug
Version: 3.0.0-beta.3. Reproduction link. Steps to reproduce: use <component :is="...">. What is expected: the component tag (markup lost in extraction). What is actually happening: the component tag (markup lost in extraction).
vuejs/core
Mixin data is lost when a previous mixin watches that data
Bug
Version: 3.0.0-beta.3. Reproduction link. Steps to reproduce: none. What is expected: the text should read "mixin1 mixin3". What is actually happening: the text reads "mixin1", and vue.esm-browser.js:4685 [Vue warn]: Property "mixin3Data" was accessed during render but is not defined on instance. is printed in the console. If the mixin order is mixin1, mixin3, mixin2 instead, the text is displayed correctly.
vuejs/core
Dynamic component duplicates sibling nodes instead of appearing with v-if
Bug
Version: 3.0.0-beta.3. Reproduction link. Steps to reproduce: click the button "Click me to break things". What is expected: the heading "Hello" should appear. What is actually happening: the clicked button is duplicated.
vuejs/core
Re-evaluation of prop default when using a render function, on prop updates triggered from the parent
Bug
Version: 3.0.0-beta.2. Reproduction link. Steps to reproduce: I set up a setInterval that updates a reactive value that is sent as a prop to a child. I also added a way to see if the prop default is re-evaluated, by appending a text log to the DOM; this way you can see the behaviour directly. What is expected: a prop update doesn't trigger a re-evaluation of the default function. What is actually happening: a prop update triggers a re-evaluation of the default function. I am trying to make something like react-three-fiber. I can't hoist the value of the default, because that would mean all boxes that rely on a default value would share the same data. Oddly, it works well using the template and not the render function.
vuejs/core
Ref type breaks for HTMLElement
Bug
Version: 3.0.0-alpha.12. Reproduction link. Steps to reproduce: note the error on line 4 of index.ts. What is expected: ref(null) is assignable to a variable of that Ref type. What is actually happening: Type 'Ref<{ accessKey: string; readonly accessKeyLabel: string; autocapitalize: string; dir: string; draggable: boolean; hidden: boolean; innerText: string; lang: string; readonly offsetHeight: number; ... 232 more ...; focus: (options?: FocusOptions) => void; }>' is not assignable to type 'Ref<...>'. Perhaps some issue with Symbols in the UnwrapRef declaration; the full error includes the line "Property '[Symbol.iterator]' is missing in type ...".
vuejs/core
&nbsp; is converted to a normal space character
Bug
Version: 3.0.0-alpha.11. Reproduction link: link to 7b 22src 22 3a 22 3cdiv 3e 5cn 20 20 20 20 3cspan 3ehi 3c 2fspan 3e 26nbsp 3b 26nbsp 3b 26amp 3b 26nbsp 3b 26nbsp 3b 3cspan 3ethere 3c 2fspan 3e 5cn 20 20 20 20 3chr 3e 5cn 20 20 20 20i 20wan t 20lot s 20of 20 26nbsp 3b 26nbsp 3b 26nbsp 3b 26nbsp 3b 26nbsp 3b 26nbsp 3bspace 5cn 3c 2fdiv 3e 22 2c 22ssr 22 3afalse 2c 22options 22 3a 7b 22mode 22 3a 22module 22 2c 22prefixidentifiers 22 3afalse 2c 22optimizebinding 22 3afalse 2c 22hoiststatic 22 3afalse 2c 22cachehandlers 22 3atrue 2c 22scopeid 22 3anull 7d 7d. Steps to reproduce: use &nbsp; anywhere; see the compiled template. What is expected: &nbsp; (among other special characters) should not just be converted to its normal counterpart. There's lots of info on this, but a non-breaking space (U+00A0) is not actually a space (U+0020). What is actually happening: the &nbsp; is converted to a normal space; it's not even escaped.
vuejs/core
effect doesn't work when using reactive to proxy a Map that has a reactive object as a member's key
Bug
Version: 3.0.0-alpha.11. Reproduction link. Steps to reproduce: with devtools open, it's obvious that the trigger should be printed twice; however, it just prints once. What is expected: the function wrapped by effect should rerun. What is actually happening: it loses reactivity.
vuejs/core
Internal Vue crash, v-if on component
Bug
Version: 3.0.0-alpha.10. Reproduction link. Steps to reproduce: open the console, click on the button; Vue does some bad stuff. What is expected: the v-if is replaced by the v-else, no error. What is actually happening: things crash, apparently because slot content is evaluated before v-if, which results in a null dereference and then bad stuff. This minimal repro stays usable after the crash (you could set a new "some" value and recover); my own app is more complex and is unusable after this bug.
vuejs/core
The leading newline character immediately following <pre> should be stripped
Bug
Version: 3.0.0-alpha.10. Reproduction links: 7b 22src 22 3a 22 3cpre 3e 5cn123 3c 2fpre 3e 5cn 22 2c 22options 22 3a 7b 22mode 22 3a 22module 22 2c 22prefixidentifiers 22 3afalse 2c 22optimizebinding 22 3afalse 2c 22hoiststatic 22 3afalse 2c 22cachehandlers 22 3afalse 2c 22scopeid 22 3anull 7d 7d and 7b 22src 22 3a 22 3cpre 3e 5cn123 3c 2fpre 3e 5cn 22 2c 22options 22 3a 7b 22mode 22 3a 22module 22 2c 22prefixidentifiers 22 3afalse 2c 22optimizebinding 22 3afalse 2c 22hoiststatic 22 3afalse 2c 22cachehandlers 22 3afalse 2c 22scopeid 22 3anull 7d 7d. Steps to reproduce: see the compiled output. What is expected: per the HTML spec, for the pre element a leading newline character immediately following the start tag should be stripped. What is actually happening: the leading newline character immediately following the start tag is preserved. The newline character is stripped in Vue 2 (3cpre 3e 0a123 3c 2fpre 3e 0a).
vuejs/core
Transition not working with custom dynamic components
Bug
Version: 3.0.0-alpha.10. Reproduction link. Steps to reproduce: click on the toggle buttons multiple times. What is expected: the three elements animate the same way; the second toggle should use a different animation for all elements. What is actually happening: the 1st element (router-view) seems to enter right away, without a leave transition, only ever using the fade transition (the initial value for transitionName). Using a key/component-name attribute on router-view changes the behavior but does not fix it. It works in Vue 2 with a key.
vuejs/core
v-cloak not working when used on the root element
Bug
Version: 3.0.0-alpha.10. Reproduction link. Steps to reproduce: the usual v-cloak steps. What is expected: the cloaked element should become visible after compilation. What is actually happening: nothing becomes visible. I see that v-cloak was added to Vue 3, but it does not work as expected.
vuejs/core
Dynamic content is not correctly updated in certain cases
Bug
Version: 3.0.0-alpha.9. Reproduction link. Steps to reproduce: click the "Click me" button. What is expected: "2 2 2 2 2" is shown on the screen. What is actually happening: "2 1 1 1 2" is shown on the screen.
vuejs/core
Warn on incompatible Node version for devs
Good First Issue
What problem does this feature solve? Let devs know which Node version to use for development and testing. When contributors first clone the repo and run yarn, they might not have read contributing.md yet, where it says Node 10. What does the proposed API look like? 1. Something like package.devEngines, which React uses to enforce the version via an installation hook. 2. An .nvmrc file, so nvm can easily identify which version of Node devs should use.
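The two options above can be sketched as follows; this is an illustrative fragment, not taken from the repo (field values are assumptions). Option 1 relies on the package.json "engines" field (npm only enforces it with engine-strict, which is why React pairs a devEngines-style field with a preinstall check script):

```json
{
  "engines": {
    "node": ">=10"
  }
}
```

Option 2 is an .nvmrc file at the repo root containing the single line "10", which nvm use picks up automatically.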
vuejs/core
@vue/reactivity: Map and Set lookups behave differently after being proxied
Bug
Version: 3.0.0-alpha.7. Reproduction link. Steps to reproduce: insert some reactive value into a non-reactive Set or Map, then wrap the Set or Map in reactive. The inserted value will no longer match as a key, making get/has/delete return undefined/false. What is expected: after being wrapped in a reactive proxy, the observable behaviour of a Set or Map remains unchanged. What is actually happening: wrapped Sets and Maps behave differently, as described. This behavioural bug is a result of the proxy code in @vue/reactivity unwrapping proxied keys before passing them to the underlying Map and Set methods, even if the keys already present are proxies (because they were inserted into the raw Map/Set).
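The identity mismatch behind this report can be shown with a bare Proxy and Map, independent of Vue; a minimal sketch of the failure mode, not @vue/reactivity's actual code:

```javascript
// A Map keyed by a proxy will not find the raw object, and vice versa,
// because a Proxy has a distinct identity from its target.
const raw = { id: 1 };
const proxy = new Proxy(raw, {});  // stand-in for a reactive() wrapper
const map = new Map();

map.set(proxy, "value");           // key inserted as the proxy

console.log(map.get(proxy));       // "value" - same identity
console.log(map.get(raw));         // undefined - raw !== proxy
```

So if a wrapper unwraps keys to their raw targets before calling the underlying Map, any entry that was keyed by the proxy itself becomes unreachable.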
vuejs/core
Improve PropType for functions
Help Wanted
What problem does this feature solve? A TypeScript error occurs on this example code:

  import { defineComponent, PropType } from 'vue'

  export default defineComponent({
    props: {
      category: ...,
      onSelect: {
        type: Function as PropType<(val: number) => void>,
        required: true
      }
    }
  })

TypeScript error: Conversion of type 'FunctionConstructor' to type 'PropType<(val: number) => void>' may be a mistake because neither type sufficiently overlaps with the other. If this was intentional, convert the expression to 'unknown' first. We have to write it with "as unknown", but I feel this is redundant; Function as unknown as PropType<(val: number) => void> works as well. What does the proposed API look like? Function as PropType<(val: number) => void>.
vuejs/core
watch handler not executed immediately if the source returns undefined
Bug
Version: 3.0.0-alpha.4. Reproduction link. Steps to reproduce:

  watch(() => null, () => console.log('null watch'))
  watch(() => undefined, () => console.log('undefined watch'))

What is expected: "null watch" and "undefined watch" in the console. What is actually happening: only "null watch" in the console.
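A plausible failure mode for a bug like this (an assumption for illustration, not Vue's actual source) is a watcher that uses undefined as its "never ran" sentinel, so a source that legitimately returns undefined looks unchanged on the first run:

```javascript
// Hypothetical sketch: comparing against an `undefined` sentinel
// silently skips sources whose value really is undefined.
function naiveWatch(source, callback) {
  let oldValue;                    // sentinel: undefined means "never ran"
  const value = source();
  if (value !== oldValue) {        // fails when source() === undefined
    callback(value, oldValue);
  }
}

const calls = [];
naiveWatch(() => null, () => calls.push("null watch"));
naiveWatch(() => undefined, () => calls.push("undefined watch"));
console.log(calls);                // ["null watch"]
```

A fix in this sketch would be a dedicated sentinel object (e.g. const NONE = {}) instead of undefined.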
vuejs/core
Keyed component inside transition updated during leave transition
Bug
Version: 3.0.0-alpha.4. Reproduction link. Steps to reproduce: 1. add a component; 2. wrap it in a transition with mode="out-in"; 3. change its key. What is expected: the old element is destroyed but kept alive until the leave transition is done; then the new one appears. What is actually happening: the leave transition is applied to the old node, but the new child is immediately shown.
vuejs/core
'&' is not allowed in attribute values
Bug
Version: 16.0.0-alpha.2. Reproduction link: 7b 22src 22 3a 22 3ca 20href 3d 5c 22github 2com 2flogin 2fo2adu4th 2faduthorize 3fclient i d 3da 26e 5c 22 20target 3d 5c 22 blank 5c 22 3ehello 20 3c 2fa 3e 5cn 22 2c 22options 22 3a 7b 22mode 22 3a 22module 22 2c 22prefixidentifiers 22 3afalse 2c 22hoiststatic 22 3atrue 2c 22cachehandlers 22 3afalse 2c 22scopeid 22 3anull 7d 7d. Steps to reproduce: HTML code with an anchor whose href contains a '&' (the "hello" link above). What is expected: no error. What is actually happening: fails to compile with VueCompilerError: unknown entity name. The error is on the '&' character in the href.
vuejs/core
Attributes get converted to camelCase without explicit props declaration
Bug
Version: 3.0.0-alpha.1. Reproduction link (Vue 2 for comparison). Steps to reproduce:

  import { h, createApp } from 'vue'

  function Button(props, { attrs }) {
    console.log(props, attrs)
  }

  const App = {
    setup() {
      return () => h(Button, { 'data-id': 1, 'aria-label': 'close' })
    }
  }

  createApp().mount(App, '#root')

What is expected: Button's props should be in camelCase and attrs in kebab-case. What is actually happening: without an explicit props declaration, Button's attrs keys are converted to camelCase.
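The conversion the report describes can be illustrated with a small camelization helper; this is an assumed stand-in for whatever normalization the framework applies, not Vue's actual implementation:

```javascript
// Kebab-case -> camelCase: uppercase every character following a hyphen.
const camelize = s => s.replace(/-(\w)/g, (_, c) => c.toUpperCase());

console.log(camelize("data-id"));    // "dataId"
console.log(camelize("aria-label")); // "ariaLabel"
```

The complaint is that this normalization is applied to fallthrough attrs too, whereas attrs are expected to keep their original kebab-case keys.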
vuejs/core
Behavior change in the rendering of primitive values
Bug
Version: 3.0.0-alpha.1. Reproduction links: 2.x and 3.0.0-alpha.1. Steps to reproduce: just open the above reproduction links. What is expected: there is an inconsistency between Vue 2.x and the latest alpha; 3.x should not render boolean values in the render function. What is actually happening: it renders "0falsetrueNaN" instead of "0NaN".
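The Vue 2 convention the reporter expects can be sketched as a tiny text-normalization rule (an illustration of the expected behavior, not Vue's source): nullish and boolean values render as the empty string, everything else is stringified.

```javascript
// Expected text conversion: null/undefined/booleans -> "", otherwise String(v).
const toText = v => (v == null || typeof v === "boolean") ? "" : String(v);

const out = [0, false, true, NaN].map(toText).join("");
console.log(out); // "0NaN"
```

Without the boolean rule, the same list joins to "0falsetrueNaN", which is what the 3.0.0-alpha.1 reproduction shows.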
vuejs/core
Template compiler 2.x incompatibility: multiple statements inside event handlers produce invalid generated code
Bug
Version: 3.0.0-alpha.1. Reproduction link: none. Steps to reproduce: attempt to compile a template whose click handler contains multiple statements, e.g. @click="foo(); bar();". What is expected: it should compile without any errors or warnings, and when the button is clicked it should call the foo and bar functions sequentially. What is actually happening: the generated code is invalid. This template fragment works in 2.x, where the click event handler gets compiled into something like function ($event) { foo(); bar(); }. But in 3.0.0-alpha.1 the compiled code is something like _cache[0] || (_cache[0] = $event => (foo(); bar())), which is invalid JS syntax. It can be fixed on the user's end by not having code like that in the template; if you need to execute multiple statements, they should be extracted into a method, or use a comma instead of a semicolon.
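The comma workaround mentioned at the end works because (foo($event), bar($event)) is a single expression, so it is legal as an arrow-function expression body, while (foo($event); bar($event)) is a syntax error; a quick demonstration:

```javascript
// The comma operator evaluates both calls left to right and is a valid
// expression, unlike a semicolon-separated pair inside parentheses.
const calls = [];
const foo = e => calls.push(["foo", e]);
const bar = e => calls.push(["bar", e]);

const handler = $event => (foo($event), bar($event)); // expression body
handler("click");
console.log(calls); // [["foo","click"],["bar","click"]]
```

This mirrors what the compiler would need the cached inline handler to look like for multiple statements to survive being wrapped in an expression.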
vuejs/core
test: e2e, enhance and fix tests of the svg example
Bug
Currently the failing tests are expected. I found a bug that is probably caused by v-model. I tried but I couldn't fix it, so I added tests to cover this issue (the new assertStats part); therefore the tests of this PR should fail for now. I can rebase and push after it's fixed. To reproduce the issue: 1. open the svg example; 2. remove the first range control ("A"); 3. drag the other ranges; they are now broken. For now it can be fixed by adding a key, but in Vue 2.0 a key is not required. If it is required in Vue 3.0, I can help with fixing the example in this PR. I also added the assertLabels test, which can catch the issue of #552 for regression. The same svg example in Vue 2.0 passes all the newly added tests. Changes: rename setValue to the more accurate typeValue; add a setValue which will set the value and dispatch an input event; add the assertLabels test; fix the broken assertStats (it didn't assert anything, sorry for my stupid mistake); rename assertStats to assertPolygon, because there is a new assertStats.
vuejs/core
refactor(reactivity): add warning for readonly computed setter
Duplicate
Add a warning for the readonly computed setter.
vuejs/core
fix: fix isSimpleIdentifier output on reserved words
Bug
isSimpleIdentifier currently returns true when passed "true", "false" and other reserved keywords, resulting in _ctx.true appearing in the render function when things like v-if="true" are used. I've used the allowedGlobals (L9) list from Vue 2, with a few additions for booleans and null, so that the behaviour is hopefully similar to that of Vue 2. Once again, not 100% sure on my tests. Also, thanks to pikax for pointing this issue out to me and letting me work on it.
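The shape of the fix can be sketched as follows; an illustrative reimplementation under assumed names, not the actual compiler source:

```javascript
// A "simple identifier" check that rejects reserved words, so literals
// like `true` in v-if="true" are not rewritten to `_ctx.true`.
const reserved = new Set(["true", "false", "null", "undefined", "this"]);

const isSimpleIdentifier = name =>
  /^[A-Za-z_$][\w$]*$/.test(name) && !reserved.has(name);

console.log(isSimpleIdentifier("msg"));  // true  - would become _ctx.msg
console.log(isSimpleIdentifier("true")); // false - left as a literal
```

The real fix reuses Vue 2's allowedGlobals list for the deny set rather than the small hard-coded one shown here.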
vuejs/core
fix(runtime-dom): nodeOps inserts unkeyed elements
Bug
Build/distribution: global. Problem: when adding new items to a v-for list, nodeOps.insert receives the app container element instead of the parent of the children. Here is the error image. Context: I was just playing around with the new Composition API and made a simple app for custom tasks. I found out it renders the list but fails to render new items added to the list. My code: (HTML document).
tensorflow/tensorflow
SystemError in tf.constant when dtype is tf.uint64 and the value is negative
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly? Yes. Source: source. TensorFlow version: tf 2.15. Custom code: no. OS platform and distribution: Linux Ubuntu 20.04. Mobile device: no response. Python version: 3.10. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behavior: I know that negative numbers are unexpected input, but when the dtype is tf.uint32 or tf.uint64, tf.constant can map negative numbers to the corresponding unsigned integer; in fact, tf.uint32 does exactly this. Take tf.constant(-10, dtype=tf.uint64) as an example: in eager mode, the code triggers a SystemError when it is called for the first time, but subsequent calls no longer trigger the error. I think this is a bug. Standalone code to reproduce the issue: (Colab link, scrollto ubwqmqny4ohb). Relevant log output: SystemError: returned a result with an exception set.
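The unsigned mapping the reporter expects is ordinary two's-complement wraparound modulo 2^64. A quick language-agnostic illustration (written in JavaScript with BigInt here purely for demonstration; it is not TensorFlow code):

```javascript
// Map a negative integer onto uint64 range: v mod 2**64.
// This is what tf.constant(-10, dtype=tf.uint64) would be expected to yield.
const toUint64 = v => BigInt.asUintN(64, BigInt(v));

console.log(toUint64(-10)); // 18446744073709551606n  (= 2**64 - 10)
```

tf.uint32 already behaves this way per the report; the bug is that the uint64 path raises a SystemError on first call instead of performing the same wrap.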
tensorflow/tensorflow
What is the effect of TF_GUARDED_BY(mu_) for variables like tensors?
Bug
Source: source. TensorFlow version: tf 2.14. Custom code: yes. OS platform and distribution: Linux Ubuntu 20.04. Mobile device: no response. Python version: 3.9. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behavior: What is the significance of TF_GUARDED_BY(mu_) on tensors in this file (mkl_conv_op.cc, L1393)? How is it affecting memory use? Is it preserving the memory of the node for the next iteration (iteration meaning, e.g., batch), or is the memory of the same node reused within the same iteration? Standalone code to reproduce the issue:

 private:
  std::shared_ptr<...> fuse_add_src_;
  std::shared_ptr<...> fuse_add_dst_;
  std::vector<...> strides_;
  std::vector<...> dilations_;
  std::vector<...> padding_list_;
  bool is_filter_const_;
  mutex mu_;
  Padding padding_;
  string data_format_str_;
  TensorFormat data_format_;
  Tensor cached_filter_data_ TF_GUARDED_BY(mu_);
#ifndef ENABLE_ONEDNN_V3
  Tensor cached_filter_md_ TF_GUARDED_BY(mu_);
#else
  FilterMemoryDesc cached_filter_md_ TF_GUARDED_BY(mu_);
#endif  // ENABLE_ONEDNN_V3

Relevant log output: no response.
tensorflow/tensorflow
WARNING:tensorflow: Entity <initialize_variables at 0x000001f58f9464c8> could not be transformed and will be executed as-is
Bug
Issue type: Bug. Have you reproduced the bug with TensorFlow Nightly? Yes. Source: source. TensorFlow version: tf 2.0.0. Custom code: yes. OS platform and distribution: Windows 11. Mobile device: no response. Python version: 3.7.12. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behavior: the code should be executed without any warning. Standalone code to reproduce the issue:

  # Train a neural network on Fashion MNIST using L2 regularization

  # Import essential packages
  import numpy as np
  import tensorflow as tf
  from tensorflow import keras
  import matplotlib.pyplot as plt

  # Load the Fashion MNIST dataset and preprocess
  (x_train_full, y_train_full), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()

  # Scale pixel values to the range 0-1
  x_train_full = x_train_full / 255.0
  x_test = x_test / 255.0

  # Split the training set into validation and training subsets
  x_valid, x_train = x_train_full[:5000], x_train_full[5000:]
  y_valid, y_train = y_train_full[:5000], y_train_full[5000:]

  # Calculate mean and standard deviation of training pixel values
  pixel_means = x_train.mean(axis=0, keepdims=True)
  pixel_stds = x_train.std(axis=0, keepdims=True)

  # Standardize the input features using mean and standard deviation
  x_train_scaled = (x_train - pixel_means) / pixel_stds
  x_valid_scaled = (x_valid - pixel_means) / pixel_stds
  x_test_scaled = (x_test - pixel_means) / pixel_stds

  # Set random seeds for reproducibility
  tf.random.set_seed(42)
  np.random.seed(42)

  # Create a sequential model
  model = keras.models.Sequential([
      # Flatten layer to convert 2D input (28x28) to a 1D array
      keras.layers.Flatten(input_shape=[28, 28]),
      # Hidden layer 1: 300 neurons, ELU activation, He initialization,
      # L2 regularization with a strength of 0.01
      keras.layers.Dense(300, activation="elu", kernel_initializer="he_normal",
                         kernel_regularizer=keras.regularizers.l2(0.01)),
      # Hidden layer 2: 100 neurons, ELU activation, He initialization,
      # L2 regularization with a strength of 0.01
      keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal",
                         kernel_regularizer=keras.regularizers.l2(0.01)),
      # Output layer: 10 neurons for 10 classes, softmax activation,
      # L2 regularization with a strength of 0.01
      keras.layers.Dense(10, activation="softmax",
                         kernel_regularizer=keras.regularizers.l2(0.01)),
  ])

  # Compile the model
  model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam",
                metrics=["accuracy"])

  # Define number of epochs
  n_epochs = 2

  # Train the model
  history = model.fit(x_train_scaled, y_train, epochs=n_epochs,
                      validation_data=(x_valid_scaled, y_valid))

Relevant log output:

  Train on 55000 samples, validate on 5000 samples
  Epoch 1/2
  WARNING:tensorflow: Entity <initialize_variables at 0x000001f58f9464c8> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: No module named 'tensorflow_core.estimator'
  WARNING: Entity <initialize_variables at 0x000001f58f9464c8> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: No module named 'tensorflow_core.estimator'
  55000/55000 - 5s 95us/sample - loss: 1.6073 - accuracy: 0.8114 - val_loss: 0.7551 - val_accuracy: 0.8054
  Epoch 2/2
  55000/55000 - 4s 80us/sample - loss: 0.7145 - accuracy: 0.8279 - val_loss: 0.7187 - val_accuracy: 0.8230
tensorflow/tensorflow
efficient_serving.ipynb has some bugs
Bug
Issue type: Documentation Bug. Have you reproduced the bug with TensorFlow Nightly? Yes. Source: source. TensorFlow version: tf 2.16.1. Custom code: no. OS platform and distribution: no response. Mobile device: no response. Python version: no response. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behavior: Hi, I was trying to run this guide on the TensorFlow site and got an error when trying to create the MovielensModel. I didn't change any code and just ran it on a Google Colab server. The code that I'm getting the error on is in the 10th cell (model = MovielensModel(...)), which is supposed to create the retrieval model; however, I get the error below. I tried to work it out but I couldn't fix it. I found out that the installation of scann brings this problem; if we don't install scann, the project runs just fine.

  ValueError                                Traceback (most recent call last)
  in
        1 model = MovielensModel(...)
        2 model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))
  (6 frames)
  /usr/local/lib/python3.10/dist-packages/keras/src/backend/common/variables.py in standardize_shape(shape)
      460         continue
      461     if not is_int_dtype(type(e)):
      462         raise ValueError(
      463             f"Cannot convert '{shape}' to a shape. "
      464             f"Found invalid entry '{e}' of type '{type(e)}'. "
  ValueError: Cannot convert 'counter' to a shape. Found invalid entry 'c' of type str.

Standalone code to reproduce the issue: (shell). Relevant log output: no response.
tensorflow/tensorflow
"Generating Images with BigGAN" demo in docs no longer compatible with tensorflow_hub
Bug
Issue type: Documentation Bug. Have you reproduced the bug with TensorFlow Nightly? No. Source: source. TensorFlow version: tf 1.7.0. Custom code: no. OS platform and distribution: no response. Mobile device: no response. Python version: 3.7.11. Bazel version: no response. GCC/compiler version: no response. CUDA/cuDNN version: no response. GPU model and memory: no response. Current behavior: when first running the demo in the docs, the following bug is encountered when running the cell under the section "Load a BigGAN generator module from TF Hub": AttributeError: module 'tensorflow_hub' has no attribute 'Module'. Following Johnny Tang's solution to change hub.Module to hub.load, the below error is met: AttributeError: '_AutoTrackable' object has no attribute 'get_input_info_dict'. This persists after trying multiple versions of tf and tensorflow_hub, as well as Python. Standalone code to reproduce the issue: the demo notebook. Relevant log output: AttributeError: '_AutoTrackable' object has no attribute 'get_input_info_dict'.
tensorflowtensorflow
tf trt warning could not find tensorrt
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: source
TensorFlow version: 2.15.0
Custom code: No
OS platform and distribution: Ubuntu 22.04
Mobile device: No response
Python version: 3.11
Bazel version: No response
GCC/compiler version: 11.2
CUDA/cuDNN version: 12.4
GPU model and memory: RTX 3050 Ti

Current behavior?

Hey everybody, I have tried installing TensorFlow from my Ubuntu 22.04 terminal in an Anaconda environment, and I am getting "Could not find TensorRT". I have a 4-monitor setup with Ubuntu, and the 550 NVIDIA driver does not work with the RTX 3050 Ti on my machine, so I have to run the 535 driver. I think CUDA 12.4 is supported; it installed the 550 driver automatically and I reverted back to the 535 driver, but after all that I still end up with the error below. I have tried uninstalling and reinstalling and can't get it to work. I am a graduate student taking a machine learning course, and I am spending more time debugging this issue than actually coding. Could somebody help me resolve it?

Standalone code to reproduce the issue

```shell
$ nvidia-smi
NVIDIA-SMI 535.161.08    Driver Version: 535.161.08    CUDA Version: 12.2
| GPU  Name                     Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf           Pwr:Usage/Cap    |         Memory-Usage | GPU-Util  Compute M. |
|                                             |                      |               MIG M. |
|   0  NVIDIA GeForce RTX 3050 ...        Off | 00000000:01:00.0  On |                  N/A |
| N/A   66C  P8                  8W /    35W |   1011MiB /  4096MiB |      0%      Default |
|                                             |                      |                  N/A |
| Processes:                                                      GPU Memory Usage         |
|   0  N/A  N/A   2812  G  /usr/lib/xorg/Xorg                                 648MiB       |
|   0  N/A  N/A   3023  G  /usr/bin/gnome-shell                               148MiB       |
|   0  N/A  N/A  78082  G  ...az/anaconda3/envs/skynet/bin/python               1MiB       |
|   0  N/A  N/A  78475  G  ...seed-version=20240329-165146.919000              96MiB       |
|   0  N/A  N/A  79970  G  ...gnu/webkit2gtk-4.0/WebKitWebProcess              68MiB       |

(base) thenaz@skynet:~/Downloads$ lspci | grep -i nvidia
01:00.0 VGA compatible controller: NVIDIA Corporation GA107M [GeForce RTX 3050 Ti Mobile] (rev a1)
01:00.1 Audio device: NVIDIA Corporation Device 2291 (rev a1)

(base) thenaz@skynet:~/Downloads$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Feb_27_16:19:38_PST_2024
Cuda compilation tools, release 12.4, V12.4.99
Build cuda_12.4.r12.4/compiler.33961263_0
```

Relevant log output

```shell
$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2024-03-30 14:29:58.668656: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2024-03-30 14:29:58.691384: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-03-30 14:29:58.691405: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-03-30 14:29:58.692006: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-03-30 14:29:58.695703: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-30 14:29:59.119119: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-03-30 14:29:59.660586: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:901] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at #L344-L355
(the NUMA-node line is repeated twice more at 14:29:59.675221 and 14:29:59.675340)
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```
tensorflow/tensorflow
Check failed: ret == 0 (11 vs. 0) Thread tf_data_private_threadpool creation via pthread_create() failed. Aborted (core dumped)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: 2.15.0.post1
Custom code: Yes
OS platform and distribution: Rocky Linux 8.9
Mobile device: No response
Python version: 3.10.12
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: 8.9.2.26
GPU model and memory: NVIDIA A100-SXM4, 80 GB

Current behavior?

I am trying to run a simple denoising autoencoder. My training data and label data are 900 samples of healpy maps at nside=64 resolution, loaded as numpy arrays. After normalising the maps, I use tf.data.Dataset.from_tensor_slices to create the datasets. When I used random noise to create these maps and ran in a Jupyter notebook, it took ages to start training after calling model.fit, but it did run and produced some results, so I know the model works. I then tried to run on a GPU with real data, and this is where the issue starts. It shows the following error:

Check failed: ret == 0 (11 vs. 0) Thread tf_data_private_threadpool creation via pthread_create() failed. Aborted (core dumped)

and the process stops.

Standalone code to reproduce the issue

```shell
Here is a Colab link. It runs on Colab, but it doesn't run in the terminal.
```

Relevant log output

```shell
2024-03-27 12:14:07.288975: F external/local_tsl/tsl/platform/default/env.cc:74] Check failed: ret == 0 (11 vs. 0) Thread tf_data_private_threadpool creation via pthread_create() failed.
Aborted (core dumped)
```
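A pthread_create return value of 11 is EAGAIN on Linux, which usually means the process hit a thread or memory resource limit rather than a TensorFlow bug. A quick pure-Python check of the relevant limits (a diagnostic sketch, not part of the report; the function name is my own):

```python
import resource


def thread_limits():
    """Return the soft limits most often behind pthread_create EAGAIN failures.

    RLIMIT_NPROC caps how many threads/processes the user may create;
    RLIMIT_AS caps virtual memory, which each new thread's stack consumes.
    A value of resource.RLIM_INFINITY (-1) means 'unlimited'.
    """
    nproc_soft, _ = resource.getrlimit(resource.RLIMIT_NPROC)
    as_soft, _ = resource.getrlimit(resource.RLIMIT_AS)
    return nproc_soft, as_soft


soft_nproc, soft_as = thread_limits()
print("RLIMIT_NPROC soft limit:", soft_nproc)
print("RLIMIT_AS soft limit:", soft_as)
```

If either limit is low on the cluster node (e.g. set by a batch scheduler), raising it with `ulimit -u` / `ulimit -v` before launching Python is worth trying.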
tensorflow/tensorflow
Aborted (core dumped) with tf.raw_ops.LoadAndRemapMatrix
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: tf 2.16.1
Custom code: Yes
OS platform and distribution: Ubuntu 20.04
Mobile device: No response
Python version: 3.11
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?

Under specific inputs, tf.raw_ops.LoadAndRemapMatrix encounters an abort (core dumped).

Standalone code to reproduce the issue

```python
import tensorflow as tf

num_rows = 7
num_cols = 16
max_rows_in_memory = 1
ckpt_path = tf.constant("/tmp", shape=[1], dtype=tf.string)
old_tensor_name = tf.constant("/tmp", shape=[1], dtype=tf.string)
row_remapping = tf.constant(-1, shape=[7], dtype=tf.int64)
col_remapping = tf.constant(0, shape=[], dtype=tf.int64)
initializing_values = tf.constant(42, shape=[112], dtype=tf.float32)
tf.raw_ops.LoadAndRemapMatrix(
    ckpt_path=ckpt_path,
    old_tensor_name=old_tensor_name,
    row_remapping=row_remapping,
    col_remapping=col_remapping,
    initializing_values=initializing_values,
    num_rows=num_rows,
    num_cols=num_cols,
    max_rows_in_memory=max_rows_in_memory)
```

Relevant log output

```shell
2024-03-28 07:29:28.849010: F tensorflow/core/framework/tensor_shape.cc:45] Check failed: NDIMS == dims() (1 vs. 0) Asking for tensor of 1 dimensions from a tensor of 0 dimensions
Aborted (core dumped)
```
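The crash comes from the kernel assuming its remapping inputs are rank-1 tensors while the repro passes a scalar `col_remapping`. A defensive pre-flight check in pure NumPy (a hypothetical wrapper sketch, not part of the TF API; `validate_remap_args` is my own name) would reject such input with a Python exception before the C++ CHECK can abort the process:

```python
import numpy as np


def validate_remap_args(row_remapping, col_remapping):
    """Reject inputs that would trip LoadAndRemapMatrix's internal rank CHECK.

    Both remapping arguments must be rank-1 (a possibly empty vector);
    the repro above passes a scalar col_remapping, which this rejects
    cleanly instead of crashing the whole process.
    """
    for name, t in (("row_remapping", row_remapping),
                    ("col_remapping", col_remapping)):
        nd = np.ndim(t)
        if nd != 1:
            raise ValueError(f"{name} must be rank-1, got rank {nd}")
    return True


# the scalar col_remapping from the repro fails the check
try:
    validate_remap_args(np.full(7, -1, dtype=np.int64), np.int64(0))
except ValueError as e:
    print(e)
```

The underlying fix belongs in the op's input validation; this only shows the shape invariant being violated.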
tensorflow/tensorflow
Calling submodels in train_step throws a ValueError when saving and loading
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: source
TensorFlow version: 2.15
Custom code: Yes
OS platform and distribution: Linux Ubuntu 23.10
Mobile device: No response
Python version: 3.11
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?

When saving and loading a nested model, I get this error: `ValueError: Layer 'dense_3' expected 0 variables, but received 2 variables during loading. Expected: []` — which is weird, because there is no dense_3 layer. Calling the submodels by themselves (via self.net1 and self.net2) inside train_step throws the error, but having them called by the call method of the higher model and returning their respective values does not. How this could lead to this error message is beyond me. I managed to build the following minimal example.

Standalone code to reproduce the issue

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Activation, Input, BatchNormalization, Dropout, Flatten, Identity

args = {"activation": "relu", "batch_norm": True}


@tf.keras.saving.register_keras_serializable()
class CustomModel1(Model):
    def __init__(self):
        super().__init__()
        self.dense = Dense(32)

    def call(self, inputs):
        x = self.dense(inputs)
        return x


@tf.keras.saving.register_keras_serializable()
class CustomModel2(Model):
    def __init__(self):
        super().__init__()
        self.dense = Dense(32)

    def call(self, inputs):
        x = self.dense(inputs)
        return x


@tf.keras.saving.register_keras_serializable()
class CustomModel3(Model):
    def __init__(self):
        super().__init__()
        self.net1 = CustomModel1()
        self.net2 = CustomModel2()

    def call(self, inputs):
        z = self.net1(inputs)
        x = self.net2(z)
        return z, x

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            # z, y_pred = self(x)  # this fixes it instead
            y_pred = self.net2(self.net1(x))  # this line throws the error
            loss = self.compiled_loss(y, y_pred)
        trainable_vars = self.trainable_weights
        gradients = tape.gradient(loss, trainable_vars)
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}


# instantiate the model
model = CustomModel3()

# compile the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# create some dummy data for training
x_train = np.random.random((1000, 32))
y_train = np.random.randint(10, size=(1000,))

# train the model for one epoch
model.fit(x_train, y_train, epochs=1)

# save the model
model.save("custom_model.keras", save_format="keras")

# load the model again
loaded_model = tf.keras.models.load_model("custom_model.keras")

# generate some sample data for prediction
x_sample = np.random.random((10, 32))  # assuming 10 samples with 32 features each

# make predictions using the loaded model
predictions = loaded_model.predict(x_sample)
print(predictions)  # print the predictions
print(model.summary())
```

Relevant log output

```shell
/home/georg/anaconda3/envs/vae/bin/python3.11 /home/georg/python_project/vae/test/test_save_basic.py
2024-03-27 10:02:21.156169: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: SSE4.1 SSE4.2 AVX AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
32/32 - 1s 2ms/step - loss: 9.2085 - accuracy: 0.0400
/home/georg/anaconda3/envs/vae/lib/python3.11/site-packages/keras/src/saving/saving_api.py:164: UserWarning: You are saving a model that has not yet been built. It might not contain any weights yet. Consider building the model first by calling it on some data.
  saving_lib.save_model(model, filepath)
WARNING:absl:Skipping variable loading for optimizer 'Adam', because it has 1 variable(s) whereas the saved optimizer has 9 variable(s).
Traceback (most recent call last):
  File "/home/georg/python_project/vae/test/test_save_basic.py", line 79, in <module>
    loaded_model = tf.keras.models.load_model("custom_model.keras")
  File "/home/georg/anaconda3/envs/vae/lib/python3.11/site-packages/keras/src/saving/saving_api.py", line 254, in load_model
    return saving_lib.load_model(
  File "/home/georg/anaconda3/envs/vae/lib/python3.11/site-packages/keras/src/saving/saving_lib.py", line 281, in load_model
    raise e
  File "/home/georg/anaconda3/envs/vae/lib/python3.11/site-packages/keras/src/saving/saving_lib.py", line 269, in load_model
    _load_state(
  File "/home/georg/anaconda3/envs/vae/lib/python3.11/site-packages/keras/src/saving/saving_lib.py", line 457, in _load_state
    _load_state(
  File "/home/georg/anaconda3/envs/vae/lib/python3.11/site-packages/keras/src/saving/saving_lib.py", line 466, in _load_state
    _load_container_state(
  File "/home/georg/anaconda3/envs/vae/lib/python3.11/site-packages/keras/src/saving/saving_lib.py", line 534, in _load_container_state
    _load_state(
  File "/home/georg/anaconda3/envs/vae/lib/python3.11/site-packages/keras/src/saving/saving_lib.py", line 435, in _load_state
    trackable.load_own_variables(weights_store.get(inner_path))
  File "/home/georg/anaconda3/envs/vae/lib/python3.11/site-packages/keras/src/engine/base_layer.py", line 3531, in load_own_variables
    raise ValueError(
ValueError: Layer 'dense_3' expected 0 variables, but received 2 variables during loading. Expected: []

Process finished with exit code 1
```
tensorflow/tensorflow
ImportError when using TensorFlow 2.8.0 and Python 3.9.16: undefined symbol: PyCMethod_New
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: 2.8.0
Custom code: Yes
OS platform and distribution: Linux amd64
Mobile device: EMR cluster
Python version: 3.9.16
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?

I am running a Python script UDF from PySpark on EMR nodes that are launched using a custom Docker image. I have Python 3.9 and TensorFlow 2.8.0 installed. When running this particular Python script as a spark-submit job, I get an error from TensorFlow: PyCMethod_New.

Standalone code to reproduce the issue

```python
# Below is the simple Python code I am testing as a spark-submit job that uses
# Docker as the runtime environment; the error is thrown while importing the package itself.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
import sys
sys.path.append("/home/ec2-user/venv/lib/python3.9/site-packages")
import zooperetl
import tensorflow as tf

# initialize Spark session
spark = SparkSession.builder.appName("MySparkJob").getOrCreate()

# create a sample DataFrame
data = [("zooperetl",), ("numpy",), ("tensorflow",)]
columns = ["package"]
df = spark.createDataFrame(data, columns)

# define a custom UDF to import a library inside the UDF
def get_tensorflow_version():
    return tf.__version__

# register the UDF
get_tf_version_udf = udf(get_tensorflow_version, StringType())

# apply the UDF to create a new column with the TensorFlow version
df_with_tf_version = df.withColumn("tensorflow_version", get_tf_version_udf())

# show the DataFrame with the added column
df_with_tf_version.show()
```

```shell
# submit the Spark job to the EMR cluster
# replace s3://your-bucket/your-script.py with your actual script location
spark-submit --master yarn --deploy-mode client s3://your-bucket/your-script.py
```

Relevant log output: No response
tensorflow/tensorflow
tf.cast does not preserve the requested precision for Python types mapping to float64, int64, complex128, etc.
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary
TensorFlow version: any newer version, up to develop
Custom code: No

Current behavior?

tf.cast truncates the precision of non-TF Python/NumPy types when the requested conversion is higher-precision than the default of convert_to_tensor, i.e. float64, int64, complex128.

Example: casting a NumPy array with float64, or a Python float, with tf.cast returns a tf.Tensor of dtype float64 with effective float32 precision; see the example below.

The reason is at L1006: if the input is not a TF type, it is converted to the default type returned by convert_to_tensor, which for a float, as an example, is float32. Later on it is up-cast (L1019), but the precision is already lost at that point.

Solution: the comment suggests that providing dtype could convert things that are inconvertible. However, isn't this happening now already, and wouldn't that fail later on anyway in the cast at L1019? I do not see any downside of using dtype=dtype. If you agree, I can make the change and see if the tests still pass.

Standalone code to reproduce the issue

```python
import tensorflow as tf

oneplus = 1 + 1e-15  # this is 1.0 in float32 representation, but correctly represented in float64
oneplus_cast32 = tf.cast(oneplus, dtype=tf.float32)            # 1.0, as expected
oneplus_cast64 = tf.cast(oneplus, dtype=tf.float64)            # 1.0, but should be larger
oneplus_converted64 = tf.convert_to_tensor(oneplus, dtype=tf.float64)

# we can check that it's wrong
assert oneplus_cast32 == oneplus_cast64       # this must fail! (works, but should not)
assert oneplus_converted64 == oneplus_cast64  # this should work (fails, but should work)
```
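The precision loss described above is visible without TensorFlow: routing a value through float32 before up-casting destroys the extra mantissa bits, which is exactly what the convert_to_tensor detour inside tf.cast does. A NumPy illustration (of the numeric effect, not the TF code path itself):

```python
import numpy as np

# representable in float64, but rounds to exactly 1.0 in float32
oneplus = 1 + 1e-15

direct64 = np.float64(oneplus)            # straight to float64: precision kept
via32 = np.float64(np.float32(oneplus))   # detour through float32, like tf.cast

assert direct64 > 1.0   # float64 preserves the extra bit
assert via32 == 1.0     # the float32 detour silently truncated it
```

Up-casting afterwards cannot recover the bits, which is why the proposed fix of passing `dtype` straight to convert_to_tensor matters.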
tensorflow/tensorflow
WARNING:tensorflow: "Your input ran out of data; interrupting training" always occurs after 80% of the total epochs
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: source
TensorFlow version: 2.15
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: 3.10.12
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?

While I'm training a TensorFlow model, the training keeps getting interrupted at the epoch after completing 80% of the total number of epochs I specified to train for. For example, it will stop training at epoch 81 if I have set epochs to 100, with the warning:

WARNING:tensorflow: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches.

I calculate steps_per_epoch as len(train_size) // batch_size. I have also tried adjusting it by 1. Both of these continue to cause the error specified above. I have also tried leaving out steps_per_epoch, which causes the model to start training with the validation steps_per_epoch instead.

Standalone code to reproduce the issue

```shell
```

Relevant log output

```shell
Epoch 300: val_loss did not improve from 1.24497
149/149 - 13.87ms/step - loss: 0.3701 - val_loss: 1.9121
Epoch 301/376
149/149 - ETA: 0s - loss: 0.3454
Epoch 301: val_loss did not improve from 1.24497
149/149 - 13.85ms/step - loss: 0.3454 - val_loss: 3.6483
Epoch 302/376
149/149 - ETA: 0s - loss: 0.3932
Epoch 302: val_loss did not improve from 1.24497
149/149 - 13.85ms/step - loss: 0.3932 - val_loss: 1.7741
Epoch 303/376
122/149 - ETA: 1s - loss: 0.3708
WARNING:tensorflow: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 56024 batches). You may need to use the repeat() function when building your dataset.
```
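One plausible explanation for the consistent 80% pattern (a hypothesis, not confirmed by the report) is that steps_per_epoch is computed from the full dataset while a 20% validation split leaves only 80% of the batches for training, so a finite generator runs dry at exactly 80% of the requested epochs. The arithmetic, with hypothetical numbers not taken from the report:

```python
import math


def usable_epochs(total_train_batches, steps_per_epoch):
    """How many full epochs a finite (non-repeating) generator can serve."""
    return total_train_batches // steps_per_epoch


train_size, batch_size, epochs = 1000, 10, 100

# steps computed from the FULL dataset, as in the report
steps = math.ceil(train_size / batch_size)                  # 100

# with validation_split=0.2, only 80% of the batches feed training
available = (int(0.8 * train_size) // batch_size) * epochs  # 8000 batches total

print(usable_epochs(available, steps))  # 80 -> training stops at ~80% of epochs
```

If this is the cause, either compute steps_per_epoch from the post-split size or call `dataset.repeat()` so the input never runs out, as the warning itself suggests.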
tensorflow/tensorflow
Conv2D computes wrong results on Windows OS
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: tf 2.16
Custom code: Yes
OS platform and distribution: Windows 10
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?

On Windows, Conv2D generates a wrong output in some cases while performing correctly in others. This error does not occur on Linux, even with the same code. In the wrong-execution example below, you can tell that the result l(x) has a wrong shape. I noticed that an existing issue, #63860, points out a similar error in Conv3D. I guess Conv2D and Conv3D have similar problems, since they have the same parent class BaseConv.

Standalone code to reproduce the issue

```python
# This test case works fine on Linux while going wrong on Windows
from keras.layers import Conv2D
import numpy as np

x = np.random.rand(1, 2, 2, 1)
print(l(x).shape)
print(l.compute_output_shape(x.shape))
```

Relevant log output

```shell
TensorShape([1, 2, 2, 1])
(1, 0, 0, 1)
```
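For reference, the spatial output size of a convolution follows a standard formula, so a 2×2 input can never legitimately produce the 0-sized dimensions seen in the Windows output (a generic formula sketch, independent of the Keras implementation):

```python
def conv_output_dim(input_dim, kernel_size, stride=1, padding="valid"):
    """Standard convolution output-size formula for one spatial dimension."""
    if padding == "same":
        return -(-input_dim // stride)  # ceil(input_dim / stride)
    # "valid": positions where the kernel fully fits
    return (input_dim - kernel_size) // stride + 1


# any kernel that fits a 2-pixel dimension yields a positive output size
assert conv_output_dim(2, 1) == 2
assert conv_output_dim(2, 2) == 1
```

Whatever kernel size the layer `l` above uses, a correct compute_output_shape should agree with this formula on both operating systems.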
tensorflow/tensorflow
How to solve this error: AttributeError: module 'keras._tf_keras.keras.layers' has no attribute 'experimental'
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: tf 2.15.0
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?

Got this error while installing the Object Detection TensorFlow API. Command: `python object_detection/builders/model_builder_tf2_test.py`

Standalone code to reproduce the issue

```shell
2024-03-23 08:19:24.008197: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Traceback (most recent call last):
  File "/content/tf_crimedetection/models/research/object_detection/builders/model_builder_tf2_test.py", line 24, in <module>
    from object_detection.builders import model_builder
  File "/usr/local/lib/python3.10/dist-packages/object_detection/builders/model_builder.py", line 26, in <module>
    from object_detection.builders import hyperparams_builder
  File "/usr/local/lib/python3.10/dist-packages/object_detection/builders/hyperparams_builder.py", line 27, in <module>
    from object_detection.core import freezable_sync_batch_norm
  File "/usr/local/lib/python3.10/dist-packages/object_detection/core/freezable_sync_batch_norm.py", line 20, in <module>
    class FreezableSyncBatchNorm(tf.keras.layers.experimental.SyncBatchNormalization):
AttributeError: module 'keras._tf_keras.keras.layers' has no attribute 'experimental'
```

Relevant log output: No response
tensorflow/tensorflow
Segmentation fault (core dumped)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: 2.13.0
Custom code: Yes
OS platform and distribution: Ubuntu 22.04
Mobile device: No response
Python version: 3.10.12
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: RTX 4090
GPU model and memory: No response

Current behavior?

A similar issue is explained in this link. I made this new issue as @pkgoogle requested. I ran this TensorFlow example locally, but I get a segmentation fault (core dumped) after running this piece of the code: [image]

Standalone code to reproduce the issue

```shell
The problem is with running the post-training integer quantization code of the following example locally: #scrollto=fiwiwu3ghdkw
```

Relevant log output

```shell
'd' input is expected but not specified, continuing anyway. (warning)
2024-03-22 17:02:49.457966: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:364] Ignored output_format.
2024-03-22 17:02:49.457981: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:367] Ignored drop_control_dependency.
2024-03-22 17:02:49.458398: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /tmp/tmpc6h9p_cu
2024-03-22 17:02:49.470594: I tensorflow/cc/saved_model/reader.cc:91] Reading meta graph with tags { serve }
2024-03-22 17:02:49.470607: I tensorflow/cc/saved_model/reader.cc:132] Reading SavedModel debug info (if present) from: /tmp/tmpc6h9p_cu
2024-03-22 17:02:49.512533: I tensorflow/cc/saved_model/loader.cc:231] Restoring SavedModel bundle.
2024-03-22 17:02:49.615602: I tensorflow/cc/saved_model/loader.cc:215] Running initialization op on SavedModel bundle at path: /tmp/tmpc6h9p_cu
2024-03-22 17:02:49.679176: I tensorflow/cc/saved_model/loader.cc:314] SavedModel load for tags { serve }; Status: success: OK. Took 220778 microseconds.
2024-03-22 17:02:49.777449: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:255] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2024-03-22 17:02:49.968916: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2073] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TF op(s): Flex ops: FlexEmptyTensorList, FlexTensorListPushBack, FlexTensorListReserve, FlexTensorListSetItem, FlexTensorListStack.
Details (distinct signatures; the log lists 13 tf.EmptyTensorList and 13 tf.TensorListPushBack instances):
  tf.EmptyTensorList over element-shape tensors of shape 0xi32, 1xi32 and 2xi32
  tf.TensorListPushBack with element shapes [], [1], [2], [128], [384], [128,128], [128,384], [4,64], [4,128], [64,128], [64,384]
  tf.TensorListReserve(tensor<2xi32>, tensor<i32>) -> tensor<!tf_type.variant>
  tf.TensorListSetItem(..., tensor<4x128xf32>) {resize_if_index_out_of_bounds = false}
  tf.TensorListStack(..., tensor<2xi32>) {num_elements = 1 : i64}
See instructions.
2024-03-22 17:02:49.968963: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2138] Estimated count of arithmetic ops: 1.906 M ops, equivalently 0.953 M MACs
INFO: Created TensorFlow Lite delegate for select TF ops.
2024-03-22 17:02:50.038388: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:995] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at #L344-L355
(the NUMA-node line is repeated nine more times between 17:02:50.038506 and 17:02:50.043986)
2024-03-22 17:02:50.038725: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22058 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:01:00.0, compute capability: 8.9
INFO: TfLiteFlexDelegate delegate: 15 nodes delegated out of 227 nodes with 2 partitions.
2024-03-22 17:02:50.043986: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22058 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:01:00.0, compute capability: 8.9
fully_quantize: 0, inference_type: 6, input_inference_type: FLOAT32, output_inference_type: FLOAT32
Segmentation fault (core dumped)
```
tensorflow/tensorflow
tf.raw_ops.ApproximateEqual and tf.raw_ops.Erfinv fail to support a few data types, inconsistent with the documentation
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: 2.15.0
Custom code: Yes
OS platform and distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?

Hi, I want to report two issues. Both tf.raw_ops.ApproximateEqual and tf.raw_ops.Erfinv can only support float32, float64, and double, and fail on other data types with the error message: `NotFoundError: Could not find device for node: {{node xxx}} = xxx[T=DT_xxx]`. These behaviors are inconsistent with the related documentation of these two APIs.

Standalone code to reproduce the issue

```python
import tensorflow as tf
import numpy as np
import warnings
warnings.filterwarnings("ignore")

dtype_list = ["bfloat16", "bool", "complex128", "complex64", "double", "float16",
              "float32", "float64", "half", "int16", "int32", "int64", "int8",
              "uint16", "uint32", "uint64", "uint8"]
for dtype in dtype_list:
    x = tf.constant(np.random.randint(-50, 50), dtype=dtype)
    y = tf.constant(np.random.randint(-50, 50), dtype=dtype)
    try:
        out = tf.raw_ops.ApproximateEqual(x=x, y=y)
        print(f"ApproximateEqual succeeds on dtype {dtype}")
    except:
        print(f"ApproximateEqual fails on dtype {dtype}")
```

```python
import tensorflow as tf
import numpy as np
import warnings
warnings.filterwarnings("ignore")

dtype_list = ["bfloat16", "bool", "complex128", "complex64", "double", "float16",
              "float32", "float64", "half", "int16", "int32", "int64", "int8",
              "uint16", "uint32", "uint64", "uint8"]
for dtype in dtype_list:
    x = tf.constant(np.random.randint(-50, 50), dtype=dtype)
    try:
        out = tf.raw_ops.Erfinv(x=x)
        print(f"Erfinv succeeds on dtype {dtype}")
    except:
        print(f"Erfinv fails on dtype {dtype}")
```

Relevant log output

```shell
ApproximateEqual fails on dtype bfloat16
ApproximateEqual fails on dtype bool
ApproximateEqual fails on dtype complex128
ApproximateEqual fails on dtype complex64
ApproximateEqual succeeds on dtype double
ApproximateEqual fails on dtype float16
ApproximateEqual succeeds on dtype float32
ApproximateEqual succeeds on dtype float64
ApproximateEqual fails on dtype half
ApproximateEqual fails on dtype int16
ApproximateEqual fails on dtype int32
ApproximateEqual fails on dtype int64
ApproximateEqual fails on dtype int8
ApproximateEqual fails on dtype uint16
ApproximateEqual fails on dtype uint32
ApproximateEqual fails on dtype uint64
ApproximateEqual fails on dtype uint8
Erfinv fails on dtype bfloat16
Erfinv fails on dtype bool
Erfinv fails on dtype complex128
Erfinv fails on dtype complex64
Erfinv succeeds on dtype double
Erfinv fails on dtype float16
Erfinv succeeds on dtype float32
Erfinv succeeds on dtype float64
Erfinv fails on dtype half
Erfinv fails on dtype int16
Erfinv fails on dtype int32
Erfinv fails on dtype int64
Erfinv fails on dtype int8
Erfinv fails on dtype uint16
Erfinv fails on dtype uint32
Erfinv fails on dtype uint64
Erfinv fails on dtype uint8
```
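Until the kernels gain wider dtype coverage, one workaround is to promote inputs to a supported float type before the comparison. A NumPy stand-in showing the idea (a sketch of the workaround, not the TF kernel; the function name is my own):

```python
import numpy as np


def approximate_equal(x, y, tolerance=1e-5):
    """Element-wise |x - y| < tolerance after promoting to float64.

    Promotion makes integer and half-precision inputs work, mirroring
    how one would wrap tf.raw_ops.ApproximateEqual with a tf.cast to
    float32/float64 first. (Complex inputs still need separate handling.)
    """
    x64 = np.asarray(x, dtype=np.float64)
    y64 = np.asarray(y, dtype=np.float64)
    return np.abs(x64 - y64) < tolerance


print(approximate_equal(np.int8(3), np.int8(3)))            # True
print(approximate_equal(np.float16(1.0), np.float16(1.5)))  # False
```

The same cast-then-compute pattern applies to Erfinv: cast up, apply the op, cast the result back down.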
tensorflow/tensorflow
Unrecognized keyword arguments passed to Embedding
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source
TensorFlow version: v2.16.0-rc0-18-g5bc9d26649c 2.16.1
Custom code: Yes
OS platform and distribution: Windows 11
Mobile device: No response
Python version: 3.11.5
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?

OK, so if I head over to the TensorFlow documentation of the Embedding layer, as sample code they show me something like this:

```python
model = keras.Sequential()
model.add(keras.layers.Embedding(1000, 64, input_length=10))
```

But when I try to reproduce the same code as-is, or in another sequential model of mine, it always shows "Unrecognized keyword arguments passed to Embedding". Is the documentation not updated, has this thing been removed, or is it a bug? I have tried replacing it with some other keyword arguments I found over StackOverflow, but nothing worked for me.

Standalone code to reproduce the issue

```python
model = Sequential()
model.add(keras.layers.Embedding(vocab_size, embedding_vector_size, input_length=max_length))
model.add(Flatten())
model.add(Dense(1, activation="sigmoid"))
```

Relevant log output

```shell
      8 model.add(keras.layers.Embedding(vocab_size, embedding_vector_size, input_length=max_length))
      9 model.add(Flatten())
     10 model.add(Dense(1, activation="sigmoid"))

File c:\Users\charanjeet juneja\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\layers\core\embedding.py:81, in Embedding.__init__(self, input_dim, output_dim, embeddings_initializer, embeddings_regularizer, embeddings_constraint, mask_zero, lora_rank, **kwargs)
     70 def __init__(
     71     self,
     72     input_dim,
   (...)
     79     **kwargs,
     80 ):
---> 81     super().__init__(**kwargs)
     82     self.input_dim = input_dim
     83     self.output_dim = output_dim

File c:\Users\charanjeet juneja\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\layers\layer.py:265, in Layer.__init__(self, activity_regularizer, trainable, dtype, autocast, name, **kwargs)
    263     self._input_shape_arg = input_shape_arg
    264 if kwargs:
...
ValueError: Unrecognized keyword arguments passed to Embedding: {'input_length': 10}
```
tensorflow/tensorflow
Enabling XLA in TensorFlow 2.16 causes a memory leak
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary
TensorFlow version: tensorflow 2.16.1, tensorflow-and-cuda 2.16.1
Custom code: Yes
OS platform and distribution: Linux Ubuntu 22.04
Mobile device: No response
Python version: 3.12
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: 12.3 / 8.9.7
GPU model and memory: RTX 3060 Ti, RTX 3060

Current behavior?

Executing the tf.keras.Model.fit method with XLA enabled causes a memory leak. Note that XLA seems to be enabled by default since TensorFlow 2.16.1; setting tf.keras.Model.jit_compile to False disables XLA and eliminates the memory leak. I think updating to TensorFlow 2.16 will cause memory leaks in almost all existing programs. I think you need to fix this problem as soon as possible, or alert people about it in documents such as the release notes or the README.

Standalone code to reproduce the issue

```python
import gc

import numpy as np
import tensorflow as tf


def call(epochs: int):
    x = np.arange(-1, 1, 0.01)
    y = 0.8 * x + 0.2
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation=None)])
    model.compile("sgd", "mse")
    model.jit_compile = False  # disabling XLA eliminates the leak
    model.build(input_shape=(0, 1))
    model.fit(x, y, epochs=epochs)
    del model


epoch_num = 10000
for i in range(epoch_num):
    call(1)
    tf.keras.backend.clear_session()
    gc.collect()
```

Relevant log output: No response
tensorflow/tensorflow
LossScaleOptimizer.get_scaled_loss does not exist
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: binary. TensorFlow version: tf 2.16. Custom code: no. OS: Linux Ubuntu 20.04 on Windows 10 WSL. Python: 3.10.13. Remaining fields: no response.

Current behavior: As the title says, the `LossScaleOptimizer.get_scaled_loss` method does not exist. Up to TF 2.15 the method definitely existed. Where is it now, and how do we access `get_scaled_loss`?

Standalone code to reproduce the issue:

```python
import tensorflow as tf

optimizer = tf.optimizers.Adam()
wrapped_optimizer = tf.keras.mixed_precision.LossScaleOptimizer(optimizer)
print(f"get_scaled_loss exists: {'get_scaled_loss' in dir(wrapped_optimizer)}")
```

Relevant log output:

```
get_scaled_loss exists: False
```
tensorflow/tensorflow
Silent overflow in tf.range leads to incorrect results; here is a possible fix
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: 2.15.0. Custom code: yes. Remaining fields: no response.

Current behavior: Following the documentation, tf.range should produce results consistent with np.arange. However, given the inputs below, tf.range outputs `tf.Tensor([1136033460], shape=(1,), dtype=int32)`, which is incorrect. The correct result is np.arange's output: `[1136033460, -713794229]`.

I did some analysis and found that the root cause lies here (l100): when computing `limit - start`, the actual value overflows. To solve this we can change the original code to apply the division first:

- original: `Eigen::numext::ceil(Eigen::numext::abs((limit - start) / delta))`
- new: `Eigen::numext::ceil(Eigen::numext::abs(limit / delta - start / delta))`

Standalone code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

start = 1136033460
end = -2110457150
step = -1849827689

out = tf.range(start, end, step)
print(out)
out_np = np.arange(start, end, step)
print(out_np)
```

Relevant log output: no response.
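The overflow described in this report can be reproduced in pure Python, without TensorFlow. The `wrap_i32` helper below is not TF code; it is a hypothetical stand-in that emulates C-style int32 wraparound so the two orders of operations can be compared:

```python
import math

# Values from the report above
start, limit, delta = 1136033460, -2110457150, -1849827689

def wrap_i32(x):
    """Emulate C-style int32 wraparound (helper for illustration only)."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

# Buggy order: subtract first; (limit - start) = -3246490610 overflows int32
buggy_size = math.ceil(abs(wrap_i32(limit - start)) / abs(delta))

# Proposed order: divide first, so no intermediate exceeds the int32 range
fixed_size = math.ceil(abs(limit / delta - start / delta))

print(buggy_size, fixed_size)  # 1 2
```

The division-first count agrees with Python's own `range`: `len(range(start, limit, delta))` is 2, matching np.arange's two-element output.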
tensorflow/tensorflow
Unexpected steps_per_epoch behavior in model.fit
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: 2.15.0. Custom code: yes. OS: Windows 11 x64. Python: 3.10.12. Remaining fields: no response.

Current behavior: According to the documentation of fit, under the args of the fit method, steps_per_epoch can be an integer or None, and "the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined". When I run model.fit without steps_per_epoch and without validation_steps, the model trains all epochs without error, but uses a very small number of steps per epoch. In my case train_size = 714, valid_size = 89, batch_size = 4, so steps_per_epoch should be 178 (train_size / batch_size) and validation_steps should be 22 (valid_size / batch_size), but it trains only 23 steps each epoch.

Questions:
1. Why does it choose 23?
2. How does it use the data batches: does it select only the first 23 batches every epoch, or does it shuffle randomly?
3. Is all the data being trained on?
4. Using steps_per_epoch and validation_steps explicitly runs into the error in issue 2196808063.

Standalone code to reproduce the issue:

```python
epochs = 300
batch_size = 4
train_size = trainimagetotaldata
valid_size = validationimagetotaldata
steps_per_epoch = train_size // batch_size - 1
validation_steps = valid_size // batch_size - 1

hist = model.fit(
    x=[trainnumericdata, trainimagessbdata, trainimagescbdata, trainimageswbdata,
       trainimageshbdata, trainimageslldata, trainimageslbdata,
       trainimagesupleftabdata, trainimagesuprightabdata,
       trainimagesaleftldata, trainimagesarightldata],  # x: training images
    y=trainallregressiondata,  # y: severity target regression values
    epochs=epochs,
    steps_per_epoch=steps_per_epoch,
    validation_data=([validationnumericdata, validationimagessbdata, validationimagescbdata,
                      validationimageswbdata, validationimageshbdata, validationimageslldata,
                      validationimageslbdata, validationimagesupleftabdata,
                      validationimagesuprightabdata, validationimagesaleftldata,
                      validationimagesarightldata],
                     validationallregressiondata),
    validation_steps=validation_steps,
    callbacks=[mc, tensorboard_callback])  # callbacks for both checkpoint and TensorBoard
```

Relevant log output:

```
train_size: 714, valid_size: 89, batch_size: 4
steps_per_epoch: 177, validation_steps: 21
steps per epoch for training is 177
INFO: training model
Epoch 1/10
23/23 [==============================] - ETA: 0s - loss: 5.9347
Epoch 1: val_loss improved from inf to 5.02212, saving model to /content/drive/MyDrive/fashionbody/regression/trainingrun/run4/checkpoints/01-5.02.tf
23/23 [==============================] - 585s 8s/step - loss: 5.9347 - val_loss: 5.0221
Epoch 2/10
23/23 [==============================] - ETA: 0s - loss: 4.7514
Epoch 2: val_loss improved from 5.02212 to 5.01143, saving model to /content/drive/MyDrive/fashionbody/regression/trainingrun/run4/checkpoints/02-5.01.tf
```
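A pure-Python sketch of the batch-count arithmetic in this report (independent of TensorFlow): the floor-minus-one formula used in the snippet above drops samples, whereas covering every sample requires a ceiling. Note, as an arithmetic observation only, that `ceil(valid_size / batch_size)` happens to be 23, the same step count that appears in the training log:

```python
import math

train_size, valid_size, batch_size = 714, 89, 4

# Formula used in the report's snippet: floor division minus one
steps_per_epoch = train_size // batch_size - 1     # 177, skips the last samples
validation_steps = valid_size // batch_size - 1    # 21

# Covering the whole dataset requires a ceiling instead
full_train_steps = math.ceil(train_size / batch_size)  # 179
full_valid_steps = math.ceil(valid_size / batch_size)  # 23

print(steps_per_epoch, validation_steps, full_train_steps, full_valid_steps)
```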
tensorflow/tensorflow
Working code breaks after deploying to a new installation: ValueError: When using `stateful=True` in a RNN, the batch size must be static. Found dynamic batch size: sequence.shape=(None, xx, xx)
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: 2.16.1. Custom code: no. OS: Ubuntu 22.04. Mobile device: no. Python: 3.10.12. CUDA/cuDNN: 12.4. Remaining fields: no response.

Current behavior: The existing code breaks with this error: `ValueError: When using stateful=True in a RNN, the batch size must be static. Found dynamic batch size: sequence.shape=(None, xx, xx)`. But when changing the shape to (xx, xx, xx) I get: `ValueError: Input 0 of layer "lstm" is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 1, xx, xx)` if a static shape is passed. Not sure why an additional dimension is added; I need help resolving this. I tried both the release version and the nightly version and the issue persists. Also, the inputs are 3D tensors produced by the `timeseries_dataset_from_array` function. Are there any examples of training a stateful LSTM using these 3D tensors? Not sure if this is really a bug; I am stuck.

Standalone code to reproduce the issue: I don't have a standalone example.

Relevant log output: no response.
tensorflow/tensorflow
Upgrading to TensorFlow 2.15.1 from 2.14.1 produces pylint bug: E1101: Module 'tensorflow' has no 'keras' member
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: 2.15.1. Custom code: no. OS: Ubuntu 21.04. Python: 3.9.18. Remaining fields: no response.

Current behavior: When importing Keras I see the following pylint error:

```python
import tensorflow as tf
keras_model = tf.keras.Sequential()
# E1101: Module 'tensorflow' has no 'keras' member (no-member)
```

Expected behaviour: no linting errors raised. Tested with pylint versions 2.7.15 and latest 3.1.0. This looks like it is down to the following lines being removed from the `__init__.py` file:

```python
if typing.TYPE_CHECKING:
    from keras.api._v2 import keras
```

A temporary workaround is to use the `from keras.api._v2 import keras` import in the files that require Keras.

Standalone code to reproduce the issue:

```python
# test file for tf issue
import tensorflow as tf

keras_model = tf.keras.Sequential([
    tf.keras.layers.LeakyReLU(),
])
```

Run `pylint` on the file.

Relevant log output:

```
....py:4:14: E1101: Module 'tensorflow' has no 'keras' member (no-member)
....py:6:8: E1101: Module 'tensorflow' has no 'keras' member (no-member)
```
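Not part of the original report, but a common pylint-side workaround (assuming pylint's standard TYPECHECK options, which exist in both 2.x and 3.x) is to exempt the dynamically populated `tensorflow` module from member checks until the upstream typing hint is restored:

```ini
# .pylintrc -- hypothetical workaround, not from the report:
# skip no-member checks on the lazily/dynamically populated tensorflow module
[TYPECHECK]
ignored-modules=tensorflow
```

This silences E1101 for everything under `tensorflow.*`, so it trades real member checking for a clean lint run; scoping it per-file with an inline `# pylint: disable=no-member` is a narrower alternative.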
tensorflow/tensorflow
Abort (core dumped) in tf.math.unsorted_segment_mean
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: no. Source: binary. TensorFlow version: tf 2.14. Custom code: yes. OS: Ubuntu 20.04. Python: 3.9. GCC: 7.5.0. CUDA/cuDNN: 12.0. Remaining fields: no response.

Current behavior: Under specific inputs, tf.math.unsorted_segment_mean encounters Abort (core dumped).

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

data = tf.constant(value=np.random.randint(0, 100, size=[3, 2]), shape=[3, 2], dtype=tf.int32)
segment_ids = [3]
num_segments = 3597855484
tf.math.unsorted_segment_mean(data=data, segment_ids=segment_ids, num_segments=num_segments)
```

Relevant log output:

```
2024-03-20 06:38:50.497225: F tensorflow/core/util/gpu_launch_config.h:129] Check failed: work_element_count > 0 (0 vs. -697111812)
Aborted (core dumped)
```
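A pure-Python sketch (the `wrap_i32` helper is mine, not TF code) of why this `num_segments` triggers the check failure: 3597855484 does not fit in a signed int32, and C-style wraparound yields exactly the -697111812 seen in the log, i.e. a negative element count:

```python
def wrap_i32(x):
    """Emulate C-style int32 wraparound (helper for illustration only)."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

num_segments = 3597855484
print(num_segments > 2**31 - 1)      # True: exceeds INT32_MAX
print(wrap_i32(num_segments))        # -697111812, matching the log
```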
tensorflow/tensorflow
Abort (core dumped) in tf.raw_ops.SparseMatrixZeros
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: no. Source: binary. TensorFlow version: tf 2.13. Custom code: yes. OS: Ubuntu 20.04. Python: 3.8. GCC: 7.5.0. CUDA/cuDNN: 12.0. Remaining fields: no response.

Current behavior: Under specific inputs, tf.raw_ops.SparseMatrixZeros encounters Abort (core dumped), even though the parameters dense_shape and type meet the requirements in the API documentation.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

dense_shape = tf.constant(value=np.random.randint(0, 100, size=[1, 1]), shape=[1, 1], dtype=tf.int64)
type = tf.float64
name = None
tf.raw_ops.SparseMatrixZeros(dense_shape=dense_shape, type=type, name=name)
```

Relevant log output:

```
2024-03-19 02:18:57.223847: F tensorflow/core/framework/tensor_shape.cc:45] Check failed: NDIMS == dims() (1 vs. 2) Asking for tensor of 1 dimensions from a tensor of 2 dimensions
Aborted (core dumped)
```
tensorflow/tensorflow
Conv3D operation error on the Windows platform
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: tf 2.15.0, tf 2.16.0, tf 2.16.1. Custom code: yes. OS: Windows 10. Python: 3.9. Remaining fields: no response.

Current behavior: I found an output-shape inconsistency in the Conv3D layer:

```python
from tensorflow.keras.layers import Conv3D
import numpy as np

x = np.random.rand(3, 5, 5, 5, 4)
l = Conv3D(5, (2, 2, 3), (1, 1, 1), 'valid', 'channels_first', (2, 2, 2), 1)
print(l(x).shape)
print(l.compute_output_shape(x.shape))
```

The output is:

```
(3, 5, 5, 5, 4)
(3, 5, 3, 3, 0)
```

However, I failed to reproduce it on Google Colab. After checking the packages and re-running it on a Linux machine and another Windows machine, I found this bug may come from the package tensorflow-intel, which exists only on the Windows platform. Could you please check it?

Standalone code to reproduce the issue: see the snippet above.

Relevant log output:

```
(3, 5, 5, 5, 4)
(3, 5, 3, 3, 0)
```
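As a cross-check, the standard 'valid' convolution formula with dilation (a pure-Python sketch, independent of TensorFlow) reproduces the (3, 3, 0) spatial dims that `compute_output_shape` reports for the channels_first input (3, 5, 5, 5, 4) with kernel (2, 2, 3) and dilation (2, 2, 2), which suggests the eager call returning the input shape unchanged is the inconsistent side:

```python
def conv_dim(n, k, s=1, d=1):
    """Output length of a 'valid' convolution along one axis."""
    k_eff = (k - 1) * d + 1  # dilated (effective) kernel size
    return (n - k_eff) // s + 1

spatial = (5, 5, 4)   # spatial dims of the channels_first input
kernel = (2, 2, 3)
out = tuple(conv_dim(n, k, s=1, d=2) for n, k in zip(spatial, kernel))
print(out)  # (3, 3, 0)
```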
tensorflow/tensorflow
ValueError: Layer 'dense' expected 1 input(s). Received 2 instead, on tf.keras.models.load_model
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: no. Source: source. TensorFlow version: v2.16.0-rc0-18-g5bc9d26649c 2.16.1. Custom code: yes. OS: Windows 11 (KB5035853). Python: 3.11.8. Remaining fields: no response.

Current behavior: I want to load a saved model directly after training, but when it executes tf.keras.models.load_model, it fails with `ValueError: Layer 'dense' expected 1 input(s). Received 2 instead.`

Standalone code to reproduce the issue:

```python
base_model = tf.keras.applications.ResNet50(include_top=False, weights='imagenet')
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
output_layer = tf.keras.layers.Dense(1)

model = tf.keras.Sequential([base_model, global_average_layer, output_layer])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_dataset, epochs=epochs, validation_data=val_dataset)
loss, accuracy = model.evaluate(val_dataset)

# save the model
model.save('model_v1.keras')
model.summary()

n_model = tf.keras.models.load_model('model_v1.keras')
n_model.summary()
```

Relevant log output:

```
Model: "sequential"
Layer (type)                            Output Shape              Param #
resnet50 (Functional)                   (None, 12, 12, 2048)      23,587,712
global_average_pooling2d
 (GlobalAveragePooling2D)               (None, 2048)              0
dense (Dense)                           (None, 1)                 2,049
Total params: 70,663,045 (269.56 MB)
Trainable params: 23,536,641 (89.79 MB)
Non-trainable params: 53,120 (207.50 KB)
Optimizer params: 47,073,284 (179.57 MB)

Traceback (most recent call last):
  File "...\train.py", line 63, in <module>
    n_model = tf.keras.models.load_model('model_v1.keras')
  File "...\site-packages\keras\src\saving\saving_api.py", line 176, in load_model
    return saving_lib.load_model(...)
  File "...\site-packages\keras\src\saving\saving_lib.py", line 155, in load_model
    model = deserialize_keras_object(...)
  File "...\site-packages\keras\src\saving\serialization_lib.py", line 711, in deserialize_keras_object
    instance = cls.from_config(inner_config)
  File "...\site-packages\keras\src\models\sequential.py", line 336, in from_config
    model.add(layer)
  File "...\site-packages\keras\src\models\sequential.py", line 117, in add
    self._maybe_rebuild()
  File "...\site-packages\keras\src\models\sequential.py", line 136, in _maybe_rebuild
    self.build(input_shape)
  File "...\site-packages\keras\src\layers\layer.py", line 224, in build_wrapper
    original_build_method(*args, **kwargs)
  File "...\site-packages\keras\src\models\sequential.py", line 177, in build
    x = layer(x)
  File "...\site-packages\keras\src\utils\traceback_utils.py", line 123, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "...\site-packages\keras\src\layers\input_spec.py", line 156, in assert_input_compatibility
    raise ValueError(...)
ValueError: Layer 'dense' expected 1 input(s). Received 2 instead.
```
tensorflow/tensorflow
Adding a tensorflow_hub KerasLayer to a Sequential model raises ValueError
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: 2.16.1. Custom code: no. OS: Ubuntu 22.04.3 LTS. Python: 3.11.0rc1. CUDA/cuDNN: 12.3.0. Remaining fields: no response.

Current behavior: I can execute the following code without any issue in TensorFlow 2.15.0 and tensorflow_hub 1.16.1. However, when I upgrade the TensorFlow version to 2.16.0 or above, I encounter an error stating that a KerasLayer cannot be added to the Sequential model.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import tensorflow_hub as hub

image_size = 224
url = ...

model = tf.keras.Sequential([
    hub.KerasLayer(url, input_shape=(image_size, image_size, 3)),
])
```

Relevant log output:

```
ValueError                                Traceback (most recent call last)
Cell In[29], line 1
----> 1 model = tf.keras.Sequential([
      2     feature_extractor,
      3     tf.keras.layers.Dense(2, activation='softmax')
      4 ])
      6 model.build([None, image_size, image_size, 3])
      7 model.summary()

File /usr/local/lib/python3.10/dist-packages/keras/src/models/sequential.py:70, in Sequential.__init__(self, layers, trainable, name)
     68 if layers:
     69     for layer in layers:
---> 70         self.add(layer, rebuild=False)
     71     self._maybe_rebuild()

File /usr/local/lib/python3.10/dist-packages/keras/src/models/sequential.py:92, in Sequential.add(self, layer, rebuild)
     90     layer = origin_layer
     91 if not isinstance(layer, Layer):
---> 92     raise ValueError(
     93         "Only instances of `keras.Layer` can be "
     94         f"added to a Sequential model. Received: {layer} "
     95         f"(of type {type(layer)})"
     96     )
     97 if not self._is_layer_name_unique(layer):
    (...)
    101     "The name of a layer in this model. Update the `name` argument "
    102     "to pass a unique name."
    103 )

ValueError: Only instances of `keras.Layer` can be added to a Sequential model. Received: ... (of type ...)
```
tensorflow/tensorflow
tf.math.angle(nan) returns inconsistent results for dtypes float64 and complex128
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: binary. TensorFlow version: tf 2.15.0. Custom code: yes. OS: Linux 5.14.0-362.18.1.el9_3.x86_64 with glibc 2.34 (AlmaLinux 9). Python: 3.9. Remaining fields: no response.

Current behavior: tf.math.angle gives inconsistent output on NaN values: if the tensor dtype is complex128 the result is NaN, while for dtype float64 it is 0. Although the docs state that "for any real input the output will be zero", I would like to make some points:

1. It does not make sense to end up with a regular value when calculating on NaN.
2. The nature of NaN should not change regardless of the dtype of the tensor it sits in.
3. Such behavior may let NaN errors escape, causing trouble in debugging.

Expected behavior: the result is NaN when the input is NaN.

Standalone code to reproduce the issue:

```python
input = tf.constant([np.nan], dtype=tf.float64)
out = tf.math.angle(input)
# tf.Tensor([0.], shape=(1,), dtype=float64)

input = tf.constant([np.nan], dtype=tf.complex128)
out = tf.math.angle(input)
# tf.Tensor([nan], shape=(1,), dtype=float64)
```

Relevant log output: no response.
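As a cross-check of point 2 (assuming NumPy's semantics as a reference, which is only an editorial comparison, not something the report cites), `np.angle` propagates NaN for both real and complex inputs, because it is computed as `arctan2(imag, real)` and arctan2 returns NaN whenever either argument is NaN:

```python
import math
import numpy as np

real_angle = np.angle(np.nan)               # arctan2(0.0, nan) -> nan
complex_angle = np.angle(complex(np.nan, 1.0))  # arctan2(1.0, nan) -> nan
print(real_angle, complex_angle)
```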
tensorflow/tensorflow
tf.keras.layers.PReLU outputs NaN on positive inputs
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: 2.15.0. Custom code: yes. Remaining fields: no response.

Current behavior: Following the documentation, tf.keras.layers.PReLU should output the input whenever the input > 0, regardless of the weight. However, it outputs NaN when the weight is infinity even though the input > 0. I think the issue is located here (l164), where the calculation inf * 0 happens. According to the documentation, I think it would be better to directly return the input if input > 0. I also quickly checked PyTorch's implementation and found that PyTorch uses the following code for PReLU: `input > scalar_t(0) ? input : weight * input` (l1380).

Standalone code to reproduce the issue:

```python
import numpy as np
import tensorflow as tf

input = tf.constant([1.1234], dtype='float32')
weight = tf.constant(np.random.randn(1) * np.inf, dtype='float32')
out = tf.keras.layers.PReLU(weights=[weight])(input)
print(out)
```

Relevant log output: no response.
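The failure mode can be illustrated in pure Python, without TensorFlow. Below, a hedged sketch contrasts a split formulation of PReLU (of the kind the report points at, where `inf * 0` arises) with the branching formulation PyTorch uses; the variable names are mine:

```python
import math
import numpy as np

x = np.float32(1.1234)      # positive input
alpha = np.float32(np.inf)  # degenerate weight

# Split formulation: relu(x) + alpha * (-relu(-x)).
# For x > 0 the negative part is alpha * 0, and inf * 0 is nan,
# which then poisons the sum.
pos = max(x, np.float32(0.0))
neg = alpha * -max(-x, np.float32(0.0))   # inf * -0.0 -> nan
split = pos + neg

# Branching formulation (as in PyTorch): x if x > 0 else alpha * x
branch = x if x > 0 else alpha * x

print(split, branch)  # nan 1.1234
```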
tensorflow/tensorflow
predict crashes if the last batch is smaller than batch_size
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: no. Source: source. TensorFlow version: tf 2.16.1. Custom code: yes. OS: Linux Ubuntu 23.04. Python: 3.11. CUDA/cuDNN: CUDA 12.4, cuDNN 8.9. GPU: RTX 4090. Remaining fields: no response.

Current behavior: When sending in a numpy array which is not exactly divisible by the batch size (e.g. the length of the numpy array is 100 and the batch size is 8), inference crashes in the function `take_along_axis`. We found that the batch-size axis is None instead of 8 for the last batch.

Standalone code to reproduce the issue:

```python
image_patches = np.array([self.prepare_image(tile_path) for tile_path in tile_paths])
y_pred = model.predict(image_patches, batch_size=8)
```

Relevant log output:

```
228/229 [==============================] - 0s 74ms/step
2024-03-15 16:02:37.436169: W tensorflow/core/framework/op_kernel.cc:1839] OP_REQUIRES failed at bcast_ops.cc:52 : INVALID_ARGUMENT: Input 0 to node `non_max_suppression_1/.../BroadcastArgs` with op BroadcastArgs must be a compile-time constant. XLA compilation requires that operator arguments that represent shapes or dimensions be evaluated to concrete values at compile time. This error means that a shape or dimension argument could not be evaluated at compile time, usually because the value of the argument depends on a parameter to the computation, on a variable, or on a stateful operation such as a random number generator.

Stack trace for op definition:
  File "main_inferencer.py", line 231, in <module>
  File "utils/exception_utils/inspector.py", line 317, in wrapper
  File "main_inferencer.py", line 130, in main
  File "src/e_inference/src/geotif_inferencer.py", line 59, in main
  File "src/e_inference/src/geotif_inferencer.py", line 168, in detect_image
  File "venv/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py", line 118, in error_handler
  File "venv/lib/python3.11/site-packages/keras/src/backend/tensorflow/trainer.py", line 513, in predict
  File "venv/lib/python3.11/site-packages/keras/src/backend/tensorflow/trainer.py", line 212, in one_step_on_data_distributed
  File "venv/lib/python3.11/site-packages/keras/src/backend/tensorflow/trainer.py", line 201, in one_step_on_data
  File "venv/lib/python3.11/site-packages/keras_cv/src/models/object_detection/yolo_v8/yolo_v8_detector.py", line 616, in predict_step
  File "venv/lib/python3.11/site-packages/keras_cv/src/models/object_detection/yolo_v8/yolo_v8_detector.py", line 609, in decode_predictions
  File "venv/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py", line 118, in error_handler
  File "venv/lib/python3.11/site-packages/keras/src/layers/layer.py", line 816, in __call__
  File "venv/lib/python3.11/site-packages/keras/src/ops/operation.py", line 42, in __call__
  File "venv/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py", line 157, in error_handler
  File "venv/lib/python3.11/site-packages/keras_cv/src/layers/object_detection/non_max_suppression.py", line 143, in call
  File "venv/lib/python3.11/site-packages/keras/src/ops/numpy.py", line 4991, in take_along_axis
  File "venv/lib/python3.11/site-packages/keras/src/backend/tensorflow/numpy.py", line 1505, in take_along_axis
```
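The shape mismatch behind this report is plain batching arithmetic; a small pure-Python sketch (variable names are mine) shows why the final batch cannot share a static batch dimension of 8 when the array length is 100:

```python
import math

n, batch_size = 100, 8

num_batches = math.ceil(n / batch_size)            # 13 batches in total
last_batch = n - (num_batches - 1) * batch_size    # final partial batch of 4

print(num_batches, last_batch)  # 13 4
```

Because the last batch has 4 elements rather than 8, the batch axis must be treated as dynamic (None), which conflicts with XLA's requirement for compile-time-constant shapes in the NMS path above.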
tensorflow/tensorflow
Unable to retrieve BLAS factory in TensorFlow 2.16.1
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: no. Source: binary. TensorFlow version: 2.16.1. Custom code: yes. OS: Linux Ubuntu 20.04. CUDA/cuDNN: CUDA 12.2.2, cuDNN 8. GPU: NVIDIA T4. Remaining fields: no response.

Current behavior: We tried to upgrade the current 2.13.0 libtensorflow binary in our Golang application to the latest 2.16.1. However, the new libtensorflow does not work and starts to produce errors during model inference:

```
2024-03-14 20:31:02.259695: E external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:837] Unable to retrieve BLAS factory: NOT_FOUND: No suitable BLAS plugin registered. Have you linked in a blas_plugin-providing plugin?
```

The only changes we made were to replace the Google-distributed libtensorflow 2.13.0 with libtensorflow 2.16.1 from here (issuecomment-1991794920) and to change the base Docker image:

```diff
-ARG CUDA_IMAGE="nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu20.04"
+ARG CUDA_IMAGE="nvcr.io/nvidia/cuda:12.2.2-cudnn8-devel-ubuntu20.04"
```

Standalone code to reproduce the issue: see above.

Relevant log output:

```
2024-03-14 20:31:02.259695: E external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:837] Unable to retrieve BLAS factory: NOT_FOUND: No suitable BLAS plugin registered. Have you linked in a blas_plugin-providing plugin?
2024-03-14 20:31:02.259770: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: INTERNAL: No BLAS support for stream
  [[{{function_node __inference_wrapped_model_57676}}{{node model/cross_0/dense/MatMul}}]]
2024-03-14 20:31:02.259787: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: INTERNAL: No BLAS support for stream
  [[{{function_node __inference_wrapped_model_57676}}{{node model/cross_0/dense/MatMul}}]]
  [[StatefulPartitionedCall/_41]] [[StatefulPartitionedCall/StatefulPartitionedCall/StatefulPartitionedCall/StatefulPartitionedCall_2/StatefulPartitionedCall/model/mean/Sigmoid/_55]]
2024-03-14 20:31:02.259876: I tensorflow/core/framework/local_rendezvous.cc:422] Local rendezvous recv item cancelled. Key hash: 14129393403982148508
2024-03-14 20:31:02.259917: W tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: ABORTED: Stopping remaining executors.
level: error, model_name: cpi_model, model_version: 20240314181330, error: failed to infer request during warmup: failed to run: failed to run session: 2 root error(s) found.
  (0) INTERNAL: No BLAS support for stream
  (1) INTERNAL: No BLAS support for stream
0 successful operations. 0 derived errors ignored.
time: 2024-03-14T20:31:02Z, message: failed to warm up model
level: info, model_name: cpi_model, model_version: 20240314181330, time: 2024-03-14T20:31:02Z, message: model warmup finished
```
tensorflow/tensorflow
Aborted can be triggered in tf.raw_ops.TensorListConcat
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: binary. TensorFlow version: tf 2.15. Custom code: yes. OS: Ubuntu 20.04. Python: 3.9. Remaining fields: no response.

Current behavior: The following code sometimes runs normally when calling tf.raw_ops.TensorListConcat, but sometimes encounters an Aborted error. To reproduce the issue it may be necessary to execute it several times.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

input_data_1 = tf.constant([[1, 2, 3], [4, 5, 6]])
input_data_2 = tf.constant([[7, 8, 9], [10, 11, 12]])

input_handle = tf.raw_ops.TensorListReserve(element_dtype=tf.int32, element_shape=[2, 3], num_elements=2)
input_handle = tf.raw_ops.TensorListSetItem(input_handle=input_handle, index=0, item=input_data_1)
input_handle = tf.raw_ops.TensorListSetItem(input_handle=input_handle, index=1, item=input_data_2)

concatenated_data, lengths = tf.raw_ops.TensorListConcat(
    input_handle=input_handle, element_dtype=tf.int32, element_shape=[6])
print(concatenated_data)
```

Relevant log output:

```
tf.Tensor([1 2 3 4], shape=(4,), dtype=int32)
malloc_consolidate(): invalid chunk size
Aborted (core dumped)
```
tensorflow/tensorflow
A check failure can be triggered in tf.raw_ops.StatefulPartitionedCall
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: binary. TensorFlow version: tf 2.15. Custom code: yes. OS: Ubuntu 20.04. Python: 3.9. Remaining fields: no response.

Current behavior: In the latest version of TensorFlow, the following code can trigger a crash in tf.raw_ops.StatefulPartitionedCall due to a check failure.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

input_data = tf.constant([[1, 2, 3], [4, 5, 6]])

@tf.function
def my_function(input):
    return tf.reduce_sum(input, axis=1)

callable_function = tf.function(my_function).get_concrete_function(
    tf.TensorSpec(shape=None, dtype=tf.int32))
output = tf.raw_ops.StatefulPartitionedCall(args=[input_data], Tout=None, f=callable_function)
print(output)
```

Relevant log output:

```
2024-03-14 15:58:52.804195: F tensorflow/core/framework/op_kernel.cc:989] Check failed: index < output_size() (0 vs. 0)
Aborted (core dumped)
```
tensorflow/tensorflow
Could not load library cudnn_cnn_infer64_8.dll. Error code 126
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: source. TensorFlow version: 2.10.0. Custom code: no. OS: Windows 10. Python: 3.10. CUDA/cuDNN: 11.2 / 8.1 (I think). GPU: NVIDIA T2000, 4 GB. Remaining fields: no response.

Current behavior: I hope this problem gets resolved soon. I am unable to use my GPU for training, as it remains at 0% usage. When I try another version, only my CPU is used, which takes days and causes my computer (a laptop) to overheat.

Error:

```
I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8600
Could not load library cudnn_cnn_infer64_8.dll. Error code 126
Please make sure cudnn_cnn_infer64_8.dll is in your library path!
```

[screenshot: output image attached in the original report]

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import subprocess
from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())

def get_gpu_info():
    try:
        result = subprocess.run(['nvidia-smi'], stdout=subprocess.PIPE)
        return result.stdout.decode('utf-8')
    except FileNotFoundError:
        return "nvidia-smi not found; make sure NVIDIA drivers are installed"

from keras import layers, models, datasets
import time

(train_images, train_labels), _ = datasets.mnist.load_data()
train_images = train_images.reshape(-1, 28, 28, 1).astype('float32') / 255.0

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

start_time = time.time()
end_time = start_time + 30
while time.time() < end_time:
    gpu_info = get_gpu_info()
    print(gpu_info)
    model.fit(train_images, train_labels, epochs=1, batch_size=64, verbose=0)
    loss, accuracy = model.evaluate(train_images, train_labels, verbose=0)
    print(f"loss: {loss}, accuracy: {accuracy}")
```

Relevant log output:

```
PS D:\python> D:\python\ml\Scripts\python.exe D:\python\test.py
2024-03-14 11:50:53.845372: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-14 11:50:55.378833: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /device:GPU:0 with 2128 MB memory: -> device: 0, name: Quadro T2000, pci bus id: 0000:01:00.0, compute capability: 7.5
[name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 12036372220610172862 xla_global_id: -1,
 name: "/device:GPU:0" device_type: "GPU" memory_limit: 2232156160 locality { bus_id: 1 links { } } incarnation: 1063671899388078998 physical_device_desc: "device: 0, name: Quadro T2000, pci bus id: 0000:01:00.0, compute capability: 7.5" xla_global_id: 416903419]
2024-03-14 11:50:55.686954: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 2128 MB memory: -> device: 0, name: Quadro T2000, pci bus id: 0000:01:00.0, compute capability: 7.5
Thu Mar 14 11:50:55 2024
2024-03-14 11:50:57.159136: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8600
Could not load library cudnn_cnn_infer64_8.dll. Error code 126
Please make sure cudnn_cnn_infer64_8.dll is in your library path!
PS D:\python>
```
tensorflow/tensorflow
tf.raw_ops.ResourceApplyMomentum aborts (core dumped)
Bug
Issue type: Bug. Reproduced with TensorFlow Nightly: yes. Source: binary. TensorFlow version: tf 2.15. Custom code: yes. OS: Ubuntu 20.04. Python: 3.9. Remaining fields: no response.

Current behavior: Under specific inputs, tf.raw_ops.ResourceApplyMomentum encounters Abort (core dumped).

Standalone code to reproduce the issue:

```python
import tensorflow as tf

var = tf.Variable([1.0, 2.0, 3.0])
accum = tf.Variable([0.1, 0.2, 0.3], dtype=tf.complex64)
lr = 0.01
grad = tf.constant([0.1, 0.2, 0.3])
momentum = 0.9
output = tf.raw_ops.ResourceApplyMomentum(
    var=var.handle, accum=accum.handle, lr=lr, grad=grad, momentum=momentum)
print(output)
```

Relevant log output:

```
2024-03-14 05:55:18.324852: F tensorflow/core/framework/tensor.cc:844] Check failed: dtype() == expected_dtype (8 vs. 1) float expected, got complex64
Aborted (core dumped)
```
tensorflow/tensorflow
tf.raw_ops.ResourceApplyGradientDescent: Aborted (core dumped)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary; TensorFlow version: tf 2.15; custom code: yes
OS platform and distribution: Ubuntu 20.04; Python version: 3.9

Current behavior: under specific inputs, tf.raw_ops.ResourceApplyGradientDescent encounters an abort (core dumped).

Standalone code to reproduce the issue:

```python
import tensorflow as tf

var = tf.Variable(3.0 + 1j, dtype=tf.complex64)
alpha = tf.constant(0.1, dtype=tf.float32)
delta = tf.constant(0.5, dtype=tf.float32)
result = tf.raw_ops.ResourceApplyGradientDescent(
    var=var.handle, alpha=alpha, delta=delta)
print(result)
```

Relevant log output:

```
2024-03-14 05:53:30.923509: F tensorflow/core/framework/tensor.cc:844] Check failed: dtype() == expected_dtype (8 vs. 1) float expected, got complex64
Aborted (core dumped)
```
tensorflow/tensorflow
tf.raw_ops.QuantizeAndDequantizeV3: Aborted (core dumped)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary; TensorFlow version: tf 2.15; custom code: yes
OS platform and distribution: Ubuntu 20.04; Python version: 3.9

Current behavior: under specific inputs, tf.raw_ops.QuantizeAndDequantizeV3 encounters an abort (core dumped).

Standalone code to reproduce the issue:

```python
import tensorflow as tf

input_data = tf.constant([1.5, 2.5, 3.5, 4.5])
input_min = tf.constant([0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
                         0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
input_max = tf.constant(5.0)
num_bits = tf.constant(8)
signed_input = True
range_given = True
narrow_range = False
axis = -1
output_data = tf.raw_ops.QuantizeAndDequantizeV3(
    input=input_data, input_min=input_min, input_max=input_max,
    num_bits=num_bits, signed_input=signed_input, range_given=range_given,
    narrow_range=narrow_range, axis=axis)
print(output_data)
```

Relevant log output:

```
2024-03-14 05:50:03.159345: F tensorflow/core/framework/tensor.cc:852] Check failed: 1 == NumElements() (1 vs. 2) Must have a one element tensor
Aborted (core dumped)
```
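The failing CHECK enforces that certain arguments are one-element tensors. A hypothetical sketch (not TF code; `require_scalar` is an invented name) of the graceful front-end check one would expect:

```python
import numpy as np

def require_scalar(name, value):
    # Hypothetical check mirroring the failing CHECK: arguments such as
    # input_max / num_bits must be one-element tensors; a graceful kernel
    # would surface an InvalidArgument error rather than abort the process.
    arr = np.asarray(value)
    if arr.size != 1:
        raise ValueError(
            f"{name} must have exactly one element, got shape {arr.shape}")
    return arr.reshape(())

require_scalar("input_max", [5.0])         # OK: one element
# require_scalar("input_min", [0.0] * 12)  # raises ValueError, as the repro should
```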
tensorflow/tensorflow
tf.raw_ops.ExperimentalDatasetToTFRecord: Aborted (core dumped)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary; TensorFlow version: tf 2.15; custom code: yes
OS platform and distribution: Ubuntu 20.04; Python version: 3.9

Current behavior: under specific inputs, tf.raw_ops.ExperimentalDatasetToTFRecord encounters an abort (core dumped).

Standalone code to reproduce the issue:

```python
import tensorflow as tf

input_data = tf.data.Dataset.range(10).map(lambda x: tf.strings.as_string(x))
input_data = input_data.batch(2)
filename = tf.constant("output.tfrecord")
compression_type = tf.constant("GZIP")
tf.raw_ops.ExperimentalDatasetToTFRecord(
    input_dataset=input_data._variant_tensor,
    filename=filename, compression_type=compression_type)
```

Relevant log output:

```
2024-03-14 05:47:55.415453: F tensorflow/core/framework/tensor.cc:852] Check failed: 1 == NumElements() (1 vs. 2) Must have a one element tensor
Aborted (core dumped)
```
tensorflow/tensorflow
tf.raw_ops.DatasetToTFRecord: Aborted (core dumped)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary; TensorFlow version: tf 2.15; custom code: yes
OS platform and distribution: Ubuntu 20.04; Python version: 3.9

Current behavior: under specific inputs, tf.raw_ops.DatasetToTFRecord encounters an abort (core dumped).

Standalone code to reproduce the issue:

```python
import tensorflow as tf

input_data = [[1, 2], [3, 4], [5, 6]]
input_data_strings = [[str(d) for d in inner_list] for inner_list in input_data]
dataset = tf.data.Dataset.from_tensor_slices(input_data_strings)
filename = "output.tfrecord"
tf.raw_ops.DatasetToTFRecord(
    input_dataset=dataset._variant_tensor,
    filename=filename, compression_type="")
```

Relevant log output:

```
2024-03-14 05:41:32.476705: F tensorflow/core/framework/tensor.cc:852] Check failed: 1 == NumElements() (1 vs. 2) Must have a one element tensor
Aborted (core dumped)
```
tensorflow/tensorflow
tf.image.draw_bounding_boxes: Aborted (core dumped)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary; TensorFlow version: tf 2.15; custom code: yes
OS platform and distribution: Ubuntu 20.04; Python version: 3.9

Current behavior: under specific inputs, tf.image.draw_bounding_boxes encounters an abort (core dumped).

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np

batch_size = 1
image_height = 100
image_width = 100
num_channels = 3
num_boxes = 2
images = np.random.rand(batch_size, image_height, image_width, num_channels)
boxes = np.random.rand(batch_size, num_boxes, 4)
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # define colors for the bounding boxes
input_images = tf.convert_to_tensor(images, dtype=tf.float32)
input_boxes = tf.convert_to_tensor(boxes, dtype=tf.float32)
input_colors = tf.convert_to_tensor(colors, dtype=tf.float32)
reshaped_images = tf.reshape(input_images,
                             (image_height, image_width, num_channels))
output_images = tf.image.draw_bounding_boxes(reshaped_images, input_boxes,
                                             input_colors)
```

Relevant log output:

```
2024-03-14 05:38:04.463737: F tensorflow/core/framework/tensor_shape.cc:357] Check failed: d < dims() (3 vs. 3)
Aborted (core dumped)
```
tensorflow/tensorflow
tf.audio.decode_wav: Aborted (core dumped)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary; TensorFlow version: tf 2.15; custom code: yes
OS platform and distribution: Ubuntu 20.04; Python version: 3.9

Current behavior: under specific inputs, tf.audio.decode_wav encounters an abort (core dumped).

Standalone code to reproduce the issue:

```python
import tensorflow as tf
import numpy as np
import io

sample_rate = 44100
num_channels = 2
duration_seconds = 5
num_samples = sample_rate * duration_seconds
input_data = np.random.randint(-32768, 32767,
                               size=(num_samples, num_channels)).astype(np.int16)
wav_io = io.BytesIO()
wav_written = tf.audio.encode_wav(input_data, sample_rate)
wav_io.write(wav_written.numpy())
wav_contents = wav_io.getvalue()
desired_channels = -(num_channels + 1)
decoded_audio = tf.audio.decode_wav(wav_contents,
                                    desired_channels=desired_channels)
```

Relevant log output:

```
2024-03-14 05:35:10.682129: F tensorflow/core/framework/tensor_shape.cc:201] Non-OK-status: InitDims(dim_sizes) status: INVALID_ARGUMENT: Expected a non-negative size, got -3
Aborted (core dumped)
```
tensorflow/tensorflow
tf.transpose leads to program abortion on a normal value when perm contains -1
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source; TensorFlow version: 2.15.0; custom code: yes

Current behavior: hi, tf.transpose leads to a program abortion when receiving an empty input while perm is [-1, 2]. The traceback indicates a check failure: tensorflow/core/framework/tensor_shape.cc:356] Check failed: d >= 0 (0 vs. -1)

Standalone code to reproduce the issue:

```python
import tensorflow as tf

inputs = tf.constant([])
dim0 = 2
dim1 = -1
tf.transpose(inputs, perm=[dim1, dim0])
```

Relevant log output: no response
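The expected front-end behavior can be sketched with numpy (`validated_transpose` is a hypothetical name, not TF's implementation): perm must be a permutation of range(rank), and an out-of-range entry such as the repro's -1 should produce a normal error rather than a CHECK-fail abort.

```python
import numpy as np

def validated_transpose(x, perm):
    # Sketch of the expected validation: reject perm values outside
    # [0, rank) with a catchable error instead of aborting the process.
    x = np.asarray(x)
    if sorted(perm) != list(range(x.ndim)):
        raise ValueError(f"perm {perm} is not a permutation of range({x.ndim})")
    return np.transpose(x, perm)

print(validated_transpose([[1, 2, 3]], [1, 0]).shape)  # a valid perm works
```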
tensorflow/tensorflow
_pywrap_tensorflow_internal.so library contains RUNPATH entries that look like mangled C++ symbols
Bug
Issue type: Bug
TensorFlow version: 2.13 through 2.16; Python version: 3.11 / 3.12

Current behavior: the RUNPATH in the wheel's shared objects is weirdly mangled:

```
$ wget <wheel>
$ unzip *.whl
$ objdump -x tensorflow/python/_pywrap_tensorflow_internal.so | grep RUNPATH
RUNPATH  $ORIGIN/../../_solib_local/_U_S_Stensorflow_Spython_C_Upywrap_Utensorflow_Uinternal.so_Ucclib_Utensorflow:$ORIGIN/_pywrap_tensorflow_internal.so.runfiles/org_tensorflow/_solib_local/_U_S_Stensorflow_Spython_C_Upywrap_Utensorflow_Uinternal.so_Ucclib_Utensorflow:$ORIGIN/../../_solib_local/_Utensorflow:$ORIGIN/_pywrap_tensorflow_internal.so.runfiles/org_tensorflow/_solib_local/_Utensorflow:$ORIGIN/:$ORIGIN/:$ORIGIN/../../nvidia/nccl/lib:$ORIGIN/../../nvidia/nccl/lib:$ORIGIN/../../tensorflow/tsl/python/lib/core
```

It's not just that one; almost every .so is affected, see `find tensorflow -name '*.so*' -exec objdump -x {} \; | grep RUNPATH`. Under a custom NixOS build this mess seems to further confuse follow-on patchelf invocations, which then require additional hackery to correct. Possibly related to #34991; the objdump output in that ticket shows the same kind of RUNPATH.

Standalone code to reproduce the issue: download any wheel from PyPI and examine the .so files contained within for their RUNPATH values.

Relevant log output: no response
tensorflow/tensorflow
Incorrect result when using tf.norm to conduct p-norm computation in float16 precision
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source; TensorFlow version: 2.15.0; custom code: yes

Current behavior: tf.norm outputs an incorrect result under float16 computation when the ord is large. Here is the code to reproduce:

```python
import tensorflow as tf

inputs = tf.constant([1.751], dtype='float16')
p = 144111111
print(tf.norm(inputs, p))  # tf.Tensor([1.0], shape=(1,), dtype=float16)
```

Instead, if I change the dtype to float32, tf.norm outputs inf. It seems that tf.norm overflows during this computation. Although this issue could be fixed by letting tf.norm also output inf in float16, which would be consistent with the float32 result, I am suggesting another possible fix. According to this post, the original implementation may not be numerically stable; a more stable one normalizes the input before doing the actual computation. Here is a naive implementation of the original tf.norm and the numerically stable one as a possible fix:

```python
import tensorflow as tf
import numpy as np

inputs = tf.constant([1.751], dtype='float16')
p = 144111111

def origin_norm(x, p):
    result = tf.math.abs(x)
    result = tf.math.pow(
        tf.reduce_sum(tf.math.pow(result, p), axis=None, keepdims=True),
        1.0 / p)
    return result

def refined_norm(x, p):
    a = np.abs(x).max()
    return a * origin_norm(x / a, p)

print(f"the original tf.norm's output: {tf.norm(inputs, p)}")
print(f"the naive implementation of tf.norm: {origin_norm(inputs, p)}, "
      f"the refined norm: {refined_norm(inputs, p)}")
```

The refined norm will output the correct result, i.e. 1.7509765625, not only for float16 computation but also for float32 computation.

Relevant log output: no response
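The normalization trick described above can be demonstrated outside TF with plain numpy (a minimal illustration of the numerical argument under the issue's inputs, not the proposed patch; the function names are invented):

```python
import numpy as np

def naive_pnorm(x, p):
    # direct formula: (sum |x_i|^p)^(1/p); |x_i|^p overflows to inf
    # as soon as p is large enough, so the whole expression becomes inf
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

def stable_pnorm(x, p):
    # factor out the largest magnitude first, so every ratio is <= 1
    # before exponentiation, then scale the result back afterwards
    a = np.abs(x).max()
    if a == 0:
        return a
    return a * np.sum((np.abs(x) / a) ** p) ** (1.0 / p)

x = np.array([1.751], dtype=np.float32)
p = 144111111
print(naive_pnorm(x, p))   # inf: 1.751**p overflows float32
print(stable_pnorm(x, p))  # ~1.751, the expected large-p limit
```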
tensorflow/tensorflow
How can I exit the XLAControlFlowContext when inside a jit_compile tf.function? The Exit function takes no effect
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: binary; TensorFlow version: tf 2.15; custom code: yes
OS platform and distribution: Ubuntu 22.04; Python version: 3.10
CUDA/cuDNN version: CUDA 12.3 / cuDNN 9.0; GPU model and memory: GTX 4090, 24 GiB

Current behavior: I have a custom op that is a normal OpKernel and cannot be compiled into an XLA cluster. Unfortunately, users have to wrap this custom op in a tf.function, so when they use Keras jit_compile or something else, errors happen.

Standalone code to reproduce the issue:

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def test(a):
    b = a + a
    ctx = tf.__internal__.get_enclosing_xla_context()
    ctx.Exit()
    tf.print(b)
    ctx.Enter()
    return b + b

test(tf.constant(1))
```

Relevant log output:

```
2024-03-14 02:28:52.720666: W tensorflow/core/framework/op_kernel.cc:1745] OP_REQUIRES failed at xla_ops.cc:296 : INVALID_ARGUMENT: Detected unsupported operations when trying to compile graph __inference_test_9[_XlaMustCompile=true,config_proto=9241198235816212909,executor_type=11160318154034397263] on XLA_GPU_JIT: StringFormat (No registered 'StringFormat' OpKernel for XLA_GPU_JIT devices compatible with node {{node StringFormat}})
The op is created at:
  File "xla_context_test.py", line 12, in <module>: test(tf.constant(1))
  File "xla_context_test.py", line 8, in test: tf.print(b)
Traceback (most recent call last):
  File "xla_context_test.py", line 12, in <module>
    test(tf.constant(1))
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 54, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ...)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Detected unsupported operations when trying to compile graph __inference_test_9[_XlaMustCompile=true,config_proto=9241198235816212909,executor_type=11160318154034397263] on XLA_GPU_JIT: StringFormat (No registered 'StringFormat' OpKernel for XLA_GPU_JIT devices compatible with node {{node StringFormat}})
The op is created at:
  File "xla_context_test.py", line 12, in <module>: test(tf.constant(1))
  File "xla_context_test.py", line 8, in test: tf.print(b)
 [Op:__inference_test_9]
```
tensorflow/tensorflow
TypeError: this __dict__ descriptor does not support '_DictWrapper' objects, during trivial model save
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No (the bug doesn't exist in tf-nightly 2.17.0.dev20240312)
Source: source; TensorFlow version: v2.16.1; custom code: yes
OS platform and distribution: OSX; Python version: 3.12

Current behavior: when calling tf.saved_model.save(model, saved_model_path), we see:

```
  .../tensorflow/python/saved_model/save.py:190 in list_children
    for name, child in super(_AugmentedGraphView, self).list_children(...)
  .../tensorflow/python/checkpoint/graph_view.py:75 in list_children
    for name, ref in super(ObjectGraphView, self)...
  .../tensorflow/python/checkpoint/trackable_view.py:85 in children
    ref = converter.convert_to_trackable(ref, parent=obj)
  .../tensorflow/python/trackable/converter.py:31 in convert_to_trackable
    if tensor_util.is_tf_type(obj) and ...
  .../tensorflow/python/framework/tensor_util.py:1156 in is_tf_type
    return isinstance(x, tf_type_classes)
  .../python3.12/typing.py:1871 in __instancecheck__
    val = getattr_static(instance, attr)
  .../python3.12/inspect.py:1839 in getattr_static
    instance_result = _check_instance(obj, attr)
      obj = _DictWrapper({'input_shape': [(None, 16), (None, 32)]})
      attr = 'is_tensor_like'
  .../python3.12/inspect.py:1793 in _check_instance
    instance_dict = object.__getattribute__(obj, "__dict__")
TypeError: this __dict__ descriptor does not support '_DictWrapper' objects
```

I suspect this is related to #59869, which was supposedly fixed; however, in 2.16.1 TF removed the pin on wrapt and the issue indeed persists. I've even tried downgrading wrapt to 1.14.1 and the issue remains.

Standalone code to reproduce the issue:

```python
import tensorflow as tf
from tensorflow.keras import layers
import tempfile

def get_two_tower_models():
    online_features = [layers.Input(shape=(32,)), layers.Input(shape=(16,))]
    offline_features = [layers.Input(shape=(16,)), layers.Input(shape=(32,))]
    all_features = []
    all_features.extend(online_features)
    all_features.extend(offline_features)

    def get_offline_tower(offline_features):
        offline_input = layers.concatenate(offline_features, name="offline_concatenate")
        offline_hidden = layers.Dense(32, activation="tanh", name="offline_hidden_1")(offline_input)
        offline_hidden = layers.Dense(16, activation="tanh", name="offline_hidden_2")(offline_hidden)
        offline_final_embedding = layers.Dense(8, name="offline_hidden_3")(offline_hidden)
        return offline_final_embedding

    def get_online_tower(online_features):
        online_input = layers.concatenate(online_features, name="online_concatenate")
        online_hidden = layers.Dense(32, activation="tanh", name="online_hidden_1")(online_input)
        online_hidden = layers.Dense(16, activation="tanh", name="online_hidden_2")(online_hidden)
        online_final_embedding = layers.Dense(8, name="online_hidden_3")(online_hidden)
        return online_final_embedding

    offline_tower_embedding = get_offline_tower(offline_features)
    online_tower_embedding = get_online_tower(online_features)
    # we normalize the vectors with the L2 norm to make sure we get the cosine similarity
    offline_online_dot = layers.Dot(axes=1, normalize=True)(
        [offline_tower_embedding, online_tower_embedding])
    offline_tower_model = tf.keras.Model(inputs=offline_features, outputs=offline_tower_embedding)
    online_tower_model = tf.keras.Model(inputs=online_features, outputs=online_tower_embedding)
    full_model = tf.keras.Model(inputs=all_features, outputs=offline_online_dot)
    return full_model, offline_tower_model, online_tower_model

full_model, offline_tower_model, online_tower_model = get_two_tower_models()
with tempfile.TemporaryDirectory() as tmpdirname:
    tf.saved_model.save(online_tower_model, tmpdirname)
```

Relevant log output (traceback condensed to the key frames):

```
Traceback (most recent call last):
  File ".../test_minimal_repro.py", line 44, in <module>
    tf.saved_model.save(online_tower_model, tmpdirname)
  File ".../tensorflow/python/saved_model/save.py", line 1392, in save
    save_and_return_nodes(obj, export_dir, signatures, options)
  ...
  File ".../tensorflow/python/checkpoint/trackable_view.py", line 85, in children
    ref = converter.convert_to_trackable(ref, parent=obj)
  File ".../tensorflow/python/trackable/converter.py", line 31, in convert_to_trackable
    if tensor_util.is_tf_type(obj) and ...
  File ".../tensorflow/python/framework/tensor_util.py", line 1156, in is_tf_type
    return isinstance(x, tf_type_classes)
  File ".../python3.12/typing.py", line 1871, in __instancecheck__
    val = getattr_static(instance, attr)
  File ".../python3.12/inspect.py", line 1839, in getattr_static
    instance_result = _check_instance(obj, attr)
  File ".../python3.12/inspect.py", line 1793, in _check_instance
    instance_dict = object.__getattribute__(obj, "__dict__")
TypeError: this __dict__ descriptor does not support '_DictWrapper' objects
```
tensorflow/tensorflow
Conv2D op produces output without NaN from NaN input
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: source; TensorFlow version: tf 2.10.0; custom code: yes
OS platform and distribution: Linux Ubuntu 20.04.4; Python version: 3.7.6; CUDA/cuDNN version: 12.2

Current behavior: (screenshot of the per-layer outputs omitted) we expect layer 5's output to also be NaN, since layer 5's input is NaN. Attached model: lenet5_fashion_mnist_origin_Speciali5_LMerg34_MDtype57.zip

Standalone code to reproduce the issue:

```python
import keras
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model

def init_input(input_shape):
    input_shape = list(input_shape)
    input_shape[0] = 10
    return np.random.rand(*tuple(input_shape))

class CustomCastLayer(keras.layers.Layer):
    def __init__(self, target_dtype, **kwargs):
        self.target_dtype = target_dtype
        super().__init__(**kwargs)
    def call(self, inputs, **kwargs):
        return tf.cast(inputs, self.target_dtype)
    def get_config(self):
        config = super().get_config()
        config.update({"target_dtype": self.target_dtype})
        return config
    def compute_output_shape(self, input_shape):
        return input_shape

class CustomPadLayer(keras.layers.Layer):
    def __init__(self, padding=[[0, 0]], constant_values=0, **kwargs):
        self.padding = padding
        self.constant_values = constant_values
        super().__init__(**kwargs)
    def call(self, inputs, **kwargs):
        # prepend [0, 0] so the shape of the batch dimension is unchanged
        return tf.pad(inputs, [[0, 0]] + self.padding,
                      mode="CONSTANT", constant_values=self.constant_values)
    def get_config(self):
        config = super().get_config()
        config.update({"padding": self.padding,
                       "constant_values": self.constant_values})
        return config
    def compute_output_shape(self, input_shape):
        # (None, n, w, c) padded with [[a, b], [c, d]] -> (None, n+a+b, w+c+d, c)
        output_shape = [None]
        for i, pad in zip(list(input_shape)[1:], self.padding):
            output_shape.append(i + pad[0] + pad[1])
        return tuple(output_shape)

class CustomCropLayer(keras.layers.Layer):
    def __init__(self, cropping, **kwargs):
        self.cropping = cropping
        super().__init__(**kwargs)
    def call(self, inputs, **kwargs):
        input_shape = inputs.shape.as_list()
        indices = [slice(None)]  # leave the batch dimension unchanged
        for shape, crop in zip(input_shape[1:], self.cropping):
            indices.append(slice(0 + crop[0], shape - crop[1]))
        return inputs[tuple(indices)]
    def get_config(self):
        config = super().get_config()
        config.update({"cropping": self.cropping})
        return config
    def compute_output_shape(self, input_shape):
        # (None, n, w, c) cropped with [[a, b], [c, d]] -> (None, n-a-b, w-c-d, c)
        output_shape = [None]
        for i, crop in zip(list(input_shape)[1:], self.cropping):
            output_shape.append(i - crop[0] - crop[1])
        return tuple(output_shape)

class CustomExpandLayer(keras.layers.Layer):
    def __init__(self, axis=1, **kwargs):
        self.axis = axis
        super().__init__(**kwargs)
    def call(self, inputs, **kwargs):
        return tf.expand_dims(inputs, self.axis)
    def get_config(self):
        config = super().get_config()
        config.update({"axis": self.axis})
        return config
    def compute_output_shape(self, input_shape):
        input_shape_list = list(input_shape)
        if self.axis > len(input_shape_list):
            raise ValueError(
                f"axis {self.axis} should be smaller than "
                f"input shape + 1: {len(input_shape_list) + 1}")
        return tuple(input_shape_list[:self.axis] + [1]
                     + input_shape_list[self.axis:])

class CustomDropDimLayer(keras.layers.Layer):
    def __init__(self, axis=1, **kwargs):
        self.axis = axis
        super().__init__(**kwargs)
    def call(self, inputs, **kwargs):
        # something magic to automatically generate indices for array slicing;
        # slice(None) keeps a dimension, index 0 drops the chosen axis
        dims = len(inputs.shape)
        if self.axis > dims - 1 or self.axis < 1:
            raise ValueError(
                f"axis {self.axis} should be within [1, {dims - 1}] for a {dims}-d tensor")
        indices = [slice(None) for _ in range(dims)]
        indices[self.axis] = 0
        return inputs[tuple(indices)]
    def get_config(self):
        config = super().get_config()
        config.update({"axis": self.axis})
        return config
    def compute_output_shape(self, input_shape):
        input_shape_list = list(input_shape)
        return tuple(input_shape_list[:self.axis]
                     + input_shape_list[self.axis + 1:])

def custom_objects(mode="custom"):
    def no_activation(x):
        return x
    def leakyrelu(x):
        import keras.backend as K
        return K.relu(x, alpha=0.01)
    objects = {"no_activation": no_activation, "leakyrelu": leakyrelu}
    if mode == "custom":
        objects["CustomPadLayer"] = CustomPadLayer
        objects["CustomCropLayer"] = CustomCropLayer
        objects["CustomDropDimLayer"] = CustomDropDimLayer
        objects["CustomExpandLayer"] = CustomExpandLayer
        objects["CustomCastLayer"] = CustomCastLayer
    return objects

model = keras.models.load_model(
    "lenet5_fashion_mnist_origin_Speciali5_LMerg34_MDtype57.h5",
    custom_objects=custom_objects())
input_layer = model.input
output_layers = [layer.output for layer in model.layers]
print(output_layers)
print()
new_model = Model(inputs=input_layer, outputs=output_layers)  # tensor: (1, 3, 32, 32)
with tf.device("cpu"):
    input_tensor = init_input(model.input_shape)
    # use this model to predict
    all_layer_outputs = new_model.predict(input_tensor)
print(all_layer_outputs)
print(type(all_layer_outputs))
print(all_layer_outputs[0])
print()
for i, layer_output in enumerate(all_layer_outputs):
    print(f"output shape of layer {i}: {layer_output.shape}")
    print(f"name of layer {i}: {model.layers[i].name}")
    print(f"output if nan of layer {i}: {np.isnan(layer_output).any()}")
    print()
```

Relevant log output:

```
name of layer 0: conv2d_1_input                                       output if nan: False
name of layer 1: conv2d_1_copy_Speciali_copy_LMerg_copy_MDtype        output if nan: False
name of layer 2: max_pooling2d_1_copy_Speciali_copy_LMerg_copy_MDtype output if nan: False
name of layer 3: lambda_copy_LMerg_copy_MDtype                        output if nan: True
name of layer 4: dropout_1_copy_Speciali_copy_LMerg_copy_MDtype       output if nan: True
name of layer 5: conv2d_2_copy_Speciali_copy_LMerg_copy_MDtype        output if nan: False
name of layer 6: max_pooling2d_2_copy_Speciali_copy_LMerg_copy_MDtype output if nan: False
name of layer 7: dropout_2_copy_Speciali_copy_LMerg_merge1_copy_MDtype output if nan: False
name of layer 8: subtract_copy_ML_copy_MDtype                         output if nan: False
name of layer 9: custom_cast_layer                                    output if nan: False
name of layer 10: flatten_1_copy_Speciali_copy_LMerg_copy_MDtype      output if nan: False
name of layer 11: custom_cast_layer_1                                 output if nan: False
name of layer 12: dense_1_copy_Speciali_copy_LMerg_copy_MDtype        output if nan: False
name of layer 13: dense_2_copy_Speciali_copy_LMerg_copy_MDtype        output if nan: False
name of layer 14: dense_3_copy_Speciali_copy_LMerg_copy_MDtype        output if nan: False
name of layer 15: dense_insert                                        output if nan: False
```
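As a sanity check on the expectation that NaNs propagate through convolution, here is a tiny numpy reference convolution (a hypothetical sketch, unrelated to TF's Conv2D kernels): every output element whose window touches a NaN must itself be NaN.

```python
import numpy as np

def conv2d_valid(x, k):
    # naive single-channel VALID convolution; any NaN inside a window
    # propagates to the corresponding output element via the sum
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.ones((4, 4), dtype=np.float32)
x[1, 1] = np.nan
out = conv2d_valid(x, np.ones((2, 2), dtype=np.float32))
print(np.isnan(out))  # True exactly where a 2x2 window overlaps (1, 1)
```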
tensorflow/tensorflow
tf.tensor_scatter_nd_add: Aborted (core dumped)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary; TensorFlow version: tf 2.15; custom code: yes
OS platform and distribution: Ubuntu 20.04; Python version: 3.9

Current behavior: under specific inputs, tf.tensor_scatter_nd_add encounters an abort (core dumped).

Standalone code to reproduce the issue:

```python
import tensorflow as tf

# generate input data
input_tensor = tf.zeros([15, 15, 15])
# the original literal spells out two identical blocks of [0,0,0] ... [14,14,14]
indices = tf.constant([[[i, i, i] for i in range(15)]] * 2)
# cast updates to float
updates = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0,
                       9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0])
# invoke tf.tensor_scatter_nd_add
result = tf.tensor_scatter_nd_add(input_tensor, indices, updates)
# print the result
print(result)
```

Relevant log output:

```
2024-03-10 14:59:51.853766: F tensorflow/core/framework/tensor_shape.cc:357] Check failed: d < dims() (1 vs. 1)
Aborted (core dumped)
```
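For reference, the shape contract the repro violates can be stated in a few lines of numpy (a hypothetical re-implementation sketch, not TF's kernel): `updates.shape` must equal `indices.shape[:-1] + tensor.shape[indices.shape[-1]:]`, and a mismatch should raise a normal error rather than abort.

```python
import numpy as np

def scatter_nd_add_ref(tensor, indices, updates):
    # numpy reference for the shape contract behind tf.tensor_scatter_nd_add
    tensor = np.array(tensor)
    indices = np.asarray(indices)
    updates = np.asarray(updates)
    depth = indices.shape[-1]
    expected = indices.shape[:-1] + tensor.shape[depth:]
    if updates.shape != expected:
        # the repro violates this (indices (2, 15, 3) vs updates (15,)),
        # which should be a catchable error, not a CHECK-fail abort
        raise ValueError(f"updates shape {updates.shape} != expected {expected}")
    for idx, upd in zip(indices.reshape(-1, depth),
                        updates.reshape((-1,) + tensor.shape[depth:])):
        tensor[tuple(idx)] += upd
    return tensor

print(scatter_nd_add_ref(np.zeros(3), [[0], [1]], [1.0, 2.0]))  # [1. 2. 0.]
```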
tensorflow/tensorflow
tf.raw_ops.TensorScatterSub: Aborted (core dumped)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary; TensorFlow version: tf 2.15; custom code: yes
OS platform and distribution: Ubuntu 20.04; Python version: 3.9

Current behavior: under specific inputs, tf.raw_ops.TensorScatterSub encounters an abort (core dumped).

Standalone code to reproduce the issue:

```python
import tensorflow as tf

# generate input data
tensor = tf.constant([1, 2, 3, 4, 5])
indices = tf.constant([[[1], [3]], [[0], [2]]])  # nested structure for indices
updates = tf.constant([10, 20])
# invoke tf.raw_ops.TensorScatterSub
result = tf.raw_ops.TensorScatterSub(tensor=tensor, indices=indices,
                                     updates=updates)
# print the result
print(result)
```

Relevant log output:

```
2024-03-10 14:55:41.958738: F tensorflow/core/framework/tensor_shape.cc:357] Check failed: d < dims() (1 vs. 1)
Aborted (core dumped)
```
tensorflow/tensorflow
tf.raw_ops.FusedPadConv2D: Aborted (core dumped)
Bug
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary; TensorFlow version: tf 2.15; custom code: yes
OS platform and distribution: Ubuntu 20.04; Python version: 3.9

Current behavior: under specific inputs, tf.raw_ops.FusedPadConv2D encounters an abort (core dumped).

Standalone code to reproduce the issue:

```python
import tensorflow as tf

# generate input data
input_data = tf.random.normal([3, 10, 10])
# define paddings
paddings = tf.constant([[0, 0], [1, 1], [1, 1]])
# define filter
filters = tf.random.normal([3, 3, 3, 16])
# define mode (change to REFLECT or SYMMETRIC)
mode = "REFLECT"
# define strides
strides = [1, 1, 1, 1]
# define padding
padding = "VALID"
# invoke tf.raw_ops.FusedPadConv2D
output = tf.raw_ops.FusedPadConv2D(
    input=input_data, paddings=paddings, filter=filters,
    mode=mode, strides=strides, padding=padding)
print(output)
```

Relevant log output:

```
2024-03-10 14:49:28.555826: F tensorflow/core/framework/tensor_shape.cc:357] Check failed: d < dims() (3 vs. 3)
Aborted (core dumped)
```
tensorflowtensorflow
save model won t load unable to synchronously open object bad local heap signature
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source binary tensorflow version 2 16 1 custom code yes os platform and distribution window 10 mobile device no response python version 3 12 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behavior model save from python 3 12 tensorflow 2 16 1 model save my model kera overwrite true after this the model do not load standalone code to reproduce the issue shell model tf keras model load model my model keras custom object none compile true safe mode true relevant log output shell traceback most recent call last file d project main py line 391 in model tf keras model load model my model keras custom object none compile true safe mode true file d project venv lib site package keras src save save api py line 176 in load model return save lib load model file d project venv lib site package keras src save saving lib py line 192 in load model raise loading failure error msgs file d project venv lib site package keras src save saving lib py line 273 in raise loading failure raise valueerror msg valueerror a total of 13 object could not be load example error message for object unable to synchronously open object bad local heap signature list of object that could not be load
tensorflowtensorflow
tf 2 16 1 fail to work with gpu
Bug
issue type bug have you reproduce the bug with tensorflow nightly no source binary tensorflow version tf 2 16 1 custom code no os platform and distribution linux ubuntu 22 04 4 lts mobile device no response python version 3 10 12 bazel version no response gcc compiler version no response cuda cudnn version 12 4 gpu model and memory no response current behavior I create a python venv in which I instal tf 2 16 1 follow your instruction pip install tensorflow when I run python import tf and issue tf config list physical device gpu I get an empty list I create another python venv instal tf 2 16 1 only this time with the instruction python3 m pip install tensorflow and cuda when I run that version import tensorflow as tf and issue tf config list physical device gpu I also get an empty list btw I have no problem run on my box tf 2 15 1 with gpu julia also work just fine with gpu and so do pytorch the standalone code to reproduce the issue shell python 3 10 12 main nov 20 2023 15 14 05 gcc 11 4 0 on linux type help copyright credit or license for more information import tensorflow as tf 2024 03 09 19 15 45 018171 I tensorflow core platform cpu feature guard cc 210 this tensorflow binary be optimize to use available cpu instruction in performance critical operation to enable the follow instruction avx2 fma in other operation rebuild tensorflow with the appropriate compiler flag 2024 03 09 19 15 50 412646 w tensorflow compiler tf2tensorrt util py util cc 38 tf trt warning could not find tensorrt tf version 2 16 1 tf config list physical device gpu 2024 03 09 19 16 28 923792 I external local xla xla stream executor cuda cuda executor cc 998 successful numa node read from sysfs have negative value 1 but there must be at least one numa node so return numa node zero see more at l344 l355 2024 03 09 19 16 29 078379 w tensorflow core common runtime gpu gpu device cc 2251 can not dlopen some gpu library please make sure the miss library mention above be instal properly if you 
would like to use gpu follow the guide at for how to download and setup the require library for your platform skip register gpu device relevant log output no response
tensorflowtensorflow
core dump with tf raw ops fakequantwithminmaxvarsperchannel
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source binary tensorflow version tf 2 15 custom code yes os platform and distribution ubuntu 20 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behavior core dump error with specific input parameter standalone code to reproduce the issue shell import tensorflow as tf generate input data input datum tf constant 1 5 2 5 3 5 4 5 5 5 6 5 define min and max value per channel min per channel tf constant 1 0 2 0 3 0 max per channel tf constant 2 0 3 0 4 0 invoke tf raw op fakequantwithminmaxvarsperchannel with input as 0 dimensional tensor and max as a 1x3 tensor quantize output tf raw ops fakequantwithminmaxvarsperchannel input tf constant 0 0 min min per channel max max per channel num bit 8 narrow range false print the quantize output print quantize output relevant log output shell 2024 03 09 15 02 07 858055 f tensorflow core framework tensor shape cc 356 check fail d 0 0 vs 1 abort core dump
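The repro passes a scalar `inputs` to a per-channel fake-quant op, whose contract requires `inputs` to be at least rank 1 with its last dimension equal to the length of the rank-1 `min` and `max` vectors; the `check fail d 0` abort suggests that is assumed rather than validated. A hypothetical pure-python precondition check (names are mine, not TensorFlow API):

```python
def fake_quant_per_channel_ok(inputs_shape, min_shape, max_shape):
    """Per-channel fake-quant precondition: inputs has rank >= 1 and
    its last dim matches the (rank-1) min and max vectors."""
    if len(inputs_shape) < 1:     # scalar inputs cannot be per-channel
        return False
    if len(min_shape) != 1 or len(max_shape) != 1:
        return False
    return inputs_shape[-1] == min_shape[0] == max_shape[0]

print(fake_quant_per_channel_ok((), (3,), (3,)))      # report's case: scalar
print(fake_quant_per_channel_ok((2, 3), (3,), (3,)))  # well-formed
```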
tensorflowtensorflow
abort core dump with tf raw op avgpoolgrad
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source binary tensorflow version tf 2 15 custom code yes os platform and distribution ubuntu 20 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behavior core dump error with specific input parameter standalone code to reproduce the issue shell import tensorflow as tf generate input data input datum tf random normal 1 28 28 3 grad tf random normal 1 14 14 6 change the number of channel in grad tensor perform average pooling result tf nn avg pool2d input datum ksize 1 2 2 1 stride 1 2 2 1 padding valid datum format nhwc compute gradient grad result tf raw ops avgpoolgrad orig input shape tf shape input data grad grad ksize 1 2 2 1 stride 1 2 2 1 padding valid datum format nhwc print grad result relevant log output shell free corrupt unsorted chunk abort core dump
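The heap corruption here lines up with the mismatched channel count: `grad` claims 6 channels while `orig_input_shape` has 3, so the kernel scatters past its buffer. AvgPoolGrad expects `grad` to have exactly the shape that `avg_pool2d` would produce from the original input. A hypothetical checker of that relationship (my own helper, assuming NHWC and the standard VALID/SAME output formulas):

```python
import math

def avg_pool_grad_ok(orig_input_shape, grad_shape, ksize, strides,
                     padding="VALID"):
    """grad must match the pooled-output shape of orig_input:
    same batch and channels, spatial dims per the padding rule."""
    n, h, w, c = orig_input_shape
    if padding == "VALID":
        out_h = (h - ksize[1]) // strides[1] + 1
        out_w = (w - ksize[2]) // strides[2] + 1
    else:  # SAME
        out_h = math.ceil(h / strides[1])
        out_w = math.ceil(w / strides[2])
    return tuple(grad_shape) == (n, out_h, out_w, c)

# report's shapes: grad has 6 channels but the input has 3 -> invalid
print(avg_pool_grad_ok((1, 28, 28, 3), (1, 14, 14, 6),
                       (1, 2, 2, 1), (1, 2, 2, 1)))
print(avg_pool_grad_ok((1, 28, 28, 3), (1, 14, 14, 3),
                       (1, 2, 2, 1), (1, 2, 2, 1)))
```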
tensorflowtensorflow
segmentation fault with tf raw op audiospectrogram
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source binary tensorflow version tf 2 15 custom code yes os platform and distribution ubuntu 20 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behavior segmentation fault error with specific input parameter standalone code to reproduce the issue shell import tensorflow as tf generate input data input datum tf random normal 1 44100 dtype tf float32 invoke tf raw op audiospectrogram with a negative window size spectrogram tf raw op audiospectrogram input input datum window size 1024 stride 64 magnitude square false print the spectrogram print spectrogram relevant log output shell segmentation fault core dump
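The segfault follows from a negative `window_size` being used to size buffers without validation. AudioSpectrogram takes a rank-2 `[samples, channels]` input, and window size and stride must be positive with the window fitting inside the sample dimension. A hypothetical parameter check (my own helper; the rank-2 expectation is my reading of the op's documented input):

```python
def audio_spectrogram_params_ok(input_shape, window_size, stride):
    """Basic AudioSpectrogram preconditions: rank-2 [samples, channels]
    input, positive window and stride, window fits in the samples dim."""
    if len(input_shape) != 2:
        return False
    if window_size <= 0 or stride <= 0:
        return False
    return input_shape[0] >= window_size

print(audio_spectrogram_params_ok((44100, 1), -1024, 64))  # report's case
print(audio_spectrogram_params_ok((44100, 1), 1024, 64))   # well-formed
```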
tensorflowtensorflow
abort core dump in tf io tfrecordwriter
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source binary tensorflow version tf 2 15 custom code yes os platform and distribution linux ubuntu 20 04 mobile device no response python version 3 9 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behavior I have test on tensorflow nightly there be five bug relate to tf io tfrecordwriter bug 1 compression method 3 abort core dump code import tensorflow as tf input datum b input datum example option tf io tfrecordoption compression type zlib compression method 3 with tf io tfrecordwriter output tfrecord option option as writer writer write input datum bug 2 window bit 8 abort core dump code import tensorflow as tf input datum b input datum example option tf io tfrecordoption compression type zlib window bit 8 negative window bit with tf io tfrecordwriter output tfrecord option option as writer writer write input datum bug3 compression level 9 abort core dump code import tensorflow as tf input datum b input datum example option tf io tfrecordoption compression type zlib compression level 9 with tf io tfrecordwriter output tfrecord option option as writer writer write input datum bug 4 mem level 10 abort core dump code import tensorflow as tf input datum b input datum example option tf io tfrecordoption compression type zlib mem level 10 with tf io tfrecordwriter output tfrecord option option as writer writer write input datum bug 5 compression strategy 2 abort core dump import tensorflow as tf input datum b input datum example option tf io tfrecordoption compression type zlib compression strategy 2 with tf io tfrecordwriter output tfrecord option option as writer writer write input datum standalone code to reproduce the issue shell the reproducible test case be provide above relevant log output shell all the above code output abort core dump
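All five crashes funnel the TFRecordOptions fields into zlib's `deflateInit2`, which rejects out-of-range parameters; aborting instead of raising suggests the return code is not checked. A hypothetical pure-python range check sketching the documented zlib limits (a simplification: the gzip/raw `windowBits` encodings are ignored, and the signs of some reported values are ambiguous in the flattened text, so I assume negatives where the positive value would be legal):

```python
def zlib_options_ok(method=8, window_bits=15, level=-1,
                    mem_level=8, strategy=0):
    """Parameter ranges accepted by zlib's deflateInit2 for the plain
    zlib format: method must be Z_DEFLATED (8), window_bits 9..15,
    level -1..9, mem_level 1..9, strategy 0..4."""
    return (method == 8
            and 9 <= window_bits <= 15
            and -1 <= level <= 9
            and 1 <= mem_level <= 9
            and 0 <= strategy <= 4)

# one check per reported crash
print(zlib_options_ok(method=3))        # bug 1
print(zlib_options_ok(window_bits=-8))  # bug 2
print(zlib_options_ok(level=-9))        # bug 3 (assumed negative)
print(zlib_options_ok(mem_level=10))    # bug 4
print(zlib_options_ok(strategy=-2))     # bug 5 (assumed negative)
```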
tensorflowtensorflow
the result of tf quantization fake quant with min max args be inconsistent between cpu and gpu
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source binary tensorflow version tf 2 15 custom code yes os platform and distribution linux ubuntu 20 04 mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version cuda 12 2 gpu model and memory no response current behavior for the same input tf quantization fake quant with min max args produce inconsistent result on cpu and gpu standalone code to reproduce the issue shell import tensorflow as tf input datum tf constant 0 0 1 0 2 0 3 0 4 0 with tf device cpu 0 quantize input datum tf quantization fake quant with min max args input data min 6 0 max 6 0 print quantize input datum with tf device gpu 0 quantize input datum tf quantization fake quant with min max args input data min 6 0 max 6 0 print quantize input datum relevant log output shell tf tensor 0 0 9882353 2 0235295 3 0117648 4 shape 5 dtype float32 tf tensor 0 0 9882353 1 9764706 3 0117648 4 shape 5 dtype float32
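The two logs differ only at input 2.0, which quantizes to exactly a half-step (170.5 of 255): the CPU result matches round-half-away-from-zero and the GPU result matches round-half-to-even, so this looks like a tie-breaking difference rather than a numerical bug. A pure-python rendering of the nudged-min/max fake-quant scheme (a sketch simplified for the min < 0 < max case; the half-up rounding used here reproduces the CPU values for these inputs):

```python
import math

def fake_quant(x, min_val=-6.0, max_val=6.0, num_bits=8):
    """Fake-quantize x to num_bits with nudged min/max, rounding
    halves up (equivalent to half-away for these nonnegative inputs)."""
    quant_min, quant_max = 0, 2 ** num_bits - 1
    scale = (max_val - min_val) / (quant_max - quant_min)
    zero_point = math.floor(quant_min - min_val / scale + 0.5)  # nudge
    nudged_min = (quant_min - zero_point) * scale
    nudged_max = (quant_max - zero_point) * scale
    x = min(max(x, nudged_min), nudged_max)
    q = math.floor((x - nudged_min) / scale + 0.5)
    return q * scale + nudged_min

for v in [0.0, 1.0, 2.0, 3.0, 4.0]:
    print(round(fake_quant(v), 7))
```

Swapping the half-up rounding for banker's rounding (python's built-in `round`) flips only the x = 2.0 result to the GPU value, which is exactly the discrepancy in the log.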
tensorflowtensorflow
jit compile keep fill gpu memory until crash with simple dense model and small data with official docker image
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source binary tensorflow version v2 15 0 2 g0b15fdfcb3f 2 15 0 custom code no os platform and distribution linux ubuntu 22 docker mobile device no response python version 3 11 bazel version no response gcc compiler version no response cuda cudnn version 12 3 gpu model and memory no response current behavior to narrow down the problem I simplify everything as much as possible I use official late docker image to isolate as much as possible this be the gpu stat thu mar 7 17 16 10 2024 nvidia smi 535 161 07 driver version 535 161 07 cuda version 12 3 gpu name persistence m bus i d disp a volatile uncorr ecc fan temp perf pwr usage cap memory usage gpu util compute m mig m 0 tesla m60 off 00000000 00 1e 0 off 0 n a 30c p8 14w 150w 0mib 7680mib 0 default n a process gpu gi ci pid type process name gpu memory i d i d usage no running process find I m run these few simple line import tensorflow as tf physical device tf config experimental list physical device gpu if len physical device 0 tf config experimental set memory growth physical device 0 true with tf device cpu train dataset tf datum dataset from tensor slice tf random uniform 1000 000 400 tf random uniform 1000 000 1 batch 4000 model tf keras sequential tf keras input shape 400 tf keras layer dense 400 activation relu tf keras layer dense 400 activation relu tf keras layer dense 400 activation relu tf keras layer dense 400 activation relu tf keras layer dense 1 activation sigmoid model compile optimizer tf keras optimizer adam loss tf keras loss binarycrossentropy from logit false jit compile true model fit train dataset epoch 50 verbose true and the code crash after 15 epoch because memory be fill here be the notebook for full error detail break xla ipynb zip standalone code to reproduce the issue shell import tensorflow as tf physical device tf config experimental list physical device gpu if len physical device 0 tf config experimental set 
memory growth physical device 0 true with tf device cpu train dataset tf datum dataset from tensor slice tf random uniform 1000 000 400 tf random uniform 1000 000 1 batch 4000 model tf keras sequential tf keras input shape 400 tf keras layer dense 400 activation relu tf keras layer dense 400 activation relu tf keras layer dense 400 activation relu tf keras layer dense 400 activation relu tf keras layer dense 1 activation sigmoid model compile optimizer tf keras optimizer adam loss tf keras loss binarycrossentropy from logit false jit compile true model fit train dataset epoch 50 verbose true relevant log output shell epoch 1 50 2024 03 07 17 28 52 944900 I external local xla xla service service cc 168 xla service 0x7fd9180c3f10 initialize for platform cuda this do not guarantee that xla will be use device 2024 03 07 17 28 52 944951 I external local xla xla service service cc 176 streamexecutor device 0 tesla m60 compute capability 5 2 2024 03 07 17 28 52 980936 I tensorflow compiler mlir tensorflow util dump mlir util cc 269 disable mlir crash reproducer set env var mlir crash reproducer directory to enable 2024 03 07 17 28 53 293838 I external local xla xla stream executor cuda cuda dnn cc 454 load cudnn version 8906 13 250 eta 3s loss 0 7202 warning all log message before absl initializelog be call be write to stderr i0000 00 00 1709832534 134273 948 device compiler h 186 compile cluster use xla this line be log at most once for the lifetime of the process 250 250 5s 11ms step loss 0 6946 epoch 2 50 250 250 3s 10ms step loss 0 6932 epoch 3 50 250 250 3s 10ms step loss 0 6932 epoch 4 50 250 250 3s 10ms step loss 0 6932 epoch 5 50 250 250 2s 10ms step loss 0 6932 epoch 6 50 250 250 3s 10ms step loss 0 6931 epoch 7 50 250 250 3s 10ms step loss 0 6931 epoch 8 50 250 250 3s 10ms step loss 0 6931 epoch 9 50 250 250 3s 10ms step loss 0 6931 epoch 10 50 250 250 2s 10ms step loss 0 6931 epoch 11 50 250 250 3s 10ms step loss 0 6931 epoch 12 50 250 250 3s 10ms step loss 0 
6931 epoch 13 50 250 250 3s 10ms step loss 0 6931 epoch 14 50 250 250 3s 10ms step loss 0 6931 epoch 15 50 193 250 eta 0s loss 0 6931 2024 03 07 17 29 31 761561 w external local xla xla service gpu runtime support cc 58 intercept xla runtime error internal the request functionality be not support 2024 03 07 17 29 31 761643 w external local xla xla service gpu runtime support cc 58 intercept xla runtime error internal capturegpugraph fail the request functionality be not support current tracing scope custom call internal fail to end stream capture cuda error stream capture invalidate operation fail due to a previous error during capture internalerror traceback most recent call last cell in 5 line 17 1 model tf keras sequential 2 3 tf keras input shape 400 9 10 11 model compile 12 optimizer tf keras optimizer adam 13 loss tf keras loss binarycrossentropy from logit false 14 jit compile true 15 17 model fit train dataset 18 epoch 50 19 verbose true 20 file usr local lib python3 11 dist package keras src util traceback util py 70 in filter traceback error handler args kwargs 67 filter tb process traceback frames e traceback 68 to get the full stack trace call 69 tf debug disable traceback filtering 70 raise e with traceback filter tb from none 71 finally 72 del filter tb file usr local lib python3 11 dist package tensorflow python eager execute py 53 in quick execute op name num output input attrs ctx name 51 try 52 ctx ensure initialize 53 tensor pywrap tfe tfe py execute ctx handle device name op name 54 input attrs num output 55 except core notokstatusexception as e 56 if name be not none internalerror graph execution error detect at node statefulpartitionedcall define at most recent call last file line 198 in run module as main file line 88 in run code file usr local lib python3 11 dist package ipykernel launcher py line 17 in file usr local lib python3 11 dist package traitlet config application py line 1077 in launch instance file usr local lib python3 11 dist 
package ipykernel kernelapp py line 739 in start file usr local lib python3 11 dist package tornado platform asyncio py line 205 in start file usr lib python3 11 asyncio base event py line 604 in run forever file usr lib python3 11 asyncio base event py line 1909 in run once file usr lib python3 11 asyncio event py line 80 in run file usr local lib python3 11 dist package ipykernel kernelbase py line 529 in dispatch queue file usr local lib python3 11 dist package ipykernel kernelbase py line 518 in process one file usr local lib python3 11 dist package ipykernel kernelbase py line 424 in dispatch shell file usr local lib python3 11 dist package ipykernel kernelbase py line 766 in execute request file usr local lib python3 11 dist package ipykernel ipkernel py line 429 in do execute file usr local lib python3 11 dist package ipykernel zmqshell py line 549 in run cell file usr local lib python3 11 dist package ipython core interactiveshell py line 3048 in run cell file usr local lib python3 11 dist package ipython core interactiveshell py line 3103 in run cell file usr local lib python3 11 dist package ipython core async helper py line 129 in pseudo sync runner file usr local lib python3 11 dist package ipython core interactiveshell py line 3308 in run cell async file usr local lib python3 11 dist package ipython core interactiveshell py line 3490 in run ast node file usr local lib python3 11 dist package ipython core interactiveshell py line 3550 in run code file tmp ipykernel 879 3821501683 py line 17 in file usr local lib python3 11 dist package keras src util traceback util py line 65 in error handler file usr local lib python3 11 dist package keras src engine training py line 1807 in fit file usr local lib python3 11 dist package keras src engine training py line 1401 in train function file usr local lib python3 11 dist package keras src engine training py line 1384 in step function fail to execute xla runtime executable run time error custom call xla gpu graph 
launch fail capturegpugraph fail the request functionality be not support current tracing scope custom call internal fail to end stream capture cuda error stream capture invalidate operation fail due to a previous error during capture current profiling annotation xlamodule hlo module a inference run step 1288 701 program i d 132 node statefulpartitionedcall op inference train function 1361
tensorflowtensorflow
tensorflow memory leak during inference
Bug
issue type bug have you reproduce the bug with tensorflow nightly no source source tensorflow version v2 13 0 17 gf841394b1b7 2 13 1 custom code yes os platform and distribution ubuntu 20 04 6 lts mobile device no response python version 3 8 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behavior I m run the follow code and notice a never end increase in ram usage eventually the script terminate with an out of memory error I can t understand what the issue be I also try use tf keras backend clear session once every 10 000 iteration but it didn t help I monitor the specific ram usage of the pid script tensorflow ver be 2 13 1 I would appreciate any insight standalone code to reproduce the issue shell import os import tensorflow as tf import numpy as np import cv2 import time main script pid os getpid print pid of the main script s process main script pid model path model model ch 0 trt dummy frame np random randint 0 255 size 128 128 3 dtype np uint8 img cv2 cvtcolor dummy frame cv2 color bgr2gray img np expand dim img axis 0 img np expand dim img axis 1 img img 255 0 normalize pixel value to 0 1 trt save model tf save model load model path inference function trt save model signature serve default input tensor name list inference function structure input signature 1 key 0 output tensor name list inference function structure output key 0 while true prediction inference function input tensor name tf constant img dtype tf float32 output tensor name numpy relevant log output no response
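One way to narrow a leak like this down is to snapshot python-level allocations around the inference loop with the stdlib `tracemalloc`: if the process RSS grows but the tracemalloc diff stays flat, the leak is in native (C++/TensorRT) buffers rather than python objects. A small hypothetical helper sketching that technique (the leaking `workload` below is a stand-in, not the reporter's model):

```python
import tracemalloc

def top_growth(workload, iterations=100):
    """Run workload repeatedly and return the biggest python-level
    allocation growth sites; native growth does not show up here."""
    tracemalloc.start()
    before = tracemalloc.take_snapshot()
    for _ in range(iterations):
        workload()
    after = tracemalloc.take_snapshot()
    stats = after.compare_to(before, "lineno")
    tracemalloc.stop()
    return stats[:5]

leak = []  # deliberately leaking stand-in workload
for stat in top_growth(lambda: leak.append(bytearray(1024))):
    print(stat)
```

Wrapping the `inference_function(...)` call from the report in `top_growth` would show whether the retained memory is visible to python at all.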
tensorflowtensorflow
overlap window with tf data experimental make csv dataset
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source source tensorflow version 2 8 custom code no os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behavior window dataset come from the tf datum experimental make csv dataset be not work as expect standalone code to reproduce the issue shell I m try to transform some datum read from csv file use tf data pipeline and overlap window and its not work as expect all the documentation be not provide clear explanation on how to deal with this case the column of the csv file be timestamp open high low close volume dataset tf datum experimental make csv dataset file pattern path stock 1min csv batch size 1 num epoch 1 shuffle false header false column name timestamp open high low close volume column default tf string tf float32 tf float32 tf float32 tf float32 tf float32 window size 5 number of row per window shift 1 stride for overlap window stride 1 this produce the follow structure windowdataset ordereddict variantdataset tensor single element tensor this be not allow I to transform in a simple way because ordereddict have not batch method and I can not flatten follow the documentation dataset tf datum experimental make csv dataset file pattern path stock 1min csv batch size 1 num epoch 1 shuffle false header false column name timestamp open high low close volume column default tf string tf float32 tf float32 tf float32 tf float32 tf float32 window size 5 number of row per window shift 1 stride for overlap window stride 1 flat map lambda window window batch 5 give the follow error attributeerror traceback most recent call last in 11 shift 1 stride for overlap window 12 stride 1 13 flat map lambda window window batch 5 19 frame tmp autograph generate filersrgq3 km py in lscope 3 4 def inner factory ag 5 tf lam lambda window ag 
with function scope lambda lscope ag convert call window batch 5 none lscope lscope ag std 6 return tf lam 7 return inner factory attributeerror in user code file line 13 in none lambda window window batch 5 attributeerror collection ordereddict object have no attribute batch if I try to batch the dataset of the ordereddict I get the follow error attributeerror traceback most recent call last in 23 24 25 datum dataset map extract 26 27 35 frame usr local lib python3 10 dist package tensorflow python framework tensor py in getattr self name 259 tf experimental numpy experimental enable numpy behavior 260 261 self getattribute name 262 263 property attributeerror in user code file line 3 in extract open datum get open flat map lambda x x batch 5 attributeerror symbolictensor object have no attribute batch this be become extremely confusing what would be a the right way to transform this structure so that I can later apply well transformation to build a timeserie dataset relevant log output no response
tensorflowtensorflow
notfounderror home chengjun stylegan2 master dnnlib tflib cudacache fuse bias act 347e82e8919aeb0d5c7dc989e996c091 so undefined symbol zn10tensorflow12opdefbuilder4attrenst7 cxx1112basic stringicst11char traitsicesaiceee
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source source tensorflow version tf 1 14 custom code yes os platform and distribution linux ubuntu 22 04 mobile device no response python version 3 6 bazel version no response gcc compiler version 11 4 0 cuda cudnn version 10 0 7 6 5 gpu model and memory no response current behavior when I try to run stylegan2 on the server I configure all the environment as require but get an error at runtime standalone code to reproduce the issue shell cuda visible device 0 python run train py num gpu 1 datum dir home chengjun dataset config config f dataset my custom dataset total kimg 100000 this be my running command relevant log output shell tensorflow python framework error impl notfounderror home chengjun stylegan2 master dnnlib tflib cudacache fuse bias act 347e82e8919aeb0d5c7dc989e996c091 so undefined symbol zn10tensorflow12opdefbuilder4attrenst7 cxx1112basic stringicst11char traitsicesaiceee
tensorflowtensorflow
how the data member data of tensorbuffer be destroy
Bug
from the tensorbuffer definition I notice there be a member data l108 I be curious about how this void pointer will be destroy I find refcounted unref have the logic of delete especially the follow line l336 but it seem the above line do not delete what the void pointer data be point to
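My understanding of the answer: `RefCounted::Unref` only does `delete this`, which dispatches through the virtual destructor, and it is the concrete TensorBuffer subclass's destructor that deallocates what the void pointer refers to. A python analog of that ownership pattern (class and method names here are illustrative, mirroring the C++ roles rather than any python API):

```python
class RefCounted:
    def __init__(self):
        self._refs = 1
    def ref(self):
        self._refs += 1
    def unref(self):
        # mirrors RefCounted::Unref: on the last unref the object is
        # destroyed via the *virtual* destructor, so the most-derived
        # cleanup runs even though RefCounted never touches data
        self._refs -= 1
        if self._refs == 0:
            self.destroy()
    def destroy(self):  # stands in for the virtual destructor
        pass

class Buffer(RefCounted):
    """Analog of a concrete TensorBuffer subclass: owns a raw
    allocation and releases it in its own destructor."""
    def __init__(self, allocator, nbytes):
        super().__init__()
        self.allocator = allocator
        self.data = allocator.allocate(nbytes)  # the void pointer
    def destroy(self):
        self.allocator.deallocate(self.data)
        self.data = None

class Allocator:
    def __init__(self):
        self.live = 0
    def allocate(self, n):
        self.live += 1
        return bytearray(n)
    def deallocate(self, _):
        self.live -= 1

alloc = Allocator()
buf = Buffer(alloc, 64)
buf.ref(); buf.unref()   # one owner remains, allocation stays alive
print(alloc.live)        # 1
buf.unref()              # last ref: Buffer.destroy frees the data
print(alloc.live)        # 0
```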
tensorflowtensorflow
load model can not load model from previous version
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source source tensorflow version tf 2 14 and tf 2 15 custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behavior hello I be try to load a model I previously save with tf 2 14 and get the display error I realize that the model be save with tf 2 14 and the environment I be try to load the model in be tf 2 15 I m not sure if this be an expected behavior but it seem odd since the version difference wasn t that big standalone code to reproduce the issue shell previously save model with tf 2 14 import tensorflow as tf from tensorflow import kera import numpy as np print version print tf version random datum datum np random random 100 10 label np random random 100 simple model normalizer keras layer normalization normalizer adapt data model keras sequential model add normalizer model add keras layer dense 10 model add keras layer dense 1 model compile loss mean squared error optimizer adam model fit datum label epoch 10 verbose 2 model save test keras load with tf 2 15 import tensorflow as tf from tensorflow import keras print version print tf version load model model keras model load model test keras relevant log output shell 2 15 0 traceback most recent call last file work user e n enesk phakinpro test load py line 8 in model keras model load model test keras file nas longleaf home enesk miniforge3 envs tf lib python3 9 site package keras src save save api py line 254 in load model return save lib load model file nas longleaf home enesk miniforge3 envs tf lib python3 9 site package keras src save saving lib py line 281 in load model raise e file nas longleaf home enesk miniforge3 envs tf lib python3 9 site package keras src save saving lib py line 246 in load model model deserialize keras object file nas longleaf 
home enesk miniforge3 envs tf lib python3 9 site package keras src save serialization lib py line 728 in deserialize keras object instance cls from config inner config file nas longleaf home enesk miniforge3 envs tf lib python3 9 site package keras src engine sequential py line 471 in from config model add layer file nas longleaf home enesk miniforge3 envs tf lib python3 9 site package tensorflow python trackable base py line 204 in method wrapper result method self args kwargs file nas longleaf home enesk miniforge3 envs tf lib python3 9 site package keras src util traceback util py line 70 in error handler raise e with traceback filter tb from none file nas longleaf home enesk miniforge3 envs tf lib python3 9 site package keras src layer preprocesse normalization py line 188 in build raise valueerror valueerror all axis value to be keep must have know shape get axis 1 input shape none none with unknown axis at index 1
tensorflowtensorflow
could not find device for node generateboundingboxproposal
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source source tensorflow version 2 15 0 custom code yes os platform and distribution no response mobile device no response python version no response bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behavior when use tf image generate bounding box proposal in cpu it raise the follow error notfounderror could not find device for node node generateboundingboxproposal generateboundingboxproposal post nms topn 300 all kernel register for op generateboundingboxproposal device gpu op generateboundingboxproposal name standalone code to reproduce the issue shell import tensorflow as tf score tf constant 0 9 0 8 0 7 0 6 0 5 0 4 bbox delta tf constant 1 1 1 1 2 2 2 2 image info tf constant 100 100 1 anchor tf constant 10 10 20 20 30 30 40 40 50 50 60 60 result tf image generate bounding box proposal score bbox delta image info anchor print result relevant log output no response
tensorflowtensorflow
local rendezvous be abort with status out of range end of sequence warn when iterate over a dataset
Bug
issue type bug have you reproduce the bug with tensorflow nightly yes source binary tensorflow version tf 2 16 custom code yes os platform and distribution linux ubuntu 22 04 mobile device no response python version 3 10 bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behavior there be a warning which appear after the last iteration over a dataset w tensorflow core framework local rendezvous cc 404 local rendezvous be abort with status out of range end of sequence this warning report be introduce by this commit I believe that simple iteration over a dataset shouldn t cause such behavior standalone code to reproduce the issue shell import tensorflow as tf range ds tf datum dataset range 10 for d in range ds print d relevant log output shell tf tensor 0 shape dtype int64 tf tensor 1 shape dtype int64 tf tensor 2 shape dtype int64 tf tensor 3 shape dtype int64 tf tensor 4 shape dtype int64 tf tensor 5 shape dtype int64 tf tensor 6 shape dtype int64 tf tensor 7 shape dtype int64 tf tensor 8 shape dtype int64 tf tensor 9 shape dtype int64 2024 02 15 08 27 36 782604 w tensorflow core framework local rendezvous cc 404 local rendezvous be abort with status out of range end of sequence
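As a workaround (not a fix for the underlying logging change), the C++-level warning can be silenced with `TF_CPP_MIN_LOG_LEVEL`, which must be set before tensorflow is imported:

```python
import os

# "0" = all messages, "1" = filter INFO, "2" = filter INFO and WARNING,
# "3" = errors only; "2" hides the end-of-sequence warning from
# local_rendezvous.cc without touching python-level logging
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

# import tensorflow only after the variable is set
print(os.environ["TF_CPP_MIN_LOG_LEVEL"])
```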
tensorflowtensorflow
error with custom keras model
Bug
Issue type: Bug

Have you reproduced the bug with TensorFlow Nightly? No

Source: source

TensorFlow version: v2.13.0-17-gf841394b1b7 2.13.1

Custom code: Yes

OS platform and distribution: Linux Ubuntu 20.04.6 LTS

Mobile device: No response

Python version: 3.9.18

Bazel version: No response

GCC/compiler version: No response

CUDA/cuDNN version: No response

GPU model and memory: No response

Current behavior?

Expected output (as produced by v2.8.3-90-g1b8f5c396f0 2.8.4):

```
{'name': 'custom_model', 'layers': [{'class_name': 'InputLayer', 'config': {'batch_input_shape': (None, 96, 96, 3), 'dtype': 'float32', 'sparse': False, 'ragged': False, 'name': 'input_1'}, 'name': 'input_1', 'inbound_nodes': []}, {'class_name': 'Conv2D', 'config': {'name': 'conv2d', 'trainable': True, 'dtype': 'float32', 'filters': 8, 'kernel_size': (3, 3), 'strides': (1, 1), 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': (1, 1), 'groups': 1, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'GlorotUniform', 'config': {'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'name': 'conv2d', 'inbound_nodes': [[['input_1', 0, 0, {}]]]}], 'input_layers': [['input_1', 0, 0]], 'output_layers': [['conv2d', 0, 0]]}
{'name': 'model', 'layers': [{'class_name': 'InputLayer', 'config': {'batch_input_shape': (None, 96, 96, 3), 'dtype': 'float32', 'sparse': False, 'ragged': False, 'name': 'input_1'}, 'name': 'input_1', 'inbound_nodes': []}, {'class_name': 'Conv2D', 'config': {'name': 'conv2d', 'trainable': True, 'dtype': 'float32', 'filters': 8, 'kernel_size': (3, 3), 'strides': (1, 1), 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': (1, 1), 'groups': 1, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'class_name': 'GlorotUniform', 'config': {'seed': None}}, 'bias_initializer': {'class_name': 'Zeros', 'config': {}}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'name': 'conv2d', 'inbound_nodes': [[['input_1', 0, 0, {}]]]}], 'input_layers': [['input_1', 0, 0]], 'output_layers': [['conv2d', 0, 0]]}
WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
WARNING:tensorflow:No training configuration found in the save file, so the model was *not* compiled. Compile it manually.
WARNING:tensorflow:No training configuration found in the save file, so the model was *not* compiled. Compile it manually.
Model: "custom_model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 input_1 (InputLayer)        [(None, 96, 96, 3)]       0

 conv2d (Conv2D)             (None, 94, 94, 8)         224

=================================================================
Total params: 224
Trainable params: 224
Non-trainable params: 0
_________________________________________________________________
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 input_1 (InputLayer)        [(None, 96, 96, 3)]       0

 conv2d (Conv2D)             (None, 94, 94, 8)         224

=================================================================
Total params: 224
Trainable params: 224
Non-trainable params: 0
_________________________________________________________________
```

Standalone code to reproduce the issue:

```python
import os

os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
import tensorflow as tf  # noqa: E402


class CustomModel(tf.keras.Model):
    def __init__(self, **kwargs):
        super(CustomModel, self).__init__(**kwargs)


input_shape = (96, 96, 3)
inputs = tf.keras.layers.Input(shape=input_shape)
outputs = tf.keras.layers.Conv2D(filters=8, kernel_size=3, strides=1)(inputs)

cust_model = CustomModel(inputs=inputs, outputs=outputs)
print(cust_model.get_config())

model = tf.keras.Model(inputs=inputs, outputs=outputs)
print(model.get_config())

cust_model_path = "/home/ubuntu/automltraining/my_custom_model.h5"
cust_model.save(cust_model_path)

model_path = "/home/ubuntu/automltraining/my_model.h5"
model.save(model_path)

cust_model2 = tf.keras.models.load_model(
    cust_model_path, custom_objects={"CustomModel": CustomModel}
)
model2 = tf.keras.models.load_model(model_path)

cust_model2.summary()
model2.summary()
```

Relevant log output:

```
{'name': 'custom_model', 'trainable': True, 'layers': [{'module': 'keras.layers', 'class_name': 'InputLayer', 'config': {'batch_input_shape': (None, 96, 96, 3), 'dtype': 'float32', 'sparse': False, 'ragged': False, 'name': 'input_1'}, 'registered_name': None, 'name': 'input_1', 'inbound_nodes': []}, {'module': 'keras.layers', 'class_name': 'Conv2D', 'config': {'name': 'conv2d', 'trainable': True, 'dtype': 'float32', 'filters': 8, 'kernel_size': (3, 3), 'strides': (1, 1), 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': (1, 1), 'groups': 1, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': (None, 96, 96, 3)}, 'name': 'conv2d', 'inbound_nodes': [[['input_1', 0, 0, {}]]]}], 'input_layers': [['input_1', 0, 0]], 'output_layers': [['conv2d', 0, 0]]}
{'name': 'model', 'trainable': True, 'layers': [{'module': 'keras.layers', 'class_name': 'InputLayer', 'config': {'batch_input_shape': (None, 96, 96, 3), 'dtype': 'float32', 'sparse': False, 'ragged': False, 'name': 'input_1'}, 'registered_name': None, 'name': 'input_1', 'inbound_nodes': []}, {'module': 'keras.layers', 'class_name': 'Conv2D', 'config': {'name': 'conv2d', 'trainable': True, 'dtype': 'float32', 'filters': 8, 'kernel_size': (3, 3), 'strides': (1, 1), 'padding': 'valid', 'data_format': 'channels_last', 'dilation_rate': (1, 1), 'groups': 1, 'activation': 'linear', 'use_bias': True, 'kernel_initializer': {'module': 'keras.initializers', 'class_name': 'GlorotUniform', 'config': {'seed': None}, 'registered_name': None}, 'bias_initializer': {'module': 'keras.initializers', 'class_name': 'Zeros', 'config': {}, 'registered_name': None}, 'kernel_regularizer': None, 'bias_regularizer': None, 'activity_regularizer': None, 'kernel_constraint': None, 'bias_constraint': None}, 'registered_name': None, 'build_config': {'input_shape': (None, 96, 96, 3)}, 'name': 'conv2d', 'inbound_nodes': [[['input_1', 0, 0, {}]]]}], 'input_layers': [['input_1', 0, 0]], 'output_layers': [['conv2d', 0, 0]]}
/home/ubuntu/miniconda3/envs/automl/lib/python3.9/site-packages/keras/src/engine/training.py:3000: UserWarning: You are saving your model as an HDF5 file via `model.save()`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')`.
  saving_api.save_model(
WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
Traceback (most recent call last):
  File "/home/ubuntu/automltraining/get_config_debug.py", line 28, in <module>
    cust_model2 = tf.keras.models.load_model(
  File "/home/ubuntu/miniconda3/envs/automl/lib/python3.9/site-packages/keras/src/saving/saving_api.py", line 238, in load_model
    return legacy_sm_saving_lib.load_model(
  File "/home/ubuntu/miniconda3/envs/automl/lib/python3.9/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/ubuntu/miniconda3/envs/automl/lib/python3.9/site-packages/keras/src/engine/training.py", line 3246, in from_config
    raise TypeError(
TypeError: Unable to revive model from config. When overriding the `get_config()` method, make sure that the returned config contains all items used as arguments in the constructor to `CustomModel`, which is the default behavior. You can override this default behavior by defining a `from_config(cls, config)` class method to specify how to create an instance of CustomModel from its config.

Received config={'name': 'custom_model', 'trainable': True}

Error encountered during deserialization: __init__() missing 2 required positional arguments: 'inputs' and 'outputs'
```
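Not part of the original report: one way to sidestep the failing `from_config()` path on 2.13 is to rebuild the architecture in code and restore only the weights from the HDF5 file, never calling `load_model()` on the subclassed model. A minimal sketch under that assumption (the `build_model` helper is introduced here for illustration):

```python
import os
import tempfile

import numpy as np
import tensorflow as tf


class CustomModel(tf.keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)


def build_model():
    # Rebuild the exact architecture from the report in code.
    inputs = tf.keras.layers.Input(shape=(96, 96, 3))
    outputs = tf.keras.layers.Conv2D(filters=8, kernel_size=3, strides=1)(inputs)
    return CustomModel(inputs=inputs, outputs=outputs)


path = os.path.join(tempfile.mkdtemp(), "my_custom_model.h5")
saved = build_model()
saved.save(path)  # same legacy HDF5 save that the report uses

# Workaround: avoid load_model()/from_config() entirely -- restore
# only the weights into a freshly built, identical graph.
restored = build_model()
restored.load_weights(path)

out = restored(np.zeros((1, 96, 96, 3), dtype="float32"))
```

This keeps the architecture definition in code rather than in the file, and it does not restore optimizer state, but it avoids the `TypeError` raised during deserialization.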