{"_id":"doc-en-pytorch-e5fa52a9fec54a72031dcbcbd77c314c0adae88ac93041cdefc836ec6c564b04","title":"","text":"Traceback (most recent call last):\n File \"\", line 322, in <module>\n rois_label = fasterRCNN(im_data, im_info, gt_boxes, num_boxes)\n File \"/usr/local/lib/python2.7/dist-\", line 491, in __call__\n result = self.forward(*input, **kwargs)\n File \"/home/faster-\", line 50, in forward\n rois, rpn_loss_cls, rpn_loss_bbox = self.RCNN_rpn(base_feat, im_info, gt_boxes, num_boxes)\n File \"/usr/local/lib/python2.7/dist-\", line 491, in __call__\n result = self.forward(*input, **kwargs)\n File \"/home/faster-\", line 87, in forward\n rpn_data = self.RPN_anchor_target((, gt_boxes, im_info, num_boxes))\n File \"/usr/local/lib/python2.7/dist-\", line 491, in __call__\n result = self.forward(*input, **kwargs)\n File \"/home/faster-\", line 157, in forward\n positive_weights = 1.0 / num_examples\n File \"/usr/local/lib/python2.7/dist-\", line 320, in __rdiv__\n return self.reciprocal() * other\nRuntimeError: reciprocal is not implemented for type\nException NameError: \"global name 'FileNotFoundError' is not defined\" in <bound method of object at 0x7fb842e6e3d0> ignored\nThis error comes up while running a PyTorch faster-rcnn repository. Any solutions, please?\nCan you use the issue template? We need more information to help you.\nUsing PyTorch 0.3.0 solves the problem.\nAre you using Python 2? AFAIK FileNotFoundError only exists in Python 3, so this should be fixed. Also, this error is hiding the real bug in the code, which seems to be the following: did you mean to take the reciprocal of a LongTensor?\nDoes the shutdown-workers FileNotFoundError happen on Python 2.7 as well?\nI haven't tested on py2, but I believe it should be an with errno.\nYup, I am using Python 2. To overcome this FileNotFoundError I have downgraded PyTorch from version 0.4.0 to 0.3.0. 
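The workaround suggested later in the thread (defining FileNotFoundError on Python 2) plus the fix for the underlying reciprocal error can be sketched as below; this is a sketch under the assumption that aliasing to IOError is acceptable for the errno check, and `num_examples` is a stand-in value, not the repository's actual variable:

```python
# Python 2/3 compatibility shim: FileNotFoundError only exists on
# Python 3, so on Python 2 we alias it to the closest built-in
# exception (an assumption -- IOError carries the errno being checked).
try:
    FileNotFoundError
except NameError:  # Python 2
    FileNotFoundError = IOError

# The underlying RuntimeError comes from `1.0 / num_examples` when
# num_examples is an integer (Long) tensor: __rdiv__ calls
# reciprocal(), which older PyTorch does not implement for integer
# types. Casting to float before dividing avoids it.
num_examples = 3  # stand-in value for illustration
positive_weights = 1.0 / float(num_examples)
```

With a real LongTensor count, the same idea applies: convert the count to a floating-point value before taking its reciprocal.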
Use PyTorch 0.3.0 with Python 2; it will work fine.\nAlternatively, upgrading to Python 3 will fix the issue (and Python 3 is nicer than Python 2)."} {"_id":"doc-en-pytorch-2800f71e1f12755ba55e3a8e70ff398118a06d101c21171bb009eed5375fe533","title":"","text":"We'll fix this though.\nFor a quick fix you can make Python 2 compatible by defining FileNotFoundError somewhere.\nThis error still exists in PyTorch 0.4.\nYes, it is fixed after 0.4.\nI am still getting it...\nDid you build PyTorch from source? It was fixed after 0.4 was released, so it will be in the next PyTorch release.\nI updated it via\nCan anyone help me with my problem?\nI was also working with faster r-cnn and got the error. Downgrading to 0.3.0 solved FileNotFoundError but will produce another error:\nself.ratio_list_batch[left_idx:(right_idx+1)] = ((np.float64)) # trainset ratio list, each batch is the same number\nTypeError: 'module' object is not callable\nWhere is this error related to in the above error message? Upgrading can solve this error but will produce the previous one. How do you cope with the second error?"} {"_id":"doc-en-pytorch-4979a1a04833169240c0bad7b12e4a15cd4efb4b3d5ef9e5517488d63a0cc554","title":"","text":"only supports fully named inputs; all input dimensions must have a name. When passing it an unnamed input, it errors out with the following message: It should really say \"Found unnamed dim at index 0 of Tensor[None, None]\".\nfixed in"} {"_id":"doc-en-pytorch-1de20edd2f7958b4095dd34eb2e3927620d004f0d1dfec850faeeac1d6e4b60e","title":"","text":"This is an older implementation that I think doesn't make any sense anymore. We should throw a NYI exception for it. 
The expected behavior for this should be:\nfixed in"} {"_id":"doc-en-pytorch-f32bbc53b4f3b92b53b5ef9ed6754d57475c03612f4070c1bfd9266ba097f7ab","title":"","text":"A = torch.zeros(5, 4)\nB = torch.arange(0, 9).view(3, 3)\nC = torch.arange(0, 15).view(3, 5)\nidxs = torch.LongTensor([0, 2, 4])\nA.index_add_(0, idxs, B)\n# RuntimeError: inconsistent tensor size, expected r [4], t [4] and src [3] to have the same number of elements, but got 4, 4 and 3 elements respectively at (...)/aten/src/TH/generic/THTensorMath.c:1008\nA.index_add_(0, idxs, C)\n# RuntimeError: inconsistent tensor size, expected r [4], t [4] and src [5] to have the same number of elements, but got 4, 4 and 5 elements respectively at (...)/aten/src/TH/generic/THTensorMath.c:1008\nSo far so good. But if we use CUDA...\nA = torch.zeros(5, 4).cuda()\nB = torch.arange(0, 9).view(3, 3).cuda()\nC = torch.arange(0, 15).view(3, 5).cuda()\nidxs = torch.LongTensor([0, 2, 4]).cuda()\nA.index_add_(0, idxs, B)\nprint(A)\n# 0 1 2 0\n# 0 0 0 0\n# 3 4 5 0\n# 0 0 0 0\n# 6 7 8 0\n# [ of size 5x4 (GPU 0)]\nOK, this looks wrong...\nA.zero_()\nA.index_add_(0, idxs, C)\nprint(A)\n# 0 1 2 3\n# 4 0 0 0\n# 5 6 7 8\n# 9 0 0 0\n# 10 11 12 13\n# [ of size 5x4 (GPU 0)]\nNow this looks definitely wrong. Increase C's dimensions to something like (3, 500), and it overwrites other tensors or triggers asserts. The same thing happens with index_copy_.\nI'll take it if no one's looking at it yet."} {"_id":"doc-en-pytorch-946f6b9e14808f1c7f57d50e741fb79df2db8e2dc593391fafd4a2221b5e2b8b","title":"","text":"The math rendering doesn't end where it should, making the text after it difficult to read:
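The size check that the CPU path of index_add_ enforces (and that the CUDA path in the report above skips) can be sketched as a standalone helper; `check_index_add` is a hypothetical function written for illustration, not part of PyTorch:

```python
def check_index_add(dst_shape, dim, index_len, src_shape):
    """Hypothetical re-implementation of the size check index_add_
    should perform on every device: src must match dst in every
    dimension except `dim`, and src.shape[dim] must equal the
    number of indices."""
    if len(src_shape) != len(dst_shape):
        raise ValueError("src and dst must have the same number of dims")
    if src_shape[dim] != index_len:
        raise ValueError("src.shape[dim] must equal len(index)")
    for d in range(len(src_shape)):
        if d != dim and src_shape[d] != dst_shape[d]:
            raise ValueError("non-indexed dims of src and dst must match")
    return True

# Mirrors the report: adding a (3, 3) or (3, 5) source into a
# (5, 4) destination along dim 0 should be rejected, exactly as
# the CPU path does; only a (3, 4) source is valid here.
check_index_add((5, 4), 0, 3, (3, 4))  # a valid call passes
```

With this check applied uniformly, the CUDA examples above would raise the same RuntimeError as their CPU counterparts instead of silently wrapping writes into the wrong elements.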