{"_id":"doc-en-pytorch-b6c54f2eecc6fc5c9dd06c86954ad437cfcdd2a7e5b2a92006d42bbc302684a3","title":"","text":"(old title) When build C extension, the error: FileNotFoundError was got. OS: Windows 10 pro PyTorch version: 0.4.0a0+ How you installed PyTorch (conda, pip, source): source Python version: 3.6.4 CUDA/cuDNN version: CUDA 9.0 GCC version (if compiling from source): msvc 14/15 (then compiling with CUDA I use VS2015, but when build extension, the program automatically use vs2017) When building c extension on Windows, I got the error: (The Chinese above means compiler success compile the library, and generated .lib and .exp) And the same error was got on a Linux Work Station. (gcc is 5.4.0)\nIn , copy the linked file from , for example , but in my Windows, and a Linux work station, the linked one is in: ,for example . So, there must be a bug, or error in pytorch's ffi or python's ffi. (pyhton 3.6.4, cffi 1.11.4), If it broke down because of change in cffi, I think I can create a PR. I do think this is because of the change of cffi api,\nCC who enabled extension build for Windows on\nIs this change documented in somewhere like Python SDK?"} {"_id":"doc-en-pytorch-73afc555bf492c7072a4f9994675674d19a7420810f08eaca2d074197550e516","title":"","text":"max pooling functions are not consistent with max functions. Below an example, every max pooling (be it 1d, 2d or 3d, adaptive or not) acts the same, on cpu or on cuda. Essentially, there are two fondamental differences : max pooling of all values is while for it's max pooling of nan and valid values is valid values, which means s get ignored, while for , as soon as there is a value, the result is . More generally, choosing explicetely how to deal with as in numpy () could be a solution, but maybe this is related to CuDNN's max pooling ? 
Built from latest sources (as of 05/17) PyTorch version: 0.5.0a0+ Is debug build: No CUDA used to build PyTorch: 9.1.85 OS: Ubuntu 16.04.4 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 CMake version: version 3.9.4 Python version: 3.6 Is CUDA available: Yes CUDA runtime version: 9.1.85 GPU models and configuration: GPU 0: Quadro M1000M Nvidia driver version: 390.30 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux- /usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a Versions of relevant libraries: [conda] magma-cuda91 2.3.0 1 pytorch [conda] torch 0.5.0a0+