Installing PyTorch on Jetson Nano (Ubuntu 18.04)

Published 2024-04-25 06:35:06


I am trying to install PyTorch on a Jetson Nano running Ubuntu 18.04. I am following this guide: https://dev.to/evanilukhin/guide-to-install-pytorch-with-cuda-on-ubuntu-18-04-5217

This is what I get when I try the following command:

(my_env) crigano@crigano-desktop:~$ python3.8 -m pip install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing
Collecting numpy
  Using cached numpy-1.20.2-cp38-cp38-manylinux2014_aarch64.whl (12.7 MB)
Collecting ninja
  Using cached ninja-1.10.0.post2.tar.gz (25 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Collecting pyyaml
  Using cached PyYAML-5.4.1-cp38-cp38-manylinux2014_aarch64.whl (818 kB)
ERROR: Could not find a version that satisfies the requirement mkl
ERROR: No matching distribution found for mkl

1 Answer
User
#1 · Posted 2024-04-25 06:35:06

If you just want to use PyTorch on a bare-metal Jetson Nano, simply install it with NVIDIA's pre-compiled binary wheel. Other packages can be found in the Jetson Zoo.
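
A minimal sketch of that route, assuming JetPack 4.x with Python 3.6 (the wheel filename below is illustrative only; use the build you actually download from the Jetson Zoo / NVIDIA forums):

# install the runtime dependencies the Jetson wheels commonly expect
sudo apt-get install -y python3-pip libopenblas-base libopenmpi-dev
pip3 install Cython numpy
# illustrative filename -- replace with the wheel matching your JetPack and Python version
pip3 install torch-1.8.0-cp36-cp36m-linux_aarch64.whl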

MKL is developed by Intel "to optimize code for current and future generations of Intel CPUs and GPUs." It apparently does run on other x86-based chips such as AMD's (even though Intel has historically intentionally crippled the library on non-Intel chips), but unsurprisingly Intel has no interest in supporting ARM devices and has not ported MKL to the ARM architecture.

If your goal is to use MKL for the math optimizations in numpy, then openblas is a viable alternative on ARM. libopenblas-base:arm64 and libopenblas-dev:arm64 come pre-installed on NVIDIA's "L4T PyTorch" Docker images. You can confirm that numpy detects them with numpy.__config__.show(). Here is what I get with numpy 1.12 under Python 3.6.9 on the l4t-pytorch:r32.5.0-pth1.6-py3 image:

blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/lib/aarch64-linux-gnu']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/lib/aarch64-linux-gnu']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/lib/aarch64-linux-gnu']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/lib/aarch64-linux-gnu']
    language = c
    define_macros = [('HAVE_CBLAS', None)]

So it should use openblas instead of MKL for its math optimizations. If your use case is numpy optimization as well, you can likewise use openblas and skip MKL... which is fortunate, because MKL is not available here anyway. 😅
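
To reproduce that check without installing anything on the host, one option is to run it inside the same container (a sketch; the nvcr.io registry path is my assumption about where NVIDIA publishes the L4T PyTorch images, and the tag is the one mentioned above):

# print numpy's detected BLAS/LAPACK backends inside the container
sudo docker run -it --rm --runtime nvidia \
    nvcr.io/nvidia/l4t-pytorch:r32.5.0-pth1.6-py3 \
    python3 -c "import numpy; numpy.__config__.show()"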
