## Checklist

### GitHub
- [x] I've given this PR a concise, self-descriptive, and meaningful title
- [x] I've linked relevant issues in the PR body
- [x] I've applied the relevant labels to this PR
- [x] I've assigned a reviewer
### PR contents

## Description
~~Use the newish `--extra-index-url` repo that torch put up at https://download.pytorch.org/whl in preference to `--find-links`. This means a lot of the installation gymnastics can be streamlined.~~
Streamline the installation by moving all dependencies back into `setup.py`. This essentially redoes #1000, but without the disallowed direct `.whl` links or the `pip install ivadomed[cpu]` / `pip install ivadomed[gpu]` extras.
Effects:

- Each `requirements_x.txt` becomes an "extra", giving a consistent installation process no matter whether people are installing from git, from PyPI, or as a dependency of another package. There's no fumbling with `pip install -r requirements_dev.txt` or `pip install -r requirements_gpu.txt`.
- Drops `requirements_gpu.txt` entirely, declaring our dependency on `torch`/`torchvision` in `setup.py` directly. Doing this fixes #1130 and #861, making sure downstream packages (like SCT) can safely depend on ivadomed. This even resolves #996.
- `torch` is loosened to `torch~=1.8.1` (meaning 1.8.1, 1.8.2, 1.8.3, …), which enables `--extra-index-url https://download.pytorch.org/whl/...` for a wider set of platforms (see below).
- CUDA 11 is no longer mandatory -- it never really was.
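Concretely, the extras restructuring described above might look roughly like this in `setup.py` (a sketch only; the `dev`/`docs` extras names and their package lists are hypothetical placeholders, not the real `requirements_x.txt` contents):

```python
# Sketch of the dependency declarations this PR moves into setup.py.
# The extras' package lists below are illustrative placeholders.

# torch/torchvision declared directly, replacing requirements_gpu.txt.
# "~=1.8.1" is a compatible-release pin: it permits 1.8.1, 1.8.2, ... but not 1.9.
INSTALL_REQUIRES = ["torch~=1.8.1", "torchvision"]

# Each former requirements_x.txt becomes an extras_require key:
EXTRAS_REQUIRE = {
    "dev": ["pytest"],    # stands in for requirements_dev.txt
    "docs": ["sphinx"],   # stands in for a hypothetical requirements_docs.txt
}

# These dicts would be passed through to setuptools.setup(), e.g.:
#   setup(..., install_requires=INSTALL_REQUIRES, extras_require=EXTRAS_REQUIRE)
```

With this shape, `pip install -r requirements_dev.txt` becomes `pip install ivadomed[dev]`, which works identically whether installing from git, from PyPI, or as another package's dependency.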
Dropping `requirements_gpu.txt` is, IMO, the biggest change here. Instead of trying to second-guess torch, we will conform to their decision to put CPU-only Windows/Mac builds, but CUDA 10.* GPU Linux builds, on PyPI. The CUDA 10 GPU builds are a waste of space for the majority of Linux users, while for a minority (i.e. us, when working on romane.neuro.polymtl.ca) they are broken because those users need CUDA 11 instead; on the other hand, the CPU builds mean Windows users can't train models if that's something they want to try to help out with.
But @kanishk16 noticed that torch provides an escape hatch to cover all these cases now, one that's better than us trying to handle it in our `setup.py` (i.e. #1000's experiment with `pip install ivadomed[cpu]` / `pip install ivadomed[gpu]`): torch hosts custom pip repos people can use, like

```shell
pip install --extra-index-url https://download.pytorch.org/whl/cpu ivadomed    # e.g. for Linux users without GPUs
pip install --extra-index-url https://download.pytorch.org/whl/cu102 ivadomed  # e.g. for Windows users with GPUs
pip install --extra-index-url https://download.pytorch.org/whl/cu113 ivadomed  # e.g. for Linux users with very new GPUs
```
This PR clears the way for people to use those by getting ivadomed out of the business of assuming what hardware people have. `torch`'s default (=PyPI) builds work for 99% of systems; people on the bleeding edge (notably: our romane.neuro.polymtl.ca system, or people who want to do deep learning natively on Windows) can adapt their `pip.conf` to it; but that's not, and shouldn't be, in ivadomed's scope. We will document "some amount" of this in a followup that's already started (https://github.com/ivadomed/ivadomed/pull/1125), but, again, mostly we should stick close to torch's decisions to avoid friction.
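As an illustration of the `pip.conf` adaptation mentioned above (assuming the per-user config location on Linux; a CUDA 11 machine like romane would point at the `cu113` index):

```ini
# ~/.config/pip/pip.conf -- per-user pip configuration on Linux
[global]
# Ask pip to also consult torch's CUDA 11.3 wheel index; PyPI stays primary.
extra-index-url = https://download.pytorch.org/whl/cu113
```

With that in place, a plain `pip install ivadomed` on such a system picks up the CUDA 11.3 `torch` wheels automatically.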
(Please ignore the branch name; it was a naive choice from the first version of this branch. Something like 'torch-install' would have been better.)
## Linked issues
Resolves #996. Fixes https://github.com/ivadomed/ivadomed/issues/1130, and also fixes #861, which is part of fixing https://github.com/spinalcordtoolbox/spinalcordtoolbox/pull/3790.
To be followed by https://github.com/ivadomed/ivadomed/pull/1125.