We currently hardwire which components to download and use in the CI:
cuda-python/.github/actions/fetch_ctk/action.yml (lines 85 to 92 in d425a88):

```bash
populate_cuda_path cuda_nvcc
populate_cuda_path cuda_cudart
populate_cuda_path cuda_nvrtc
populate_cuda_path cuda_profiler_api
populate_cuda_path cuda_cccl
if [[ "$(cut -d '.' -f 1 <<< ${{ inputs.cuda-version }})" -ge 12 ]]; then
  populate_cuda_path libnvjitlink
fi
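A developer-configurable version could loop over a list instead of hardwiring each call. A minimal sketch, assuming a space-separated component list is passed in; the `CTK_COMPONENTS` variable name is made up for illustration, and `populate_cuda_path` is stubbed out here since the real helper lives in the action:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in stub for the real populate_cuda_path helper in fetch_ctk/action.yml.
populate_cuda_path() {
  echo "would fetch: $1"
}

# Hypothetical input: a space-separated component list supplied by the caller.
COMPONENTS="${CTK_COMPONENTS:-cuda_nvcc cuda_cudart cuda_nvrtc cuda_profiler_api cuda_cccl}"

for comp in ${COMPONENTS}; do
  populate_cuda_path "${comp}"
done
```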
The artifacts are cached in GitHub Caches space for reuse:
https://github.com/NVIDIA/cuda-python/actions/caches
However, this means that whenever we need to update this component list, a maintainer (me 😓) has to use admin privileges to remove the cache first. I can do that, no problem, but it would be better to make it developer-configurable so that
- the action takes a list of components as input, same as the other 3rd-party action we use (as a bandaid!):
  cuda-python/.github/workflows/test-wheel-windows.yml (lines 161 to 165 in 0dfae43):

  ```yaml
  uses: Jimver/cuda-toolkit@v0.2.21
  with:
    cuda: ${{ inputs.cuda-version }}
    method: 'network'
    sub-packages: ${{ env.MINI_CTK_DEPS }}
  ```
- the action computes the cache key using the component names, so that when the list is updated the cache is invalidated automatically
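The second point could be as simple as folding a hash of the component list into the cache key. A hedged sketch only; the key format and the `MINI_CTK_DEPS`/`CUDA_VERSION` names are assumptions for illustration, not the actual action code:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical: the component list as it might arrive via an action input.
MINI_CTK_DEPS="cuda_nvcc cuda_cudart cuda_nvrtc cuda_profiler_api cuda_cccl"

# Hash the list so that any edit to it produces a different cache key,
# invalidating the old cache automatically (no admin intervention needed).
DEPS_HASH="$(echo -n "${MINI_CTK_DEPS}" | sha256sum | cut -d ' ' -f 1)"
CACHE_KEY="mini-ctk-${CUDA_VERSION:-12.6}-${DEPS_HASH}"
echo "${CACHE_KEY}"
```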
cc @cryos @carterbox for vis