@jaypeche
Created January 28, 2026 20:18
[107/522] /opt/cuda/bin/nvcc -forward-unknown-to-host-compiler -ccbin=/usr/x86_64-pc-linux-gnu/gcc-bin/14 -DGGML_BACKEND_BUILD -DGGML_BACKEND_DL -DGGML_BACKEND_SHARED -DGGML_COMMIT=0x0 -DGGML_CUDA_PEER_MAX_BATCH_SIZE=128 -DGGML_CUDA_USE_GRAPHS -DGGML_SHARED -DGGML_VERSION=0x0 -DNDEBUG -Dggml_cuda_EXPORTS -I/var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/work/ollama-0.14.2/ml/backend/ggml/ggml/src -I/var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/work/ollama-0.14.2/ml/backend/ggml/ggml/src/include -I/var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/work/ollama-0.14.2/ml/backend/ggml/ggml/src/ggml-cpu -I/var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/work/ollama-0.14.2/ml/backend/ggml/ggml/src/ggml-cpu/amx -I/var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/work/ollama-0.14.2/ml/backend/ggml/ggml/src/ggml-cuda/.. -I/var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/work/ollama-0.14.2/ml/backend/ggml/ggml/src/../include -isystem /opt/cuda/targets/x86_64-linux/include -O2 -g -DNDEBUG -std=c++17 -arch=all -Xcompiler=-fPIC -use_fast_math -extended-lambda -compress-mode=default -Xcompiler -Wno-pedantic -MD -MT ml/backend/ggml/ggml/src/ggml-cuda/CMakeFiles/ggml-cuda.dir/binbcast.cu.o -MF ml/backend/ggml/ggml/src/ggml-cuda/CMakeFiles/ggml-cuda.dir/binbcast.cu.o.d -x cu -c /var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/work/ollama-0.14.2/ml/backend/ggml/ggml/src/ggml-cuda/binbcast.cu -o ml/backend/ggml/ggml/src/ggml-cuda/CMakeFiles/ggml-cuda.dir/binbcast.cu.o
nvcc warning : Support for offline compilation for architectures prior to '<compute/sm/lto>_75' will be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).
ninja: build stopped: subcommand failed.
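The warning above is triggered by the `-arch=all` flag in the nvcc invocation, which asks nvcc to emit code for every architecture it supports, including the deprecated pre-sm_75 targets. Note the warning itself is non-fatal, and the actual compile error is not captured in this excerpt. A hedged workaround sketch, assuming the build honors CMake's standard `CUDAARCHS` environment variable (the default for `CMAKE_CUDA_ARCHITECTURES`); the architecture list below is illustrative, pick the one matching your GPU:

```shell
# Illustrative sketch: restrict CUDA codegen to sm_75 and newer so nvcc
# never compiles for deprecated targets. Assumes the ollama ebuild lets
# CUDAARCHS flow through to CMake's CMAKE_CUDA_ARCHITECTURES.
CUDAARCHS="75;86;89" emerge --oneshot '=sci-ml/ollama-0.14.2::guru'
```

If the ebuild hard-codes `-arch=all` in its own CMake arguments, this variable would be ignored and the flag would have to be overridden via `mycmakeargs` instead.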
* ERROR: sci-ml/ollama-0.14.2::guru failed (compile phase):
* ninja -v -l14 -j15 failed
*
* Call stack:
* ebuild.sh, line 143: Called src_compile
* environment, line 2762: Called cmake_src_compile
* environment, line 1305: Called cmake_build
* environment, line 1217: Called eninja
* environment, line 1685: Called die
* The specific snippet of code:
* "$@" || die -n "${*} failed"
*
* If you need support, post the output of `emerge --info '=sci-ml/ollama-0.14.2::guru'`,
* the complete build log and the output of `emerge -pqv '=sci-ml/ollama-0.14.2::guru'`.
* The complete build log is located at '/var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/temp/build.log'.
* The ebuild environment file is located at '/var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/temp/environment'.
* Working directory: '/var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/work/ollama-0.14.2_build'
* S: '/var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/work/ollama-0.14.2'
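The `eninja` frame in the call stack is Portage's generic run-or-die wrapper: it executes the ninja command and, on a nonzero exit status, calls `die` with the command string, producing the `ninja -v -l14 -j15 failed` message above. A minimal self-contained sketch of that pattern (hypothetical names, not the real `die` from `ebuild.sh`):

```shell
#!/bin/sh
# Hypothetical stand-in for Portage's die(): report the failure and
# propagate a nonzero status, mirroring `"$@" || die -n "${*} failed"`.
die() { echo "ERROR: $*" >&2; return 1; }

# Sketch of the eninja-style wrapper: run the given command verbatim,
# and die with the full command string if it fails.
eninja_sketch() {
    "$@" || die "${*} failed"
}

eninja_sketch true && echo "build ok"
eninja_sketch false || echo "die was triggered"
```

Running the sketch prints `build ok`, then the `ERROR: false failed` diagnostic on stderr, then `die was triggered`.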
>>> Failed to emerge sci-ml/ollama-0.14.2, Log file:
>>> '/var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/temp/build.log'
* Messages for package sci-ml/ollama-0.14.2:
* ERROR: sci-ml/ollama-0.14.2::guru failed (compile phase):
* ninja -v -l14 -j15 failed
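When reporting this failure, the two commands the error message asks for can be run as-is; their output should be attached alongside the build log (command fragment, not runnable outside a Gentoo host):

```shell
# Commands requested by the Portage error message above; attach their
# output together with the build log at
# /var/tmp/notmpfs/portage/sci-ml/ollama-0.14.2/temp/build.log
emerge --info '=sci-ml/ollama-0.14.2::guru'
emerge -pqv '=sci-ml/ollama-0.14.2::guru'
```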