
Set xla_tpu_enable_flash_attention=false to enable libtpu pin update … #10027
Triggered via: push, September 23, 2024 18:38
Status: Failure
Total duration: 1d 1h 7m 4s
Artifacts: 4
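For context on the commit title: xla_tpu_enable_flash_attention is a libtpu XLA runtime flag, and in PyTorch/XLA such flags are typically passed through the LIBTPU_INIT_ARGS environment variable before a TPU job starts. A minimal sketch of disabling the flag this way (the training-script name is hypothetical):

```shell
# Disable TPU flash attention in libtpu before launching the job.
# LIBTPU_INIT_ARGS is read by libtpu at initialization time.
export LIBTPU_INIT_ARGS="--xla_tpu_enable_flash_attention=false"

# Launch the workload (hypothetical script name):
# python train.py
```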
Jobs:
get-torch-commit: 1s
Build XLA CUDA plugin / build: 23m 29s
Build PyTorch/XLA / build: 41m 37s
Build PyTorch with CUDA / build: 24m 30s
TPU tests / tpu-test: 0s
Build docs / build-docs: 1m 33s
Matrix: GPU tests / test
Matrix: CPU tests / test
Matrix: GPU tests requiring torch CUDA / test

Annotations

3 errors
GPU tests requiring torch CUDA / test (triton_tests, linux.g5.4xlarge.nvidia.gpu)
Process completed with exit code 1.
GPU tests requiring torch CUDA / test (python_tests, linux.8xlarge.nvidia.gpu)
The job was canceled because "triton_tests_linux_g5_4xl" failed.
TPU tests / tpu-test
This request was automatically failed because there were no enabled runners online to process the request for more than 1 day.

Artifacts

Produced during runtime:

Name: Size
cpp-test-bin: 696 MB
cuda-plugin: 130 MB
torch-with-cuda: 342 MB
torch-xla-wheels: 216 MB