Support logits_soft_cap parameter in paged_attention (#7704) #9391

Triggered via push: July 17, 2024 20:17
Status: Success
Total duration: 1h 18m 39s
Artifacts: 4
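
This run builds and tests a change adding a logits_soft_cap parameter to the paged_attention kernel (#7704). For context, logits soft capping is commonly implemented as cap * tanh(logits / cap) applied to the raw attention scores before the softmax; the sketch below shows that transform in plain PyTorch. The function name, tensor shapes, and cap value are illustrative assumptions, not the actual PyTorch/XLA kernel signature.

import torch

def soft_cap_logits(logits: torch.Tensor, logits_soft_cap: float) -> torch.Tensor:
    # Squash raw attention scores into (-logits_soft_cap, +logits_soft_cap)
    # using the common tanh formulation. Whether paged_attention applies
    # exactly this form internally is an assumption here.
    return logits_soft_cap * torch.tanh(logits / logits_soft_cap)

# Illustrative usage: cap query-key scores before the softmax.
scores = torch.randn(2, 8, 128, 128)  # (batch, heads, q_len, kv_len); made-up shapes
attn_weights = torch.softmax(soft_cap_logits(scores, logits_soft_cap=50.0), dim=-1)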
Jobs

get-torch-commit: 2s
Build XLA CUDA plugin / build: 5m 15s
Build PyTorch/XLA / build: 23m 11s
Build PyTorch with CUDA / build: 24m 49s
Matrix: GPU tests / test
Matrix: CPU tests / test
Matrix: GPU tests requiring torch CUDA / test

Annotations

2 errors
GPU tests requiring torch CUDA / test (triton_tests, linux.g5.4xlarge.nvidia.gpu):
unable to access 'https://gitlab.com/libeigen/eigen.git/': The requested URL returned error: 502
clone of 'https://gitlab.com/libeigen/eigen.git' into submodule path '/__w/xla/xla/pytorch/third_party/eigen' failed

Both errors stem from the same transient failure: the eigen submodule fetch from gitlab.com returned HTTP 502, which caused the submodule clone step to fail.

Artifacts

Produced during runtime
Name              Size
cpp-test-bin      664 MB
cuda-plugin       115 MB
torch-with-cuda   339 MB
torch-xla-wheels  210 MB