Support dist.all_to_all_single #8064

Merged: 3 commits into master from piz/all-to-all on Sep 25, 2024

Conversation

@zpcore (Collaborator) commented Sep 24, 2024

Add support for torch.distributed.all_to_all_single in both the dynamo and non-dynamo cases.

Note that there is a function signature mismatch between torch's all_to_all_single and the XLA AllToAll op. To leverage the AllToAll op, specifying input_split_sizes and output_split_sizes is not supported at this time. See test_collective_ops_tpu.py for usage; a rough sketch follows below.
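
A minimal usage sketch, loosely modeled on the pattern in the collective-ops tests. The launcher, the helper name all_to_all_single_example, and the exact tensor values are illustrative assumptions, not code from this PR; only the even-split call (no input_split_sizes / output_split_sizes) reflects what the PR supports.

```python
import torch
import torch.distributed as dist
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.xla_backend  # noqa: F401  registers the "xla" process group backend


def all_to_all_single_example(use_dynamo: bool = False):
    # Sketch only: assumes one process per XLA device, started by an SPMD launcher.
    dist.init_process_group("xla", init_method="xla://")
    device = xm.xla_device()

    def fn(output, input):
        # Even split only: input_split_sizes / output_split_sizes are omitted,
        # since specifying them is not supported by this PR.
        dist.all_to_all_single(output, input)
        return output

    world_size = xr.world_size()
    # Rank r contributes [r*10, r*10 + 1, ...]; after the collective, slot i of
    # every rank's output holds the chunk sent by rank i.
    input = torch.arange(world_size, dtype=torch.float, device=device) + xr.global_ordinal() * 10
    output = torch.zeros_like(input)

    fn = torch.compile(fn, backend="openxla") if use_dynamo else fn  # dynamo vs. non-dynamo path
    fn(output, input)
    xm.mark_step()  # materialize the lazy collective in the non-dynamo path
    return output.cpu()
```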

@zpcore zpcore marked this pull request as ready for review September 24, 2024 21:08
@will-cromar (Collaborator) commented:

Thanks!

@zpcore zpcore merged commit b378a28 into master Sep 25, 2024
23 checks passed
@zpcore zpcore deleted the piz/all-to-all branch September 25, 2024 21:54
zpcore added a commit that referenced this pull request Sep 26, 2024