Update doc from commit a6ab3fe
torchxlabot2 committed Jul 16, 2024
1 parent cc84561 commit 56220b2
Showing 22 changed files with 78 additions and 77 deletions.
6 changes: 3 additions & 3 deletions master/_modules/index.html
@@ -265,7 +265,7 @@


<div class="version">
- master (2.5.0+gite523fdf )
+ master (2.5.0+gita6ab3fe )
</div>


@@ -300,10 +300,10 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../debug.html">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="../gpu.html">How to run with PyTorch/XLA:GPU</a></li>
<li class="toctree-l1"><a class="reference internal" href="../multi_process_distributed.html">How to do <code class="docutils literal notranslate"><span class="pre">DistributedDataParallel</span></code></a></li>
<li class="toctree-l1"><a class="reference internal" href="../multi_process_distributed.html">How to do DistributedDataParallel(DDP)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../runtime.html">PJRT Runtime</a></li>
<li class="toctree-l1"><a class="reference internal" href="../spmd.html">PyTorch/XLA SPMD User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../spmd.html#fully-sharded-data-parallel-via-spmd">Fully Sharded Data Parallel via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../spmd.html#fully-sharded-data-parallel-fsdp-via-spmd">Fully Sharded Data Parallel(FSDP) via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../spmd.html#pytorch-xla-spmd-advanced-topics">PyTorch/XLA SPMD advanced topics</a></li>
<li class="toctree-l1"><a class="reference internal" href="../spmd.html#distributed-checkpointing">Distributed Checkpointing</a></li>
<li class="toctree-l1"><a class="reference internal" href="../torch_compile.html">TorchDynamo(torch.compile) integration in PyTorch XLA</a></li>
6 changes: 3 additions & 3 deletions master/_modules/torch_xla/core/xla_model.html
@@ -265,7 +265,7 @@


<div class="version">
- master (2.5.0+gite523fdf )
+ master (2.5.0+gita6ab3fe )
</div>


@@ -300,10 +300,10 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../../debug.html">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../gpu.html">How to run with PyTorch/XLA:GPU</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../multi_process_distributed.html">How to do <code class="docutils literal notranslate"><span class="pre">DistributedDataParallel</span></code></a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../multi_process_distributed.html">How to do DistributedDataParallel(DDP)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../runtime.html">PJRT Runtime</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html">PyTorch/XLA SPMD User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#fully-sharded-data-parallel-via-spmd">Fully Sharded Data Parallel via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#fully-sharded-data-parallel-fsdp-via-spmd">Fully Sharded Data Parallel(FSDP) via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#pytorch-xla-spmd-advanced-topics">PyTorch/XLA SPMD advanced topics</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#distributed-checkpointing">Distributed Checkpointing</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../torch_compile.html">TorchDynamo(torch.compile) integration in PyTorch XLA</a></li>
6 changes: 3 additions & 3 deletions master/_modules/torch_xla/debug/metrics.html
@@ -265,7 +265,7 @@


<div class="version">
- master (2.5.0+gite523fdf )
+ master (2.5.0+gita6ab3fe )
</div>


@@ -300,10 +300,10 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../../debug.html">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../gpu.html">How to run with PyTorch/XLA:GPU</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../multi_process_distributed.html">How to do <code class="docutils literal notranslate"><span class="pre">DistributedDataParallel</span></code></a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../multi_process_distributed.html">How to do DistributedDataParallel(DDP)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../runtime.html">PJRT Runtime</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html">PyTorch/XLA SPMD User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#fully-sharded-data-parallel-via-spmd">Fully Sharded Data Parallel via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#fully-sharded-data-parallel-fsdp-via-spmd">Fully Sharded Data Parallel(FSDP) via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#pytorch-xla-spmd-advanced-topics">PyTorch/XLA SPMD advanced topics</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#distributed-checkpointing">Distributed Checkpointing</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../torch_compile.html">TorchDynamo(torch.compile) integration in PyTorch XLA</a></li>
6 changes: 3 additions & 3 deletions master/_modules/torch_xla/distributed/parallel_loader.html
@@ -265,7 +265,7 @@


<div class="version">
- master (2.5.0+gite523fdf )
+ master (2.5.0+gita6ab3fe )
</div>


@@ -300,10 +300,10 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../../debug.html">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../gpu.html">How to run with PyTorch/XLA:GPU</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../multi_process_distributed.html">How to do <code class="docutils literal notranslate"><span class="pre">DistributedDataParallel</span></code></a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../multi_process_distributed.html">How to do DistributedDataParallel(DDP)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../runtime.html">PJRT Runtime</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html">PyTorch/XLA SPMD User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#fully-sharded-data-parallel-via-spmd">Fully Sharded Data Parallel via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#fully-sharded-data-parallel-fsdp-via-spmd">Fully Sharded Data Parallel(FSDP) via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#pytorch-xla-spmd-advanced-topics">PyTorch/XLA SPMD advanced topics</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#distributed-checkpointing">Distributed Checkpointing</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../torch_compile.html">TorchDynamo(torch.compile) integration in PyTorch XLA</a></li>
6 changes: 3 additions & 3 deletions master/_modules/torch_xla/distributed/spmd/xla_sharding.html
@@ -265,7 +265,7 @@


<div class="version">
- master (2.5.0+gite523fdf )
+ master (2.5.0+gita6ab3fe )
</div>


@@ -300,10 +300,10 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../../../debug.html">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../../gpu.html">How to run with PyTorch/XLA:GPU</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../../multi_process_distributed.html">How to do <code class="docutils literal notranslate"><span class="pre">DistributedDataParallel</span></code></a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../../multi_process_distributed.html">How to do DistributedDataParallel(DDP)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../../runtime.html">PJRT Runtime</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../../spmd.html">PyTorch/XLA SPMD User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../../spmd.html#fully-sharded-data-parallel-via-spmd">Fully Sharded Data Parallel via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../../spmd.html#fully-sharded-data-parallel-fsdp-via-spmd">Fully Sharded Data Parallel(FSDP) via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../../spmd.html#pytorch-xla-spmd-advanced-topics">PyTorch/XLA SPMD advanced topics</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../../spmd.html#distributed-checkpointing">Distributed Checkpointing</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../../torch_compile.html">TorchDynamo(torch.compile) integration in PyTorch XLA</a></li>
@@ -265,7 +265,7 @@


<div class="version">
- master (2.5.0+gite523fdf )
+ master (2.5.0+gita6ab3fe )
</div>


@@ -300,10 +300,10 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../../debug.html">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../gpu.html">How to run with PyTorch/XLA:GPU</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../multi_process_distributed.html">How to do <code class="docutils literal notranslate"><span class="pre">DistributedDataParallel</span></code></a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../multi_process_distributed.html">How to do DistributedDataParallel(DDP)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../runtime.html">PJRT Runtime</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html">PyTorch/XLA SPMD User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#fully-sharded-data-parallel-via-spmd">Fully Sharded Data Parallel via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#fully-sharded-data-parallel-fsdp-via-spmd">Fully Sharded Data Parallel(FSDP) via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#pytorch-xla-spmd-advanced-topics">PyTorch/XLA SPMD advanced topics</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#distributed-checkpointing">Distributed Checkpointing</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../torch_compile.html">TorchDynamo(torch.compile) integration in PyTorch XLA</a></li>
6 changes: 3 additions & 3 deletions master/_modules/torch_xla/experimental/eager.html
@@ -265,7 +265,7 @@


<div class="version">
- master (2.5.0+gite523fdf )
+ master (2.5.0+gita6ab3fe )
</div>


@@ -300,10 +300,10 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../../debug.html">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../gpu.html">How to run with PyTorch/XLA:GPU</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../multi_process_distributed.html">How to do <code class="docutils literal notranslate"><span class="pre">DistributedDataParallel</span></code></a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../multi_process_distributed.html">How to do DistributedDataParallel(DDP)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../runtime.html">PJRT Runtime</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html">PyTorch/XLA SPMD User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#fully-sharded-data-parallel-via-spmd">Fully Sharded Data Parallel via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#fully-sharded-data-parallel-fsdp-via-spmd">Fully Sharded Data Parallel(FSDP) via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#pytorch-xla-spmd-advanced-topics">PyTorch/XLA SPMD advanced topics</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../spmd.html#distributed-checkpointing">Distributed Checkpointing</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../torch_compile.html">TorchDynamo(torch.compile) integration in PyTorch XLA</a></li>
6 changes: 3 additions & 3 deletions master/_modules/torch_xla/runtime.html
@@ -265,7 +265,7 @@


<div class="version">
- master (2.5.0+gite523fdf )
+ master (2.5.0+gita6ab3fe )
</div>


@@ -300,10 +300,10 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../debug.html">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../gpu.html">How to run with PyTorch/XLA:GPU</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../multi_process_distributed.html">How to do <code class="docutils literal notranslate"><span class="pre">DistributedDataParallel</span></code></a></li>
<li class="toctree-l1"><a class="reference internal" href="../../multi_process_distributed.html">How to do DistributedDataParallel(DDP)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../runtime.html">PJRT Runtime</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../spmd.html">PyTorch/XLA SPMD User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../spmd.html#fully-sharded-data-parallel-via-spmd">Fully Sharded Data Parallel via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../spmd.html#fully-sharded-data-parallel-fsdp-via-spmd">Fully Sharded Data Parallel(FSDP) via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../spmd.html#pytorch-xla-spmd-advanced-topics">PyTorch/XLA SPMD advanced topics</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../spmd.html#distributed-checkpointing">Distributed Checkpointing</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../torch_compile.html">TorchDynamo(torch.compile) integration in PyTorch XLA</a></li>
6 changes: 3 additions & 3 deletions master/_modules/torch_xla/torch_xla.html
@@ -265,7 +265,7 @@


<div class="version">
- master (2.5.0+gite523fdf )
+ master (2.5.0+gita6ab3fe )
</div>


@@ -300,10 +300,10 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../debug.html">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../gpu.html">How to run with PyTorch/XLA:GPU</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../multi_process_distributed.html">How to do <code class="docutils literal notranslate"><span class="pre">DistributedDataParallel</span></code></a></li>
<li class="toctree-l1"><a class="reference internal" href="../../multi_process_distributed.html">How to do DistributedDataParallel(DDP)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../runtime.html">PJRT Runtime</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../spmd.html">PyTorch/XLA SPMD User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../spmd.html#fully-sharded-data-parallel-via-spmd">Fully Sharded Data Parallel via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../spmd.html#fully-sharded-data-parallel-fsdp-via-spmd">Fully Sharded Data Parallel(FSDP) via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../spmd.html#pytorch-xla-spmd-advanced-topics">PyTorch/XLA SPMD advanced topics</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../spmd.html#distributed-checkpointing">Distributed Checkpointing</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../torch_compile.html">TorchDynamo(torch.compile) integration in PyTorch XLA</a></li>
6 changes: 3 additions & 3 deletions master/debug.html
@@ -267,7 +267,7 @@


<div class="version">
- master (2.5.0+gite523fdf )
+ master (2.5.0+gita6ab3fe )
</div>


@@ -302,10 +302,10 @@
<ul class="current">
<li class="toctree-l1 current"><a class="current reference internal" href="#">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="gpu.html">How to run with PyTorch/XLA:GPU</a></li>
<li class="toctree-l1"><a class="reference internal" href="multi_process_distributed.html">How to do <code class="docutils literal notranslate"><span class="pre">DistributedDataParallel</span></code></a></li>
<li class="toctree-l1"><a class="reference internal" href="multi_process_distributed.html">How to do DistributedDataParallel(DDP)</a></li>
<li class="toctree-l1"><a class="reference internal" href="runtime.html">PJRT Runtime</a></li>
<li class="toctree-l1"><a class="reference internal" href="spmd.html">PyTorch/XLA SPMD User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="spmd.html#fully-sharded-data-parallel-via-spmd">Fully Sharded Data Parallel via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="spmd.html#fully-sharded-data-parallel-fsdp-via-spmd">Fully Sharded Data Parallel(FSDP) via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="spmd.html#pytorch-xla-spmd-advanced-topics">PyTorch/XLA SPMD advanced topics</a></li>
<li class="toctree-l1"><a class="reference internal" href="spmd.html#distributed-checkpointing">Distributed Checkpointing</a></li>
<li class="toctree-l1"><a class="reference internal" href="torch_compile.html">TorchDynamo(torch.compile) integration in PyTorch XLA</a></li>
6 changes: 3 additions & 3 deletions master/genindex.html
@@ -265,7 +265,7 @@


<div class="version">
- master (2.5.0+gite523fdf )
+ master (2.5.0+gita6ab3fe )
</div>


@@ -300,10 +300,10 @@
<ul>
<li class="toctree-l1"><a class="reference internal" href="debug.html">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="gpu.html">How to run with PyTorch/XLA:GPU</a></li>
<li class="toctree-l1"><a class="reference internal" href="multi_process_distributed.html">How to do <code class="docutils literal notranslate"><span class="pre">DistributedDataParallel</span></code></a></li>
<li class="toctree-l1"><a class="reference internal" href="multi_process_distributed.html">How to do DistributedDataParallel(DDP)</a></li>
<li class="toctree-l1"><a class="reference internal" href="runtime.html">PJRT Runtime</a></li>
<li class="toctree-l1"><a class="reference internal" href="spmd.html">PyTorch/XLA SPMD User Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="spmd.html#fully-sharded-data-parallel-via-spmd">Fully Sharded Data Parallel via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="spmd.html#fully-sharded-data-parallel-fsdp-via-spmd">Fully Sharded Data Parallel(FSDP) via SPMD</a></li>
<li class="toctree-l1"><a class="reference internal" href="spmd.html#pytorch-xla-spmd-advanced-topics">PyTorch/XLA SPMD advanced topics</a></li>
<li class="toctree-l1"><a class="reference internal" href="spmd.html#distributed-checkpointing">Distributed Checkpointing</a></li>
<li class="toctree-l1"><a class="reference internal" href="torch_compile.html">TorchDynamo(torch.compile) integration in PyTorch XLA</a></li>