Commit 35734f5

Update doc from commit bdd00e5

torchxlabot2 committed Jul 29, 2024
1 parent b6895e8, commit 35734f5

Showing 24 changed files with 93 additions and 71 deletions.
2 changes: 1 addition & 1 deletion master/_modules/index.html

@@ -265,7 +265,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>
2 changes: 1 addition & 1 deletion master/_modules/torch_xla/core/xla_model.html

@@ -265,7 +265,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>
2 changes: 1 addition & 1 deletion master/_modules/torch_xla/debug/metrics.html

@@ -265,7 +265,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>
2 changes: 1 addition & 1 deletion master/_modules/torch_xla/distributed/parallel_loader.html

@@ -265,7 +265,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>
(file name not shown)

@@ -265,7 +265,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>
(file name not shown)

@@ -265,7 +265,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>
36 changes: 8 additions & 28 deletions master/_modules/torch_xla/experimental/eager.html

@@ -265,7 +265,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>

Source code for torch_xla.experimental.eager:

@@ -387,6 +387,7 @@
 from contextlib import contextmanager
 
 import torch_xla
+import logging
 
 
 def eager_mode(enable: bool):
@@ -416,33 +417,12 @@
     eager_mode(saved_eager_mode)
 
 
-def compile(func):
-  """Compile the func with Lazy Tensor.
-
-  Return the optimized function that takes the exact same input. Compile will
-  run the target func under the tracing mode using Lazy Tensor.
-  """
-
-  @functools.wraps(func)  # Keep the function's name, docstring, etc.
-  def wrapper(*args, **kwargs):
-    saved_eager_mode_status = torch_xla._XLAC._get_use_eager_mode()
-    torch_xla._XLAC._set_use_eager_mode(False)
-    # clear the pending graph, if any
-    torch_xla.sync()
-    try:
-      # Target function execution
-      result = func(*args, **kwargs)
-      # Sync the graph generated by the target function.
-      torch_xla.sync()
-    except Exception as e:
-      print(f"Error in target function: {e}")
-      raise  # Re-raise the exception
-    torch_xla._XLAC._set_use_eager_mode(saved_eager_mode_status)
-
-    return result
-
-  return wrapper
+def compile(func):
+  # can't use the deprecated wrapper at import time due to a circular dependency
+  logging.warning(
+      'torch_xla.experimental.compile is deprecated. Use torch_xla.compile instead.'
+  )
+  return torch_xla.compile(func)

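The upshot of this hunk: `torch_xla.experimental.compile` is now a thin shim that logs a deprecation warning and forwards to `torch_xla.compile`. A minimal migration sketch, assuming an XLA device is reachable via `xm.xla_device()` (`fn` here is a hypothetical user function, not part of the diff):

import torch
import torch_xla
import torch_xla.core.xla_model as xm


def fn(x):
  # any pure-PyTorch computation works as the traced target
  return torch.sin(x) + torch.cos(x)


x = torch.randn(4, device=xm.xla_device())

# Old entry point, which now only warns and forwards:
# compiled_fn = torch_xla.experimental.compile(fn)

# Supported entry point, as the warning suggests:
compiled_fn = torch_xla.compile(fn)
print(compiled_fn(x))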
2 changes: 1 addition & 1 deletion master/_modules/torch_xla/runtime.html

@@ -265,7 +265,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>
35 changes: 33 additions & 2 deletions master/_modules/torch_xla/torch_xla.html

@@ -265,7 +265,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>

Source code for torch_xla.torch_xla:

@@ -446,12 +446,43 @@
     xm.mark_step()
 
 
-def step(f: Optional[Callable] = None):
+def step():
   """Wraps code that should be dispatched to the runtime.
 
   Experimental: `xla.step` is still a work in progress. Some code that currently
   works with `xla.step` but does not follow best practices will become errors in
   future releases. See https://github.com/pytorch/xla/issues/6751 for context.
   """
+  return compile()
+
+
+def compile(f: Optional[Callable] = None):
+  """
+  Optimizes the given model/function using torch_xla's LazyTensor tracing mode.
+  PyTorch/XLA will trace the given function with the given inputs and then generate
+  a graph to represent the pytorch operations that happen within this function. This
+  graph will be compiled by XLA and executed on the accelerator (decided by the
+  tensor's device). Eager mode will be disabled for the compiled region of the function.
+
+  Args:
+    model (Callable): Module/function to optimize; if not passed, this function will
+      act as a context manager.
+
+  Example::
+
+    # usage 1
+    @torch_xla.compile()
+    def foo(x):
+      return torch.sin(x) + torch.cos(x)
+
+    def foo2(x):
+      return torch.sin(x) + torch.cos(x)
+    # usage 2
+    compiled_foo2 = torch_xla.compile(foo2)
+
+    # usage 3
+    with torch_xla.compile():
+      res = foo2(x)
+  """
 
   @contextlib.contextmanager
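For reference, the docstring's three usage patterns, fleshed out into a self-contained sketch (assumes a recent torch_xla build with an XLA device available; the input tensor is illustrative):

import torch
import torch_xla
import torch_xla.core.xla_model as xm

x = torch.randn(4, device=xm.xla_device())


# usage 1: as a decorator; the function body is traced and compiled on call
@torch_xla.compile()
def foo(x):
  return torch.sin(x) + torch.cos(x)


def foo2(x):
  return torch.sin(x) + torch.cos(x)


# usage 2: wrap an existing function
compiled_foo2 = torch_xla.compile(foo2)

# usage 3: as a context manager around the region to compile
with torch_xla.compile():
  res = foo2(x)

print(foo(x), compiled_foo2(x), res)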
3 changes: 1 addition & 2 deletions master/_sources/index.rst.txt

@@ -28,7 +28,7 @@ torch_xla
 .. autofunction:: devices
 .. autofunction:: device_count
 .. autofunction:: sync
-.. autofunction:: step
+.. autofunction:: compile
 .. autofunction:: manual_seed
 
 runtime
@@ -96,7 +96,6 @@ experimental
 ----------------------------------
 .. automodule:: torch_xla.experimental
 .. autofunction:: eager_mode
-.. autofunction:: compile
 
 debug
 ----------------------------------
2 changes: 1 addition & 1 deletion master/debug.html

@@ -267,7 +267,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>
2 changes: 1 addition & 1 deletion master/eager_mode.html

@@ -267,7 +267,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>
6 changes: 2 additions & 4 deletions master/genindex.html

@@ -265,7 +265,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>

@@ -428,7 +428,7 @@ (index section C)
   clear_sharding() (in module torch_xla.distributed.spmd)
-  compile() (in module torch_xla.experimental)
+  compile() (in module torch_xla)

@@ -610,8 +610,6 @@ (index section S)
   short_metrics_report() (in module torch_xla.debug.metrics)
   spawn() (in module torch_xla.distributed.xla_multiprocessing)
-  step() (in module torch_xla)
   sync() (in module torch_xla)
2 changes: 1 addition & 1 deletion master/gpu.html

@@ -267,7 +267,7 @@
 <div class="version">
-  master (2.5.0+git009e31a )
+  master (2.5.0+gitbdd00e5 )
 </div>
(Diffs for the remaining changed files were not loaded.)
