[GH-PAGES] Updated website
@@ -214,11 +214,9 @@ to download the full example code
<span class="n">y_ptr</span><span class="p">,</span> <span class="c1"># *Pointer* to second input vector</span>
|
||||
<span class="n">output_ptr</span><span class="p">,</span> <span class="c1"># *Pointer* to output vector</span>
|
||||
<span class="n">n_elements</span><span class="p">,</span> <span class="c1"># Size of the vector</span>
|
||||
<span class="n">time_start_ptr</span><span class="p">,</span> <span class="n">time_end_ptr</span><span class="p">,</span>
|
||||
<span class="n">BLOCK_SIZE</span><span class="p">:</span> <span class="n">tl</span><span class="o">.</span><span class="n">constexpr</span><span class="p">,</span> <span class="c1"># Number of elements each program should process</span>
|
||||
<span class="c1"># NOTE: `constexpr` so it can be used as a shape value</span>
|
||||
<span class="p">):</span>
|
||||
<span class="n">tl</span><span class="o">.</span><span class="n">atomic_min</span><span class="p">(</span><span class="n">time_start_ptr</span><span class="p">,</span> <span class="n">tl</span><span class="o">.</span><span class="n">clock</span><span class="p">())</span>
|
||||
<span class="c1"># There are multiple 'program's processing different data. We identify which program</span>
|
||||
<span class="c1"># we are here</span>
|
||||
<span class="n">pid</span> <span class="o">=</span> <span class="n">tl</span><span class="o">.</span><span class="n">program_id</span><span class="p">(</span><span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span> <span class="c1"># We use a 1D launch grid so axis is 0</span>
|
||||
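The middle of the kernel is unchanged by this commit and therefore not shown in the hunk. In the upstream tutorial it computes each program's offsets, builds a bounds mask, and loads the operands, roughly as sketched below (context only, not part of the diff). Because `BLOCK_SIZE` is `tl.constexpr`, `tl.arange(0, BLOCK_SIZE)` can use it as a compile-time shape.

    block_start = pid * BLOCK_SIZE
    offsets = block_start + tl.arange(0, BLOCK_SIZE)
    # The mask guards loads/stores when n_elements is not a multiple of BLOCK_SIZE
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)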
@@ -237,14 +235,11 @@ to download the full example code
<span class="n">output</span> <span class="o">=</span> <span class="n">x</span> <span class="o">+</span> <span class="n">y</span>
|
||||
<span class="c1"># Write x + y back to DRAM</span>
|
||||
<span class="n">tl</span><span class="o">.</span><span class="n">store</span><span class="p">(</span><span class="n">output_ptr</span> <span class="o">+</span> <span class="n">offsets</span><span class="p">,</span> <span class="n">output</span><span class="p">,</span> <span class="n">mask</span><span class="o">=</span><span class="n">mask</span><span class="p">)</span>
|
||||
<span class="n">tl</span><span class="o">.</span><span class="n">atomic_max</span><span class="p">(</span><span class="n">time_end_ptr</span><span class="p">,</span> <span class="n">tl</span><span class="o">.</span><span class="n">clock</span><span class="p">())</span>
|
||||
Let's also declare a helper function to (1) allocate the output tensor and (2) enqueue the above kernel with appropriate grid/block sizes.
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">add</span><span class="p">(</span><span class="n">x</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">,</span> <span class="n">y</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">):</span>
|
||||
<span class="n">time_start</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">zeros</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">int64</span><span class="p">,</span> <span class="n">device</span><span class="o">=</span><span class="s1">'cuda'</span><span class="p">)</span>
|
||||
<span class="n">time_end</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">zeros</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">int64</span><span class="p">,</span> <span class="n">device</span><span class="o">=</span><span class="s1">'cuda'</span><span class="p">)</span>
|
||||
<span class="c1"># We need to preallocate the output</span>
|
||||
<span class="n">output</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">empty_like</span><span class="p">(</span><span class="n">x</span><span class="p">)</span>
|
||||
<span class="k">assert</span> <span class="n">x</span><span class="o">.</span><span class="n">is_cuda</span> <span class="ow">and</span> <span class="n">y</span><span class="o">.</span><span class="n">is_cuda</span> <span class="ow">and</span> <span class="n">output</span><span class="o">.</span><span class="n">is_cuda</span>
|
||||
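Between this hunk and the next, the helper is unchanged: it reads the problem size from the output tensor and defines the SPMD launch grid as a callable over the kernel's meta-parameters. A sketch of that surrounding tutorial code (shown for context, not part of the diff):

    n_elements = output.numel()
    # One program instance per BLOCK_SIZE-sized chunk of the vector
    grid = lambda meta: (triton.cdiv(n_elements, meta['BLOCK_SIZE']),)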
@@ -257,7 +252,7 @@ and (2) enqueue the above kernel with appropriate grid/block sizes
<span class="c1"># - each torch.tensor object is implicitly converted into a pointer to its first element.</span>
|
||||
<span class="c1"># - `triton.jit`'ed functions can be index with a launch grid to obtain a callable GPU kernel</span>
|
||||
<span class="c1"># - don't forget to pass meta-parameters as keywords arguments</span>
|
||||
<span class="n">add_kernel</span><span class="p">[</span><span class="n">grid</span><span class="p">](</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">output</span><span class="p">,</span> <span class="n">n_elements</span><span class="p">,</span> <span class="n">time_start</span><span class="p">,</span> <span class="n">time_end</span><span class="p">,</span> <span class="n">BLOCK_SIZE</span><span class="o">=</span><span class="mi">1024</span><span class="p">)</span>
|
||||
<span class="n">add_kernel</span><span class="p">[</span><span class="n">grid</span><span class="p">](</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">output</span><span class="p">,</span> <span class="n">n_elements</span><span class="p">,</span> <span class="n">BLOCK_SIZE</span><span class="o">=</span><span class="mi">1024</span><span class="p">)</span>
|
||||
<span class="c1"># We return a handle to z but, since `torch.cuda.synchronize()` hasn't been called, the kernel is still</span>
|
||||
<span class="c1"># running asynchronously at this point.</span>
|
||||
<span class="k">return</span> <span class="n">output</span>
|
||||
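With the timing pointers removed, the helper is invoked exactly as in the upstream tutorial. A minimal usage sketch (hypothetical driver code; the seed and size follow the tutorial, and printing the reduction forces the result back to the host):

torch.manual_seed(0)
size = 98432
x = torch.rand(size, device='cuda')
y = torch.rand(size, device='cuda')
output_torch = x + y
output_triton = add(x, y)
torch.cuda.synchronize()  # make sure the asynchronous kernel has finished
print(f'max difference: {torch.max(torch.abs(output_torch - output_triton))}')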
@@ -327,25 +322,25 @@ for different problem sizes.
<p class="sphx-glr-script-out">Out:</p>
|
||||
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vector-add-performance:
|
||||
size Triton Torch
|
||||
-0          4096.0      4.800000     9.600000
-1          8192.0      9.600000    19.200000
-2         16384.0     19.200000    38.400001
-3         32768.0     34.909091    63.999998
-4         65536.0     69.818181   127.999995
-5        131072.0    139.636363   219.428568
-6        262144.0    219.428568   384.000001
-7        524288.0    361.411758   472.615390
-8       1048576.0    491.520012   614.400016
-9       2097152.0    599.414644   702.171410
-10      4194304.0    702.171410   780.190482
-11      8388608.0    774.047204   812.429770
-12     16777216.0    809.086412   833.084721
-13     33554432.0    829.569620   842.004273
-14     67108864.0    840.205105   848.362445
-15    134217728.0    846.080710   850.656574
+0          4096.0      9.600000     9.600000
+1          8192.0     19.200000    19.200000
+2         16384.0     38.400001    38.400001
+3         32768.0     63.999998    63.999998
+4         65536.0    127.999995   127.999995
+5        131072.0    219.428568   219.428568
+6        262144.0    341.333321   384.000001
+7        524288.0    472.615390   472.615390
+8       1048576.0    614.400016   614.400016
+9       2097152.0    722.823517   702.171410
+10      4194304.0    780.190482   780.190482
+11      8388608.0    812.429770   812.429770
+12     16777216.0    833.084721   833.084721
+13     33554432.0    842.004273   842.004273
+14     67108864.0    847.448255   848.362445
+15    134217728.0    849.737435   850.656574
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 42.289 seconds)</p>
|
||||
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 49.775 seconds)</p>
|
||||
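For reference, the throughput columns above are produced by the tutorial's triton.testing benchmark, which times each provider and converts the result to GB/s (two reads plus one write of a float32 per element, i.e. 12 bytes). The sketch below follows the tutorial's structure; the exact return shape of do_bench differs between Triton versions, so treat the signatures as illustrative:

@triton.testing.perf_report(
    triton.testing.Benchmark(
        x_names=['size'], x_vals=[2 ** i for i in range(12, 28)], x_log=True,
        line_arg='provider', line_vals=['triton', 'torch'], line_names=['Triton', 'Torch'],
        ylabel='GB/s', plot_name='vector-add-performance', args={},
    )
)
def benchmark(size, provider):
    x = torch.rand(size, device='cuda', dtype=torch.float32)
    y = torch.rand(size, device='cuda', dtype=torch.float32)
    fn = (lambda: add(x, y)) if provider == 'triton' else (lambda: x + y)
    res = triton.testing.do_bench(fn)
    ms = res[0] if isinstance(res, (tuple, list)) else res  # median runtime in ms; API varies by version
    return 12 * size / ms * 1e-6  # bytes moved per element / time -> GB/s

benchmark.run(print_data=True, show_plots=False)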
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
|
||||
<div class="sphx-glr-download sphx-glr-download-python docutils container">
|
||||
<p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>
|
||||