[GH-PAGES] Updated website

Author: Philippe Tillet
Date: 2021-07-28 10:15:45 +00:00
Parent: cfaad88904
Commit: 040ec5e252
19 changed files with 84 additions and 82 deletions

Binary image changed (not shown): 24 KiB before → 24 KiB after.

Binary image changed (not shown): 15 KiB before → 15 KiB after.

Binary image changed (not shown): 40 KiB before → 39 KiB after.

Binary image changed (not shown): 24 KiB before → 24 KiB after.

Binary image changed (not shown): 55 KiB before → 55 KiB after.

Binary image changed (not shown): 32 KiB before → 32 KiB after.

View File

@@ -8,6 +8,8 @@ Binary Distributions
You can install the latest stable release of Triton from pip:
.. code-block:: bash
pip install triton
Binary wheels are available for CPython 3.6-3.9 and PyPy 3.6-3.7.

View File

@@ -219,10 +219,10 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
0 4096.0 9.540372 9.600000
1 8192.0 19.200000 19.200000
2 16384.0 38.400001 38.400001
3 32768.0 76.800002 76.800002
3 32768.0 63.999998 76.800002
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
6 262144.0 341.333321 384.000001
6 262144.0 341.333321 341.333321
7 524288.0 472.615390 472.615390
8 1048576.0 614.400016 614.400016
9 2097152.0 722.823517 722.823517
@@ -239,7 +239,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 0 minutes 10.979 seconds)
**Total running time of the script:** ( 0 minutes 10.980 seconds)
.. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
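The GB/s columns in the table above are bandwidth figures: vector addition touches three float32 tensors per element (two reads, one write), so throughput is 12·N bytes divided by the measured runtime. Below is a minimal sketch, not the tutorial's exact script, of how such a table can be produced with the `triton.testing.perf_report` helper used by these tutorials; only the `torch` provider is timed here, and CUDA events are used instead of `triton.testing.do_bench`, whose signature has varied across releases.

```python
# Hedged sketch: reproduce a GB/s table like the one above for element-wise addition.
import torch
import triton
import triton.testing


def time_ms(fn, warmup=10, reps=100):
    # Time a CUDA callable with events (kept independent of do_bench's return signature).
    for _ in range(warmup):
        fn()
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(reps):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / reps


@triton.testing.perf_report(
    triton.testing.Benchmark(
        x_names=['size'],
        x_vals=[2 ** i for i in range(12, 28)],  # 4096 ... 134217728, as in the table
        x_log=True,
        line_arg='provider',
        line_vals=['torch'],
        line_names=['Torch'],
        ylabel='GB/s',
        plot_name='vector-add-performance',
        args={},
    )
)
def benchmark(size, provider):
    x = torch.rand(size, device='cuda', dtype=torch.float32)
    y = torch.rand(size, device='cuda', dtype=torch.float32)
    ms = time_ms(lambda: x + y)
    # Two reads + one write of float32 data -> 12 bytes moved per element.
    return 12 * size / ms * 1e-6  # GB/s


benchmark.run(print_data=True, show_plots=False)
```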

View File

@@ -261,17 +261,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
softmax-performance:
N Triton Torch (native) Torch (jit)
0 256.0 512.000001 546.133347 273.066674
1 384.0 585.142862 585.142862 267.130429
0 256.0 512.000001 546.133347 264.258068
1 384.0 585.142862 558.545450 261.446801
2 512.0 630.153853 606.814814 264.258068
3 640.0 682.666684 640.000002 269.473696
3 640.0 682.666684 640.000002 265.974036
4 768.0 702.171410 664.216187 273.066663
.. ... ... ... ...
93 12160.0 812.359066 406.179533 329.483481
94 12288.0 812.429770 415.661740 329.602681
95 12416.0 810.840807 412.149375 328.900662
96 12544.0 810.925276 412.971190 329.022957
97 12672.0 809.389265 412.097543 329.142870
96 12544.0 810.925276 412.971190 329.292871
97 12672.0 811.007961 412.097543 329.142870
[98 rows x 4 columns]
@@ -290,7 +290,7 @@ In the above plot, we can see that:
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 1 minutes 8.179 seconds)
**Total running time of the script:** ( 1 minutes 8.191 seconds)
.. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
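For reference, the "Torch (native)" and "Torch (jit)" columns above correspond to `torch.softmax` and a jit-scripted naive softmax that materializes every intermediate; the latter is memory-bound, which is why it trails the fused implementations at large N. The following is a hedged sketch of those two baselines; `naive_softmax` below is an illustrative re-derivation, not necessarily the tutorial's exact source.

```python
# Hedged sketch of the two PyTorch baselines the softmax numbers above compare against.
import torch


@torch.jit.script
def naive_softmax(x: torch.Tensor) -> torch.Tensor:
    # Row-wise, numerically stable softmax written as separate ops. Every line reads
    # and/or writes M*N (or M) elements, so this version is bound by memory traffic.
    x_max = x.max(dim=1)[0]
    z = x - x_max[:, None]
    numerator = torch.exp(z)
    denominator = numerator.sum(dim=1)
    return numerator / denominator[:, None]


x = torch.randn(1024, 4096, device='cuda')        # M x N; N matches the table's x-axis
y_native = torch.softmax(x, dim=1)                # the "Torch (native)" column
y_jit = naive_softmax(x)                          # the "Torch (jit)" column
assert torch.allclose(y_native, y_jit, atol=1e-6)
```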

View File

@@ -371,37 +371,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
0 128.0 0.455111 ... 0.512000 0.512000
1 256.0 2.978909 ... 2.978909 2.978909
2 384.0 7.372800 ... 7.899428 7.899428
1 256.0 2.730667 ... 3.276800 2.978909
2 384.0 7.372800 ... 8.507077 8.507077
3 512.0 14.563555 ... 15.420235 15.420235
4 640.0 22.260869 ... 24.380953 24.380953
5 768.0 32.768000 ... 34.028308 34.028308
6 896.0 39.025776 ... 39.025776 39.025776
6 896.0 39.025776 ... 39.025776 37.971025
7 1024.0 49.932191 ... 52.428801 52.428801
8 1152.0 45.242181 ... 46.656000 46.656000
9 1280.0 51.200001 ... 56.109587 56.109587
10 1408.0 64.138541 ... 65.684049 59.258433
11 1536.0 79.526831 ... 75.296679 75.296679
12 1664.0 63.372618 ... 62.061463 61.636381
13 1792.0 72.983276 ... 68.953520 68.953520
14 1920.0 66.782607 ... 67.434145 68.435645
15 2048.0 73.262953 ... 75.573044 75.234154
16 2176.0 83.155572 ... 80.494588 78.608000
17 2304.0 68.446623 ... 72.607513 72.607513
18 2432.0 71.125224 ... 80.963875 80.963875
19 2560.0 77.649287 ... 75.851852 76.740048
20 2688.0 81.401408 ... 84.483418 85.051697
21 2816.0 80.617762 ... 77.605356 79.733474
22 2944.0 81.967162 ... 80.902653 77.505492
23 3072.0 82.540970 ... 84.010539 84.638425
24 3200.0 84.432717 ... 88.642656 89.260810
25 3328.0 80.617354 ... 83.323259 86.632127
26 3456.0 82.183044 ... 87.252780 84.420490
27 3584.0 85.797134 ... 95.654673 96.269155
28 3712.0 83.317214 ... 88.404730 84.730571
29 3840.0 81.019778 ... 86.197974 85.730230
30 3968.0 92.652949 ... 87.159957 86.911637
31 4096.0 93.271527 ... 91.616198 91.678778
10 1408.0 64.138541 ... 65.684049 65.684049
11 1536.0 80.430545 ... 76.106321 75.296679
12 1664.0 63.372618 ... 61.636381 61.636381
13 1792.0 72.983276 ... 68.953520 68.533074
14 1920.0 69.467336 ... 69.120002 69.467336
15 2048.0 73.908442 ... 75.234154 75.573044
16 2176.0 83.500614 ... 79.855747 80.494588
17 2304.0 68.446623 ... 72.607513 72.387489
18 2432.0 71.125224 ... 81.197876 80.041209
19 2560.0 77.833728 ... 74.983980 75.328737
20 2688.0 80.880718 ... 78.862903 82.642823
21 2816.0 83.392363 ... 78.726003 77.056904
22 2944.0 83.060049 ... 79.356738 79.610276
23 3072.0 78.643199 ... 82.540970 83.269271
24 3200.0 83.116885 ... 88.888888 87.431696
25 3328.0 82.939284 ... 86.528001 82.181847
26 3456.0 79.351933 ... 84.156124 82.266905
27 3584.0 87.466332 ... 95.960933 96.166193
28 3712.0 80.629426 ... 88.404730 88.326564
29 3840.0 81.919998 ... 80.428721 81.198237
30 3968.0 86.116179 ... 85.331427 85.391135
31 4096.0 92.309303 ... 88.651075 88.243079
[32 rows x 5 columns]
@@ -411,7 +411,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 2 minutes 11.538 seconds)
**Total running time of the script:** ( 2 minutes 8.651 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
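The matmul numbers above are TFLOPS over square problems (M = N = K, from 128 to 4096 in steps of 128): a GEMM performs 2·M·N·K floating-point operations, so throughput is 2·M³ divided by the measured runtime, and the "+ LeakyReLU" column fuses an elementwise epilogue into the same kernel, which perturbs the runtime but not the operation count. Below is a hedged sketch of how the cuBLAS baseline could be measured through `torch.matmul` (fp16, as in the tutorial); it is an illustration, not the tutorial's benchmarking code.

```python
# Hedged sketch: measure one point of the cuBLAS baseline from the matmul table above.
# torch.matmul on dense fp16 CUDA tensors dispatches to cuBLAS.
import torch


def cublas_tflops(M: int, N: int, K: int, warmup: int = 10, reps: int = 100) -> float:
    a = torch.randn((M, K), device='cuda', dtype=torch.float16)
    b = torch.randn((K, N), device='cuda', dtype=torch.float16)
    for _ in range(warmup):
        torch.matmul(a, b)
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(reps):
        torch.matmul(a, b)
    end.record()
    torch.cuda.synchronize()
    ms = start.elapsed_time(end) / reps
    # A GEMM does 2*M*N*K flops (one multiply and one add per accumulated product).
    return 2 * M * N * K * 1e-12 / (ms * 1e-3)


# The table's rows are square problems M = N = K = 128, 256, ..., 4096.
for m in (128, 1024, 4096):
    print(f'M=N=K={m}: {cublas_tflops(m, m, m):.3f} TFLOPS')
```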

View File

@@ -5,12 +5,12 @@
Computation times
=================
**03:30.696** total execution time for **getting-started_tutorials** files:
**03:27.823** total execution time for **getting-started_tutorials** files:
+---------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 02:11.538 | 0.0 MB |
| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 02:08.651 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 01:08.179 | 0.0 MB |
| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 01:08.191 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 00:10.979 | 0.0 MB |
| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 00:10.980 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
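As a quick consistency check, the per-script times in this table sum to the totals shown above: 02:11.538 + 01:08.179 + 00:10.979 = 03:30.696 for the old build, and 02:08.651 + 01:08.191 + 00:10.980 = 03:27.822 for the new one, matching the reported 03:27.823 up to a millisecond of rounding.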

View File

@@ -186,9 +186,9 @@
<div class="section" id="binary-distributions">
<h2>Binary Distributions<a class="headerlink" href="#binary-distributions" title="Permalink to this headline"></a></h2>
<p>You can install the latest stable release of Triton from pip:</p>
<blockquote>
<div><p>pip install triton</p>
</div></blockquote>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>pip install triton
</pre></div>
</div>
<p>Binary wheels are available for CPython 3.6-3.9 and PyPy 3.6-3.7.</p>
<p>And the latest nightly release:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>pip install -U --pre triton

View File

@@ -308,10 +308,10 @@ for different problem sizes.</p>
0 4096.0 9.540372 9.600000
1 8192.0 19.200000 19.200000
2 16384.0 38.400001 38.400001
3 32768.0 76.800002 76.800002
3 32768.0 63.999998 76.800002
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
6 262144.0 341.333321 384.000001
6 262144.0 341.333321 341.333321
7 524288.0 472.615390 472.615390
8 1048576.0 614.400016 614.400016
9 2097152.0 722.823517 722.823517
@@ -323,7 +323,7 @@ for different problem sizes.</p>
15 134217728.0 851.577704 850.656574
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 10.979 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 10.980 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>

View File

@@ -346,17 +346,17 @@ We will then compare its performance against (1) <code class="code docutils lite
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
N Triton Torch (native) Torch (jit)
0 256.0 512.000001 546.133347 273.066674
1 384.0 585.142862 585.142862 267.130429
0 256.0 512.000001 546.133347 264.258068
1 384.0 585.142862 558.545450 261.446801
2 512.0 630.153853 606.814814 264.258068
3 640.0 682.666684 640.000002 269.473696
3 640.0 682.666684 640.000002 265.974036
4 768.0 702.171410 664.216187 273.066663
.. ... ... ... ...
93 12160.0 812.359066 406.179533 329.483481
94 12288.0 812.429770 415.661740 329.602681
95 12416.0 810.840807 412.149375 328.900662
96 12544.0 810.925276 412.971190 329.022957
97 12672.0 809.389265 412.097543 329.142870
96 12544.0 810.925276 412.971190 329.292871
97 12672.0 811.007961 412.097543 329.142870
[98 rows x 4 columns]
</pre></div>
@@ -370,7 +370,7 @@ This means that when temporary data is too large to fit entirely in the GPU
Note that our Triton kernel is not only faster than PyTorch's CUDA kernel, it is also <strong>easier to read, understand and maintain</strong>.</p></li>
</ul>
</div></blockquote>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 8.179 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 8.191 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>

View File

@@ -476,42 +476,42 @@ tensor(True, device=&#39;cuda:0&#39;)
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
0 128.0 0.455111 ... 0.512000 0.512000
1 256.0 2.978909 ... 2.978909 2.978909
2 384.0 7.372800 ... 7.899428 7.899428
1 256.0 2.730667 ... 3.276800 2.978909
2 384.0 7.372800 ... 8.507077 8.507077
3 512.0 14.563555 ... 15.420235 15.420235
4 640.0 22.260869 ... 24.380953 24.380953
5 768.0 32.768000 ... 34.028308 34.028308
6 896.0 39.025776 ... 39.025776 39.025776
6 896.0 39.025776 ... 39.025776 37.971025
7 1024.0 49.932191 ... 52.428801 52.428801
8 1152.0 45.242181 ... 46.656000 46.656000
9 1280.0 51.200001 ... 56.109587 56.109587
10 1408.0 64.138541 ... 65.684049 59.258433
11 1536.0 79.526831 ... 75.296679 75.296679
12 1664.0 63.372618 ... 62.061463 61.636381
13 1792.0 72.983276 ... 68.953520 68.953520
14 1920.0 66.782607 ... 67.434145 68.435645
15 2048.0 73.262953 ... 75.573044 75.234154
16 2176.0 83.155572 ... 80.494588 78.608000
17 2304.0 68.446623 ... 72.607513 72.607513
18 2432.0 71.125224 ... 80.963875 80.963875
19 2560.0 77.649287 ... 75.851852 76.740048
20 2688.0 81.401408 ... 84.483418 85.051697
21 2816.0 80.617762 ... 77.605356 79.733474
22 2944.0 81.967162 ... 80.902653 77.505492
23 3072.0 82.540970 ... 84.010539 84.638425
24 3200.0 84.432717 ... 88.642656 89.260810
25 3328.0 80.617354 ... 83.323259 86.632127
26 3456.0 82.183044 ... 87.252780 84.420490
27 3584.0 85.797134 ... 95.654673 96.269155
28 3712.0 83.317214 ... 88.404730 84.730571
29 3840.0 81.019778 ... 86.197974 85.730230
30 3968.0 92.652949 ... 87.159957 86.911637
31 4096.0 93.271527 ... 91.616198 91.678778
10 1408.0 64.138541 ... 65.684049 65.684049
11 1536.0 80.430545 ... 76.106321 75.296679
12 1664.0 63.372618 ... 61.636381 61.636381
13 1792.0 72.983276 ... 68.953520 68.533074
14 1920.0 69.467336 ... 69.120002 69.467336
15 2048.0 73.908442 ... 75.234154 75.573044
16 2176.0 83.500614 ... 79.855747 80.494588
17 2304.0 68.446623 ... 72.607513 72.387489
18 2432.0 71.125224 ... 81.197876 80.041209
19 2560.0 77.833728 ... 74.983980 75.328737
20 2688.0 80.880718 ... 78.862903 82.642823
21 2816.0 83.392363 ... 78.726003 77.056904
22 2944.0 83.060049 ... 79.356738 79.610276
23 3072.0 78.643199 ... 82.540970 83.269271
24 3200.0 83.116885 ... 88.888888 87.431696
25 3328.0 82.939284 ... 86.528001 82.181847
26 3456.0 79.351933 ... 84.156124 82.266905
27 3584.0 87.466332 ... 95.960933 96.166193
28 3712.0 80.629426 ... 88.404730 88.326564
29 3840.0 81.919998 ... 80.428721 81.198237
30 3968.0 86.116179 ... 85.331427 85.391135
31 4096.0 92.309303 ... 88.651075 88.243079
[32 rows x 5 columns]
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 11.538 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 8.651 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>

View File

@@ -174,7 +174,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline"></a></h1>
<p><strong>03:30.696</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
<p><strong>03:27.823</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 85%" />
@@ -183,15 +183,15 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
<td><p>02:11.538</p></td>
<td><p>02:08.651</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
<td><p>01:08.179</p></td>
<td><p>01:08.191</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>
<td><p>00:10.979</p></td>
<td><p>00:10.980</p></td>
<td><p>0.0 MB</p></td>
</tr>
</tbody>

File diff suppressed because one or more lines are too long