[GH-PAGES] Updated website

Philippe Tillet
2021-07-28 06:03:23 +00:00
parent 1da7ca38c7
commit 3d015bafaf
17 changed files with 81 additions and 81 deletions

Six binary image files changed (benchmark plot PNGs; previews not shown). Before → after sizes: 24 KiB → 24 KiB, 15 KiB → 15 KiB, 39 KiB → 40 KiB, 24 KiB → 24 KiB, 55 KiB → 54 KiB, 32 KiB → 31 KiB.


@@ -216,10 +216,10 @@ We can now run the decorated function above. Pass `show_plots=True` to see the p
 vector-add-performance:
          size      Triton       Torch
-0      4096.0    8.000000    9.600000
+0      4096.0    9.540372    9.600000
 1      8192.0   19.200000   19.200000
 2     16384.0   38.400001   38.400001
-3     32768.0   76.800002   76.800002
+3     32768.0   63.999998   63.999998
 4     65536.0  127.999995  127.999995
 5    131072.0  219.428568  219.428568
 6    262144.0  341.333321  384.000001
@@ -239,7 +239,7 @@ We can now run the decorated function above. Pass `show_plots=True` to see the p
 .. rst-class:: sphx-glr-timing
-   **Total running time of the script:** ( 0 minutes 10.970 seconds)
+   **Total running time of the script:** ( 0 minutes 11.008 seconds)
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
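The vector-add columns above are effective memory bandwidths in GB/s. As a rough sketch of how such a figure is derived (an assumption about the benchmark's accounting, not the tutorial's exact code): a float32 vector add touches three global-memory values per output element (two loads, one store), so bandwidth is total bytes moved divided by measured kernel time.

```python
def effective_gbps(n_elements: int, ms: float,
                   bytes_per_element: int = 4, accesses: int = 3) -> float:
    """Effective bandwidth in GB/s for an elementwise kernel.

    Assumes each output element costs `accesses` global-memory
    transactions of `bytes_per_element` bytes (float32 vector add:
    two loads + one store), with `ms` the measured time in milliseconds.
    """
    total_bytes = accesses * n_elements * bytes_per_element
    return total_bytes * 1e-9 / (ms * 1e-3)

# Example: moving 3 * 4096 * 4 bytes in 5.12 microseconds sustains
# about 9.6 GB/s, the scale of the smallest row in the table above.
bw = effective_gbps(4096, 0.00512)
```

This also explains why the small-size rows fluctuate between runs: at a few KiB the kernel is launch-latency dominated, so tiny timing noise moves the reported GB/s noticeably.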


@@ -261,17 +261,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 softmax-performance:
          N      Triton  Torch (native)  Torch (jit)
-0      256.0  512.000001      546.133347   273.066674
+0      256.0  512.000001      546.133347   264.258068
 1      384.0  585.142862      585.142862   267.130429
 2      512.0  630.153853      606.814814   264.258068
-3      640.0  682.666684      640.000002   269.473696
+3      640.0  682.666684      640.000002   265.974036
 4      768.0  702.171410      664.216187   273.066663
 ..       ...         ...             ...          ...
-93   12160.0  812.359066      405.755985   329.483481
+93   12160.0  812.359066      406.179533   329.483481
 94   12288.0  812.429770      415.661740   329.602681
-95   12416.0  810.840807      411.722274   329.173158
+95   12416.0  810.840807      412.149375   328.900662
 96   12544.0  810.925276      412.971190   329.292871
-97   12672.0  811.007961      412.097543   329.142870
+97   12672.0  811.007961      412.516771   329.142870
 [98 rows x 4 columns]
@@ -290,7 +290,7 @@ In the above plot, we can see that:
 .. rst-class:: sphx-glr-timing
-   **Total running time of the script:** ( 1 minutes 8.170 seconds)
+   **Total running time of the script:** ( 1 minutes 8.191 seconds)
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
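The softmax figures are likewise GB/s. A fully fused softmax is memory-bound: it reads each input element once and writes each output once. A minimal sketch of that conversion, under the assumption of exactly one float32 read plus one write per element:

```python
def softmax_gbps(n_rows: int, n_cols: int, ms: float) -> float:
    """GB/s for a fused row-wise softmax on an (n_rows, n_cols) float32 matrix.

    Assumes the kernel touches each element exactly twice overall:
    one global read of X and one global write of softmax(X).
    `ms` is the measured kernel time in milliseconds.
    """
    total_bytes = 2 * n_rows * n_cols * 4  # read + write, 4 bytes each
    return total_bytes * 1e-9 / (ms * 1e-3)
```

Under this model, an unfused implementation that materializes intermediates (max, exp, sum) makes several extra passes over the data, which is why the Torch (jit) column plateaus well below the fused Triton kernel.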


@@ -371,37 +371,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 matmul-performance:
          M     cuBLAS  ...     Triton  Triton (+ LeakyReLU)
 0     128.0   0.455111  ...   0.512000              0.512000
-1     256.0   2.978909  ...   3.276800              3.276800
+1     256.0   2.978909  ...   2.978909              2.978909
 2     384.0   7.372800  ...   8.507077              8.507077
-3     512.0  14.563555  ...  16.384000             15.420235
-4     640.0  22.260869  ...  23.272727             23.272727
+3     512.0  14.563555  ...  15.420235             15.420235
+4     640.0  22.260869  ...  24.380953             24.380953
 5     768.0  32.768000  ...  34.028308             34.028308
-6     896.0  37.971025  ...  39.025776             37.971025
-7    1024.0  51.150050  ...  52.428801             52.428801
+6     896.0  39.025776  ...  39.025776             39.025776
+7    1024.0  49.932191  ...  52.428801             52.428801
 8    1152.0  44.566925  ...  45.938215             45.938215
 9    1280.0  51.200001  ...  56.109587             56.109587
-10   1408.0  64.138541  ...  64.902096             58.621246
-11   1536.0  77.778988  ...  76.106321             75.296679
+10   1408.0  64.138541  ...  65.684049             58.621246
+11   1536.0  79.526831  ...  75.296679             75.296679
 12   1664.0  62.929456  ...  61.636381             61.636381
-13   1792.0  72.983276  ...  68.953520             68.533074
-14   1920.0  69.120002  ...  69.467336             69.120002
-15   2048.0  73.584279  ...  75.573044             74.898285
-16   2176.0  82.137338  ...  80.173899             78.916269
-17   2304.0  68.446623  ...  73.275679             72.828879
-18   2432.0  71.305746  ...  80.041209             79.362895
-19   2560.0  78.019048  ...  76.920185             76.382283
-20   2688.0  81.928846  ...  80.708630             83.369354
-21   2816.0  82.446516  ...  78.868366             78.584162
-22   2944.0  82.102191  ...  79.356738             78.854483
-23   3072.0  81.121923  ...  80.544956             82.301023
-24   3200.0  84.432717  ...  89.887639             87.431696
-25   3328.0  81.622783  ...  85.297742             86.528001
-26   3456.0  79.823334  ...  86.134151             81.683457
-27   3584.0  85.797134  ...  96.063450             87.808000
-28   3712.0  85.163978  ...  83.736248             81.480261
-29   3840.0  83.655065  ...  82.716526             84.228485
-30   3968.0  89.003603  ...  82.337339             80.017334
-31   4096.0  93.271527  ...  90.443212             87.324485
+13   1792.0  72.983276  ...  68.533074             68.533074
+14   1920.0  66.461539  ...  67.106797             69.467336
+15   2048.0  73.584279  ...  75.573044             75.573044
+16   2176.0  81.472263  ...  78.608000             80.173899
+17   2304.0  68.446623  ...  72.387489             72.607513
+18   2432.0  71.125224  ...  81.197876             81.433227
+19   2560.0  78.019048  ...  75.676673             76.920185
+20   2688.0  81.752274  ...  84.483418             85.051697
+21   2816.0  81.981598  ...  79.298560             79.587973
+22   2944.0  82.102191  ...  80.902653             78.235527
+23   3072.0  81.238312  ...  84.135370             83.269271
+24   3200.0  83.224970  ...  83.989503             89.887639
+25   3328.0  83.905938  ...  83.323259             84.298943
+26   3456.0  82.266905  ...  86.689860             86.503829
+27   3584.0  84.033077  ...  91.656871             96.475743
+28   3712.0  85.528545  ...  83.526206             88.561477
+29   3840.0  82.716526  ...  81.738356             85.399230
+30   3968.0  92.793868  ...  86.849777             87.159957
+31   4096.0  93.531519  ...  92.056052             92.372834
 [32 rows x 5 columns]
@@ -411,7 +411,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 .. rst-class:: sphx-glr-timing
-   **Total running time of the script:** ( 2 minutes 16.316 seconds)
+   **Total running time of the script:** ( 2 minutes 16.809 seconds)
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
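Unlike the first two tables, the matmul columns are compute throughputs in TFLOP/s. A sketch of the usual accounting (an assumption about the benchmark's convention, not the tutorial's exact code): an (M, K) @ (K, N) product performs one multiply and one add per inner-product term, i.e. 2·M·N·K floating-point operations.

```python
def matmul_tflops(M: int, N: int, K: int, ms: float) -> float:
    """TFLOP/s for an (M, K) @ (K, N) matmul timed at `ms` milliseconds.

    Counts 2 * M * N * K floating-point operations (one multiply and
    one add per inner-product term), the standard matmul convention.
    """
    return 2.0 * M * N * K * 1e-12 / (ms * 1e-3)

# Example: a 4096 x 4096 x 4096 matmul is ~0.137 TFLOP of work, so
# finishing it in 1 ms corresponds to ~137 TFLOP/s.
tput = matmul_tflops(4096, 4096, 4096, 1.0)
```

This is also why run-to-run jitter is visible in every column of the diff above: the reported number divides a fixed op count by a measured time, so any clock or scheduling noise shifts the quotient.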


@@ -5,12 +5,12 @@
 Computation times
 =================
-**03:35.456** total execution time for **getting-started_tutorials** files:
+**03:36.008** total execution time for **getting-started_tutorials** files:
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 02:16.316 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 02:16.809 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 01:08.170 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 01:08.191 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 00:10.970 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 00:11.008 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+


@@ -305,10 +305,10 @@ for different problem sizes.</p>
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vector-add-performance:
          size      Triton       Torch
-0      4096.0    8.000000    9.600000
+0      4096.0    9.540372    9.600000
 1      8192.0   19.200000   19.200000
 2     16384.0   38.400001   38.400001
-3     32768.0   76.800002   76.800002
+3     32768.0   63.999998   63.999998
 4     65536.0  127.999995  127.999995
 5    131072.0  219.428568  219.428568
 6    262144.0  341.333321  384.000001
@@ -323,7 +323,7 @@ for different problem sizes.</p>
 15  134217728.0  851.577704  850.656574
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 10.970 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 11.008 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>


@@ -346,17 +346,17 @@ We will then compare its performance against (1) <code class="code docutils lite
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
          N      Triton  Torch (native)  Torch (jit)
-0      256.0  512.000001      546.133347   273.066674
+0      256.0  512.000001      546.133347   264.258068
 1      384.0  585.142862      585.142862   267.130429
 2      512.0  630.153853      606.814814   264.258068
-3      640.0  682.666684      640.000002   269.473696
+3      640.0  682.666684      640.000002   265.974036
 4      768.0  702.171410      664.216187   273.066663
 ..       ...         ...             ...          ...
-93   12160.0  812.359066      405.755985   329.483481
+93   12160.0  812.359066      406.179533   329.483481
 94   12288.0  812.429770      415.661740   329.602681
-95   12416.0  810.840807      411.722274   329.173158
+95   12416.0  810.840807      412.149375   328.900662
 96   12544.0  810.925276      412.971190   329.292871
-97   12672.0  811.007961      412.097543   329.142870
+97   12672.0  811.007961      412.516771   329.142870
 [98 rows x 4 columns]
 </pre></div>
@@ -370,7 +370,7 @@ This means that when temporary data is too large to fit entirely in the GPU
 Note that our Triton kernel is not only faster than PyTorch&#8217;s CUDA kernel, it is also <strong>easier to read, understand and maintain</strong>.</p></li>
 </ul>
 </div></blockquote>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 8.170 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 8.191 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>


@@ -476,42 +476,42 @@ tensor(True, device=&#39;cuda:0&#39;)
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>matmul-performance:
          M     cuBLAS  ...     Triton  Triton (+ LeakyReLU)
 0     128.0   0.455111  ...   0.512000              0.512000
-1     256.0   2.978909  ...   3.276800              3.276800
+1     256.0   2.978909  ...   2.978909              2.978909
 2     384.0   7.372800  ...   8.507077              8.507077
-3     512.0  14.563555  ...  16.384000             15.420235
-4     640.0  22.260869  ...  23.272727             23.272727
+3     512.0  14.563555  ...  15.420235             15.420235
+4     640.0  22.260869  ...  24.380953             24.380953
 5     768.0  32.768000  ...  34.028308             34.028308
-6     896.0  37.971025  ...  39.025776             37.971025
-7    1024.0  51.150050  ...  52.428801             52.428801
+6     896.0  39.025776  ...  39.025776             39.025776
+7    1024.0  49.932191  ...  52.428801             52.428801
 8    1152.0  44.566925  ...  45.938215             45.938215
 9    1280.0  51.200001  ...  56.109587             56.109587
-10   1408.0  64.138541  ...  64.902096             58.621246
-11   1536.0  77.778988  ...  76.106321             75.296679
+10   1408.0  64.138541  ...  65.684049             58.621246
+11   1536.0  79.526831  ...  75.296679             75.296679
 12   1664.0  62.929456  ...  61.636381             61.636381
-13   1792.0  72.983276  ...  68.953520             68.533074
-14   1920.0  69.120002  ...  69.467336             69.120002
-15   2048.0  73.584279  ...  75.573044             74.898285
-16   2176.0  82.137338  ...  80.173899             78.916269
-17   2304.0  68.446623  ...  73.275679             72.828879
-18   2432.0  71.305746  ...  80.041209             79.362895
-19   2560.0  78.019048  ...  76.920185             76.382283
-20   2688.0  81.928846  ...  80.708630             83.369354
-21   2816.0  82.446516  ...  78.868366             78.584162
-22   2944.0  82.102191  ...  79.356738             78.854483
-23   3072.0  81.121923  ...  80.544956             82.301023
-24   3200.0  84.432717  ...  89.887639             87.431696
-25   3328.0  81.622783  ...  85.297742             86.528001
-26   3456.0  79.823334  ...  86.134151             81.683457
-27   3584.0  85.797134  ...  96.063450             87.808000
-28   3712.0  85.163978  ...  83.736248             81.480261
-29   3840.0  83.655065  ...  82.716526             84.228485
-30   3968.0  89.003603  ...  82.337339             80.017334
-31   4096.0  93.271527  ...  90.443212             87.324485
+13   1792.0  72.983276  ...  68.533074             68.533074
+14   1920.0  66.461539  ...  67.106797             69.467336
+15   2048.0  73.584279  ...  75.573044             75.573044
+16   2176.0  81.472263  ...  78.608000             80.173899
+17   2304.0  68.446623  ...  72.387489             72.607513
+18   2432.0  71.125224  ...  81.197876             81.433227
+19   2560.0  78.019048  ...  75.676673             76.920185
+20   2688.0  81.752274  ...  84.483418             85.051697
+21   2816.0  81.981598  ...  79.298560             79.587973
+22   2944.0  82.102191  ...  80.902653             78.235527
+23   3072.0  81.238312  ...  84.135370             83.269271
+24   3200.0  83.224970  ...  83.989503             89.887639
+25   3328.0  83.905938  ...  83.323259             84.298943
+26   3456.0  82.266905  ...  86.689860             86.503829
+27   3584.0  84.033077  ...  91.656871             96.475743
+28   3712.0  85.528545  ...  83.526206             88.561477
+29   3840.0  82.716526  ...  81.738356             85.399230
+30   3968.0  92.793868  ...  86.849777             87.159957
+31   4096.0  93.531519  ...  92.056052             92.372834
 [32 rows x 5 columns]
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 16.316 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 16.809 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>


@@ -174,7 +174,7 @@
 <div class="section" id="computation-times">
 <span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>03:35.456</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
+<p><strong>03:36.008</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -183,15 +183,15 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
-<td><p>02:16.316</p></td>
+<td><p>02:16.809</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
-<td><p>01:08.170</p></td>
+<td><p>01:08.191</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>
-<td><p>00:10.970</p></td>
+<td><p>00:11.008</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>

File diff suppressed because one or more lines are too long