[GH-PAGES] Updated website

Author: Philippe Tillet
Date:   2021-09-01 00:15:40 +00:00
parent dc5d72f74d
commit 71848bbc35
17 changed files with 79 additions and 79 deletions

6 binary image files changed (contents not shown). Size before → after: 27 KiB → 27 KiB, 17 KiB → 16 KiB, 41 KiB → 41 KiB, 25 KiB → 25 KiB, 55 KiB → 55 KiB, 32 KiB → 32 KiB.

View File

@@ -231,22 +231,22 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
vector-add-performance:
size Triton Torch
- 0 4096.0 9.540372 9.600000
+ 0 4096.0 8.000000 9.600000
1 8192.0 19.200000 19.200000
2 16384.0 38.400001 38.400001
3 32768.0 63.999998 76.800002
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
6 262144.0 341.333321 341.333321
- 7 524288.0 472.615390 472.615390
+ 7 524288.0 511.999982 472.615390
8 1048576.0 585.142862 614.400016
- 9 2097152.0 682.666643 722.823517
+ 9 2097152.0 682.666643 702.171410
10 4194304.0 744.727267 780.190482
11 8388608.0 780.190482 812.429770
12 16777216.0 799.219478 833.084721
- 13 33554432.0 809.086412 843.811163
- 14 67108864.0 814.111783 848.362445
- 15 134217728.0 818.773573 850.656574
+ 13 33554432.0 807.425031 842.004273
+ 14 67108864.0 814.955429 848.362445
+ 15 134217728.0 820.481984 850.656574
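
For context, the two throughput columns above (GB/s for Triton and Torch) are printed by Triton's `triton.testing.perf_report` harness when the decorated benchmark is run with `print_data=True`. Below is a minimal, self-contained sketch of how such a table is typically produced, modeled on the vector-add tutorial; the kernel and helper names are illustrative, and the return shape of `do_bench` varies across Triton releases.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance adds one BLOCK_SIZE-wide slice of the inputs.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x, y):
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta['BLOCK_SIZE']),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out


@triton.testing.perf_report(
    triton.testing.Benchmark(
        x_names=['size'],
        x_vals=[2 ** i for i in range(12, 28)],  # 4096 ... 134217728, as in the table
        line_arg='provider',
        line_vals=['triton', 'torch'],
        line_names=['Triton', 'Torch'],
        ylabel='GB/s',
        plot_name='vector-add-performance',
        args={},
    )
)
def benchmark(size, provider):
    x = torch.rand(size, device='cuda', dtype=torch.float32)
    y = torch.rand(size, device='cuda', dtype=torch.float32)
    fn = (lambda: x + y) if provider == 'torch' else (lambda: add(x, y))
    ms = triton.testing.do_bench(fn)  # older releases return (ms, min_ms, max_ms) instead
    # Vector add moves 3 float32 tensors (2 reads + 1 write) through DRAM per call.
    return 3 * size * 4 * 1e-9 / (ms * 1e-3)


benchmark.run(print_data=True, show_plots=True)
```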

View File

@@ -301,16 +301,16 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
softmax-performance:
N Triton Torch (native) Torch (jit)
0 256.0 512.000001 546.133347 186.181817
- 1 384.0 585.142862 585.142862 153.600004
- 2 512.0 630.153853 585.142849 154.566038
+ 1 384.0 585.142862 558.545450 153.600004
+ 2 512.0 630.153853 606.814814 154.566038
3 640.0 660.645170 640.000002 160.000000
4 768.0 664.216187 664.216187 163.839992
.. ... ... ... ...
- 93 12160.0 742.595457 405.333344 199.038365
- 94 12288.0 753.287332 415.222812 199.197579
- 95 12416.0 737.128054 411.296057 198.854847
- 96 12544.0 736.528417 412.971190 199.111113
- 97 12672.0 738.622965 412.097543 199.167004
+ 93 12160.0 741.180979 406.179533 199.038365
+ 94 12288.0 753.287332 415.222812 199.298541
+ 95 12416.0 737.128054 412.149375 198.854847
+ 96 12544.0 737.882340 412.546756 199.111113
+ 97 12672.0 738.622965 412.097543 199.264875
[98 rows x 4 columns]
@@ -328,7 +328,7 @@ In the above plot, we can see that:
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 13.336 seconds)
+ **Total running time of the script:** ( 1 minutes 12.769 seconds)
.. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
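
The softmax table above reports effective memory bandwidth in GB/s: "Torch (native)" is `torch.softmax`, while "Torch (jit)" is a jit-compiled naive implementation that materializes every intermediate in DRAM, which is why it plateaus near 200 GB/s in the table. A rough sketch of that baseline and of the GB/s conversion, assuming the ideal traffic of one read and one write per element (`naive_softmax` and `gbps` are illustrative names, not the file's exact code):

```python
import torch


@torch.jit.script
def naive_softmax(x):
    # Row-wise, numerically-stable softmax; every intermediate below is a
    # full M x N tensor written to and re-read from DRAM.
    x_max = x.max(dim=1)[0]
    z = x - x_max[:, None]
    numerator = torch.exp(z)
    denominator = numerator.sum(dim=1)
    return numerator / denominator[:, None]


def gbps(x: torch.Tensor, ms: float) -> float:
    # Ideal traffic for softmax: read each element once, write each element once.
    return 2 * x.nelement() * x.element_size() * 1e-9 / (ms * 1e-3)
```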

View File

@@ -472,27 +472,27 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
7 1152.0 44.566925 ... 46.656000 46.656000
8 1280.0 51.200001 ... 56.888887 56.109587
9 1408.0 64.138541 ... 63.392744 62.664092
- 10 1536.0 80.430545 ... 75.296679 75.296679
- 11 1664.0 63.372618 ... 62.492442 62.061463
- 12 1792.0 72.983276 ... 62.441243 62.441243
- 13 1920.0 69.120002 ... 69.120002 67.764707
- 14 2048.0 73.584279 ... 74.565406 74.235468
- 15 2176.0 82.473969 ... 79.855747 81.472263
- 16 2304.0 68.446623 ... 73.275679 73.275679
- 17 2432.0 71.125224 ... 71.125224 80.499895
- 18 2560.0 77.649287 ... 75.328737 74.642370
- 19 2688.0 82.642823 ... 81.227100 84.861423
- 20 2816.0 82.446516 ... 80.320825 80.469019
- 21 2944.0 82.784108 ... 77.505492 78.605729
- 22 3072.0 81.589488 ... 83.391907 79.976138
- 23 3200.0 83.989503 ... 86.137280 90.014065
- 24 3328.0 83.130825 ... 84.596116 87.051143
- 25 3456.0 80.300370 ... 84.156124 81.026701
- 26 3584.0 83.954614 ... 91.377427 92.696281
- 27 3712.0 84.230479 ... 83.526206 85.382349
- 28 3840.0 84.809814 ... 87.148936 87.910967
- 29 3968.0 92.652949 ... 87.222259 87.597943
- 30 4096.0 93.727466 ... 90.382307 89.837839
+ 10 1536.0 80.430545 ... 76.106321 75.296679
+ 11 1664.0 63.372618 ... 62.061463 61.636381
+ 12 1792.0 72.983276 ... 62.441243 62.096267
+ 13 1920.0 69.120002 ... 70.172588 69.818184
+ 14 2048.0 73.584279 ... 74.898285 74.565406
+ 15 2176.0 82.137338 ... 78.302130 78.608000
+ 16 2304.0 68.643310 ... 73.051599 72.828879
+ 17 2432.0 70.945618 ... 72.222274 82.147552
+ 18 2560.0 77.833728 ... 76.560748 76.560748
+ 19 2688.0 81.928846 ... 83.369354 79.357857
+ 20 2816.0 79.587973 ... 80.320825 80.173175
+ 21 2944.0 78.854483 ... 76.435630 77.026327
+ 22 3072.0 81.589488 ... 82.661468 82.661468
+ 23 3200.0 84.768213 ... 88.888888 83.989503
+ 24 3328.0 83.130825 ... 86.217120 82.369902
+ 25 3456.0 80.220468 ... 81.271743 82.519518
+ 26 3584.0 87.466332 ... 92.126428 88.499397
+ 27 3712.0 84.301560 ... 83.247783 83.596102
+ 28 3840.0 80.255442 ... 80.432371 82.902547
+ 29 3968.0 86.176998 ... 87.159957 87.035620
+ 30 4096.0 93.662059 ... 83.057130 90.200084
[31 rows x 5 columns]
@@ -502,7 +502,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 2 minutes 28.032 seconds)
+ **Total running time of the script:** ( 2 minutes 24.022 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
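
The matrix-multiplication table compares the tutorial's Triton kernel against cuBLAS (reached through `torch.matmul`) on square (M = N = K) fp16 problems, with throughput reported as 2·M·N·K flops over the measured time, i.e. TFLOPS. The tutorial drives this through `triton.testing.perf_report`; the sketch below instead times a single cuBLAS point with CUDA events so it does not depend on `do_bench` version differences (`matmul_tflops` is an illustrative helper, not the file's code):

```python
import torch


def matmul_tflops(M: int, N: int, K: int, reps: int = 100) -> float:
    # Time cuBLAS (torch.matmul) on fp16 operands and convert to TFLOPS.
    a = torch.randn((M, K), device='cuda', dtype=torch.float16)
    b = torch.randn((K, N), device='cuda', dtype=torch.float16)
    for _ in range(10):          # warm-up
        torch.matmul(a, b)
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(reps):
        torch.matmul(a, b)
    end.record()
    torch.cuda.synchronize()
    ms = start.elapsed_time(end) / reps          # milliseconds per GEMM
    return 2 * M * N * K * 1e-12 / (ms * 1e-3)   # a GEMM performs 2*M*N*K flops


# Comparable to the last row of the table above (M = N = K = 4096).
print(f"{matmul_tflops(4096, 4096, 4096):.1f} TFLOPS")
```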

View File

@@ -5,12 +5,12 @@
Computation times
=================
- **03:52.398** total execution time for **getting-started_tutorials** files:
+ **03:47.821** total execution time for **getting-started_tutorials** files:
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 02:28.032 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 02:24.022 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 01:13.336 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 01:12.769 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 00:11.030 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+

View File

@@ -319,22 +319,22 @@ for different problem sizes.</p>
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vector-add-performance:
size Triton Torch
- 0 4096.0 9.540372 9.600000
+ 0 4096.0 8.000000 9.600000
1 8192.0 19.200000 19.200000
2 16384.0 38.400001 38.400001
3 32768.0 63.999998 76.800002
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
6 262144.0 341.333321 341.333321
- 7 524288.0 472.615390 472.615390
+ 7 524288.0 511.999982 472.615390
8 1048576.0 585.142862 614.400016
- 9 2097152.0 682.666643 722.823517
+ 9 2097152.0 682.666643 702.171410
10 4194304.0 744.727267 780.190482
11 8388608.0 780.190482 812.429770
12 16777216.0 799.219478 833.084721
- 13 33554432.0 809.086412 843.811163
- 14 67108864.0 814.111783 848.362445
- 15 134217728.0 818.773573 850.656574
+ 13 33554432.0 807.425031 842.004273
+ 14 67108864.0 814.955429 848.362445
+ 15 134217728.0 820.481984 850.656574
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 11.030 seconds)</p>

View File

@@ -386,16 +386,16 @@ We will then compare its performance against (1) <code class="code docutils lite
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
N Triton Torch (native) Torch (jit)
0 256.0 512.000001 546.133347 186.181817
- 1 384.0 585.142862 585.142862 153.600004
- 2 512.0 630.153853 585.142849 154.566038
+ 1 384.0 585.142862 558.545450 153.600004
+ 2 512.0 630.153853 606.814814 154.566038
3 640.0 660.645170 640.000002 160.000000
4 768.0 664.216187 664.216187 163.839992
.. ... ... ... ...
- 93 12160.0 742.595457 405.333344 199.038365
- 94 12288.0 753.287332 415.222812 199.197579
- 95 12416.0 737.128054 411.296057 198.854847
- 96 12544.0 736.528417 412.971190 199.111113
- 97 12672.0 738.622965 412.097543 199.167004
+ 93 12160.0 741.180979 406.179533 199.038365
+ 94 12288.0 753.287332 415.222812 199.298541
+ 95 12416.0 737.128054 412.149375 198.854847
+ 96 12544.0 737.882340 412.546756 199.111113
+ 97 12672.0 738.622965 412.097543 199.264875
[98 rows x 4 columns]
</pre></div>
@@ -408,7 +408,7 @@ We will then compare its performance against (1) <code class="code docutils lite
Note however that the PyTorch <cite>softmax</cite> operation is more general and works on tensors of any shape.</p></li>
</ul>
</div></blockquote>
- <p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 13.336 seconds)</p>
+ <p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 12.769 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>

View File

@@ -576,32 +576,32 @@ torch_output=tensor([[ 1.1045, -36.9688, 31.4688, ..., -11.3906, 24.4531, -3
7 1152.0 44.566925 ... 46.656000 46.656000
8 1280.0 51.200001 ... 56.888887 56.109587
9 1408.0 64.138541 ... 63.392744 62.664092
- 10 1536.0 80.430545 ... 75.296679 75.296679
- 11 1664.0 63.372618 ... 62.492442 62.061463
- 12 1792.0 72.983276 ... 62.441243 62.441243
- 13 1920.0 69.120002 ... 69.120002 67.764707
- 14 2048.0 73.584279 ... 74.565406 74.235468
- 15 2176.0 82.473969 ... 79.855747 81.472263
- 16 2304.0 68.446623 ... 73.275679 73.275679
- 17 2432.0 71.125224 ... 71.125224 80.499895
- 18 2560.0 77.649287 ... 75.328737 74.642370
- 19 2688.0 82.642823 ... 81.227100 84.861423
- 20 2816.0 82.446516 ... 80.320825 80.469019
- 21 2944.0 82.784108 ... 77.505492 78.605729
- 22 3072.0 81.589488 ... 83.391907 79.976138
- 23 3200.0 83.989503 ... 86.137280 90.014065
- 24 3328.0 83.130825 ... 84.596116 87.051143
- 25 3456.0 80.300370 ... 84.156124 81.026701
- 26 3584.0 83.954614 ... 91.377427 92.696281
- 27 3712.0 84.230479 ... 83.526206 85.382349
- 28 3840.0 84.809814 ... 87.148936 87.910967
- 29 3968.0 92.652949 ... 87.222259 87.597943
- 30 4096.0 93.727466 ... 90.382307 89.837839
+ 10 1536.0 80.430545 ... 76.106321 75.296679
+ 11 1664.0 63.372618 ... 62.061463 61.636381
+ 12 1792.0 72.983276 ... 62.441243 62.096267
+ 13 1920.0 69.120002 ... 70.172588 69.818184
+ 14 2048.0 73.584279 ... 74.898285 74.565406
+ 15 2176.0 82.137338 ... 78.302130 78.608000
+ 16 2304.0 68.643310 ... 73.051599 72.828879
+ 17 2432.0 70.945618 ... 72.222274 82.147552
+ 18 2560.0 77.833728 ... 76.560748 76.560748
+ 19 2688.0 81.928846 ... 83.369354 79.357857
+ 20 2816.0 79.587973 ... 80.320825 80.173175
+ 21 2944.0 78.854483 ... 76.435630 77.026327
+ 22 3072.0 81.589488 ... 82.661468 82.661468
+ 23 3200.0 84.768213 ... 88.888888 83.989503
+ 24 3328.0 83.130825 ... 86.217120 82.369902
+ 25 3456.0 80.220468 ... 81.271743 82.519518
+ 26 3584.0 87.466332 ... 92.126428 88.499397
+ 27 3712.0 84.301560 ... 83.247783 83.596102
+ 28 3840.0 80.255442 ... 80.432371 82.902547
+ 29 3968.0 86.176998 ... 87.159957 87.035620
+ 30 4096.0 93.662059 ... 83.057130 90.200084
[31 rows x 5 columns]
</pre></div>
</div>
- <p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 28.032 seconds)</p>
+ <p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 24.022 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>

View File

@@ -174,7 +174,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline"></a></h1>
- <p><strong>03:52.398</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
+ <p><strong>03:47.821</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 85%" />
@@ -183,11 +183,11 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
- <td><p>02:28.032</p></td>
+ <td><p>02:24.022</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
- <td><p>01:13.336</p></td>
+ <td><p>01:12.769</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>

File diff suppressed because one or more lines are too long