[GH-PAGES] Updated website

commit 5a0c649530
parent 4e662292ad
Author: Philippe Tillet
Date:   2021-09-30 00:21:15 +00:00

19 changed files with 81 additions and 81 deletions

Binary file not shown (image; 24 KiB before, 25 KiB after).

Binary file not shown (image; 16 KiB before, 16 KiB after).

Binary file not shown (image; 37 KiB before, 37 KiB after).

Binary file not shown (image; 24 KiB before, 24 KiB after).

Binary file not shown (image; 57 KiB before, 58 KiB after).

Binary file not shown (image; 33 KiB before, 34 KiB after).

View File

@@ -231,13 +231,13 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
vector-add-performance:
size Triton Torch
- 0 4096.0 8.000000 9.600000
+ 0 4096.0 9.600000 9.600000
1 8192.0 19.200000 19.200000
2 16384.0 38.400001 38.400001
3 32768.0 76.800002 76.800002
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
- 6 262144.0 341.333321 341.333321
+ 6 262144.0 341.333321 384.000001
7 524288.0 472.615390 472.615390
8 1048576.0 614.400016 614.400016
9 2097152.0 722.823517 722.823517
@@ -254,7 +254,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 56.166 seconds)
+ **Total running time of the script:** ( 1 minutes 50.209 seconds)
.. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
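
A note on where this table comes from: it is printed by the tutorial's `triton.testing.perf_report` harness when run with `print_data=True`. A minimal sketch of that harness is below, assuming the add kernel from the same tutorial; the unpacking hedges between the (mean, min, max) tuple that `do_bench` returned in the Triton 1.x era this page was built with and the scalar return of newer releases::

    import torch
    import triton
    import triton.testing
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, output_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements  # guard the ragged tail
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(output_ptr + offsets, x + y, mask=mask)

    def add(x, y):
        output = torch.empty_like(x)
        n = output.numel()
        grid = lambda meta: (triton.cdiv(n, meta['BLOCK_SIZE']),)
        add_kernel[grid](x, y, output, n, BLOCK_SIZE=1024)
        return output

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],
            x_vals=[2 ** i for i in range(12, 28)],  # 4096 ... 134217728, as in the table
            x_log=True,
            line_arg='provider',
            line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        fn = (lambda: x + y) if provider == 'torch' else (lambda: add(x, y))
        res = triton.testing.do_bench(fn)
        ms = res[0] if isinstance(res, tuple) else res
        # Three float32 tensors (x, y, output) each move through DRAM once.
        return 3 * x.numel() * x.element_size() * 1e-9 / (ms * 1e-3)

    benchmark.run(print_data=True, show_plots=True)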

View File

@@ -286,17 +286,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
softmax-performance:
N Triton Torch (native) Torch (jit)
- 0 256.0 512.000001 546.133347 186.181817
+ 0 256.0 512.000001 546.133347 184.089886
1 384.0 585.142862 585.142862 153.600004
- 2 512.0 630.153853 606.814814 154.566038
+ 2 512.0 630.153853 585.142849 154.566038
3 640.0 682.666684 640.000002 160.000000
4 768.0 702.171410 664.216187 163.839992
.. ... ... ... ...
- 93 12160.0 810.666687 406.179533 198.936606
- 94 12288.0 810.754644 415.222812 199.298541
- 95 12416.0 809.189387 412.149375 198.854847
- 96 12544.0 807.661970 412.971190 199.111113
- 97 12672.0 807.776923 412.097543 199.167004
+ 93 12160.0 810.666687 405.755985 199.140227
+ 94 12288.0 810.754644 415.661740 199.399583
+ 95 12416.0 807.544681 411.296057 198.954424
+ 96 12544.0 807.661970 412.971190 199.209928
+ 97 12672.0 807.776923 412.097543 199.264875
[98 rows x 4 columns]
@@ -314,7 +314,7 @@ In the above plot, we can see that:
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 3 minutes 26.313 seconds)
+ **Total running time of the script:** ( 3 minutes 27.286 seconds)
.. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
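
The "Torch (jit)" column above tracks a naive softmax in which every step materializes an intermediate tensor in DRAM; a sketch of such a reference implementation, following the tutorial's formulation, is::

    import torch

    @torch.jit.script
    def naive_softmax(x):
        # Row-wise, numerically stable softmax. Each line launches its own
        # kernel and round-trips through DRAM, which is why this version
        # trails the fused Triton kernel in the table above.
        x_max = x.max(dim=1)[0]                   # row maxima, for stability
        z = x - x_max[:, None]                    # shift so exp() cannot overflow
        numerator = torch.exp(z)
        denominator = numerator.sum(dim=1)
        return numerator / denominator[:, None]

The GB/s figures are consistent with counting two input-sized memory passes (one read and one write of an M-by-N float32 tensor) per softmax call.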

View File

@@ -462,37 +462,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
- 0 256.0 2.978909 ... 3.276800 2.978909
+ 0 256.0 2.730667 ... 3.276800 3.276800
1 384.0 7.372800 ... 8.507077 8.507077
2 512.0 14.563555 ... 16.384000 16.384000
3 640.0 22.260869 ... 24.380953 24.380953
4 768.0 32.768000 ... 34.028308 34.028308
- 5 896.0 39.025776 ... 40.140799 37.971025
+ 5 896.0 39.025776 ... 39.025776 39.025776
6 1024.0 49.932191 ... 53.773130 52.428801
7 1152.0 44.566925 ... 46.656000 46.656000
- 8 1280.0 51.200001 ... 56.888887 56.109587
- 9 1408.0 64.138541 ... 67.305878 66.485074
- 10 1536.0 80.430545 ... 78.643199 78.643199
- 11 1664.0 63.372618 ... 62.492442 62.061463
+ 8 1280.0 51.200001 ... 56.109587 56.109587
+ 9 1408.0 64.138541 ... 66.485074 66.485074
+ 10 1536.0 80.430545 ... 79.526831 78.643199
+ 11 1664.0 62.929456 ... 62.061463 62.061463
12 1792.0 72.983276 ... 72.047592 72.047592
13 1920.0 69.120002 ... 70.530615 70.172588
14 2048.0 73.908442 ... 76.959706 76.608294
- 15 2176.0 83.500614 ... 86.367588 85.632545
- 16 2304.0 68.446623 ... 76.809875 76.809875
- 17 2432.0 71.305746 ... 74.918570 85.393507
- 18 2560.0 78.019048 ... 81.108913 80.709358
- 19 2688.0 83.552988 ... 89.676257 89.254248
- 20 2816.0 82.680963 ... 83.233226 83.712490
- 21 2944.0 82.237674 ... 82.784108 81.967162
- 22 3072.0 81.707223 ... 86.978653 85.922766
- 23 3200.0 84.993363 ... 95.238096 95.522391
- 24 3328.0 83.516586 ... 81.346098 83.710812
- 25 3456.0 81.932484 ... 90.484366 91.097818
- 26 3584.0 85.797134 ... 90.549237 94.349836
- 27 3712.0 85.859341 ... 85.019017 88.561477
- 28 3840.0 81.019778 ... 88.615388 91.398346
- 29 3968.0 85.991957 ... 91.472214 84.154440
- 30 4096.0 93.142072 ... 88.359266 82.392715
+ 15 2176.0 83.500614 ... 86.367588 85.998493
+ 16 2304.0 68.446623 ... 77.057651 77.057651
+ 17 2432.0 71.305746 ... 85.393507 84.877538
+ 18 2560.0 78.019048 ... 81.310171 81.310171
+ 19 2688.0 83.737433 ... 89.254248 89.888756
+ 20 2816.0 82.602666 ... 83.074685 83.233226
+ 21 2944.0 82.237674 ... 82.784108 82.509987
+ 22 3072.0 82.661468 ... 88.750943 85.920732
+ 23 3200.0 84.993363 ... 90.780140 95.522391
+ 24 3328.0 83.613586 ... 81.346098 83.905938
+ 25 3456.0 81.849303 ... 91.304157 87.823058
+ 26 3584.0 83.024371 ... 94.747514 95.451583
+ 27 3712.0 81.682211 ... 87.937800 87.937800
+ 28 3840.0 82.778440 ... 90.723546 88.473602
+ 29 3968.0 91.747320 ... 84.038524 91.335278
+ 30 4096.0 91.741443 ... 85.001726 90.200084
[31 rows x 5 columns]
@@ -502,7 +502,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 5 minutes 57.449 seconds)
+ **Total running time of the script:** ( 5 minutes 37.929 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
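
For reference, the figures in this table are TFLOPS on square M = N = K matrices, and the row-by-row churn in this commit is on the scale of ordinary run-to-run benchmark variance from regenerating the docs. A hedged sketch of reproducing the cuBLAS baseline follows (the Triton columns come from the tutorial's own kernel, optionally fused with LeakyReLU); as above, the `do_bench` return convention is version-dependent::

    import torch
    import triton
    import triton.testing

    def tflops(M, N, K, ms):
        # A matmul performs 2*M*N*K floating-point ops (one multiply and
        # one add per inner-product term); convert to TFLOP/s.
        return 2 * M * N * K * 1e-12 / (ms * 1e-3)

    M = N = K = 256  # first row of the table
    a = torch.randn((M, K), device='cuda', dtype=torch.float16)
    b = torch.randn((K, N), device='cuda', dtype=torch.float16)
    res = triton.testing.do_bench(lambda: torch.matmul(a, b))  # cuBLAS path
    ms = res[0] if isinstance(res, tuple) else res
    print(f'cuBLAS @ M=N=K={M}: {tflops(M, N, K, ms):.6f} TFLOPS')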

View File

@@ -238,7 +238,7 @@ References
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 0 minutes 0.187 seconds)
+ **Total running time of the script:** ( 0 minutes 0.010 seconds)
.. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:
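
This page documents the low-memory dropout tutorial, whose point is algorithmic rather than performance: dropout that regenerates its mask from a seed instead of storing it. A condensed sketch, following the tutorial's `tl.rand`-based approach (parameter names illustrative)::

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def _seeded_dropout(x_ptr, output_ptr, n_elements, p, seed,
                        BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        # Recompute the dropout mask from (seed, offset) on the fly, so no
        # mask tensor is ever stored: this is the low-memory trick.
        random = tl.rand(seed, offsets)
        x_keep = random > p
        output = tl.where(x_keep, x / (1 - p), 0.0)
        tl.store(output_ptr + offsets, output, mask=mask)

    def seeded_dropout(x, p, seed):
        output = torch.empty_like(x)
        n_elements = x.numel()
        grid = lambda meta: (triton.cdiv(n_elements, meta['BLOCK_SIZE']),)
        _seeded_dropout[grid](x, output, n_elements, p, seed, BLOCK_SIZE=1024)
        return output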

View File

@@ -5,14 +5,14 @@
Computation times
=================
- **11:20.116** total execution time for **getting-started_tutorials** files:
+ **10:55.434** total execution time for **getting-started_tutorials** files:
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 05:57.449 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 05:37.929 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:26.313 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:27.286 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:56.166 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:50.209 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.187 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.010 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+

View File

@@ -320,13 +320,13 @@ for different problem sizes.</p>
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>vector-add-performance:
size Triton Torch
- 0 4096.0 8.000000 9.600000
+ 0 4096.0 9.600000 9.600000
1 8192.0 19.200000 19.200000
2 16384.0 38.400001 38.400001
3 32768.0 76.800002 76.800002
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
- 6 262144.0 341.333321 341.333321
+ 6 262144.0 341.333321 384.000001
7 524288.0 472.615390 472.615390
8 1048576.0 614.400016 614.400016
9 2097152.0 722.823517 722.823517
@@ -338,7 +338,7 @@ for different problem sizes.</p>
15 134217728.0 849.737435 850.656574
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 56.166 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 50.209 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>

View File

@@ -373,17 +373,17 @@ We will then compare its performance against (1) <code class="code docutils lite
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
N Triton Torch (native) Torch (jit)
- 0 256.0 512.000001 546.133347 186.181817
+ 0 256.0 512.000001 546.133347 184.089886
1 384.0 585.142862 585.142862 153.600004
- 2 512.0 630.153853 606.814814 154.566038
+ 2 512.0 630.153853 585.142849 154.566038
3 640.0 682.666684 640.000002 160.000000
4 768.0 702.171410 664.216187 163.839992
.. ... ... ... ...
- 93 12160.0 810.666687 406.179533 198.936606
- 94 12288.0 810.754644 415.222812 199.298541
- 95 12416.0 809.189387 412.149375 198.854847
- 96 12544.0 807.661970 412.971190 199.111113
- 97 12672.0 807.776923 412.097543 199.167004
+ 93 12160.0 810.666687 405.755985 199.140227
+ 94 12288.0 810.754644 415.661740 199.399583
+ 95 12416.0 807.544681 411.296057 198.954424
+ 96 12544.0 807.661970 412.971190 199.209928
+ 97 12672.0 807.776923 412.097543 199.264875
[98 rows x 4 columns]
</pre></div>
@@ -396,7 +396,7 @@ We will then compare its performance against (1) <code class="code docutils lite
Note however that the PyTorch <cite>softmax</cite> operation is more general and works on tensors of any shape.</p></li>
</ul>
</div></blockquote>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 26.313 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 27.286 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>

View File

@@ -567,42 +567,42 @@ torch_output=tensor([[ 1.1045, -36.9688, 31.4688, ..., -11.3906, 24.4531, -3
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
- 0 256.0 2.978909 ... 3.276800 2.978909
+ 0 256.0 2.730667 ... 3.276800 3.276800
1 384.0 7.372800 ... 8.507077 8.507077
2 512.0 14.563555 ... 16.384000 16.384000
3 640.0 22.260869 ... 24.380953 24.380953
4 768.0 32.768000 ... 34.028308 34.028308
- 5 896.0 39.025776 ... 40.140799 37.971025
+ 5 896.0 39.025776 ... 39.025776 39.025776
6 1024.0 49.932191 ... 53.773130 52.428801
7 1152.0 44.566925 ... 46.656000 46.656000
- 8 1280.0 51.200001 ... 56.888887 56.109587
- 9 1408.0 64.138541 ... 67.305878 66.485074
- 10 1536.0 80.430545 ... 78.643199 78.643199
- 11 1664.0 63.372618 ... 62.492442 62.061463
+ 8 1280.0 51.200001 ... 56.109587 56.109587
+ 9 1408.0 64.138541 ... 66.485074 66.485074
+ 10 1536.0 80.430545 ... 79.526831 78.643199
+ 11 1664.0 62.929456 ... 62.061463 62.061463
12 1792.0 72.983276 ... 72.047592 72.047592
13 1920.0 69.120002 ... 70.530615 70.172588
14 2048.0 73.908442 ... 76.959706 76.608294
- 15 2176.0 83.500614 ... 86.367588 85.632545
- 16 2304.0 68.446623 ... 76.809875 76.809875
- 17 2432.0 71.305746 ... 74.918570 85.393507
- 18 2560.0 78.019048 ... 81.108913 80.709358
- 19 2688.0 83.552988 ... 89.676257 89.254248
- 20 2816.0 82.680963 ... 83.233226 83.712490
- 21 2944.0 82.237674 ... 82.784108 81.967162
- 22 3072.0 81.707223 ... 86.978653 85.922766
- 23 3200.0 84.993363 ... 95.238096 95.522391
- 24 3328.0 83.516586 ... 81.346098 83.710812
- 25 3456.0 81.932484 ... 90.484366 91.097818
- 26 3584.0 85.797134 ... 90.549237 94.349836
- 27 3712.0 85.859341 ... 85.019017 88.561477
- 28 3840.0 81.019778 ... 88.615388 91.398346
- 29 3968.0 85.991957 ... 91.472214 84.154440
- 30 4096.0 93.142072 ... 88.359266 82.392715
+ 15 2176.0 83.500614 ... 86.367588 85.998493
+ 16 2304.0 68.446623 ... 77.057651 77.057651
+ 17 2432.0 71.305746 ... 85.393507 84.877538
+ 18 2560.0 78.019048 ... 81.310171 81.310171
+ 19 2688.0 83.737433 ... 89.254248 89.888756
+ 20 2816.0 82.602666 ... 83.074685 83.233226
+ 21 2944.0 82.237674 ... 82.784108 82.509987
+ 22 3072.0 82.661468 ... 88.750943 85.920732
+ 23 3200.0 84.993363 ... 90.780140 95.522391
+ 24 3328.0 83.613586 ... 81.346098 83.905938
+ 25 3456.0 81.849303 ... 91.304157 87.823058
+ 26 3584.0 83.024371 ... 94.747514 95.451583
+ 27 3712.0 81.682211 ... 87.937800 87.937800
+ 28 3840.0 82.778440 ... 90.723546 88.473602
+ 29 3968.0 91.747320 ... 84.038524 91.335278
+ 30 4096.0 91.741443 ... 85.001726 90.200084
[31 rows x 5 columns]
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 57.449 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 37.929 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>

View File

@@ -370,7 +370,7 @@ to explore the <cite>triton/language/random</cite> folder!</p>
<dd><p>Nitish Srivastava and Geoffrey Hinton and Alex Krizhevsky and Ilya Sutskever and Ruslan Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, JMLR 2014</p>
</dd>
</dl>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.187 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.010 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-04-low-memory-dropout-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/c9aed78977a4c05741d675a38dde3d7d/04-low-memory-dropout.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">04-low-memory-dropout.py</span></code></a></p>

View File

@@ -174,7 +174,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline"></a></h1>
- <p><strong>11:20.116</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
+ <p><strong>10:55.434</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 85%" />
@@ -183,19 +183,19 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
- <td><p>05:57.449</p></td>
+ <td><p>05:37.929</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
- <td><p>03:26.313</p></td>
+ <td><p>03:27.286</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>
- <td><p>01:56.166</p></td>
+ <td><p>01:50.209</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
- <td><p>00:00.187</p></td>
+ <td><p>00:00.010</p></td>
<td><p>0.0 MB</p></td>
</tr>
</tbody>

File diff suppressed because one or more lines are too long