[GH-PAGES] Updated website
[6 binary images updated; sizes before/after: 24→25 KiB, 16→16 KiB, 38→37 KiB, 24→24 KiB, 57→58 KiB, 33→33 KiB]
@@ -234,10 +234,10 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
     0     4096.0    9.600000    9.600000
     1     8192.0   19.200000   19.200000
     2    16384.0   38.400001   38.400001
-    3    32768.0   76.800002   76.800002
+    3    32768.0   63.999998   76.800002
     4    65536.0  127.999995  127.999995
     5   131072.0  219.428568  219.428568
-    6   262144.0  341.333321  384.000001
+    6   262144.0  341.333321  341.333321
     7   524288.0  472.615390  472.615390
     8  1048576.0  614.400016  614.400016
     9  2097152.0  722.823517  722.823517
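(For readers skimming this diff: the table above comes from the tutorial's
`triton.testing.perf_report`-decorated benchmark, run with `print_data=True`.
A minimal sketch of that setup follows; the `add` wrapper, the exact `x_vals`,
and the single-value `do_bench` return are assumptions in the spirit of the
01-vector-add tutorial, not facts recorded in this diff. Some Triton releases
return `(ms, min_ms, max_ms)` from `do_bench` instead.)

.. code-block:: python

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x, y):
        out = torch.empty_like(x)
        n = out.numel()
        grid = lambda meta: (triton.cdiv(n, meta['BLOCK_SIZE']),)
        add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
        return out

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                        # first column of the table above
            x_vals=[2 ** i for i in range(12, 28)],  # 4096 ... 134217728 (assumed)
            line_arg='provider',
            line_vals=['torch', 'triton'],
            line_names=['Torch', 'Triton'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        fn = (lambda: x + y) if provider == 'torch' else (lambda: add(x, y))
        ms = triton.testing.do_bench(fn)
        # Vector add moves 3 float32 tensors through DRAM: read x, read y, write out.
        return 12 * size / ms * 1e-6  # GB/s

    benchmark.run(print_data=True, show_plots=False)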
@@ -254,7 +254,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes 56.916 seconds)
+   **Total running time of the script:** ( 1 minutes 44.504 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
@@ -286,17 +286,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 
     softmax-performance:
              N      Triton  Torch (native)  Torch (jit)
-    0    256.0  512.000001      546.133347   186.181817
-    1    384.0  585.142862      585.142862   153.600004
+    0    256.0  512.000001      546.133347   184.089886
+    1    384.0  585.142862      558.545450   151.703707
     2    512.0  630.153853      606.814814   154.566038
     3    640.0  682.666684      640.000002   160.000000
     4    768.0  702.171410      664.216187   163.839992
     ..     ...         ...             ...          ...
     93 12160.0  810.666687      406.179533   199.038365
-    94 12288.0  810.754644      415.661740   199.298541
-    95 12416.0  809.189387      412.149375   198.954424
-    96 12544.0  807.661970      412.971190   199.209928
-    97 12672.0  807.776923      412.097543   199.167004
+    94 12288.0  810.754644      415.222812   199.298541
+    95 12416.0  809.189387      412.577363   198.854847
+    96 12544.0  807.661970      412.971190   199.061730
+    97 12672.0  807.776923      412.097543   199.264875
 
     [98 rows x 4 columns]
 
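(The "Torch (jit)" column above corresponds to a naive softmax: a chain of
elementwise and reduction ops that `torch.jit.script` cannot fuse into one
kernel, so each row of the input makes several round trips through DRAM and
bandwidth saturates near ~200 GB/s, well below the Triton and native columns.
A sketch of that baseline in the tutorial's style; treat it as illustrative
rather than a verbatim copy:)

.. code-block:: python

    import torch

    @torch.jit.script
    def naive_softmax(x):
        # Each line reads its inputs from and writes its result to GPU DRAM,
        # which is what caps the "Torch (jit)" column in the table above.
        x_max = x.max(dim=1)[0]                   # read MN elements, write M
        z = x - x_max[:, None]                    # read MN + M, write MN
        numerator = torch.exp(z)                  # read MN, write MN
        denominator = numerator.sum(dim=1)        # read MN, write M
        return numerator / denominator[:, None]   # read MN + M, write MN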
@@ -314,7 +314,7 @@ In the above plot, we can see that:
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes 27.048 seconds)
+   **Total running time of the script:** ( 3 minutes 26.029 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
@@ -462,37 +462,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 
     matmul-performance:
              M      cuBLAS  ...      Triton  Triton (+ LeakyReLU)
-    0    256.0    2.730667  ...    2.978909              2.978909
+    0    256.0    2.978909  ...    2.978909              2.978909
     1    384.0    7.372800  ...    8.507077              8.507077
     2    512.0   14.563555  ...   16.384000             16.384000
     3    640.0   22.260869  ...   24.380953             24.380953
     4    768.0   32.768000  ...   34.028308             34.028308
-    5    896.0   39.025776  ...   40.140799             39.025776
+    5    896.0   37.971025  ...   39.025776             39.025776
     6   1024.0   49.932191  ...   53.773130             52.428801
     7   1152.0   44.566925  ...   46.656000             45.938215
     8   1280.0   51.200001  ...   56.109587             56.109587
-    9   1408.0   64.138541  ...   66.485074             65.684049
-    10  1536.0   80.430545  ...   78.643199             78.643199
-    11  1664.0   62.929456  ...   62.492442             61.636381
-    12  1792.0   72.983276  ...   72.047592             71.588687
-    13  1920.0   69.120002  ...   70.172588             70.530615
+    9   1408.0   64.138541  ...   66.485074             66.485074
+    10  1536.0   79.526831  ...   78.643199             78.643199
+    11  1664.0   62.929456  ...   62.492442             62.492442
+    12  1792.0   72.983276  ...   72.047592             72.047592
+    13  1920.0   69.120002  ...   70.530615             70.172588
     14  2048.0   73.908442  ...   76.959706             76.608294
     15  2176.0   83.155572  ...   85.998493             85.998493
-    16  2304.0   68.446623  ...   77.307030             76.563695
-    17  2432.0   71.305746  ...   85.653855             85.134737
-    18  2560.0   77.833728  ...   81.108913             80.709358
-    19  2688.0   83.737433  ...   89.676257             89.464755
-    20  2816.0   83.074685  ...   82.916747             83.552120
-    21  2944.0   82.102191  ...   82.237674             82.237674
-    22  3072.0   82.540970  ...   89.170242             88.335577
-    23  3200.0   84.768213  ...   95.665176             95.380032
-    24  3328.0   83.034941  ...   85.096096             84.003845
-    25  3456.0   81.932484  ...   86.689860             88.595129
-    26  3584.0   87.296493  ...   98.699661             98.375705
-    27  3712.0   82.423549  ...   88.640059             85.822459
-    28  3840.0   84.421376  ...   91.930177             84.613126
-    29  3968.0   92.267631  ...   84.270676             91.403695
-    30  4096.0   86.592080  ...   86.703957             91.491294
+    16  2304.0   68.446623  ...   76.809875             76.809875
+    17  2432.0   71.305746  ...   74.918570             85.393507
+    18  2560.0   77.833728  ...   81.310171             80.709358
+    19  2688.0   83.552988  ...   89.676257             89.254248
+    20  2816.0   82.759409  ...   83.074685             83.392363
+    21  2944.0   82.784108  ...   81.832567             82.237674
+    22  3072.0   81.943708  ...   87.924073             89.030036
+    23  3200.0   82.368085  ...   89.012517             95.025983
+    24  3328.0   83.613586  ...   81.346098             83.905938
+    25  3456.0   81.766291  ...   90.943675             91.097818
+    26  3584.0   86.540320  ...   91.655413             87.381330
+    27  3712.0   85.163978  ...   84.088676             88.561477
+    28  3840.0   80.960466  ...   86.400002             91.701494
+    29  3968.0   86.083907  ...   91.198760             84.154440
+    30  4096.0   93.498941  ...   93.336389             89.181212
 
     [31 rows x 5 columns]
 
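(The "Triton (+ LeakyReLU)" column above times the same matmul kernel with an
activation fused into its epilogue, which is why the two Triton columns track
each other so closely: the extra elementwise op runs on the accumulator while
it is still in registers, before the single store of the output tile. A sketch
in the tutorial's style; the `ACTIVATION` meta-parameter wiring is an
assumption, not something recorded in this diff:)

.. code-block:: python

    import triton
    import triton.language as tl

    @triton.jit
    def leaky_relu(x):
        # Elementwise epilogue applied to the accumulator before it is written out.
        return tl.where(x >= 0, x, 0.01 * x)

    # Inside the matmul kernel, after the K-loop has produced `accumulator`
    # (assumed wiring, in the spirit of the 03-matrix-multiplication tutorial):
    #
    #     if ACTIVATION:
    #         accumulator = ACTIVATION(accumulator)
    #     tl.store(c_ptrs, accumulator, mask=c_mask)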
@@ -502,7 +502,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 6 minutes 8.320 seconds)
+   **Total running time of the script:** ( 6 minutes 4.767 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
@@ -238,7 +238,7 @@ References
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.189 seconds)
+   **Total running time of the script:** ( 0 minutes 0.271 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:
@@ -5,14 +5,14 @@
 
 Computation times
 =================
-**11:32.474** total execution time for **getting-started_tutorials** files:
+**11:15.571** total execution time for **getting-started_tutorials** files:
 
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:08.320 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:04.767 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:27.048 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:26.029 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:56.916 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:44.504 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.189 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.271 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
@@ -323,10 +323,10 @@ for different problem sizes.</p>
 0     4096.0    9.600000    9.600000
 1     8192.0   19.200000   19.200000
 2    16384.0   38.400001   38.400001
-3    32768.0   76.800002   76.800002
+3    32768.0   63.999998   76.800002
 4    65536.0  127.999995  127.999995
 5   131072.0  219.428568  219.428568
-6   262144.0  341.333321  384.000001
+6   262144.0  341.333321  341.333321
 7   524288.0  472.615390  472.615390
 8  1048576.0  614.400016  614.400016
 9  2097152.0  722.823517  722.823517
@@ -338,7 +338,7 @@ for different problem sizes.</p>
 15  134217728.0  849.737435  850.656574
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 56.916 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 44.504 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>
@@ -373,17 +373,17 @@ We will then compare its performance against (1) <code class="code docutils lite
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
          N      Triton  Torch (native)  Torch (jit)
-0    256.0  512.000001      546.133347   186.181817
-1    384.0  585.142862      585.142862   153.600004
+0    256.0  512.000001      546.133347   184.089886
+1    384.0  585.142862      558.545450   151.703707
 2    512.0  630.153853      606.814814   154.566038
 3    640.0  682.666684      640.000002   160.000000
 4    768.0  702.171410      664.216187   163.839992
 ..     ...         ...             ...          ...
 93 12160.0  810.666687      406.179533   199.038365
-94 12288.0  810.754644      415.661740   199.298541
-95 12416.0  809.189387      412.149375   198.954424
-96 12544.0  807.661970      412.971190   199.209928
-97 12672.0  807.776923      412.097543   199.167004
+94 12288.0  810.754644      415.222812   199.298541
+95 12416.0  809.189387      412.577363   198.854847
+96 12544.0  807.661970      412.971190   199.061730
+97 12672.0  807.776923      412.097543   199.264875
 
 [98 rows x 4 columns]
 </pre></div>
@@ -396,7 +396,7 @@ We will then compare its performance against (1) <code class="code docutils lite
 Note however that the PyTorch <cite>softmax</cite> operation is more general and will work on tensors of any shape.</p></li>
 </ul>
 </div></blockquote>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 27.048 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 26.029 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>
@@ -567,42 +567,42 @@ torch_output=tensor([[ 1.1045, -36.9688, 31.4688, ..., -11.3906, 24.4531, -3
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>matmul-performance:
          M      cuBLAS  ...      Triton  Triton (+ LeakyReLU)
-0    256.0    2.730667  ...    2.978909              2.978909
+0    256.0    2.978909  ...    2.978909              2.978909
 1    384.0    7.372800  ...    8.507077              8.507077
 2    512.0   14.563555  ...   16.384000             16.384000
 3    640.0   22.260869  ...   24.380953             24.380953
 4    768.0   32.768000  ...   34.028308             34.028308
-5    896.0   39.025776  ...   40.140799             39.025776
+5    896.0   37.971025  ...   39.025776             39.025776
 6   1024.0   49.932191  ...   53.773130             52.428801
 7   1152.0   44.566925  ...   46.656000             45.938215
 8   1280.0   51.200001  ...   56.109587             56.109587
-9   1408.0   64.138541  ...   66.485074             65.684049
-10  1536.0   80.430545  ...   78.643199             78.643199
-11  1664.0   62.929456  ...   62.492442             61.636381
-12  1792.0   72.983276  ...   72.047592             71.588687
-13  1920.0   69.120002  ...   70.172588             70.530615
+9   1408.0   64.138541  ...   66.485074             66.485074
+10  1536.0   79.526831  ...   78.643199             78.643199
+11  1664.0   62.929456  ...   62.492442             62.492442
+12  1792.0   72.983276  ...   72.047592             72.047592
+13  1920.0   69.120002  ...   70.530615             70.172588
 14  2048.0   73.908442  ...   76.959706             76.608294
 15  2176.0   83.155572  ...   85.998493             85.998493
-16  2304.0   68.446623  ...   77.307030             76.563695
-17  2432.0   71.305746  ...   85.653855             85.134737
-18  2560.0   77.833728  ...   81.108913             80.709358
-19  2688.0   83.737433  ...   89.676257             89.464755
-20  2816.0   83.074685  ...   82.916747             83.552120
-21  2944.0   82.102191  ...   82.237674             82.237674
-22  3072.0   82.540970  ...   89.170242             88.335577
-23  3200.0   84.768213  ...   95.665176             95.380032
-24  3328.0   83.034941  ...   85.096096             84.003845
-25  3456.0   81.932484  ...   86.689860             88.595129
-26  3584.0   87.296493  ...   98.699661             98.375705
-27  3712.0   82.423549  ...   88.640059             85.822459
-28  3840.0   84.421376  ...   91.930177             84.613126
-29  3968.0   92.267631  ...   84.270676             91.403695
-30  4096.0   86.592080  ...   86.703957             91.491294
+16  2304.0   68.446623  ...   76.809875             76.809875
+17  2432.0   71.305746  ...   74.918570             85.393507
+18  2560.0   77.833728  ...   81.310171             80.709358
+19  2688.0   83.552988  ...   89.676257             89.254248
+20  2816.0   82.759409  ...   83.074685             83.392363
+21  2944.0   82.784108  ...   81.832567             82.237674
+22  3072.0   81.943708  ...   87.924073             89.030036
+23  3200.0   82.368085  ...   89.012517             95.025983
+24  3328.0   83.613586  ...   81.346098             83.905938
+25  3456.0   81.766291  ...   90.943675             91.097818
+26  3584.0   86.540320  ...   91.655413             87.381330
+27  3712.0   85.163978  ...   84.088676             88.561477
+28  3840.0   80.960466  ...   86.400002             91.701494
+29  3968.0   86.083907  ...   91.198760             84.154440
+30  4096.0   93.498941  ...   93.336389             89.181212
 
 [31 rows x 5 columns]
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes 8.320 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes 4.767 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>
@@ -370,7 +370,7 @@ to explore the <cite>triton/language/random</cite> folder!</p>
 <dd><p>Nitish Srivastava and Geoffrey Hinton and Alex Krizhevsky and Ilya Sutskever and Ruslan Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, JMLR 2014</p>
 </dd>
 </dl>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.189 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.271 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-04-low-memory-dropout-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/c9aed78977a4c05741d675a38dde3d7d/04-low-memory-dropout.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">04-low-memory-dropout.py</span></code></a></p>
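(For context on the `triton/language/random` pointer in this hunk: the
low-memory dropout tutorial regenerates its dropout mask on the fly from a
seed using Triton's counter-based RNG instead of materializing a mask tensor.
A sketch along those lines; names follow the tutorial, but treat the exact
signatures as assumptions:)

.. code-block:: python

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def _seeded_dropout(x_ptr, out_ptr, n_elements, p, seed, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        # tl.rand is counter-based (Philox): the same (seed, offset) pair always
        # yields the same value, so the mask never needs to be stored anywhere.
        random = tl.rand(seed, offsets)
        keep = random > p
        out = tl.where(keep, x / (1 - p), 0.0)
        tl.store(out_ptr + offsets, out, mask=mask)

    def seeded_dropout(x, p, seed):
        out = torch.empty_like(x)
        n = x.numel()
        grid = lambda meta: (triton.cdiv(n, meta['BLOCK_SIZE']),)
        _seeded_dropout[grid](x, out, n, p, seed, BLOCK_SIZE=1024)
        return out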
@@ -174,7 +174,7 @@
 
 <div class="section" id="computation-times">
 <span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>11:32.474</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
+<p><strong>11:15.571</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -183,19 +183,19 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
-<td><p>06:08.320</p></td>
+<td><p>06:04.767</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
-<td><p>03:27.048</p></td>
+<td><p>03:26.029</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>
-<td><p>01:56.916</p></td>
+<td><p>01:44.504</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
-<td><p>00:00.189</p></td>
+<td><p>00:00.271</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>