[GH-PAGES] Updated website: regenerated Sphinx output for the getting-started tutorials (benchmark tables, script timings, and build-info hashes).
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 42338bc3d56c45da16cbf32113f114b9
+config: 0084a748dd9b3ca8011ebc226a29adfe
 tags: 645f666f9bcd5a90fca523b33c5a78b7
(binary image diff: eight tutorial plot PNGs regenerated; file sizes unchanged at 24, 16, 36, 23, 59, 34, and 22 KiB, with one plot growing from 35 KiB to 36 KiB)
@@ -245,7 +245,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 10 4194304.0 780.190482 780.190482
 11 8388608.0 812.429770 812.429770
 12 16777216.0 833.084721 833.084721
-13 33554432.0 842.004273 843.811163
+13 33554432.0 842.004273 842.004273
 14 67108864.0 847.448255 848.362445
 15 134217728.0 849.737435 850.656574
 
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 1 minutes 40.502 seconds)
+**Total running time of the script:** ( 1 minutes 45.454 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
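
Context for the numbers above: the table is emitted by the tutorial's `triton.testing.perf_report` harness when run with `print_data=True`. A minimal sketch of that harness, following the 01-vector-add tutorial (the `add` wrapper is defined earlier in that tutorial, and the `do_bench` return signature shown here matches this generation of Triton; both are assumptions if you are on a different version):

    import torch
    import triton

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                        # first column of the table
            x_vals=[2 ** i for i in range(12, 28)],  # sweep 4 Ki .. 128 Mi elements
            x_log=True,
            line_arg='provider',
            line_vals=['torch', 'triton'],
            line_names=['Torch', 'Triton'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        if provider == 'torch':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: x + y)
        else:
            # `add` is the Triton-backed wrapper from earlier in the tutorial
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: add(x, y))
        # 2 reads + 1 write of 4-byte floats -> 12 bytes moved per element
        gbps = lambda ms: 12 * size / ms * 1e-6
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    benchmark.run(print_data=True, show_plots=True)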
@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 softmax-performance:
 N Triton Torch (native) Torch (jit)
-0 256.0 546.133347 546.133347 186.181817
+0 256.0 546.133347 546.133347 188.321838
 1 384.0 614.400016 585.142862 151.703707
-2 512.0 655.360017 606.814814 154.566038
-3 640.0 706.206879 640.000002 160.000000
+2 512.0 655.360017 606.814814 156.038096
+3 640.0 706.206879 640.000002 158.759699
 4 768.0 722.823517 664.216187 162.754967
 .. ... ... ... ...
-93 12160.0 812.359066 406.179533 199.140227
-94 12288.0 812.429770 415.661740 199.399583
-95 12416.0 812.498981 411.296057 198.954424
-96 12544.0 810.925276 412.971190 199.209928
-97 12672.0 811.007961 412.097543 199.264875
+93 12160.0 812.359066 406.179533 198.834951
+94 12288.0 812.429770 415.661740 199.197579
+95 12416.0 812.498981 412.149375 198.854847
+96 12544.0 810.925276 412.546756 199.012395
+97 12672.0 811.007961 412.097543 199.167004
 
 [98 rows x 4 columns]
@@ -306,7 +306,7 @@ In the above plot, we can see that:
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 3 minutes 30.017 seconds)
+**Total running time of the script:** ( 3 minutes 32.151 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
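
The `Torch (jit)` column above is the tutorial's naive, non-fused reference: each intermediate it produces makes a round trip through GPU DRAM, which is why it trails the fused Triton kernel so badly at large `N`. A sketch of that baseline, reconstructed from the 02-fused-softmax tutorial (treat the exact names as illustrative):

    import torch

    @torch.jit.script
    def naive_softmax(x):
        # Each op below materializes its result in global memory before the
        # next one reads it back -- several extra memory round trips compared
        # to a single fused kernel.
        x_max = x.max(dim=1)[0]
        z = x - x_max[:, None]
        numerator = torch.exp(z)
        denominator = numerator.sum(dim=1)
        return numerator / denominator[:, None]

    x = torch.randn(1823, 781, device='cuda')
    assert torch.allclose(naive_softmax(x), torch.softmax(x, dim=1), atol=1e-6)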
@@ -459,37 +459,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
-0 256.0 2.730667 ... 2.978909 2.978909
-1 384.0 7.372800 ... 8.507077 7.899428
+0 256.0 2.978909 ... 2.978909 2.978909
+1 384.0 7.372800 ... 8.507077 8.507077
 2 512.0 14.563555 ... 16.384000 16.384000
 3 640.0 22.260869 ... 24.380953 24.380953
 4 768.0 32.768000 ... 35.389441 34.028308
 5 896.0 39.025776 ... 40.140799 39.025776
 6 1024.0 49.932191 ... 53.773130 52.428801
-7 1152.0 45.242181 ... 47.775747 47.396572
+7 1152.0 45.242181 ... 48.161033 47.396572
 8 1280.0 51.200001 ... 57.690139 57.690139
-9 1408.0 64.138541 ... 68.147202 67.305878
+9 1408.0 64.138541 ... 69.009825 68.147202
 10 1536.0 80.430545 ... 81.355034 79.526831
 11 1664.0 62.929456 ... 63.372618 62.492442
 12 1792.0 72.512412 ... 73.460287 59.467852
-13 1920.0 68.776119 ... 71.257735 71.257735
+13 1920.0 69.120002 ... 71.626943 70.892307
 14 2048.0 73.908442 ... 78.398206 77.314362
-15 2176.0 83.155572 ... 87.876193 86.367588
-16 2304.0 68.446623 ... 77.810656 77.307030
-17 2432.0 71.305746 ... 86.711310 83.864074
-18 2560.0 78.019048 ... 82.747477 81.920002
-19 2688.0 83.737433 ... 90.316801 89.044730
-20 2816.0 79.587973 ... 83.312714 83.233226
-21 2944.0 81.967162 ... 82.784108 82.784108
-22 3072.0 78.534123 ... 89.170242 85.792579
-23 3200.0 79.701121 ... 96.822991 95.096582
-24 3328.0 82.843841 ... 85.096096 81.530349
-25 3456.0 80.945348 ... 89.579522 81.352463
-26 3584.0 87.466332 ... 97.416461 98.699661
-27 3712.0 82.559787 ... 88.955779 88.015279
-28 3840.0 83.497171 ... 89.043476 89.912191
-29 3968.0 87.976885 ... 87.315873 90.121079
-30 4096.0 87.552332 ... 85.460934 91.304576
+15 2176.0 83.155572 ... 87.876193 85.998493
+16 2304.0 68.446623 ... 78.064941 76.809875
+17 2432.0 71.305746 ... 85.393507 83.614477
+18 2560.0 78.019048 ... 82.022525 81.512437
+19 2688.0 83.737433 ... 90.748936 89.676257
+20 2816.0 80.173175 ... 84.035084 80.099554
+21 2944.0 82.921853 ... 83.337844 82.102191
+22 3072.0 81.825298 ... 88.890270 88.060814
+23 3200.0 83.989503 ... 97.116842 93.704243
+24 3328.0 83.226931 ... 85.398926 84.397770
+25 3456.0 82.099354 ... 91.615417 91.097818
+26 3584.0 85.715344 ... 90.367227 91.563533
+27 3712.0 85.896254 ... 88.797643 83.456425
+28 3840.0 85.005380 ... 93.523887 84.485870
+29 3968.0 92.512459 ... 84.679120 91.266964
+30 4096.0 86.480498 ... 86.536250 92.214171
 
 [31 rows x 5 columns]
@@ -499,7 +499,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 7 minutes 0.527 seconds)
+**Total running time of the script:** ( 6 minutes 43.164 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
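
The matmul figures are in TFLOPS, and the cuBLAS column is measured through `torch.matmul`. As a sanity check on the units, a back-of-the-envelope harness for one square fp16 problem (a sketch, under the same `do_bench` signature assumption as the vector-add example above):

    import torch
    import triton

    def cublas_tflops(M: int) -> float:
        # Square problem M = N = K with fp16 inputs, matching the table above.
        a = torch.randn((M, M), device='cuda', dtype=torch.float16)
        b = torch.randn((M, M), device='cuda', dtype=torch.float16)
        ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.matmul(a, b))
        # A matmul performs 2*M*N*K floating-point operations;
        # 1e-12 converts FLOPs to TFLOPs, 1e-3 converts ms to s.
        return 2 * M * M * M * 1e-12 / (ms * 1e-3)

    print(cublas_tflops(4096))  # should land near the cuBLAS entry of row 30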
@@ -240,7 +240,7 @@ References
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 0 minutes 0.292 seconds)
+**Total running time of the script:** ( 0 minutes 0.012 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:
@@ -40,34 +40,34 @@ Layer Normalization
 N Triton Torch Apex
 0 1024.0 585.142849 277.694907 481.882344
 1 1536.0 630.153868 323.368435 511.999982
-2 2048.0 682.666643 337.814445 520.126988
+2 2048.0 668.734716 337.814445 520.126988
 3 2560.0 694.237267 362.477870 518.481028
-4 3072.0 712.347810 378.092307 501.551037
-5 3584.0 725.873439 384.859062 458.751978
-6 4096.0 728.177767 381.023256 455.111095
+4 3072.0 702.171410 378.092307 501.551037
+5 3584.0 725.873439 384.859062 451.527536
+6 4096.0 728.177767 383.251446 445.823133
 7 4608.0 670.254540 396.387087 426.173427
-8 5120.0 688.403381 397.669909 422.268057
-9 5632.0 704.000002 396.969169 413.357796
-10 6144.0 697.191505 402.885254 409.600010
-11 6656.0 705.271522 400.360920 400.360920
-12 7168.0 690.891575 392.767108 386.154893
-13 7680.0 678.895043 392.587863 386.415087
-14 8192.0 636.271854 391.259714 373.424507
-15 8704.0 627.315309 390.095225 380.502740
-16 9216.0 606.814809 406.214877 383.999986
-17 9728.0 587.350922 408.524944 382.427505
-18 10240.0 566.920437 408.578556 382.803739
-19 10752.0 546.133312 410.577576 380.601764
-20 11264.0 533.207081 396.969169 371.595879
-21 11776.0 520.486200 409.599991 378.345375
-22 12288.0 514.680630 413.042029 383.251457
+8 5120.0 688.403381 397.669909 424.455959
+9 5632.0 698.542675 396.969169 411.470331
+10 6144.0 702.171410 404.543206 411.313806
+11 6656.0 700.631610 400.360920 400.360920
+12 7168.0 686.754468 382.293315 384.859062
+13 7680.0 678.895043 393.846167 386.415087
+14 8192.0 642.509816 392.431125 374.491442
+15 8704.0 624.502255 391.191007 379.465939
+16 9216.0 604.327881 405.098894 382.010363
+17 9728.0 585.142883 408.524944 383.369452
+18 10240.0 564.965524 410.626555 382.803739
+19 10752.0 546.133312 411.559798 380.601764
+20 11264.0 531.634232 396.969169 367.804077
+21 11776.0 520.486200 410.492372 377.587162
+22 12288.0 513.336807 413.911572 383.251457
 23 12800.0 504.433489 410.420828 377.163903
-24 13312.0 494.180982 406.473303 377.645399
-25 13824.0 482.934503 410.359948 378.739711
-26 14336.0 471.967074 398.222208 373.576536
-27 14848.0 461.297068 404.027214 374.712936
-28 15360.0 454.269882 406.887417 378.092307
-29 15872.0 447.098578 408.940410 377.343238
+24 13312.0 494.180982 408.030638 376.976995
+25 13824.0 481.882350 412.656711 379.389355
+26 14336.0 471.967074 398.914774 371.158581
+27 14848.0 461.297068 406.794504 375.304904
+28 15360.0 454.269882 406.887417 377.511515
+29 15872.0 447.098578 408.940410 376.225175
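
For orientation, the three columns above report layer-norm bandwidth (GB/s) for Triton's kernel, stock PyTorch, and NVIDIA Apex's fused kernel. The two baselines look roughly like this (a sketch; the apex import only resolves when that package is installed):

    import torch

    M, N = 4096, 8192
    x = torch.randn(M, N, device='cuda', dtype=torch.float16)
    w_shape = (N,)
    weight = torch.rand(w_shape, device='cuda', dtype=torch.float16)
    bias = torch.rand(w_shape, device='cuda', dtype=torch.float16)

    # "Torch" column: the built-in functional layer_norm
    y_torch = torch.nn.functional.layer_norm(x, w_shape, weight, bias, eps=1e-5)

    # "Apex" column: NVIDIA's fused implementation, when available
    try:
        import apex
        fused = apex.normalization.FusedLayerNorm(w_shape).to(device='cuda', dtype=torch.float16)
        y_apex = fused(x)
    except ImportError:
        y_apex = None  # the benchmark simply skips this column without apex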
@@ -393,7 +393,7 @@ Layer Normalization
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 5 minutes 33.832 seconds)
+**Total running time of the script:** ( 5 minutes 39.791 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:
@@ -385,7 +385,7 @@ This is a Triton implementation of the Flash Attention algorithm
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 0 minutes 0.072 seconds)
+**Total running time of the script:** ( 0 minutes 0.073 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_06-fused-attention.py:
@@ -152,7 +152,7 @@ We can also customize the libdevice library path by passing the path to the `lib
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 0 minutes 0.251 seconds)
+**Total running time of the script:** ( 0 minutes 0.010 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_07-libdevice-function.py:
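
The 07-libdevice tutorial referenced in the hunk header computes `asin` through CUDA's libdevice bitcode and lets you point Triton at a non-default copy of that library via `extern_libs`. A sketch of both (the kernel shape follows the tutorial; the libdevice path below is the usual CUDA install location and is an assumption about the local system):

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def asin_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.libdevice.asin(x)  # resolved against the libdevice bitcode
        tl.store(y_ptr + offsets, y, mask=mask)

    x = torch.rand(98432, device='cuda')
    y = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
    asin_kernel[grid](x, y, x.numel(), BLOCK_SIZE=1024,
                      extern_libs={'libdevice': '/usr/local/cuda/nvvm/libdevice/libdevice.10.bc'})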
@@ -5,20 +5,20 @@
 
 Computation times
 =================
-**17:45.492** total execution time for **getting-started_tutorials** files:
+**17:40.655** total execution time for **getting-started_tutorials** files:
 
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 07:00.527 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:43.164 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:33.832 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:39.791 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:30.017 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:32.151 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:40.502 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:45.454 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.292 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``) | 00:00.073 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``) | 00:00.251 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.012 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``) | 00:00.072 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``) | 00:00.010 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
@@ -337,12 +337,12 @@ for different problem sizes.</p>
 10 4194304.0 780.190482 780.190482
 11 8388608.0 812.429770 812.429770
 12 16777216.0 833.084721 833.084721
-13 33554432.0 842.004273 843.811163
+13 33554432.0 842.004273 842.004273
 14 67108864.0 847.448255 848.362445
 15 134217728.0 849.737435 850.656574
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 40.502 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 45.454 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>
@@ -371,17 +371,17 @@ We will then compare its performance against (1) <code class="code docutils lite
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
 N Triton Torch (native) Torch (jit)
-0 256.0 546.133347 546.133347 186.181817
+0 256.0 546.133347 546.133347 188.321838
 1 384.0 614.400016 585.142862 151.703707
-2 512.0 655.360017 606.814814 154.566038
-3 640.0 706.206879 640.000002 160.000000
+2 512.0 655.360017 606.814814 156.038096
+3 640.0 706.206879 640.000002 158.759699
 4 768.0 722.823517 664.216187 162.754967
 .. ... ... ... ...
-93 12160.0 812.359066 406.179533 199.140227
-94 12288.0 812.429770 415.661740 199.399583
-95 12416.0 812.498981 411.296057 198.954424
-96 12544.0 810.925276 412.971190 199.209928
-97 12672.0 811.007961 412.097543 199.264875
+93 12160.0 812.359066 406.179533 198.834951
+94 12288.0 812.429770 415.661740 199.197579
+95 12416.0 812.498981 412.149375 198.854847
+96 12544.0 810.925276 412.546756 199.012395
+97 12672.0 811.007961 412.097543 199.167004
 
 [98 rows x 4 columns]
 </pre></div>
@@ -394,7 +394,7 @@ We will then compare its performance against (1) <code class="code docutils lite
 Note however that the PyTorch <cite>softmax</cite> operation is more general and will work on tensors of any shape.</p></li>
 </ul>
 </div></blockquote>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 30.017 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 32.151 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>
@@ -567,42 +567,42 @@ torch_output=tensor([[ 1.1045, -36.9688, 31.4688, ..., -11.3906, 24.4531, -3
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
-0 256.0 2.730667 ... 2.978909 2.978909
-1 384.0 7.372800 ... 8.507077 7.899428
+0 256.0 2.978909 ... 2.978909 2.978909
+1 384.0 7.372800 ... 8.507077 8.507077
 2 512.0 14.563555 ... 16.384000 16.384000
 3 640.0 22.260869 ... 24.380953 24.380953
 4 768.0 32.768000 ... 35.389441 34.028308
 5 896.0 39.025776 ... 40.140799 39.025776
 6 1024.0 49.932191 ... 53.773130 52.428801
-7 1152.0 45.242181 ... 47.775747 47.396572
+7 1152.0 45.242181 ... 48.161033 47.396572
 8 1280.0 51.200001 ... 57.690139 57.690139
-9 1408.0 64.138541 ... 68.147202 67.305878
+9 1408.0 64.138541 ... 69.009825 68.147202
 10 1536.0 80.430545 ... 81.355034 79.526831
 11 1664.0 62.929456 ... 63.372618 62.492442
 12 1792.0 72.512412 ... 73.460287 59.467852
-13 1920.0 68.776119 ... 71.257735 71.257735
+13 1920.0 69.120002 ... 71.626943 70.892307
 14 2048.0 73.908442 ... 78.398206 77.314362
-15 2176.0 83.155572 ... 87.876193 86.367588
-16 2304.0 68.446623 ... 77.810656 77.307030
-17 2432.0 71.305746 ... 86.711310 83.864074
-18 2560.0 78.019048 ... 82.747477 81.920002
-19 2688.0 83.737433 ... 90.316801 89.044730
-20 2816.0 79.587973 ... 83.312714 83.233226
-21 2944.0 81.967162 ... 82.784108 82.784108
-22 3072.0 78.534123 ... 89.170242 85.792579
-23 3200.0 79.701121 ... 96.822991 95.096582
-24 3328.0 82.843841 ... 85.096096 81.530349
-25 3456.0 80.945348 ... 89.579522 81.352463
-26 3584.0 87.466332 ... 97.416461 98.699661
-27 3712.0 82.559787 ... 88.955779 88.015279
-28 3840.0 83.497171 ... 89.043476 89.912191
-29 3968.0 87.976885 ... 87.315873 90.121079
-30 4096.0 87.552332 ... 85.460934 91.304576
+15 2176.0 83.155572 ... 87.876193 85.998493
+16 2304.0 68.446623 ... 78.064941 76.809875
+17 2432.0 71.305746 ... 85.393507 83.614477
+18 2560.0 78.019048 ... 82.022525 81.512437
+19 2688.0 83.737433 ... 90.748936 89.676257
+20 2816.0 80.173175 ... 84.035084 80.099554
+21 2944.0 82.921853 ... 83.337844 82.102191
+22 3072.0 81.825298 ... 88.890270 88.060814
+23 3200.0 83.989503 ... 97.116842 93.704243
+24 3328.0 83.226931 ... 85.398926 84.397770
+25 3456.0 82.099354 ... 91.615417 91.097818
+26 3584.0 85.715344 ... 90.367227 91.563533
+27 3712.0 85.896254 ... 88.797643 83.456425
+28 3840.0 85.005380 ... 93.523887 84.485870
+29 3968.0 92.512459 ... 84.679120 91.266964
+30 4096.0 86.480498 ... 86.536250 92.214171
 
 [31 rows x 5 columns]
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 7 minutes 0.527 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes 43.164 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>
@@ -374,7 +374,7 @@ to explore the <cite>triton/language/random</cite> folder!</p>
 <dd><p>Nitish Srivastava and Geoffrey Hinton and Alex Krizhevsky and Ilya Sutskever and Ruslan Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, JMLR 2014</p>
 </dd>
 </dl>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.292 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.012 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-04-low-memory-dropout-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/c9aed78977a4c05741d675a38dde3d7d/04-low-memory-dropout.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">04-low-memory-dropout.py</span></code></a></p>
@@ -198,34 +198,34 @@ to download the full example code</p>
 N Triton Torch Apex
 0 1024.0 585.142849 277.694907 481.882344
 1 1536.0 630.153868 323.368435 511.999982
-2 2048.0 682.666643 337.814445 520.126988
+2 2048.0 668.734716 337.814445 520.126988
 3 2560.0 694.237267 362.477870 518.481028
-4 3072.0 712.347810 378.092307 501.551037
-5 3584.0 725.873439 384.859062 458.751978
-6 4096.0 728.177767 381.023256 455.111095
+4 3072.0 702.171410 378.092307 501.551037
+5 3584.0 725.873439 384.859062 451.527536
+6 4096.0 728.177767 383.251446 445.823133
 7 4608.0 670.254540 396.387087 426.173427
-8 5120.0 688.403381 397.669909 422.268057
-9 5632.0 704.000002 396.969169 413.357796
-10 6144.0 697.191505 402.885254 409.600010
-11 6656.0 705.271522 400.360920 400.360920
-12 7168.0 690.891575 392.767108 386.154893
-13 7680.0 678.895043 392.587863 386.415087
-14 8192.0 636.271854 391.259714 373.424507
-15 8704.0 627.315309 390.095225 380.502740
-16 9216.0 606.814809 406.214877 383.999986
-17 9728.0 587.350922 408.524944 382.427505
-18 10240.0 566.920437 408.578556 382.803739
-19 10752.0 546.133312 410.577576 380.601764
-20 11264.0 533.207081 396.969169 371.595879
-21 11776.0 520.486200 409.599991 378.345375
-22 12288.0 514.680630 413.042029 383.251457
+8 5120.0 688.403381 397.669909 424.455959
+9 5632.0 698.542675 396.969169 411.470331
+10 6144.0 702.171410 404.543206 411.313806
+11 6656.0 700.631610 400.360920 400.360920
+12 7168.0 686.754468 382.293315 384.859062
+13 7680.0 678.895043 393.846167 386.415087
+14 8192.0 642.509816 392.431125 374.491442
+15 8704.0 624.502255 391.191007 379.465939
+16 9216.0 604.327881 405.098894 382.010363
+17 9728.0 585.142883 408.524944 383.369452
+18 10240.0 564.965524 410.626555 382.803739
+19 10752.0 546.133312 411.559798 380.601764
+20 11264.0 531.634232 396.969169 367.804077
+21 11776.0 520.486200 410.492372 377.587162
+22 12288.0 513.336807 413.911572 383.251457
 23 12800.0 504.433489 410.420828 377.163903
-24 13312.0 494.180982 406.473303 377.645399
-25 13824.0 482.934503 410.359948 378.739711
-26 14336.0 471.967074 398.222208 373.576536
-27 14848.0 461.297068 404.027214 374.712936
-28 15360.0 454.269882 406.887417 378.092307
-29 15872.0 447.098578 408.940410 377.343238
+24 13312.0 494.180982 408.030638 376.976995
+25 13824.0 481.882350 412.656711 379.389355
+26 14336.0 471.967074 398.914774 371.158581
+27 14848.0 461.297068 406.794504 375.304904
+28 15360.0 454.269882 406.887417 377.511515
+29 15872.0 447.098578 408.940410 376.225175
 </pre></div>
 </div>
 <div class="line-block">
@@ -543,7 +543,7 @@ to download the full example code</p>
 <span class="n">bench_layer_norm</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">save_path</span><span class="o">=</span><span class="s1">'.'</span><span class="p">,</span> <span class="n">print_data</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 33.832 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 39.791 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-05-layer-norm-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/935c0dd0fbeb4b2e69588471cbb2d4b2/05-layer-norm.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">05-layer-norm.py</span></code></a></p>
@@ -543,7 +543,7 @@ to download the full example code</p>
 <span class="c1"># bench_flash_attention.run(save_path='.', print_data=True)</span>
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.072 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.073 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-06-fused-attention-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/54a35f6ec55f9746935b9566fb6bb1df/06-fused-attention.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">06-fused-attention.py</span></code></a></p>
@@ -276,7 +276,7 @@ tensor([0.4105, 0.5430, 0.0249, ..., 0.0424, 0.5351, 0.8149], device='cuda:
 The maximum difference between torch and triton is 2.384185791015625e-07
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.251 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.010 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-07-libdevice-function-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/3ff29f967ace7985da24aab10352fc76/07-libdevice-function.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">07-libdevice-function.py</span></code></a></p>
@@ -174,7 +174,7 @@
 <div class="section" id="computation-times">
 <span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>17:45.492</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
+<p><strong>17:40.655</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -183,31 +183,31 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
-<td><p>07:00.527</p></td>
+<td><p>06:43.164</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="05-layer-norm.html#sphx-glr-getting-started-tutorials-05-layer-norm-py"><span class="std std-ref">Layer Normalization</span></a> (<code class="docutils literal notranslate"><span class="pre">05-layer-norm.py</span></code>)</p></td>
-<td><p>05:33.832</p></td>
+<td><p>05:39.791</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
-<td><p>03:30.017</p></td>
+<td><p>03:32.151</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>
-<td><p>01:40.502</p></td>
-<td><p>0.0 MB</p></td>
-</tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
-<td><p>00:00.292</p></td>
-<td><p>0.0 MB</p></td>
-</tr>
-<tr class="row-even"><td><p><a class="reference internal" href="07-libdevice-function.html#sphx-glr-getting-started-tutorials-07-libdevice-function-py"><span class="std std-ref">Libdevice function</span></a> (<code class="docutils literal notranslate"><span class="pre">07-libdevice-function.py</span></code>)</p></td>
-<td><p>00:00.251</p></td>
+<td><p>01:45.454</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="06-fused-attention.html#sphx-glr-getting-started-tutorials-06-fused-attention-py"><span class="std std-ref">Fused Attention</span></a> (<code class="docutils literal notranslate"><span class="pre">06-fused-attention.py</span></code>)</p></td>
-<td><p>00:00.072</p></td>
+<td><p>00:00.073</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
+<tr class="row-even"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
+<td><p>00:00.012</p></td>
+<td><p>0.0 MB</p></td>
+</tr>
+<tr class="row-odd"><td><p><a class="reference internal" href="07-libdevice-function.html#sphx-glr-getting-started-tutorials-07-libdevice-function-py"><span class="std std-ref">Libdevice function</span></a> (<code class="docutils literal notranslate"><span class="pre">07-libdevice-function.py</span></code>)</p></td>
+<td><p>00:00.010</p></td>
+<td><p>0.0 MB</p></td>
+</tr>
 </tbody>
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: e0c0af907e3f828106b90ac86849905a
+config: ac23329f74fba034f447590fbba37166
 tags: 645f666f9bcd5a90fca523b33c5a78b7