commit a81d78b680 (parent f79b7c6f03)
Author: Philippe Tillet
Date: 2022-09-12 00:51:39 +00:00

[GH-PAGES] Updated website

165 changed files with 272 additions and 272 deletions


@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 1b584902668bd40609d4cad005c16910
+config: c2b2b52c1f772d41550667b6c2b5be3a
tags: 645f666f9bcd5a90fca523b33c5a78b7

(Binary files not shown: Sphinx doctree caches and eight regenerated benchmark plot images; plot file sizes are essentially unchanged, e.g. 16 KiB -> 15 KiB and 59 KiB -> 60 KiB.)


@@ -234,7 +234,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
size Triton Torch
0 4096.0 9.600000 9.600000
1 8192.0 19.200000 19.200000
-2 16384.0 38.400001 38.400001
+2 16384.0 31.999999 38.400001
3 32768.0 76.800002 76.800002
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
@@ -245,7 +245,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
10 4194304.0 780.190482 780.190482
11 8388608.0 812.429770 812.429770
12 16777216.0 833.084721 833.084721
-13 33554432.0 842.004273 842.004273
+13 33554432.0 842.004273 843.811163
14 67108864.0 847.448255 848.362445
15 134217728.0 849.737435 850.656574
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 1 minutes 35.051 seconds)
+**Total running time of the script:** ( 1 minutes 46.118 seconds)
.. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
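
The vector-add table above is emitted by the tutorial's Sphinx-Gallery benchmark harness. As a minimal sketch of the pattern the hunk header refers to (a `triton.testing.perf_report`-decorated function run with `print_data=True`), assuming the tutorial's `add` kernel wrapper is in scope and the 2022-era `triton.testing` API:

    import torch
    import triton

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                        # x-axis: number of elements
            x_vals=[2 ** i for i in range(12, 28)],  # 4096 ... 134217728, as in the table
            x_log=True,
            line_arg='provider',                     # one plotted line per provider
            line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        if provider == 'torch':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: x + y)
        else:
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: add(x, y))  # `add` assumed defined above
        # 12 bytes move per element: two fp32 reads plus one fp32 write
        gbps = lambda ms: 12 * size / ms * 1e-6
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    benchmark.run(print_data=True, show_plots=True)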


@@ -278,16 +278,16 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
softmax-performance:
N Triton Torch (native) Torch (jit)
-0 256.0 546.133347 546.133347 186.181817
+0 256.0 546.133347 546.133347 190.511628
1 384.0 614.400016 585.142862 153.600004
2 512.0 655.360017 606.814814 154.566038
-3 640.0 706.206879 640.000002 158.759699
+3 640.0 706.206879 640.000002 160.000000
4 768.0 722.823517 664.216187 162.754967
.. ... ... ... ...
-93 12160.0 812.359066 406.179533 198.936606
-94 12288.0 812.429770 415.661740 199.197579
-95 12416.0 812.498981 412.149375 198.854847
-96 12544.0 810.925276 412.971190 199.111113
+93 12160.0 812.359066 406.603966 198.834951
+94 12288.0 812.429770 415.661740 199.096718
+95 12416.0 812.498981 412.149375 198.755369
+96 12544.0 810.925276 412.971190 199.012395
97 12672.0 811.007961 412.097543 199.167004
[98 rows x 4 columns]
@@ -306,7 +306,7 @@ In the above plot, we can see that:
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 3 minutes 28.987 seconds)
+**Total running time of the script:** ( 3 minutes 31.735 seconds)
.. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:


@@ -459,37 +459,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
-0 256.0 2.978909 ... 2.978909 2.978909
-1 384.0 7.372800 ... 7.899428 8.507077
+0 256.0 2.978909 ... 3.276800 2.978909
+1 384.0 7.372800 ... 8.507077 8.507077
2 512.0 14.563555 ... 16.384000 16.384000
3 640.0 22.260869 ... 24.380953 24.380953
4 768.0 32.768000 ... 35.389441 34.028308
5 896.0 39.025776 ... 40.140799 39.025776
-6 1024.0 49.932191 ... 53.773130 52.428801
-7 1152.0 44.566925 ... 47.396572 47.396572
-8 1280.0 51.200001 ... 57.690139 56.888887
+6 1024.0 51.150050 ... 53.773130 52.428801
+7 1152.0 45.242181 ... 47.396572 47.396572
+8 1280.0 51.200001 ... 57.690139 57.690139
9 1408.0 64.138541 ... 68.147202 67.305878
-10 1536.0 80.430545 ... 81.355034 78.643199
+10 1536.0 80.430545 ... 81.355034 79.526831
11 1664.0 62.929456 ... 63.372618 62.492442
12 1792.0 72.512412 ... 73.460287 59.467852
-13 1920.0 69.120002 ... 71.257735 71.257735
+13 1920.0 69.120002 ... 71.626943 71.257735
14 2048.0 73.908442 ... 78.398206 77.314362
-15 2176.0 83.155572 ... 87.876193 86.367588
+15 2176.0 83.500614 ... 87.115360 85.998493
16 2304.0 68.446623 ... 78.064941 77.307030
-17 2432.0 71.305746 ... 86.179335 75.118889
-18 2560.0 77.833728 ... 82.747477 81.715711
-19 2688.0 83.277839 ... 90.532356 88.628636
-20 2816.0 79.011245 ... 83.712490 83.074685
-21 2944.0 82.102191 ... 83.337844 82.102191
-22 3072.0 81.825298 ... 88.890270 88.060814
-23 3200.0 83.660130 ... 96.096095 92.888243
-24 3328.0 83.226931 ... 85.500351 84.596116
-25 3456.0 81.108217 ... 91.771848 85.676480
-26 3584.0 85.633710 ... 97.416461 97.734120
-27 3712.0 85.455380 ... 89.194055 87.783251
-28 3840.0 80.284573 ... 86.197974 89.912191
-29 3968.0 88.873953 ... 91.747320 84.328915
-30 4096.0 92.820009 ... 83.055527 89.958266
+17 2432.0 71.305746 ... 85.393507 75.118889
+18 2560.0 77.833728 ... 82.956960 81.715711
+19 2688.0 83.922689 ... 90.748936 89.888756
+20 2816.0 79.879498 ... 84.360174 83.873477
+21 2944.0 82.373605 ... 83.337844 82.373605
+22 3072.0 82.540970 ... 89.593522 88.750943
+23 3200.0 84.544253 ... 96.603776 94.604578
+24 3328.0 82.181847 ... 84.795401 84.397770
+25 3456.0 81.435930 ... 91.928814 91.097818
+26 3584.0 86.665439 ... 94.349836 94.548254
+27 3712.0 85.822459 ... 86.641231 87.860458
+28 3840.0 83.277102 ... 93.484358 85.730230
+29 3968.0 92.512459 ... 80.917732 77.648067
+30 4096.0 87.552332 ... 93.858555 89.299883
[31 rows x 5 columns]
@@ -499,7 +499,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 6 minutes 31.599 seconds)
+**Total running time of the script:** ( 7 minutes 21.249 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
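
The matmul table compares the tutorial's Triton kernel against cuBLAS, which torch.matmul dispatches to for fp16 CUDA tensors. A rough sketch of that comparison, assuming the tutorial's `matmul` wrapper is defined and the same 2022-era harness:

    import torch
    import triton

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['M', 'N', 'K'],
            x_vals=[128 * i for i in range(2, 33)],  # 256 ... 4096, matching the 31 rows above
            line_arg='provider',
            line_vals=['cublas', 'triton'],
            line_names=['cuBLAS', 'Triton'],
            ylabel='TFLOPS',
            plot_name='matmul-performance',
            args={},
        )
    )
    def benchmark(M, N, K, provider):
        a = torch.randn((M, K), device='cuda', dtype=torch.float16)
        b = torch.randn((K, N), device='cuda', dtype=torch.float16)
        if provider == 'cublas':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.matmul(a, b))
        else:
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: matmul(a, b))  # `matmul` assumed defined above
        # one matmul performs 2*M*N*K floating-point operations
        perf = lambda ms: 2 * M * N * K * 1e-12 / (ms * 1e-3)
        return perf(ms), perf(max_ms), perf(min_ms)

    benchmark.run(print_data=True, show_plots=True)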


@@ -240,7 +240,7 @@ References
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.012 seconds)
+**Total running time of the script:** ( 0 minutes 0.280 seconds)
.. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:


@@ -38,36 +38,36 @@ Layer Normalization
layer-norm:
N Triton Torch Apex
-0 1024.0 585.142849 277.694907 468.114273
+0 1024.0 585.142849 277.694907 481.882344
1 1536.0 630.153868 323.368435 511.999982
-2 2048.0 682.666643 334.367358 520.126988
-3 2560.0 694.237267 365.714281 512.000013
+2 2048.0 668.734716 334.367358 520.126988
+3 2560.0 694.237267 365.714281 518.481028
4 3072.0 712.347810 378.092307 496.484863
-5 3584.0 725.873439 384.859062 451.527536
-6 4096.0 728.177767 381.023256 455.111095
-7 4608.0 670.254540 394.267384 426.173427
-8 5120.0 688.403381 397.669909 420.102563
-9 5632.0 704.000002 395.228063 415.262685
-10 6144.0 697.191505 402.885254 409.600010
+5 3584.0 725.873439 384.859062 455.111115
+6 4096.0 728.177767 381.023256 448.876695
+7 4608.0 670.254540 396.387087 426.173427
+8 5120.0 688.403381 397.669909 426.666652
+9 5632.0 698.542675 396.969169 413.357796
+10 6144.0 702.171410 402.885254 411.313806
11 6656.0 700.631610 400.360920 400.360920
12 7168.0 690.891575 396.844306 387.459443
-13 7680.0 678.895043 393.846167 386.415087
-14 8192.0 636.271854 393.609605 371.308771
-15 8704.0 624.502255 389.005597 380.502740
-16 9216.0 604.327881 407.337026 383.999986
-17 9728.0 585.142883 409.599987 383.369452
+13 7680.0 678.895043 392.587863 387.634072
+14 8192.0 633.198054 393.609605 371.308771
+15 8704.0 627.315309 389.005597 380.502740
+16 9216.0 606.814809 407.337026 383.999986
+17 9728.0 587.350922 409.599987 383.369452
18 10240.0 564.965524 408.578556 382.803739
-19 10752.0 546.133312 411.559798 381.445676
+19 10752.0 547.872604 411.559798 381.445676
20 11264.0 533.207081 406.826188 373.134567
21 11776.0 520.486200 409.599991 377.587162
-22 12288.0 516.031509 413.911572 383.251457
+22 12288.0 514.680630 414.784810 383.251457
23 12800.0 504.433489 410.420828 376.470582
24 13312.0 494.180982 405.699062 376.976995
25 13824.0 482.934503 411.888257 379.389355
26 14336.0 471.967074 406.695045 374.185964
27 14848.0 461.297068 408.192434 375.304904
28 15360.0 454.269882 406.214870 378.092307
-29 15872.0 447.887117 406.974373 376.225175
+29 15872.0 447.098578 406.974373 376.225175
@@ -393,7 +393,7 @@ Layer Normalization
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 5 minutes 32.190 seconds)
+**Total running time of the script:** ( 5 minutes 36.144 seconds)
.. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:


@@ -390,7 +390,7 @@ This is a Triton implementation of the Flash Attention algorithm
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.075 seconds)
+**Total running time of the script:** ( 0 minutes 0.072 seconds)
.. _sphx_glr_download_getting-started_tutorials_06-fused-attention.py:


@@ -152,7 +152,7 @@ We can also customize the libdevice library path by passing the path to the `lib
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.012 seconds)
+**Total running time of the script:** ( 0 minutes 0.273 seconds)
.. _sphx_glr_download_getting-started_tutorials_07-libdevice-function.py:
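
The hunk header above references customizing the libdevice library path. A minimal sketch of that mechanism as it looked in this 2022 snapshot (`tl.libdevice` plus the `extern_libs` launch argument; the bitcode path below is an example install location, not a guaranteed one):

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def asin_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.libdevice.asin(x)  # resolved against NVIDIA's libdevice bitcode
        tl.store(y_ptr + offsets, y, mask=mask)

    x = torch.rand(98432, device='cuda')
    y = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
    # Example path; point this at wherever libdevice.10.bc lives in your CUDA install.
    extern_libs = {'libdevice': '/usr/local/cuda/nvvm/libdevice/libdevice.10.bc'}
    asin_kernel[grid](x, y, x.numel(), BLOCK_SIZE=1024, extern_libs=extern_libs)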


@@ -5,20 +5,20 @@
Computation times
=================
-**17:07.926** total execution time for **getting-started_tutorials** files:
+**18:15.870** total execution time for **getting-started_tutorials** files:
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:31.599 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 07:21.249 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:32.190 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:36.144 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:28.987 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:31.735 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:35.051 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:46.118 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``) | 00:00.075 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.280 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.012 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``) | 00:00.273 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``) | 00:00.012 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``) | 00:00.072 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+


@@ -326,7 +326,7 @@ for different problem sizes.</p>
size Triton Torch
0 4096.0 9.600000 9.600000
1 8192.0 19.200000 19.200000
-2 16384.0 38.400001 38.400001
+2 16384.0 31.999999 38.400001
3 32768.0 76.800002 76.800002
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
@@ -337,12 +337,12 @@ for different problem sizes.</p>
10 4194304.0 780.190482 780.190482
11 8388608.0 812.429770 812.429770
12 16777216.0 833.084721 833.084721
-13 33554432.0 842.004273 842.004273
+13 33554432.0 842.004273 843.811163
14 67108864.0 847.448255 848.362445
15 134217728.0 849.737435 850.656574
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 35.051 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 46.118 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>


@@ -371,16 +371,16 @@ We will then compare its performance against (1) <code class="code docutils lite
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
N Triton Torch (native) Torch (jit)
-0 256.0 546.133347 546.133347 186.181817
+0 256.0 546.133347 546.133347 190.511628
1 384.0 614.400016 585.142862 153.600004
2 512.0 655.360017 606.814814 154.566038
-3 640.0 706.206879 640.000002 158.759699
+3 640.0 706.206879 640.000002 160.000000
4 768.0 722.823517 664.216187 162.754967
.. ... ... ... ...
-93 12160.0 812.359066 406.179533 198.936606
-94 12288.0 812.429770 415.661740 199.197579
-95 12416.0 812.498981 412.149375 198.854847
-96 12544.0 810.925276 412.971190 199.111113
+93 12160.0 812.359066 406.603966 198.834951
+94 12288.0 812.429770 415.661740 199.096718
+95 12416.0 812.498981 412.149375 198.755369
+96 12544.0 810.925276 412.971190 199.012395
97 12672.0 811.007961 412.097543 199.167004
[98 rows x 4 columns]
@@ -394,7 +394,7 @@ We will then compare its performance against (1) <code class="code docutils lite
Note however that the PyTorch <cite>softmax</cite> operation is more general and will work on tensors of any shape.</p></li>
</ul>
</div></blockquote>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 28.987 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 31.735 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>


@@ -567,42 +567,42 @@ torch_output=tensor([[ 1.1045, -36.9688, 31.4688, ..., -11.3906, 24.4531, -3
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
-0 256.0 2.978909 ... 2.978909 2.978909
-1 384.0 7.372800 ... 7.899428 8.507077
+0 256.0 2.978909 ... 3.276800 2.978909
+1 384.0 7.372800 ... 8.507077 8.507077
2 512.0 14.563555 ... 16.384000 16.384000
3 640.0 22.260869 ... 24.380953 24.380953
4 768.0 32.768000 ... 35.389441 34.028308
5 896.0 39.025776 ... 40.140799 39.025776
-6 1024.0 49.932191 ... 53.773130 52.428801
-7 1152.0 44.566925 ... 47.396572 47.396572
-8 1280.0 51.200001 ... 57.690139 56.888887
+6 1024.0 51.150050 ... 53.773130 52.428801
+7 1152.0 45.242181 ... 47.396572 47.396572
+8 1280.0 51.200001 ... 57.690139 57.690139
9 1408.0 64.138541 ... 68.147202 67.305878
-10 1536.0 80.430545 ... 81.355034 78.643199
+10 1536.0 80.430545 ... 81.355034 79.526831
11 1664.0 62.929456 ... 63.372618 62.492442
12 1792.0 72.512412 ... 73.460287 59.467852
-13 1920.0 69.120002 ... 71.257735 71.257735
+13 1920.0 69.120002 ... 71.626943 71.257735
14 2048.0 73.908442 ... 78.398206 77.314362
-15 2176.0 83.155572 ... 87.876193 86.367588
+15 2176.0 83.500614 ... 87.115360 85.998493
16 2304.0 68.446623 ... 78.064941 77.307030
-17 2432.0 71.305746 ... 86.179335 75.118889
-18 2560.0 77.833728 ... 82.747477 81.715711
-19 2688.0 83.277839 ... 90.532356 88.628636
-20 2816.0 79.011245 ... 83.712490 83.074685
-21 2944.0 82.102191 ... 83.337844 82.102191
-22 3072.0 81.825298 ... 88.890270 88.060814
-23 3200.0 83.660130 ... 96.096095 92.888243
-24 3328.0 83.226931 ... 85.500351 84.596116
-25 3456.0 81.108217 ... 91.771848 85.676480
-26 3584.0 85.633710 ... 97.416461 97.734120
-27 3712.0 85.455380 ... 89.194055 87.783251
-28 3840.0 80.284573 ... 86.197974 89.912191
-29 3968.0 88.873953 ... 91.747320 84.328915
-30 4096.0 92.820009 ... 83.055527 89.958266
+17 2432.0 71.305746 ... 85.393507 75.118889
+18 2560.0 77.833728 ... 82.956960 81.715711
+19 2688.0 83.922689 ... 90.748936 89.888756
+20 2816.0 79.879498 ... 84.360174 83.873477
+21 2944.0 82.373605 ... 83.337844 82.373605
+22 3072.0 82.540970 ... 89.593522 88.750943
+23 3200.0 84.544253 ... 96.603776 94.604578
+24 3328.0 82.181847 ... 84.795401 84.397770
+25 3456.0 81.435930 ... 91.928814 91.097818
+26 3584.0 86.665439 ... 94.349836 94.548254
+27 3712.0 85.822459 ... 86.641231 87.860458
+28 3840.0 83.277102 ... 93.484358 85.730230
+29 3968.0 92.512459 ... 80.917732 77.648067
+30 4096.0 87.552332 ... 93.858555 89.299883
[31 rows x 5 columns]
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes 31.599 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 7 minutes 21.249 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>


@@ -374,7 +374,7 @@ to explore the <cite>triton/language/random</cite> folder!</p>
<dd><p>Nitish Srivastava and Geoffrey Hinton and Alex Krizhevsky and Ilya Sutskever and Ruslan Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, JMLR 2014</p>
</dd>
</dl>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.012 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.280 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-04-low-memory-dropout-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/c9aed78977a4c05741d675a38dde3d7d/04-low-memory-dropout.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">04-low-memory-dropout.py</span></code></a></p>


@@ -196,36 +196,36 @@ to download the full example code</p>
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>layer-norm:
N Triton Torch Apex
-0 1024.0 585.142849 277.694907 468.114273
+0 1024.0 585.142849 277.694907 481.882344
1 1536.0 630.153868 323.368435 511.999982
-2 2048.0 682.666643 334.367358 520.126988
-3 2560.0 694.237267 365.714281 512.000013
+2 2048.0 668.734716 334.367358 520.126988
+3 2560.0 694.237267 365.714281 518.481028
4 3072.0 712.347810 378.092307 496.484863
-5 3584.0 725.873439 384.859062 451.527536
-6 4096.0 728.177767 381.023256 455.111095
-7 4608.0 670.254540 394.267384 426.173427
-8 5120.0 688.403381 397.669909 420.102563
-9 5632.0 704.000002 395.228063 415.262685
-10 6144.0 697.191505 402.885254 409.600010
+5 3584.0 725.873439 384.859062 455.111115
+6 4096.0 728.177767 381.023256 448.876695
+7 4608.0 670.254540 396.387087 426.173427
+8 5120.0 688.403381 397.669909 426.666652
+9 5632.0 698.542675 396.969169 413.357796
+10 6144.0 702.171410 402.885254 411.313806
11 6656.0 700.631610 400.360920 400.360920
12 7168.0 690.891575 396.844306 387.459443
-13 7680.0 678.895043 393.846167 386.415087
-14 8192.0 636.271854 393.609605 371.308771
-15 8704.0 624.502255 389.005597 380.502740
-16 9216.0 604.327881 407.337026 383.999986
-17 9728.0 585.142883 409.599987 383.369452
+13 7680.0 678.895043 392.587863 387.634072
+14 8192.0 633.198054 393.609605 371.308771
+15 8704.0 627.315309 389.005597 380.502740
+16 9216.0 606.814809 407.337026 383.999986
+17 9728.0 587.350922 409.599987 383.369452
18 10240.0 564.965524 408.578556 382.803739
-19 10752.0 546.133312 411.559798 381.445676
+19 10752.0 547.872604 411.559798 381.445676
20 11264.0 533.207081 406.826188 373.134567
21 11776.0 520.486200 409.599991 377.587162
-22 12288.0 516.031509 413.911572 383.251457
+22 12288.0 514.680630 414.784810 383.251457
23 12800.0 504.433489 410.420828 376.470582
24 13312.0 494.180982 405.699062 376.976995
25 13824.0 482.934503 411.888257 379.389355
26 14336.0 471.967074 406.695045 374.185964
27 14848.0 461.297068 408.192434 375.304904
28 15360.0 454.269882 406.214870 378.092307
-29 15872.0 447.887117 406.974373 376.225175
+29 15872.0 447.098578 406.974373 376.225175
</pre></div>
</div>
<div class="line-block">
@@ -543,7 +543,7 @@ to download the full example code</p>
<span class="n">bench_layer_norm</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">save_path</span><span class="o">=</span><span class="s1">&#39;.&#39;</span><span class="p">,</span> <span class="n">print_data</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 32.190 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 36.144 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-05-layer-norm-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/935c0dd0fbeb4b2e69588471cbb2d4b2/05-layer-norm.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">05-layer-norm.py</span></code></a></p>


@@ -548,7 +548,7 @@ to download the full example code</p>
<span class="c1"># bench_flash_attention.run(save_path=&#39;.&#39;, print_data=True)</span>
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.075 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.072 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-06-fused-attention-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/54a35f6ec55f9746935b9566fb6bb1df/06-fused-attention.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">06-fused-attention.py</span></code></a></p>


@@ -276,7 +276,7 @@ tensor([0.4105, 0.5430, 0.0249, ..., 0.0424, 0.5351, 0.8149], device=&#39;cuda:
The maximum difference between torch and triton is 2.384185791015625e-07
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.012 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.273 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-07-libdevice-function-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/3ff29f967ace7985da24aab10352fc76/07-libdevice-function.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">07-libdevice-function.py</span></code></a></p>


@@ -174,7 +174,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline"></a></h1>
-<p><strong>17:07.926</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
+<p><strong>18:15.870</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 85%" />
@@ -183,31 +183,31 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
-<td><p>06:31.599</p></td>
+<td><p>07:21.249</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="05-layer-norm.html#sphx-glr-getting-started-tutorials-05-layer-norm-py"><span class="std std-ref">Layer Normalization</span></a> (<code class="docutils literal notranslate"><span class="pre">05-layer-norm.py</span></code>)</p></td>
-<td><p>05:32.190</p></td>
+<td><p>05:36.144</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
-<td><p>03:28.987</p></td>
+<td><p>03:31.735</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>
-<td><p>01:35.051</p></td>
+<td><p>01:46.118</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
<td><p>00:00.280</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="07-libdevice-function.html#sphx-glr-getting-started-tutorials-07-libdevice-function-py"><span class="std std-ref">Libdevice function</span></a> (<code class="docutils literal notranslate"><span class="pre">07-libdevice-function.py</span></code>)</p></td>
<td><p>00:00.273</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="06-fused-attention.html#sphx-glr-getting-started-tutorials-06-fused-attention-py"><span class="std std-ref">Fused Attention</span></a> (<code class="docutils literal notranslate"><span class="pre">06-fused-attention.py</span></code>)</p></td>
-<td><p>00:00.075</p></td>
-<td><p>0.0 MB</p></td>
-</tr>
-<tr class="row-even"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
-<td><p>00:00.012</p></td>
-<td><p>0.0 MB</p></td>
-</tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="07-libdevice-function.html#sphx-glr-getting-started-tutorials-07-libdevice-function-py"><span class="std std-ref">Libdevice function</span></a> (<code class="docutils literal notranslate"><span class="pre">07-libdevice-function.py</span></code>)</p></td>
-<td><p>00:00.012</p></td>
+<td><p>00:00.072</p></td>
<td><p>0.0 MB</p></td>
</tr>
</tbody>

File diff suppressed because one or more lines are too long


@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 2d5a1488528d834fcf88480396421fec
+config: 7afdc92e6ceb92537c8bf29177c2dbb0
tags: 645f666f9bcd5a90fca523b33c5a78b7

(Two more binary files not shown.)

Some files were not shown because too many files have changed in this diff.