[GH-PAGES] Updated website

Author: Philippe Tillet
Date:   2022-09-10 00:52:29 +00:00
Parent: 8f733c4476
Commit: 4588c0bc46
165 changed files with 284 additions and 284 deletions

View File

@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 5ab336a702bbdecaaa31e7a15d0c8cb1
config: bbf13b0f91abba000f682ef71c0ca565
tags: 645f666f9bcd5a90fca523b33c5a78b7

Binary file not shown.

Binary file not shown.

Binary file not shown (image; before: 24 KiB, after: 24 KiB).

Binary file not shown (image; before: 15 KiB, after: 15 KiB).

Binary file not shown (image; before: 36 KiB, after: 36 KiB).

Binary file not shown (image; before: 23 KiB, after: 23 KiB).

Binary file not shown (image; before: 60 KiB, after: 59 KiB).

Binary file not shown (image; before: 34 KiB, after: 34 KiB).

Binary file not shown (image; before: 35 KiB, after: 36 KiB).

Binary file not shown (image; before: 22 KiB, after: 22 KiB).

View File

@@ -235,7 +235,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
0 4096.0 9.600000 9.600000
1 8192.0 19.200000 19.200000
2 16384.0 38.400001 38.400001
3 32768.0 76.800002 76.800002
3 32768.0 63.999998 63.999998
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
6 262144.0 341.333321 384.000001
@@ -245,7 +245,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
10 4194304.0 780.190482 780.190482
11 8388608.0 812.429770 812.429770
12 16777216.0 833.084721 833.084721
13 33554432.0 842.004273 842.004273
13 33554432.0 842.004273 843.811163
14 67108864.0 847.448255 848.362445
15 134217728.0 849.737435 850.656574
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 1 minutes 49.605 seconds)
**Total running time of the script:** ( 1 minutes 46.048 seconds)
.. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
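
The Triton and Torch columns above are GB/s figures produced by the tutorial's `triton.testing` benchmark harness when `print_data=True` is passed to its `run()` method. A minimal, self-contained sketch of such a harness follows; the sizes, provider names, timing loop, and the eager `x + y` stand-in for the tutorial's Triton `add` kernel are illustrative, not the tutorial's exact code::

    import torch
    import triton.testing

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                        # first column of the table
            x_vals=[2 ** i for i in range(12, 28)],  # 4096 ... 134217728, matching the rows above
            line_arg='provider',
            line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        # Both providers run eager PyTorch in this sketch; the tutorial itself dispatches
        # to its Triton `add` kernel when provider == 'triton' and times with do_bench.
        op = lambda: x + y
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        for _ in range(10):   # warm-up
            op()
        start.record()
        for _ in range(100):
            op()
        end.record()
        torch.cuda.synchronize()
        ms = start.elapsed_time(end) / 100
        # 2 reads + 1 write of float32 per element -> 12 bytes per element
        return 12 * size / ms * 1e-6

    benchmark.run(print_data=True, show_plots=False)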

View File

@@ -279,16 +279,16 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
softmax-performance:
N Triton Torch (native) Torch (jit)
0 256.0 546.133347 546.133347 188.321838
1 384.0 614.400016 558.545450 151.703707
1 384.0 614.400016 585.142862 153.600004
2 512.0 655.360017 606.814814 154.566038
3 640.0 706.206879 640.000002 158.759699
4 768.0 722.823517 664.216187 163.839992
3 640.0 706.206879 640.000002 160.000000
4 768.0 722.823517 664.216187 162.754967
.. ... ... ... ...
93 12160.0 812.359066 405.755985 198.733401
94 12288.0 812.429770 415.222812 198.995960
95 12416.0 812.498981 412.149375 198.655991
96 12544.0 810.925276 412.971190 198.815254
97 12672.0 811.007961 412.097543 198.971549
93 12160.0 812.359066 406.179533 198.936606
94 12288.0 812.429770 415.661740 199.197579
95 12416.0 812.498981 412.149375 198.854847
96 12544.0 810.925276 412.546756 199.111113
97 12672.0 811.007961 412.097543 199.167004
[98 rows x 4 columns]
@@ -306,7 +306,7 @@ In the above plot, we can see that:
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 3 minutes 32.100 seconds)
**Total running time of the script:** ( 3 minutes 27.135 seconds)
.. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
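
For context on the columns updated here: the Torch (jit) column measures a softmax composed from elementary PyTorch ops and compiled with `torch.jit.script`; every intermediate tensor makes a round trip through GPU memory, which is why it trails the fused Triton kernel at every row length. A sketch of that kind of reference implementation (close to, but not verbatim, the tutorial's `naive_softmax`)::

    import torch

    @torch.jit.script
    def naive_softmax(x):
        # row-wise softmax built from basic ops; each step materializes a full
        # intermediate tensor in memory, unlike the single fused Triton kernel
        x_max = x.max(dim=1)[0]
        z = x - x_max[:, None]
        numerator = torch.exp(z)
        denominator = numerator.sum(dim=1)
        return numerator / denominator[:, None]

    x = torch.randn(1823, 781, device='cuda')
    assert torch.allclose(naive_softmax(x), torch.softmax(x, dim=1), atol=1e-6)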

View File

@@ -459,37 +459,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
0 256.0 2.730667 ... 2.978909 2.978909
1 384.0 7.372800 ... 8.507077 8.507077
0 256.0 2.978909 ... 2.978909 3.276800
1 384.0 7.372800 ... 7.899428 7.899428
2 512.0 14.563555 ... 16.384000 15.420235
3 640.0 22.260869 ... 24.380953 24.380953
4 768.0 32.768000 ... 35.389441 34.028308
5 896.0 37.971025 ... 40.140799 39.025776
5 896.0 39.025776 ... 40.140799 39.025776
6 1024.0 49.932191 ... 53.773130 52.428801
7 1152.0 45.242181 ... 48.161033 47.396572
7 1152.0 44.566925 ... 47.396572 47.396572
8 1280.0 51.200001 ... 57.690139 57.690139
9 1408.0 64.138541 ... 69.009825 67.305878
9 1408.0 64.138541 ... 68.147202 67.305878
10 1536.0 80.430545 ... 81.355034 79.526831
11 1664.0 62.929456 ... 63.372618 62.492442
12 1792.0 72.512412 ... 73.460287 59.467852
13 1920.0 69.120002 ... 71.257735 71.257735
14 2048.0 73.584279 ... 78.033565 77.314362
15 2176.0 83.155572 ... 87.494120 85.998493
16 2304.0 68.251065 ... 78.064941 77.307030
17 2432.0 71.305746 ... 86.179335 75.118889
13 1920.0 68.776119 ... 71.626943 71.257735
14 2048.0 73.908442 ... 78.398206 77.314362
15 2176.0 83.155572 ... 87.876193 85.998493
16 2304.0 68.446623 ... 78.064941 77.057651
17 2432.0 71.305746 ... 84.367759 75.320281
18 2560.0 77.833728 ... 82.747477 81.715711
19 2688.0 83.922689 ... 90.966561 88.836198
20 2816.0 79.733474 ... 83.873477 83.074685
21 2944.0 82.509987 ... 83.617504 82.373605
22 3072.0 81.530748 ... 89.735509 88.060814
23 3200.0 84.432717 ... 96.530922 95.380032
24 3328.0 82.843841 ... 85.754970 83.808259
25 3456.0 80.945348 ... 85.133652 81.271743
26 3584.0 87.466332 ... 95.249353 98.699661
27 3712.0 85.896254 ... 81.283434 87.706180
28 3840.0 81.019778 ... 86.332554 89.766237
29 3968.0 85.751184 ... 90.388098 86.114283
30 4096.0 91.867031 ... 83.081233 89.807782
19 2688.0 83.552988 ... 90.966561 89.676257
20 2816.0 79.443003 ... 84.523664 83.873477
21 2944.0 82.509987 ... 83.337844 82.373605
22 3072.0 81.707223 ... 89.310890 89.030036
23 3200.0 84.656085 ... 96.530922 94.674553
24 3328.0 82.369902 ... 85.398926 85.096096
25 3456.0 82.099354 ... 91.824110 90.382926
26 3584.0 85.633710 ... 98.375705 98.591437
27 3712.0 84.230479 ... 87.783251 82.559787
28 3840.0 84.100380 ... 93.563449 84.484863
29 3968.0 92.512459 ... 84.976733 91.540836
30 4096.0 87.495257 ... 88.127204 91.242506
[31 rows x 5 columns]
@@ -499,7 +499,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 7 minutes 11.851 seconds)
**Total running time of the script:** ( 6 minutes 30.888 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
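
The matmul table lists only M; assuming square M = N = K problems and the conventional 2*M*N*K flop count for a GEMM, each TFLOP/s entry encodes a runtime. A small conversion helper, with a spot check against the updated M = 4096 cuBLAS entry::

    def tflops(ms, M, N, K):
        """Convert a runtime in milliseconds to TFLOP/s for an (M, N, K) matmul."""
        return 2 * M * N * K * 1e-12 / (ms * 1e-3)

    def ms_from_tflops(tf, M, N, K):
        """Inverse: runtime in milliseconds implied by a TFLOP/s figure."""
        return 2 * M * N * K * 1e-12 / tf * 1e3

    # 87.495257 TFLOP/s at M = N = K = 4096 corresponds to roughly 1.57 ms per call
    print(ms_from_tflops(87.495257, 4096, 4096, 4096))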

View File

@@ -240,7 +240,7 @@ References
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 0 minutes 0.290 seconds)
**Total running time of the script:** ( 0 minutes 0.012 seconds)
.. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:

View File

@@ -38,36 +38,36 @@ Layer Normalization
layer-norm:
N Triton Torch Apex
0 1024.0 585.142849 277.694907 481.882344
0 1024.0 585.142849 277.694907 468.114273
1 1536.0 630.153868 323.368435 511.999982
2 2048.0 668.734716 337.814445 520.126988
3 2560.0 694.237267 362.477870 512.000013
4 3072.0 702.171410 378.092307 501.551037
5 3584.0 725.873439 384.859062 458.751978
6 4096.0 728.177767 383.251446 451.972420
7 4608.0 670.254540 396.387087 428.651163
2 2048.0 668.734716 334.367358 520.126988
3 2560.0 694.237267 365.714281 512.000013
4 3072.0 712.347810 378.092307 496.484863
5 3584.0 725.873439 384.859062 448.000001
6 4096.0 728.177767 381.023256 445.823133
7 4608.0 670.254540 394.267384 421.302872
8 5120.0 688.403381 397.669909 424.455959
9 5632.0 698.542675 398.725657 411.470331
10 6144.0 697.191505 404.543206 409.600010
11 6656.0 700.631610 400.360920 400.360920
12 7168.0 686.754468 383.571898 383.571898
13 7680.0 682.666656 392.587863 386.415087
14 8192.0 639.375598 392.431125 373.424507
15 8704.0 624.502255 391.191007 380.502740
16 9216.0 604.327881 403.989025 382.010363
17 9728.0 585.142883 409.599987 383.369452
18 10240.0 564.965524 409.600010 382.803739
19 10752.0 546.133312 410.577576 380.601764
20 11264.0 530.070590 396.969169 367.804077
21 11776.0 519.052343 409.599991 377.587162
9 5632.0 698.542675 395.228063 411.470331
10 6144.0 702.171410 402.885254 411.313806
11 6656.0 700.631610 398.861429 400.360920
12 7168.0 690.891575 396.844306 387.459443
13 7680.0 678.895043 393.846167 386.415087
14 8192.0 636.271854 393.609605 371.308771
15 8704.0 627.315309 389.005597 380.502740
16 9216.0 606.814809 407.337026 383.999986
17 9728.0 587.350922 409.599987 383.369452
18 10240.0 564.965524 408.578556 381.911416
19 10752.0 546.133312 411.559798 381.445676
20 11264.0 533.207081 406.826188 373.134567
21 11776.0 520.486200 409.599991 377.587162
22 12288.0 513.336807 413.911572 383.251457
23 12800.0 504.433489 410.420828 377.163903
24 13312.0 494.180982 408.030638 376.976995
25 13824.0 481.882350 412.656711 379.389355
26 14336.0 471.967074 398.914774 371.158581
27 14848.0 461.297068 406.794504 375.304904
28 15360.0 454.269882 406.887417 377.511515
29 15872.0 447.887117 408.282944 376.225175
23 12800.0 504.433489 410.420828 376.470582
24 13312.0 494.180982 405.699062 376.976995
25 13824.0 482.934503 411.888257 379.389355
26 14336.0 471.967074 406.695045 374.185964
27 14848.0 461.297068 408.192434 375.304904
28 15360.0 454.269882 406.214870 378.092307
29 15872.0 447.098578 406.974373 376.783377
@@ -393,7 +393,7 @@ Layer Normalization
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 5 minutes 37.085 seconds)
**Total running time of the script:** ( 5 minutes 36.789 seconds)
.. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:

View File

@@ -390,7 +390,7 @@ This is a Triton implementation of the Flash Attention algorithm
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 0 minutes 0.072 seconds)
**Total running time of the script:** ( 0 minutes 0.071 seconds)
.. _sphx_glr_download_getting-started_tutorials_06-fused-attention.py:

View File

@@ -152,7 +152,7 @@ We can also customize the libdevice library path by passing the path to the `lib
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 0 minutes 0.253 seconds)
**Total running time of the script:** ( 0 minutes 0.010 seconds)
.. _sphx_glr_download_getting-started_tutorials_07-libdevice-function.py:
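
The truncated context line above concerns overriding which `libdevice` bitcode file Triton links against when a kernel calls a libdevice function. A hedged sketch of how that looked around this Triton release; the `tl.libdevice.asin` wrapper and the `extern_libs` launch argument are assumptions about the API of that era, the bitcode path is only an example, and later Triton versions reorganized these functions::

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def asin_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        tl.store(y_ptr + offsets, tl.libdevice.asin(x), mask=mask)

    x = torch.rand(98432, device='cuda')
    out = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
    # Without extern_libs, Triton locates libdevice on its own; passing it points
    # the compiler at a custom bitcode file instead.
    asin_kernel[grid](x, out, x.numel(), BLOCK_SIZE=1024,
                      extern_libs={'libdevice': '/usr/local/cuda/nvvm/libdevice/libdevice.10.bc'})
    # compare with the ~2.4e-07 maximum difference reported later in this diff
    print(torch.max(torch.abs(out - torch.asin(x))))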

View File

@@ -5,20 +5,20 @@
Computation times
=================
**18:11.256** total execution time for **getting-started_tutorials** files:
**17:20.953** total execution time for **getting-started_tutorials** files:
+---------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 07:11.851 | 0.0 MB |
| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:30.888 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:37.085 | 0.0 MB |
| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:36.789 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:32.100 | 0.0 MB |
| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:27.135 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:49.605 | 0.0 MB |
| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:46.048 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.290 | 0.0 MB |
| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``) | 00:00.071 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``) | 00:00.253 | 0.0 MB |
| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.012 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``) | 00:00.072 | 0.0 MB |
| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``) | 00:00.010 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
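
As a consistency check, the updated per-script timings sum to the updated total: 390.888 + 336.789 + 207.135 + 106.048 + 0.012 + 0.071 + 0.010 = 1040.953 seconds, i.e. 17:20.953. The previous timings likewise sum to 431.851 + 337.085 + 212.100 + 109.605 + 0.290 + 0.253 + 0.072 = 1091.256 seconds, i.e. 18:11.256.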

View File

@@ -327,7 +327,7 @@ for different problem sizes.</p>
0 4096.0 9.600000 9.600000
1 8192.0 19.200000 19.200000
2 16384.0 38.400001 38.400001
3 32768.0 76.800002 76.800002
3 32768.0 63.999998 63.999998
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
6 262144.0 341.333321 384.000001
@@ -337,12 +337,12 @@ for different problem sizes.</p>
10 4194304.0 780.190482 780.190482
11 8388608.0 812.429770 812.429770
12 16777216.0 833.084721 833.084721
13 33554432.0 842.004273 842.004273
13 33554432.0 842.004273 843.811163
14 67108864.0 847.448255 848.362445
15 134217728.0 849.737435 850.656574
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 49.605 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 46.048 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>

View File

@@ -372,16 +372,16 @@ We will then compare its performance against (1) <code class="code docutils lite
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
N Triton Torch (native) Torch (jit)
0 256.0 546.133347 546.133347 188.321838
1 384.0 614.400016 558.545450 151.703707
1 384.0 614.400016 585.142862 153.600004
2 512.0 655.360017 606.814814 154.566038
3 640.0 706.206879 640.000002 158.759699
4 768.0 722.823517 664.216187 163.839992
3 640.0 706.206879 640.000002 160.000000
4 768.0 722.823517 664.216187 162.754967
.. ... ... ... ...
93 12160.0 812.359066 405.755985 198.733401
94 12288.0 812.429770 415.222812 198.995960
95 12416.0 812.498981 412.149375 198.655991
96 12544.0 810.925276 412.971190 198.815254
97 12672.0 811.007961 412.097543 198.971549
93 12160.0 812.359066 406.179533 198.936606
94 12288.0 812.429770 415.661740 199.197579
95 12416.0 812.498981 412.149375 198.854847
96 12544.0 810.925276 412.546756 199.111113
97 12672.0 811.007961 412.097543 199.167004
[98 rows x 4 columns]
</pre></div>
@@ -394,7 +394,7 @@ We will then compare its performance against (1) <code class="code docutils lite
Note however that the PyTorch <cite>softmax</cite> operation is more general and will work on tensors of any shape.</p></li>
</ul>
</div></blockquote>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 32.100 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 27.135 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>

View File

@@ -567,42 +567,42 @@ torch_output=tensor([[ 1.1045, -36.9688, 31.4688, ..., -11.3906, 24.4531, -3
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
0 256.0 2.730667 ... 2.978909 2.978909
1 384.0 7.372800 ... 8.507077 8.507077
0 256.0 2.978909 ... 2.978909 3.276800
1 384.0 7.372800 ... 7.899428 7.899428
2 512.0 14.563555 ... 16.384000 15.420235
3 640.0 22.260869 ... 24.380953 24.380953
4 768.0 32.768000 ... 35.389441 34.028308
5 896.0 37.971025 ... 40.140799 39.025776
5 896.0 39.025776 ... 40.140799 39.025776
6 1024.0 49.932191 ... 53.773130 52.428801
7 1152.0 45.242181 ... 48.161033 47.396572
7 1152.0 44.566925 ... 47.396572 47.396572
8 1280.0 51.200001 ... 57.690139 57.690139
9 1408.0 64.138541 ... 69.009825 67.305878
9 1408.0 64.138541 ... 68.147202 67.305878
10 1536.0 80.430545 ... 81.355034 79.526831
11 1664.0 62.929456 ... 63.372618 62.492442
12 1792.0 72.512412 ... 73.460287 59.467852
13 1920.0 69.120002 ... 71.257735 71.257735
14 2048.0 73.584279 ... 78.033565 77.314362
15 2176.0 83.155572 ... 87.494120 85.998493
16 2304.0 68.251065 ... 78.064941 77.307030
17 2432.0 71.305746 ... 86.179335 75.118889
13 1920.0 68.776119 ... 71.626943 71.257735
14 2048.0 73.908442 ... 78.398206 77.314362
15 2176.0 83.155572 ... 87.876193 85.998493
16 2304.0 68.446623 ... 78.064941 77.057651
17 2432.0 71.305746 ... 84.367759 75.320281
18 2560.0 77.833728 ... 82.747477 81.715711
19 2688.0 83.922689 ... 90.966561 88.836198
20 2816.0 79.733474 ... 83.873477 83.074685
21 2944.0 82.509987 ... 83.617504 82.373605
22 3072.0 81.530748 ... 89.735509 88.060814
23 3200.0 84.432717 ... 96.530922 95.380032
24 3328.0 82.843841 ... 85.754970 83.808259
25 3456.0 80.945348 ... 85.133652 81.271743
26 3584.0 87.466332 ... 95.249353 98.699661
27 3712.0 85.896254 ... 81.283434 87.706180
28 3840.0 81.019778 ... 86.332554 89.766237
29 3968.0 85.751184 ... 90.388098 86.114283
30 4096.0 91.867031 ... 83.081233 89.807782
19 2688.0 83.552988 ... 90.966561 89.676257
20 2816.0 79.443003 ... 84.523664 83.873477
21 2944.0 82.509987 ... 83.337844 82.373605
22 3072.0 81.707223 ... 89.310890 89.030036
23 3200.0 84.656085 ... 96.530922 94.674553
24 3328.0 82.369902 ... 85.398926 85.096096
25 3456.0 82.099354 ... 91.824110 90.382926
26 3584.0 85.633710 ... 98.375705 98.591437
27 3712.0 84.230479 ... 87.783251 82.559787
28 3840.0 84.100380 ... 93.563449 84.484863
29 3968.0 92.512459 ... 84.976733 91.540836
30 4096.0 87.495257 ... 88.127204 91.242506
[31 rows x 5 columns]
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 7 minutes 11.851 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes 30.888 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>

View File

@@ -374,7 +374,7 @@ to explore the <cite>triton/language/random</cite> folder!</p>
<dd><p>Nitish Srivastava and Geoffrey Hinton and Alex Krizhevsky and Ilya Sutskever and Ruslan Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, JMLR 2014</p>
</dd>
</dl>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.290 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.012 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-04-low-memory-dropout-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/c9aed78977a4c05741d675a38dde3d7d/04-low-memory-dropout.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">04-low-memory-dropout.py</span></code></a></p>

View File

@@ -196,36 +196,36 @@ to download the full example code</p>
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>layer-norm:
N Triton Torch Apex
0 1024.0 585.142849 277.694907 481.882344
0 1024.0 585.142849 277.694907 468.114273
1 1536.0 630.153868 323.368435 511.999982
2 2048.0 668.734716 337.814445 520.126988
3 2560.0 694.237267 362.477870 512.000013
4 3072.0 702.171410 378.092307 501.551037
5 3584.0 725.873439 384.859062 458.751978
6 4096.0 728.177767 383.251446 451.972420
7 4608.0 670.254540 396.387087 428.651163
2 2048.0 668.734716 334.367358 520.126988
3 2560.0 694.237267 365.714281 512.000013
4 3072.0 712.347810 378.092307 496.484863
5 3584.0 725.873439 384.859062 448.000001
6 4096.0 728.177767 381.023256 445.823133
7 4608.0 670.254540 394.267384 421.302872
8 5120.0 688.403381 397.669909 424.455959
9 5632.0 698.542675 398.725657 411.470331
10 6144.0 697.191505 404.543206 409.600010
11 6656.0 700.631610 400.360920 400.360920
12 7168.0 686.754468 383.571898 383.571898
13 7680.0 682.666656 392.587863 386.415087
14 8192.0 639.375598 392.431125 373.424507
15 8704.0 624.502255 391.191007 380.502740
16 9216.0 604.327881 403.989025 382.010363
17 9728.0 585.142883 409.599987 383.369452
18 10240.0 564.965524 409.600010 382.803739
19 10752.0 546.133312 410.577576 380.601764
20 11264.0 530.070590 396.969169 367.804077
21 11776.0 519.052343 409.599991 377.587162
9 5632.0 698.542675 395.228063 411.470331
10 6144.0 702.171410 402.885254 411.313806
11 6656.0 700.631610 398.861429 400.360920
12 7168.0 690.891575 396.844306 387.459443
13 7680.0 678.895043 393.846167 386.415087
14 8192.0 636.271854 393.609605 371.308771
15 8704.0 627.315309 389.005597 380.502740
16 9216.0 606.814809 407.337026 383.999986
17 9728.0 587.350922 409.599987 383.369452
18 10240.0 564.965524 408.578556 381.911416
19 10752.0 546.133312 411.559798 381.445676
20 11264.0 533.207081 406.826188 373.134567
21 11776.0 520.486200 409.599991 377.587162
22 12288.0 513.336807 413.911572 383.251457
23 12800.0 504.433489 410.420828 377.163903
24 13312.0 494.180982 408.030638 376.976995
25 13824.0 481.882350 412.656711 379.389355
26 14336.0 471.967074 398.914774 371.158581
27 14848.0 461.297068 406.794504 375.304904
28 15360.0 454.269882 406.887417 377.511515
29 15872.0 447.887117 408.282944 376.225175
23 12800.0 504.433489 410.420828 376.470582
24 13312.0 494.180982 405.699062 376.976995
25 13824.0 482.934503 411.888257 379.389355
26 14336.0 471.967074 406.695045 374.185964
27 14848.0 461.297068 408.192434 375.304904
28 15360.0 454.269882 406.214870 378.092307
29 15872.0 447.098578 406.974373 376.783377
</pre></div>
</div>
<div class="line-block">
@@ -543,7 +543,7 @@ to download the full example code</p>
<span class="n">bench_layer_norm</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">save_path</span><span class="o">=</span><span class="s1">&#39;.&#39;</span><span class="p">,</span> <span class="n">print_data</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 37.085 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 36.789 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-05-layer-norm-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/935c0dd0fbeb4b2e69588471cbb2d4b2/05-layer-norm.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">05-layer-norm.py</span></code></a></p>

View File

@@ -548,7 +548,7 @@ to download the full example code</p>
<span class="c1"># bench_flash_attention.run(save_path=&#39;.&#39;, print_data=True)</span>
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.072 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.071 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-06-fused-attention-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/54a35f6ec55f9746935b9566fb6bb1df/06-fused-attention.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">06-fused-attention.py</span></code></a></p>

View File

@@ -276,7 +276,7 @@ tensor([0.4105, 0.5430, 0.0249, ..., 0.0424, 0.5351, 0.8149], device=&#39;cuda:
The maximum difference between torch and triton is 2.384185791015625e-07
</pre></div>
</div>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.253 seconds)</p>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.010 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-07-libdevice-function-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/3ff29f967ace7985da24aab10352fc76/07-libdevice-function.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">07-libdevice-function.py</span></code></a></p>

View File

@@ -174,7 +174,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline"></a></h1>
<p><strong>18:11.256</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
<p><strong>17:20.953</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 85%" />
@@ -183,31 +183,31 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
<td><p>07:11.851</p></td>
<td><p>06:30.888</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="05-layer-norm.html#sphx-glr-getting-started-tutorials-05-layer-norm-py"><span class="std std-ref">Layer Normalization</span></a> (<code class="docutils literal notranslate"><span class="pre">05-layer-norm.py</span></code>)</p></td>
<td><p>05:37.085</p></td>
<td><p>05:36.789</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
<td><p>03:32.100</p></td>
<td><p>03:27.135</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>
<td><p>01:49.605</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
<td><p>00:00.290</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="07-libdevice-function.html#sphx-glr-getting-started-tutorials-07-libdevice-function-py"><span class="std std-ref">Libdevice function</span></a> (<code class="docutils literal notranslate"><span class="pre">07-libdevice-function.py</span></code>)</p></td>
<td><p>00:00.253</p></td>
<td><p>01:46.048</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="06-fused-attention.html#sphx-glr-getting-started-tutorials-06-fused-attention-py"><span class="std std-ref">Fused Attention</span></a> (<code class="docutils literal notranslate"><span class="pre">06-fused-attention.py</span></code>)</p></td>
<td><p>00:00.072</p></td>
<td><p>00:00.071</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
<td><p>00:00.012</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="07-libdevice-function.html#sphx-glr-getting-started-tutorials-07-libdevice-function-py"><span class="std std-ref">Libdevice function</span></a> (<code class="docutils literal notranslate"><span class="pre">07-libdevice-function.py</span></code>)</p></td>
<td><p>00:00.010</p></td>
<td><p>0.0 MB</p></td>
</tr>
</tbody>

File diff suppressed because one or more lines are too long

View File

@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 785a054243bb7f711254427164d01157
config: 9550e7e67a618ab8fb2356063f9a5dc6
tags: 645f666f9bcd5a90fca523b33c5a78b7

Binary file not shown.

Binary file not shown.

Some files were not shown because too many files have changed in this diff.