[GH-PAGES] Updated website

Philippe Tillet
2022-04-24 00:44:07 +00:00
parent 31dd4ab60e
commit 1581cf9d79
158 changed files with 328 additions and 328 deletions


@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: ca77bb6474c45e6f0da200b497be637e
+config: dcb51d10379790b23b9f305eb4789868
 tags: 645f666f9bcd5a90fca523b33c5a78b7

(10 binary files changed, not shown. For the eight images with reported dimensions, sizes changed: 24 → 22 KiB, 15 → 14 KiB, 37 → 41 KiB, 23 → 25 KiB, 52 → 57 KiB, 31 → 33 KiB, 32 → 33 KiB, 21 → 21 KiB.)


@@ -241,13 +241,13 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 6 262144.0 341.333321 341.333321
 7 524288.0 472.615390 472.615390
 8 1048576.0 614.400016 614.400016
-9 2097152.0 722.823517 702.171410
-10 4194304.0 780.190482 780.190482
+9 2097152.0 702.171410 702.171410
+10 4194304.0 780.190482 768.000002
 11 8388608.0 812.429770 812.429770
 12 16777216.0 833.084721 833.084721
 13 33554432.0 842.004273 842.004273
 14 67108864.0 847.448255 848.362445
-15 134217728.0 850.196756 850.656574
+15 134217728.0 849.737435 850.656574
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 1 minutes 52.088 seconds)
+**Total running time of the script:** ( 1 minutes 48.554 seconds)
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
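The vector-add columns above are effective memory bandwidths in GB/s derived from measured kernel time. A minimal sketch of the conversion, assuming the tutorial's fp32 convention of three 4-byte accesses per element (two loads, one store); the function name is mine, for illustration:

```python
def vector_add_gbps(n_elements: int, ms: float, bytes_per_element: int = 4) -> float:
    """Effective memory bandwidth in GB/s for z = x + y.

    Assumption: each element costs two loads and one store, i.e.
    3 * 4 bytes of traffic for fp32, matching the tutorial's metric.
    """
    total_bytes = 3 * n_elements * bytes_per_element
    return total_bytes * 1e-9 / (ms * 1e-3)

# Reading the table backwards: the last row (N = 134217728) at
# ~850 GB/s implies a kernel time of roughly 1.9 ms.
print(vector_add_gbps(134217728, 1.9))
```

Under this convention the throughput plateau near 850 GB/s in the large-N rows is simply the device's memory bandwidth ceiling; vector addition does too little arithmetic per byte to be compute-bound.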


@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 softmax-performance:
 N Triton Torch (native) Torch (jit)
-0 256.0 512.000001 546.133347 190.511628
-1 384.0 585.142862 558.545450 149.853661
-2 512.0 655.360017 585.142849 154.566038
-3 640.0 682.666684 640.000002 158.759699
-4 768.0 722.823517 646.736871 162.754967
+0 256.0 512.000001 546.133347 186.181817
+1 384.0 438.857137 558.545450 151.703707
+2 512.0 468.114273 585.142849 154.566038
+3 640.0 465.454542 640.000002 158.759699
+4 768.0 472.615390 664.216187 163.839992
 .. ... ... ... ...
-93 12160.0 815.765209 405.755985 198.631953
-94 12288.0 814.111783 416.101597 198.794749
-95 12416.0 815.835709 412.149375 198.457532
-96 12544.0 814.214963 412.971190 198.618504
-97 12672.0 814.265046 412.097543 198.776477
+93 12160.0 478.034381 405.333344 198.834951
+94 12288.0 484.256150 415.222812 199.096718
+95 12416.0 461.454135 411.296057 198.755369
+96 12544.0 458.228323 412.971190 198.815254
+97 12672.0 458.196617 411.679167 198.971549
 [98 rows x 4 columns]
@@ -306,7 +306,7 @@ In the above plot, we can see that:
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 3 minutes 32.198 seconds)
+**Total running time of the script:** ( 3 minutes 10.849 seconds)
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
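The softmax columns above are also GB/s figures. A sketch of the conversion, under the assumption (consistent with how a fully fused kernel behaves) that the metric charges one read and one write of the full tensor; the function name is mine:

```python
def softmax_gbps(n_rows: int, n_cols: int, ms: float, bytes_per_element: int = 4) -> float:
    """Effective bandwidth for row-wise softmax over an (n_rows, n_cols) fp32 tensor.

    Assumption: an ideal fused kernel touches each element exactly twice
    (one load, one store), so traffic is 2 * numel * 4 bytes.
    """
    total_bytes = 2 * n_rows * n_cols * bytes_per_element
    return total_bytes * 1e-9 / (ms * 1e-3)
```

Because unfused implementations make several extra passes over the data, their measured time is larger while this formula's byte count stays fixed, which is why the Torch (jit) column sits far below the fused kernels at large N.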


@@ -459,36 +459,36 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
 0 256.0 2.730667 ... 2.978909 2.978909
-1 384.0 7.372800 ... 7.899428 8.507077
-2 512.0 14.563555 ... 15.420235 16.384000
+1 384.0 7.372800 ... 8.507077 8.507077
+2 512.0 14.563555 ... 15.420235 15.420235
 3 640.0 22.260869 ... 24.380953 24.380953
-4 768.0 31.597714 ... 34.028308 34.028308
-5 896.0 36.971791 ... 40.140799 39.025776
-6 1024.0 48.770977 ... 52.428801 52.428801
-7 1152.0 43.911529 ... 46.656000 46.656000
-8 1280.0 49.951220 ... 56.888887 56.109587
-9 1408.0 62.664092 ... 67.305878 66.485074
-10 1536.0 78.643199 ... 78.643199 77.778988
-11 1664.0 61.636381 ... 62.061463 61.636381
-12 1792.0 71.588687 ... 67.707374 67.707374
-13 1920.0 67.764707 ... 70.172588 69.818184
-14 2048.0 72.005219 ... 76.608294 76.260072
-15 2176.0 81.803444 ... 85.632545 85.269692
-16 2304.0 67.100763 ... 76.563695 76.319081
-17 2432.0 69.886725 ... 74.323979 73.932798
-18 2560.0 76.382283 ... 80.908642 80.511054
-19 2688.0 82.642823 ... 89.676257 89.254248
-20 2816.0 82.602666 ... 82.916747 82.759409
-21 2944.0 81.166173 ... 82.509987 82.169877
-22 3072.0 81.005868 ... 88.197981 88.060814
-23 3200.0 83.660130 ... 95.522391 95.096582
-24 3328.0 82.369902 ... 84.795401 84.596116
-25 3456.0 81.026701 ... 91.304157 90.994998
-26 3584.0 86.374055 ... 98.699661 98.375705
-27 3712.0 84.874549 ... 88.876645 88.483034
-28 3840.0 84.036474 ... 92.545605 92.159996
-29 3968.0 92.232760 ... 91.062642 91.403695
-30 4096.0 93.206754 ... 93.012976 92.820009
+4 768.0 32.768000 ... 34.028308 34.028308
+5 896.0 37.971025 ... 40.140799 40.140799
+6 1024.0 49.932191 ... 53.773130 52.428801
+7 1152.0 45.242181 ... 48.161033 47.396572
+8 1280.0 51.200001 ... 57.690139 57.690139
+9 1408.0 64.138541 ... 68.147202 67.305878
+10 1536.0 79.526831 ... 79.526831 79.526831
+11 1664.0 62.929456 ... 62.929456 62.492442
+12 1792.0 72.512412 ... 63.142831 62.790080
+13 1920.0 69.120002 ... 71.626943 70.892307
+14 2048.0 73.584279 ... 78.033565 77.672296
+15 2176.0 83.155572 ... 87.115360 86.739860
+16 2304.0 68.251065 ... 78.064941 77.558029
+17 2432.0 71.125224 ... 75.522751 75.320281
+18 2560.0 77.833728 ... 82.331658 81.920002
+19 2688.0 83.645109 ... 90.102270 90.102270
+20 2816.0 80.026067 ... 83.712490 81.981598
+21 2944.0 82.509987 ... 82.852924 82.646820
+22 3072.0 82.540970 ... 86.845249 89.593522
+23 3200.0 84.993363 ... 96.240602 96.096095
+24 3328.0 83.034941 ... 83.905938 82.843841
+25 3456.0 81.849303 ... 89.679166 90.382926
+26 3584.0 85.797134 ... 93.467144 98.483450
+27 3712.0 85.748791 ... 86.754095 88.170647
+28 3840.0 81.798814 ... 87.841141 91.473945
+29 3968.0 85.871877 ... 92.582651 84.738843
+30 4096.0 94.386588 ... 84.894196 82.722796
 [31 rows x 5 columns]
@@ -498,7 +498,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 6 minutes 41.514 seconds)
+**Total running time of the script:** ( 6 minutes 44.900 seconds)
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
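Unlike the bandwidth-bound tutorials, the matmul columns are compute throughputs in TFLOPS. A sketch of the usual normalization, assuming the conventional 2·M·N·K flop count for a matmul and square problems as swept by this benchmark; the function name is mine:

```python
def matmul_tflops(M: int, N: int, K: int, ms: float) -> float:
    """Throughput in TFLOPS for one (M x K) @ (K x N) matmul.

    Assumption: flops are counted as 2 * M * N * K (one multiply and
    one add per inner-product term), the standard convention.
    """
    return 2.0 * M * N * K * 1e-12 / (ms * 1e-3)

# Reading the table backwards: the M = 4096 row at ~93 TFLOPS
# corresponds to a kernel time of about 1.48 ms.
```

This also explains the shape of the column: small problems (M = 256 at ~3 TFLOPS) cannot fill the device, so throughput climbs steeply with size before flattening near the tensor-core peak.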


@@ -240,7 +240,7 @@ References
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.411 seconds)
+**Total running time of the script:** ( 0 minutes 0.325 seconds)
 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:


@@ -38,36 +38,36 @@ Layer Normalization
 layer-norm-backward:
 N Triton Torch Apex
-0 1024.0 356.173905 96.755900 299.707322
-1 1536.0 400.695643 133.083026 338.201833
-2 2048.0 486.653476 159.584422 321.254900
-3 2560.0 455.111129 179.649115 323.368411
-4 3072.0 508.468972 191.005181 320.556515
-5 3584.0 551.384634 206.769233 310.527060
-6 4096.0 558.545450 219.919464 297.890900
-7 4608.0 493.714279 231.364016 285.767436
-8 5120.0 518.481012 241.414550 283.133649
-9 5632.0 534.260858 241.803217 288.820505
-10 6144.0 540.131844 247.409397 285.214712
-11 6656.0 525.473708 255.182111 284.748652
-12 7168.0 503.017523 259.084340 283.881181
-13 7680.0 482.513091 262.190612 275.104486
-14 8192.0 463.698115 265.686491 283.296835
-15 8704.0 406.412440 266.789264 283.826081
-16 9216.0 419.703988 270.727053 287.251954
-17 9728.0 428.388977 279.942444 288.950501
-18 10240.0 437.295395 285.104413 286.767793
-19 10752.0 424.421071 245.292781 289.291486
-20 11264.0 423.724120 244.426754 286.069848
-21 11776.0 418.702211 247.915800 288.097854
-22 12288.0 414.784810 253.578674 293.737063
-23 12800.0 409.327110 252.631590 287.371378
-24 13312.0 408.030638 252.360194 289.653667
-25 13824.0 403.620451 256.197690 291.287079
-26 14336.0 394.116833 253.921779 286.242939
-27 14848.0 384.829370 257.108233 289.246765
-28 15360.0 380.041240 257.071134 284.884090
-29 15872.0 370.913333 260.731015 289.679087
+0 1024.0 114.306981 98.698793 315.076934
+1 1536.0 118.153850 132.604320 341.333333
+2 2048.0 125.068704 159.584422 332.108094
+3 2560.0 119.300966 180.705883 332.108113
+4 3072.0 123.912607 191.005181 320.556515
+5 3584.0 127.809806 207.267476 310.527060
+6 4096.0 130.723400 220.907859 296.990947
+7 4608.0 105.325718 232.336141 287.999990
+8 5120.0 108.647215 241.414550 283.787523
+9 5632.0 109.714290 241.371422 288.204696
+10 6144.0 112.133840 249.502530 287.438593
+11 6656.0 112.733948 254.775119 285.257135
+12 7168.0 114.994654 257.147993 281.098038
+13 7680.0 115.128047 264.068761 281.834874
+14 8192.0 115.856217 267.493874 282.889211
+15 8704.0 94.054934 264.091015 282.673891
+16 9216.0 96.755908 271.391419 287.625496
+17 9728.0 97.768843 280.615388 288.950501
+18 10240.0 100.105904 285.104413 289.129408
+19 10752.0 101.036809 246.464170 290.594591
+20 11264.0 102.945926 245.536784 286.676558
+21 11776.0 103.601171 249.447482 288.981596
+22 12288.0 106.159829 254.673582 294.911986
+23 12800.0 106.187352 253.884294 289.811310
+24 13312.0 107.463167 251.962147 289.129403
+25 13824.0 107.544896 256.792581 292.313649
+26 14336.0 109.435116 254.673567 287.198654
+27 14848.0 109.276912 255.266469 289.481735
+28 15360.0 110.669469 260.338991 290.496454
+29 15872.0 110.863792 263.071829 291.898841
@@ -339,7 +339,7 @@ Layer Normalization
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 2 minutes 16.519 seconds)
+**Total running time of the script:** ( 2 minutes 2.489 seconds)
 .. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:


@@ -5,16 +5,16 @@
 Computation times
 =================
-**14:22.730** total execution time for **getting-started_tutorials** files:
+**13:47.116** total execution time for **getting-started_tutorials** files:
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:41.514 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:44.900 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:32.198 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:10.849 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 02:16.519 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 02:02.489 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:52.088 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:48.554 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.411 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.325 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+


@@ -331,16 +331,16 @@ for different problem sizes.</p>
 6 262144.0 341.333321 341.333321
 7 524288.0 472.615390 472.615390
 8 1048576.0 614.400016 614.400016
-9 2097152.0 722.823517 702.171410
-10 4194304.0 780.190482 780.190482
+9 2097152.0 702.171410 702.171410
+10 4194304.0 780.190482 768.000002
 11 8388608.0 812.429770 812.429770
 12 16777216.0 833.084721 833.084721
 13 33554432.0 842.004273 842.004273
 14 67108864.0 847.448255 848.362445
-15 134217728.0 850.196756 850.656574
+15 134217728.0 849.737435 850.656574
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 52.088 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 48.554 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>


@@ -369,17 +369,17 @@ We will then compare its performance against (1) <code class="code docutils lite
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
 N Triton Torch (native) Torch (jit)
-0 256.0 512.000001 546.133347 190.511628
-1 384.0 585.142862 558.545450 149.853661
-2 512.0 655.360017 585.142849 154.566038
-3 640.0 682.666684 640.000002 158.759699
-4 768.0 722.823517 646.736871 162.754967
+0 256.0 512.000001 546.133347 186.181817
+1 384.0 438.857137 558.545450 151.703707
+2 512.0 468.114273 585.142849 154.566038
+3 640.0 465.454542 640.000002 158.759699
+4 768.0 472.615390 664.216187 163.839992
 .. ... ... ... ...
-93 12160.0 815.765209 405.755985 198.631953
-94 12288.0 814.111783 416.101597 198.794749
-95 12416.0 815.835709 412.149375 198.457532
-96 12544.0 814.214963 412.971190 198.618504
-97 12672.0 814.265046 412.097543 198.776477
+93 12160.0 478.034381 405.333344 198.834951
+94 12288.0 484.256150 415.222812 199.096718
+95 12416.0 461.454135 411.296057 198.755369
+96 12544.0 458.228323 412.971190 198.815254
+97 12672.0 458.196617 411.679167 198.971549
 [98 rows x 4 columns]
 </pre></div>
@@ -392,7 +392,7 @@ We will then compare its performance against (1) <code class="code docutils lite
 Note however that the PyTorch <cite>softmax</cite> operation is more general and will works on tensors of any shape.</p></li>
 </ul>
 </div></blockquote>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 32.198 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 10.849 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>


@@ -565,41 +565,41 @@ torch_output=tensor([[ 1.1045, -36.9688, 31.4688, ..., -11.3906, 24.4531, -3
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
 0 256.0 2.730667 ... 2.978909 2.978909
-1 384.0 7.372800 ... 7.899428 8.507077
-2 512.0 14.563555 ... 15.420235 16.384000
+1 384.0 7.372800 ... 8.507077 8.507077
+2 512.0 14.563555 ... 15.420235 15.420235
 3 640.0 22.260869 ... 24.380953 24.380953
-4 768.0 31.597714 ... 34.028308 34.028308
-5 896.0 36.971791 ... 40.140799 39.025776
-6 1024.0 48.770977 ... 52.428801 52.428801
-7 1152.0 43.911529 ... 46.656000 46.656000
-8 1280.0 49.951220 ... 56.888887 56.109587
-9 1408.0 62.664092 ... 67.305878 66.485074
-10 1536.0 78.643199 ... 78.643199 77.778988
-11 1664.0 61.636381 ... 62.061463 61.636381
-12 1792.0 71.588687 ... 67.707374 67.707374
-13 1920.0 67.764707 ... 70.172588 69.818184
-14 2048.0 72.005219 ... 76.608294 76.260072
-15 2176.0 81.803444 ... 85.632545 85.269692
-16 2304.0 67.100763 ... 76.563695 76.319081
-17 2432.0 69.886725 ... 74.323979 73.932798
-18 2560.0 76.382283 ... 80.908642 80.511054
-19 2688.0 82.642823 ... 89.676257 89.254248
-20 2816.0 82.602666 ... 82.916747 82.759409
-21 2944.0 81.166173 ... 82.509987 82.169877
-22 3072.0 81.005868 ... 88.197981 88.060814
-23 3200.0 83.660130 ... 95.522391 95.096582
-24 3328.0 82.369902 ... 84.795401 84.596116
-25 3456.0 81.026701 ... 91.304157 90.994998
-26 3584.0 86.374055 ... 98.699661 98.375705
-27 3712.0 84.874549 ... 88.876645 88.483034
-28 3840.0 84.036474 ... 92.545605 92.159996
-29 3968.0 92.232760 ... 91.062642 91.403695
-30 4096.0 93.206754 ... 93.012976 92.820009
+4 768.0 32.768000 ... 34.028308 34.028308
+5 896.0 37.971025 ... 40.140799 40.140799
+6 1024.0 49.932191 ... 53.773130 52.428801
+7 1152.0 45.242181 ... 48.161033 47.396572
+8 1280.0 51.200001 ... 57.690139 57.690139
+9 1408.0 64.138541 ... 68.147202 67.305878
+10 1536.0 79.526831 ... 79.526831 79.526831
+11 1664.0 62.929456 ... 62.929456 62.492442
+12 1792.0 72.512412 ... 63.142831 62.790080
+13 1920.0 69.120002 ... 71.626943 70.892307
+14 2048.0 73.584279 ... 78.033565 77.672296
+15 2176.0 83.155572 ... 87.115360 86.739860
+16 2304.0 68.251065 ... 78.064941 77.558029
+17 2432.0 71.125224 ... 75.522751 75.320281
+18 2560.0 77.833728 ... 82.331658 81.920002
+19 2688.0 83.645109 ... 90.102270 90.102270
+20 2816.0 80.026067 ... 83.712490 81.981598
+21 2944.0 82.509987 ... 82.852924 82.646820
+22 3072.0 82.540970 ... 86.845249 89.593522
+23 3200.0 84.993363 ... 96.240602 96.096095
+24 3328.0 83.034941 ... 83.905938 82.843841
+25 3456.0 81.849303 ... 89.679166 90.382926
+26 3584.0 85.797134 ... 93.467144 98.483450
+27 3712.0 85.748791 ... 86.754095 88.170647
+28 3840.0 81.798814 ... 87.841141 91.473945
+29 3968.0 85.871877 ... 92.582651 84.738843
+30 4096.0 94.386588 ... 84.894196 82.722796
 [31 rows x 5 columns]
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes 41.514 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes 44.900 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>


@@ -372,7 +372,7 @@ to explore the <cite>triton/language/random</cite> folder!</p>
 <dd><p>Nitish Srivastava and Geoffrey Hinton and Alex Krizhevsky and Ilya Sutskever and Ruslan Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, JMLR 2014</p>
 </dd>
 </dl>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.411 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.325 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-04-low-memory-dropout-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/c9aed78977a4c05741d675a38dde3d7d/04-low-memory-dropout.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">04-low-memory-dropout.py</span></code></a></p>


@@ -194,36 +194,36 @@ to download the full example code</p>
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>layer-norm-backward:
 N Triton Torch Apex
-0 1024.0 356.173905 96.755900 299.707322
-1 1536.0 400.695643 133.083026 338.201833
-2 2048.0 486.653476 159.584422 321.254900
-3 2560.0 455.111129 179.649115 323.368411
-4 3072.0 508.468972 191.005181 320.556515
-5 3584.0 551.384634 206.769233 310.527060
-6 4096.0 558.545450 219.919464 297.890900
-7 4608.0 493.714279 231.364016 285.767436
-8 5120.0 518.481012 241.414550 283.133649
-9 5632.0 534.260858 241.803217 288.820505
-10 6144.0 540.131844 247.409397 285.214712
-11 6656.0 525.473708 255.182111 284.748652
-12 7168.0 503.017523 259.084340 283.881181
-13 7680.0 482.513091 262.190612 275.104486
-14 8192.0 463.698115 265.686491 283.296835
-15 8704.0 406.412440 266.789264 283.826081
-16 9216.0 419.703988 270.727053 287.251954
-17 9728.0 428.388977 279.942444 288.950501
-18 10240.0 437.295395 285.104413 286.767793
-19 10752.0 424.421071 245.292781 289.291486
-20 11264.0 423.724120 244.426754 286.069848
-21 11776.0 418.702211 247.915800 288.097854
-22 12288.0 414.784810 253.578674 293.737063
-23 12800.0 409.327110 252.631590 287.371378
-24 13312.0 408.030638 252.360194 289.653667
-25 13824.0 403.620451 256.197690 291.287079
-26 14336.0 394.116833 253.921779 286.242939
-27 14848.0 384.829370 257.108233 289.246765
-28 15360.0 380.041240 257.071134 284.884090
-29 15872.0 370.913333 260.731015 289.679087
+0 1024.0 114.306981 98.698793 315.076934
+1 1536.0 118.153850 132.604320 341.333333
+2 2048.0 125.068704 159.584422 332.108094
+3 2560.0 119.300966 180.705883 332.108113
+4 3072.0 123.912607 191.005181 320.556515
+5 3584.0 127.809806 207.267476 310.527060
+6 4096.0 130.723400 220.907859 296.990947
+7 4608.0 105.325718 232.336141 287.999990
+8 5120.0 108.647215 241.414550 283.787523
+9 5632.0 109.714290 241.371422 288.204696
+10 6144.0 112.133840 249.502530 287.438593
+11 6656.0 112.733948 254.775119 285.257135
+12 7168.0 114.994654 257.147993 281.098038
+13 7680.0 115.128047 264.068761 281.834874
+14 8192.0 115.856217 267.493874 282.889211
+15 8704.0 94.054934 264.091015 282.673891
+16 9216.0 96.755908 271.391419 287.625496
+17 9728.0 97.768843 280.615388 288.950501
+18 10240.0 100.105904 285.104413 289.129408
+19 10752.0 101.036809 246.464170 290.594591
+20 11264.0 102.945926 245.536784 286.676558
+21 11776.0 103.601171 249.447482 288.981596
+22 12288.0 106.159829 254.673582 294.911986
+23 12800.0 106.187352 253.884294 289.811310
+24 13312.0 107.463167 251.962147 289.129403
+25 13824.0 107.544896 256.792581 292.313649
+26 14336.0 109.435116 254.673567 287.198654
+27 14848.0 109.276912 255.266469 289.481735
+28 15360.0 110.669469 260.338991 290.496454
+29 15872.0 110.863792 263.071829 291.898841
 </pre></div>
 </div>
 <div class="line-block">
@@ -487,7 +487,7 @@ to download the full example code</p>
 <span class="n">bench_layer_norm</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">save_path</span><span class="o">=</span><span class="s1">&#39;.&#39;</span><span class="p">,</span> <span class="n">print_data</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 16.519 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 2.489 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-05-layer-norm-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/935c0dd0fbeb4b2e69588471cbb2d4b2/05-layer-norm.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">05-layer-norm.py</span></code></a></p>


@@ -174,7 +174,7 @@
 <div class="section" id="computation-times">
 <span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>14:22.730</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
+<p><strong>13:47.116</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -183,23 +183,23 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
-<td><p>06:41.514</p></td>
+<td><p>06:44.900</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
-<td><p>03:32.198</p></td>
+<td><p>03:10.849</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="05-layer-norm.html#sphx-glr-getting-started-tutorials-05-layer-norm-py"><span class="std std-ref">Layer Normalization</span></a> (<code class="docutils literal notranslate"><span class="pre">05-layer-norm.py</span></code>)</p></td>
-<td><p>02:16.519</p></td>
+<td><p>02:02.489</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>
-<td><p>01:52.088</p></td>
+<td><p>01:48.554</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
-<td><p>00:00.411</p></td>
+<td><p>00:00.325</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>

File diff suppressed because one or more lines are too long


@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 3ac44b8ff916995b2d2faa9a97dba090
+config: 859316008f2f6719727d4fb3b970c7e2
 tags: 645f666f9bcd5a90fca523b33c5a78b7

Binary file not shown.

Binary file not shown.

Some files were not shown because too many files have changed in this diff.