[GH-PAGES] Updated website

Philippe Tillet
2022-04-23 00:44:46 +00:00
parent cb69ba73a9
commit 31dd4ab60e
158 changed files with 334 additions and 334 deletions

View File

@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 9d0e80f33b62bdb551b3370046e08ebe
+config: ca77bb6474c45e6f0da200b497be637e
 tags: 645f666f9bcd5a90fca523b33c5a78b7

Binary files changed (diffs not shown). The regenerated benchmark plot images changed size as follows (before → after): 23 KiB → 24 KiB, 15 KiB → 15 KiB, 37 KiB → 37 KiB, 23 KiB → 23 KiB, 60 KiB → 52 KiB, 34 KiB → 31 KiB, 33 KiB → 32 KiB, 21 KiB → 21 KiB.

View File

@@ -241,13 +241,13 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 6 262144.0 341.333321 341.333321
 7 524288.0 472.615390 472.615390
 8 1048576.0 614.400016 614.400016
-9 2097152.0 702.171410 702.171410
+9 2097152.0 722.823517 702.171410
 10 4194304.0 780.190482 780.190482
 11 8388608.0 812.429770 812.429770
 12 16777216.0 833.084721 833.084721
 13 33554432.0 842.004273 842.004273
 14 67108864.0 847.448255 848.362445
-15 134217728.0 849.737435 850.656574
+15 134217728.0 850.196756 850.656574
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 1 minutes 46.845 seconds)
+**Total running time of the script:** ( 1 minutes 52.088 seconds)
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
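For context, the GB/s figures in the vector-add table follow from the kernel's memory traffic: `z = x + y` on float32 data reads two vectors and writes one, i.e. 3 × N × 4 bytes per run, divided by the runtime. A minimal sketch of that conversion (the helper name and the sample timing are illustrative, not from this commit):

```python
def vector_add_gbps(n_elements: int, ms: float) -> float:
    """Effective bandwidth of z = x + y on float32 data.

    The kernel reads x and y and writes z: 3 * N * 4 bytes total,
    divided by the runtime given in milliseconds.
    """
    bytes_moved = 3 * n_elements * 4          # two loads + one store, 4 bytes each
    return bytes_moved / (ms * 1e-3) / 1e9    # bytes per second -> GB/s

# Illustrative check with the table's largest N: ~1.6 GB moved in an
# assumed ~1.9 ms gives ~848 GB/s, the order of the last rows above.
print(round(vector_add_gbps(134_217_728, 1.9), 1))
```

This is why the throughput saturates for large N: past a few MB, the run is purely memory-bound and the figure approaches the device's peak DRAM bandwidth.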

View File

@@ -279,16 +279,16 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 softmax-performance:
 N Triton Torch (native) Torch (jit)
 0 256.0 512.000001 546.133347 190.511628
-1 384.0 585.142862 558.545450 153.600004
+1 384.0 585.142862 558.545450 149.853661
-2 512.0 655.360017 585.142849 153.121496
+2 512.0 655.360017 585.142849 154.566038
-3 640.0 682.666684 620.606056 158.759699
+3 640.0 682.666684 640.000002 158.759699
 4 768.0 722.823517 646.736871 162.754967
 .. ... ... ... ...
-93 12160.0 814.058574 405.755985 198.936606
+93 12160.0 815.765209 405.755985 198.631953
-94 12288.0 814.111783 415.222812 199.197579
+94 12288.0 814.111783 416.101597 198.794749
-95 12416.0 814.163950 411.296057 198.854847
+95 12416.0 815.835709 412.149375 198.457532
-96 12544.0 812.566838 412.546756 199.012395
+96 12544.0 814.214963 412.971190 198.618504
-97 12672.0 814.265046 412.516771 199.167004
+97 12672.0 814.265046 412.097543 198.776477
 [98 rows x 4 columns]
@@ -306,7 +306,7 @@ In the above plot, we can see that:
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 3 minutes 29.190 seconds)
+**Total running time of the script:** ( 3 minutes 32.198 seconds)
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
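The computation this benchmark exercises is row-wise softmax with the usual max-subtraction trick for numerical stability (the fused kernel's advantage is doing it in one pass over memory). A plain-Python sketch of the math, not the tutorial's Triton kernel:

```python
import math

def softmax_row(row):
    """Numerically stable row softmax.

    Subtracting the row max leaves the result unchanged, since
    exp(x - m) / sum(exp(x - m)) == exp(x) / sum(exp(x)),
    but keeps exp() from overflowing for large inputs.
    """
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

# A naive exp(1000.0) would overflow a float64; the shifted form is fine.
probs = softmax_row([1000.0, 1001.0, 1002.0])
print([round(p, 4) for p in probs])
```

The probabilities sum to 1 and depend only on the differences between entries, which is exactly why the shift is safe.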

View File

@@ -459,36 +459,36 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
 0 256.0 2.730667 ... 2.978909 2.978909
-1 384.0 7.372800 ... 8.507077 8.507077
+1 384.0 7.372800 ... 7.899428 8.507077
-2 512.0 14.563555 ... 15.420235 15.420235
+2 512.0 14.563555 ... 15.420235 16.384000
 3 640.0 22.260869 ... 24.380953 24.380953
-4 768.0 32.768000 ... 34.028308 34.028308
+4 768.0 31.597714 ... 34.028308 34.028308
-5 896.0 37.971025 ... 40.140799 40.140799
+5 896.0 36.971791 ... 40.140799 39.025776
-6 1024.0 49.932191 ... 53.773130 52.428801
+6 1024.0 48.770977 ... 52.428801 52.428801
-7 1152.0 45.242181 ... 48.161033 47.396572
+7 1152.0 43.911529 ... 46.656000 46.656000
-8 1280.0 51.200001 ... 57.690139 57.690139
+8 1280.0 49.951220 ... 56.888887 56.109587
-9 1408.0 64.138541 ... 69.009825 68.147202
+9 1408.0 62.664092 ... 67.305878 66.485074
-10 1536.0 80.430545 ... 80.430545 79.526831
+10 1536.0 78.643199 ... 78.643199 77.778988
-11 1664.0 62.929456 ... 62.929456 62.929456
+11 1664.0 61.636381 ... 62.061463 61.636381
-12 1792.0 72.983276 ... 63.142831 63.142831
+12 1792.0 71.588687 ... 67.707374 67.707374
-13 1920.0 68.776119 ... 71.257735 71.257735
+13 1920.0 67.764707 ... 70.172588 69.818184
-14 2048.0 73.584279 ... 78.398206 78.033565
+14 2048.0 72.005219 ... 76.608294 76.260072
-15 2176.0 82.813365 ... 86.739860 86.367588
+15 2176.0 81.803444 ... 85.632545 85.269692
-16 2304.0 68.251065 ... 77.558029 77.307030
+16 2304.0 67.100763 ... 76.563695 76.319081
-17 2432.0 71.125224 ... 75.726318 75.522751
+17 2432.0 69.886725 ... 74.323979 73.932798
-18 2560.0 77.283019 ... 82.331658 82.125311
+18 2560.0 76.382283 ... 80.908642 80.511054
-19 2688.0 83.552988 ... 91.185232 90.748936
+19 2688.0 82.642823 ... 89.676257 89.254248
-20 2816.0 79.443003 ... 83.792906 83.552120
+20 2816.0 82.602666 ... 82.916747 82.759409
-21 2944.0 82.237674 ... 84.040530 83.899046
+21 2944.0 81.166173 ... 82.509987 82.169877
-22 3072.0 81.589488 ... 85.598037 89.451983
+22 3072.0 81.005868 ... 88.197981 88.060814
-23 3200.0 84.432717 ... 88.520058 95.952022
+23 3200.0 83.660130 ... 95.522391 95.096582
-24 3328.0 81.854799 ... 85.703924 82.275764
+24 3328.0 82.369902 ... 84.795401 84.596116
-25 3456.0 80.220468 ... 92.244355 87.632137
+25 3456.0 81.026701 ... 91.304157 90.994998
-26 3584.0 83.101104 ... 91.284657 94.250936
+26 3584.0 86.374055 ... 98.699661 98.375705
-27 3712.0 85.675250 ... 82.902362 87.552452
+27 3712.0 84.874549 ... 88.876645 88.483034
-28 3840.0 81.079177 ... 87.493673 91.625518
+28 3840.0 84.036474 ... 92.545605 92.159996
-29 3968.0 87.035620 ... 90.388098 84.915752
+29 3968.0 92.232760 ... 91.062642 91.403695
-30 4096.0 91.366730 ... 83.208386 90.626421
+30 4096.0 93.206754 ... 93.012976 92.820009
 [31 rows x 5 columns]
@@ -498,7 +498,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 6 minutes 21.623 seconds)
+**Total running time of the script:** ( 6 minutes 41.514 seconds)
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
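Unlike the bandwidth tables, the matmul table reports compute throughput. The standard conversion counts 2·M·N·K floating-point operations (one multiply and one add per inner-product term). A sketch of that conversion (function name and sample timing are illustrative):

```python
def matmul_tflops(m: int, n: int, k: int, ms: float) -> float:
    """Throughput of an (m x k) @ (k x n) matmul.

    Each of the m*n output elements needs k multiplies and k adds,
    so the operation count is 2 * m * n * k.
    """
    flops = 2 * m * n * k
    return flops / (ms * 1e-3) / 1e12   # ops per second -> TFLOPS

# Illustrative: a 4096^3 matmul finishing in an assumed ~1.5 ms lands
# around 91 TFLOPS, the order of magnitude of the table's last row.
print(round(matmul_tflops(4096, 4096, 4096, 1.5), 1))
```

The cubic op count against a quadratic memory footprint is why matmul, unlike vector add, is compute-bound at large sizes.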

View File

@@ -240,7 +240,7 @@ References
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.011 seconds)
+**Total running time of the script:** ( 0 minutes 0.411 seconds)
 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:
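The low-memory-dropout tutorial's core idea is that the dropout mask never needs to be stored: a seeded pseudo-random generator reproduces the same keep/drop decisions in the backward pass. A stdlib sketch of that idea (not the tutorial's actual Triton kernel, which uses a counter-based RNG on the GPU):

```python
import random

def seeded_dropout(x, p, seed):
    """Dropout whose mask is a pure function of (seed, position).

    Because the mask can be regenerated from the seed alone, the
    backward pass never needs a materialized mask tensor; surviving
    elements are scaled by 1/(1-p) to keep the expected value.
    """
    rng = random.Random(seed)
    scale = 1.0 / (1.0 - p)
    return [v * scale if rng.random() > p else 0.0 for v in x]

x = [1.0, 2.0, 3.0, 4.0]
out1 = seeded_dropout(x, p=0.5, seed=123)
out2 = seeded_dropout(x, p=0.5, seed=123)
assert out1 == out2  # same seed -> identical mask, nothing stored
```

Re-running with the same seed is all the backward pass needs, trading a tiny amount of recomputation for the memory of a full mask tensor.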

View File

@@ -38,36 +38,36 @@ Layer Normalization
 layer-norm-backward:
 N Triton Torch Apex
-0 1024.0 356.173905 97.912354 303.407414
+0 1024.0 356.173905 96.755900 299.707322
-1 1536.0 405.098894 135.032961 341.333333
+1 1536.0 400.695643 133.083026 338.201833
-2 2048.0 486.653476 161.684218 336.657521
+2 2048.0 486.653476 159.584422 321.254900
-3 2560.0 461.954908 182.857144 330.322572
+3 2560.0 455.111129 179.649115 323.368411
-4 3072.0 519.211251 190.511624 319.168834
+4 3072.0 508.468972 191.005181 320.556515
-5 3584.0 554.941930 207.267476 308.301075
+5 3584.0 551.384634 206.769233 310.527060
-6 4096.0 564.965515 219.919464 297.890900
+6 4096.0 558.545450 219.919464 297.890900
-7 4608.0 500.416301 232.825259 287.999990
+7 4608.0 493.714279 231.364016 285.767436
-8 5120.0 527.381977 241.889751 285.104413
+8 5120.0 518.481012 241.414550 283.133649
-9 5632.0 538.517949 242.236559 288.820505
+9 5632.0 534.260858 241.803217 288.820505
-10 6144.0 546.133354 250.775512 288.000001
+10 6144.0 540.131844 247.409397 285.214712
-11 6656.0 532.479975 254.775119 284.748652
+11 6656.0 525.473708 255.182111 284.748652
-12 7168.0 505.976473 255.240352 279.272738
+12 7168.0 503.017523 259.084340 283.881181
-13 7680.0 485.052616 264.447629 281.404588
+13 7680.0 482.513091 262.190612 275.104486
-14 8192.0 464.794337 266.406514 282.077471
+14 8192.0 463.698115 265.686491 283.296835
-15 8704.0 415.300208 262.762264 280.397305
+15 8704.0 406.412440 266.789264 283.826081
-16 9216.0 428.651187 270.727053 287.065546
+16 9216.0 419.703988 270.727053 287.251954
-17 9728.0 437.213490 279.942444 287.527089
+17 9728.0 428.388977 279.942444 288.950501
-18 10240.0 442.810829 285.104413 289.469963
+18 10240.0 437.295395 285.104413 286.767793
-19 10752.0 425.821771 248.123076 292.571421
+19 10752.0 424.421071 245.292781 289.291486
-20 11264.0 427.071098 243.545956 282.778242
+20 11264.0 423.724120 244.426754 286.069848
-21 11776.0 423.724129 248.569911 287.511689
+21 11776.0 418.702211 247.915800 288.097854
-22 12288.0 417.131525 253.578674 294.323369
+22 12288.0 414.784810 253.578674 293.737063
-23 12800.0 412.903215 254.726371 292.015215
+23 12800.0 409.327110 252.631590 287.371378
-24 13312.0 408.030638 253.561895 292.036577
+24 13312.0 408.030638 252.360194 289.653667
-25 13824.0 404.112047 256.991469 291.799461
+25 13824.0 403.620451 256.197690 291.287079
-26 14336.0 393.215988 252.988236 284.585606
+26 14336.0 394.116833 253.921779 286.242939
-27 14848.0 381.533186 256.552919 288.077610
+27 14848.0 384.829370 257.108233 289.246765
-28 15360.0 377.318326 260.338991 290.267715
+28 15360.0 380.041240 257.071134 284.884090
-29 15872.0 369.832994 262.708969 290.341468
+29 15872.0 370.913333 260.731015 289.679087
@@ -339,7 +339,7 @@ Layer Normalization
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 2 minutes 14.604 seconds)
+**Total running time of the script:** ( 2 minutes 16.519 seconds)
 .. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:
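For reference, the forward computation whose backward pass this benchmark measures normalizes each row to zero mean and unit variance, then applies a learned scale and shift. A plain-Python sketch (the eps default is the common choice; the sample values are illustrative):

```python
import math

def layer_norm_row(x, weight, bias, eps=1e-5):
    """y = (x - mean) / sqrt(var + eps) * weight + bias, per row.

    eps guards against division by zero on constant rows.
    """
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    rstd = 1.0 / math.sqrt(var + eps)
    return [(v - mean) * rstd * w + b for v, w, b in zip(x, weight, bias)]

row = [1.0, 2.0, 3.0, 4.0]
y = layer_norm_row(row, weight=[1.0] * 4, bias=[0.0] * 4)
print([round(v, 3) for v in y])
```

With unit weight and zero bias the output row has mean zero, which makes the normalization easy to sanity-check.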

View File

@@ -5,16 +5,16 @@
 Computation times
 =================
-**13:52.274** total execution time for **getting-started_tutorials** files:
+**14:22.730** total execution time for **getting-started_tutorials** files:
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:21.623 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:41.514 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:29.190 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:32.198 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 02:14.604 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 02:16.519 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:46.845 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:52.088 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.011 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.411 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+

View File

@@ -331,16 +331,16 @@ for different problem sizes.</p>
 6 262144.0 341.333321 341.333321
 7 524288.0 472.615390 472.615390
 8 1048576.0 614.400016 614.400016
-9 2097152.0 702.171410 702.171410
+9 2097152.0 722.823517 702.171410
 10 4194304.0 780.190482 780.190482
 11 8388608.0 812.429770 812.429770
 12 16777216.0 833.084721 833.084721
 13 33554432.0 842.004273 842.004273
 14 67108864.0 847.448255 848.362445
-15 134217728.0 849.737435 850.656574
+15 134217728.0 850.196756 850.656574
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 46.845 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 52.088 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>

View File

@@ -370,16 +370,16 @@ We will then compare its performance against (1) <code class="code docutils lite
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
 N Triton Torch (native) Torch (jit)
 0 256.0 512.000001 546.133347 190.511628
-1 384.0 585.142862 558.545450 153.600004
+1 384.0 585.142862 558.545450 149.853661
-2 512.0 655.360017 585.142849 153.121496
+2 512.0 655.360017 585.142849 154.566038
-3 640.0 682.666684 620.606056 158.759699
+3 640.0 682.666684 640.000002 158.759699
 4 768.0 722.823517 646.736871 162.754967
 .. ... ... ... ...
-93 12160.0 814.058574 405.755985 198.936606
+93 12160.0 815.765209 405.755985 198.631953
-94 12288.0 814.111783 415.222812 199.197579
+94 12288.0 814.111783 416.101597 198.794749
-95 12416.0 814.163950 411.296057 198.854847
+95 12416.0 815.835709 412.149375 198.457532
-96 12544.0 812.566838 412.546756 199.012395
+96 12544.0 814.214963 412.971190 198.618504
-97 12672.0 814.265046 412.516771 199.167004
+97 12672.0 814.265046 412.097543 198.776477
 [98 rows x 4 columns]
 </pre></div>
@@ -392,7 +392,7 @@ We will then compare its performance against (1) <code class="code docutils lite
 Note however that the PyTorch <cite>softmax</cite> operation is more general and will work on tensors of any shape.</p></li>
 </ul>
 </div></blockquote>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 29.190 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 32.198 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>

View File

@@ -565,41 +565,41 @@ torch_output=tensor([[ 1.1045, -36.9688, 31.4688, ..., -11.3906, 24.4531, -3
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
 0 256.0 2.730667 ... 2.978909 2.978909
-1 384.0 7.372800 ... 8.507077 8.507077
+1 384.0 7.372800 ... 7.899428 8.507077
-2 512.0 14.563555 ... 15.420235 15.420235
+2 512.0 14.563555 ... 15.420235 16.384000
 3 640.0 22.260869 ... 24.380953 24.380953
-4 768.0 32.768000 ... 34.028308 34.028308
+4 768.0 31.597714 ... 34.028308 34.028308
-5 896.0 37.971025 ... 40.140799 40.140799
+5 896.0 36.971791 ... 40.140799 39.025776
-6 1024.0 49.932191 ... 53.773130 52.428801
+6 1024.0 48.770977 ... 52.428801 52.428801
-7 1152.0 45.242181 ... 48.161033 47.396572
+7 1152.0 43.911529 ... 46.656000 46.656000
-8 1280.0 51.200001 ... 57.690139 57.690139
+8 1280.0 49.951220 ... 56.888887 56.109587
-9 1408.0 64.138541 ... 69.009825 68.147202
+9 1408.0 62.664092 ... 67.305878 66.485074
-10 1536.0 80.430545 ... 80.430545 79.526831
+10 1536.0 78.643199 ... 78.643199 77.778988
-11 1664.0 62.929456 ... 62.929456 62.929456
+11 1664.0 61.636381 ... 62.061463 61.636381
-12 1792.0 72.983276 ... 63.142831 63.142831
+12 1792.0 71.588687 ... 67.707374 67.707374
-13 1920.0 68.776119 ... 71.257735 71.257735
+13 1920.0 67.764707 ... 70.172588 69.818184
-14 2048.0 73.584279 ... 78.398206 78.033565
+14 2048.0 72.005219 ... 76.608294 76.260072
-15 2176.0 82.813365 ... 86.739860 86.367588
+15 2176.0 81.803444 ... 85.632545 85.269692
-16 2304.0 68.251065 ... 77.558029 77.307030
+16 2304.0 67.100763 ... 76.563695 76.319081
-17 2432.0 71.125224 ... 75.726318 75.522751
+17 2432.0 69.886725 ... 74.323979 73.932798
-18 2560.0 77.283019 ... 82.331658 82.125311
+18 2560.0 76.382283 ... 80.908642 80.511054
-19 2688.0 83.552988 ... 91.185232 90.748936
+19 2688.0 82.642823 ... 89.676257 89.254248
-20 2816.0 79.443003 ... 83.792906 83.552120
+20 2816.0 82.602666 ... 82.916747 82.759409
-21 2944.0 82.237674 ... 84.040530 83.899046
+21 2944.0 81.166173 ... 82.509987 82.169877
-22 3072.0 81.589488 ... 85.598037 89.451983
+22 3072.0 81.005868 ... 88.197981 88.060814
-23 3200.0 84.432717 ... 88.520058 95.952022
+23 3200.0 83.660130 ... 95.522391 95.096582
-24 3328.0 81.854799 ... 85.703924 82.275764
+24 3328.0 82.369902 ... 84.795401 84.596116
-25 3456.0 80.220468 ... 92.244355 87.632137
+25 3456.0 81.026701 ... 91.304157 90.994998
-26 3584.0 83.101104 ... 91.284657 94.250936
+26 3584.0 86.374055 ... 98.699661 98.375705
-27 3712.0 85.675250 ... 82.902362 87.552452
+27 3712.0 84.874549 ... 88.876645 88.483034
-28 3840.0 81.079177 ... 87.493673 91.625518
+28 3840.0 84.036474 ... 92.545605 92.159996
-29 3968.0 87.035620 ... 90.388098 84.915752
+29 3968.0 92.232760 ... 91.062642 91.403695
-30 4096.0 91.366730 ... 83.208386 90.626421
+30 4096.0 93.206754 ... 93.012976 92.820009
 [31 rows x 5 columns]
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes 21.623 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes 41.514 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>

View File

@@ -372,7 +372,7 @@ to explore the <cite>triton/language/random</cite> folder!</p>
 <dd><p>Nitish Srivastava and Geoffrey Hinton and Alex Krizhevsky and Ilya Sutskever and Ruslan Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, JMLR 2014</p>
 </dd>
 </dl>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.011 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.411 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-04-low-memory-dropout-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/c9aed78977a4c05741d675a38dde3d7d/04-low-memory-dropout.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">04-low-memory-dropout.py</span></code></a></p>

View File

@@ -194,36 +194,36 @@ to download the full example code</p>
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>layer-norm-backward:
 N Triton Torch Apex
-0 1024.0 356.173905 97.912354 303.407414
+0 1024.0 356.173905 96.755900 299.707322
-1 1536.0 405.098894 135.032961 341.333333
+1 1536.0 400.695643 133.083026 338.201833
-2 2048.0 486.653476 161.684218 336.657521
+2 2048.0 486.653476 159.584422 321.254900
-3 2560.0 461.954908 182.857144 330.322572
+3 2560.0 455.111129 179.649115 323.368411
-4 3072.0 519.211251 190.511624 319.168834
+4 3072.0 508.468972 191.005181 320.556515
-5 3584.0 554.941930 207.267476 308.301075
+5 3584.0 551.384634 206.769233 310.527060
-6 4096.0 564.965515 219.919464 297.890900
+6 4096.0 558.545450 219.919464 297.890900
-7 4608.0 500.416301 232.825259 287.999990
+7 4608.0 493.714279 231.364016 285.767436
-8 5120.0 527.381977 241.889751 285.104413
+8 5120.0 518.481012 241.414550 283.133649
-9 5632.0 538.517949 242.236559 288.820505
+9 5632.0 534.260858 241.803217 288.820505
-10 6144.0 546.133354 250.775512 288.000001
+10 6144.0 540.131844 247.409397 285.214712
-11 6656.0 532.479975 254.775119 284.748652
+11 6656.0 525.473708 255.182111 284.748652
-12 7168.0 505.976473 255.240352 279.272738
+12 7168.0 503.017523 259.084340 283.881181
-13 7680.0 485.052616 264.447629 281.404588
+13 7680.0 482.513091 262.190612 275.104486
-14 8192.0 464.794337 266.406514 282.077471
+14 8192.0 463.698115 265.686491 283.296835
-15 8704.0 415.300208 262.762264 280.397305
+15 8704.0 406.412440 266.789264 283.826081
-16 9216.0 428.651187 270.727053 287.065546
+16 9216.0 419.703988 270.727053 287.251954
-17 9728.0 437.213490 279.942444 287.527089
+17 9728.0 428.388977 279.942444 288.950501
-18 10240.0 442.810829 285.104413 289.469963
+18 10240.0 437.295395 285.104413 286.767793
-19 10752.0 425.821771 248.123076 292.571421
+19 10752.0 424.421071 245.292781 289.291486
-20 11264.0 427.071098 243.545956 282.778242
+20 11264.0 423.724120 244.426754 286.069848
-21 11776.0 423.724129 248.569911 287.511689
+21 11776.0 418.702211 247.915800 288.097854
-22 12288.0 417.131525 253.578674 294.323369
+22 12288.0 414.784810 253.578674 293.737063
-23 12800.0 412.903215 254.726371 292.015215
+23 12800.0 409.327110 252.631590 287.371378
-24 13312.0 408.030638 253.561895 292.036577
+24 13312.0 408.030638 252.360194 289.653667
-25 13824.0 404.112047 256.991469 291.799461
+25 13824.0 403.620451 256.197690 291.287079
-26 14336.0 393.215988 252.988236 284.585606
+26 14336.0 394.116833 253.921779 286.242939
-27 14848.0 381.533186 256.552919 288.077610
+27 14848.0 384.829370 257.108233 289.246765
-28 15360.0 377.318326 260.338991 290.267715
+28 15360.0 380.041240 257.071134 284.884090
-29 15872.0 369.832994 262.708969 290.341468
+29 15872.0 370.913333 260.731015 289.679087
 </pre></div>
 </div>
 <div class="line-block">
@@ -487,7 +487,7 @@ to download the full example code</p>
 <span class="n">bench_layer_norm</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">save_path</span><span class="o">=</span><span class="s1">&#39;.&#39;</span><span class="p">,</span> <span class="n">print_data</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 14.604 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 16.519 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-05-layer-norm-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/935c0dd0fbeb4b2e69588471cbb2d4b2/05-layer-norm.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">05-layer-norm.py</span></code></a></p>

View File

@@ -174,7 +174,7 @@
 <div class="section" id="computation-times">
 <span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline"></a></h1>
-<p><strong>13:52.274</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
+<p><strong>14:22.730</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -183,23 +183,23 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
-<td><p>06:21.623</p></td>
+<td><p>06:41.514</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
-<td><p>03:29.190</p></td>
+<td><p>03:32.198</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="05-layer-norm.html#sphx-glr-getting-started-tutorials-05-layer-norm-py"><span class="std std-ref">Layer Normalization</span></a> (<code class="docutils literal notranslate"><span class="pre">05-layer-norm.py</span></code>)</p></td>
-<td><p>02:14.604</p></td>
+<td><p>02:16.519</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>
-<td><p>01:46.845</p></td>
+<td><p>01:52.088</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
-<td><p>00:00.011</p></td>
+<td><p>00:00.411</p></td>
<td><p>0.0 MB</p></td>
</tr>
</tbody>

File diff suppressed because one or more lines are too long


@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 12adf5803d2b9c3841a7e10c7915f089
+config: 3ac44b8ff916995b2d2faa9a97dba090
tags: 645f666f9bcd5a90fca523b33c5a78b7

Binary file not shown.

Binary file not shown.

Some files were not shown because too many files have changed in this diff.