[GH-PAGES] Updated website
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 42c97a67bae5a482aaf67bfed4cafa6c
+config: 61bcdc44f0c1cd26741c67ba040ea73c
 tags: 645f666f9bcd5a90fca523b33c5a78b7
[8 image files replaced; sizes before → after: 23→24 KiB, 15→15 KiB, 36→36 KiB, 23→23 KiB, 60→59 KiB, 34→34 KiB, 36→36 KiB, 22→22 KiB]
@@ -235,10 +235,10 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 0 4096.0 9.600000 9.600000
 1 8192.0 19.200000 19.200000
 2 16384.0 38.400001 38.400001
-3 32768.0 63.999998 76.800002
+3 32768.0 63.999998 63.999998
 4 65536.0 127.999995 127.999995
 5 131072.0 219.428568 219.428568
-6 262144.0 341.333321 341.333321
+6 262144.0 384.000001 384.000001
 7 524288.0 472.615390 472.615390
 8 1048576.0 614.400016 614.400016
 9 2097152.0 722.823517 722.823517
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 1 minutes 34.873 seconds)
+**Total running time of the script:** ( 1 minutes 50.221 seconds)


 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
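For context, the two GB/s columns above are produced by a `triton.testing.perf_report`-decorated benchmark in 01-vector-add.py. A minimal sketch of such a benchmark, assuming the `triton.testing` API of the Triton version these pages were built with (reconstructed from the tutorial, not copied from this diff):

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x, y):
        out = torch.empty_like(x)
        grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
        add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
        return out

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                       # x-axis: number of elements
            x_vals=[2 ** i for i in range(12, 28)],
            x_log=True,
            line_arg='provider',
            line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        if provider == 'torch':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: x + y)
        else:
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: add(x, y))
        # 2 reads + 1 write per element, hence the factor of 3
        gbps = lambda ms: 3 * x.numel() * x.element_size() * 1e-9 / (ms * 1e-3)
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    benchmark.run(print_data=True, show_plots=False)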
@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t

 softmax-performance:
 N Triton Torch (native) Torch (jit)
-0 256.0 546.133347 546.133347 188.321838
-1 384.0 614.400016 585.142862 151.703707
+0 256.0 546.133347 546.133347 190.511628
+1 384.0 614.400016 585.142862 153.600004
 2 512.0 655.360017 606.814814 154.566038
 3 640.0 706.206879 640.000002 160.000000
-4 768.0 722.823517 664.216187 163.839992
+4 768.0 722.823517 664.216187 162.754967
 .. ... ... ... ...
-93 12160.0 812.359066 405.755985 198.834951
-94 12288.0 812.429770 415.661740 199.096718
-95 12416.0 812.498981 411.722274 198.755369
-96 12544.0 810.925276 413.396498 199.012395
-97 12672.0 812.633240 412.097543 199.167004
+93 12160.0 812.359066 406.179533 198.530610
+94 12288.0 812.429770 416.101597 198.895304
+95 12416.0 812.498981 412.149375 198.556711
+96 12544.0 810.925276 412.971190 198.716830
+97 12672.0 812.633240 412.097543 198.776477

 [98 rows x 4 columns]

@@ -306,7 +306,7 @@ In the above plot, we can see that:

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 3 minutes 27.507 seconds)
+**Total running time of the script:** ( 3 minutes 32.174 seconds)


 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
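The three throughput columns above correspond to the Triton kernel, native `torch.softmax`, and a naive row-wise softmax compiled with `torch.jit.script`. A sketch of roughly what that naive reference looks like (assumed from the tutorial, not shown in this diff); every line materializes an intermediate tensor, which is why the fused kernel wins on bandwidth:

    import torch

    @torch.jit.script
    def naive_softmax(x):
        # x: (M, N). Each step reads and writes the whole matrix, so this
        # version moves several times more memory than a single fused pass.
        x_max = x.max(dim=1)[0]           # row-wise max, for numerical stability
        z = x - x_max[:, None]
        numerator = torch.exp(z)
        denominator = numerator.sum(dim=1)
        return numerator / denominator[:, None]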
@@ -459,37 +459,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we

 matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
-0 256.0 2.730667 ... 3.276800 3.276800
-1 384.0 7.372800 ... 7.899428 7.899428
+0 256.0 2.730667 ... 2.978909 3.276800
+1 384.0 7.372800 ... 8.507077 8.507077
 2 512.0 14.563555 ... 16.384000 16.384000
 3 640.0 22.260869 ... 24.380953 24.380953
 4 768.0 32.768000 ... 35.389441 34.028308
 5 896.0 39.025776 ... 40.140799 39.025776
 6 1024.0 51.150050 ... 53.773130 52.428801
-7 1152.0 45.242181 ... 47.396572 47.396572
+7 1152.0 45.242181 ... 48.161033 47.396572
 8 1280.0 51.200001 ... 57.690139 57.690139
-9 1408.0 64.138541 ... 68.147202 67.305878
+9 1408.0 64.138541 ... 69.009825 67.305878
 10 1536.0 80.430545 ... 81.355034 79.526831
-11 1664.0 62.492442 ... 63.372618 62.492442
+11 1664.0 62.929456 ... 63.372618 62.492442
 12 1792.0 72.512412 ... 73.460287 59.467852
 13 1920.0 69.120002 ... 71.626943 71.257735
-14 2048.0 73.584279 ... 78.398206 77.314362
-15 2176.0 83.500614 ... 87.876193 86.367588
-16 2304.0 68.251065 ... 78.064941 77.307030
-17 2432.0 71.305746 ... 86.179335 85.653855
-18 2560.0 77.833728 ... 82.331658 81.920002
-19 2688.0 83.552988 ... 91.404957 89.464755
-20 2816.0 79.879498 ... 84.360174 83.873477
-21 2944.0 82.237674 ... 81.431424 83.337844
-22 3072.0 81.589488 ... 89.030036 88.612060
-23 3200.0 84.993363 ... 96.822991 95.808380
-24 3328.0 82.891535 ... 85.602017 84.101981
-25 3456.0 80.300370 ... 91.771848 86.596744
-26 3584.0 85.633710 ... 90.458141 95.451583
-27 3712.0 83.247783 ... 86.829501 87.552452
-28 3840.0 81.019778 ... 88.971840 91.853823
-29 3968.0 85.753071 ... 85.600795 89.988156
-30 4096.0 88.651075 ... 88.768339 89.299883
+14 2048.0 73.908442 ... 78.398206 77.314362
+15 2176.0 83.155572 ... 87.876193 85.998493
+16 2304.0 68.446623 ... 78.064941 77.307030
+17 2432.0 71.305746 ... 86.711310 85.393507
+18 2560.0 77.833728 ... 82.125311 81.512437
+19 2688.0 83.922689 ... 90.316801 90.316801
+20 2816.0 79.733474 ... 84.360174 83.074685
+21 2944.0 81.967162 ... 83.198715 82.373605
+22 3072.0 82.241254 ... 89.593522 88.750943
+23 3200.0 84.544253 ... 96.969694 95.238096
+24 3328.0 83.034941 ... 85.398926 84.298943
+25 3456.0 82.015834 ... 91.511426 90.790053
+26 3584.0 85.715344 ... 94.349836 97.734120
+27 3712.0 85.455380 ... 86.940496 87.284705
+28 3840.0 83.120631 ... 92.778524 85.629632
+29 3968.0 90.791620 ... 86.266483 90.054568
+30 4096.0 91.678778 ... 92.852115 88.330190

 [31 rows x 5 columns]

@@ -499,7 +499,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 6 minutes 20.627 seconds)
+**Total running time of the script:** ( 7 minutes 4.446 seconds)


 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
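Unlike the GB/s tables above, the matmul numbers are compute throughput (TFLOPS) for square problems M = N = K, derived from the measured kernel time. The 2·M·N·K flop count is the standard one for matmul; the helper below is illustrative, not the tutorial's exact code:

    def matmul_tflops(M: int, N: int, K: int, ms: float) -> float:
        # A matmul performs M*N*K multiply-accumulates = 2*M*N*K flops.
        return 2 * M * N * K * 1e-12 / (ms * 1e-3)

    # Sanity check against row 30 of the new table: M = N = K = 4096 at
    # ~92.85 TFLOPS implies a kernel time of about 1.48 ms.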
@@ -240,7 +240,7 @@ References

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.012 seconds)
+**Total running time of the script:** ( 0 minutes 0.284 seconds)


 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:
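The subject of this file, low-memory dropout, relies on `tl.rand` being a counter-based (Philox) generator: the dropout mask can be regenerated from a `(seed, offset)` pair instead of being stored for the backward pass. A minimal sketch of such a kernel, close to but not copied from the tutorial:

    import triton
    import triton.language as tl

    @triton.jit
    def seeded_dropout_kernel(x_ptr, out_ptr, n_elements, p, seed,
                              BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        # tl.rand is deterministic in (seed, offset): no mask tensor needs to
        # be kept around, it can simply be regenerated with the same seed.
        random = tl.rand(seed, offsets)
        keep = random > p
        output = tl.where(keep, x / (1 - p), 0.0)
        tl.store(out_ptr + offsets, output, mask=mask)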
@@ -40,34 +40,34 @@ Layer Normalization
 N Triton Torch Apex
 0 1024.0 585.142849 277.694907 481.882344
 1 1536.0 630.153868 323.368435 511.999982
-2 2048.0 668.734716 337.814445 520.126988
-3 2560.0 694.237267 362.477870 518.481028
-4 3072.0 702.171410 375.206126 501.551037
+2 2048.0 668.734716 334.367358 520.126988
+3 2560.0 694.237267 365.714281 512.000013
+4 3072.0 712.347810 378.092307 501.551037
 5 3584.0 725.873439 384.859062 455.111115
-6 4096.0 728.177767 383.251446 451.972420
-7 4608.0 670.254540 396.387087 421.302872
+6 4096.0 728.177767 381.023256 455.111095
+7 4608.0 670.254540 396.387087 428.651163
 8 5120.0 688.403381 397.669909 422.268057
-9 5632.0 704.000002 398.725657 413.357796
-10 6144.0 702.171410 402.885254 411.313806
-11 6656.0 700.631610 400.360920 400.360920
-12 7168.0 690.891575 383.571898 381.023265
-13 7680.0 678.895043 392.587863 386.415087
-14 8192.0 636.271854 390.095241 375.564460
-15 8704.0 627.315309 392.292962 380.502740
-16 9216.0 609.322328 403.989025 381.023249
-17 9728.0 587.350922 408.524944 382.427505
-18 10240.0 566.920437 408.578556 382.803739
-19 10752.0 547.872604 412.546760 379.761601
-20 11264.0 531.634232 396.096702 369.311483
-21 11776.0 521.927959 408.711507 378.345375
-22 12288.0 514.007840 413.911572 383.251457
-23 12800.0 504.433489 410.420828 377.163903
-24 13312.0 494.180982 404.159395 376.310952
-25 13824.0 481.882350 409.600016 378.739711
-26 14336.0 471.967074 400.307157 369.961287
-27 14848.0 461.297068 404.027214 375.304904
+9 5632.0 704.000002 396.969169 417.185184
+10 6144.0 697.191505 402.885254 411.313806
+11 6656.0 705.271522 400.360920 400.360920
+12 7168.0 690.891575 396.844306 387.459443
+13 7680.0 678.895043 392.587863 387.634072
+14 8192.0 633.198054 393.609605 371.308771
+15 8704.0 627.315309 389.005597 380.502740
+16 9216.0 606.814809 407.337026 383.999986
+17 9728.0 587.350922 409.599987 383.369452
+18 10240.0 564.965524 408.578556 382.803739
+19 10752.0 547.872604 411.559798 381.445676
+20 11264.0 533.207081 406.826188 373.134567
+21 11776.0 520.486200 409.599991 377.587162
+22 12288.0 514.680630 414.784810 383.251457
+23 12800.0 504.433489 410.420828 376.470582
+24 13312.0 494.180982 405.699062 376.310952
+25 13824.0 482.934503 411.888257 379.389355
+26 14336.0 471.967074 406.695045 374.185964
+27 14848.0 461.297068 408.192434 375.304904
 28 15360.0 454.269882 406.214870 378.092307
-29 15872.0 447.098578 408.940410 376.783377
+29 15872.0 447.098578 406.974373 376.225175



@@ -393,7 +393,7 @@ Layer Normalization

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 5 minutes 29.729 seconds)
+**Total running time of the script:** ( 5 minutes 24.707 seconds)


 .. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:
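The columns above give memory bandwidth for the Triton, native PyTorch, and NVIDIA Apex implementations of layer norm over rows of width N. The operation being benchmarked is ordinary layer normalization, y = (x − mean) / sqrt(var + eps) · weight + bias per row; a sketch of the PyTorch reference the kernel is checked against, with illustrative shapes:

    import torch

    M, N = 4096, 8192                   # rows x row-width; N is the table's x-axis
    x = torch.randn(M, N, device='cuda')
    weight = torch.rand(N, device='cuda')
    bias = torch.rand(N, device='cuda')
    eps = 1e-5

    # Reference implementation a fused layer-norm kernel must match:
    y_ref = torch.nn.functional.layer_norm(x, (N,), weight, bias, eps)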
@@ -385,7 +385,7 @@ This is a Triton implementation of the Flash Attention algorithm

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.072 seconds)
+**Total running time of the script:** ( 0 minutes 0.073 seconds)


 .. _sphx_glr_download_getting-started_tutorials_06-fused-attention.py:
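For reference, the fused-attention script implements scaled dot-product attention; Flash Attention computes the same quantity without ever materializing the full attention matrix:

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V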
@@ -152,7 +152,7 @@ We can also customize the libdevice library path by passing the path to the `lib

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.010 seconds)
+**Total running time of the script:** ( 0 minutes 0.267 seconds)


 .. _sphx_glr_download_getting-started_tutorials_07-libdevice-function.py:
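The hunk header above mentions overriding the libdevice library path. A sketch of how the 07-libdevice-function tutorial of this era called a libdevice math function and passed an explicit path; the bitcode path is the conventional CUDA location and the kernel name is illustrative, neither is recorded in this diff:

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def asin_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.libdevice.asin(x)            # resolved against libdevice bitcode
        tl.store(y_ptr + offsets, y, mask=mask)

    x = torch.rand(98432, device='cuda')
    y = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
    # libdevice is normally located automatically; an explicit path can be given:
    extern_libs = {'libdevice': '/usr/local/cuda/nvvm/libdevice/libdevice.10.bc'}
    asin_kernel[grid](x, y, x.numel(), BLOCK_SIZE=1024, extern_libs=extern_libs)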
@@ -5,20 +5,20 @@

 Computation times
 =================
-**16:52.830** total execution time for **getting-started_tutorials** files:
+**17:52.172** total execution time for **getting-started_tutorials** files:

 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:20.627 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 07:04.446 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:29.729 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:24.707 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:27.507 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:32.174 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:34.873 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:50.221 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``) | 00:00.072 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.284 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.012 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``) | 00:00.267 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``) | 00:00.010 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``) | 00:00.073 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
@@ -327,10 +327,10 @@ for different problem sizes.</p>
 0 4096.0 9.600000 9.600000
 1 8192.0 19.200000 19.200000
 2 16384.0 38.400001 38.400001
-3 32768.0 63.999998 76.800002
+3 32768.0 63.999998 63.999998
 4 65536.0 127.999995 127.999995
 5 131072.0 219.428568 219.428568
-6 262144.0 341.333321 341.333321
+6 262144.0 384.000001 384.000001
 7 524288.0 472.615390 472.615390
 8 1048576.0 614.400016 614.400016
 9 2097152.0 722.823517 722.823517
@@ -342,7 +342,7 @@ for different problem sizes.</p>
 15 134217728.0 849.737435 850.656574
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 34.873 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 50.221 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>
@@ -371,17 +371,17 @@ We will then compare its performance against (1) <code class="code docutils lite
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
 N Triton Torch (native) Torch (jit)
-0 256.0 546.133347 546.133347 188.321838
-1 384.0 614.400016 585.142862 151.703707
+0 256.0 546.133347 546.133347 190.511628
+1 384.0 614.400016 585.142862 153.600004
 2 512.0 655.360017 606.814814 154.566038
 3 640.0 706.206879 640.000002 160.000000
-4 768.0 722.823517 664.216187 163.839992
+4 768.0 722.823517 664.216187 162.754967
 .. ... ... ... ...
-93 12160.0 812.359066 405.755985 198.834951
-94 12288.0 812.429770 415.661740 199.096718
-95 12416.0 812.498981 411.722274 198.755369
-96 12544.0 810.925276 413.396498 199.012395
-97 12672.0 812.633240 412.097543 199.167004
+93 12160.0 812.359066 406.179533 198.530610
+94 12288.0 812.429770 416.101597 198.895304
+95 12416.0 812.498981 412.149375 198.556711
+96 12544.0 810.925276 412.971190 198.716830
+97 12672.0 812.633240 412.097543 198.776477

 [98 rows x 4 columns]
 </pre></div>
@@ -394,7 +394,7 @@ We will then compare its performance against (1) <code class="code docutils lite
 Note however that the PyTorch <cite>softmax</cite> operation is more general and works on tensors of any shape.</p></li>
 </ul>
 </div></blockquote>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 27.507 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 32.174 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>
@@ -567,42 +567,42 @@ torch_output=tensor([[ 1.1045, -36.9688, 31.4688, ..., -11.3906, 24.4531, -3
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
-0 256.0 2.730667 ... 3.276800 3.276800
-1 384.0 7.372800 ... 7.899428 7.899428
+0 256.0 2.730667 ... 2.978909 3.276800
+1 384.0 7.372800 ... 8.507077 8.507077
 2 512.0 14.563555 ... 16.384000 16.384000
 3 640.0 22.260869 ... 24.380953 24.380953
 4 768.0 32.768000 ... 35.389441 34.028308
 5 896.0 39.025776 ... 40.140799 39.025776
 6 1024.0 51.150050 ... 53.773130 52.428801
-7 1152.0 45.242181 ... 47.396572 47.396572
+7 1152.0 45.242181 ... 48.161033 47.396572
 8 1280.0 51.200001 ... 57.690139 57.690139
-9 1408.0 64.138541 ... 68.147202 67.305878
+9 1408.0 64.138541 ... 69.009825 67.305878
 10 1536.0 80.430545 ... 81.355034 79.526831
-11 1664.0 62.492442 ... 63.372618 62.492442
+11 1664.0 62.929456 ... 63.372618 62.492442
 12 1792.0 72.512412 ... 73.460287 59.467852
 13 1920.0 69.120002 ... 71.626943 71.257735
-14 2048.0 73.584279 ... 78.398206 77.314362
-15 2176.0 83.500614 ... 87.876193 86.367588
-16 2304.0 68.251065 ... 78.064941 77.307030
-17 2432.0 71.305746 ... 86.179335 85.653855
-18 2560.0 77.833728 ... 82.331658 81.920002
-19 2688.0 83.552988 ... 91.404957 89.464755
-20 2816.0 79.879498 ... 84.360174 83.873477
-21 2944.0 82.237674 ... 81.431424 83.337844
-22 3072.0 81.589488 ... 89.030036 88.612060
-23 3200.0 84.993363 ... 96.822991 95.808380
-24 3328.0 82.891535 ... 85.602017 84.101981
-25 3456.0 80.300370 ... 91.771848 86.596744
-26 3584.0 85.633710 ... 90.458141 95.451583
-27 3712.0 83.247783 ... 86.829501 87.552452
-28 3840.0 81.019778 ... 88.971840 91.853823
-29 3968.0 85.753071 ... 85.600795 89.988156
-30 4096.0 88.651075 ... 88.768339 89.299883
+14 2048.0 73.908442 ... 78.398206 77.314362
+15 2176.0 83.155572 ... 87.876193 85.998493
+16 2304.0 68.446623 ... 78.064941 77.307030
+17 2432.0 71.305746 ... 86.711310 85.393507
+18 2560.0 77.833728 ... 82.125311 81.512437
+19 2688.0 83.922689 ... 90.316801 90.316801
+20 2816.0 79.733474 ... 84.360174 83.074685
+21 2944.0 81.967162 ... 83.198715 82.373605
+22 3072.0 82.241254 ... 89.593522 88.750943
+23 3200.0 84.544253 ... 96.969694 95.238096
+24 3328.0 83.034941 ... 85.398926 84.298943
+25 3456.0 82.015834 ... 91.511426 90.790053
+26 3584.0 85.715344 ... 94.349836 97.734120
+27 3712.0 85.455380 ... 86.940496 87.284705
+28 3840.0 83.120631 ... 92.778524 85.629632
+29 3968.0 90.791620 ... 86.266483 90.054568
+30 4096.0 91.678778 ... 92.852115 88.330190

 [31 rows x 5 columns]
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes 20.627 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 7 minutes 4.446 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>
@@ -374,7 +374,7 @@ to explore the <cite>triton/language/random</cite> folder!</p>
 <dd><p>Nitish Srivastava and Geoffrey Hinton and Alex Krizhevsky and Ilya Sutskever and Ruslan Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, JMLR 2014</p>
 </dd>
 </dl>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.012 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.284 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-04-low-memory-dropout-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/c9aed78977a4c05741d675a38dde3d7d/04-low-memory-dropout.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">04-low-memory-dropout.py</span></code></a></p>
@@ -198,34 +198,34 @@ to download the full example code</p>
 N Triton Torch Apex
 0 1024.0 585.142849 277.694907 481.882344
 1 1536.0 630.153868 323.368435 511.999982
-2 2048.0 668.734716 337.814445 520.126988
-3 2560.0 694.237267 362.477870 518.481028
-4 3072.0 702.171410 375.206126 501.551037
+2 2048.0 668.734716 334.367358 520.126988
+3 2560.0 694.237267 365.714281 512.000013
+4 3072.0 712.347810 378.092307 501.551037
 5 3584.0 725.873439 384.859062 455.111115
-6 4096.0 728.177767 383.251446 451.972420
-7 4608.0 670.254540 396.387087 421.302872
+6 4096.0 728.177767 381.023256 455.111095
+7 4608.0 670.254540 396.387087 428.651163
 8 5120.0 688.403381 397.669909 422.268057
-9 5632.0 704.000002 398.725657 413.357796
-10 6144.0 702.171410 402.885254 411.313806
-11 6656.0 700.631610 400.360920 400.360920
-12 7168.0 690.891575 383.571898 381.023265
-13 7680.0 678.895043 392.587863 386.415087
-14 8192.0 636.271854 390.095241 375.564460
-15 8704.0 627.315309 392.292962 380.502740
-16 9216.0 609.322328 403.989025 381.023249
-17 9728.0 587.350922 408.524944 382.427505
-18 10240.0 566.920437 408.578556 382.803739
-19 10752.0 547.872604 412.546760 379.761601
-20 11264.0 531.634232 396.096702 369.311483
-21 11776.0 521.927959 408.711507 378.345375
-22 12288.0 514.007840 413.911572 383.251457
-23 12800.0 504.433489 410.420828 377.163903
-24 13312.0 494.180982 404.159395 376.310952
-25 13824.0 481.882350 409.600016 378.739711
-26 14336.0 471.967074 400.307157 369.961287
-27 14848.0 461.297068 404.027214 375.304904
+9 5632.0 704.000002 396.969169 417.185184
+10 6144.0 697.191505 402.885254 411.313806
+11 6656.0 705.271522 400.360920 400.360920
+12 7168.0 690.891575 396.844306 387.459443
+13 7680.0 678.895043 392.587863 387.634072
+14 8192.0 633.198054 393.609605 371.308771
+15 8704.0 627.315309 389.005597 380.502740
+16 9216.0 606.814809 407.337026 383.999986
+17 9728.0 587.350922 409.599987 383.369452
+18 10240.0 564.965524 408.578556 382.803739
+19 10752.0 547.872604 411.559798 381.445676
+20 11264.0 533.207081 406.826188 373.134567
+21 11776.0 520.486200 409.599991 377.587162
+22 12288.0 514.680630 414.784810 383.251457
+23 12800.0 504.433489 410.420828 376.470582
+24 13312.0 494.180982 405.699062 376.310952
+25 13824.0 482.934503 411.888257 379.389355
+26 14336.0 471.967074 406.695045 374.185964
+27 14848.0 461.297068 408.192434 375.304904
 28 15360.0 454.269882 406.214870 378.092307
-29 15872.0 447.098578 408.940410 376.783377
+29 15872.0 447.098578 406.974373 376.225175
 </pre></div>
 </div>
 <div class="line-block">
@@ -543,7 +543,7 @@ to download the full example code</p>
 <span class="n">bench_layer_norm</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">save_path</span><span class="o">=</span><span class="s1">'.'</span><span class="p">,</span> <span class="n">print_data</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 29.729 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 24.707 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-05-layer-norm-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/935c0dd0fbeb4b2e69588471cbb2d4b2/05-layer-norm.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">05-layer-norm.py</span></code></a></p>
@@ -543,7 +543,7 @@ to download the full example code</p>
 <span class="c1"># bench_flash_attention.run(save_path='.', print_data=True)</span>
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.072 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.073 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-06-fused-attention-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/54a35f6ec55f9746935b9566fb6bb1df/06-fused-attention.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">06-fused-attention.py</span></code></a></p>
@@ -276,7 +276,7 @@ tensor([0.4105, 0.5430, 0.0249, ..., 0.0424, 0.5351, 0.8149], device='cuda:
 The maximum difference between torch and triton is 2.384185791015625e-07
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.010 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.267 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-07-libdevice-function-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/3ff29f967ace7985da24aab10352fc76/07-libdevice-function.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">07-libdevice-function.py</span></code></a></p>
@@ -174,7 +174,7 @@

 <div class="section" id="computation-times">
 <span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>16:52.830</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
+<p><strong>17:52.172</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -183,31 +183,31 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
-<td><p>06:20.627</p></td>
+<td><p>07:04.446</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="05-layer-norm.html#sphx-glr-getting-started-tutorials-05-layer-norm-py"><span class="std std-ref">Layer Normalization</span></a> (<code class="docutils literal notranslate"><span class="pre">05-layer-norm.py</span></code>)</p></td>
-<td><p>05:29.729</p></td>
+<td><p>05:24.707</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
-<td><p>03:27.507</p></td>
+<td><p>03:32.174</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>
-<td><p>01:34.873</p></td>
+<td><p>01:50.221</p></td>
+<td><p>0.0 MB</p></td>
+</tr>
+<tr class="row-odd"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
+<td><p>00:00.284</p></td>
+<td><p>0.0 MB</p></td>
+</tr>
+<tr class="row-even"><td><p><a class="reference internal" href="07-libdevice-function.html#sphx-glr-getting-started-tutorials-07-libdevice-function-py"><span class="std std-ref">Libdevice function</span></a> (<code class="docutils literal notranslate"><span class="pre">07-libdevice-function.py</span></code>)</p></td>
+<td><p>00:00.267</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="06-fused-attention.html#sphx-glr-getting-started-tutorials-06-fused-attention-py"><span class="std std-ref">Fused Attention</span></a> (<code class="docutils literal notranslate"><span class="pre">06-fused-attention.py</span></code>)</p></td>
-<td><p>00:00.072</p></td>
-<td><p>0.0 MB</p></td>
-</tr>
-<tr class="row-even"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
-<td><p>00:00.012</p></td>
-<td><p>0.0 MB</p></td>
-</tr>
-<tr class="row-odd"><td><p><a class="reference internal" href="07-libdevice-function.html#sphx-glr-getting-started-tutorials-07-libdevice-function-py"><span class="std std-ref">Libdevice function</span></a> (<code class="docutils literal notranslate"><span class="pre">07-libdevice-function.py</span></code>)</p></td>
-<td><p>00:00.010</p></td>
+<td><p>00:00.073</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: c3bdabfe41889716875c97ac2546912d
+config: c191031d10f4b25c1f1c2b20f4e51faa
 tags: 645f666f9bcd5a90fca523b33c5a78b7