[GH-PAGES] Updated website
@@ -233,19 +233,19 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 
 vector-add-performance:
             size       Triton        Torch
 0         4096.0     9.600000     9.600000
-1         8192.0    19.200000    15.999999
+1         8192.0    19.200000    19.200000
 2        16384.0    38.400001    38.400001
 3        32768.0    76.800002    76.800002
 4        65536.0   127.999995   127.999995
 5       131072.0   219.428568   219.428568
-6       262144.0   384.000001   384.000001
+6       262144.0   341.333321   384.000001
 7       524288.0   472.615390   472.615390
 8      1048576.0   614.400016   614.400016
 9      2097152.0   722.823517   722.823517
 10     4194304.0   780.190482   780.190482
 11     8388608.0   812.429770   812.429770
 12    16777216.0   833.084721   833.084721
-13    33554432.0   842.004273   842.004273
+13    33554432.0   842.004273   843.811163
 14    67108864.0   847.448255   848.362445
 15   134217728.0   849.737435   850.656574
 
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes 41.831 seconds)
+   **Total running time of the script:** ( 1 minutes 48.722 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
 
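The table above is the printout of the tutorial's ``triton.testing.perf_report`` benchmark. A minimal sketch of that benchmark, assuming the ``add(x, y)`` wrapper defined earlier in the tutorial and the testing API of the Triton version this site documents (newer releases return a single float from ``do_bench`` unless quantiles are requested)::

    import torch
    import triton

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                        # swept argument (x-axis)
            x_vals=[2 ** i for i in range(12, 28)],  # 4096 .. 134217728, as in the table
            x_log=True,
            line_arg='provider',
            line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        if provider == 'torch':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: x + y)
        else:
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: add(x, y))
        # two loads + one store of float32: 3 * 4 = 12 bytes per element
        gbps = lambda ms: 12 * size / ms * 1e-6
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    benchmark.run(print_data=True, show_plots=True)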
@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 
 
 softmax-performance:
           N      Triton  Torch (native)  Torch (jit)
-0      256.0  546.133347      546.133347   190.511628
-1      384.0  614.400016      585.142862   151.703707
-2      512.0  655.360017      606.814814   156.038096
+0      256.0  512.000001      546.133347   186.181817
+1      384.0  614.400016      585.142862   153.600004
+2      512.0  655.360017      585.142849   154.566038
 3      640.0  706.206879      640.000002   160.000000
 4      768.0  722.823517      664.216187   163.839992
 ..       ...         ...             ...          ...
-93   12160.0  812.359066      405.333344   199.038365
-94   12288.0  812.429770      415.222812   199.197579
-95   12416.0  812.498981      411.296057   198.805107
-96   12544.0  811.745227      412.971190   199.012395
-97   12672.0  811.007961      412.097543   199.167004
+93   12160.0  812.359066      405.755985   198.936606
+94   12288.0  812.429770      415.661740   199.197579
+95   12416.0  812.498981      411.722274   198.755369
+96   12544.0  810.925276      412.971190   199.012395
+97   12672.0  811.007961      412.097543   199.069228
 
 [98 rows x 4 columns]
 
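The ``Torch (jit)`` baseline above is a naive, non-fused softmax in the spirit of the sketch below; every intermediate tensor round-trips through DRAM, which is why the fused Triton kernel dominates it at large ``N``::

    import torch

    # A sketch of the naive baseline; the tutorial applies @torch.jit.script
    # to a function of this shape.
    @torch.jit.script
    def naive_softmax(x):
        # subtract the row-wise max for numerical stability
        x_max = x.max(dim=1)[0]
        z = x - x_max[:, None]
        numerator = torch.exp(z)
        denominator = numerator.sum(dim=1)
        # each intermediate above is materialized in DRAM, unlike the
        # fused kernel, which keeps the whole row in SRAM
        return numerator / denominator[:, None]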
@@ -306,7 +306,7 @@ In the above plot, we can see that:
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes 28.967 seconds)
+   **Total running time of the script:** ( 3 minutes 32.036 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
 
@@ -459,37 +459,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 
 
 matmul-performance:
          M     cuBLAS  ...     Triton  Triton (+ LeakyReLU)
-0     256.0   2.978909  ...   3.276800              3.276800
+0     256.0   2.730667  ...   2.978909              3.276800
 1     384.0   7.372800  ...   8.507077              8.507077
-2     512.0  14.563555  ...  16.384000             16.384000
+2     512.0  14.563555  ...  16.384000             15.420235
 3     640.0  22.260869  ...  24.380953             24.380953
 4     768.0  32.768000  ...  35.389441             34.028308
-5     896.0  39.025776  ...  40.140799             39.025776
-6    1024.0  49.932191  ...  53.773130             52.428801
+5     896.0  37.971025  ...  40.140799             39.025776
+6    1024.0  49.932191  ...  53.773130             53.092457
 7    1152.0  45.242181  ...  48.161033             48.161033
 8    1280.0  51.200001  ...  57.690139             57.690139
-9    1408.0  64.138541  ...  69.009825             68.147202
+9    1408.0  64.138541  ...  69.009825             67.305878
 10   1536.0  80.430545  ...  81.355034             79.526831
-11   1664.0  63.372618  ...  63.372618             62.492442
-12   1792.0  72.983276  ...  73.460287             59.467852
+11   1664.0  62.929456  ...  63.372618             62.492442
+12   1792.0  72.512412  ...  73.460287             59.467852
 13   1920.0  69.120002  ...  71.626943             71.257735
 14   2048.0  73.908442  ...  78.398206             77.314362
-15   2176.0  83.500614  ...  87.876193             86.367588
+15   2176.0  83.500614  ...  87.876193             85.998493
 16   2304.0  68.446623  ...  78.064941             77.057651
-17   2432.0  71.305746  ...  85.915795             83.366361
-18   2560.0  77.833728  ...  81.715711             81.310171
-19   2688.0  83.369354  ...  90.532356             90.102270
-20   2816.0  81.445766  ...  84.035084             83.873477
-21   2944.0  81.564701  ...  83.477440             82.921853
-22   3072.0  82.420822  ...  89.735509             88.750943
-23   3200.0  84.544253  ...  97.116842             95.096582
-24   3328.0  83.905938  ...  85.806075             83.808259
-25   3456.0  82.773682  ...  90.180725             90.687926
-26   3584.0  87.042978  ...  99.244365             98.268190
-27   3712.0  81.482335  ...  88.718781             88.248537
-28   3840.0  82.531346  ...  89.043476             89.912191
-29   3968.0  86.911637  ...  92.652949             84.094627
-30   4096.0  93.368854  ...  83.313299             89.928129
+17   2432.0  71.305746  ...  85.393507             75.522751
+18   2560.0  77.833728  ...  82.125311             80.908642
+19   2688.0  83.922689  ...  90.966561             89.464755
+20   2816.0  81.067298  ...  84.360174             83.873477
+21   2944.0  81.967162  ...  83.337844             81.832567
+22   3072.0  82.420822  ...  89.310890             88.473602
+23   3200.0  83.879425  ...  95.238096             87.671229
+24   3328.0  82.369902  ...  85.602017             81.994643
+25   3456.0  80.864158  ...  84.332184             90.281712
+26   3584.0  87.127323  ...  97.947050             97.205829
+27   3712.0  84.159518  ...  89.035062             87.552452
+28   3840.0  81.138664  ...  86.332554             90.279183
+29   3968.0  87.913500  ...  91.954739             86.358055
+30   4096.0  93.271527  ...  84.254693             82.418802
 
 [31 rows x 5 columns]
 
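The ``Triton (+ LeakyReLU)`` column above times the tutorial's matmul with a fused activation epilogue; throughput is in TFLOP/s under the usual ``2 * M * N * K`` flop count (our reading of the tutorial's convention, not stated in this diff). A minimal sketch of such an epilogue::

    import triton
    import triton.language as tl

    # Applied to the accumulator while it is still in registers, so the
    # fused activation adds no extra memory traffic.
    @triton.jit
    def leaky_relu(x):
        return tl.where(x >= 0, x, 0.01 * x)

    # Inside the matmul kernel, after the K-loop has filled `accumulator`
    # (ACTIVATION as a constexpr switch is an assumption in this sketch):
    #
    #     if ACTIVATION == "leaky_relu":
    #         accumulator = leaky_relu(accumulator)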
@@ -499,7 +499,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 6 minutes 37.798 seconds)
+   **Total running time of the script:** ( 7 minutes 19.813 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
 
@@ -240,7 +240,7 @@ References
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.012 seconds)
+   **Total running time of the script:** ( 0 minutes 0.281 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:
 
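The tutorial timed above builds dropout around a seeded random-number generator, so the keep-mask never has to be stored. A minimal sketch of the idea, not the generated source::

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def seeded_dropout_kernel(x_ptr, out_ptr, n_elements, p, seed,
                              BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        # the keep-mask is recomputed from (seed, offset) on the fly,
        # never materialized: that is what makes the dropout low-memory
        random = tl.rand(seed, offsets)
        keep = random > p
        out = tl.where(keep, x / (1 - p), 0.0)
        tl.store(out_ptr + offsets, out, mask=mask)

    x = torch.randn(98432, device='cuda')
    out = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
    seeded_dropout_kernel[grid](x, out, x.numel(), p=0.5, seed=123,
                                BLOCK_SIZE=1024)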
@@ -38,34 +38,34 @@ Layer Normalization
 
 
 layer-norm:
           N      Triton       Torch        Apex
-0    1024.0  585.142849  277.694907  468.114273
+0    1024.0  606.814814  277.694907  468.114273
 1    1536.0  630.153868  323.368435  511.999982
-2    2048.0  668.734716  337.814445  520.126988
-3    2560.0  694.237267  362.477870  518.481028
-4    3072.0  712.347810  378.092307  501.551037
-5    3584.0  725.873439  384.859062  458.751978
-6    4096.0  728.177767  383.251446  455.111095
-7    4608.0  670.254540  396.387087  423.724136
-8    5120.0  688.403381  395.748783  417.959197
-9    5632.0  704.000002  396.969169  413.357796
-10   6144.0  702.171410  404.543206  409.600010
+2    2048.0  682.666643  337.814445  520.126988
+3    2560.0  694.237267  362.477870  512.000013
+4    3072.0  712.347810  378.092307  506.721668
+5    3584.0  725.873439  384.859062  455.111115
+6    4096.0  728.177767  381.023256  451.972420
+7    4608.0  670.254540  396.387087  431.157877
+8    5120.0  688.403381  397.669909  422.268057
+9    5632.0  704.000002  395.228063  415.262685
+10   6144.0  702.171410  402.885254  409.600010
 11   6656.0  700.631610  400.360920  400.360920
-12   7168.0  690.891575  391.426634  387.459443
-13   7680.0  678.895043  392.587863  385.203746
-14   8192.0  636.271854  388.937680  368.179771
-15   8704.0  627.315309  387.922008  380.502740
-16   9216.0  606.814809  405.098894  382.010363
-17   9728.0  587.350922  409.599987  382.427505
-18  10240.0  566.920437  408.578556  382.803739
-19  10752.0  547.872604  411.559798  380.601764
-20  11264.0  533.207081  401.389743  372.363645
-21  11776.0  520.486200  408.711507  377.587162
-22  12288.0  516.031509  413.911572  383.251457
-23  12800.0  504.433489  410.420828  377.163903
-24  13312.0  494.180982  404.159395  375.647260
-25  13824.0  482.934503  410.359948  378.739711
-26  14336.0  472.940209  403.121247  374.185964
-27  14848.0  461.297068  405.406157  374.712936
+12   7168.0  690.891575  392.767108  382.293315
+13   7680.0  678.895043  393.846167  386.415087
+14   8192.0  633.198054  394.795186  376.643677
+15   8704.0  624.502255  389.005597  379.465939
+16   9216.0  606.814809  406.214877  382.010363
+17   9728.0  587.350922  408.524944  383.369452
+18  10240.0  564.965524  409.600010  382.803739
+19  10752.0  546.133312  411.559798  380.601764
+20  11264.0  532.419472  404.089694  371.595879
+21  11776.0  520.486200  409.599991  377.587162
+22  12288.0  513.336807  413.911572  383.251457
+23  12800.0  504.433489  409.599981  377.163903
+24  13312.0  494.180982  406.473303  377.645399
+25  13824.0  482.934503  412.656711  379.389355
+26  14336.0  471.967074  402.414053  370.558967
+27  14848.0  461.297068  406.794504  373.534584
 28  15360.0  454.269882  406.214870  377.511515
 29  15872.0  447.098578  407.627589  376.225175
 
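The columns above are effective bandwidths in GB/s. A sketch of how such a figure can be measured for the Torch baseline, timing with CUDA events and counting one read plus one write of the activation (both choices are our assumptions, mirroring but not copying the tutorial's benchmark)::

    import torch

    x = torch.randn(4096, 8192, device='cuda', dtype=torch.float16)
    w = torch.ones(8192, device='cuda', dtype=torch.float16)
    b = torch.zeros(8192, device='cuda', dtype=torch.float16)

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    for _ in range(10):                                  # warm-up
        torch.nn.functional.layer_norm(x, (8192,), w, b)
    start.record()
    for _ in range(100):
        torch.nn.functional.layer_norm(x, (8192,), w, b)
    end.record()
    torch.cuda.synchronize()
    ms = start.elapsed_time(end) / 100

    # one read + one write of x
    gbps = 2 * x.numel() * x.element_size() / (ms * 1e-3) / 1e9
    print(f'torch layer_norm: {gbps:.1f} GB/s')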
@@ -393,7 +393,7 @@ Layer Normalization
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 5 minutes 35.089 seconds)
+   **Total running time of the script:** ( 5 minutes 37.451 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:
 
@@ -385,7 +385,7 @@ This is a Triton implementation of the Flash Attention algorithm
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.072 seconds)
+   **Total running time of the script:** ( 0 minutes 0.073 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_06-fused-attention.py:
 
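The tutorial timed above implements Flash Attention, whose key ingredient is an online softmax: a running row-max :math:`m`, normalizer :math:`\ell`, and output accumulator are rescaled as each block of keys and values is consumed. In our notation (an assumption, not taken from the tutorial source), with :math:`S_j = q K_j^{\top}`:

.. math::

   m' &= \max\big(m,\ \operatorname{rowmax}(S_j)\big) \\
   \ell' &= e^{\,m - m'}\,\ell + \operatorname{rowsum}\big(e^{\,S_j - m'}\big) \\
   \mathrm{acc}' &= e^{\,m - m'}\,\mathrm{acc} + e^{\,S_j - m'}\,V_j

After the final block the output row is :math:`\mathrm{acc}/\ell`, identical to an exact softmax but computed without ever materializing the full attention matrix.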
@@ -152,7 +152,7 @@ We can also customize the libdevice library path by passing the path to the `lib
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.010 seconds)
+   **Total running time of the script:** ( 0 minutes 0.254 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_07-libdevice-function.py:
 
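The tutorial timed above calls CUDA libdevice math functions from inside a Triton kernel. A sketch in that spirit, using the ``tl.libdevice`` namespace and the ``extern_libs`` launch override of the Triton version this site documents (the bitcode path below is a placeholder, not a real location)::

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def asin_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.libdevice.asin(x)  # resolved against libdevice bitcode at compile time
        tl.store(y_ptr + offsets, y, mask=mask)

    x = torch.rand(98432, device='cuda')
    y = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
    # default libdevice location:
    asin_kernel[grid](x, y, x.numel(), BLOCK_SIZE=1024)
    # or point the compiler at a custom bitcode file (illustrative path):
    asin_kernel[grid](x, y, x.numel(), BLOCK_SIZE=1024,
                      extern_libs={'libdevice': '/path/to/libdevice.10.bc'})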
@@ -5,20 +5,20 @@
 
 Computation times
 =================
-**17:23.779** total execution time for **getting-started_tutorials** files:
+**18:18.630** total execution time for **getting-started_tutorials** files:
 
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:37.798 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 07:19.813 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``)                       | 05:35.089 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``)                       | 05:37.451 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:28.967 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:32.036 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:41.831 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:48.722 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``)             | 00:00.072 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.281 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.012 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``)       | 00:00.254 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``)       | 00:00.010 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``)             | 00:00.073 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+