[GH-PAGES] Updated website
@@ -234,7 +234,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
     size           Triton        Torch
 0        4096.0    9.600000    9.600000
 1        8192.0   19.200000   19.200000
-2       16384.0   38.400001   38.400001
+2       16384.0   31.999999   38.400001
 3       32768.0   76.800002   76.800002
 4       65536.0  127.999995  127.999995
 5      131072.0  219.428568  219.428568
@@ -245,7 +245,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 10    4194304.0  780.190482  780.190482
 11    8388608.0  812.429770  812.429770
 12   16777216.0  833.084721  833.084721
-13   33554432.0  842.004273  843.811163
+13   33554432.0  842.004273  842.004273
 14   67108864.0  847.448255  848.362445
 15  134217728.0  849.737435  850.656574
 
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes 45.595 seconds)
+   **Total running time of the script:** ( 1 minutes 41.994 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
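The vector-add columns above are throughput figures (GB/s in the tutorial this page regenerates), emitted by `triton.testing.perf_report` when the decorated benchmark is run with `print_data=True`. A minimal sketch of that harness follows; it times a plain `torch` add as a stand-in for the tutorial's Triton kernel, and the `do_bench` return convention shown matches the Triton-1.x-era tutorials, which may differ in later releases.

.. code-block:: python

    import torch
    import triton
    import triton.testing

    # Sketch of a perf_report harness like the one that printed the
    # vector-add table above; sizes 2**12 .. 2**27 match the rows shown.
    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                        # x axis: number of elements
            x_vals=[2 ** i for i in range(12, 28)],  # 4096 ... 134217728
            line_arg='provider',
            line_vals=['torch'],
            line_names=['Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        ))
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        ms, min_ms, max_ms = triton.testing.do_bench(lambda: x + y)
        # x and y are read and the output written: 3 * 4 bytes per element.
        gbps = lambda ms: 3 * size * 4 / (ms * 1e-3) / 1e9
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    benchmark.run(print_data=True, show_plots=False)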
@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 
 softmax-performance:
           N      Triton  Torch (native)  Torch (jit)
-0     256.0  546.133347      546.133347   188.321838
+0     256.0  512.000001      546.133347   188.321838
 1     384.0  614.400016      585.142862   153.600004
-2     512.0  655.360017      606.814814   154.566038
-3     640.0  706.206879      640.000002   158.759699
-4     768.0  722.823517      664.216187   162.754967
+2     512.0  655.360017      585.142849   154.566038
+3     640.0  706.206879      640.000002   160.000000
+4     768.0  722.823517      664.216187   163.839992
 ..      ...         ...             ...          ...
-93  12160.0  812.359066      406.179533   198.631953
-94  12288.0  812.429770      415.661740   198.895304
-95  12416.0  812.498981      412.149375   198.556711
-96  12544.0  810.925276      412.546756   198.815254
-97  12672.0  811.007961      412.097543   198.873965
+93  12160.0  812.359066      405.755985   198.936606
+94  12288.0  812.429770      415.661740   199.096718
+95  12416.0  812.498981      412.149375   198.755369
+96  12544.0  810.925276      412.971190   198.913776
+97  12672.0  811.007961      412.097543   199.069228
 
 [98 rows x 4 columns]
 
@@ -306,7 +306,7 @@ In the above plot, we can see that:
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes 30.581 seconds)
+   **Total running time of the script:** ( 3 minutes 29.122 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
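The softmax rows compare a fused Triton kernel against `torch.softmax` ("Torch (native)") and a naive jitted implementation ("Torch (jit)"). The naive baseline in the tutorial is, up to details, the numerically stable row-wise softmax below; every intermediate round-trips through DRAM, which is why its column saturates near 200 GB/s while the fused kernel exceeds 800.

.. code-block:: python

    import torch

    # Sketch of the "Torch (jit)" baseline: a safe (numerically stable)
    # row-wise softmax in which each intermediate is a full M x N tensor
    # written to and re-read from global memory.
    @torch.jit.script
    def naive_softmax(x):
        x_max = x.max(dim=1)[0]                   # read MN, write M
        z = x - x_max[:, None]                    # read MN + M, write MN
        numerator = torch.exp(z)                  # read MN, write MN
        denominator = numerator.sum(dim=1)        # read MN, write M
        return numerator / denominator[:, None]   # read MN + M, write MN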
@@ -459,37 +459,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 
 matmul-performance:
         M     cuBLAS  ...     Triton  Triton (+ LeakyReLU)
-0    256.0   2.978909  ...   2.978909              3.276800
+0    256.0   2.730667  ...   3.276800              2.978909
 1    384.0   7.372800  ...   8.507077              8.507077
-2    512.0  14.563555  ...  16.384000             16.384000
+2    512.0  14.563555  ...  16.384000             15.420235
 3    640.0  22.260869  ...  24.380953             24.380953
 4    768.0  32.768000  ...  35.389441             34.028308
 5    896.0  39.025776  ...  40.140799             39.025776
 6   1024.0  49.932191  ...  53.773130             52.428801
-7   1152.0  45.242181  ...  48.161033             48.161033
+7   1152.0  44.566925  ...  47.396572             47.396572
 8   1280.0  51.200001  ...  57.690139             57.690139
-9   1408.0  64.138541  ...  69.009825             68.147202
-10  1536.0  80.430545  ...  81.355034             79.526831
+9   1408.0  64.138541  ...  68.147202             67.305878
+10  1536.0  80.430545  ...  81.355034             78.643199
 11  1664.0  62.929456  ...  63.372618             62.492442
 12  1792.0  72.512412  ...  73.460287             59.467852
-13  1920.0  69.120002  ...  71.257735             71.257735
+13  1920.0  69.120002  ...  71.626943             71.257735
 14  2048.0  73.908442  ...  78.398206             77.314362
-15  2176.0  83.500614  ...  87.876193             86.367588
-16  2304.0  68.446623  ...  78.064941             77.307030
-17  2432.0  71.305746  ...  86.179335             85.653855
-18  2560.0  77.833728  ...  82.125311             81.715711
-19  2688.0  83.552988  ...  90.748936             89.254248
-20  2816.0  83.873477  ...  84.687779             83.873477
-21  2944.0  81.967162  ...  81.431424             83.337844
-22  3072.0  81.589488  ...  86.712254             88.197981
-23  3200.0  84.099871  ...  96.749806             95.238096
-24  3328.0  82.939284  ...  85.096096             84.695641
-25  3456.0  80.300370  ...  91.615417             86.596744
-26  3584.0  83.876297  ...  88.152348             93.857401
-27  3712.0  85.528545  ...  83.526206             87.629253
-28  3840.0  81.798814  ...  88.086021             90.058629
-29  3968.0  87.976885  ...  91.989400             86.296981
-30  4096.0  93.142072  ...  83.105343             84.626564
+15  2176.0  83.500614  ...  87.494120             85.998493
+16  2304.0  68.446623  ...  77.810656             77.307030
+17  2432.0  71.305746  ...  86.047367             80.155391
+18  2560.0  78.019048  ...  82.331658             81.108913
+19  2688.0  83.369354  ...  90.102270             89.888756
+20  2816.0  79.733474  ...  83.873477             83.712490
+21  2944.0  81.832567  ...  82.921853             82.102191
+22  3072.0  82.540970  ...  88.820552             85.922766
+23  3200.0  79.601989  ...  89.761569             95.952022
+24  3328.0  83.516586  ...  85.806075             85.703924
+25  3456.0  78.463811  ...  90.738961             83.632331
+26  3584.0  86.540320  ...  88.152348             93.955476
+27  3712.0  85.748791  ...  85.345876             87.475786
+28  3840.0  82.592983  ...  87.286505             90.500819
+29  3968.0  86.788006  ...  91.609561             84.096442
+30  4096.0  92.691803  ...  90.200084             86.480498
 
 [31 rows x 5 columns]
 
@@ -499,7 +499,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 6 minutes 29.672 seconds)
+   **Total running time of the script:** ( 6 minutes 19.422 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
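In the matmul table the units are TFLOPS; the "cuBLAS" column comes from timing `torch.matmul`, and "Triton (+ LeakyReLU)" fuses a leaky-ReLU epilogue into the Triton kernel rather than launching a second elementwise kernel. A sketch of the metric follows; the `tflops` helper is hypothetical, not from the tutorial, and `do_bench` is used with the Triton-1.x-era return convention.

.. code-block:: python

    import torch
    import triton.testing

    def tflops(M, N, K, fn):
        # A matmul performs 2*M*N*K flops (one multiply + one add per MAC).
        a = torch.randn((M, K), device='cuda', dtype=torch.float16)
        b = torch.randn((K, N), device='cuda', dtype=torch.float16)
        ms, _, _ = triton.testing.do_bench(lambda: fn(a, b))
        return 2 * M * N * K / (ms * 1e-3) / 1e12

    # The cuBLAS column of the table, reproduced for one square size:
    print(tflops(2048, 2048, 2048, torch.matmul))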
@@ -38,36 +38,36 @@ Layer Normalization
 
 layer-norm:
           N      Triton       Torch        Apex
-0    1024.0  585.142849  277.694907  468.114273
+0    1024.0  585.142849  277.694907  481.882344
 1    1536.0  630.153868  323.368435  511.999982
-2    2048.0  668.734716  334.367358  520.126988
-3    2560.0  694.237267  365.714281  518.481028
-4    3072.0  712.347810  378.092307  496.484863
-5    3584.0  725.873439  384.859062  451.527536
+2    2048.0  682.666643  334.367358  520.126988
+3    2560.0  694.237267  362.477870  518.481028
+4    3072.0  712.347810  378.092307  501.551037
+5    3584.0  725.873439  384.859062  458.751978
 6    4096.0  728.177767  381.023256  455.111095
-7    4608.0  670.254540  394.267384  421.302872
-8    5120.0  688.403381  397.669909  424.455959
-9    5632.0  698.542675  395.228063  411.470331
-10   6144.0  702.171410  402.885254  411.313806
-11   6656.0  700.631610  398.861429  400.360920
-12   7168.0  690.891575  396.844306  387.459443
-13   7680.0  678.895043  393.846167  386.415087
-14   8192.0  636.271854  393.609605  372.363633
-15   8704.0  627.315309  389.005597  380.502740
-16   9216.0  606.814809  407.337026  383.999986
-17   9728.0  587.350922  409.599987  382.427505
-18  10240.0  564.965524  408.578556  382.803739
-19  10752.0  547.872604  411.559798  381.445676
-20  11264.0  533.207081  406.826188  373.134567
-21  11776.0  520.486200  409.599991  377.587162
-22  12288.0  514.680630  413.911572  383.251457
-23  12800.0  504.433489  410.420828  376.470582
-24  13312.0  494.180982  405.699062  376.976995
-25  13824.0  482.934503  411.122660  379.389355
-26  14336.0  471.967074  406.695045  374.185964
-27  14848.0  461.297068  408.192434  375.304904
-28  15360.0  454.269882  406.214870  378.092307
-29  15872.0  447.098578  406.974373  376.783377
+7    4608.0  670.254540  396.387087  426.173427
+8    5120.0  688.403381  395.748783  424.455959
+9    5632.0  704.000002  396.969169  413.357796
+10   6144.0  697.191505  402.885254  411.313806
+11   6656.0  700.631610  400.360920  398.861429
+12   7168.0  686.754468  383.571898  381.023265
+13   7680.0  678.895043  391.337574  386.415087
+14   8192.0  642.509816  390.095241  377.729113
+15   8704.0  621.714277  390.095225  379.465939
+16   9216.0  601.861217  403.989025  383.002605
+17   9728.0  585.142883  409.599987  382.427505
+18  10240.0  563.024047  409.600010  382.803739
+19  10752.0  546.133312  410.577576  380.601764
+20  11264.0  531.634232  397.845487  371.595879
+21  11776.0  519.052343  409.599991  377.587162
+22  12288.0  516.031509  413.911572  382.505826
+23  12800.0  503.194086  409.599981  376.470582
+24  13312.0  494.180982  407.250459  376.976995
+25  13824.0  481.882350  411.888257  379.389355
+26  14336.0  471.967074  399.609753  371.760140
+27  14848.0  461.297068  405.406157  374.712936
+28  15360.0  454.269882  407.562194  378.092307
+29  15872.0  447.887117  406.974373  376.225175
 
 
 
@@ -393,7 +393,7 @@ Layer Normalization
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 5 minutes 36.337 seconds)
+   **Total running time of the script:** ( 5 minutes 29.220 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:
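The layer-norm columns (GB/s) compare a Triton kernel against `torch.nn.functional.layer_norm` ("Torch") and NVIDIA's Apex implementation, varying the feature size N. All three compute the same row-wise normalization, sketched below as a plain PyTorch reference; `layer_norm_reference` is an illustrative name, not the tutorial's.

.. code-block:: python

    import torch

    def layer_norm_reference(x, weight, bias, eps: float = 1e-5):
        # Row-wise layer norm: y = (x - mean) / sqrt(var + eps) * w + b,
        # with mean and variance taken over the last dimension.
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, unbiased=False, keepdim=True)
        return (x - mean) / torch.sqrt(var + eps) * weight + bias

    x = torch.randn(4096, 1024, device='cuda', dtype=torch.float16)
    w = torch.ones(1024, device='cuda', dtype=torch.float16)
    b = torch.zeros(1024, device='cuda', dtype=torch.float16)
    ref = torch.nn.functional.layer_norm(x, (1024,), w, b)
    print((layer_norm_reference(x, w, b) - ref).abs().max())  # ~0 up to fp16 error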
@@ -390,7 +390,7 @@ This is a Triton implementation of the Flash Attention algorithm
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.072 seconds)
+   **Total running time of the script:** ( 0 minutes 0.076 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_06-fused-attention.py:
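The `06-fused-attention` hunk touches only the timing line (at well under a second, the attention benchmark itself was presumably not run in this build). For reference, the operation the fused kernel implements is sketched below; the Triton version computes it blockwise so the full attention matrix is never materialized, whereas this reference does materialize it, and it omits the causal masking the tutorial's kernel applies.

.. code-block:: python

    import math
    import torch

    def attention_reference(q, k, v, sm_scale=None):
        # What the fused kernel computes: softmax(q @ k^T * scale) @ v.
        # Only usable at small sizes, since p is a full (seq x seq) matrix.
        if sm_scale is None:
            sm_scale = 1.0 / math.sqrt(q.shape[-1])
        p = torch.matmul(q, k.transpose(-2, -1)) * sm_scale
        p = torch.softmax(p.float(), dim=-1).to(v.dtype)
        return torch.matmul(p, v)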
@@ -152,7 +152,7 @@ We can also customize the libdevice library path by passing the path to the `lib
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.010 seconds)
+   **Total running time of the script:** ( 0 minutes 0.012 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_07-libdevice-function.py:
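The `07-libdevice-function` tutorial referenced above calls a CUDA libdevice function from a Triton kernel and, per the truncated hunk context, lets you pass a custom libdevice path. A sketch against the Triton-1.x-era API follows; `tl.libdevice` moved in later releases (e.g. `tl.math` / `triton.language.extra.libdevice`), and the `extern_libs` kwarg and the `.bc` path shown are assumptions, not taken from this page.

.. code-block:: python

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def asin_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.libdevice.asin(x)  # resolved against CUDA's libdevice bitcode
        tl.store(y_ptr + offsets, y, mask=mask)

    x = torch.rand(1024, device='cuda')
    y = torch.empty_like(x)
    # Custom-path variant (assumed kwarg and path):
    # asin_kernel[(4,)](x, y, 1024, BLOCK_SIZE=256,
    #     extern_libs={'libdevice': '/usr/local/cuda/nvvm/libdevice/libdevice.10.bc'})
    asin_kernel[(4,)](x, y, 1024, BLOCK_SIZE=256)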
@@ -5,20 +5,20 @@
 
 Computation times
 =================
-**17:22.279** total execution time for **getting-started_tutorials** files:
+**16:59.858** total execution time for **getting-started_tutorials** files:
 
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:29.672 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:19.422 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``)                       | 05:36.337 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``)                       | 05:29.220 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:30.581 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:29.122 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:45.595 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:41.994 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``)             | 00:00.072 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``)             | 00:00.076 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
 | :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.012 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``)       | 00:00.010 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``)       | 00:00.012 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+