[GH-PAGES] Updated website
@@ -234,11 +234,11 @@ We can now run the decorated function above. Pass `print_data=True` to see the performance data.
        size       Triton        Torch
 0     4096.0     9.600000     9.600000
 1     8192.0    19.200000    19.200000
-2    16384.0    38.400001    38.400001
+2    16384.0    31.999999    38.400001
 3    32768.0    76.800002    76.800002
 4    65536.0   127.999995   127.999995
 5   131072.0   219.428568   219.428568
-6   262144.0   384.000001   384.000001
+6   262144.0   341.333321   341.333321
 7   524288.0   472.615390   472.615390
 8  1048576.0   614.400016   614.400016
 9  2097152.0   722.823517   722.823517
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the performance data.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes 37.083 seconds)
+   **Total running time of the script:** ( 1 minutes 47.685 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
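The vector-add table above reports effective bandwidth in GB/s (higher is better), produced by Triton's ``triton.testing.perf_report`` decorator with ``print_data=True``. A minimal sketch of that pattern, assuming the ``add(x, y)`` wrapper defined earlier in the tutorial (note that ``do_bench``'s return convention varies across Triton versions):

.. code-block:: python

    import torch
    import triton

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                        # argument swept along the x-axis
            x_vals=[2 ** i for i in range(12, 22)],  # vector lengths to benchmark
            x_log=True,
            line_arg='provider',                     # one plotted line per provider
            line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        if provider == 'torch':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: x + y)
        else:
            # `add` is the Triton-backed wrapper defined earlier in the tutorial.
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: add(x, y))
        # 3 float32 tensors (x, y, output) are touched once each: 12 bytes/element.
        gbps = lambda ms: 12 * size / ms * 1e-6
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    benchmark.run(print_data=True, show_plots=True)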
@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) the naive softmax defined above.
 
 softmax-performance:
           N       Triton  Torch (native)  Torch (jit)
-0     256.0   546.133347      546.133347   195.047621
+0     256.0   512.000001      512.000001   188.321838
 1     384.0   614.400016      585.142862   153.600004
 2     512.0   655.360017      585.142849   154.566038
 3     640.0   706.206879      640.000002   160.000000
 4     768.0   722.823517      664.216187   162.754967
 ..      ...          ...             ...          ...
-93  12160.0   812.359066      406.179533   199.038365
-94  12288.0   812.429770      416.101597   199.298541
-95  12416.0   812.498981      412.149375   198.954424
-96  12544.0   810.925276      412.546756   199.209928
-97  12672.0   811.007961      412.097543   199.264875
+93  12160.0   812.359066      406.179533   198.530610
+94  12288.0   812.429770      415.661740   198.895304
+95  12416.0   812.498981      412.149375   198.457532
+96  12544.0   810.925276      412.971190   198.716830
+97  12672.0   811.007961      412.097543   198.776477
 
 [98 rows x 4 columns]
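For reference, the naive baseline above composes several unfused PyTorch ops, each of which launches its own kernel and round-trips through DRAM; the fused Triton kernel reads and writes each row only once. A sketch of such a naive softmax (a numerically stable reference in the spirit of the tutorial's ``naive_softmax``, not necessarily its exact code):

.. code-block:: python

    import torch

    def naive_softmax(x: torch.Tensor) -> torch.Tensor:
        # Subtract the per-row max for numerical stability.
        x_max = x.max(dim=1)[0]
        z = x - x_max[:, None]
        numerator = torch.exp(z)
        denominator = numerator.sum(dim=1)
        # Every intermediate above is materialized in global memory,
        # which is exactly the overhead the fused kernel removes.
        return numerator / denominator[:, None]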
@@ -306,7 +306,7 @@ In the above plot, we can see that:
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 3 minutes 32.694 seconds)
+   **Total running time of the script:** ( 3 minutes 33.159 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
@@ -459,37 +459,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we focus on square matrices.
 
 matmul-performance:
         M     cuBLAS  ...     Triton  Triton (+ LeakyReLU)
-0    256.0   2.730667  ...   2.978909              3.276800
-1    384.0   7.372800  ...   7.899428              8.507077
-2    512.0  14.563555  ...  15.420235             16.384000
+0    256.0   2.730667  ...   2.978909              2.978909
+1    384.0   7.372800  ...   7.899428              7.899428
+2    512.0  14.563555  ...  16.384000             15.420235
 3    640.0  22.260869  ...  24.380953             24.380953
 4    768.0  32.768000  ...  35.389441             34.028308
-5    896.0  39.025776  ...  40.140799             39.025776
-6   1024.0  51.150050  ...  53.773130             52.428801
+5    896.0  37.971025  ...  40.140799             39.025776
+6   1024.0  49.932191  ...  53.773130             52.428801
 7   1152.0  45.242181  ...  48.161033             47.396572
 8   1280.0  51.200001  ...  57.690139             57.690139
 9   1408.0  64.138541  ...  68.147202             67.305878
-10  1536.0  80.430545  ...  81.355034             79.526831
+10  1536.0  79.526831  ...  81.355034             79.526831
 11  1664.0  63.372618  ...  63.372618             62.492442
-12  1792.0  72.983276  ...  73.460287             59.467852
+12  1792.0  72.983276  ...  72.983276             59.467852
 13  1920.0  69.120002  ...  71.257735             70.892307
-14  2048.0  73.908442  ...  78.398206             77.314362
-15  2176.0  83.155572  ...  88.261612             86.367588
-16  2304.0  68.446623  ...  78.064941             77.558029
-17  2432.0  71.305746  ...  86.711310             85.915795
-18  2560.0  78.019048  ...  82.539044             81.310171
-19  2688.0  83.552988  ...  90.102270             89.888756
-20  2816.0  83.712490  ...  84.852542             83.873477
-21  2944.0  82.373605  ...  83.758038             82.921853
-22  3072.0  82.540970  ...  85.922766             88.335577
-23  3200.0  84.993363  ...  96.676741             96.385543
-24  3328.0  84.003845  ...  86.217120             81.162679
-25  3456.0  81.026701  ...  85.767626             89.183149
-26  3584.0  87.211821  ...  99.463928             97.840469
-27  3712.0  83.247783  ...  89.273764             84.088676
-28  3840.0  85.070769  ...  90.723546             88.686451
-29  3968.0  93.219206  ...  88.103928             87.441013
-30  4096.0  90.565269  ...  86.037005             82.597115
+14  2048.0  73.584279  ...  78.398206             77.314362
+15  2176.0  83.155572  ...  87.494120             85.632545
+16  2304.0  68.446623  ...  78.064941             77.307030
+17  2432.0  71.305746  ...  86.711310             86.179335
+18  2560.0  78.019048  ...  82.331658             81.715711
+19  2688.0  83.369354  ...  90.532356             89.464755
+20  2816.0  84.360174  ...  84.687779             83.873477
+21  2944.0  82.373605  ...  83.758038             82.102191
+22  3072.0  82.540970  ...  89.593522             88.612060
+23  3200.0  84.544253  ...  96.822991             95.665176
+24  3328.0  83.905938  ...  85.398926             84.101981
+25  3456.0  82.773682  ...  86.318594             88.400840
+26  3584.0  86.457107  ...  97.416461             98.699661
+27  3712.0  83.317214  ...  88.955779             85.491947
+28  3840.0  84.036474  ...  93.801526             84.679936
+29  3968.0  93.576636  ...  81.025193             81.512316
+30  4096.0  88.475759  ...  93.858555             89.928129
 
 [31 rows x 5 columns]
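The ``Triton (+ LeakyReLU)`` column measures the same matmul kernel with the activation fused into its epilogue, so it costs no extra kernel launch or memory traffic. A sketch of such an epilogue function, assuming (as in the Triton tutorial of this era) that the kernel applies it to the accumulator before the final store:

.. code-block:: python

    import triton
    import triton.language as tl

    @triton.jit
    def leaky_relu(x):
        # Applied while the accumulator is still in registers,
        # immediately before it is written back to global memory.
        return tl.where(x >= 0, x, 0.01 * x)

Inside the matmul kernel this is invoked as ``accumulator = leaky_relu(accumulator)`` (typically dispatched via a ``tl.constexpr`` activation flag) just before ``tl.store``.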
@@ -499,7 +499,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we focus on square matrices.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 7 minutes 17.371 seconds)
+   **Total running time of the script:** ( 7 minutes 24.521 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
@@ -240,7 +240,7 @@ References
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.280 seconds)
+   **Total running time of the script:** ( 0 minutes 0.282 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:
@@ -42,14 +42,14 @@ Layer Normalization
 1    1536.0  630.153868  323.368435  511.999982
 2    2048.0  668.734716  334.367358  520.126988
 3    2560.0  694.237267  362.477870  512.000013
-4    3072.0  712.347810  378.092307  501.551037
+4    3072.0  712.347810  375.206126  496.484863
 5    3584.0  725.873439  384.859062  455.111115
-6    4096.0  728.177767  381.023256  458.293714
-7    4608.0  670.254540  396.387087  431.157877
-8    5120.0  688.403381  397.669909  422.268057
-9    5632.0  704.000002  396.969169  417.185184
-10   6144.0  697.191505  402.885254  411.313806
-11   6656.0  705.271522  400.360920  400.360920
+6    4096.0  728.177767  381.023256  448.876695
+7    4608.0  670.254540  396.387087  426.173427
+8    5120.0  688.403381  397.669909  426.666652
+9    5632.0  698.542675  396.969169  413.357796
+10   6144.0  702.171410  402.885254  411.313806
+11   6656.0  700.631610  400.360920  400.360920
 12   7168.0  690.891575  396.844306  387.459443
 13   7680.0  678.895043  393.846167  387.634072
 14   8192.0  633.198054  393.609605  371.308771
@@ -59,9 +59,9 @@ Layer Normalization
 18  10240.0  564.965524  408.578556  382.803739
 19  10752.0  547.872604  411.559798  381.445676
 20  11264.0  533.207081  406.826188  373.134567
-21  11776.0  520.486200  410.492372  378.345375
+21  11776.0  520.486200  409.599991  378.345375
 22  12288.0  514.680630  414.784810  383.251457
-23  12800.0  504.433489  410.420828  377.163903
+23  12800.0  504.433489  410.420828  376.470582
 24  13312.0  494.180982  405.699062  376.976995
 25  13824.0  481.882350  411.888257  379.389355
 26  14336.0  471.967074  406.695045  374.185964
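These layer-norm rows are bandwidth figures (GB/s) at increasing hidden sizes; the three numeric columns are the implementations compared in the tutorial's plot. As a reminder of the computation being benchmarked, a minimal PyTorch reference (names are illustrative, not the tutorial's):

.. code-block:: python

    import torch

    def layer_norm_ref(x, weight, bias, eps=1e-5):
        # Normalize each row to zero mean / unit variance,
        # then apply the learnable affine transform.
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, unbiased=False, keepdim=True)
        return (x - mean) / torch.sqrt(var + eps) * weight + bias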
@@ -393,7 +393,7 @@ Layer Normalization
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 5 minutes 38.079 seconds)
+   **Total running time of the script:** ( 5 minutes 40.195 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:
@@ -385,7 +385,7 @@ This is a Triton implementation of the Flash Attention algorithm
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.073 seconds)
+   **Total running time of the script:** ( 0 minutes 0.072 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_06-fused-attention.py:
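For context, the kernel this page documents implements Flash Attention's online softmax: a running row-wise maximum :math:`m`, normalizer :math:`\ell`, and output accumulator are updated once per loaded K/V block. In compact form (notation ours, not the tutorial's):

.. math::

   m' = \max\bigl(m,\ \operatorname{rowmax}(Q K^{T})\bigr), \qquad
   \ell' = e^{\,m - m'}\,\ell + \operatorname{rowsum}\bigl(e^{\,Q K^{T} - m'}\bigr), \qquad
   \mathrm{acc}' = e^{\,m - m'}\,\mathrm{acc} + e^{\,Q K^{T} - m'}\,V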
@@ -152,7 +152,7 @@ We can also customize the libdevice library path by passing the path to the `libdevice` library.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.264 seconds)
+   **Total running time of the script:** ( 0 minutes 0.253 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_07-libdevice-function.py:
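For context, this tutorial calls CUDA's libdevice bitcode math functions from a Triton kernel. A sketch of the pattern, following the tutorial's ``asin`` example; the bitcode path below is an assumption that depends on the CUDA installation, and the ``libdevice`` module has moved within ``triton.language`` across Triton versions:

.. code-block:: python

    import torch
    import triton
    import triton.language as tl
    from triton.language import libdevice

    @triton.jit
    def asin_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = libdevice.asin(x)  # resolved against the libdevice bitcode at compile time
        tl.store(y_ptr + offsets, y, mask=mask)

    x = torch.rand(98432, device='cuda')
    y = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
    # Pass extern_libs to point at a custom libdevice build (path is illustrative):
    asin_kernel[grid](x, y, x.numel(), BLOCK_SIZE=1024,
                      extern_libs={'libdevice': '/usr/local/cuda/nvvm/libdevice/libdevice.10.bc'})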
@@ -5,20 +5,20 @@
 
 Computation times
 =================
-**18:05.845** total execution time for **getting-started_tutorials** files:
+**18:26.166** total execution time for **getting-started_tutorials** files:
 
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 07:17.371 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 07:24.521 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``)                       | 05:38.079 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``)                       | 05:40.195 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:32.694 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:33.159 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:37.083 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:47.685 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.280 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.282 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``)       | 00:00.264 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``)       | 00:00.253 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``)             | 00:00.073 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``)             | 00:00.072 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+