[GH-PAGES] Updated website

Author: Philippe Tillet
Date:   2022-07-26 00:50:19 +00:00
Parent: 84440be392
Commit: bec0049ff5
165 changed files with 294 additions and 294 deletions


@@ -233,7 +233,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 vector-add-performance:
 size Triton Torch
 0 4096.0 9.600000 9.600000
-1 8192.0 19.200000 19.200000
+1 8192.0 15.999999 19.200000
 2 16384.0 38.400001 38.400001
 3 32768.0 76.800002 76.800002
 4 65536.0 127.999995 127.999995
@@ -241,7 +241,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 6 262144.0 341.333321 384.000001
 7 524288.0 472.615390 472.615390
 8 1048576.0 614.400016 614.400016
-9 2097152.0 722.823517 722.823517
+9 2097152.0 722.823517 702.171410
 10 4194304.0 780.190482 780.190482
 11 8388608.0 812.429770 812.429770
 12 16777216.0 833.084721 833.084721
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 1 minutes 42.454 seconds)
+**Total running time of the script:** ( 1 minutes 51.976 seconds)
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
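
For reference, the vector-add table above is produced by Triton's built-in benchmarking harness, which the tutorial invokes with ``benchmark.run(print_data=True, ...)``; the row-level wobble between the two runs (e.g. 19.2 vs 16.0 GB/s at 8192 elements) is consistent with ordinary run-to-run noise at small sizes rather than a code change, since this commit only regenerates the site. A minimal sketch of that harness, assuming the tutorial's ``add()`` wrapper is in scope and a mid-2022 Triton in which ``triton.testing.do_bench`` returns (median, min, max) timings in milliseconds:

.. code-block:: python

    import torch
    import triton

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                      # argument swept along the x-axis
            x_vals=[2**i for i in range(12, 28)],  # 4096 .. 134217728 elements
            x_log=True,
            line_arg='provider',                   # one column per provider
            line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        ))
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        if provider == 'torch':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: x + y)
        if provider == 'triton':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: add(x, y))
        # vector add reads x and y and writes the output: 3 * 4 bytes per element
        gbps = lambda ms: 12 * size / ms * 1e-6
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    benchmark.run(print_data=True, show_plots=False)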


@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 softmax-performance:
 N Triton Torch (native) Torch (jit)
-0 256.0 546.133347 512.000001 186.181817
+0 256.0 512.000001 546.133347 184.089886
 1 384.0 614.400016 585.142862 153.600004
 2 512.0 655.360017 585.142849 154.566038
 3 640.0 706.206879 640.000002 160.000000
 4 768.0 722.823517 664.216187 162.754967
 .. ... ... ... ...
-93 12160.0 812.359066 406.179533 198.834951
-94 12288.0 812.429770 415.661740 199.197579
-95 12416.0 812.498981 411.722274 198.755369
-96 12544.0 810.925276 412.971190 199.111113
-97 12672.0 811.007961 412.097543 199.167004
+93 12160.0 812.359066 405.755985 198.936606
+94 12288.0 812.429770 415.222812 199.096718
+95 12416.0 812.498981 412.149375 198.755369
+96 12544.0 810.925276 412.971190 199.012395
+97 12672.0 811.007961 412.097543 199.069228
 [98 rows x 4 columns]
@@ -306,7 +306,7 @@ In the above plot, we can see that:
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 3 minutes 31.920 seconds)
+**Total running time of the script:** ( 3 minutes 30.958 seconds)
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
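
The ``Torch (jit)`` column above is the tutorial's naive, non-fused softmax baseline. A sketch of that baseline, assuming the numerically-stable row-wise formulation the tutorial describes:

.. code-block:: python

    import torch

    @torch.jit.script
    def naive_softmax(x):
        # subtract the row max for numerical stability:
        # softmax(x) == softmax(x - max(x)) row-wise
        x_max = x.max(dim=1)[0]
        z = x - x_max[:, None]
        numerator = torch.exp(z)
        denominator = numerator.sum(dim=1)
        # every intermediate above is materialized in GPU DRAM, so this
        # version moves several times more memory than a fused kernel
        return numerator / denominator[:, None]

That extra memory traffic is why the Triton column sits roughly 4x above ``Torch (jit)`` across the table.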


@@ -459,37 +459,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
-0 256.0 2.730667 ... 2.978909 2.978909
-1 384.0 7.372800 ... 8.507077 8.507077
-2 512.0 14.563555 ... 15.420235 16.384000
+0 256.0 2.730667 ... 3.276800 2.978909
+1 384.0 7.372800 ... 7.899428 7.899428
+2 512.0 14.563555 ... 15.420235 15.420235
 3 640.0 22.260869 ... 24.380953 24.380953
 4 768.0 32.768000 ... 34.028308 34.028308
 5 896.0 39.025776 ... 40.140799 39.025776
 6 1024.0 49.932191 ... 53.773130 52.428801
-7 1152.0 45.242181 ... 48.161033 47.396572
+7 1152.0 45.242181 ... 47.396572 47.396572
 8 1280.0 51.200001 ... 57.690139 57.690139
 9 1408.0 64.138541 ... 68.147202 66.485074
 10 1536.0 79.526831 ... 81.355034 79.526831
 11 1664.0 63.372618 ... 63.372618 62.492442
-12 1792.0 72.983276 ... 73.460287 59.467852
-13 1920.0 68.776119 ... 71.257735 70.892307
-14 2048.0 73.584279 ... 78.398206 77.314362
-15 2176.0 83.155572 ... 87.494120 86.367588
-16 2304.0 68.446623 ... 78.064941 77.558029
-17 2432.0 71.487187 ... 86.711310 75.320281
-18 2560.0 77.833728 ... 82.747477 81.613947
-19 2688.0 83.369354 ... 90.966561 89.676257
-20 2816.0 83.552120 ... 84.360174 84.197315
-21 2944.0 81.298583 ... 83.060049 83.198715
-22 3072.0 81.884457 ... 87.787755 85.404375
-23 3200.0 85.106381 ... 96.896287 95.522391
-24 3328.0 83.808259 ... 85.806075 85.602017
-25 3456.0 79.351933 ... 87.064328 89.183149
-26 3584.0 87.127323 ... 91.192076 96.787292
-27 3712.0 85.675250 ... 93.274830 87.937800
-28 3840.0 81.798814 ... 86.197974 90.279183
-29 3968.0 86.051653 ... 92.864488 86.205539
-30 4096.0 94.386588 ... 88.243079 84.573239
+12 1792.0 72.983276 ... 72.983276 59.154861
+13 1920.0 68.776119 ... 71.257735 71.257735
+14 2048.0 73.908442 ... 78.398206 77.314362
+15 2176.0 83.500614 ... 87.494120 85.998493
+16 2304.0 68.251065 ... 78.064941 77.307030
+17 2432.0 71.125224 ... 86.979769 85.915795
+18 2560.0 77.833728 ... 82.747477 81.715711
+19 2688.0 84.295681 ... 90.966561 89.464755
+20 2816.0 84.035084 ... 85.017948 83.233226
+21 2944.0 82.237674 ... 83.198715 82.509987
+22 3072.0 82.420822 ... 89.735509 87.651868
+23 3200.0 83.333330 ... 91.168092 95.952022
+24 3328.0 83.468170 ... 84.795401 84.895397
+25 3456.0 81.849303 ... 90.586029 91.097818
+26 3584.0 87.042978 ... 95.451583 97.787265
+27 3712.0 85.785610 ... 89.194055 85.163978
+28 3840.0 84.940091 ... 93.801526 87.148936
+29 3968.0 92.864488 ... 87.159957 91.712842
+30 4096.0 86.536250 ... 85.652663 90.626421
 [31 rows x 5 columns]
@@ -499,7 +499,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 7 minutes 8.783 seconds)
+**Total running time of the script:** ( 7 minutes 14.455 seconds)
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
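
The matmul figures above are throughput in TFLOPS rather than bandwidth. A minimal sketch of how such a figure is derived from a measured kernel time, assuming ``torch.matmul`` dispatches to cuBLAS for the baseline and the same (median, min, max) ``do_bench`` convention as in the vector-add sketch:

.. code-block:: python

    import torch
    import triton

    M = N = K = 2048
    a = torch.randn((M, K), device='cuda', dtype=torch.float16)
    b = torch.randn((K, N), device='cuda', dtype=torch.float16)

    # median kernel time in milliseconds for the cuBLAS baseline
    ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.matmul(a, b))

    # an M x K by K x N matmul performs 2*M*N*K flops (one multiply, one add)
    tflops = 2 * M * N * K * 1e-12 / (ms * 1e-3)
    print(f'cuBLAS: {tflops:.2f} TFLOPS')

The ``Triton (+ LeakyReLU)`` column times the same kernel with a leaky-ReLU epilogue fused in, which is why it tracks the plain Triton column so closely.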


@@ -240,7 +240,7 @@ References
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.287 seconds)
+**Total running time of the script:** ( 0 minutes 0.283 seconds)
 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:


@@ -40,34 +40,34 @@ Layer Normalization
 N Triton Torch Apex
 0 1024.0 585.142849 277.694907 468.114273
 1 1536.0 630.153868 323.368435 511.999982
-2 2048.0 668.734716 334.367358 520.126988
-3 2560.0 694.237267 362.477870 512.000013
-4 3072.0 712.347810 375.206126 501.551037
+2 2048.0 682.666643 337.814445 520.126988
+3 2560.0 694.237267 362.477870 518.481028
+4 3072.0 712.347810 378.092307 501.551037
 5 3584.0 725.873439 384.859062 458.751978
-6 4096.0 728.177767 381.023256 458.293714
-7 4608.0 670.254540 394.267384 426.173427
-8 5120.0 688.403381 397.669909 426.666652
-9 5632.0 698.542675 395.228063 413.357796
+6 4096.0 728.177767 381.023256 455.111095
+7 4608.0 670.254540 396.387087 426.173427
+8 5120.0 688.403381 397.669909 424.455959
+9 5632.0 704.000002 398.725657 411.470331
 10 6144.0 697.191505 402.885254 411.313806
-11 6656.0 700.631610 398.861429 400.360920
-12 7168.0 686.754468 396.844306 387.459443
-13 7680.0 678.895043 392.587863 387.634072
-14 8192.0 633.198054 393.609605 371.308771
-15 8704.0 627.315309 389.005597 380.502740
-16 9216.0 606.814809 407.337026 383.002605
-17 9728.0 587.350922 409.599987 383.369452
-18 10240.0 564.965524 408.578556 382.803739
-19 10752.0 546.133312 411.559798 381.445676
-20 11264.0 533.207081 406.826188 373.134567
-21 11776.0 520.486200 409.599991 377.587162
-22 12288.0 513.336807 413.911572 383.251457
-23 12800.0 504.433489 410.420828 376.470582
-24 13312.0 494.180982 405.699062 376.310952
-25 13824.0 481.882350 411.888257 379.389355
-26 14336.0 471.967074 406.695045 374.185964
-27 14848.0 461.297068 408.192434 375.304904
-28 15360.0 454.269882 406.214870 378.092307
-29 15872.0 447.098578 406.974373 376.225175
+11 6656.0 700.631610 400.360920 400.360920
+12 7168.0 686.754468 384.859062 382.293315
+13 7680.0 678.895043 391.337574 386.415087
+14 8192.0 648.871301 390.095241 379.918832
+15 8704.0 621.714277 390.095225 379.465939
+16 9216.0 604.327881 405.098894 383.002605
+17 9728.0 585.142883 409.599987 382.427505
+18 10240.0 563.024047 409.600010 382.803739
+19 10752.0 546.133312 410.577576 380.601764
+20 11264.0 531.634232 398.725657 370.831272
+21 11776.0 519.052343 409.599991 377.587162
+22 12288.0 514.680630 413.911572 383.251457
+23 12800.0 503.194086 410.420828 377.163903
+24 13312.0 494.180982 408.030638 376.976995
+25 13824.0 481.882350 412.656711 379.389355
+26 14336.0 471.967074 399.609753 371.760140
+27 14848.0 461.297068 405.406157 374.712936
+28 15360.0 454.269882 406.887417 378.092307
+29 15872.0 447.098578 407.627589 376.225175
@@ -393,7 +393,7 @@ Layer Normalization
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 5 minutes 37.145 seconds)
+**Total running time of the script:** ( 5 minutes 40.298 seconds)
 .. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:
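
The layer-norm table reports bandwidth in GB/s for one row width ``N`` at a time across three backends. A sketch of the provider dispatch, with the caveat that the Triton ``layer_norm`` wrapper comes from the tutorial itself (not shown in this diff) and that the Apex path assumes an NVIDIA Apex build with its fused layer norm available:

.. code-block:: python

    import torch

    def forward_fn(provider, x, w_shape, weight, bias, eps=1e-5):
        # returns a zero-argument callable suitable for triton.testing.do_bench
        if provider == 'torch':
            return lambda: torch.nn.functional.layer_norm(x, w_shape, weight, bias, eps)
        if provider == 'apex':
            import apex  # assumed: Apex installed with fused kernels
            ln = apex.normalization.FusedLayerNorm(w_shape).to(x.device).to(x.dtype)
            return lambda: ln(x)
        if provider == 'triton':
            return lambda: layer_norm(x, w_shape, weight, bias, eps)  # tutorial's wrapper

The GB/s figure then follows the same pattern as the earlier tutorials: bytes read plus bytes written, divided by the measured time.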


@@ -385,7 +385,7 @@ This is a Triton implementation of the Flash Attention algorithm
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.070 seconds)
+**Total running time of the script:** ( 0 minutes 0.083 seconds)
 .. _sphx_glr_download_getting-started_tutorials_06-fused-attention.py:


@@ -152,7 +152,7 @@ We can also customize the libdevice library path by passing the path to the `lib
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.248 seconds)
+**Total running time of the script:** ( 0 minutes 0.249 seconds)
 .. _sphx_glr_download_getting-started_tutorials_07-libdevice-function.py:
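
The libdevice tutorial referenced above calls CUDA's ``libdevice`` bitcode math functions from inside a Triton kernel. A minimal sketch, assuming the mid-2022 module layout (``triton.language.libdevice``) and the ``extern_libs`` launch argument the tutorial uses to override the library path; the path shown is the stock CUDA toolkit location, substitute your own:

.. code-block:: python

    import torch
    import triton
    import triton.language as tl
    from triton.language import libdevice

    @triton.jit
    def asin_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = libdevice.asin(x)  # lowered to libdevice's __nv_asinf
        tl.store(y_ptr + offsets, y, mask=mask)

    x = torch.rand(98432, device='cuda')
    y = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
    asin_kernel[grid](x, y, x.numel(), BLOCK_SIZE=1024,
                      extern_libs={'libdevice': '/usr/local/cuda/nvvm/libdevice/libdevice.10.bc'})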


@@ -5,20 +5,20 @@
 Computation times
 =================
-**18:00.906** total execution time for **getting-started_tutorials** files:
+**18:18.301** total execution time for **getting-started_tutorials** files:
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 07:08.783 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 07:14.455 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``)                       | 05:37.145 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``)                       | 05:40.298 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:31.920 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:30.958 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:42.454 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:51.976 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.287 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.283 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``)       | 00:00.248 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``)       | 00:00.249 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``)             | 00:00.070 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``)             | 00:00.083 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+