[GH-PAGES] Updated website

This commit is contained in:
Philippe Tillet
2022-08-07 00:51:30 +00:00
parent 73ee4b1d0d
commit 355b06f4b3
165 changed files with 288 additions and 288 deletions

@@ -234,18 +234,18 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
size Triton Torch
0 4096.0 9.600000 9.600000
1 8192.0 19.200000 19.200000
-2 16384.0 31.999999 38.400001
-3 32768.0 76.800002 63.999998
+2 16384.0 38.400001 38.400001
+3 32768.0 63.999998 76.800002
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
-6 262144.0 384.000001 384.000001
+6 262144.0 341.333321 341.333321
7 524288.0 472.615390 472.615390
8 1048576.0 614.400016 614.400016
9 2097152.0 722.823517 722.823517
10 4194304.0 780.190482 780.190482
11 8388608.0 812.429770 812.429770
12 16777216.0 833.084721 833.084721
-13 33554432.0 842.004273 842.906750
+13 33554432.0 842.004273 842.004273
14 67108864.0 847.448255 848.362445
15 134217728.0 849.737435 850.656574
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 1 minutes 50.715 seconds)
+**Total running time of the script:** ( 1 minutes 34.873 seconds)
.. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
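For reference, the bandwidth table above is produced by the tutorial's `triton.testing.perf_report` harness. A minimal sketch of that harness, assuming the Triton testing API of this release and the `add(x, y)` wrapper defined earlier in 01-vector-add.py:

.. code-block:: python

    import torch
    import triton

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                      # x-axis: number of elements
            x_vals=[2**i for i in range(12, 28)],  # 4096 ... 134217728, as in the table
            x_log=True,
            line_arg='provider',
            line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        if provider == 'torch':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: x + y)
        if provider == 'triton':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: add(x, y))
        # 2 reads + 1 write of 4-byte floats -> 12 bytes moved per element
        gbps = lambda ms: 12 * size / ms * 1e-6
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    benchmark.run(print_data=True, show_plots=True)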

@@ -280,15 +280,15 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
N Triton Torch (native) Torch (jit)
0 256.0 546.133347 546.133347 188.321838
1 384.0 614.400016 585.142862 151.703707
-2 512.0 655.360017 606.814814 156.038096
+2 512.0 655.360017 606.814814 154.566038
3 640.0 706.206879 640.000002 160.000000
-4 768.0 722.823517 664.216187 162.754967
+4 768.0 722.823517 664.216187 163.839992
.. ... ... ... ...
93 12160.0 812.359066 405.755985 198.834951
-94 12288.0 812.429770 415.222812 199.096718
-95 12416.0 812.498981 411.722274 198.655991
-96 12544.0 810.925276 412.971190 198.913776
-97 12672.0 811.007961 412.516771 199.069228
+94 12288.0 812.429770 415.661740 199.096718
+95 12416.0 812.498981 411.722274 198.755369
+96 12544.0 810.925276 413.396498 199.012395
+97 12672.0 812.633240 412.097543 199.167004
[98 rows x 4 columns]
@@ -306,7 +306,7 @@ In the above plot, we can see that:
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 3 minutes 30.914 seconds)
+**Total running time of the script:** ( 3 minutes 27.507 seconds)
.. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
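The softmax table above comes from the same kind of `perf_report` sweep, here over the row length N at a fixed number of rows. A minimal sketch, assuming the tutorial's `softmax` (Triton) and `naive_softmax` (reference) functions defined earlier in 02-fused-softmax.py:

.. code-block:: python

    import torch
    import triton

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['N'],
            x_vals=[128 * i for i in range(2, 100)],  # 256 ... 12672, the 98 rows above
            line_arg='provider',
            line_vals=['triton', 'torch-native', 'torch-jit'],
            line_names=['Triton', 'Torch (native)', 'Torch (jit)'],
            ylabel='GB/s',
            plot_name='softmax-performance',
            args={'M': 4096},  # fixed number of rows
        )
    )
    def benchmark(M, N, provider):
        x = torch.randn(M, N, device='cuda', dtype=torch.float32)
        if provider == 'triton':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: softmax(x))
        if provider == 'torch-native':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.softmax(x, dim=-1))
        if provider == 'torch-jit':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: naive_softmax(x))
        # each element is read once and written once
        gbps = lambda ms: 2 * x.nelement() * x.element_size() * 1e-9 / (ms * 1e-3)
        return gbps(ms), gbps(max_ms), gbps(min_ms)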

@@ -459,37 +459,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
-0 256.0 2.730667 ... 2.978909 2.978909
-1 384.0 7.372800 ... 8.507077 8.507077
+0 256.0 2.730667 ... 3.276800 3.276800
+1 384.0 7.372800 ... 7.899428 7.899428
2 512.0 14.563555 ... 16.384000 16.384000
3 640.0 22.260869 ... 24.380953 24.380953
4 768.0 32.768000 ... 35.389441 34.028308
5 896.0 39.025776 ... 40.140799 39.025776
-6 1024.0 49.932191 ... 53.773130 52.428801
-7 1152.0 45.242181 ... 48.161033 47.396572
+6 1024.0 51.150050 ... 53.773130 52.428801
+7 1152.0 45.242181 ... 47.396572 47.396572
8 1280.0 51.200001 ... 57.690139 57.690139
-9 1408.0 64.138541 ... 69.009825 67.305878
-10 1536.0 80.430545 ... 80.430545 79.526831
-11 1664.0 62.929456 ... 63.372618 62.492442
+9 1408.0 64.138541 ... 68.147202 67.305878
+10 1536.0 80.430545 ... 81.355034 79.526831
+11 1664.0 62.492442 ... 63.372618 62.492442
12 1792.0 72.512412 ... 73.460287 59.467852
13 1920.0 69.120002 ... 71.626943 71.257735
-14 2048.0 73.908442 ... 78.398206 77.314362
+14 2048.0 73.584279 ... 78.398206 77.314362
15 2176.0 83.500614 ... 87.876193 86.367588
16 2304.0 68.251065 ... 78.064941 77.307030
-17 2432.0 71.305746 ... 86.179335 85.393507
-18 2560.0 77.833728 ... 82.539044 81.512437
-19 2688.0 83.737433 ... 91.185232 89.254248
-20 2816.0 79.879498 ... 82.602666 83.392363
-21 2944.0 82.102191 ... 82.990890 83.337844
-22 3072.0 80.202695 ... 89.170242 87.381335
-23 3200.0 82.474230 ... 96.676741 95.238096
-24 3328.0 82.843841 ... 86.062515 84.795401
-25 3456.0 81.026701 ... 91.200871 87.347312
-26 3584.0 87.381330 ... 95.350361 98.268190
-27 3712.0 85.970176 ... 89.353616 87.552452
-28 3840.0 79.192264 ... 91.853823 85.796739
-29 3968.0 87.850207 ... 86.449828 89.988156
-30 4096.0 86.509232 ... 92.948562 87.352901
+17 2432.0 71.305746 ... 86.179335 85.653855
+18 2560.0 77.833728 ... 82.331658 81.920002
+19 2688.0 83.552988 ... 91.404957 89.464755
+20 2816.0 79.879498 ... 84.360174 83.873477
+21 2944.0 82.237674 ... 81.431424 83.337844
+22 3072.0 81.589488 ... 89.030036 88.612060
+23 3200.0 84.993363 ... 96.822991 95.808380
+24 3328.0 82.891535 ... 85.602017 84.101981
+25 3456.0 80.300370 ... 91.771848 86.596744
+26 3584.0 85.633710 ... 90.458141 95.451583
+27 3712.0 83.247783 ... 86.829501 87.552452
+28 3840.0 81.019778 ... 88.971840 91.853823
+29 3968.0 85.753071 ... 85.600795 89.988156
+30 4096.0 88.651075 ... 88.768339 89.299883
[31 rows x 5 columns]
@@ -499,7 +499,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 7 minutes 14.457 seconds)
+**Total running time of the script:** ( 6 minutes 20.627 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
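The 5-column matmul table (pandas elides the middle columns in this printout) comes from a TFLOPS sweep over square problems. A minimal sketch of the cuBLAS and Triton lines, assuming the tutorial's `matmul` wrapper from 03-matrix-multiplication.py; the fused LeakyReLU variants shown in the table are set up the same way with an activation epilogue:

.. code-block:: python

    import torch
    import triton

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['M', 'N', 'K'],                 # square problems: M = N = K
            x_vals=[128 * i for i in range(2, 33)],  # 256 ... 4096, the 31 rows above
            line_arg='provider',
            line_vals=['cublas', 'triton'],
            line_names=['cuBLAS', 'Triton'],
            ylabel='TFLOPS',
            plot_name='matmul-performance',
            args={},
        )
    )
    def benchmark(M, N, K, provider):
        a = torch.randn((M, K), device='cuda', dtype=torch.float16)
        b = torch.randn((K, N), device='cuda', dtype=torch.float16)
        if provider == 'cublas':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.matmul(a, b))
        if provider == 'triton':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: matmul(a, b))
        # 2 * M * N * K floating-point operations per matmul
        perf = lambda ms: 2 * M * N * K * 1e-12 / (ms * 1e-3)
        return perf(ms), perf(max_ms), perf(min_ms)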

@@ -240,7 +240,7 @@ References
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.282 seconds)
+**Total running time of the script:** ( 0 minutes 0.012 seconds)
.. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:

@@ -38,36 +38,36 @@ Layer Normalization
layer-norm:
N Triton Torch Apex
-0 1024.0 585.142849 277.694907 468.114273
+0 1024.0 585.142849 277.694907 481.882344
1 1536.0 630.153868 323.368435 511.999982
-2 2048.0 682.666643 337.814445 520.126988
-3 2560.0 694.237267 365.714281 518.481028
-4 3072.0 712.347810 375.206126 496.484863
+2 2048.0 668.734716 337.814445 520.126988
+3 2560.0 694.237267 362.477870 518.481028
+4 3072.0 702.171410 375.206126 501.551037
5 3584.0 725.873439 384.859062 455.111115
-6 4096.0 728.177767 381.023256 442.810792
-7 4608.0 670.254540 396.387087 426.173427
-8 5120.0 688.403381 397.669909 426.666652
-9 5632.0 698.542675 398.725657 411.470331
-10 6144.0 697.191505 402.885254 409.600010
-11 6656.0 700.631610 400.360920 398.861429
-12 7168.0 690.891575 382.293315 382.293315
+6 4096.0 728.177767 383.251446 451.972420
+7 4608.0 670.254540 396.387087 421.302872
+8 5120.0 688.403381 397.669909 422.268057
+9 5632.0 704.000002 398.725657 413.357796
+10 6144.0 702.171410 402.885254 411.313806
+11 6656.0 700.631610 400.360920 400.360920
+12 7168.0 690.891575 383.571898 381.023265
13 7680.0 678.895043 392.587863 386.415087
-14 8192.0 636.271854 392.431125 374.491442
-15 8704.0 624.502255 392.292962 380.502740
-16 9216.0 606.814809 403.989025 383.002605
-17 9728.0 587.350922 407.455499 382.427505
-18 10240.0 566.920437 407.562184 381.911416
-19 10752.0 547.872604 410.577576 380.601764
-20 11264.0 533.207081 396.096702 369.311483
-21 11776.0 521.927959 407.826843 377.587162
-22 12288.0 516.031509 413.042029 382.505826
-23 12800.0 504.433489 408.782457 376.470582
-24 13312.0 494.180982 401.871683 375.647260
-25 13824.0 482.934503 409.600016 378.092325
-26 14336.0 471.967074 398.914774 372.969090
-27 14848.0 461.297068 403.341254 374.712936
-28 15360.0 454.269882 406.887417 378.092307
-29 15872.0 447.887117 406.974373 376.225175
+14 8192.0 636.271854 390.095241 375.564460
+15 8704.0 627.315309 392.292962 380.502740
+16 9216.0 609.322328 403.989025 381.023249
+17 9728.0 587.350922 408.524944 382.427505
+18 10240.0 566.920437 408.578556 382.803739
+19 10752.0 547.872604 412.546760 379.761601
+20 11264.0 531.634232 396.096702 369.311483
+21 11776.0 521.927959 408.711507 378.345375
+22 12288.0 514.007840 413.911572 383.251457
+23 12800.0 504.433489 410.420828 377.163903
+24 13312.0 494.180982 404.159395 376.310952
+25 13824.0 481.882350 409.600016 378.739711
+26 14336.0 471.967074 400.307157 369.961287
+27 14848.0 461.297068 404.027214 375.304904
+28 15360.0 454.269882 406.214870 378.092307
+29 15872.0 447.098578 408.940410 376.783377
@@ -393,7 +393,7 @@ Layer Normalization
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 5 minutes 38.714 seconds)
+**Total running time of the script:** ( 5 minutes 29.729 seconds)
.. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:
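The layer-norm table sweeps the normalized dimension N at a fixed M = 4096 and reports effective bandwidth for Triton, native PyTorch, and NVIDIA Apex. A rough sketch of one provider measurement, assuming the tutorial's Triton `layer_norm` op from 05-layer-norm.py; the tutorial's full harness differs in details (it also covers the backward pass):

.. code-block:: python

    import torch
    import triton

    def bench_layer_norm(M, N, dtype, provider, eps=1e-5):
        # inputs sized like the table: M = 4096 rows, N sweeping 1024 ... 15872
        x = torch.randn(M, N, dtype=dtype, device='cuda')
        weight = torch.rand(N, dtype=dtype, device='cuda')
        bias = torch.rand(N, dtype=dtype, device='cuda')
        if provider == 'triton':
            fwd = lambda: layer_norm(x, (N,), weight, bias, eps)  # Triton op from the tutorial
        elif provider == 'torch':
            fwd = lambda: torch.nn.functional.layer_norm(x, (N,), weight, bias, eps)
        elif provider == 'apex':
            import apex
            ln = apex.normalization.FusedLayerNorm(N).to(x.device).to(x.dtype)
            fwd = lambda: ln(x)
        ms, min_ms, max_ms = triton.testing.do_bench(fwd)
        # rough model: one read + one write of the activation tensor
        gbps = lambda ms: 2 * x.numel() * x.element_size() * 1e-9 / (ms * 1e-3)
        return gbps(ms), gbps(max_ms), gbps(min_ms)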

@@ -385,7 +385,7 @@ This is a Triton implementation of the Flash Attention algorithm
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.074 seconds)
+**Total running time of the script:** ( 0 minutes 0.072 seconds)
.. _sphx_glr_download_getting-started_tutorials_06-fused-attention.py:

@@ -152,7 +152,7 @@ We can also customize the libdevice library path by passing the path to the `lib
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.250 seconds)
+**Total running time of the script:** ( 0 minutes 0.010 seconds)
.. _sphx_glr_download_getting-started_tutorials_07-libdevice-function.py:
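The truncated context line above refers to pointing Triton at a specific libdevice bitcode file. A sketch in the spirit of the tutorial's `asin` kernel with an explicit override; the bitcode path here is an assumption for a typical CUDA toolkit install, so adjust it for your system:

.. code-block:: python

    import torch
    import triton
    import triton.language as tl
    from triton.language import libdevice

    @triton.jit
    def asin_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = libdevice.asin(x)  # resolved against the libdevice bitcode at compile time
        tl.store(y_ptr + offsets, y, mask=mask)

    x = torch.rand(98432, device='cuda')
    y = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
    # assumed location of the CUDA toolkit's libdevice bitcode
    extern_libs = {'libdevice': '/usr/local/cuda/nvvm/libdevice/libdevice.10.bc'}
    asin_kernel[grid](x, y, x.numel(), BLOCK_SIZE=1024, extern_libs=extern_libs)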

@@ -5,20 +5,20 @@
Computation times
=================
-**18:15.408** total execution time for **getting-started_tutorials** files:
+**16:52.830** total execution time for **getting-started_tutorials** files:
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 07:14.457 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:20.627 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:38.714 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:29.729 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:30.914 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:27.507 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:50.715 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:34.873 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.282 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``) | 00:00.072 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``) | 00:00.250 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.012 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``) | 00:00.074 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``) | 00:00.010 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+