[GH-PAGES] Updated website

Author: Philippe Tillet
Date:   2022-07-25 00:49:58 +00:00
Parent: 379acd8521
Commit: 84440be392

165 changed files with 266 additions and 266 deletions


@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 1 minutes 46.215 seconds)
+**Total running time of the script:** ( 1 minutes 42.454 seconds)
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
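For context, this timing pair is produced by sphinx-gallery re-running the tutorial's benchmark harness. A condensed sketch of that harness, assuming the tutorial's ``add`` wrapper is in scope (and noting that the exact return values of ``triton.testing.do_bench`` vary across Triton versions)::

    import torch
    import triton
    import triton.testing

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                        # x-axis: number of elements
            x_vals=[2 ** i for i in range(12, 28)],  # sweep 4 Ki .. 128 Mi elements
            line_arg='provider',                     # one plotted line per provider
            line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        fn = (lambda: x + y) if provider == 'torch' else (lambda: add(x, y))
        # In this era of Triton, do_bench returns median/min/max milliseconds.
        ms, min_ms, max_ms = triton.testing.do_bench(fn)
        # 3 tensors touched (2 reads + 1 write) x 4 bytes per element -> GB/s.
        gbps = lambda t: 12 * size / t * 1e-6
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    benchmark.run(print_data=True, show_plots=True)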


@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 softmax-performance:
 N Triton Torch (native) Torch (jit)
-0 256.0 512.000001 512.000001 186.181817
+0 256.0 546.133347 512.000001 186.181817
 1 384.0 614.400016 585.142862 153.600004
 2 512.0 655.360017 585.142849 154.566038
-3 640.0 706.206879 640.000002 158.759699
+3 640.0 706.206879 640.000002 160.000000
 4 768.0 722.823517 664.216187 162.754967
 .. ... ... ... ...
-93 12160.0 812.359066 406.179533 198.733401
-94 12288.0 812.429770 415.222812 199.096718
-95 12416.0 812.498981 412.149375 198.655991
-96 12544.0 810.925276 412.758863 198.913776
-97 12672.0 811.007961 411.679167 198.971549
+93 12160.0 812.359066 406.179533 198.834951
+94 12288.0 812.429770 415.661740 199.197579
+95 12416.0 812.498981 411.722274 198.755369
+96 12544.0 810.925276 412.971190 199.111113
+97 12672.0 811.007961 412.097543 199.167004
 [98 rows x 4 columns]
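The values in this table are effective memory bandwidths in GB/s rather than raw timings. A sketch of the conversion the tutorial applies (the shape below is illustrative; the sweep varies N with the row count fixed at 4096)::

    import torch

    # First row of the sweep: N = 256.
    x = torch.randn(4096, 256, device='cuda', dtype=torch.float32)

    # Softmax reads and writes each element once: 2 x bytes / seconds -> GB/s.
    gbps = lambda ms: 2 * x.nelement() * x.element_size() * 1e-9 / (ms * 1e-3)

    print(gbps(0.01))  # a 0.01 ms run over this tensor is ~839 GB/s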
@@ -306,7 +306,7 @@ In the above plot, we can see that:
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 3 minutes 32.284 seconds)
+**Total running time of the script:** ( 3 minutes 31.920 seconds)
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:


@@ -460,36 +460,36 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
 0 256.0 2.730667 ... 2.978909 2.978909
-1 384.0 7.372800 ... 7.899428 8.507077
-2 512.0 14.563555 ... 16.384000 15.420235
+1 384.0 7.372800 ... 8.507077 8.507077
+2 512.0 14.563555 ... 15.420235 16.384000
 3 640.0 22.260869 ... 24.380953 24.380953
-4 768.0 32.768000 ... 35.389441 34.028308
+4 768.0 32.768000 ... 34.028308 34.028308
 5 896.0 39.025776 ... 40.140799 39.025776
 6 1024.0 49.932191 ... 53.773130 52.428801
-7 1152.0 45.242181 ... 47.396572 47.396572
+7 1152.0 45.242181 ... 48.161033 47.396572
 8 1280.0 51.200001 ... 57.690139 57.690139
-9 1408.0 64.138541 ... 68.147202 67.305878
-10 1536.0 80.430545 ... 81.355034 78.643199
+9 1408.0 64.138541 ... 68.147202 66.485074
+10 1536.0 79.526831 ... 81.355034 79.526831
 11 1664.0 63.372618 ... 63.372618 62.492442
 12 1792.0 72.983276 ... 73.460287 59.467852
-13 1920.0 69.467336 ... 70.892307 71.257735
-14 2048.0 73.584279 ... 78.033565 77.314362
-15 2176.0 83.155572 ... 87.494120 85.998493
-16 2304.0 68.251065 ... 78.064941 77.307030
-17 2432.0 71.125224 ... 86.444504 85.653855
-18 2560.0 77.833728 ... 82.747477 81.715711
-19 2688.0 83.737433 ... 90.748936 89.044730
-20 2816.0 79.587973 ... 84.852542 84.197315
-21 2944.0 81.967162 ... 83.758038 82.102191
-22 3072.0 81.825298 ... 89.877939 88.473602
-23 3200.0 84.656085 ... 96.822991 95.522391
-24 3328.0 83.034941 ... 86.320498 82.891535
-25 3456.0 81.026701 ... 82.350937 87.442050
-26 3584.0 87.296493 ... 99.574077 98.699661
-27 3712.0 84.088676 ... 89.513749 87.860458
-28 3840.0 81.019778 ... 89.766237 90.352941
-29 3968.0 85.992909 ... 92.723355 84.300694
-30 4096.0 93.990003 ... 88.127204 87.495257
+13 1920.0 68.776119 ... 71.257735 70.892307
+14 2048.0 73.584279 ... 78.398206 77.314362
+15 2176.0 83.155572 ... 87.494120 86.367588
+16 2304.0 68.446623 ... 78.064941 77.558029
+17 2432.0 71.487187 ... 86.711310 75.320281
+18 2560.0 77.833728 ... 82.747477 81.613947
+19 2688.0 83.369354 ... 90.966561 89.676257
+20 2816.0 83.552120 ... 84.360174 84.197315
+21 2944.0 81.298583 ... 83.060049 83.198715
+22 3072.0 81.884457 ... 87.787755 85.404375
+23 3200.0 85.106381 ... 96.896287 95.522391
+24 3328.0 83.808259 ... 85.806075 85.602017
+25 3456.0 79.351933 ... 87.064328 89.183149
+26 3584.0 87.127323 ... 91.192076 96.787292
+27 3712.0 85.675250 ... 93.274830 87.937800
+28 3840.0 81.798814 ... 86.197974 90.279183
+29 3968.0 86.051653 ... 92.864488 86.205539
+30 4096.0 94.386588 ... 88.243079 84.573239
 [31 rows x 5 columns]
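By contrast, the matmul table above reports compute throughput in TFLOPS. A sketch of the conversion, taking the last row of the sweep (M = N = K = 4096) as an example::

    # A GEMM performs 2*M*N*K floating-point operations (one multiply and
    # one add per inner-product term).
    M = N = K = 4096
    perf = lambda ms: 2 * M * N * K * 1e-12 / (ms * 1e-3)

    print(perf(1.5))  # a 1.5 ms run at this size is ~91.6 TFLOPS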
@@ -499,7 +499,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 6 minutes 39.020 seconds)
+**Total running time of the script:** ( 7 minutes 8.783 seconds)
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:


@@ -240,7 +240,7 @@ References
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.012 seconds)
+**Total running time of the script:** ( 0 minutes 0.287 seconds)
 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:


@@ -42,32 +42,32 @@ Layer Normalization
 1 1536.0 630.153868 323.368435 511.999982
 2 2048.0 668.734716 334.367358 520.126988
 3 2560.0 694.237267 362.477870 512.000013
-4 3072.0 712.347810 375.206126 496.484863
-5 3584.0 725.873439 384.859062 451.527536
-6 4096.0 720.175811 381.023256 455.111095
+4 3072.0 712.347810 375.206126 501.551037
+5 3584.0 725.873439 384.859062 458.751978
+6 4096.0 728.177767 381.023256 458.293714
 7 4608.0 670.254540 394.267384 426.173427
-8 5120.0 688.403381 397.669909 420.102563
-9 5632.0 704.000002 395.228063 415.262685
-10 6144.0 697.191505 402.885254 409.600010
+8 5120.0 688.403381 397.669909 426.666652
+9 5632.0 698.542675 395.228063 413.357796
+10 6144.0 697.191505 402.885254 411.313806
 11 6656.0 700.631610 398.861429 400.360920
-12 7168.0 690.891575 396.844306 386.154893
-13 7680.0 678.895043 392.587863 386.415087
+12 7168.0 686.754468 396.844306 387.459443
+13 7680.0 678.895043 392.587863 387.634072
 14 8192.0 633.198054 393.609605 371.308771
-15 8704.0 624.502255 389.005597 380.502740
-16 9216.0 604.327881 407.337026 383.002605
-17 9728.0 585.142883 409.599987 382.427505
+15 8704.0 627.315309 389.005597 380.502740
+16 9216.0 606.814809 407.337026 383.002605
+17 9728.0 587.350922 409.599987 383.369452
 18 10240.0 564.965524 408.578556 382.803739
 19 10752.0 546.133312 411.559798 381.445676
 20 11264.0 533.207081 406.826188 373.134567
 21 11776.0 520.486200 409.599991 377.587162
-22 12288.0 516.031509 413.911572 383.251457
+22 12288.0 513.336807 413.911572 383.251457
 23 12800.0 504.433489 410.420828 376.470582
 24 13312.0 494.180982 405.699062 376.310952
-25 13824.0 482.934503 411.122660 379.389355
+25 13824.0 481.882350 411.888257 379.389355
 26 14336.0 471.967074 406.695045 374.185964
 27 14848.0 461.297068 408.192434 375.304904
 28 15360.0 454.269882 406.214870 378.092307
-29 15872.0 447.887117 406.974373 376.225175
+29 15872.0 447.098578 406.974373 376.225175
@@ -393,7 +393,7 @@ Layer Normalization
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 5 minutes 39.388 seconds)
+**Total running time of the script:** ( 5 minutes 37.145 seconds)
 .. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:


@@ -385,7 +385,7 @@ This is a Triton implementation of the Flash Attention algorithm
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.073 seconds)
+**Total running time of the script:** ( 0 minutes 0.070 seconds)
 .. _sphx_glr_download_getting-started_tutorials_06-fused-attention.py:


@@ -152,7 +152,7 @@ We can also customize the libdevice library path by passing the path to the `lib
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.010 seconds)
+**Total running time of the script:** ( 0 minutes 0.248 seconds)
 .. _sphx_glr_download_getting-started_tutorials_07-libdevice-function.py:
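The truncated hunk context above refers to the tutorial's libdevice path customization. A minimal sketch of calling a libdevice function and pointing Triton at a specific bitcode file, using the ``extern_libs`` launch argument from this era of Triton (the path is an assumption; adjust it to your CUDA install)::

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def asin_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        # libdevice call resolved against the bitcode file passed at launch
        tl.store(y_ptr + offsets, tl.libdevice.asin(x), mask=mask)

    x = torch.rand(98432, device='cuda')
    y = torch.empty_like(x)
    grid = lambda meta: (triton.cdiv(x.numel(), meta['BLOCK_SIZE']),)
    # Assumed default location of NVIDIA's libdevice bitcode.
    extern_libs = {'libdevice': '/usr/local/cuda/nvvm/libdevice/libdevice.10.bc'}
    asin_kernel[grid](x, y, x.numel(), BLOCK_SIZE=1024, extern_libs=extern_libs)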


@@ -5,20 +5,20 @@
 Computation times
 =================
-**17:37.003** total execution time for **getting-started_tutorials** files:
+**18:00.906** total execution time for **getting-started_tutorials** files:
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:39.020 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 07:08.783 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:39.388 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:37.145 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:32.284 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:31.920 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:46.215 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:42.454 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``) | 00:00.073 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.287 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.012 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``) | 00:00.248 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_07-libdevice-function.py` (``07-libdevice-function.py``) | 00:00.010 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_06-fused-attention.py` (``06-fused-attention.py``) | 00:00.070 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+