[GH-PAGES] Updated website

This commit is contained in:
Philippe Tillet
2022-04-22 00:44:52 +00:00
parent ab04e47bf2
commit cb69ba73a9
158 changed files with 312 additions and 312 deletions

@@ -241,7 +241,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 6 262144.0 341.333321 341.333321
 7 524288.0 472.615390 472.615390
 8 1048576.0 614.400016 614.400016
-9 2097152.0 722.823517 702.171410
+9 2097152.0 702.171410 702.171410
 10 4194304.0 780.190482 780.190482
 11 8388608.0 812.429770 812.429770
 12 16777216.0 833.084721 833.084721
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 1 minutes 37.409 seconds)
+**Total running time of the script:** ( 1 minutes 46.845 seconds)
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:

@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 softmax-performance:
 N Triton Torch (native) Torch (jit)
-0 256.0 512.000001 546.133347 186.181817
-1 384.0 585.142862 585.142862 153.600004
-2 512.0 655.360017 606.814814 154.566038
-3 640.0 682.666684 620.606056 160.000000
-4 768.0 722.823517 664.216187 162.754967
+0 256.0 512.000001 546.133347 190.511628
+1 384.0 585.142862 558.545450 153.600004
+2 512.0 655.360017 585.142849 153.121496
+3 640.0 682.666684 620.606056 158.759699
+4 768.0 722.823517 646.736871 162.754967
 .. ... ... ... ...
 93 12160.0 814.058574 405.755985 198.936606
-94 12288.0 815.800825 415.661740 199.298541
-95 12416.0 814.163950 411.722274 198.854847
-96 12544.0 814.214963 412.546756 199.061730
-97 12672.0 814.265046 412.097543 199.167004
+94 12288.0 814.111783 415.222812 199.197579
+95 12416.0 814.163950 411.296057 198.854847
+96 12544.0 812.566838 412.546756 199.012395
+97 12672.0 814.265046 412.516771 199.167004
 [98 rows x 4 columns]
@@ -306,7 +306,7 @@ In the above plot, we can see that:
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 3 minutes 27.320 seconds)
+**Total running time of the script:** ( 3 minutes 29.190 seconds)
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:

@@ -459,36 +459,36 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
 0 256.0 2.730667 ... 2.978909 2.978909
-1 384.0 7.372800 ... 7.899428 7.899428
-2 512.0 14.563555 ... 15.420235 16.384000
+1 384.0 7.372800 ... 8.507077 8.507077
+2 512.0 14.563555 ... 15.420235 15.420235
 3 640.0 22.260869 ... 24.380953 24.380953
 4 768.0 32.768000 ... 34.028308 34.028308
-5 896.0 37.971025 ... 41.321411 40.140799
+5 896.0 37.971025 ... 40.140799 40.140799
 6 1024.0 49.932191 ... 53.773130 52.428801
-7 1152.0 44.566925 ... 48.161033 47.396572
+7 1152.0 45.242181 ... 48.161033 47.396572
 8 1280.0 51.200001 ... 57.690139 57.690139
 9 1408.0 64.138541 ... 69.009825 68.147202
 10 1536.0 80.430545 ... 80.430545 79.526831
-11 1664.0 62.929456 ... 63.372618 62.929456
-12 1792.0 72.983276 ... 63.499573 62.790080
-13 1920.0 69.120002 ... 71.626943 70.892307
-14 2048.0 73.262953 ... 78.033565 78.033565
-15 2176.0 83.155572 ... 87.115360 86.739860
-16 2304.0 68.446623 ... 77.558029 77.307030
-17 2432.0 71.125224 ... 75.726318 74.918570
-18 2560.0 77.833728 ... 82.331658 81.715711
-19 2688.0 83.369354 ... 91.404957 91.185232
-20 2816.0 79.733474 ... 83.873477 83.392363
-21 2944.0 82.102191 ... 83.758038 84.182483
-22 3072.0 81.296638 ... 88.473602 87.516392
-23 3200.0 79.750779 ... 96.168294 95.952022
-24 3328.0 82.939284 ... 86.528001 85.602017
-25 3456.0 82.688790 ... 92.455926 86.689860
-26 3584.0 86.291162 ... 88.152348 94.250936
-27 3712.0 85.675250 ... 87.706180 87.246590
-28 3840.0 81.079177 ... 86.875096 91.398346
-29 3968.0 87.035620 ... 89.525997 84.917596
-30 4096.0 88.243079 ... 85.161847 91.616198
+11 1664.0 62.929456 ... 62.929456 62.929456
+12 1792.0 72.983276 ... 63.142831 63.142831
+13 1920.0 68.776119 ... 71.257735 71.257735
+14 2048.0 73.584279 ... 78.398206 78.033565
+15 2176.0 82.813365 ... 86.739860 86.367588
+16 2304.0 68.251065 ... 77.558029 77.307030
+17 2432.0 71.125224 ... 75.726318 75.522751
+18 2560.0 77.283019 ... 82.331658 82.125311
+19 2688.0 83.552988 ... 91.185232 90.748936
+20 2816.0 79.443003 ... 83.792906 83.552120
+21 2944.0 82.237674 ... 84.040530 83.899046
+22 3072.0 81.589488 ... 85.598037 89.451983
+23 3200.0 84.432717 ... 88.520058 95.952022
+24 3328.0 81.854799 ... 85.703924 82.275764
+25 3456.0 80.220468 ... 92.244355 87.632137
+26 3584.0 83.101104 ... 91.284657 94.250936
+27 3712.0 85.675250 ... 82.902362 87.552452
+28 3840.0 81.079177 ... 87.493673 91.625518
+29 3968.0 87.035620 ... 90.388098 84.915752
+30 4096.0 91.366730 ... 83.208386 90.626421
 [31 rows x 5 columns]
@@ -498,7 +498,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 6 minutes 2.101 seconds)
+**Total running time of the script:** ( 6 minutes 21.623 seconds)
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:

@@ -240,7 +240,7 @@ References
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.013 seconds)
+**Total running time of the script:** ( 0 minutes 0.011 seconds)
 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:

@@ -38,36 +38,36 @@ Layer Normalization
 layer-norm-backward:
 N Triton Torch Apex
-0 1024.0 356.173905 98.303995 296.096389
-1 1536.0 409.599994 132.604320 341.333333
-2 2048.0 491.520012 161.684218 336.657521
+0 1024.0 356.173905 97.912354 303.407414
+1 1536.0 405.098894 135.032961 341.333333
+2 2048.0 486.653476 161.684218 336.657521
 3 2560.0 461.954908 182.857144 330.322572
-4 3072.0 519.211251 191.501303 319.168834
-5 3584.0 558.545477 207.768111 310.527060
-6 4096.0 564.965515 220.907859 298.796351
-7 4608.0 502.690905 232.825259 288.751954
-8 5120.0 529.655159 243.326731 285.767451
-9 5632.0 545.032265 242.236559 290.060087
-10 6144.0 550.208948 251.202731 288.563606
-11 6656.0 530.710976 255.182111 285.257135
-12 7168.0 512.000004 251.877006 276.134819
-13 7680.0 483.779539 263.690977 276.756754
-14 8192.0 465.895721 267.130429 278.876591
-15 8704.0 414.476194 266.448988 283.440968
-16 9216.0 428.651187 270.396088 287.625496
-17 9728.0 438.445087 281.630872 289.667485
-18 10240.0 444.412281 286.433562 288.112552
-19 10752.0 427.940303 246.699797 290.267711
-20 11264.0 427.071098 243.985547 284.564206
-21 11776.0 421.826879 249.447482 288.981596
-22 12288.0 418.314886 254.015505 294.323369
-23 12800.0 413.458944 253.256381 289.538159
-24 13312.0 409.075539 252.559690 290.443638
-25 13824.0 403.620451 257.590056 292.056329
-26 14336.0 400.074432 255.051144 288.160801
-27 14848.0 384.414233 257.665934 289.717061
-28 15360.0 378.092318 259.971797 288.225185
-29 15872.0 370.913333 262.166551 290.562936
+4 3072.0 519.211251 190.511624 319.168834
+5 3584.0 554.941930 207.267476 308.301075
+6 4096.0 564.965515 219.919464 297.890900
+7 4608.0 500.416301 232.825259 287.999990
+8 5120.0 527.381977 241.889751 285.104413
+9 5632.0 538.517949 242.236559 288.820505
+10 6144.0 546.133354 250.775512 288.000001
+11 6656.0 532.479975 254.775119 284.748652
+12 7168.0 505.976473 255.240352 279.272738
+13 7680.0 485.052616 264.447629 281.404588
+14 8192.0 464.794337 266.406514 282.077471
+15 8704.0 415.300208 262.762264 280.397305
+16 9216.0 428.651187 270.727053 287.065546
+17 9728.0 437.213490 279.942444 287.527089
+18 10240.0 442.810829 285.104413 289.469963
+19 10752.0 425.821771 248.123076 292.571421
+20 11264.0 427.071098 243.545956 282.778242
+21 11776.0 423.724129 248.569911 287.511689
+22 12288.0 417.131525 253.578674 294.323369
+23 12800.0 412.903215 254.726371 292.015215
+24 13312.0 408.030638 253.561895 292.036577
+25 13824.0 404.112047 256.991469 291.799461
+26 14336.0 393.215988 252.988236 284.585606
+27 14848.0 381.533186 256.552919 288.077610
+28 15360.0 377.318326 260.338991 290.267715
+29 15872.0 369.832994 262.708969 290.341468
@@ -339,7 +339,7 @@ Layer Normalization
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 2 minutes 14.528 seconds)
+**Total running time of the script:** ( 2 minutes 14.604 seconds)
 .. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:

@@ -5,16 +5,16 @@
 Computation times
 =================
-**13:21.371** total execution time for **getting-started_tutorials** files:
+**13:52.274** total execution time for **getting-started_tutorials** files:
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:02.101 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:21.623 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:27.320 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:29.190 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``)                       | 02:14.528 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``)                       | 02:14.604 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:37.409 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:46.845 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.013 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.011 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
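For context, the throughput columns in the tables above are bandwidth estimates of the form bytes moved divided by elapsed time. A minimal plain-Python sketch of that conversion (this is illustrative only, not Triton's actual `triton.testing` benchmarking helpers; the function names `gbps` and `bench` are made up for this example):

```python
import time


def gbps(nbytes: int, seconds: float) -> float:
    """Convert bytes moved and elapsed seconds to GB/s."""
    return nbytes / seconds * 1e-9


def bench(fn, warmup: int = 3, reps: int = 10) -> float:
    """Time fn with a few warmup calls; return the median seconds per call."""
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(reps):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    times.sort()
    return times[len(times) // 2]


# Example: an fp32 vector add of n elements reads two vectors and writes one,
# i.e. 3 * 4 * n = 12 * n bytes total (as in the vector-add table above,
# where the first column is n).
n = 2097152
nbytes = 12 * n
```

Small timing differences between runs (as seen throughout this diff) are expected noise from this kind of wall-clock measurement.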