[GH-PAGES] Updated website

Author: Philippe Tillet
Date:   2022-04-23 00:44:46 +00:00
Parent: cb69ba73a9
Commit: 31dd4ab60e
158 changed files with 334 additions and 334 deletions


@@ -241,13 +241,13 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
6 262144.0 341.333321 341.333321
7 524288.0 472.615390 472.615390
8 1048576.0 614.400016 614.400016
- 9 2097152.0 702.171410 702.171410
+ 9 2097152.0 722.823517 702.171410
10 4194304.0 780.190482 780.190482
11 8388608.0 812.429770 812.429770
12 16777216.0 833.084721 833.084721
13 33554432.0 842.004273 842.004273
14 67108864.0 847.448255 848.362445
- 15 134217728.0 849.737435 850.656574
+ 15 134217728.0 850.196756 850.656574
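The rows above are regenerated output of the vector-add tutorial's `triton.testing.perf_report` benchmark; the hunk context mentions `print_data=True`, which is the flag that prints this table. A condensed sketch of that kind of harness, assuming the 2022-era `triton.testing` API in which `do_bench` returns a (median, min, max) triple of milliseconds; the kernel, the block size, and the exact `x_vals` sweep are illustrative, chosen only to match the sizes listed above::

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Each program instance adds one BLOCK_SIZE-wide slice of the two vectors.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x, y):
        out = torch.empty_like(x)
        n_elements = out.numel()
        grid = lambda meta: (triton.cdiv(n_elements, meta['BLOCK_SIZE']),)
        add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
        return out

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'], x_vals=[2 ** i for i in range(12, 28)], x_log=True,
            line_arg='provider', line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'], styles=[('blue', '-'), ('green', '-')],
            ylabel='GB/s', plot_name='vector-add-performance', args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        fn = (lambda: x + y) if provider == 'torch' else (lambda: add(x, y))
        ms, min_ms, max_ms = triton.testing.do_bench(fn)
        # Two reads plus one write of 4-byte floats per element -> effective GB/s.
        gbps = lambda t: 12 * size / t * 1e-6
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    # print_data=True is what emits the table diffed above.
    benchmark.run(print_data=True, show_plots=False)

Each row then reports effective bandwidth for the two providers at one vector size, which is why the pairs of columns converge as the problem becomes memory-bound.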
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 46.845 seconds)
+ **Total running time of the script:** ( 1 minutes 52.088 seconds)
.. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:


@@ -279,16 +279,16 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
softmax-performance:
N Triton Torch (native) Torch (jit)
0 256.0 512.000001 546.133347 190.511628
- 1 384.0 585.142862 558.545450 153.600004
- 2 512.0 655.360017 585.142849 153.121496
- 3 640.0 682.666684 620.606056 158.759699
+ 1 384.0 585.142862 558.545450 149.853661
+ 2 512.0 655.360017 585.142849 154.566038
+ 3 640.0 682.666684 640.000002 158.759699
4 768.0 722.823517 646.736871 162.754967
.. ... ... ... ...
- 93 12160.0 814.058574 405.755985 198.936606
- 94 12288.0 814.111783 415.222812 199.197579
- 95 12416.0 814.163950 411.296057 198.854847
- 96 12544.0 812.566838 412.546756 199.012395
- 97 12672.0 814.265046 412.516771 199.167004
+ 93 12160.0 815.765209 405.755985 198.631953
+ 94 12288.0 814.111783 416.101597 198.794749
+ 95 12416.0 815.835709 412.149375 198.457532
+ 96 12544.0 814.214963 412.971190 198.618504
+ 97 12672.0 814.265046 412.097543 198.776477
[98 rows x 4 columns]
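Per the hunk context, the columns compare the tutorial's fused Triton softmax against `torch.softmax` (the "Torch (native)" column) and a naive, jit-compiled softmax (the "Torch (jit)" column). A rough sketch of the two Torch baselines and of the bandwidth arithmetic behind a single row, under the same `do_bench` assumption as the sketch above; the 4096-row shape and the one-read-plus-one-write traffic model are assumptions, and the Triton column would additionally need the tutorial's fused kernel, which is not reproduced here::

    import torch
    import triton

    @torch.jit.script
    def naive_softmax(x):
        # Row-wise, numerically stable softmax built from plain Torch ops;
        # every intermediate is materialised in DRAM, which caps its GB/s.
        x_max = x.max(dim=1)[0]
        z = x - x_max[:, None]
        numerator = torch.exp(z)
        denominator = numerator.sum(dim=1)
        return numerator / denominator[:, None]

    def gbps(x, ms):
        # Ideal traffic model: read the matrix once, write the result once.
        return 2 * x.nelement() * x.element_size() * 1e-9 / (ms * 1e-3)

    x = torch.randn(4096, 768, device='cuda', dtype=torch.float32)  # the N = 768.0 row above
    for name, fn in [('Torch (native)', lambda: torch.softmax(x, dim=1)),
                     ('Torch (jit)', lambda: naive_softmax(x))]:
        ms, _, _ = triton.testing.do_bench(fn)
        print(f'{name}: {gbps(x, ms):.1f} GB/s')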
@@ -306,7 +306,7 @@ In the above plot, we can see that:
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 3 minutes 29.190 seconds)
+ **Total running time of the script:** ( 3 minutes 32.198 seconds)
.. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:


@@ -459,36 +459,36 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
0 256.0 2.730667 ... 2.978909 2.978909
- 1 384.0 7.372800 ... 8.507077 8.507077
- 2 512.0 14.563555 ... 15.420235 15.420235
+ 1 384.0 7.372800 ... 7.899428 8.507077
+ 2 512.0 14.563555 ... 15.420235 16.384000
3 640.0 22.260869 ... 24.380953 24.380953
- 4 768.0 32.768000 ... 34.028308 34.028308
- 5 896.0 37.971025 ... 40.140799 40.140799
- 6 1024.0 49.932191 ... 53.773130 52.428801
- 7 1152.0 45.242181 ... 48.161033 47.396572
- 8 1280.0 51.200001 ... 57.690139 57.690139
- 9 1408.0 64.138541 ... 69.009825 68.147202
- 10 1536.0 80.430545 ... 80.430545 79.526831
- 11 1664.0 62.929456 ... 62.929456 62.929456
- 12 1792.0 72.983276 ... 63.142831 63.142831
- 13 1920.0 68.776119 ... 71.257735 71.257735
- 14 2048.0 73.584279 ... 78.398206 78.033565
- 15 2176.0 82.813365 ... 86.739860 86.367588
- 16 2304.0 68.251065 ... 77.558029 77.307030
- 17 2432.0 71.125224 ... 75.726318 75.522751
- 18 2560.0 77.283019 ... 82.331658 82.125311
- 19 2688.0 83.552988 ... 91.185232 90.748936
- 20 2816.0 79.443003 ... 83.792906 83.552120
- 21 2944.0 82.237674 ... 84.040530 83.899046
- 22 3072.0 81.589488 ... 85.598037 89.451983
- 23 3200.0 84.432717 ... 88.520058 95.952022
- 24 3328.0 81.854799 ... 85.703924 82.275764
- 25 3456.0 80.220468 ... 92.244355 87.632137
- 26 3584.0 83.101104 ... 91.284657 94.250936
- 27 3712.0 85.675250 ... 82.902362 87.552452
- 28 3840.0 81.079177 ... 87.493673 91.625518
- 29 3968.0 87.035620 ... 90.388098 84.915752
- 30 4096.0 91.366730 ... 83.208386 90.626421
+ 4 768.0 31.597714 ... 34.028308 34.028308
+ 5 896.0 36.971791 ... 40.140799 39.025776
+ 6 1024.0 48.770977 ... 52.428801 52.428801
+ 7 1152.0 43.911529 ... 46.656000 46.656000
+ 8 1280.0 49.951220 ... 56.888887 56.109587
+ 9 1408.0 62.664092 ... 67.305878 66.485074
+ 10 1536.0 78.643199 ... 78.643199 77.778988
+ 11 1664.0 61.636381 ... 62.061463 61.636381
+ 12 1792.0 71.588687 ... 67.707374 67.707374
+ 13 1920.0 67.764707 ... 70.172588 69.818184
+ 14 2048.0 72.005219 ... 76.608294 76.260072
+ 15 2176.0 81.803444 ... 85.632545 85.269692
+ 16 2304.0 67.100763 ... 76.563695 76.319081
+ 17 2432.0 69.886725 ... 74.323979 73.932798
+ 18 2560.0 76.382283 ... 80.908642 80.511054
+ 19 2688.0 82.642823 ... 89.676257 89.254248
+ 20 2816.0 82.602666 ... 82.916747 82.759409
+ 21 2944.0 81.166173 ... 82.509987 82.169877
+ 22 3072.0 81.005868 ... 88.197981 88.060814
+ 23 3200.0 83.660130 ... 95.522391 95.096582
+ 24 3328.0 82.369902 ... 84.795401 84.596116
+ 25 3456.0 81.026701 ... 91.304157 90.994998
+ 26 3584.0 86.374055 ... 98.699661 98.375705
+ 27 3712.0 84.874549 ... 88.876645 88.483034
+ 28 3840.0 84.036474 ... 92.545605 92.159996
+ 29 3968.0 92.232760 ... 91.062642 91.403695
+ 30 4096.0 93.206754 ... 93.012976 92.820009
[31 rows x 5 columns]
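The matmul columns appear to be TFLOPS, with cuBLAS reached through `torch.matmul`; the hunk context says the tutorial kernel is being compared against cuBLAS. A sketch of how the cuBLAS column alone could be reproduced, assuming square fp16 problems with M = N = K swept in steps of 128 (inferred from the M values listed above) and the same `do_bench` convention as before; the Triton and Triton (+ LeakyReLU) columns would additionally need the tutorial's matmul kernel::

    import torch
    import triton

    def tflops(M, N, K, ms):
        # A dense matmul performs 2 * M * N * K floating-point operations.
        return 2 * M * N * K * 1e-12 / (ms * 1e-3)

    for M in range(256, 4096 + 1, 128):  # the 31 sizes listed above
        N = K = M
        a = torch.randn((M, K), device='cuda', dtype=torch.float16)
        b = torch.randn((K, N), device='cuda', dtype=torch.float16)
        ms, _, _ = triton.testing.do_bench(lambda: torch.matmul(a, b))
        print(f'{M:5d}  cuBLAS: {tflops(M, N, K, ms):6.2f} TFLOPS')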
@@ -498,7 +498,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 6 minutes 21.623 seconds)
+ **Total running time of the script:** ( 6 minutes 41.514 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:


@@ -240,7 +240,7 @@ References
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 0 minutes 0.011 seconds)
+ **Total running time of the script:** ( 0 minutes 0.411 seconds)
.. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:


@@ -38,36 +38,36 @@ Layer Normalization
layer-norm-backward:
N Triton Torch Apex
- 0 1024.0 356.173905 97.912354 303.407414
- 1 1536.0 405.098894 135.032961 341.333333
- 2 2048.0 486.653476 161.684218 336.657521
- 3 2560.0 461.954908 182.857144 330.322572
- 4 3072.0 519.211251 190.511624 319.168834
- 5 3584.0 554.941930 207.267476 308.301075
- 6 4096.0 564.965515 219.919464 297.890900
- 7 4608.0 500.416301 232.825259 287.999990
- 8 5120.0 527.381977 241.889751 285.104413
- 9 5632.0 538.517949 242.236559 288.820505
- 10 6144.0 546.133354 250.775512 288.000001
- 11 6656.0 532.479975 254.775119 284.748652
- 12 7168.0 505.976473 255.240352 279.272738
- 13 7680.0 485.052616 264.447629 281.404588
- 14 8192.0 464.794337 266.406514 282.077471
- 15 8704.0 415.300208 262.762264 280.397305
- 16 9216.0 428.651187 270.727053 287.065546
- 17 9728.0 437.213490 279.942444 287.527089
- 18 10240.0 442.810829 285.104413 289.469963
- 19 10752.0 425.821771 248.123076 292.571421
- 20 11264.0 427.071098 243.545956 282.778242
- 21 11776.0 423.724129 248.569911 287.511689
- 22 12288.0 417.131525 253.578674 294.323369
- 23 12800.0 412.903215 254.726371 292.015215
- 24 13312.0 408.030638 253.561895 292.036577
- 25 13824.0 404.112047 256.991469 291.799461
- 26 14336.0 393.215988 252.988236 284.585606
- 27 14848.0 381.533186 256.552919 288.077610
- 28 15360.0 377.318326 260.338991 290.267715
- 29 15872.0 369.832994 262.708969 290.341468
+ 0 1024.0 356.173905 96.755900 299.707322
+ 1 1536.0 400.695643 133.083026 338.201833
+ 2 2048.0 486.653476 159.584422 321.254900
+ 3 2560.0 455.111129 179.649115 323.368411
+ 4 3072.0 508.468972 191.005181 320.556515
+ 5 3584.0 551.384634 206.769233 310.527060
+ 6 4096.0 558.545450 219.919464 297.890900
+ 7 4608.0 493.714279 231.364016 285.767436
+ 8 5120.0 518.481012 241.414550 283.133649
+ 9 5632.0 534.260858 241.803217 288.820505
+ 10 6144.0 540.131844 247.409397 285.214712
+ 11 6656.0 525.473708 255.182111 284.748652
+ 12 7168.0 503.017523 259.084340 283.881181
+ 13 7680.0 482.513091 262.190612 275.104486
+ 14 8192.0 463.698115 265.686491 283.296835
+ 15 8704.0 406.412440 266.789264 283.826081
+ 16 9216.0 419.703988 270.727053 287.251954
+ 17 9728.0 428.388977 279.942444 288.950501
+ 18 10240.0 437.295395 285.104413 286.767793
+ 19 10752.0 424.421071 245.292781 289.291486
+ 20 11264.0 423.724120 244.426754 286.069848
+ 21 11776.0 418.702211 247.915800 288.097854
+ 22 12288.0 414.784810 253.578674 293.737063
+ 23 12800.0 409.327110 252.631590 287.371378
+ 24 13312.0 408.030638 252.360194 289.653667
+ 25 13824.0 403.620451 256.197690 291.287079
+ 26 14336.0 394.116833 253.921779 286.242939
+ 27 14848.0 384.829370 257.108233 289.246765
+ 28 15360.0 380.041240 257.071134 284.884090
+ 29 15872.0 370.913333 260.731015 289.679087
@@ -339,7 +339,7 @@ Layer Normalization
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 2 minutes 14.604 seconds)
+ **Total running time of the script:** ( 2 minutes 16.519 seconds)
.. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:


@@ -5,16 +5,16 @@
Computation times
=================
- **13:52.274** total execution time for **getting-started_tutorials** files:
+ **14:22.730** total execution time for **getting-started_tutorials** files:
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:21.623 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:41.514 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:29.190 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:32.198 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 02:14.604 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 02:16.519 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:46.845 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:52.088 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.011 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.411 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+