[GH-PAGES] Updated website

Philippe Tillet
2022-02-09 10:00:38 +00:00
parent e3b4440dff
commit 5bffdf9c89
158 changed files with 276 additions and 276 deletions


@@ -235,10 +235,10 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
0 4096.0 9.600000 9.600000
1 8192.0 19.200000 19.200000
2 16384.0 38.400001 38.400001
- 3 32768.0 63.999998 63.999998
+ 3 32768.0 76.800002 76.800002
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
- 6 262144.0 384.000001 384.000001
+ 6 262144.0 341.333321 384.000001
7 524288.0 472.615390 472.615390
8 1048576.0 614.400016 614.400016
9 2097152.0 722.823517 722.823517
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 42.537 seconds)
+ **Total running time of the script:** ( 1 minutes 36.291 seconds)
.. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:


@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
softmax-performance:
N Triton Torch (native) Torch (jit)
- 0 256.0 512.000001 546.133347 190.511628
- 1 384.0 614.400016 585.142862 153.600004
- 2 512.0 655.360017 606.814814 154.566038
- 3 640.0 706.206879 640.000002 160.000000
- 4 768.0 722.823517 664.216187 162.754967
+ 0 256.0 546.133347 546.133347 190.511628
+ 1 384.0 585.142862 585.142862 153.600004
+ 2 512.0 655.360017 585.142849 156.038096
+ 3 640.0 682.666684 640.000002 160.000000
+ 4 768.0 722.823517 664.216187 163.839992
.. ... ... ... ...
- 93 12160.0 814.058574 406.179533 198.733401
+ 93 12160.0 814.058574 405.755985 198.936606
94 12288.0 814.111783 415.661740 198.995960
- 95 12416.0 814.163950 412.149375 198.556711
- 96 12544.0 814.214963 412.971190 198.815254
- 97 12672.0 814.265046 412.097543 198.873965
+ 95 12416.0 814.163950 411.722274 198.755369
+ 96 12544.0 814.214963 412.971190 198.913776
+ 97 12672.0 814.265046 412.516771 199.069228
[98 rows x 4 columns]
@@ -306,7 +306,7 @@ In the above plot, we can see that:
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 3 minutes 22.066 seconds)
+ **Total running time of the script:** ( 3 minutes 20.551 seconds)
.. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:


@@ -458,37 +458,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
- 0 256.0 2.730667 ... 2.978909 2.978909
- 1 384.0 7.372800 ... 8.507077 7.899428
+ 0 256.0 2.978909 ... 2.978909 3.276800
+ 1 384.0 7.372800 ... 8.507077 8.507077
2 512.0 14.563555 ... 16.384000 16.384000
3 640.0 22.260869 ... 24.380953 24.380953
4 768.0 32.768000 ... 34.028308 34.028308
- 5 896.0 39.025776 ... 39.025776 39.025776
+ 5 896.0 37.971025 ... 40.140799 39.025776
6 1024.0 49.932191 ... 52.428801 52.428801
7 1152.0 45.242181 ... 46.656000 46.656000
8 1280.0 51.200001 ... 56.888887 56.888887
9 1408.0 64.138541 ... 67.305878 66.485074
- 10 1536.0 80.430545 ... 79.526831 78.643199
+ 10 1536.0 80.430545 ... 79.526831 79.526831
11 1664.0 62.929456 ... 62.492442 62.061463
- 12 1792.0 72.512412 ... 71.588687 72.047592
- 13 1920.0 69.467336 ... 70.172588 70.172588
- 14 2048.0 73.262953 ... 76.608294 76.608294
- 15 2176.0 83.500614 ... 85.998493 85.998493
- 16 2304.0 68.446623 ... 76.563695 76.319081
- 17 2432.0 71.305746 ... 74.521127 85.134737
- 18 2560.0 78.019048 ... 80.908642 81.108913
- 19 2688.0 82.823267 ... 89.888756 89.464755
- 20 2816.0 83.233226 ... 82.916747 82.759409
- 21 2944.0 81.967162 ... 82.509987 82.237674
- 22 3072.0 82.181572 ... 88.197981 87.651868
- 23 3200.0 84.099871 ... 93.567248 94.674553
- 24 3328.0 79.901550 ... 84.945483 84.496824
- 25 3456.0 81.600781 ... 91.511426 84.775569
- 26 3584.0 83.798127 ... 86.542919 95.148565
- 27 3712.0 81.615477 ... 88.640059 82.017526
- 28 3840.0 84.744825 ... 91.777595 85.399230
- 29 3968.0 92.024087 ... 84.183469 89.558851
- 30 4096.0 86.424811 ... 87.438257 91.992956
+ 12 1792.0 72.512412 ... 72.047592 72.047592
+ 13 1920.0 69.120002 ... 70.172588 70.530615
+ 14 2048.0 73.584279 ... 76.959706 76.608294
+ 15 2176.0 83.155572 ... 86.367588 85.269692
+ 16 2304.0 68.446623 ... 77.057651 76.076024
+ 17 2432.0 71.305746 ... 84.367759 85.134737
+ 18 2560.0 78.019048 ... 80.908642 80.709358
+ 19 2688.0 83.369354 ... 89.676257 89.254248
+ 20 2816.0 79.733474 ... 83.392363 83.233226
+ 21 2944.0 81.967162 ... 82.237674 82.102191
+ 22 3072.0 81.825298 ... 88.612060 88.473602
+ 23 3200.0 84.880639 ... 95.096582 95.096582
+ 24 3328.0 83.808259 ... 84.101981 83.905938
+ 25 3456.0 78.655188 ... 85.950501 88.400840
+ 26 3584.0 87.296493 ... 97.628001 98.268190
+ 27 3712.0 80.757757 ... 88.326564 85.822459
+ 28 3840.0 83.027026 ... 91.059692 84.613126
+ 29 3968.0 90.724116 ... 87.347124 90.791620
+ 30 4096.0 86.313653 ... 86.424811 91.056800
[31 rows x 5 columns]
@@ -498,7 +498,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 6 minutes 4.559 seconds)
+ **Total running time of the script:** ( 5 minutes 56.072 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:


@@ -240,7 +240,7 @@ References
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 0 minutes 0.480 seconds)
+ **Total running time of the script:** ( 0 minutes 0.482 seconds)
.. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:


@@ -38,36 +38,36 @@ Layer Normalization
layer-norm-backward:
N Triton Torch Apex
- 0 1024.0 307.200008 98.303995 307.200008
- 1 1536.0 347.773587 134.050910 341.333333
- 2 2048.0 420.102553 161.684218 325.509933
- 3 2560.0 458.507457 181.238943 326.808501
- 4 3072.0 511.999982 191.999993 317.793096
- 5 3584.0 547.872604 208.271186 310.527060
- 6 4096.0 564.965515 220.412561 294.323343
- 7 4608.0 504.986315 232.825259 290.267724
- 8 5120.0 527.381977 242.845844 287.775181
- 9 5632.0 542.843364 243.545956 288.820505
- 10 6144.0 546.133354 248.661056 285.767458
- 11 6656.0 532.479975 256.000009 285.767438
- 12 7168.0 507.469040 260.260201 286.242939
- 13 7680.0 479.999983 262.564106 279.272719
- 14 8192.0 462.607053 267.130429 284.526763
- 15 8704.0 417.791980 267.815384 284.599455
- 16 9216.0 431.157889 272.394084 288.751954
- 17 9728.0 438.857162 280.278512 290.027323
- 18 10240.0 449.287041 286.433562 290.153487
- 19 10752.0 427.231788 246.935876 290.594591
- 20 11264.0 426.397479 245.760001 286.676558
- 21 11776.0 423.089806 249.888595 288.981596
- 22 12288.0 419.504980 254.673582 294.323369
- 23 12800.0 414.016170 253.674644 288.180121
- 24 13312.0 411.181478 252.959629 289.916513
- 25 13824.0 404.112047 257.190689 292.056329
- 26 14336.0 393.215988 254.485198 286.719986
- 27 14848.0 385.245405 257.665934 289.012175
- 28 15360.0 373.495460 257.970599 286.211174
- 29 15872.0 371.637071 261.626369 289.899545
+ 0 1024.0 307.200008 99.902435 307.200008
+ 1 1536.0 351.085717 135.032961 341.333333
+ 2 2048.0 420.102553 162.754967 327.679984
+ 3 2560.0 458.507457 182.857144 330.322572
+ 4 3072.0 515.580429 191.501303 319.168834
+ 5 3584.0 547.872604 207.768111 311.652167
+ 6 4096.0 568.231237 221.905193 301.546004
+ 7 4608.0 504.986315 232.336141 287.999990
+ 8 5120.0 531.948056 242.366855 285.104413
+ 9 5632.0 538.517949 243.545956 290.683877
+ 10 6144.0 546.133354 250.349744 288.000001
+ 11 6656.0 536.053693 256.000009 286.279570
+ 12 7168.0 510.480705 252.988236 277.024148
+ 13 7680.0 482.513091 267.130429 284.444450
+ 14 8192.0 463.698115 269.326017 282.482757
+ 15 8704.0 417.791980 264.425310 282.673891
+ 16 9216.0 431.157889 274.762727 291.031570
+ 17 9728.0 439.683593 281.630872 290.027323
+ 18 10240.0 446.025405 286.100109 289.811322
+ 19 10752.0 425.120247 246.935876 289.941565
+ 20 11264.0 425.056596 243.765566 283.371073
+ 21 11776.0 423.089806 249.888595 289.129414
+ 22 12288.0 421.302872 254.673582 295.207195
+ 23 12800.0 414.574901 254.515329 290.909089
+ 24 13312.0 414.381327 253.561895 289.129403
+ 25 13824.0 406.090579 257.790206 293.088338
+ 26 14336.0 394.116833 256.381525 290.349381
+ 27 14848.0 385.245405 255.999999 287.844912
+ 28 15360.0 376.932517 262.751252 291.184839
+ 29 15872.0 370.913333 262.166551 291.118085
@@ -339,7 +339,7 @@ Layer Normalization
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 2 minutes 12.003 seconds)
+ **Total running time of the script:** ( 2 minutes 11.560 seconds)
.. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:


@@ -5,16 +5,16 @@
Computation times
=================
- **13:21.645** total execution time for **getting-started_tutorials** files:
+ **13:04.956** total execution time for **getting-started_tutorials** files:
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:04.559 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 05:56.072 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:22.066 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:20.551 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 02:12.003 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 02:11.560 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:42.537 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:36.291 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.480 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.482 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+