[GH-PAGES] Updated website

Author: Philippe Tillet
Date: 2022-02-14 00:38:35 +00:00
parent 13537582ad
commit 819da42584
158 changed files with 248 additions and 248 deletions


@@ -235,17 +235,17 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
0 4096.0 9.600000 9.600000
1 8192.0 19.200000 19.200000
2 16384.0 38.400001 38.400001
- 3 32768.0 76.800002 76.800002
+ 3 32768.0 63.999998 63.999998
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
- 6 262144.0 341.333321 384.000001
+ 6 262144.0 384.000001 384.000001
7 524288.0 472.615390 472.615390
8 1048576.0 614.400016 614.400016
9 2097152.0 722.823517 722.823517
10 4194304.0 780.190482 780.190482
11 8388608.0 812.429770 812.429770
12 16777216.0 833.084721 833.084721
- 13 33554432.0 842.004273 843.811163
+ 13 33554432.0 842.004273 842.004273
14 67108864.0 847.448255 848.362445
15 134217728.0 849.737435 850.656574
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 42.825 seconds)
+ **Total running time of the script:** ( 1 minutes 40.274 seconds)
.. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
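
The vector-add table above is produced by the tutorial's benchmark function decorated with :code:`triton.testing.perf_report`; both columns report effective bandwidth in GB/s. A minimal sketch of that kind of harness follows. It is an illustration only: the :code:`add` wrapper (the tutorial's Triton-backed vector addition), the exact :code:`x_vals` range, and the three-value return of :code:`do_bench` are assumptions matching the Triton version these pages were built with, not values taken from this diff.

.. code-block:: python

    import torch
    import triton

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                        # x-axis: number of elements
            x_vals=[2 ** i for i in range(12, 28)],  # assumed range; matches 4096 .. 134217728 above
            x_log=True,
            line_arg='provider',
            line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        if provider == 'torch':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: x + y)
        else:
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: add(x, y))  # `add`: Triton wrapper, assumed defined
        # three float32 tensors (x, y, output) are touched, i.e. 12 bytes per element
        gbps = lambda ms: 12 * size / ms * 1e-6
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    benchmark.run(print_data=True, show_plots=False)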


@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
softmax-performance:
N Triton Torch (native) Torch (jit)
- 0 256.0 512.000001 546.133347 188.321838
+ 0 256.0 512.000001 546.133347 190.511628
1 384.0 614.400016 585.142862 153.600004
2 512.0 655.360017 606.814814 154.566038
3 640.0 706.206879 640.000002 160.000000
4 768.0 722.823517 664.216187 162.754967
.. ... ... ... ...
- 93 12160.0 815.765209 406.179533 198.631953
- 94 12288.0 815.800825 415.661740 198.895304
- 95 12416.0 814.163950 412.149375 198.457532
- 96 12544.0 814.214963 412.971190 198.716830
- 97 12672.0 814.265046 411.679167 198.679085
+ 93 12160.0 814.058574 406.179533 198.733401
+ 94 12288.0 814.111783 415.661740 198.995960
+ 95 12416.0 814.163950 412.149375 198.556711
+ 96 12544.0 814.214963 413.396498 198.913776
+ 97 12672.0 814.265046 412.097543 198.873965
[98 rows x 4 columns]
@@ -306,7 +306,7 @@ In the above plot, we can see that:
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 3 minutes 22.274 seconds)
+ **Total running time of the script:** ( 3 minutes 23.079 seconds)
.. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
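
In the softmax table, "Torch (native)" is :code:`torch.softmax` and "Torch (jit)" is a naive, eager-style implementation wrapped in :code:`torch.jit.script`. The sketch below shows what such a naive baseline typically looks like; the shapes in the usage line are assumptions, not values from this run.

.. code-block:: python

    import torch

    @torch.jit.script
    def naive_softmax(x):
        # subtract the row-wise max for numerical stability, then normalize;
        # this materializes several intermediates, hence the extra memory traffic
        x_max = x.max(dim=1)[0]
        z = x - x_max[:, None]
        numerator = torch.exp(z)
        denominator = numerator.sum(dim=1)
        return numerator / denominator[:, None]

    # sanity check against the native kernel (4096 rows, N = 1024 columns assumed)
    x = torch.randn(4096, 1024, device='cuda', dtype=torch.float32)
    assert torch.allclose(naive_softmax(x), torch.softmax(x, dim=1), atol=1e-6)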


@@ -458,9 +458,9 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
- 0 256.0 2.978909 ... 2.978909 2.978909
- 1 384.0 7.372800 ... 8.507077 8.192000
- 2 512.0 14.563555 ... 16.384000 16.384000
+ 0 256.0 2.730667 ... 2.978909 2.978909
+ 1 384.0 7.372800 ... 8.507077 8.507077
+ 2 512.0 14.563555 ... 15.420235 16.384000
3 640.0 22.260869 ... 24.380953 24.380953
4 768.0 32.768000 ... 34.028308 34.028308
5 896.0 39.025776 ... 40.140799 39.025776
@@ -468,27 +468,27 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
7 1152.0 45.242181 ... 46.656000 46.656000
8 1280.0 51.200001 ... 56.888887 56.888887
9 1408.0 64.138541 ... 67.305878 66.485074
- 10 1536.0 79.526831 ... 79.526831 78.643199
+ 10 1536.0 80.430545 ... 79.526831 78.643199
11 1664.0 62.929456 ... 62.492442 62.061463
- 12 1792.0 72.512412 ... 71.588687 72.047592
- 13 1920.0 69.120002 ... 70.172588 70.172588
- 14 2048.0 73.908442 ... 76.959706 76.260072
- 15 2176.0 83.155572 ... 85.998493 85.998493
- 16 2304.0 68.643310 ... 76.809875 76.076024
- 17 2432.0 71.125224 ... 84.877538 85.134737
- 18 2560.0 78.019048 ... 80.908642 81.108913
- 19 2688.0 82.642823 ... 89.995386 89.464755
- 20 2816.0 83.074685 ... 83.552120 82.680963
- 21 2944.0 81.832567 ... 81.564701 81.967162
- 22 3072.0 81.707223 ... 88.060814 88.612060
- 23 3200.0 80.706181 ... 95.167286 94.814812
- 24 3328.0 83.226931 ... 84.003845 84.298943
- 25 3456.0 79.430113 ... 84.909497 89.380896
- 26 3584.0 87.466332 ... 97.734120 98.160909
- 27 3712.0 79.917877 ... 86.942857 89.035062
- 28 3840.0 84.292684 ... 91.473945 86.467555
- 29 3968.0 90.791620 ... 80.864108 86.572497
- 30 4096.0 88.592559 ... 86.928580 91.366730
+ 12 1792.0 72.512412 ... 72.047592 71.588687
+ 13 1920.0 68.776119 ... 70.172588 70.172588
+ 14 2048.0 73.262953 ... 76.608294 76.260072
+ 15 2176.0 83.155572 ... 85.998493 85.632545
+ 16 2304.0 68.643310 ... 76.563695 76.319081
+ 17 2432.0 71.487187 ... 84.621881 85.134737
+ 18 2560.0 78.019048 ... 81.310171 81.108913
+ 19 2688.0 83.369354 ... 89.254248 89.464755
+ 20 2816.0 81.218262 ... 82.135981 83.392363
+ 21 2944.0 82.102191 ... 82.509987 82.373605
+ 22 3072.0 81.707223 ... 86.845249 84.135370
+ 23 3200.0 84.880639 ... 87.795257 90.780140
+ 24 3328.0 79.990330 ... 84.101981 84.596116
+ 25 3456.0 81.600781 ... 90.943675 84.775569
+ 26 3584.0 85.715344 ... 93.661869 95.047985
+ 27 3712.0 81.615477 ... 88.365630 83.982636
+ 28 3840.0 82.592983 ... 87.286505 91.322872
+ 29 3968.0 86.973584 ... 90.926929 83.982489
+ 30 4096.0 92.500158 ... 88.417474 85.271746
[31 rows x 5 columns]
@@ -498,7 +498,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 6 minutes 8.357 seconds)
+ **Total running time of the script:** ( 6 minutes 4.120 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
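
The matmul table reports TFLOPS for square problems (M = N = K), with :code:`torch.matmul` standing in for cuBLAS. The sketch below shows how a single entry of that kind can be derived from a measured runtime; it uses plain CUDA events rather than the tutorial's :code:`triton.testing` harness, and the problem size and repetition count are arbitrary choices, not values from this run.

.. code-block:: python

    import torch

    def tflops(M: int, N: int, K: int, ms: float) -> float:
        # a matmul performs 2*M*N*K floating-point operations (one multiply and one
        # add per inner-product term); convert the runtime from ms to s and scale to tera
        return 2 * M * N * K * 1e-12 / (ms * 1e-3)

    M = N = K = 2048
    a = torch.randn((M, K), device='cuda', dtype=torch.float16)
    b = torch.randn((K, N), device='cuda', dtype=torch.float16)

    torch.matmul(a, b)  # warm-up
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(100):
        torch.matmul(a, b)
    end.record()
    torch.cuda.synchronize()
    ms = start.elapsed_time(end) / 100
    print(f'cuBLAS reference: {tflops(M, N, K, ms):.2f} TFLOPS')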


@@ -240,7 +240,7 @@ References
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 0 minutes 0.489 seconds)
+ **Total running time of the script:** ( 0 minutes 0.481 seconds)
.. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:


@@ -39,35 +39,35 @@ Layer Normalization
layer-norm-backward:
N Triton Torch Apex
0 1024.0 311.088617 98.303995 307.200008
- 1 1536.0 347.773587 134.540150 341.333333
- 2 2048.0 420.102553 161.684218 334.367350
- 3 2560.0 458.507457 181.775141 330.322572
- 4 3072.0 511.999982 192.501302 320.556515
- 5 3584.0 547.872604 208.271186 311.652167
- 6 4096.0 568.231237 220.412561 297.890900
- 7 4608.0 504.986315 232.825259 286.507772
- 8 5120.0 529.655159 242.845844 285.104413
- 9 5632.0 545.032265 243.545956 289.438969
- 10 6144.0 548.163546 248.661056 285.767458
- 11 6656.0 534.260858 256.000009 285.767438
- 12 7168.0 507.469040 260.457220 286.242939
- 13 7680.0 481.253256 262.190612 275.104486
+ 1 1536.0 347.773587 134.050910 341.333333
+ 2 2048.0 420.102553 161.684218 325.509933
+ 3 2560.0 458.507457 181.775141 326.808501
+ 4 3072.0 511.999982 192.501302 319.168834
+ 5 3584.0 551.384634 208.271186 310.527060
+ 6 4096.0 564.965515 220.412561 298.796351
+ 7 4608.0 504.986315 232.825259 287.251954
+ 8 5120.0 529.655159 242.845844 287.102804
+ 9 5632.0 542.843364 243.545956 289.438969
+ 10 6144.0 546.133354 248.661056 286.879370
+ 11 6656.0 532.479975 256.000009 285.767438
+ 12 7168.0 507.469040 260.654538 286.242939
+ 13 7680.0 479.999983 262.190612 278.850215
14 8192.0 462.607053 267.130429 284.939124
15 8704.0 417.791980 267.815384 284.599455
16 9216.0 431.157889 272.394084 288.751954
- 17 9728.0 438.857162 280.615388 290.027323
- 18 10240.0 449.287041 286.433562 287.438599
- 19 10752.0 427.231788 247.172406 290.594591
+ 17 9728.0 439.683593 280.278512 290.027323
+ 18 10240.0 449.287041 286.433562 290.153487
+ 19 10752.0 427.231788 247.172406 290.922209
20 11264.0 427.071098 245.760001 286.676558
- 21 11776.0 422.457417 249.667843 288.686414
- 22 12288.0 419.504980 254.453844 294.029924
- 23 12800.0 414.016170 253.256381 289.538159
+ 21 11776.0 423.089806 249.667843 288.981596
+ 22 12288.0 419.504980 254.893699 294.323369
+ 23 12800.0 414.016170 253.674644 288.180121
24 13312.0 411.181478 252.759501 289.916513
- 25 13824.0 404.112047 257.190689 292.056329
+ 25 13824.0 404.604870 257.190689 292.056329
26 14336.0 393.215988 254.485198 286.719986
27 14848.0 385.245405 257.665934 289.246765
- 28 15360.0 373.495460 257.970599 287.102804
- 29 15872.0 371.637071 261.806182 289.899545
+ 28 15360.0 373.874218 257.970599 286.211174
+ 29 15872.0 371.274849 261.806182 289.679087
@@ -339,7 +339,7 @@ Layer Normalization
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 2 minutes 12.324 seconds)
+ **Total running time of the script:** ( 2 minutes 12.351 seconds)
.. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:
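
The layer-norm table benchmarks the backward pass and reports effective bandwidth in GB/s for Triton, native PyTorch, and (judging by the column name) NVIDIA Apex's fused layer norm. Below is a rough sketch of how the Torch column could be timed; the batch size, the repetition count, and the three-tensor traffic estimate are assumptions, not the tutorial's exact formula.

.. code-block:: python

    import torch

    def layernorm_backward_gbps(N: int, M: int = 4096, reps: int = 100) -> float:
        # run the forward pass once, then time only the backward pass
        x = torch.randn(M, N, device='cuda', dtype=torch.float16, requires_grad=True)
        w = torch.randn(N, device='cuda', dtype=torch.float16, requires_grad=True)
        b = torch.randn(N, device='cuda', dtype=torch.float16, requires_grad=True)
        dy = torch.randn(M, N, device='cuda', dtype=torch.float16)
        y = torch.nn.functional.layer_norm(x, (N,), w, b)

        y.backward(dy, retain_graph=True)  # warm-up
        torch.cuda.synchronize()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(reps):
            x.grad = None
            y.backward(dy, retain_graph=True)
        end.record()
        torch.cuda.synchronize()
        ms = start.elapsed_time(end) / reps
        # crude traffic estimate: read x and dy, write dx (three M*N fp16 tensors)
        return 3 * x.numel() * x.element_size() * 1e-9 / (ms * 1e-3)

    print(f'torch layer_norm backward: {layernorm_backward_gbps(4096):.1f} GB/s')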


@@ -5,16 +5,16 @@
Computation times
=================
- **13:26.269** total execution time for **getting-started_tutorials** files:
+ **13:20.304** total execution time for **getting-started_tutorials** files:
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:08.357 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:04.120 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:22.274 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:23.079 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 02:12.324 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 02:12.351 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:42.825 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:40.274 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.489 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.481 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+