[GH-PAGES] Updated website

Philippe Tillet
2022-04-26 00:43:32 +00:00
parent 21613349ac
commit b0a569b724
156 changed files with 302 additions and 302 deletions


@@ -235,17 +235,17 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
0 4096.0 9.600000 9.600000
1 8192.0 19.200000 19.200000
2 16384.0 38.400001 38.400001
-3 32768.0 63.999998 63.999998
+3 32768.0 63.999998 76.800002
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
-6 262144.0 341.333321 384.000001
+6 262144.0 341.333321 341.333321
7 524288.0 472.615390 472.615390
8 1048576.0 614.400016 614.400016
-9 2097152.0 722.823517 722.823517
+9 2097152.0 722.823517 702.171410
10 4194304.0 780.190482 780.190482
11 8388608.0 812.429770 812.429770
12 16777216.0 833.084721 833.084721
-13 33554432.0 842.004273 843.811163
+13 33554432.0 842.004273 842.004273
14 67108864.0 847.448255 848.362445
15 134217728.0 849.737435 850.656574
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 1 minutes 39.155 seconds)
+**Total running time of the script:** ( 1 minutes 41.030 seconds)
.. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
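
For context, the vector-add figures above (GB/s versus tensor size) are produced by a ``triton.testing.perf_report``-decorated benchmark run with ``print_data=True``, as the hunk header notes. The following is a minimal sketch of such a harness, mirroring the tutorial; it assumes the ``triton`` / ``triton.language`` API of the version this snapshot was built with, and the triple return from ``do_bench`` assumes its default percentiles.

.. code-block:: python

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Each program instance adds one BLOCK_SIZE-sized chunk of the vectors.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x, y):
        out = torch.empty_like(x)
        n = out.numel()
        grid = lambda meta: (triton.cdiv(n, meta['BLOCK_SIZE']),)
        add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
        return out

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],
            x_vals=[2 ** i for i in range(12, 28)],  # 4096 ... 134217728, as in the table
            x_log=True,
            line_arg='provider',
            line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        fn = (lambda: x + y) if provider == 'torch' else (lambda: add(x, y))
        ms, min_ms, max_ms = triton.testing.do_bench(fn)
        # 2 reads + 1 write of fp32 elements -> 12 bytes of traffic per element.
        gbps = lambda t: 12 * size / t * 1e-6
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    benchmark.run(print_data=True, show_plots=False)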


@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
softmax-performance:
N Triton Torch (native) Torch (jit)
-0 256.0 512.000001 546.133347 190.511628
-1 384.0 438.857137 558.545450 151.703707
+0 256.0 512.000001 512.000001 190.511628
+1 384.0 438.857137 585.142862 151.703707
2 512.0 481.882344 606.814814 154.566038
3 640.0 465.454542 640.000002 158.759699
-4 768.0 463.698115 664.216187 163.839992
+4 768.0 463.698115 664.216187 162.754967
.. ... ... ... ...
-93 12160.0 479.211815 405.333344 199.038365
-94 12288.0 484.853264 415.222812 199.197579
-95 12416.0 460.384708 412.149375 198.954424
-96 12544.0 457.705824 412.546756 199.012395
-97 12672.0 457.679461 411.679167 199.167004
+93 12160.0 478.622374 405.333344 198.834951
+94 12288.0 487.256521 415.661740 199.096718
+95 12416.0 459.851851 411.722274 198.755369
+96 12544.0 458.228323 412.971190 199.012395
+97 12672.0 457.679461 412.097543 199.069228
[98 rows x 4 columns]
@@ -306,7 +306,7 @@ In the above plot, we can see that:
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 3 minutes 8.135 seconds)
+**Total running time of the script:** ( 3 minutes 8.844 seconds)
.. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
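
The softmax table above reports GB/s for three providers; per the hunk header, the fused Triton kernel is compared against native ``torch.softmax`` and a naive composition of PyTorch ops. Below is a minimal sketch of those two baselines, following the tutorial's ``naive_softmax``; treating it as the "Torch (jit)" column via ``@torch.jit.script`` is an assumption about this snapshot.

.. code-block:: python

    import torch

    @torch.jit.script
    def naive_softmax(x):
        # Row-wise, numerically stable softmax built from separate reduction and
        # elementwise ops. Each intermediate (z, numerator, denominator) is
        # materialized in global memory, so DRAM traffic is several times that of
        # a fused kernel that reads and writes each row once -- the gap the GB/s
        # columns above reflect.
        x_max = x.max(dim=1)[0]
        z = x - x_max[:, None]
        numerator = torch.exp(z)
        denominator = numerator.sum(dim=1)
        return numerator / denominator[:, None]

    def native_softmax(x):
        # "Torch (native)" column: PyTorch's built-in fused implementation.
        return torch.softmax(x, dim=-1)

    x = torch.randn(1823, 781, device='cuda')
    assert torch.allclose(naive_softmax(x), native_softmax(x), atol=1e-6)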


@@ -458,37 +458,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
-0 256.0 2.730667 ... 3.276800 3.276800
-1 384.0 7.372800 ... 7.899428 7.899428
-2 512.0 14.563555 ... 16.384000 16.384000
+0 256.0 2.730667 ... 2.978909 2.978909
+1 384.0 7.372800 ... 8.507077 8.507077
+2 512.0 14.563555 ... 15.420235 16.384000
3 640.0 22.260869 ... 24.380953 24.380953
4 768.0 32.768000 ... 35.389441 34.028308
-5 896.0 37.971025 ... 40.140799 40.140799
-6 1024.0 49.932191 ... 53.773130 53.773130
+5 896.0 37.971025 ... 40.140799 39.025776
+6 1024.0 49.932191 ... 53.773130 52.428801
7 1152.0 45.242181 ... 48.161033 47.396572
8 1280.0 51.200001 ... 57.690139 57.690139
-9 1408.0 64.138541 ... 69.009825 68.147202
+9 1408.0 64.138541 ... 69.009825 67.305878
10 1536.0 79.526831 ... 79.526831 79.526831
-11 1664.0 63.372618 ... 63.372618 62.929456
+11 1664.0 62.929456 ... 63.372618 62.929456
12 1792.0 72.983276 ... 63.499573 63.142831
-13 1920.0 69.467336 ... 71.257735 71.257735
-14 2048.0 73.262953 ... 78.033565 77.672296
-15 2176.0 83.155572 ... 87.115360 86.739860
-16 2304.0 68.251065 ... 78.064941 77.558029
-17 2432.0 71.305746 ... 75.522751 75.320281
-18 2560.0 77.833728 ... 82.331658 82.125311
-19 2688.0 83.552988 ... 90.748936 90.316801
-20 2816.0 84.035084 ... 83.873477 84.035084
-21 2944.0 82.784108 ... 84.324925 84.040530
-22 3072.0 82.540970 ... 89.170242 89.030036
-23 3200.0 84.768213 ... 95.096582 95.380032
-24 3328.0 83.808259 ... 85.500351 86.424125
-25 3456.0 82.604067 ... 92.033756 91.719645
-26 3584.0 87.466332 ... 92.696281 96.372338
-27 3712.0 86.267139 ... 85.970176 88.248537
-28 3840.0 82.716526 ... 86.400002 91.322872
-29 3968.0 85.871877 ... 92.163097 87.472354
-30 4096.0 93.924229 ... 94.055868 87.097813
+13 1920.0 68.776119 ... 71.626943 71.257735
+14 2048.0 73.584279 ... 78.398206 78.033565
+15 2176.0 83.500614 ... 87.115360 86.739860
+16 2304.0 68.251065 ... 77.810656 77.558029
+17 2432.0 71.125224 ... 75.726318 75.522751
+18 2560.0 77.833728 ... 82.331658 81.920002
+19 2688.0 83.737433 ... 90.532356 90.316801
+20 2816.0 82.290955 ... 83.712490 84.197315
+21 2944.0 82.646820 ... 81.967162 83.477440
+22 3072.0 82.062468 ... 85.662786 89.030036
+23 3200.0 84.210524 ... 97.116842 95.952022
+24 3328.0 83.905938 ... 86.946008 86.736504
+25 3456.0 79.196043 ... 86.689860 91.407671
+26 3584.0 87.211821 ... 94.947616 97.840469
+27 3712.0 85.896254 ... 83.005689 88.404730
+28 3840.0 81.738356 ... 88.297007 91.473945
+29 3968.0 88.040360 ... 92.093539 84.797731
+30 4096.0 93.336389 ... 91.491294 88.185107
[31 rows x 5 columns]
@@ -498,7 +498,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 6 minutes 0.518 seconds)
+**Total running time of the script:** ( 6 minutes 27.164 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
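
The matmul table above is in TFLOPS for square ``M = N = K`` problems, comparing cuBLAS (reached through ``torch.matmul``) with the tutorial's Triton kernel and its fused-LeakyReLU variant. Below is a minimal sketch of how such a figure is derived (a GEMM performs ``2 * M * N * K`` floating-point operations); it times only the cuBLAS path, uses CUDA events rather than the tutorial's ``triton.testing.do_bench``, and the Triton columns would simply swap in the tutorial's kernel.

.. code-block:: python

    import torch

    def time_ms(fn, warmup=10, rep=100):
        # Average wall time of fn() in milliseconds, measured with CUDA events.
        for _ in range(warmup):
            fn()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(rep):
            fn()
        end.record()
        torch.cuda.synchronize()
        return start.elapsed_time(end) / rep

    M = N = K = 2048
    a = torch.randn((M, K), device='cuda', dtype=torch.float16)
    b = torch.randn((K, N), device='cuda', dtype=torch.float16)

    ms = time_ms(lambda: torch.matmul(a, b))      # cuBLAS column
    tflops = 2 * M * N * K * 1e-12 / (ms * 1e-3)  # 2*M*N*K flops per GEMM
    print(f'M={M}: {tflops:.2f} TFLOPS (cuBLAS via torch.matmul)')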


@@ -240,7 +240,7 @@ References
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.012 seconds)
+**Total running time of the script:** ( 0 minutes 0.325 seconds)
.. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:


@@ -38,36 +38,36 @@ Layer Normalization
layer-norm-backward:
N Triton Torch Apex
-0 1024.0 114.306981 97.912354 303.407414
-1 1536.0 118.153850 134.540150 341.333333
-2 2048.0 125.068704 161.154101 334.367350
-3 2560.0 119.766080 181.238943 330.322572
-4 3072.0 124.121216 192.501302 323.368415
-5 3584.0 127.242599 208.271186 311.652167
-6 4096.0 130.549806 220.907859 296.990947
-7 4608.0 105.526723 232.825259 287.251954
-8 5120.0 108.743364 242.366855 284.444444
-9 5632.0 109.625308 243.107920 290.060087
-10 6144.0 112.133840 248.661056 286.879370
-11 6656.0 112.733948 256.000009 285.767438
-12 7168.0 114.917836 260.260201 284.821192
-13 7680.0 115.128047 262.938666 280.121579
-14 8192.0 115.856217 266.767970 284.526763
-15 8704.0 94.182151 267.815384 285.377055
-16 9216.0 96.713601 271.724806 287.999990
-17 9728.0 97.687028 280.615388 290.027323
-18 10240.0 100.065144 286.433562 290.153487
-19 10752.0 101.115983 246.935876 290.594591
-20 11264.0 103.024392 245.536784 286.980888
-21 11776.0 103.373808 249.667843 288.981596
-22 12288.0 106.083454 254.673582 294.617366
-23 12800.0 106.004143 254.094291 288.180121
-24 13312.0 107.067028 253.260416 290.443638
-25 13824.0 107.440415 257.390218 292.056329
-26 14336.0 109.296061 255.051144 286.959121
-27 14848.0 109.143034 257.665934 289.952797
-28 15360.0 110.802526 257.970599 288.000007
-29 15872.0 110.702706 261.806182 290.341468
+0 1024.0 114.306981 99.497980 315.076934
+1 1536.0 117.776359 132.604320 341.333333
+2 2048.0 124.751268 158.554837 321.254900
+3 2560.0 119.766080 182.857144 325.079368
+4 3072.0 123.912607 191.501303 319.168834
+5 3584.0 126.308369 208.271186 309.410081
+6 4096.0 130.549806 220.412561 298.796351
+7 4608.0 105.325718 231.849059 285.767436
+8 5120.0 108.647215 244.294240 286.433562
+9 5632.0 109.803417 244.426754 291.310338
+10 6144.0 111.878602 251.202731 286.879370
+11 6656.0 112.733948 256.000009 286.793541
+12 7168.0 114.917836 253.360829 277.470965
+13 7680.0 115.128047 266.743841 284.444450
+14 8192.0 115.583772 258.694729 277.303250
+15 8704.0 93.928060 267.130429 286.158893
+16 9216.0 96.650214 272.729961 289.129410
+17 9728.0 97.564561 279.942444 288.950501
+18 10240.0 100.024417 287.102804 290.153487
+19 10752.0 100.997264 246.699797 289.941565
+20 11264.0 102.985144 246.432094 287.897767
+21 11776.0 103.373808 249.667843 289.573776
+22 12288.0 106.007189 254.453844 294.029924
+23 12800.0 105.967577 254.094291 290.084977
+24 13312.0 107.138837 252.959629 289.391298
+25 13824.0 107.370873 256.991469 292.056329
+26 14336.0 109.157360 255.429842 288.160801
+27 14848.0 109.461527 257.108233 288.544136
+28 15360.0 110.669469 258.513318 288.225185
+29 15872.0 110.960674 262.527914 290.562936
@@ -339,7 +339,7 @@ Layer Normalization
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 2 minutes 0.332 seconds)
+**Total running time of the script:** ( 2 minutes 2.154 seconds)
.. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:
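
The layer-norm table above reports backward-pass bandwidth (GB/s) as a function of the feature dimension ``N`` for Triton, native PyTorch, and Apex. The sketch below shows the shape of that measurement for the native PyTorch provider only; the batch size ``M = 4096``, the fp16 dtype, and the bytes-moved model are assumptions in the spirit of the tutorial, and the Triton and Apex columns would swap in the tutorial's kernel and ``apex.normalization.FusedLayerNorm``.

.. code-block:: python

    import torch

    def layernorm_bwd_gbps(N, M=4096, dtype=torch.float16, warmup=10, rep=100):
        # Time repeated backward passes through F.layer_norm and convert the
        # average runtime into an effective bandwidth figure.
        x = torch.randn(M, N, device='cuda', dtype=dtype, requires_grad=True)
        w = torch.randn(N, device='cuda', dtype=dtype, requires_grad=True)
        b = torch.randn(N, device='cuda', dtype=dtype, requires_grad=True)
        dy = torch.randn(M, N, device='cuda', dtype=dtype)
        y = torch.nn.functional.layer_norm(x, (N,), w, b)

        def bwd():
            x.grad = None
            y.backward(dy, retain_graph=True)

        for _ in range(warmup):
            bwd()
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(rep):
            bwd()
        end.record()
        torch.cuda.synchronize()
        ms = start.elapsed_time(end) / rep
        # Rough traffic model: read x and dy, write dx (3 * M * N elements).
        return 3 * x.numel() * x.element_size() * 1e-9 / (ms * 1e-3)

    print(layernorm_bwd_gbps(4096))  # rough analogue of the "Torch" column at N = 4096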


@@ -5,16 +5,16 @@
Computation times
=================
-**12:48.152** total execution time for **getting-started_tutorials** files:
+**13:19.517** total execution time for **getting-started_tutorials** files:
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:00.518 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:27.164 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:08.135 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:08.844 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 02:00.332 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 02:02.154 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:39.155 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:41.030 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.012 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.325 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+