[GH-PAGES] Updated website

Philippe Tillet
2022-06-08 00:46:05 +00:00
parent 563da1150b
commit ec86d5284f
158 changed files with 260 additions and 260 deletions


@@ -235,17 +235,17 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 0 4096.0 9.600000 9.600000
 1 8192.0 19.200000 19.200000
 2 16384.0 38.400001 38.400001
-3 32768.0 76.800002 76.800002
+3 32768.0 63.999998 63.999998
 4 65536.0 127.999995 127.999995
 5 131072.0 219.428568 219.428568
 6 262144.0 341.333321 341.333321
 7 524288.0 472.615390 472.615390
 8 1048576.0 614.400016 614.400016
-9 2097152.0 722.823517 722.823517
+9 2097152.0 722.823517 702.171410
 10 4194304.0 780.190482 780.190482
 11 8388608.0 812.429770 812.429770
 12 16777216.0 833.084721 833.084721
-13 33554432.0 842.004273 842.906750
+13 33554432.0 842.004273 842.004273
 14 67108864.0 847.448255 848.362445
 15 134217728.0 849.737435 850.656574
@@ -255,7 +255,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 1 minutes 26.773 seconds)
+**Total running time of the script:** ( 1 minutes 40.930 seconds)
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
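
The two value columns above are GB/s for the Triton and Torch versions of the vector addition, and the row-to-row churn in this diff (e.g. 76.8 vs 64.0 GB/s at 32768 elements) is ordinary run-to-run noise from regenerating the docs. As a rough sketch of the harness that prints such a table, following the 01-vector-add tutorial (the `add` stand-in below is hypothetical; the real tutorial benchmarks its Triton kernel wrapper):

.. code-block:: python

    import torch
    import triton

    def add(x, y):
        # Hypothetical stand-in for the tutorial's Triton kernel wrapper;
        # substitute the real `add` from 01-vector-add.py.
        return x + y

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['size'],                      # first column: element count
            x_vals=[2**i for i in range(12, 28)],  # 4096 ... 134217728, as above
            line_arg='provider',
            line_vals=['triton', 'torch'],
            line_names=['Triton', 'Torch'],
            ylabel='GB/s',
            plot_name='vector-add-performance',
            args={},
        )
    )
    def benchmark(size, provider):
        x = torch.rand(size, device='cuda', dtype=torch.float32)
        y = torch.rand(size, device='cuda', dtype=torch.float32)
        fn = (lambda: x + y) if provider == 'torch' else (lambda: add(x, y))
        ms, min_ms, max_ms = triton.testing.do_bench(fn)
        # 2 reads + 1 write of 4-byte floats per element = 12 bytes moved.
        gbps = lambda ms: 12 * size / ms * 1e-6
        return gbps(ms), gbps(max_ms), gbps(min_ms)

    benchmark.run(print_data=True, show_plots=True)

Note that :code:`do_bench` returning a `(mean, min, max)` triple is the 2022-era Triton API assumed here.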


@@ -278,17 +278,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 softmax-performance:
 N Triton Torch (native) Torch (jit)
-0 256.0 546.133347 512.000001 188.321838
+0 256.0 546.133347 512.000001 186.181817
 1 384.0 614.400016 585.142862 153.600004
-2 512.0 655.360017 606.814814 154.566038
-3 640.0 706.206879 640.000002 160.000000
+2 512.0 655.360017 585.142849 154.566038
+3 640.0 706.206879 640.000002 158.759699
 4 768.0 722.823517 664.216187 162.754967
 .. ... ... ... ...
-93 12160.0 812.359066 406.179533 198.429370
-94 12288.0 812.429770 415.661740 198.794749
-95 12416.0 812.498981 412.149375 198.358474
-96 12544.0 812.566838 412.546756 198.716830
-97 12672.0 812.633240 412.097543 198.776477
+93 12160.0 812.359066 406.179533 198.936606
+94 12288.0 812.429770 415.222812 199.197579
+95 12416.0 812.498981 412.149375 198.854847
+96 12544.0 812.566838 412.758863 199.111113
+97 12672.0 811.007961 412.097543 199.167004
 [98 rows x 4 columns]
@@ -306,7 +306,7 @@ In the above plot, we can see that:
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 3 minutes 13.971 seconds)
+**Total running time of the script:** ( 3 minutes 22.248 seconds)
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
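
The columns above are GB/s for the Triton kernel, :code:`torch.softmax`, and a JIT-compiled naive implementation. A sketch of that naive baseline and of the bandwidth estimate, following the 02-fused-softmax tutorial (helper names are assumptions, not verbatim):

.. code-block:: python

    import torch

    @torch.jit.script
    def naive_softmax(x):
        # Row-wise softmax; subtracting the row max keeps exp() stable.
        x_max = x.max(dim=1)[0]
        z = x - x_max[:, None]
        numerator = torch.exp(z)
        denominator = numerator.sum(dim=1)
        return numerator / denominator[:, None]

    def gbps(x: torch.Tensor, ms: float) -> float:
        # Assumes each element is read once and written once per call.
        return 2 * x.nelement() * x.element_size() * 1e-9 / (ms * 1e-3)

The naive version is memory-bound because each intermediate (max, exp, sum) makes an extra round trip to DRAM, which is why its column sits far below the fused Triton kernel at large N.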


@@ -459,37 +459,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
-0 256.0 2.730667 ... 3.276800 3.276800
-1 384.0 7.372800 ... 8.507077 8.507077
-2 512.0 14.563555 ... 15.887515 15.420235
+0 256.0 2.730667 ... 2.978909 3.276800
+1 384.0 7.372800 ... 7.899428 7.899428
+2 512.0 14.563555 ... 15.420235 16.384000
 3 640.0 22.260869 ... 24.380953 24.380953
-4 768.0 32.768000 ... 35.389441 34.028308
-5 896.0 39.025776 ... 41.321411 39.025776
+4 768.0 32.768000 ... 34.028308 34.028308
+5 896.0 39.025776 ... 40.140799 39.025776
 6 1024.0 49.932191 ... 53.773130 52.428801
 7 1152.0 45.242181 ... 48.161033 47.396572
 8 1280.0 51.200001 ... 57.690139 57.690139
-9 1408.0 64.138541 ... 68.147202 67.305878
-10 1536.0 79.526831 ... 80.430545 78.643199
+9 1408.0 64.138541 ... 69.009825 67.305878
+10 1536.0 79.526831 ... 79.526831 78.643199
 11 1664.0 63.372618 ... 63.372618 62.492442
 12 1792.0 72.983276 ... 63.499573 63.142831
 13 1920.0 68.776119 ... 71.257735 70.892307
 14 2048.0 73.262953 ... 78.033565 76.959706
-15 2176.0 83.155572 ... 87.115360 85.632545
+15 2176.0 83.155572 ... 87.494120 85.998493
 16 2304.0 68.446623 ... 78.064941 76.809875
-17 2432.0 71.125224 ... 75.522751 74.521127
-18 2560.0 77.833728 ... 82.125311 81.512437
-19 2688.0 83.737433 ... 90.102270 90.102270
-20 2816.0 79.733474 ... 84.523664 82.916747
-21 2944.0 81.967162 ... 83.758038 83.758038
-22 3072.0 81.707223 ... 89.451983 87.924073
-23 3200.0 84.321474 ... 93.023256 95.167286
-24 3328.0 83.516586 ... 85.806075 86.011103
-25 3456.0 82.519518 ... 92.297157 91.407671
-26 3584.0 85.552231 ... 91.467482 98.268190
-27 3712.0 85.675250 ... 88.876645 87.399253
-28 3840.0 83.528704 ... 89.403396 87.424508
-29 3968.0 88.040360 ... 87.976885 90.388098
-30 4096.0 86.592080 ... 90.260743 93.401342
+17 2432.0 71.305746 ... 75.522751 74.521127
+18 2560.0 77.833728 ... 82.331658 81.310171
+19 2688.0 83.737433 ... 91.185232 89.888756
+20 2816.0 83.873477 ... 84.523664 81.067298
+21 2944.0 81.967162 ... 83.477440 82.237674
+22 3072.0 82.301023 ... 88.960098 87.651868
+23 3200.0 84.656085 ... 96.240602 94.814812
+24 3328.0 83.130825 ... 84.995628 84.496824
+25 3456.0 82.604067 ... 84.686523 81.189898
+26 3584.0 87.381330 ... 99.684470 97.311031
+27 3712.0 85.055211 ... 88.015279 87.552452
+28 3840.0 84.940091 ... 93.090912 84.164384
+29 3968.0 92.793868 ... 84.917596 91.403695
+30 4096.0 88.272097 ... 89.299883 93.206754
 [31 rows x 5 columns]
@@ -499,7 +499,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 5 minutes 41.088 seconds)
+**Total running time of the script:** ( 6 minutes 9.260 seconds)
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
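
Unlike the earlier tables, the matmul figures are TFLOPS: a matrix product performs 2*M*N*K floating-point operations (a multiply and an add per inner-product term), so a measured latency converts as in the sketch below. The re-measurement at the end is illustrative only, again assuming the 2022-era :code:`triton.testing.do_bench` triple return:

.. code-block:: python

    import torch
    import triton

    def tflops(M: int, N: int, K: int, ms: float) -> float:
        # 2*M*N*K flops, with the latency given in milliseconds.
        return 2 * M * N * K * 1e-12 / (ms * 1e-3)

    # Example: the cuBLAS entry for M = N = K = 4096 (row 30 above).
    M = N = K = 4096
    a = torch.randn((M, K), device='cuda', dtype=torch.float16)
    b = torch.randn((K, N), device='cuda', dtype=torch.float16)
    ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.matmul(a, b))
    print(f'cuBLAS: {tflops(M, N, K, ms):.2f} TFLOPS')  # ~86-88 on this GPU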


@@ -240,7 +240,7 @@ References
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 0 minutes 0.548 seconds)
+**Total running time of the script:** ( 0 minutes 0.484 seconds)
 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:


@@ -42,25 +42,25 @@ Layer Normalization
 1 1536.0 630.153868 323.368435 511.999982
 2 2048.0 682.666643 334.367358 520.126988
 3 2560.0 694.237267 362.477870 512.000013
-4 3072.0 712.347810 376.643666 501.551037
-5 3584.0 725.873439 384.859062 455.111115
-6 4096.0 728.177767 381.023256 455.111095
-7 4608.0 670.254540 396.387087 428.651163
-8 5120.0 688.403381 397.669909 422.268057
-9 5632.0 704.000002 395.228063 417.185184
+4 3072.0 712.347810 375.206126 501.551037
+5 3584.0 725.873439 384.859062 458.751978
+6 4096.0 728.177767 381.023256 458.293714
+7 4608.0 670.254540 396.387087 426.173427
+8 5120.0 688.403381 397.669909 426.666652
+9 5632.0 704.000002 395.228063 413.357796
 10 6144.0 702.171410 402.885254 411.313806
-11 6656.0 705.271522 400.360920 400.360920
-12 7168.0 690.891575 396.844306 387.459443
-13 7680.0 682.666656 392.587863 386.415087
-14 8192.0 639.375598 393.609605 371.308771
-15 8704.0 630.153861 389.005597 380.502740
-16 9216.0 609.322328 407.337026 383.999986
+11 6656.0 705.271522 398.861429 398.861429
+12 7168.0 695.078767 396.844306 388.772874
+13 7680.0 682.666656 392.587863 387.634072
+14 8192.0 642.509816 393.609605 372.363633
+15 8704.0 627.315309 389.005597 380.502740
+16 9216.0 606.814809 407.337026 383.002605
 17 9728.0 589.575753 409.599987 383.369452
-18 10240.0 568.888869 408.578556 381.911416
+18 10240.0 566.920437 408.578556 382.803739
 19 10752.0 551.384634 411.559798 381.445676
 20 11264.0 536.380957 406.826188 373.134567
-21 11776.0 524.835658 409.599991 377.587162
-22 12288.0 517.389457 413.911572 383.251457
+21 11776.0 523.377770 409.599991 377.587162
+22 12288.0 518.754611 413.911572 383.251457
 23 12800.0 505.679014 410.420828 376.470582
 24 13312.0 494.180982 405.699062 376.310952
 25 13824.0 482.934503 411.888257 379.389355
@@ -389,7 +389,7 @@ Layer Normalization
 .. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 5 minutes 12.730 seconds)
+**Total running time of the script:** ( 5 minutes 19.676 seconds)
 .. _sphx_glr_download_getting-started_tutorials_05-layer-norm.py:
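
The layer-norm table's header falls outside this hunk; following the 05-layer-norm tutorial, the three value columns are presumably GB/s for the Triton, Torch, and Apex implementations at row width N. A hedged sketch of the bandwidth estimate (the batch shape below is hypothetical):

.. code-block:: python

    import torch
    import triton

    def gbps(x: torch.Tensor, ms: float) -> float:
        # One read + one write of x per pass, as in the tutorial's estimate.
        return 2 * x.nelement() * x.element_size() * 1e-9 / (ms * 1e-3)

    N = 6144                                    # matches row 10 above
    x = torch.randn(4096, N, device='cuda', dtype=torch.float16)
    ms, min_ms, max_ms = triton.testing.do_bench(
        lambda: torch.nn.functional.layer_norm(x, (N,)))
    print(f'torch layer_norm, N={N}: {gbps(x, ms):.1f} GB/s')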


@@ -5,16 +5,16 @@
 Computation times
 =================
-**15:35.109** total execution time for **getting-started_tutorials** files:
+**16:32.599** total execution time for **getting-started_tutorials** files:
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 05:41.088 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:09.260 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:12.730 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_05-layer-norm.py` (``05-layer-norm.py``) | 05:19.676 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:13.971 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 03:22.248 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:26.773 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:40.930 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.548 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.484 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
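
As a quick consistency check, the five per-script times sum to the new total (16:32.599) up to millisecond rounding:

.. code-block:: python

    # Per-script times from the table above, converted to seconds.
    times = [6*60 + 9.260, 5*60 + 19.676, 3*60 + 22.248, 1*60 + 40.930, 0.484]
    total = sum(times)  # 992.598 s
    print(f'{int(total // 60)}:{total % 60:06.3f}')  # -> 16:32.598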