[GH-PAGES] Updated website

This commit is contained in:
Philippe Tillet
2021-07-28 11:39:54 +00:00
parent 040ec5e252
commit a86020efbc
17 changed files with 83 additions and 83 deletions


@@ -216,13 +216,13 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
vector-add-performance:
size Triton Torch
- 0 4096.0 9.540372 9.600000
+ 0 4096.0 8.000000 9.600000
1 8192.0 19.200000 19.200000
2 16384.0 38.400001 38.400001
- 3 32768.0 63.999998 76.800002
+ 3 32768.0 76.800002 76.800002
4 65536.0 127.999995 127.999995
5 131072.0 219.428568 219.428568
- 6 262144.0 341.333321 341.333321
+ 6 262144.0 384.000001 384.000001
7 524288.0 472.615390 472.615390
8 1048576.0 614.400016 614.400016
9 2097152.0 722.823517 722.823517
@@ -230,7 +230,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
11 8388608.0 812.429770 812.429770
12 16777216.0 833.084721 833.084721
13 33554432.0 843.811163 843.811163
- 14 67108864.0 849.278610 848.362445
+ 14 67108864.0 848.362445 848.362445
15 134217728.0 851.577704 850.656574
@@ -239,7 +239,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 0 minutes 10.980 seconds)
+ **Total running time of the script:** ( 0 minutes 10.964 seconds)
.. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:


@@ -261,15 +261,15 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
softmax-performance:
N Triton Torch (native) Torch (jit)
- 0 256.0 512.000001 546.133347 264.258068
- 1 384.0 585.142862 558.545450 261.446801
- 2 512.0 630.153853 606.814814 264.258068
+ 0 256.0 512.000001 546.133347 273.066674
+ 1 384.0 585.142862 585.142862 261.446801
+ 2 512.0 630.153853 585.142849 264.258068
3 640.0 682.666684 640.000002 265.974036
4 768.0 702.171410 664.216187 273.066663
.. ... ... ... ...
- 93 12160.0 812.359066 406.179533 329.483481
+ 93 12160.0 812.359066 405.755985 329.204728
94 12288.0 812.429770 415.661740 329.602681
- 95 12416.0 810.840807 412.149375 328.900662
+ 95 12416.0 810.840807 411.722274 329.173158
96 12544.0 810.925276 412.971190 329.292871
97 12672.0 811.007961 412.097543 329.142870
@@ -290,7 +290,7 @@ In the above plot, we can see that:
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 8.191 seconds)
+ **Total running time of the script:** ( 1 minutes 8.185 seconds)
.. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:


@@ -371,37 +371,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
0 128.0 0.455111 ... 0.512000 0.512000
- 1 256.0 2.730667 ... 3.276800 2.978909
+ 1 256.0 2.730667 ... 3.276800 3.276800
2 384.0 7.372800 ... 8.507077 8.507077
- 3 512.0 14.563555 ... 15.420235 15.420235
+ 3 512.0 14.563555 ... 16.384000 15.420235
4 640.0 22.260869 ... 24.380953 24.380953
5 768.0 32.768000 ... 34.028308 34.028308
- 6 896.0 39.025776 ... 39.025776 37.971025
+ 6 896.0 37.971025 ... 39.025776 39.025776
7 1024.0 49.932191 ... 52.428801 52.428801
- 8 1152.0 45.242181 ... 46.656000 46.656000
+ 8 1152.0 44.566925 ... 46.656000 45.938215
9 1280.0 51.200001 ... 56.109587 56.109587
- 10 1408.0 64.138541 ... 65.684049 65.684049
- 11 1536.0 80.430545 ... 76.106321 75.296679
- 12 1664.0 63.372618 ... 61.636381 61.636381
+ 10 1408.0 64.138541 ... 65.684049 58.601554
+ 11 1536.0 78.643199 ... 75.296679 75.296679
+ 12 1664.0 62.929456 ... 61.636381 61.636381
13 1792.0 72.983276 ... 68.953520 68.533074
- 14 1920.0 69.467336 ... 69.120002 69.467336
- 15 2048.0 73.908442 ... 75.234154 75.573044
- 16 2176.0 83.500614 ... 79.855747 80.494588
- 17 2304.0 68.446623 ... 72.607513 72.387489
- 18 2432.0 71.125224 ... 81.197876 80.041209
- 19 2560.0 77.833728 ... 74.983980 75.328737
- 20 2688.0 80.880718 ... 78.862903 82.642823
- 21 2816.0 83.392363 ... 78.726003 77.056904
- 22 2944.0 83.060049 ... 79.356738 79.610276
- 23 3072.0 78.643199 ... 82.540970 83.269271
- 24 3200.0 83.116885 ... 88.888888 87.431696
- 25 3328.0 82.939284 ... 86.528001 82.181847
- 26 3456.0 79.351933 ... 84.156124 82.266905
- 27 3584.0 87.466332 ... 95.960933 96.166193
- 28 3712.0 80.629426 ... 88.404730 88.326564
- 29 3840.0 81.919998 ... 80.428721 81.198237
- 30 3968.0 86.116179 ... 85.331427 85.391135
- 31 4096.0 92.309303 ... 88.651075 88.243079
+ 14 1920.0 69.120002 ... 69.120002 69.467336
+ 15 2048.0 73.584279 ... 75.573044 75.234154
+ 16 2176.0 83.155572 ... 79.855747 79.855747
+ 17 2304.0 68.251065 ... 72.607513 72.828879
+ 18 2432.0 71.305746 ... 80.963875 80.963875
+ 19 2560.0 77.649287 ... 76.740048 75.155963
+ 20 2688.0 83.552988 ... 81.053536 83.552988
+ 21 2816.0 79.154642 ... 78.726003 78.161663
+ 22 2944.0 81.967162 ... 78.605729 79.737653
+ 23 3072.0 79.415291 ... 81.825298 83.391907
+ 24 3200.0 84.210524 ... 89.385477 85.333333
+ 25 3328.0 83.905938 ... 81.346098 81.808290
+ 26 3456.0 81.108217 ... 81.026701 85.133652
+ 27 3584.0 87.381330 ... 91.750399 85.064084
+ 28 3712.0 84.159518 ... 85.309435 88.326564
+ 29 3840.0 84.550462 ... 87.217666 87.493673
+ 30 3968.0 92.442373 ... 84.680037 83.692683
+ 31 4096.0 93.662059 ... 91.867031 91.616198
[32 rows x 5 columns]
@@ -411,7 +411,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 2 minutes 8.651 seconds)
+ **Total running time of the script:** ( 2 minutes 12.186 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:


@@ -5,12 +5,12 @@
Computation times
=================
- **03:27.823** total execution time for **getting-started_tutorials** files:
+ **03:31.335** total execution time for **getting-started_tutorials** files:
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 02:08.651 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 02:12.186 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 01:08.191 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 01:08.185 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+
- | :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 00:10.980 | 0.0 MB |
+ | :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 00:10.964 | 0.0 MB |
+---------------------------------------------------------------------------------------------------------+-----------+--------+