[GH-PAGES] Updated website
@@ -232,12 +232,12 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 vector-add-performance:
            size       Triton        Torch
 0        4096.0     9.600000     9.600000
-1        8192.0    19.200000    15.999999
+1        8192.0    15.999999    19.200000
 2       16384.0    38.400001    38.400001
 3       32768.0    76.800002    76.800002
 4       65536.0   127.999995   127.999995
 5      131072.0   219.428568   219.428568
-6      262144.0   384.000001   341.333321
+6      262144.0   341.333321   341.333321
 7      524288.0   472.615390   472.615390
 8     1048576.0   614.400016   614.400016
 9     2097152.0   722.823517   722.823517
@@ -254,7 +254,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 11.086 seconds)
+   **Total running time of the script:** ( 0 minutes 11.024 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
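For context, the vector-add table above is the output of the tutorial's `triton.testing.perf_report` benchmark, printed with `print_data=True`; the reported numbers are GB/s. A condensed sketch of that kernel and harness, assuming the current `triton.testing` API (`do_bench` returning a single millisecond figure) and a CUDA device:

import torch
import triton
import triton.language as tl
import triton.testing

# Condensed from the 01-vector-add tutorial whose output changed above.
@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the tail of the vector
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta['BLOCK_SIZE']),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

@triton.testing.perf_report(
    triton.testing.Benchmark(
        x_names=['size'],
        x_vals=[2 ** i for i in range(12, 22)],  # 4096 ... 2097152, as in the table
        line_arg='provider',
        line_vals=['triton', 'torch'],
        line_names=['Triton', 'Torch'],
        ylabel='GB/s',
        plot_name='vector-add-performance',
        args={},
    )
)
def benchmark(size, provider):
    x = torch.rand(size, device='cuda', dtype=torch.float32)
    y = torch.rand(size, device='cuda', dtype=torch.float32)
    fn = (lambda: x + y) if provider == 'torch' else (lambda: add(x, y))
    ms = triton.testing.do_bench(fn)
    # two reads + one write of 4-byte floats, converted to GB/s
    return 3 * x.numel() * x.element_size() * 1e-9 / (ms * 1e-3)

benchmark.run(print_data=True, show_plots=False)

Run-to-run jitter in `do_bench` is what produces the small deltas seen in this diff.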
@@ -301,16 +301,16 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 softmax-performance:
           N       Triton  Torch (native)  Torch (jit)
 0     256.0   512.000001      546.133347   186.181817
-1     384.0   585.142862      585.142862   153.600004
-2     512.0   630.153853      606.814814   154.566038
+1     384.0   585.142862      558.545450   151.703707
+2     512.0   630.153853      585.142849   154.566038
 3     640.0   682.666684      640.000002   160.000000
 4     768.0   702.171410      664.216187   163.839992
 ..      ...          ...             ...          ...
-93  12160.0   812.359066      405.755985   199.140227
-94  12288.0   812.429770      415.661740   199.399583
-95  12416.0   810.840807      411.722274   199.054102
-96  12544.0   810.925276      412.546756   199.308841
-97  12672.0   811.007961      412.097543   199.362843
+93  12160.0   812.359066      406.179533   199.038365
+94  12288.0   812.429770      415.222812   199.399583
+95  12416.0   810.840807      412.149375   198.854847
+96  12544.0   809.290334      412.971190   199.209928
+97  12672.0   809.389265      412.097543   199.362843
 
 [98 rows x 4 columns]
 
@@ -328,7 +328,7 @@ In the above plot, we can see that:
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 1 minutes 12.683 seconds)
+   **Total running time of the script:** ( 1 minutes 12.613 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
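For context on the softmax table: the hunk header's truncated sentence refers to the tutorial's two baselines, `torch.softmax` ("Torch (native)") and, assuming the standard 02-fused-softmax tutorial setup, a naive jit-compiled reference ("Torch (jit)"). A sketch of that reference, which explains why its GB/s column is so much lower:

import torch

def naive_softmax(x: torch.Tensor) -> torch.Tensor:
    # Numerically stable row-wise softmax. Every intermediate below is a
    # full extra read/write of DRAM; the fused Triton kernel does the whole
    # row in one pass, which is the gap the table measures.
    x_max = x.max(dim=1)[0]
    z = x - x_max[:, None]
    numerator = torch.exp(z)
    denominator = numerator.sum(dim=1)
    return numerator / denominator[:, None]

# The reported unit is GB/s: roughly one read plus one write of an M x N
# float32 matrix per call, so for a measured time of `ms` milliseconds:
#   gbps = 2 * M * N * 4 * 1e-9 / (ms * 1e-3)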
@@ -463,37 +463,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 matmul-performance:
          M     cuBLAS  ...     Triton  Triton (+ LeakyReLU)
 0    128.0   0.455111  ...   0.512000              0.512000
-1    256.0   2.730667  ...   3.276800              2.978909
-2    384.0   7.372800  ...   8.507077              7.899428
-3    512.0  14.563555  ...  15.420235             15.420235
+1    256.0   2.978909  ...   2.978909              2.978909
+2    384.0   7.372800  ...   8.507077              8.507077
+3    512.0  14.563555  ...  16.384000             15.420235
 4    640.0  22.260869  ...  24.380953             24.380953
 5    768.0  32.768000  ...  34.028308             34.028308
-6    896.0  39.025776  ...  40.140799             39.025776
+6    896.0  37.971025  ...  40.140799             39.025776
 7   1024.0  49.932191  ...  52.428801             52.428801
 8   1152.0  44.566925  ...  46.656000             46.656000
-9   1280.0  51.200001  ...  56.109587             56.109587
-10  1408.0  64.138541  ...  64.902096             64.138541
-11  1536.0  79.526831  ...  76.106321             75.296679
+9   1280.0  51.200001  ...  56.888887             56.109587
+10  1408.0  64.138541  ...  64.902096             64.902096
+11  1536.0  80.430545  ...  76.933564             76.106321
 12  1664.0  63.372618  ...  62.492442             62.492442
 13  1792.0  72.983276  ...  70.246402             69.810085
-14  1920.0  66.782607  ...  68.435645             69.467336
-15  2048.0  73.262953  ...  75.234154             68.191406
-16  2176.0  82.813365  ...  81.143743             77.398646
-17  2304.0  68.056616  ...  73.501144             73.051599
-18  2432.0  71.125224  ...  80.731218             80.499895
-19  2560.0  77.649287  ...  77.283019             76.382283
-20  2688.0  81.752274  ...  82.642823             81.752274
-21  2816.0  83.873477  ...  79.298560             79.154642
-22  2944.0  81.431424  ...  79.993627             76.907458
-23  3072.0  81.825298  ...  83.514905             82.782312
-24  3200.0  80.604535  ...  85.106381             89.510493
-25  3328.0  83.905938  ...  82.558825             81.622783
-26  3456.0  81.849303  ...  84.244062             84.244062
-27  3584.0  87.211821  ...  90.458141             92.887804
-28  3712.0  85.601834  ...  88.955779             88.797643
-29  3840.0  84.874902  ...  83.214447             85.136259
-30  3968.0  88.938731  ...  88.487262             87.723894
-31  4096.0  93.727466  ...  88.709668             88.475759
+14  1920.0  69.467336  ...  70.892307             70.530615
+15  2048.0  73.908442  ...  75.234154             74.898285
+16  2176.0  83.500614  ...  80.817862             80.173899
+17  2304.0  68.446623  ...  73.501144             73.051599
+18  2432.0  71.125224  ...  80.499895             79.587714
+19  2560.0  77.833728  ...  77.283019             76.740048
+20  2688.0  84.108772  ...  83.552988             84.108772
+21  2816.0  81.674548  ...  77.882512             79.733474
+22  2944.0  81.832567  ...  78.235527             77.990663
+23  3072.0  81.121923  ...  83.761985             80.544956
+24  3200.0  84.768213  ...  89.635851             89.635851
+25  3328.0  79.812967  ...  84.200347             87.580655
+26  3456.0  81.189898  ...  84.420490             85.404201
+27  3584.0  86.707226  ...  95.047985             90.549237
+28  3712.0  84.159518  ...  84.301560             82.423549
+29  3840.0  83.655065  ...  87.562949             87.493673
+30  3968.0  93.076994  ...  88.040360             87.913500
+31  4096.0  93.596744  ...  86.816123             83.571059
 
 [32 rows x 5 columns]
 
@@ -503,7 +503,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 2 minutes 9.474 seconds)
+   **Total running time of the script:** ( 2 minutes 2.006 seconds)
 
 
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
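For context on the matmul table: the tutorial reports TFLOPS for square M = N = K problems, with the cuBLAS column measured through `torch.matmul`. A minimal sketch of how one such number is derived, assuming the current `triton.testing.do_bench` API; `triton_matmul` is a hypothetical stand-in for the tutorial's kernel wrapper:

import torch
import triton
import triton.testing

def tflops(M: int, N: int, K: int, ms: float) -> float:
    # A dense matmul performs 2*M*N*K floating-point operations;
    # convert a millisecond timing into TFLOPS.
    return 2 * M * N * K * 1e-12 / (ms * 1e-3)

def bench_square(M: int) -> float:
    # cuBLAS baseline via torch.matmul on fp16 inputs, as in the tutorial.
    a = torch.randn(M, M, device='cuda', dtype=torch.float16)
    b = torch.randn(M, M, device='cuda', dtype=torch.float16)
    ms = triton.testing.do_bench(lambda: torch.matmul(a, b))
    return tflops(M, M, M, ms)
    # The Triton columns would time triton_matmul(a, b) the same way.

The swept sizes in the table are M = 128, 256, ..., 4096 in steps of 128, matching rows 0-31.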
@@ -5,12 +5,12 @@
 
 Computation times
 =================
-**03:33.244** total execution time for **getting-started_tutorials** files:
+**03:25.643** total execution time for **getting-started_tutorials** files:
 
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 02:09.474 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 02:02.006 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 01:12.683 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 01:12.613 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 00:11.086 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 00:11.024 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+