diff --git a/_downloads/662999063954282841dc90b8945f85ce/tutorials_jupyter.zip b/_downloads/662999063954282841dc90b8945f85ce/tutorials_jupyter.zip
index c889466d0..3a84c0113 100644
Binary files a/_downloads/662999063954282841dc90b8945f85ce/tutorials_jupyter.zip and b/_downloads/662999063954282841dc90b8945f85ce/tutorials_jupyter.zip differ
diff --git a/_downloads/763344228ae6bc253ed1a6cf586aa30d/tutorials_python.zip b/_downloads/763344228ae6bc253ed1a6cf586aa30d/tutorials_python.zip
index 52bea0f86..f0c199168 100644
Binary files a/_downloads/763344228ae6bc253ed1a6cf586aa30d/tutorials_python.zip and b/_downloads/763344228ae6bc253ed1a6cf586aa30d/tutorials_python.zip differ
diff --git a/_images/sphx_glr_01-vector-add_001.png b/_images/sphx_glr_01-vector-add_001.png
index 4cbdf43ed..1fb65d7ca 100644
Binary files a/_images/sphx_glr_01-vector-add_001.png and b/_images/sphx_glr_01-vector-add_001.png differ
diff --git a/_images/sphx_glr_01-vector-add_thumb.png b/_images/sphx_glr_01-vector-add_thumb.png
index 625d7667e..c5b3ecc30 100644
Binary files a/_images/sphx_glr_01-vector-add_thumb.png and b/_images/sphx_glr_01-vector-add_thumb.png differ
diff --git a/_images/sphx_glr_02-fused-softmax_001.png b/_images/sphx_glr_02-fused-softmax_001.png
index a57ba5dac..a2e869a27 100644
Binary files a/_images/sphx_glr_02-fused-softmax_001.png and b/_images/sphx_glr_02-fused-softmax_001.png differ
diff --git a/_images/sphx_glr_02-fused-softmax_thumb.png b/_images/sphx_glr_02-fused-softmax_thumb.png
index 0ef88655b..5741abfb9 100644
Binary files a/_images/sphx_glr_02-fused-softmax_thumb.png and b/_images/sphx_glr_02-fused-softmax_thumb.png differ
diff --git a/_images/sphx_glr_03-matrix-multiplication_001.png b/_images/sphx_glr_03-matrix-multiplication_001.png
index c373e6147..d5ae8c7d1 100644
Binary files a/_images/sphx_glr_03-matrix-multiplication_001.png and b/_images/sphx_glr_03-matrix-multiplication_001.png differ
diff --git a/_images/sphx_glr_03-matrix-multiplication_thumb.png b/_images/sphx_glr_03-matrix-multiplication_thumb.png
index c96d1b619..409ce0f2e 100644
Binary files a/_images/sphx_glr_03-matrix-multiplication_thumb.png and b/_images/sphx_glr_03-matrix-multiplication_thumb.png differ
diff --git a/_sources/getting-started/tutorials/01-vector-add.rst.txt b/_sources/getting-started/tutorials/01-vector-add.rst.txt
index 812086c14..aeb59f262 100644
--- a/_sources/getting-started/tutorials/01-vector-add.rst.txt
+++ b/_sources/getting-started/tutorials/01-vector-add.rst.txt
@@ -231,20 +231,20 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 vector-add-performance:
 size Triton Torch
- 0 4096.0 9.600000 9.600000
+ 0 4096.0 8.000000 9.600000
 1 8192.0 19.200000 19.200000
 2 16384.0 38.400001 38.400001
 3 32768.0 76.800002 76.800002
 4 65536.0 127.999995 127.999995
 5 131072.0 219.428568 219.428568
- 6 262144.0 341.333321 384.000001
+ 6 262144.0 341.333321 341.333321
 7 524288.0 472.615390 472.615390
 8 1048576.0 614.400016 614.400016
- 9 2097152.0 722.823517 722.823517
+ 9 2097152.0 702.171410 702.171410
 10 4194304.0 780.190482 780.190482
 11 8388608.0 812.429770 812.429770
 12 16777216.0 833.084721 833.084721
- 13 33554432.0 843.811163 843.811163
+ 13 33554432.0 843.811163 842.004273
 14 67108864.0 848.362445 848.362445
 15 134217728.0 851.577704 850.656574
@@ -254,7 +254,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 .. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 9.393 seconds)
+ **Total running time of the script:** ( 0 minutes 10.996 seconds)
 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
diff --git a/_sources/getting-started/tutorials/02-fused-softmax.rst.txt b/_sources/getting-started/tutorials/02-fused-softmax.rst.txt
index fba34d5a1..cdee6a404 100644
--- a/_sources/getting-started/tutorials/02-fused-softmax.rst.txt
+++ b/_sources/getting-started/tutorials/02-fused-softmax.rst.txt
@@ -302,15 +302,15 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
 N Triton Torch (native) Torch (jit)
 0 256.0 512.000001 546.133347 186.181817
 1 384.0 585.142862 585.142862 153.600004
- 2 512.0 630.153853 606.814814 154.566038
- 3 640.0 682.666684 640.000002 160.000000
- 4 768.0 702.171410 664.216187 163.839992
+ 2 512.0 630.153853 585.142849 154.566038
+ 3 640.0 660.645170 640.000002 160.000000
+ 4 768.0 702.171410 664.216187 162.754967
 .. ... ... ... ...
 93 12160.0 812.359066 405.755985 199.038365
- 94 12288.0 812.429770 415.661740 199.298541
- 95 12416.0 810.840807 412.149375 198.854847
+ 94 12288.0 812.429770 415.222812 199.298541
+ 95 12416.0 810.840807 411.722274 198.954424
 96 12544.0 810.925276 412.971190 199.111113
- 97 12672.0 811.007961 412.516771 199.264875
+ 97 12672.0 811.007961 412.097543 199.264875
 [98 rows x 4 columns]
@@ -328,7 +328,7 @@ In the above plot, we can see that:
 .. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 1 minutes 14.917 seconds)
+ **Total running time of the script:** ( 1 minutes 12.586 seconds)
 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
diff --git a/_sources/getting-started/tutorials/03-matrix-multiplication.rst.txt b/_sources/getting-started/tutorials/03-matrix-multiplication.rst.txt
index 2e5cd5949..e4c717096 100644
--- a/_sources/getting-started/tutorials/03-matrix-multiplication.rst.txt
+++ b/_sources/getting-started/tutorials/03-matrix-multiplication.rst.txt
@@ -462,37 +462,37 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 matmul-performance:
 M cuBLAS ... Triton Triton (+ LeakyReLU)
- 0 256.0 2.978909 ... 2.978909 2.978909
- 1 384.0 7.372800 ... 8.507077 7.899428
- 2 512.0 14.563555 ... 16.384000 16.384000
+ 0 256.0 2.730667 ... 2.978909 2.978909
+ 1 384.0 7.372800 ... 7.899428 7.899428
+ 2 512.0 14.563555 ... 16.384000 15.420235
 3 640.0 22.260869 ... 24.380953 24.380953
 4 768.0 32.768000 ... 34.028308 34.028308
- 5 896.0 39.025776 ... 40.140799 39.025776
+ 5 896.0 37.971025 ... 40.140799 39.025776
 6 1024.0 49.932191 ... 52.428801 52.428801
- 7 1152.0 44.566925 ... 46.656000 45.938215
+ 7 1152.0 44.566925 ... 46.656000 46.656000
 8 1280.0 51.200001 ... 56.888887 56.109587
 9 1408.0 64.138541 ... 63.392744 62.664092
- 10 1536.0 80.430545 ... 76.106321 76.106321
+ 10 1536.0 80.430545 ... 76.106321 75.296679
 11 1664.0 63.372618 ... 62.061463 62.061463
- 12 1792.0 72.983276 ... 62.790080 62.441243
- 13 1920.0 69.467336 ... 67.434145 70.172588
- 14 2048.0 73.908442 ... 74.565406 74.565406
- 15 2176.0 81.143743 ... 80.494588 80.817862
- 16 2304.0 68.056616 ... 73.728002 73.501144
- 17 2432.0 71.305746 ... 80.499895 80.269900
- 18 2560.0 78.019048 ... 77.465723 77.283019
- 19 2688.0 83.004501 ... 83.737433 84.483418
- 20 2816.0 81.521884 ... 78.301990 80.767055
- 21 2944.0 82.102191 ... 77.626218 71.194334
- 22 3072.0 82.540970 ... 83.025078 83.269271
- 23 3200.0 84.656085 ... 90.651555 84.993363
- 24 3328.0 83.130825 ... 86.528001 87.262177
- 25 3456.0 79.430113 ... 80.460651 83.459178
- 26 3584.0 87.466332 ... 91.938029 88.761490
- 27 3712.0 85.091436 ... 83.666116 83.247783
- 28 3840.0 81.019778 ... 79.965290 82.778440
- 29 3968.0 85.992909 ... 84.974886 87.535103
- 30 4096.0 93.531519 ... 90.200084 89.837839
+ 12 1792.0 72.983276 ... 62.441243 62.441243
+ 13 1920.0 68.098521 ... 70.172588 69.818184
+ 14 2048.0 73.584279 ... 74.235468 67.932543
+ 15 2176.0 82.813365 ... 79.855747 79.540109
+ 16 2304.0 68.643310 ... 73.501144 73.275679
+ 17 2432.0 71.305746 ... 72.037087 81.908060
+ 18 2560.0 77.833728 ... 76.560748 76.560748
+ 19 2688.0 80.880718 ... 81.752274 83.552988
+ 20 2816.0 81.981598 ... 79.298560 78.442822
+ 21 2944.0 81.564701 ... 77.505492 77.265163
+ 22 3072.0 81.355034 ... 82.782312 82.903517
+ 23 3200.0 81.321474 ... 85.906037 87.671229
+ 24 3328.0 82.748617 ... 81.994643 82.275764
+ 25 3456.0 81.683457 ... 81.932484 84.244062
+ 26 3584.0 87.211821 ... 87.722333 91.284657
+ 27 3712.0 85.675250 ... 82.017526 86.267139
+ 28 3840.0 84.485870 ... 86.738820 86.602979
+ 29 3968.0 92.864488 ... 87.284643 87.159957
+ 30 4096.0 93.531519 ... 88.885914 88.827088
 [31 rows x 5 columns]
@@ -502,7 +502,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 .. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 2 minutes 19.244 seconds)
+ **Total running time of the script:** ( 2 minutes 13.178 seconds)
 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
diff --git a/_sources/getting-started/tutorials/04-low-memory-dropout.rst.txt b/_sources/getting-started/tutorials/04-low-memory-dropout.rst.txt
index 620519e57..c3b814a25 100644
--- a/_sources/getting-started/tutorials/04-low-memory-dropout.rst.txt
+++ b/_sources/getting-started/tutorials/04-low-memory-dropout.rst.txt
@@ -238,7 +238,7 @@ References
 .. rst-class:: sphx-glr-timing
- **Total running time of the script:** ( 0 minutes 0.306 seconds)
+ **Total running time of the script:** ( 0 minutes 0.189 seconds)
 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:
diff --git a/_sources/getting-started/tutorials/sg_execution_times.rst.txt b/_sources/getting-started/tutorials/sg_execution_times.rst.txt
index 755ca7d7f..610dcb0f9 100644
--- a/_sources/getting-started/tutorials/sg_execution_times.rst.txt
+++ b/_sources/getting-started/tutorials/sg_execution_times.rst.txt
@@ -5,14 +5,14 @@
 Computation times
 =================
-**04:43.861** total execution time for **getting-started_tutorials** files:
+**03:36.949** total execution time for **getting-started_tutorials** files:
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 02:19.244 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 02:13.178 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 01:14.917 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``) | 01:12.586 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 01:09.393 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``) | 00:10.996 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.306 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``) | 00:00.189 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
diff --git a/getting-started/tutorials/01-vector-add.html b/getting-started/tutorials/01-vector-add.html
index b9b8c072e..20a105751 100644
--- a/getting-started/tutorials/01-vector-add.html
+++ b/getting-started/tutorials/01-vector-add.html
@@ -320,25 +320,25 @@ for different problem sizes.

Out:

vector-add-performance:
            size      Triton       Torch
-0        4096.0    9.600000    9.600000
+0        4096.0    8.000000    9.600000
 1        8192.0   19.200000   19.200000
 2       16384.0   38.400001   38.400001
 3       32768.0   76.800002   76.800002
 4       65536.0  127.999995  127.999995
 5      131072.0  219.428568  219.428568
-6      262144.0  341.333321  384.000001
+6      262144.0  341.333321  341.333321
 7      524288.0  472.615390  472.615390
 8     1048576.0  614.400016  614.400016
-9     2097152.0  722.823517  722.823517
+9     2097152.0  702.171410  702.171410
 10    4194304.0  780.190482  780.190482
 11    8388608.0  812.429770  812.429770
 12   16777216.0  833.084721  833.084721
-13   33554432.0  843.811163  843.811163
+13   33554432.0  843.811163  842.004273
 14   67108864.0  848.362445  848.362445
 15  134217728.0  851.577704  850.656574
 
-Total running time of the script: ( 1 minutes 9.393 seconds)
+Total running time of the script: ( 0 minutes 10.996 seconds)
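
The tables in these hunks are regenerated by sphinx-gallery executing the tutorials' benchmark code (via `triton.testing.perf_report`); only the measured numbers and timings change, not the benchmarks themselves. For orientation, the sketch below shows the kind of harness that produces the vector-add table above. It is a minimal reconstruction in the spirit of the public 01-vector-add tutorial, not the exact code behind these figures: the names (`add_kernel`, `add`, `benchmark`), the BLOCK_SIZE of 1024, and the GB/s formula are illustrative assumptions, it assumes a CUDA-capable GPU, and the return convention of `triton.testing.do_bench` varies between Triton versions.

# Sketch only: approximates the vector-add benchmark whose output table is diffed above.
# Assumes a CUDA device and a recent Triton where do_bench returns a time in milliseconds.
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance adds one BLOCK_SIZE-wide slice of the inputs.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x, y):
    out = torch.empty_like(x)
    n_elements = x.numel()
    grid = lambda meta: (triton.cdiv(n_elements, meta['BLOCK_SIZE']),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out


@triton.testing.perf_report(
    triton.testing.Benchmark(
        x_names=['size'],
        x_vals=[2 ** i for i in range(12, 28)],  # 4096 ... 134217728, the sizes in the table
        x_log=True,
        line_arg='provider',
        line_vals=['triton', 'torch'],
        line_names=['Triton', 'Torch'],
        ylabel='GB/s',
        plot_name='vector-add-performance',
        args={},
    )
)
def benchmark(size, provider):
    x = torch.rand(size, device='cuda', dtype=torch.float32)
    y = torch.rand(size, device='cuda', dtype=torch.float32)
    fn = (lambda: x + y) if provider == 'torch' else (lambda: add(x, y))
    ms = triton.testing.do_bench(fn)
    # Two reads plus one write of float32 data, converted to GB/s.
    return 3 * x.numel() * x.element_size() * 1e-9 / (ms * 1e-3)


if __name__ == '__main__':
    # print_data=True prints a pandas table like the one in the hunks above.
    benchmark.run(print_data=True, show_plots=False)

Running such a harness prints a table with the same size / Triton / Torch columns that the hunks above modify; small run-to-run variation in the measured bandwidth is expected, which is why documentation rebuilds churn these generated files.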