[GH-PAGES] Updated website
(Six benchmark plot images regenerated; dimensions and file sizes unchanged: 24 KiB, 15 KiB, 38 KiB, 24 KiB, 57 KiB, 33 KiB.)
@@ -254,7 +254,7 @@ We can now run the decorated function above. Pass `print_data=True` to see the p
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 1 minutes  50.877 seconds)
+   **Total running time of the script:** ( 1 minutes  52.706 seconds)


 .. _sphx_glr_download_getting-started_tutorials_01-vector-add.py:
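The vector-add benchmark reports effective memory bandwidth in GB/s, and small run-to-run drift like the timing change above is expected. As a hedged sketch (the `gbps` helper below is illustrative, not the tutorial's exact code), the figure can be derived from the element count and the measured runtime, assuming two fp32 reads and one fp32 write per element:

```python
def gbps(n_elements: int, ms: float, bytes_per_elt: int = 4, accesses: int = 3) -> float:
    """Effective bandwidth in GB/s for z = x + y: two reads plus one write
    (accesses=3) of n_elements fp32 values, given a runtime of ms milliseconds."""
    total_bytes = accesses * n_elements * bytes_per_elt
    return total_bytes * 1e-9 / (ms * 1e-3)

# 134217728 fp32 elements moved 3x in ~1.9 ms is roughly 848 GB/s,
# the same order of magnitude as the largest rows of the benchmark table
print(round(gbps(134217728, 1.9), 1))
```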
@@ -286,17 +286,17 @@ We will then compare its performance against (1) :code:`torch.softmax` and (2) t
     softmax-performance:
               N      Triton  Torch (native)  Torch (jit)
-    0     256.0  512.000001      546.133347   186.181817
+    0     256.0  512.000001      546.133347   188.321838
     1     384.0  585.142862      585.142862   153.600004
     2     512.0  630.153853      606.814814   154.566038
     3     640.0  682.666684      640.000002   160.000000
-    4     768.0  702.171410      664.216187   162.754967
+    4     768.0  702.171410      664.216187   163.839992
     ..      ...         ...             ...          ...
-    93  12160.0  810.666687      406.179533   198.834951
-    94  12288.0  810.754644      416.101597   199.197579
-    95  12416.0  809.189387      412.149375   198.755369
-    96  12544.0  807.661970      412.971190   199.012395
-    97  12672.0  807.776923      412.097543   199.069228
+    93  12160.0  810.666687      406.179533   199.038365
+    94  12288.0  810.754644      415.661740   199.298541
+    95  12416.0  809.189387      412.149375   198.954424
+    96  12544.0  807.661970      412.971190   199.209928
+    97  12672.0  807.776923      412.097543   199.264875

     [98 rows x 4 columns]
@@ -314,7 +314,7 @@ In the above plot, we can see that:
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 3 minutes  28.167 seconds)
+   **Total running time of the script:** ( 3 minutes  28.107 seconds)


 .. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
@@ -463,36 +463,36 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
     matmul-performance:
              M     cuBLAS  ...     Triton  Triton (+ LeakyReLU)
     0    256.0   2.730667  ...   2.978909              2.978909
-    1    384.0   7.372800  ...   8.507077              8.507077
+    1    384.0   7.372800  ...   8.507077              7.899428
     2    512.0  14.563555  ...  16.384000             16.384000
     3    640.0  22.260869  ...  24.380953             24.380953
     4    768.0  32.768000  ...  34.028308             34.028308
     5    896.0  39.025776  ...  40.140799             39.025776
     6   1024.0  49.932191  ...  53.773130             52.428801
-    7   1152.0  44.566925  ...  46.656000             45.938215
-    8   1280.0  51.200001  ...  56.109587             56.109587
-    9   1408.0  64.138541  ...  66.485074             65.684049
-    10  1536.0  80.430545  ...  78.643199             78.643199
-    11  1664.0  62.929456  ...  62.061463             61.636381
-    12  1792.0  72.512412  ...  72.047592             71.588687
-    13  1920.0  69.120002  ...  70.530615             70.530615
+    7   1152.0  44.566925  ...  46.656000             46.656000
+    8   1280.0  51.200001  ...  56.888887             56.109587
+    9   1408.0  64.138541  ...  67.305878             66.485074
+    10  1536.0  79.526831  ...  78.643199             78.643199
+    11  1664.0  62.929456  ...  62.061463             62.061463
+    12  1792.0  72.983276  ...  72.047592             72.047592
+    13  1920.0  69.120002  ...  70.530615             70.172588
     14  2048.0  73.908442  ...  76.959706             76.608294
     15  2176.0  83.500614  ...  86.367588             85.632545
-    16  2304.0  68.446623  ...  76.809875             76.563695
-    17  2432.0  71.305746  ...  74.521127             85.134737
-    18  2560.0  77.833728  ...  81.108913             81.108913
-    19  2688.0  83.552988  ...  89.676257             89.044730
-    20  2816.0  83.392363  ...  82.759409             83.074685
-    21  2944.0  82.237674  ...  81.698415             82.921853
-    22  3072.0  82.121977  ...  89.170242             88.335577
-    23  3200.0  83.116885  ...  94.955488             94.534716
-    24  3328.0  82.275764  ...  84.200347             84.151136
-    25  3456.0  82.688790  ...  85.676480             90.079964
-    26  3584.0  87.296493  ...  98.699661             91.473299
-    27  3712.0  81.482335  ...  88.640059             83.947349
-    28  3840.0  84.292684  ...  91.701494             84.712368
-    29  3968.0  91.678389  ...  84.152622             91.266964
-    30  4096.0  86.536250  ...  85.436280             91.867031
+    16  2304.0  68.446623  ...  77.057651             77.057651
+    17  2432.0  71.305746  ...  75.118889             85.393507
+    18  2560.0  78.019048  ...  81.310171             80.511054
+    19  2688.0  84.108772  ...  89.464755             89.044730
+    20  2816.0  81.067298  ...  83.392363             83.074685
+    21  2944.0  82.373605  ...  83.060049             82.921853
+    22  3072.0  82.540970  ...  88.473602             88.473602
+    23  3200.0  84.544253  ...  95.025983             95.808380
+    24  3328.0  83.419811  ...  82.181847             83.808259
+    25  3456.0  81.849303  ...  90.484366             90.892410
+    26  3584.0  83.332156  ...  90.552082             96.475743
+    27  3712.0  85.896254  ...  86.416391             88.015279
+    28  3840.0  80.139129  ...  85.333335             91.097196
+    29  3968.0  86.053553  ...  91.472214             86.053553
+    30  4096.0  93.727466  ...  91.428970             85.325956

     [31 rows x 5 columns]
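The numeric columns in the matmul table are throughput figures (TFLOPS on the benchmarking GPU). As a hedged sketch of how such a number relates to problem size and measured runtime (`tflops` is a hypothetical helper, not the tutorial's exact code), for a square M=N=K problem:

```python
def tflops(m: int, n: int, k: int, ms: float) -> float:
    """A matmul performs 2*M*N*K floating-point operations (one multiply and
    one add per inner-product term); divide by runtime in seconds for TFLOPS."""
    return 2 * m * n * k * 1e-12 / (ms * 1e-3)

# A 4096x4096x4096 matmul finishing in ~1.5 ms sustains ~91.6 TFLOPS,
# the same ballpark as the 4096.0 rows of the table
print(round(tflops(4096, 4096, 4096, 1.5), 1))
```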
@@ -502,7 +502,7 @@ We can now compare the performance of our kernel against that of cuBLAS. Here we
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 6 minutes  11.392 seconds)
+   **Total running time of the script:** ( 6 minutes  5.192 seconds)


 .. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
@@ -238,7 +238,7 @@ References
 .. rst-class:: sphx-glr-timing

-   **Total running time of the script:** ( 0 minutes  0.265 seconds)
+   **Total running time of the script:** ( 0 minutes  0.186 seconds)


 .. _sphx_glr_download_getting-started_tutorials_04-low-memory-dropout.py:
@@ -5,14 +5,14 @@
 Computation times
 =================
-**11:30.701** total execution time for **getting-started_tutorials** files:
+**11:26.192** total execution time for **getting-started_tutorials** files:

 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:11.392 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_03-matrix-multiplication.py` (``03-matrix-multiplication.py``) | 06:05.192 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:28.167 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_02-fused-softmax.py` (``02-fused-softmax.py``)                 | 03:28.107 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:50.877 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_01-vector-add.py` (``01-vector-add.py``)                       | 01:52.706 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.265 | 0.0 MB |
+| :ref:`sphx_glr_getting-started_tutorials_04-low-memory-dropout.py` (``04-low-memory-dropout.py``)       | 00:00.186 | 0.0 MB |
 +---------------------------------------------------------------------------------------------------------+-----------+--------+
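As a sanity check (a throwaway snippet, not part of the diff), the four updated per-script times sum to the new total reported in the table, up to millisecond rounding:

```python
def mmss(t: str) -> float:
    """Parse a Sphinx-Gallery 'MM:SS.sss' timing string into seconds."""
    m, s = t.split(":")
    return int(m) * 60 + float(s)

new_times = ["06:05.192", "03:28.107", "01:52.706", "00:00.186"]
total = sum(mmss(t) for t in new_times)
# 686.191 s, i.e. 11:26.191 -- the table's 11:26.192 total, modulo rounding
print(round(total, 3))
```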
@@ -338,7 +338,7 @@ for different problem sizes.</p>
 15  134217728.0  849.737435  850.656574
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  50.877 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes  52.706 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>
@@ -373,17 +373,17 @@ We will then compare its performance against (1) <code class="code docutils lite
 <p class="sphx-glr-script-out">Out:</p>
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
           N      Triton  Torch (native)  Torch (jit)
-0     256.0  512.000001      546.133347   186.181817
+0     256.0  512.000001      546.133347   188.321838
 1     384.0  585.142862      585.142862   153.600004
 2     512.0  630.153853      606.814814   154.566038
 3     640.0  682.666684      640.000002   160.000000
-4     768.0  702.171410      664.216187   162.754967
+4     768.0  702.171410      664.216187   163.839992
 ..      ...         ...             ...          ...
-93  12160.0  810.666687      406.179533   198.834951
-94  12288.0  810.754644      416.101597   199.197579
-95  12416.0  809.189387      412.149375   198.755369
-96  12544.0  807.661970      412.971190   199.012395
-97  12672.0  807.776923      412.097543   199.069228
+93  12160.0  810.666687      406.179533   199.038365
+94  12288.0  810.754644      415.661740   199.298541
+95  12416.0  809.189387      412.149375   198.954424
+96  12544.0  807.661970      412.971190   199.209928
+97  12672.0  807.776923      412.097543   199.264875

 [98 rows x 4 columns]
 </pre></div>
@@ -396,7 +396,7 @@ We will then compare its performance against (1) <code class="code docutils lite
 Note however that the PyTorch <cite>softmax</cite> operation is more general and works on tensors of any shape.</p></li>
 </ul>
 </div></blockquote>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  28.167 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes  28.107 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>
@@ -568,41 +568,41 @@ torch_output=tensor([[  1.1045, -36.9688,  31.4688,  ..., -11.3906,  24.4531, -3
 <div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>matmul-performance:
          M     cuBLAS  ...     Triton  Triton (+ LeakyReLU)
 0    256.0   2.730667  ...   2.978909              2.978909
-1    384.0   7.372800  ...   8.507077              8.507077
+1    384.0   7.372800  ...   8.507077              7.899428
 2    512.0  14.563555  ...  16.384000             16.384000
 3    640.0  22.260869  ...  24.380953             24.380953
 4    768.0  32.768000  ...  34.028308             34.028308
 5    896.0  39.025776  ...  40.140799             39.025776
 6   1024.0  49.932191  ...  53.773130             52.428801
-7   1152.0  44.566925  ...  46.656000             45.938215
-8   1280.0  51.200001  ...  56.109587             56.109587
-9   1408.0  64.138541  ...  66.485074             65.684049
-10  1536.0  80.430545  ...  78.643199             78.643199
-11  1664.0  62.929456  ...  62.061463             61.636381
-12  1792.0  72.512412  ...  72.047592             71.588687
-13  1920.0  69.120002  ...  70.530615             70.530615
+7   1152.0  44.566925  ...  46.656000             46.656000
+8   1280.0  51.200001  ...  56.888887             56.109587
+9   1408.0  64.138541  ...  67.305878             66.485074
+10  1536.0  79.526831  ...  78.643199             78.643199
+11  1664.0  62.929456  ...  62.061463             62.061463
+12  1792.0  72.983276  ...  72.047592             72.047592
+13  1920.0  69.120002  ...  70.530615             70.172588
 14  2048.0  73.908442  ...  76.959706             76.608294
 15  2176.0  83.500614  ...  86.367588             85.632545
-16  2304.0  68.446623  ...  76.809875             76.563695
-17  2432.0  71.305746  ...  74.521127             85.134737
-18  2560.0  77.833728  ...  81.108913             81.108913
-19  2688.0  83.552988  ...  89.676257             89.044730
-20  2816.0  83.392363  ...  82.759409             83.074685
-21  2944.0  82.237674  ...  81.698415             82.921853
-22  3072.0  82.121977  ...  89.170242             88.335577
-23  3200.0  83.116885  ...  94.955488             94.534716
-24  3328.0  82.275764  ...  84.200347             84.151136
-25  3456.0  82.688790  ...  85.676480             90.079964
-26  3584.0  87.296493  ...  98.699661             91.473299
-27  3712.0  81.482335  ...  88.640059             83.947349
-28  3840.0  84.292684  ...  91.701494             84.712368
-29  3968.0  91.678389  ...  84.152622             91.266964
-30  4096.0  86.536250  ...  85.436280             91.867031
+16  2304.0  68.446623  ...  77.057651             77.057651
+17  2432.0  71.305746  ...  75.118889             85.393507
+18  2560.0  78.019048  ...  81.310171             80.511054
+19  2688.0  84.108772  ...  89.464755             89.044730
+20  2816.0  81.067298  ...  83.392363             83.074685
+21  2944.0  82.373605  ...  83.060049             82.921853
+22  3072.0  82.540970  ...  88.473602             88.473602
+23  3200.0  84.544253  ...  95.025983             95.808380
+24  3328.0  83.419811  ...  82.181847             83.808259
+25  3456.0  81.849303  ...  90.484366             90.892410
+26  3584.0  83.332156  ...  90.552082             96.475743
+27  3712.0  85.896254  ...  86.416391             88.015279
+28  3840.0  80.139129  ...  85.333335             91.097196
+29  3968.0  86.053553  ...  91.472214             86.053553
+30  4096.0  93.727466  ...  91.428970             85.325956

 [31 rows x 5 columns]
 </pre></div>
 </div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes  11.392 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 6 minutes  5.192 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>
@@ -370,7 +370,7 @@ to explore the <cite>triton/language/random</cite> folder!</p>
 <dd><p>Nitish Srivastava and Geoffrey Hinton and Alex Krizhevsky and Ilya Sutskever and Ruslan Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, JMLR 2014</p>
 </dd>
 </dl>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes  0.265 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes  0.186 seconds)</p>
 <div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-04-low-memory-dropout-py">
 <div class="sphx-glr-download sphx-glr-download-python docutils container">
 <p><a class="reference download internal" download="" href="../../_downloads/c9aed78977a4c05741d675a38dde3d7d/04-low-memory-dropout.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">04-low-memory-dropout.py</span></code></a></p>
@@ -174,7 +174,7 @@
 <div class="section" id="computation-times">
 <span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline">¶</a></h1>
-<p><strong>11:30.701</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
+<p><strong>11:26.192</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
 <table class="docutils align-default">
 <colgroup>
 <col style="width: 85%" />
@@ -183,19 +183,19 @@
 </colgroup>
 <tbody>
 <tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
-<td><p>06:11.392</p></td>
+<td><p>06:05.192</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
-<td><p>03:28.167</p></td>
+<td><p>03:28.107</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-odd"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>
-<td><p>01:50.877</p></td>
+<td><p>01:52.706</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 <tr class="row-even"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
-<td><p>00:00.265</p></td>
+<td><p>00:00.186</p></td>
 <td><p>0.0 MB</p></td>
 </tr>
 </tbody>