[GH-PAGES] Updated website

Philippe Tillet
2022-08-08 00:48:35 +00:00
parent 355b06f4b3
commit d155d9a166
167 changed files with 292 additions and 292 deletions


@@ -339,7 +339,7 @@ for different problem sizes.</p>
15 134217728.0 849.737435 850.656574
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 41.742 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 1 minutes 44.952 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-01-vector-add-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/62d97d49a32414049819dd8bb8378080/01-vector-add.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">01-vector-add.py</span></code></a></p>
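
For context on the numbers above: the 01-vector-add page benchmarks an element-wise addition kernel against torch. A minimal sketch of such a kernel, patterned on the public Triton tutorial (the kernel source itself is not part of this diff, so the names and the BLOCK_SIZE choice here are illustrative):

import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous BLOCK_SIZE-wide slice.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    # One program per BLOCK_SIZE elements of the output.
    grid = lambda meta: (triton.cdiv(n, meta['BLOCK_SIZE']),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

The ~850 GB/s figures in the context row above are memory bandwidth: adding float32 vectors moves roughly 12 bytes per element (two loads, one store), and the script converts measured time into GB/s, so Triton and Torch both saturate DRAM at 2^27 elements.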


@@ -375,16 +375,16 @@ We will then compare its performance against (1) <code class="code docutils lite
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>softmax-performance:
N Triton Torch (native) Torch (jit)
0 256.0 512.000001 546.133347 188.321838
-1 384.0 614.400016 585.142862 153.600004
+1 384.0 585.142862 585.142862 151.703707
2 512.0 655.360017 606.814814 154.566038
-3 640.0 682.666684 640.000002 160.000000
-4 768.0 722.823517 664.216187 162.754967
+3 640.0 682.666684 640.000002 158.759699
+4 768.0 722.823517 664.216187 163.839992
.. ... ... ... ...
-93 12160.0 814.058574 406.179533 198.834951
-94 12288.0 814.111783 416.101597 199.197579
-95 12416.0 812.498981 412.149375 198.755369
-96 12544.0 812.566838 412.971190 199.012395
-97 12672.0 812.633240 412.097543 199.069228
+93 12160.0 812.359066 406.179533 198.631953
+94 12288.0 814.111783 415.661740 198.995960
+95 12416.0 812.498981 411.296057 198.556711
+96 12544.0 812.566838 412.971190 198.766042
+97 12672.0 812.633240 411.679167 198.971549
[98 rows x 4 columns]
</pre></div>
@@ -397,7 +397,7 @@ We will then compare its performance against (1) <code class="code docutils lite
Note, however, that the PyTorch <cite>softmax</cite> operation is more general and works on tensors of any shape.</p></li>
</ul>
</div></blockquote>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 23.052 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 3 minutes 22.346 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">02-fused-softmax.py</span></code></a></p>
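
The softmax-performance table above compares a fused Triton kernel against torch.softmax ("Torch (native)") and a TorchScript version ("Torch (jit)"). A sketch of the row-per-program fused kernel in the spirit of the public tutorial (again illustrative; the kernel source is not in this diff):

import triton
import triton.language as tl

@triton.jit
def softmax_kernel(output_ptr, input_ptr, input_row_stride, output_row_stride,
                   n_cols, BLOCK_SIZE: tl.constexpr):
    # One program normalizes one row; BLOCK_SIZE is the smallest power of
    # two >= n_cols, so the whole row stays resident in registers.
    row_idx = tl.program_id(0)
    col_offsets = tl.arange(0, BLOCK_SIZE)
    mask = col_offsets < n_cols
    row = tl.load(input_ptr + row_idx * input_row_stride + col_offsets,
                  mask=mask, other=-float('inf'))
    row = row - tl.max(row, axis=0)  # subtract the row max for numerical stability
    num = tl.exp(row)
    out = num / tl.sum(num, axis=0)
    tl.store(output_ptr + row_idx * output_row_stride + col_offsets,
             out, mask=mask)

Reading, reducing, and normalizing in a single pass is what keeps the Triton column near DRAM bandwidth (~812 GB/s) at large N in the table above, while the jitted version, which materializes intermediates, plateaus around 200 GB/s.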


@@ -578,32 +578,32 @@ torch_output=tensor([[ 1.1045, -36.9688, 31.4688, ..., -11.3906, 24.4531, -3
7 1152.0 45.242181 ... 46.656000 46.656000
8 1280.0 51.200001 ... 56.888887 56.109587
9 1408.0 64.138541 ... 67.305878 66.485074
-10 1536.0 80.430545 ... 79.526831 79.526831
+10 1536.0 80.430545 ... 79.526831 78.643199
11 1664.0 62.929456 ... 62.061463 62.061463
-12 1792.0 72.512412 ... 72.047592 71.588687
-13 1920.0 69.120002 ... 70.530615 70.172588
-14 2048.0 73.908442 ... 77.314362 76.959706
-15 2176.0 83.155572 ... 86.367588 85.632545
-16 2304.0 68.251065 ... 76.809875 76.563695
-17 2432.0 71.305746 ... 74.918570 84.115159
-18 2560.0 78.019048 ... 81.512437 80.908642
-19 2688.0 83.186525 ... 89.464755 90.102270
-20 2816.0 79.879498 ... 83.552120 82.602666
-21 2944.0 81.100130 ... 82.102191 82.102191
-22 3072.0 82.661468 ... 87.313963 86.579673
-23 3200.0 84.656085 ... 95.952022 94.604578
-24 3328.0 82.891535 ... 84.397770 84.003845
-25 3456.0 81.849303 ... 91.511426 90.994998
-26 3584.0 85.308722 ... 90.458141 96.579370
-27 3712.0 85.528545 ... 88.015279 86.641231
-28 3840.0 81.079177 ... 84.290672 91.209894
-29 3968.0 85.871877 ... 91.130650 83.406657
-30 4096.0 93.142072 ... 91.428970 89.299883
+12 1792.0 72.512412 ... 71.588687 71.588687
+13 1920.0 69.120002 ... 70.530615 70.530615
+14 2048.0 73.908442 ... 76.959706 76.959706
+15 2176.0 83.500614 ... 85.998493 85.269692
+16 2304.0 68.446623 ... 76.319081 76.563695
+17 2432.0 71.305746 ... 74.918570 84.367759
+18 2560.0 78.019048 ... 81.310171 81.108913
+19 2688.0 83.461070 ... 89.044730 89.888756
+20 2816.0 83.392363 ... 81.981598 82.602666
+21 2944.0 81.967162 ... 80.251257 81.967162
+22 3072.0 81.825298 ... 88.060814 87.924073
+23 3200.0 84.656085 ... 95.665176 95.238096
+24 3328.0 83.226931 ... 84.795401 84.596116
+25 3456.0 82.266905 ... 91.355888 90.841203
+26 3584.0 83.876297 ... 93.661869 86.958797
+27 3712.0 84.946722 ... 85.970176 87.552452
+28 3840.0 82.747472 ... 86.265212 91.625518
+29 3968.0 86.114283 ... 90.994735 83.807647
+30 4096.0 92.563952 ... 85.434583 85.271746
[31 rows x 5 columns]
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 23.909 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 5 minutes 59.817 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-03-matrix-multiplication-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d5fee5b55a64e47f1b5724ec39adf171/03-matrix-multiplication.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">03-matrix-multiplication.py</span></code></a></p>
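
These matrix-multiplication figures come from a blocked kernel built around tl.dot, with each row of the table being one problem size and each column one implementation. A heavily simplified sketch of the idea (assuming M, N, K are multiples of the block sizes, and omitting the boundary masking, autotuning, and L2-friendly program re-ordering the real tutorial adds):

import triton
import triton.language as tl

@triton.jit
def matmul_kernel(a_ptr, b_ptr, c_ptr, M, N, K,
                  stride_am, stride_ak, stride_bk, stride_bn,
                  stride_cm, stride_cn,
                  BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
                  BLOCK_K: tl.constexpr):
    # Each program computes one BLOCK_M x BLOCK_N tile of C.
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    offs_m = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    offs_k = tl.arange(0, BLOCK_K)
    a_ptrs = a_ptr + offs_m[:, None] * stride_am + offs_k[None, :] * stride_ak
    b_ptrs = b_ptr + offs_k[:, None] * stride_bk + offs_n[None, :] * stride_bn
    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    for _ in range(0, K, BLOCK_K):
        # March the A and B tiles along K, accumulating in fp32.
        acc += tl.dot(tl.load(a_ptrs), tl.load(b_ptrs))
        a_ptrs += BLOCK_K * stride_ak
        b_ptrs += BLOCK_K * stride_bk
    c_ptrs = c_ptr + offs_m[:, None] * stride_cm + offs_n[None, :] * stride_cn
    tl.store(c_ptrs, acc)

The few-percent run-to-run scatter visible across rows 15-30 is typical benchmark noise for autotuned kernels at these sizes, which is also why the script's total runtime moved from 5:23.909 to 5:59.817.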


@@ -371,7 +371,7 @@ to explore the <cite>triton/language/random</cite> folder!</p>
<dd><p>Nitish Srivastava and Geoffrey Hinton and Alex Krizhevsky and Ilya Sutskever and Ruslan Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting”, JMLR 2014</p>
</dd>
</dl>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.011 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 0 minutes 0.114 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-04-low-memory-dropout-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/c9aed78977a4c05741d675a38dde3d7d/04-low-memory-dropout.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">04-low-memory-dropout.py</span></code></a></p>
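
The page touched here documents low-memory dropout (the Srivastava et al. entry above is the dropout paper it cites): instead of storing a dropout mask for the backward pass, the kernel regenerates the randomness from a seed and each element's offset. A sketch patterned on the tutorial's approach (names illustrative; the source is not in this diff):

import triton
import triton.language as tl

@triton.jit
def seeded_dropout_kernel(x_ptr, out_ptr, n_elements, p, seed,
                          BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    # Counter-based RNG: the same (seed, offset) pair always yields the
    # same value, so the mask never has to be materialized in memory.
    random = tl.rand(seed, offsets)
    keep = random > p
    out = tl.where(keep, x / (1 - p), 0.0)  # rescale kept activations by 1/(1-p)
    tl.store(out_ptr + offsets, out, mask=mask)

Because only the seed is stored, the example has almost nothing to compute, which is why the whole script runs in a fraction of a second above.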


@@ -194,36 +194,36 @@ to download the full example code</p>
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre><span></span>layer-norm-backward:
N Triton Torch Apex
-0 1024.0 311.088617 99.096776 311.088617
-1 1536.0 351.085717 133.083026 341.333333
-2 2048.0 423.724127 162.217818 336.657521
-3 2560.0 461.954908 182.857144 330.322572
-4 3072.0 515.580429 191.501303 320.556515
-5 3584.0 554.941930 208.271186 308.301075
-6 4096.0 568.231237 220.412561 297.890900
-7 4608.0 498.162157 231.849059 287.251954
-8 5120.0 525.128191 242.845844 283.787523
-9 5632.0 538.517949 243.107920 291.310338
-10 6144.0 544.118087 248.661056 286.322318
-11 6656.0 527.207907 256.000009 286.793541
-12 7168.0 505.976473 262.243907 288.644296
-13 7680.0 481.253256 260.707203 276.756754
-14 8192.0 460.440290 269.326017 287.018988
-15 8704.0 416.958106 267.472468 284.987724
-16 9216.0 428.651187 273.066667 289.507855
-17 9728.0 438.857162 280.615388 288.950501
-18 10240.0 447.650282 286.767793 290.153487
-19 10752.0 430.079980 246.699797 290.267711
-20 11264.0 429.786952 245.091565 285.767446
-21 11776.0 421.826879 249.447482 288.686414
-22 12288.0 420.102570 254.453844 294.911986
-23 12800.0 415.696898 253.465340 290.084977
-24 13312.0 412.242569 252.759501 290.179836
-25 13824.0 405.098897 257.190689 292.829653
-26 14336.0 397.761846 254.862216 286.481278
-27 14848.0 384.414233 257.293872 289.246765
-28 15360.0 374.253788 257.790220 288.000007
-29 15872.0 367.336555 262.890274 291.229369
+0 1024.0 311.088617 99.497980 303.407414
+1 1536.0 351.085717 134.540150 341.333333
+2 2048.0 423.724127 161.684218 325.509933
+3 2560.0 465.454542 182.857144 332.108113
+4 3072.0 515.580429 191.501303 315.076914
+5 3584.0 551.384634 207.768111 308.301075
+6 4096.0 568.231237 220.412561 300.623865
+7 4608.0 500.416301 231.849059 292.571431
+8 5120.0 525.128191 242.845844 288.450695
+9 5632.0 542.843364 242.236559 287.591490
+10 6144.0 548.163546 249.925419 288.000001
+11 6656.0 537.858601 254.369423 284.748652
+12 7168.0 510.480705 254.862216 278.368936
+13 7680.0 482.513091 262.190612 276.341823
+14 8192.0 462.607053 267.130429 280.068380
+15 8704.0 416.127506 265.096445 283.056921
+16 9216.0 429.483477 272.394084 288.375482
+17 9728.0 436.396262 281.630872 290.027323
+18 10240.0 446.025405 285.435547 288.789653
+19 10752.0 432.966444 246.935876 290.922209
+20 11264.0 429.104745 244.869560 287.285864
+21 11776.0 422.457417 249.667843 288.981596
+22 12288.0 420.102570 254.234486 294.617366
+23 12800.0 416.824953 253.256381 288.180121
+24 13312.0 411.181478 250.972500 288.346556
+25 13824.0 405.594132 257.091040 292.056329
+26 14336.0 400.074432 255.051144 287.678923
+27 14848.0 383.999990 255.816222 287.380642
+28 15360.0 373.495460 259.422943 286.656296
+29 15872.0 370.192407 262.347108 290.120338
</pre></div>
</div>
<div class="line-block">
@@ -477,7 +477,7 @@ to download the full example code</p>
bench_layer_norm.run(save_path='.', print_data=True)
</pre></div>
</div>
-<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 11.919 seconds)</p>
+<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> ( 2 minutes 10.468 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-05-layer-norm-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/935c0dd0fbeb4b2e69588471cbb2d4b2/05-layer-norm.py"><code class="xref download docutils literal notranslate"><span class="pre">Download</span> <span class="pre">Python</span> <span class="pre">source</span> <span class="pre">code:</span> <span class="pre">05-layer-norm.py</span></code></a></p>
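
The layer-norm-backward table is produced by a triton.testing.perf_report harness, driven by the bench_layer_norm.run(save_path='.', print_data=True) call shown above. A hedged sketch of what such a harness looks like (the shapes, the torch-only timing body, and the GB/s model are assumptions for illustration; do_bench's return signature has also varied across Triton versions):

import torch
import triton

@triton.testing.perf_report(
    triton.testing.Benchmark(
        x_names=['N'],                           # the column swept in the table
        x_vals=[512 * i for i in range(2, 32)],  # 1024..15872, matching the N column above
        line_arg='provider',
        line_vals=['triton', 'torch', 'apex'],
        line_names=['Triton', 'Torch', 'Apex'],
        ylabel='GB/s',
        plot_name='layer-norm-backward',
        args={'M': 4096, 'dtype': torch.float16},  # assumed row count and dtype
    ))
def bench_layer_norm(M, N, dtype, provider, eps=1e-5, device='cuda'):
    x = torch.randn(M, N, dtype=dtype, device=device, requires_grad=True)
    w = torch.randn(N, dtype=dtype, device=device, requires_grad=True)
    b = torch.randn(N, dtype=dtype, device=device, requires_grad=True)
    dy = torch.randn_like(x)
    # Sketch: only the native torch path is timed here; the real script
    # dispatches on `provider` to the Triton and Apex implementations too.
    y = torch.nn.functional.layer_norm(x, (N,), w, b, eps)
    ms = triton.testing.do_bench(lambda: y.backward(dy, retain_graph=True))
    return 2 * x.numel() * x.element_size() / ms * 1e-6  # GB/s, assumed traffic model

bench_layer_norm.run(save_path='.', print_data=True)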


@@ -174,7 +174,7 @@
<div class="section" id="computation-times">
<span id="sphx-glr-getting-started-tutorials-sg-execution-times"></span><h1>Computation times<a class="headerlink" href="#computation-times" title="Permalink to this headline"></a></h1>
-<p><strong>12:40.633</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
+<p><strong>13:17.697</strong> total execution time for <strong>getting-started_tutorials</strong> files:</p>
<table class="docutils align-default">
<colgroup>
<col style="width: 85%" />
@@ -183,23 +183,23 @@
</colgroup>
<tbody>
<tr class="row-odd"><td><p><a class="reference internal" href="03-matrix-multiplication.html#sphx-glr-getting-started-tutorials-03-matrix-multiplication-py"><span class="std std-ref">Matrix Multiplication</span></a> (<code class="docutils literal notranslate"><span class="pre">03-matrix-multiplication.py</span></code>)</p></td>
-<td><p>05:23.909</p></td>
+<td><p>05:59.817</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="02-fused-softmax.html#sphx-glr-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">Fused Softmax</span></a> (<code class="docutils literal notranslate"><span class="pre">02-fused-softmax.py</span></code>)</p></td>
-<td><p>03:23.052</p></td>
+<td><p>03:22.346</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="05-layer-norm.html#sphx-glr-getting-started-tutorials-05-layer-norm-py"><span class="std std-ref">Layer Normalization</span></a> (<code class="docutils literal notranslate"><span class="pre">05-layer-norm.py</span></code>)</p></td>
-<td><p>02:11.919</p></td>
+<td><p>02:10.468</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-even"><td><p><a class="reference internal" href="01-vector-add.html#sphx-glr-getting-started-tutorials-01-vector-add-py"><span class="std std-ref">Vector Addition</span></a> (<code class="docutils literal notranslate"><span class="pre">01-vector-add.py</span></code>)</p></td>
-<td><p>01:41.742</p></td>
+<td><p>01:44.952</p></td>
<td><p>0.0 MB</p></td>
</tr>
<tr class="row-odd"><td><p><a class="reference internal" href="04-low-memory-dropout.html#sphx-glr-getting-started-tutorials-04-low-memory-dropout-py"><span class="std std-ref">Low-Memory Dropout</span></a> (<code class="docutils literal notranslate"><span class="pre">04-low-memory-dropout.py</span></code>)</p></td>
-<td><p>00:00.011</p></td>
+<td><p>00:00.114</p></td>
<td><p>0.0 MB</p></td>
</tr>
</tbody>