<!DOCTYPE html>
<html class="writer-html5" lang="en">
<head>
<meta charset="utf-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<title>Fused Softmax — Triton documentation</title>
<link rel="stylesheet" href="../../_static/css/theme.css" type="text/css"/>
<link rel="stylesheet" href="../../_static/pygments.css" type="text/css"/>
<link rel="stylesheet" href="../../_static/gallery.css" type="text/css"/>
<link rel="stylesheet" href="../../_static/gallery-binder.css" type="text/css"/>
<link rel="stylesheet" href="../../_static/gallery-dataframe.css" type="text/css"/>
<link rel="stylesheet" href="../../_static/gallery-rendered-html.css" type="text/css"/>
<link rel="stylesheet" href="../../_static/css/custom.css" type="text/css"/>
<!--[if lt IE 9]>
<script src="../../_static/js/html5shiv.min.js"></script>
<![endif]-->
<script type="text/javascript" id="documentation_options" data-url_root="../../" src="../../_static/documentation_options.js"></script>
<script src="../../_static/jquery.js"></script>
<script src="../../_static/underscore.js"></script>
<script src="../../_static/doctools.js"></script>
<script async="async" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/latest.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/javascript" src="../../_static/js/theme.js"></script>
<link rel="index" title="Index" href="../../genindex.html"/>
<link rel="search" title="Search" href="../../search.html"/>
<link rel="next" title="Matrix Multiplication" href="03-matrix-multiplication.html"/>
<link rel="prev" title="Vector Addition" href="01-vector-add.html"/>
</head>
<body class="wy-body-for-nav">
<div class="wy-grid-for-nav">
<nav data-toggle="wy-nav-shift" class="wy-nav-side">
<div class="wy-side-scroll">
<div class="wy-side-nav-search">
<a href="../../index.html" class="icon icon-home"> Triton
</a>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="../../search.html" method="get">
<input type="text" name="q" placeholder="Search docs"/>
<input type="hidden" name="check_keywords" value="yes"/>
<input type="hidden" name="area" value="default"/>
</form>
</div>
</div>
<div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
<p class="caption"><span class="caption-text">Getting Started</span></p>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../installation.html">Installation</a></li>
<li class="toctree-l1 current"><a class="reference internal" href="index.html">Tutorials</a><ul class="current">
<li class="toctree-l2"><a class="reference internal" href="01-vector-add.html">Vector Addition</a></li>
<li class="toctree-l2 current"><a class="current reference internal" href="#">Fused Softmax</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#motivations">Motivations</a></li>
<li class="toctree-l3"><a class="reference internal" href="#compute-kernel">Compute Kernel</a></li>
<li class="toctree-l3"><a class="reference internal" href="#unit-test">Unit Test</a></li>
<li class="toctree-l3"><a class="reference internal" href="#benchmark">Benchmark</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="03-matrix-multiplication.html">Matrix Multiplication</a></li>
</ul>
</li>
</ul>
<p class="caption"><span class="caption-text">Language Reference</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../language-reference/python-api/index.html">Python API</a></li>
</ul>
<p class="caption"><span class="caption-text">Programming Guide</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../programming-guide/chapter-1/introduction.html">Introduction</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../programming-guide/chapter-2/related-work.html">Related Work</a></li>
</ul>
</div>
</div>
</nav>
<section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
<nav class="wy-nav-top" aria-label="top navigation">
<i data-toggle="wy-nav-top" class="fa fa-bars"></i>
<a href="../../index.html">Triton</a>
</nav>
<div class="wy-nav-content">
<div class="rst-content">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="wy-breadcrumbs">
<li><a href="../../index.html" class="icon icon-home"></a> »</li>
<li><a href="index.html">Tutorials</a> »</li>
<li>Fused Softmax</li>
<li class="wy-breadcrumbs-aside">
<a href="../../_sources/getting-started/tutorials/02-fused-softmax.rst.txt" rel="nofollow">View page source</a>
</li>
</ul>
<hr/>
</div>
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
<div itemprop="articleBody">
<div class="sphx-glr-download-link-note admonition note">
<p class="admonition-title">Note</p>
<p>Click <a class="reference internal" href="#sphx-glr-download-getting-started-tutorials-02-fused-softmax-py"><span class="std std-ref">here</span></a>
to download the full example code</p>
</div>
<div class="sphx-glr-example-title section" id="fused-softmax">
<span id="sphx-glr-getting-started-tutorials-02-fused-softmax-py"></span><h1>Fused Softmax<a class="headerlink" href="#fused-softmax" title="Permalink to this headline">¶</a></h1>
<p>In this tutorial, you will write a fused softmax operation (that outperforms PyTorch) and learn about:</p>
<ul class="simple">
<li><p>The benefits of kernel fusion for bandwidth-bound operations.</p></li>
<li><p>The reduction operators in Triton.</p></li>
</ul>
<div class="section" id="motivations">
<h2>Motivations<a class="headerlink" href="#motivations" title="Permalink to this headline">¶</a></h2>
<p>Custom GPU kernels for elementwise additions are educationally valuable but won’t get you very far in practice.
Let us consider instead the case of a simple (numerically stabilized) softmax operation:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
import torch

# Compute the row-wise softmax of x
def naive_softmax(x):
    # read MN elements ; write M elements
    x_max = torch.max(x, axis=1)[0]
    # read 2MN elements ; write MN elements
    z = x - x_max[:, None]
    # read MN elements ; write MN elements
    numerator = torch.exp(z)
    # read MN elements ; write M elements
    denominator = torch.sum(numerator, axis=1)
    # read 2MN elements ; write MN elements
    ret = numerator / denominator[:, None]
    # in total: read 7MN elements ; write 3MN + 2M elements
    return ret
</pre></div>
</div>
<p>When implemented naively in PyTorch, computing <code class="code docutils literal notranslate">y = naive_softmax(x)</code> for <span class="math notranslate nohighlight">\(x \in R^{M \times N}\)</span> requires reading <span class="math notranslate nohighlight">\(7MN\)</span> elements from DRAM and writing back <span class="math notranslate nohighlight">\(3MN + 2M\)</span> elements.
This is obviously wasteful; we’d prefer a custom “fused” kernel that reads X only once and does all the necessary computations on-chip.
Such a kernel would read and write back only <span class="math notranslate nohighlight">\(2MN\)</span> elements in total, so we could expect a theoretical speed-up of ~5x (i.e., <span class="math notranslate nohighlight">\((10MN + 2M) / 2MN\)</span>).
In practice, though, we should expect a bit less, as our kernel also computes exponentials and internally moves data around in shared memory.</p>
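<p>As a quick sanity check on this arithmetic, we can plug concrete sizes into the element counts above (a hypothetical back-of-the-envelope sketch, not part of the tutorial code):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
# Hypothetical sketch: expected speed-up implied by the accounting above
M, N = 4096, 4096
naive_traffic = 7 * M * N + (3 * M * N + 2 * M)  # elements read + written by naive_softmax
fused_traffic = 2 * M * N                        # fused kernel: read MN, write MN
print(naive_traffic / fused_traffic)             # ~5.0002, i.e. the ~5x upper bound
</pre></div>
</div>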
</div>
<div class="section" id="compute-kernel">
<h2>Compute Kernel<a class="headerlink" href="#compute-kernel" title="Permalink to this headline">¶</a></h2>
<p>Our softmax kernel works as follows: each program loads a row of the input matrix X, normalizes it, and writes the result back to the output Y.
Note that one important limitation of Triton is that each block must have a power-of-two number of elements,
so we need to internally “pad” tiles and guard the memory operations properly if we want to handle any possible input shape:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
import triton

@triton.jit
def _softmax(Y, X, stride_xm, stride_ym, M, N, **meta):
    # row index
    m = triton.program_id(0)
    # col indices
    n = triton.arange(0, meta['BLOCK'])
    # the memory address of all the elements
    # that we want to load can be computed as follows
    X = X + m * stride_xm + n
    x = triton.load(X, mask=n &lt; N, other=-float('inf'))
    # Subtract maximum for numerical stability
    z = x - triton.max(x, axis=0)
    # Note that exponentials in Triton are fast
    # but approximate (i.e., think __expf in CUDA)
    num = triton.exp(z)
    denom = triton.sum(num, axis=0)
    y = num / denom
    # Write back to Y
    Y = Y + m * stride_ym + n
    triton.store(Y, y, mask=n &lt; N)
</pre></div>
</div>
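<p>To see why padding with <code class="code docutils literal notranslate">-float('inf')</code> is safe here, we can emulate what happens to a single padded row in plain PyTorch (a hypothetical sketch for illustration only, not part of the kernel):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
# Hypothetical sketch: emulate the kernel's padding for one row, N=3, BLOCK=4
import torch
row = torch.tensor([1., 2., 3.])      # actual row (N = 3)
x = torch.full((4,), -float('inf'))   # "padded" tile (BLOCK = 4)
x[:3] = row
z = x - x.max()                       # padded lanes stay -inf
num = torch.exp(z)                    # exp(-inf) = 0
y = num / num.sum()                   # padding contributes nothing to the sum
print(torch.allclose(y[:3], torch.softmax(row, 0)))  # True
</pre></div>
</div>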
<p>We can create a helper function that enqueues the kernel and its (meta-)arguments for any given input tensor.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
def next_power_of_2(n):
    # Round n up to the next power of two by smearing the highest
    # set bit into every lower bit, then adding one
    n -= 1
    n |= n >> 1
    n |= n >> 2
    n |= n >> 4
    n |= n >> 8
    n |= n >> 16
    n += 1
    return n

def softmax(x):
    M, N = x.shape
    # The block size is the smallest power of two greater than or equal to
    # the number of columns in `x`
    BLOCK = next_power_of_2(N)
    # Another trick we can use is to ask the compiler to parallelize each
    # row-normalization more aggressively for longer vectors -- i.e., to
    # use more warps
    # You will see in the next tutorial how to auto-tune this value in a
    # more natural way so you don't have to come up with manual heuristics yourself
    num_warps = 4
    if BLOCK >= 2048: num_warps = 8
    if BLOCK >= 4096: num_warps = 16
    # Allocate output
    y = torch.empty_like(x)
    # Enqueue kernel. The launch grid is simple: we have one kernel instance per row of the input matrix
    _softmax[(M, )](y, x, x.stride(0), y.stride(0), M, N, num_warps=num_warps, BLOCK=BLOCK)
    return y
</pre></div>
</div>
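<p>For instance, the heuristics above behave as follows (hypothetical sample values of N, shown only to illustrate the padding and warp selection; this snippet reuses the helpers defined above):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
# Hypothetical illustration of the block/warp heuristics in `softmax`
for N in [781, 1024, 3000, 5000]:
    BLOCK = next_power_of_2(N)
    num_warps = 4
    if BLOCK >= 2048: num_warps = 8
    if BLOCK >= 4096: num_warps = 16
    print(N, BLOCK, num_warps)
# 781 -> 1024, 4 warps | 1024 -> 1024, 4 | 3000 -> 4096, 16 | 5000 -> 8192, 16
</pre></div>
</div>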
</div>
<div class="section" id="unit-test">
<h2>Unit Test<a class="headerlink" href="#unit-test" title="Permalink to this headline">¶</a></h2>
<p>We make sure that we test our kernel on a matrix with an irregular number of rows and columns.
This will allow us to verify that our padding mechanism works.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
torch.manual_seed(0)
x = torch.randn(1823, 781, device='cuda')
y_tri = softmax(x)
y_ref = torch.softmax(x, axis=1)
print(torch.allclose(y_tri, y_ref))
</pre></div>
</div>
<p class="sphx-glr-script-out">Out:</p>
<div class="sphx-glr-script-out highlight-none notranslate"><div class="highlight"><pre>True
</pre></div>
</div>
<p>As expected, the results are identical.</p>
</div>
<div class="section" id="benchmark">
<h2>Benchmark<a class="headerlink" href="#benchmark" title="Permalink to this headline">¶</a></h2>
<p>Here we will benchmark our operation as a function of the number of columns in the input matrix, assuming 4096 rows.
We will then compare its performance against (1) <code class="code docutils literal notranslate">torch.softmax</code> and (2) the <code class="code docutils literal notranslate">naive_softmax</code> defined above.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
@triton.testing.perf_report(
    triton.testing.Benchmark(
        x_names=['N'],  # argument names to use as an x-axis for the plot
        x_vals=[256 * i for i in range(2, 50)],  # different possible values for `x_name`
        y_name='provider',  # argument name whose value corresponds to a different line in the plot
        y_vals=['torch', 'triton', 'naive'],  # possible keys for `y_name`
        y_lines=['Torch', 'Triton', 'Naive'],  # label name for the lines
        ylabel='GB/s',  # label name for the y-axis
        plot_name='softmax-performance',  # name for the plot. Used also as a file name for saving the plot.
        args={'M': 4096}  # values for function arguments not in `x_names` and `y_name`
    )
)
def benchmark(M, N, provider):
    x = torch.randn(M, N, device='cuda', dtype=torch.float32)
    if provider == 'torch':
        ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.softmax(x, axis=-1))
    if provider == 'triton':
        ms, min_ms, max_ms = triton.testing.do_bench(lambda: softmax(x))
    if provider == 'naive':
        ms, min_ms, max_ms = triton.testing.do_bench(lambda: naive_softmax(x))
    gbps = lambda ms: 2 * x.nelement() * x.element_size() * 1e-9 / (ms * 1e-3)
    return gbps(ms), gbps(max_ms), gbps(min_ms)

benchmark.run(show_plots=True)
</pre></div>
</div>
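<p>Note that the <code class="code docutils literal notranslate">gbps</code> lambda above counts each element as read once and written once, hence the factor of 2. As a worked example (hypothetical numbers, for illustration only):</p>
<div class="highlight-default notranslate"><div class="highlight"><pre>
# Hypothetical example of the GB/s conversion used by `gbps`:
# M = 4096, N = 2048, float32 (4 bytes/element), measured time 0.1 ms
M, N, bytes_per_elem, ms = 4096, 2048, 4, 0.1
gb_moved = 2 * M * N * bytes_per_elem * 1e-9  # read once + write once, in GB
print(gb_moved / (ms * 1e-3))                 # ~671 GB/s
</pre></div>
</div>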
<img alt="02 fused softmax" class="sphx-glr-single-img" src="../../_images/sphx_glr_02-fused-softmax_001.png"/>
<p>In the above plot, we can see that:</p>
<blockquote>
<div><ul class="simple">
<li><p>Triton is 4-5x faster than the naive implementation, which is consistent with our theoretical predictions.</p></li>
<li><p>Triton is significantly faster than <code class="code docutils literal notranslate">torch.softmax</code> for very large input matrices. My guess from looking at the source code of the <a class="reference external" href="https://github.com/pytorch/pytorch/blob/9409a3a39b7149bb2d833a89e0c944109bef7c27/caffe2/operators/softmax_ops.cu#L240">PyTorch kernel</a> is that PyTorch only partially fuses the computation of the softmax.
This means that, when temporary data is too large to fit entirely in the GPU’s cache, it transfers almost twice the amount of data necessary.
Note that our Triton kernel is not only faster than PyTorch’s CUDA kernel, it is also <strong>easier to read, understand and maintain</strong>.</p></li>
</ul>
</div></blockquote>
<p class="sphx-glr-timing"><strong>Total running time of the script:</strong> (0 minutes 20.767 seconds)</p>
<div class="sphx-glr-footer class sphx-glr-footer-example docutils container" id="sphx-glr-download-getting-started-tutorials-02-fused-softmax-py">
<div class="sphx-glr-download sphx-glr-download-python docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/d91442ac2982c4e0cc3ab0f43534afbc/02-fused-softmax.py"><code class="xref download docutils literal notranslate">Download Python source code: 02-fused-softmax.py</code></a></p>
</div>
<div class="sphx-glr-download sphx-glr-download-jupyter docutils container">
<p><a class="reference download internal" download="" href="../../_downloads/034d953b6214fedce6ea03803c712b89/02-fused-softmax.ipynb"><code class="xref download docutils literal notranslate">Download Jupyter notebook: 02-fused-softmax.ipynb</code></a></p>
</div>
</div>
<p class="sphx-glr-signature"><a class="reference external" href="https://sphinx-gallery.github.io">Gallery generated by Sphinx-Gallery</a></p>
</div>
</div>
</div>
</div>
<footer>
<div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
<a href="03-matrix-multiplication.html" class="btn btn-neutral float-right" title="Matrix Multiplication" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right" aria-hidden="true"></span></a>
<a href="01-vector-add.html" class="btn btn-neutral float-left" title="Vector Addition" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left" aria-hidden="true"></span> Previous</a>
</div>
<hr/>
<div role="contentinfo">
<p>© Copyright 2020, Philippe Tillet.</p>
</div>
Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
<a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
provided by <a href="https://readthedocs.org">Read the Docs</a>.
</footer>
</div>
</div>
</section>
</div>
<script type="text/javascript">
jQuery(function () {
    SphinxRtdTheme.Navigation.enable(true);
});
</script>
</body>
</html>