.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "getting-started/tutorials/02-fused-softmax.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here <sphx_glr_download_getting-started_tutorials_02-fused-softmax.py>`
        to download the full example code
.. _sphx_glr_getting-started_tutorials_02-fused-softmax.py:

Fused Softmax
=============

In this tutorial, you will write a fused softmax operation that is significantly faster
than PyTorch's native op for a particular class of matrices: those whose rows can fit in
the GPU's SRAM.
You will learn about:

- The benefits of kernel fusion for bandwidth-bound operations.
- Reduction operators in Triton.

.. GENERATED FROM PYTHON SOURCE LINES 14-18
Motivations
-----------

Custom GPU kernels for elementwise additions are educationally valuable but won't get you very far in practice.
Let us consider instead the case of a simple (numerically stabilized) softmax operation:

.. GENERATED FROM PYTHON SOURCE LINES 18-43
.. code-block:: default

    import torch


    @torch.jit.script
    def naive_softmax(x):
        """Compute row-wise softmax of X using native PyTorch.

        We subtract the maximum element in order to avoid overflows. Softmax is invariant to
        this shift.
        """
        # read MN elements ; write M elements
        x_max = x.max(dim=1)[0]
        # read MN + M elements ; write MN elements
        z = x - x_max[:, None]
        # read MN elements ; write MN elements
        numerator = torch.exp(z)
        # read MN elements ; write M elements
        denominator = numerator.sum(dim=1)
        # read MN + M elements ; write MN elements
        ret = numerator / denominator[:, None]
        # in total: read 5MN + 2M elements ; wrote 3MN + 2M elements
        return ret

.. GENERATED FROM PYTHON SOURCE LINES 44-52
When implemented naively in PyTorch, computing :code:`y = naive_softmax(x)` for :math:`x \in R^{M \times N}`
requires reading :math:`5MN + 2M` elements from DRAM and writing back :math:`3MN + 2M` elements.
This is obviously wasteful; we'd prefer to have a custom "fused" kernel that only reads
X once and does all the necessary computations on-chip.
Doing so would require reading and writing back only :math:`MN` bytes, so we could
expect a theoretical speed-up of ~4x (i.e., :math:`(8MN + 4M) / 2MN`).
The `torch.jit.script` flag aims to perform this kind of "kernel fusion" automatically
but, as we will see later, it is still far from ideal.
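As a quick sanity check on that arithmetic, the minimal sketch below (the matrix shape is
arbitrary, chosen only for illustration) evaluates the element counts from the comments in
:code:`naive_softmax` against the single read and write of a fused kernel:

.. code-block:: default

    # Rough memory-traffic estimate, in number of elements, for naive vs. fused softmax.
    M, N = 4096, 1024  # arbitrary example shape

    naive_traffic = (5 * M * N + 2 * M) + (3 * M * N + 2 * M)  # reads + writes of naive_softmax
    fused_traffic = 2 * M * N                                  # read X once, write Y once

    print(naive_traffic / fused_traffic)  # ~4.0, matching (8MN + 4M) / 2MN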
.. GENERATED FROM PYTHON SOURCE LINES 54-61

Compute Kernel
--------------

Our softmax kernel works as follows: each program loads a row of the input matrix X,
normalizes it and writes back the result to the output Y.
Note that one important limitation of Triton is that each block must have a
power-of-two number of elements, so we need to internally "pad" each row and guard the
memory operations properly if we want to handle any possible input shapes:

.. GENERATED FROM PYTHON SOURCE LINES 61-93
.. code-block:: default

    import triton
    import triton.language as tl


    @triton.jit
    def softmax_kernel(
        output_ptr, input_ptr, input_row_stride, output_row_stride, n_cols, **meta
    ):
        # The rows of the softmax are independent, so we parallelize across those
        row_idx = tl.program_id(0)
        BLOCK_SIZE = meta['BLOCK_SIZE']
        # The stride represents how much we need to increase the pointer to advance 1 row
        row_start_ptr = input_ptr + row_idx * input_row_stride
        # The block size is the next power of two greater than n_cols, so we can fit each
        # row in a single block
        col_offsets = tl.arange(0, BLOCK_SIZE)
        input_ptrs = row_start_ptr + col_offsets
        # Load the row into SRAM, using a mask since BLOCK_SIZE may be > than n_cols
        row = tl.load(input_ptrs, mask=col_offsets < n_cols, other=-float('inf'))
        # Subtract maximum for numerical stability
        row_minus_max = row - tl.max(row, axis=0)
        # Note that exponentials in Triton are fast but approximate (i.e., think __expf in CUDA)
        numerator = tl.exp(row_minus_max)
        denominator = tl.sum(numerator, axis=0)
        softmax_output = numerator / denominator
        # Write back output to DRAM
        output_row_start_ptr = output_ptr + row_idx * output_row_stride
        output_ptrs = output_row_start_ptr + col_offsets
        tl.store(output_ptrs, softmax_output, mask=col_offsets < n_cols)

.. GENERATED FROM PYTHON SOURCE LINES 94-95
We can create a helper function that enqueues the kernel and its (meta-)arguments for any given input tensor.

.. GENERATED FROM PYTHON SOURCE LINES 95-125
.. code-block:: default

    def softmax(x):
        n_rows, n_cols = x.shape
        # The block size is the smallest power of two greater than the number of columns in `x`
        BLOCK_SIZE = triton.next_power_of_2(n_cols)
        # Another trick we can use is to ask the compiler to use more threads per row by
        # increasing the number of warps (`num_warps`) over which each row is distributed.
        # You will see in the next tutorial how to auto-tune this value in a more natural
        # way so you don't have to come up with manual heuristics yourself.
        num_warps = 4
        if BLOCK_SIZE >= 2048:
            num_warps = 8
        if BLOCK_SIZE >= 4096:
            num_warps = 16
        # Allocate output
        y = torch.empty_like(x)
        # Enqueue kernel. The 1D launch grid is simple: we have one kernel instance
        # per row of the input matrix
        softmax_kernel[(n_rows,)](
            y,
            x,
            x.stride(0),
            y.stride(0),
            n_cols,
            num_warps=num_warps,
            BLOCK_SIZE=BLOCK_SIZE,
        )
        return y

.. GENERATED FROM PYTHON SOURCE LINES 126-128
Unit Test
---------

.. GENERATED FROM PYTHON SOURCE LINES 130-132

We make sure that we test our kernel on a matrix with an irregular number of rows and columns.
This will allow us to verify that our padding mechanism works.

.. GENERATED FROM PYTHON SOURCE LINES 132-139

.. code-block:: default

    torch.manual_seed(0)
    x = torch.randn(1823, 781, device='cuda')
    y_triton = softmax(x)
    y_torch = torch.softmax(x, axis=1)
    print(torch.allclose(y_triton, y_torch))

Out:

.. code-block:: none

    True

.. GENERATED FROM PYTHON SOURCE LINES 140-141

As expected, the results are identical.
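If desired, the same :code:`torch.allclose` comparison can also be run against the
:code:`naive_softmax` reference from the Motivations section; as a rough optional check
(not shown in the generated output), it should likewise print ``True``:

.. code-block:: default

    # Optional extra check (sketch): the Triton kernel should also match the
    # naive PyTorch reference defined earlier.
    y_naive = naive_softmax(x)
    print(torch.allclose(y_triton, y_naive))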
.. GENERATED FROM PYTHON SOURCE LINES 143-147

Benchmark
---------

Here we will benchmark our operation as a function of the number of columns in the input matrix -- assuming 4096 rows.
We will then compare its performance against (1) :code:`torch.softmax` and (2) the :code:`naive_softmax` defined above.

.. GENERATED FROM PYTHON SOURCE LINES 147-186
.. code-block:: default

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['N'],  # argument names to use as an x-axis for the plot
            x_vals=[
                128 * i for i in range(2, 100)
            ],  # different possible values for `x_name`
            line_arg='provider',  # argument name whose value corresponds to a different line in the plot
            line_vals=[
                'triton',
                'torch-native',
                'torch-jit',
            ],  # possible values for `line_arg`
            line_names=[
                "Triton",
                "Torch (native)",
                "Torch (jit)",
            ],  # label name for the lines
            styles=[('blue', '-'), ('green', '-'), ('green', '--')],  # line styles
            ylabel="GB/s",  # label name for the y-axis
            plot_name="softmax-performance",  # name for the plot. Used also as a file name for saving the plot.
            args={'M': 4096},  # values for function arguments not in `x_names` and `y_name`
        )
    )
    def benchmark(M, N, provider):
        x = torch.randn(M, N, device='cuda', dtype=torch.float32)
        if provider == 'torch-native':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.softmax(x, axis=-1))
        if provider == 'triton':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: softmax(x))
        if provider == 'torch-jit':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: naive_softmax(x))
        gbps = lambda ms: 2 * x.nelement() * x.element_size() * 1e-9 / (ms * 1e-3)
        return gbps(ms), gbps(max_ms), gbps(min_ms)


    benchmark.run(show_plots=True, print_data=True)

.. image:: /getting-started/tutorials/images/sphx_glr_02-fused-softmax_001.png
    :alt: 02 fused softmax
    :class: sphx-glr-single-img
Out:

.. code-block:: none

    softmax-performance:
             N      Triton  Torch (native)  Torch (jit)
    0    256.0  512.000001      546.133347   188.321838
    1    384.0  585.142862      585.142862   153.600004
    2    512.0  655.360017      606.814814   154.566038
    3    640.0  682.666684      640.000002   160.000000
    4    768.0  722.823517      664.216187   162.754967
    ..     ...         ...             ...          ...
    93 12160.0  814.058574      406.179533   198.530610
    94 12288.0  814.111783      415.661740   198.794749
    95 12416.0  812.498981      412.149375   198.358474
    96 12544.0  812.566838      412.971190   198.569388
    97 12672.0  812.633240      412.097543   198.679085

    [98 rows x 4 columns]
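The GB/s column is produced by the :code:`gbps` lambda in the benchmark above: two passes
over the matrix (one read, one write) divided by the measured time. A minimal sketch of that
conversion, using an illustrative timing rather than a measured one:

.. code-block:: default

    # How a GB/s entry in the table is derived (sketch; the 0.495 ms timing is
    # illustrative, not a measurement).
    M, N = 4096, 12288
    bytes_moved = 2 * (M * N) * 4            # one read + one write of float32 elements
    ms = 0.495                               # hypothetical kernel time in milliseconds
    print(bytes_moved * 1e-9 / (ms * 1e-3))  # ~813 GB/s, the order of magnitude in the table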
.. GENERATED FROM PYTHON SOURCE LINES 187-192

In the above plot, we can see that:

- Triton is 4x faster than the Torch JIT. This confirms our suspicions that the Torch JIT does not do any fusion here.
- Triton is noticeably faster than :code:`torch.softmax` -- in addition to being **easier to read, understand and maintain**.
  Note however that the PyTorch `softmax` operation is more general and works on tensors of any shape.

**Total running time of the script:** ( 3 minutes 23.528 seconds)

.. _sphx_glr_download_getting-started_tutorials_02-fused-softmax.py:
.. only:: html

    .. container:: sphx-glr-footer class sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: 02-fused-softmax.py <02-fused-softmax.py>`

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: 02-fused-softmax.ipynb <02-fused-softmax.ipynb>`

.. only:: html

    .. rst-class:: sphx-glr-signature

        `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_