.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "getting-started/tutorials/03-matrix-multiplication.py"
.. LINE NUMBERS ARE GIVEN BELOW.
.. only:: html
.. note::
:class: sphx-glr-download-link-note
Click :ref:`here <sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py>`
to download the full example code
.. rst-class:: sphx-glr-example-title
.. _sphx_glr_getting-started_tutorials_03-matrix-multiplication.py:
Matrix Multiplication
======================

In this tutorial, you will write a 25-line high-performance matrix multiplication kernel that achieves close to peak performance on modern GPUs.
You will specifically learn about:
- Block-level matrix multiplications
- Multi-dimensional pointer arithmetic
- Program re-ordering for improved L2 cache hit rate
- Automatic performance tuning
.. GENERATED FROM PYTHON SOURCE LINES 14-37
Motivations
-------------
Matrix multiplications are a key building block of most modern high-performance computing systems.
They are notoriously hard to optimize, hence their implementation is typically done by hardware vendors themselves as part of so-called "kernel libraries" (e.g., cuBLAS).

Unfortunately, these libraries are often proprietary and cannot be easily customized to accommodate the needs of modern deep learning workloads (e.g., mixture of experts, fused activation functions, etc.).
For this reason, this tutorial will show you how to implement efficient matrix multiplications yourself with Triton, in a way that is easy to customize and extend.
Roughly speaking, the kernel that we will write implements the following blocked algorithm:

.. code-block:: python

    # do in parallel
    for m in range(0, M, BLOCK_M):
        # do in parallel
        for n in range(0, N, BLOCK_N):
            acc = zeros((BLOCK_M, BLOCK_N), dtype=float32)
            for k in range(0, K, BLOCK_K):
                a = A[m : m+BLOCK_M, k : k+BLOCK_K]
                b = B[k : k+BLOCK_K, n : n+BLOCK_N]
                acc += dot(a, b)
            C[m : m+BLOCK_M, n : n+BLOCK_N] = acc
where each iteration of the doubly-nested for-loop corresponds to a Triton program instance.
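
To get a sense of the parallelism this exposes: there is one program instance per :code:`[BLOCK_M, BLOCK_N]` tile of :code:`C`, so the launch grid contains :code:`ceil(M / BLOCK_M) * ceil(N / BLOCK_N)` programs. Below is a minimal sketch of this computation, using assumed example sizes; the wrapper function later in this tutorial performs the same calculation when building its launch grid.

.. code-block:: python

    import math

    M, N, K = 1024, 1024, 1024               # assumed example problem size
    BLOCK_M, BLOCK_N, BLOCK_K = 128, 128, 32

    # one program instance per [BLOCK_M, BLOCK_N] tile of C
    num_programs = math.ceil(M / BLOCK_M) * math.ceil(N / BLOCK_N)
    print(num_programs)  # 8 * 8 = 64
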
.. GENERATED FROM PYTHON SOURCE LINES 39-110
Compute Kernel
----------------
The above algorithm is actually fairly straightforward to implement in Triton.
The main difficulty comes from the 2D pointer arithmetic that must be done to specify the memory locations for the blocks of :code:`A` and :code:`B` that we need to read in the inner loop.
Pointer Arithmetic
~~~~~~~~~~~~~~~~~~~
For a row-major 2D tensor :code:`X`, the memory location of :code:`X[i, j]` is given by :code:`&X[i, j] = X + i*stride_x_0 + j*stride_x_1`.
Therefore, blocks of pointers for :code:`A[m : m+BLOCK_M, k:k+BLOCK_K]` and :code:`B[k : k+BLOCK_K, n : n+BLOCK_N]` can be defined in pseudo-code as:

.. code-block:: python

    &A[m : m+BLOCK_M, k : k+BLOCK_K] = A + (m : m+BLOCK_M)[:, None]*A.stride(0) + (k : k+BLOCK_K)[None, :]*A.stride(1)
    &B[k : k+BLOCK_K, n : n+BLOCK_N] = B + (k : k+BLOCK_K)[:, None]*B.stride(0) + (n : n+BLOCK_N)[None, :]*B.stride(1)
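
As a quick sanity check (a PyTorch-side sketch, not part of the kernel itself), the strides used above can be read directly off a row-major tensor and plugged into the address formula:

.. code-block:: python

    import torch

    X = torch.arange(12, dtype=torch.float32).reshape(3, 4)  # row-major 2D tensor
    stride_x_0, stride_x_1 = X.stride()                      # (4, 1) for a 3x4 tensor
    i, j = 2, 3
    # X[i, j] lives at flat offset i*stride_x_0 + j*stride_x_1 from the start of X
    assert X.flatten()[i * stride_x_0 + j * stride_x_1] == X[i, j]
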
This means that, at initialization (i.e., when :code:`k = 0`), pointers for the blocks of :code:`A` and :code:`B` can be set up in Triton as:

.. code-block:: python

    pid_m = triton.program_id(0)
    pid_n = triton.program_id(1)
    rm = pid_m * BLOCK_M + triton.arange(0, BLOCK_M)
    rn = pid_n * BLOCK_N + triton.arange(0, BLOCK_N)
    rk = triton.arange(0, BLOCK_K)
    # pointers for the A operand
    pa = A + (rm[:, None] * stride_a_0 + rk[None, :] * stride_a_1)
    # pointers for the B operand
    pb = B + (rk[:, None] * stride_b_0 + rn[None, :] * stride_b_1)
These pointers can then be updated in the inner loop as:

.. code-block:: python

    pa += BLOCK_K * stride_a_1
    pb += BLOCK_K * stride_b_0
L2 Cache Optimizations
~~~~~~~~~~~~~~~~~~~~~~~~
As mentioned above, each program instance computes a :code:`[BLOCK_M, BLOCK_N]` block of :code:`C`.
However, the order in which these blocks are computed matters, since it affects the L2 cache hit rate of our program.
This means that a naive row-major ordering:

.. code-block:: python

    pid = triton.program_id(0)
    grid_m = (M + BLOCK_M - 1) // BLOCK_M
    grid_n = (N + BLOCK_N - 1) // BLOCK_N
    pid_m = pid // grid_n
    pid_n = pid % grid_n
is unlikely to result in optimal performance.
One possible solution is to launch blocks in an order that promotes data reuse.
This can be done by 'super-grouping' blocks in groups of :code:`GROUP_M` rows before switching to the next column:

.. code-block:: python

    pid = triton.program_id(0)
    width = GROUP_M * grid_n
    group_id = pid // width
    # we need to handle the case where M % (GROUP_M * BLOCK_M) != 0
    group_size = min(grid_m - group_id * GROUP_M, GROUP_M)
    # row index of the block of C computed by this program
    pid_m = group_id * GROUP_M + (pid % group_size)
    # column index of the block of C computed by this program
    pid_n = (pid % width) // group_size
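
To see what this re-ordering does, the mapping can be simulated on the CPU. The sketch below is plain Python (not Triton) and uses an assumed toy launch grid of 4x4 blocks with :code:`GROUP_M = 2`; it prints which :code:`(pid_m, pid_n)` block each program id is assigned under the naive and the grouped orderings:

.. code-block:: python

    grid_m, grid_n, GROUP_M = 4, 4, 2  # assumed toy launch grid

    def naive(pid):
        # row-major ordering from the previous snippet
        return pid // grid_n, pid % grid_n

    def grouped(pid):
        # 'super-grouped' ordering from the snippet above
        width = GROUP_M * grid_n
        group_id = pid // width
        group_size = min(grid_m - group_id * GROUP_M, GROUP_M)
        pid_m = group_id * GROUP_M + (pid % group_size)
        pid_n = (pid % width) // group_size
        return pid_m, pid_n

    print([naive(pid) for pid in range(grid_m * grid_n)])
    # [(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), ...]
    # a given block-column of B is only revisited grid_n programs later
    print([grouped(pid) for pid in range(grid_m * grid_n)])
    # [(0, 0), (1, 0), (0, 1), (1, 1), ...]
    # consecutive programs share the same block-column of B, improving L2 reuse
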
In practice, this can improve the performance of our matrix multiplication kernel by more than 10% on some hardware architectures (e.g., from 220 to 245 TFLOPS on A100).
.. GENERATED FROM PYTHON SOURCE LINES 112-115
Final Result
-------------
.. GENERATED FROM PYTHON SOURCE LINES 115-188

.. code-block:: default

    import torch
    import triton

    # :code:`triton.jit`'ed functions can be auto-tuned by using the `triton.autotune` decorator, which consumes:
    # - A list of :code:`triton.Config` objects that define different configurations of meta-parameters (e.g., BLOCK_M) and compilation options (e.g., num_warps) to try
    # - An autotuning *key* whose change in values will trigger evaluation of all the provided configs


    @triton.jit
    def sigmoid(x):
        # numerically stable sigmoid: exp() is only ever evaluated on non-positive values
        ret_true = 1 / (1 + triton.exp(-x))
        ret_false = triton.exp(x) / (1 + triton.exp(x))
        return triton.where(x >= 0, ret_true, ret_false)


    @triton.jit
    def swish(x):
        return x * sigmoid(x)


    @triton.autotune(
        configs=[
            triton.Config({'BLOCK_M': 128, 'BLOCK_N': 128, 'BLOCK_K': 32, 'GROUP_M': 8}, num_warps=4),
            triton.Config({'BLOCK_M': 64, 'BLOCK_N': 128, 'BLOCK_K': 32, 'GROUP_M': 8}, num_warps=4),
        ],
        key=['M', 'N', 'K'],
    )
    # We can now define our kernel as normal, using all the techniques presented above
    @triton.jit
    def _matmul(A, B, C, M, N, K, stride_am, stride_ak, stride_bk, stride_bn, stride_cm, stride_cn, **META):
        # extract meta-parameters
        BLOCK_M = META['BLOCK_M']
        BLOCK_N = META['BLOCK_N']
        BLOCK_K = META['BLOCK_K']
        GROUP_M = META['GROUP_M']
        # matrix multiplication
        pid = triton.program_id(0)
        grid_m = (M + BLOCK_M - 1) // BLOCK_M
        grid_n = (N + BLOCK_N - 1) // BLOCK_N
        # re-order program ID for better L2 performance
        width = GROUP_M * grid_n
        group_id = pid // width
        group_size = min(grid_m - group_id * GROUP_M, GROUP_M)
        pid_m = group_id * GROUP_M + (pid % group_size)
        pid_n = (pid % width) // group_size
        # do matrix multiplication
        rm = pid_m * BLOCK_M + triton.arange(0, BLOCK_M)
        rn = pid_n * BLOCK_N + triton.arange(0, BLOCK_N)
        rk = triton.arange(0, BLOCK_K)
        A = A + (rm[:, None] * stride_am + rk[None, :] * stride_ak)
        B = B + (rk[:, None] * stride_bk + rn[None, :] * stride_bn)
        acc = triton.zeros((BLOCK_M, BLOCK_N), dtype=triton.float32)
        for k in range(K, 0, -BLOCK_K):
            a = triton.load(A)
            b = triton.load(B)
            acc += triton.dot(a, b)
            A += BLOCK_K * stride_ak
            B += BLOCK_K * stride_bk
        # Triton can accept an arbitrary activation function via meta-parameters!
        if META['ACTIVATION']:
            acc = META['ACTIVATION'](acc)
        # rematerialize rm and rn to save registers
        rm = pid_m * BLOCK_M + triton.arange(0, BLOCK_M)
        rn = pid_n * BLOCK_N + triton.arange(0, BLOCK_N)
        C = C + (rm[:, None] * stride_cm + rn[None, :] * stride_cn)
        mask = (rm[:, None] < M) & (rn[None, :] < N)
        triton.store(C, acc, mask=mask)
.. GENERATED FROM PYTHON SOURCE LINES 189-191
We can also create a convenience wrapper function that only takes two input tensors,
and (1) checks any shape constraints; (2) allocates the output; (3) launches the kernel.
.. GENERATED FROM PYTHON SOURCE LINES 191-213

.. code-block:: default

    def matmul(a, b, activation=None):
        # checks constraints
        assert a.shape[1] == b.shape[0], "incompatible dimensions"
        assert a.is_contiguous(), "matrix A must be contiguous"
        assert b.is_contiguous(), "matrix B must be contiguous"
        M, K = a.shape
        _, N = b.shape
        # allocates output
        c = torch.empty((M, N), device=a.device, dtype=a.dtype)
        # launch kernel
        grid = lambda META: (triton.cdiv(M, META['BLOCK_M']) * triton.cdiv(N, META['BLOCK_N']), )
        _matmul[grid](
            a, b, c, M, N, K,
            a.stride(0), a.stride(1), b.stride(0), b.stride(1), c.stride(0), c.stride(1),
            ACTIVATION=activation
        )
        # return output
        return c
.. GENERATED FROM PYTHON SOURCE LINES 214-218
Unit Test
-----------
We can test our custom matrix multiplication operation against a native PyTorch reference (i.e., cuBLAS for the matrix multiplication, followed by torch's built-in SiLU, which computes the same swish activation).
.. GENERATED FROM PYTHON SOURCE LINES 218-228

.. code-block:: default

    # torch.manual_seed(0)
    a = torch.randn((512, 512), device='cuda', dtype=torch.float16)
    b = torch.randn((512, 512), device='cuda', dtype=torch.float16)
    c_0 = matmul(a, b, activation=swish)
    c_1 = torch.nn.SiLU()(torch.matmul(a, b))
    print(c_0)
    print(c_1)
    print(triton.testing.allclose(c_0, c_1))
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
tensor([[-5.9605e-08, 5.1094e+01, -1.8477e-05, ..., 2.6547e+01,
-7.2598e-05, -4.2510e-04],
[-2.7100e-01, -3.0220e-05, 5.9414e+00, ..., 2.8340e+00,
-1.8644e-04, 1.3094e+01],
[-1.5332e-01, 4.8125e+00, 8.4277e-01, ..., 3.6387e+00,
4.3375e+01, 1.6865e+00],
...,
[-0.0000e+00, 2.9453e+01, -4.7684e-07, ..., 6.2617e+00,
4.1133e+00, -0.0000e+00],
[ 1.6562e+01, -8.1539e-04, 1.3836e+01, ..., 1.9844e+00,
-1.1238e-02, 8.4375e+00],
[-1.0876e-01, -2.7295e-01, 3.2156e+01, ..., -1.6907e-02,
-0.0000e+00, -0.0000e+00]], device='cuda:0', dtype=torch.float16)
tensor([[-5.9605e-08, 5.1094e+01, -1.8537e-05, ..., 2.6547e+01,
-7.2658e-05, -4.2605e-04],
[-2.7100e-01, -3.0220e-05, 5.9414e+00, ..., 2.8340e+00,
-1.8632e-04, 1.3094e+01],
[-1.5332e-01, 4.8125e+00, 8.4277e-01, ..., 3.6387e+00,
4.3375e+01, 1.6875e+00],
...,
[-0.0000e+00, 2.9453e+01, -4.7684e-07, ..., 6.2617e+00,
4.1133e+00, -0.0000e+00],
[ 1.6562e+01, -8.1778e-04, 1.3836e+01, ..., 1.9844e+00,
-1.1238e-02, 8.4375e+00],
[-1.0876e-01, -2.7295e-01, 3.2156e+01, ..., -1.6891e-02,
-0.0000e+00, -0.0000e+00]], device='cuda:0', dtype=torch.float16)
tensor(True, device='cuda:0')
.. GENERATED FROM PYTHON SOURCE LINES 229-235
Benchmark
--------------
Square Matrix Performance
~~~~~~~~~~~~~~~~~~~~~~~~~~
We can now compare the performance of our kernel against cuBLAS. Here we focus on square matrices, but feel free to adapt the script to benchmark any other matrix shape.
.. GENERATED FROM PYTHON SOURCE LINES 235-261

.. code-block:: default

    @triton.testing.perf_report(
        triton.testing.Benchmark(
            x_names=['M', 'N', 'K'],  # argument names to use as an x-axis for the plot
            x_vals=[256 * i for i in range(2, 33)],  # different possible values for `x_names`
            y_name='provider',  # argument name whose value corresponds to a different line in the plot
            y_vals=['cublas', 'triton'],  # possible keys for `y_name`
            y_lines=["cuBLAS", "Triton"],  # label name for the lines
            ylabel="TFLOPS",  # label name for the y-axis
            plot_name="matmul-performance",  # name for the plot. Used also as a file name for saving the plot.
            args={}
        )
    )
    def benchmark(M, N, K, provider):
        a = torch.randn((M, K), device='cuda', dtype=torch.float16)
        b = torch.randn((K, N), device='cuda', dtype=torch.float16)
        if provider == 'cublas':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.matmul(a, b))
        if provider == 'triton':
            ms, min_ms, max_ms = triton.testing.do_bench(lambda: matmul(a, b))
        # a matmul performs 2*M*N*K FLOP; convert milliseconds to TFLOP/s
        perf = lambda ms: 2 * M * N * K * 1e-12 / (ms * 1e-3)
        return perf(ms), perf(max_ms), perf(min_ms)


    benchmark.run(print_data=True)
.. image:: /getting-started/tutorials/images/sphx_glr_03-matrix-multiplication_001.png
:alt: 03 matrix multiplication
:class: sphx-glr-single-img
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
M cuBLAS Triton
0 512.0 20.164923 15.420235
1 768.0 58.982401 40.215272
2 1024.0 91.180520 72.315584
3 1280.0 157.538463 117.028568
4 1536.0 153.867127 144.446699
5 1792.0 208.137481 190.498706
6 2048.0 199.728763 152.520144
7 2304.0 246.266731 178.267699
8 2560.0 235.741014 215.578957
9 2816.0 231.990461 198.246398
10 3072.0 236.916752 221.184001
11 3328.0 239.173747 210.500857
12 3584.0 248.385067 230.552287
13 3840.0 251.917998 222.519114
14 4096.0 263.172024 244.032234
15 4352.0 249.595626 232.307632
16 4608.0 276.560014 254.803966
17 4864.0 266.614125 245.366501
18 5120.0 257.003930 238.096276
19 5376.0 252.676487 236.527241
20 5632.0 270.057027 248.514009
21 5888.0 264.206935 242.511113
22 6144.0 259.441481 241.205983
23 6400.0 257.157204 235.078047
24 6656.0 254.161678 232.699140
25 6912.0 251.844029 233.178785
26 7168.0 253.282797 231.740709
27 7424.0 251.868505 230.377264
28 7680.0 250.988932 231.606284
29 7936.0 253.293068 229.692102
30 8192.0 253.002304 231.360005
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 0 minutes 32.933 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
.. only :: html
.. container:: sphx-glr-footer
:class: sphx-glr-footer-example
.. container:: sphx-glr-download sphx-glr-download-python
:download:`Download Python source code: 03-matrix-multiplication.py <03-matrix-multiplication.py>`
.. container:: sphx-glr-download sphx-glr-download-jupyter
:download:`Download Jupyter notebook: 03-matrix-multiplication.ipynb <03-matrix-multiplication.ipynb>`
.. only:: html
.. rst-class:: sphx-glr-signature
`Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_