Matrix Multiplication

In this tutorial, you will write a 25-line high-performance matrix multiplication kernel that achieves close to peak performance on modern GPUs. You will specifically learn about:

  • Block-level matrix multiplications

  • Multi-dimensional pointer arithmetic

  • Program re-ordering for improved L2 cache hit rate

  • Automatic performance tuning

Motivations

Matrix multiplications are a key building block of most modern high-performance computing systems. They are notoriously hard to optimize, hence their implementation is typically done by hardware vendors themselves as part of so-called “kernel libraries” (e.g., cuBLAS). Unfortunately, these libraries are often proprietary and cannot easily be customized to accommodate the needs of modern deep learning workloads (e.g., mixture of experts, fused activation functions, etc.). For this reason, this tutorial will show you how to implement efficient matrix multiplications yourself with Triton, in a way that is easy to customize and extend.

Roughly speaking, the kernel that we will write will implement the following blocked algorithm:

# do in parallel
for m in range(0, M, BLOCK_M):
  # do in parallel
  for n in range(0, N, BLOCK_N):
    acc = zeros((BLOCK_M, BLOCK_N), dtype=float32)
    for k in range(0, K, BLOCK_K):
      a = A[m : m+BLOCK_M, k : k+BLOCK_K]
      b = B[k : k+BLOCK_K, n : n+BLOCK_N]
      acc += dot(a, b)
    C[m : m+BLOCK_M, n : n+BLOCK_N] = acc

where each iteration of the doubly-nested for-loop corresponds to a Triton program instance.
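
To make the blocking structure concrete, here is a minimal NumPy sketch of the same algorithm (sequential rather than parallel, and assuming the matrix dimensions are multiples of the block sizes). It is only a reference for checking the indexing, not something you would run for performance:

import numpy as np

def blocked_matmul(A, B, BLOCK_M=16, BLOCK_N=16, BLOCK_K=16):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=np.float32)
    # each (m, n) iteration corresponds to one Triton program instance
    for m in range(0, M, BLOCK_M):
        for n in range(0, N, BLOCK_N):
            acc = np.zeros((BLOCK_M, BLOCK_N), dtype=np.float32)
            for k in range(0, K, BLOCK_K):
                acc += A[m:m+BLOCK_M, k:k+BLOCK_K] @ B[k:k+BLOCK_K, n:n+BLOCK_N]
            C[m:m+BLOCK_M, n:n+BLOCK_N] = acc
    return C

A = np.random.randn(64, 48).astype(np.float32)
B = np.random.randn(48, 32).astype(np.float32)
assert np.allclose(blocked_matmul(A, B), A @ B, atol=1e-4)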

Compute Kernel

The above algorithm is actually fairly straightforward to implement in Triton. The main difficulty comes from the 2D pointer arithmetic that must be done to specify the memory locations for the blocks of A and B that we need to read in the inner loop.

Pointer Arithmetic

For a row-major 2D tensor X, the memory location of X[i, j] is given by &X[i, j] = X + i*stride_x_0 + j*stride_x_1. Therefore, blocks of pointers for A[m : m+BLOCK_M, k:k+BLOCK_K] and B[k : k+BLOCK_K, n : n+BLOCK_N] can be defined in pseudo-code as:

&A[m : m+BLOCK_M, k : k+BLOCK_K] = A + (m : m+BLOCK_M)[:, None]*A.stride(0) + (k : k+BLOCK_K)[None, :]*A.stride(1)
&B[k : k+BLOCK_K, n : n+BLOCK_N] = B + (k : k+BLOCK_K)[:, None]*B.stride(0) + (n : n+BLOCK_N)[None, :]*B.stride(1)
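
The [:, None] / [None, :] broadcasting above is what turns two 1D index ranges into a 2D grid of offsets. A small NumPy sketch (with made-up strides and block sizes, purely for illustration) shows the shape of the result:

import numpy as np

BLOCK_M, BLOCK_K = 4, 3
stride_0, stride_1 = 16, 1   # hypothetical strides of a row-major tensor with 16 columns
m, k = 8, 0                  # top-left corner of the block

rm = m + np.arange(BLOCK_M)  # row indices covered by the block
rk = k + np.arange(BLOCK_K)  # column indices covered by the block
# broadcasting yields a (BLOCK_M, BLOCK_K) grid of element offsets
offsets = rm[:, None] * stride_0 + rk[None, :] * stride_1
print(offsets)
# [[128 129 130]
#  [144 145 146]
#  [160 161 162]
#  [176 177 178]]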

This means that, at initialization (i.e., k = 0), pointers for blocks of A and B can be initialized in Triton as:

pid_m = triton.program_id(0)
pid_n = triton.program_id(1)
rm = pid_m * BLOCK_M + triton.arange(0, BLOCK_M)
rn = pid_n * BLOCK_N + triton.arange(0, BLOCK_N)
rk = triton.arange(0, BLOCK_K)
# pointers for the A operand
pa = A + (rm[:, None] * stride_a_0 + rk[None, :] * stride_a_1)
# pointers for the B operand
pb = B + (rk[:, None] * stride_b_0 + rn[None, :] * stride_b_1)

These pointers can then be updated in the inner loop as:

pa += BLOCK_K * stride_a_1
pb += BLOCK_K * stride_b_0

L2 Cache Optimizations

As mentioned above, each program instance computes a [BLOCK_M, BLOCK_N] block of C. However, the order in which these blocks are computed matters, since it affects the L2 cache hit rate of our program. This means that a naive row-major ordering:

pid = triton.program_id(0)
grid_m = (M + BLOCK_M - 1) // BLOCK_M
grid_n = (N + BLOCK_N - 1) // BLOCK_N
pid_m = pid // grid_n
pid_n = pid % grid_n

is unlikely to result in optimal performance.

One possible solution is to launch blocks in an order that promotes data reuse. This can be done by ‘super-grouping’ blocks in groups of GROUP_M rows before switching to the next column:

pid = triton.program_id(0)
width = GROUP_M * grid_n
group_id = pid // width
# we need to handle the case where M % (GROUP_M * BLOCK_M) != 0
group_size = min(grid_m - group_id * GROUP_M, GROUP_M)
pid_m = group_id * GROUP_M + (pid % group_size)
pid_n = (pid % width) // group_size

In practice, this can improve the performance of our matrix multiplication kernel by more than 10% on some hardware architectures (e.g., from 220 to 245 TFLOPS on A100).
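
To see what this re-ordering does, here is a small standalone sketch (plain Python, with hypothetical grid sizes) that prints which (pid_m, pid_n) block each program ID is assigned under the grouped ordering:

# plain-Python illustration of the grouped launch order (hypothetical sizes)
GROUP_M = 2
grid_m, grid_n = 4, 4  # 16 blocks of C in total

for pid in range(grid_m * grid_n):
    width = GROUP_M * grid_n
    group_id = pid // width
    group_size = min(grid_m - group_id * GROUP_M, GROUP_M)
    pid_m = group_id * GROUP_M + (pid % group_size)
    pid_n = (pid % width) // group_size
    print(pid, (pid_m, pid_n))
# consecutive program IDs walk down a GROUP_M-tall column of blocks before
# moving on to the next column, so the corresponding blocks of A and B stay hot in L2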

Final Result

import torch
import triton

# `triton.jit`'ed functions can be auto-tuned by using the `triton.autotune` decorator, which consumes:
#   - a list of `triton.Config` objects that define different configurations of meta-parameters (e.g., BLOCK_M) and compilation options (e.g., num_warps) to try
#   - an auto-tuning *key* whose change in values will trigger evaluation of all the provided configs


@triton.jit
def sigmoid(x):
    # numerically stable sigmoid: use 1 / (1 + exp(-x)) when x >= 0 and
    # exp(x) / (1 + exp(x)) when x < 0, so that exp() never overflows
    ret_true = 1 / (1 + triton.exp(-x))
    ret_false = triton.exp(x) / (1 + triton.exp(x))
    return triton.where(x >= 0, ret_true, ret_false)


@triton.jit
def swish(x):
    return x * sigmoid(x)


@triton.autotune(
    configs=[
        triton.Config({'BLOCK_M': 128, 'BLOCK_N': 128, 'BLOCK_K': 32, 'GROUP_M': 8}, num_warps=4),
        triton.Config({'BLOCK_M': 64, 'BLOCK_N': 128, 'BLOCK_K': 32, 'GROUP_M': 8}, num_warps=4),
    ],
    key=['M', 'N', 'K'],
)
# We can now define our kernel as normal, using all the techniques presented above
@triton.jit
def _matmul(A, B, C, M, N, K, stride_am, stride_ak, stride_bk, stride_bn, stride_cm, stride_cn, **META):
    # extract meta-parameters
    BLOCK_M = META['BLOCK_M']
    BLOCK_N = META['BLOCK_N']
    BLOCK_K = META['BLOCK_K']
    GROUP_M = META['GROUP_M']
    # matrix multiplication
    pid = triton.program_id(0)
    grid_m = (M + BLOCK_M - 1) // BLOCK_M
    grid_n = (N + BLOCK_N - 1) // BLOCK_N
    # re-order program ID for better L2 performance
    width = GROUP_M * grid_n
    group_id = pid // width
    group_size = min(grid_m - group_id * GROUP_M, GROUP_M)
    pid_m = group_id * GROUP_M + (pid % group_size)
    pid_n = (pid % width) // (group_size)
    # do matrix multiplication
    rm = pid_m * BLOCK_M + triton.arange(0, BLOCK_M)
    rn = pid_n * BLOCK_N + triton.arange(0, BLOCK_N)
    rk = triton.arange(0, BLOCK_K)
    A = A + (rm[:, None] * stride_am + rk[None, :] * stride_ak)
    B = B + (rk[:, None] * stride_bk + rn[None, :] * stride_bn)
    acc = triton.zeros((BLOCK_M, BLOCK_N), dtype=triton.float32)
    for k in range(K, 0, -BLOCK_K):
        a = triton.load(A)
        b = triton.load(B)
        acc += triton.dot(a, b)
        A += BLOCK_K * stride_ak
        B += BLOCK_K * stride_bk
    # triton can accept arbitrary activation function
    # via metaparameters!
    if META['ACTIVATION']:
        acc = META['ACTIVATION'](acc)
    # rematerialize rm and rn to save registers
    rm = pid_m * BLOCK_M + triton.arange(0, BLOCK_M)
    rn = pid_n * BLOCK_N + triton.arange(0, BLOCK_N)
    C = C + (rm[:, None] * stride_cm + rn[None, :] * stride_cn)
    mask = (rm[:, None] < M) & (rn[None, :] < N)
    triton.store(C, acc, mask=mask)

We can also create a convenience wrapper function that only takes two input tensors and (1) checks any shape constraints; (2) allocates the output; (3) launches the kernel:

def matmul(a, b, activation=None):
    # checks constraints
    assert a.shape[1] == b.shape[0], "incompatible dimensions"
    assert a.is_contiguous(), "matrix A must be contiguous"
    assert b.is_contiguous(), "matrix B must be contiguous"
    M, K = a.shape
    _, N = b.shape
    # allocates output
    c = torch.empty((M, N), device=a.device, dtype=a.dtype)
    # launch kernel
    grid = lambda META: (triton.cdiv(M, META['BLOCK_M']) * triton.cdiv(N, META['BLOCK_N']), )
    _matmul[grid](
        a, b, c, M, N, K,
        a.stride(0), a.stride(1), b.stride(0), b.stride(1), c.stride(0), c.stride(1),
        ACTIVATION=activation,
    )
    # return output
    return c

Unit Test

We can test our custom matrix multiplication operation against a native torch implementation (i.e., cuBLAS + torch.nn.SiLU for the element-wise swish):

torch.manual_seed(0)
a = torch.randn((512, 512), device='cuda', dtype=torch.float16)
b = torch.randn((512, 512), device='cuda', dtype=torch.float16)
c_0 = matmul(a, b, activation=swish)
c_1 = torch.nn.SiLU()(torch.matmul(a, b))
print(c_0)
print(c_1)
print(triton.testing.allclose(c_0, c_1))

Out:

tensor([[-5.9605e-08,  5.1094e+01, -1.8477e-05,  ...,  2.6547e+01,
         -7.2598e-05, -4.2510e-04],
        [-2.7100e-01, -3.0220e-05,  5.9414e+00,  ...,  2.8340e+00,
         -1.8644e-04,  1.3094e+01],
        [-1.5332e-01,  4.8125e+00,  8.4277e-01,  ...,  3.6387e+00,
          4.3375e+01,  1.6865e+00],
        ...,
        [-0.0000e+00,  2.9453e+01, -4.7684e-07,  ...,  6.2617e+00,
          4.1133e+00, -0.0000e+00],
        [ 1.6562e+01, -8.1539e-04,  1.3836e+01,  ...,  1.9844e+00,
         -1.1238e-02,  8.4375e+00],
        [-1.0876e-01, -2.7295e-01,  3.2156e+01,  ..., -1.6907e-02,
         -0.0000e+00, -0.0000e+00]], device='cuda:0', dtype=torch.float16)
tensor([[-5.9605e-08,  5.1094e+01, -1.8537e-05,  ...,  2.6547e+01,
         -7.2658e-05, -4.2605e-04],
        [-2.7100e-01, -3.0220e-05,  5.9414e+00,  ...,  2.8340e+00,
         -1.8632e-04,  1.3094e+01],
        [-1.5332e-01,  4.8125e+00,  8.4277e-01,  ...,  3.6387e+00,
          4.3375e+01,  1.6875e+00],
        ...,
        [-0.0000e+00,  2.9453e+01, -4.7684e-07,  ...,  6.2617e+00,
          4.1133e+00, -0.0000e+00],
        [ 1.6562e+01, -8.1778e-04,  1.3836e+01,  ...,  1.9844e+00,
         -1.1238e-02,  8.4375e+00],
        [-1.0876e-01, -2.7295e-01,  3.2156e+01,  ..., -1.6891e-02,
         -0.0000e+00, -0.0000e+00]], device='cuda:0', dtype=torch.float16)
tensor(True, device='cuda:0')

Benchmark

Square Matrix Performance

We can now compare the performance of our kernel against cuBLAS. Here we focus on square matrices, but feel free to adapt the script to benchmark any other matrix shape.

@triton.testing.perf_report(
    triton.testing.Benchmark(
        x_names=['M', 'N', 'K'],  # argument names to use as an x-axis for the plot
        x_vals=[256 * i for i in range(2, 33)],  # different possible values for `x_names`
        y_name='provider',  # argument name whose value corresponds to a different line in the plot
        y_vals=['cublas', 'triton'],  # possible values for `y_name`
        y_lines=["cuBLAS", "Triton"],  # label names for the lines
        ylabel="TFLOPS",  # label name for the y-axis
        plot_name="matmul-performance",  # name for the plot. Used also as a file name for saving the plot.
        args={}
    )
)
def benchmark(M, N, K, provider):
    a = torch.randn((M, K), device='cuda', dtype=torch.float16)
    b = torch.randn((K, N), device='cuda', dtype=torch.float16)
    if provider == 'cublas':
        ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.matmul(a, b))
    if provider == 'triton':
        ms, min_ms, max_ms = triton.testing.do_bench(lambda: matmul(a, b))
    perf = lambda ms: 2 * M * N * K * 1e-12 / (ms * 1e-3)
    return perf(ms), perf(max_ms), perf(min_ms)


benchmark.run(print_data=True)
[matmul-performance plot: TFLOPS vs. square matrix size (M = N = K) for cuBLAS and Triton]

Out:

         M      cuBLAS      Triton
0    512.0   20.164923   15.420235
1    768.0   58.982401   40.215272
2   1024.0   91.180520   72.315584
3   1280.0  157.538463  117.028568
4   1536.0  153.867127  144.446699
5   1792.0  208.137481  190.498706
6   2048.0  199.728763  152.520144
7   2304.0  246.266731  178.267699
8   2560.0  235.741014  215.578957
9   2816.0  231.990461  198.246398
10  3072.0  236.916752  221.184001
11  3328.0  239.173747  210.500857
12  3584.0  248.385067  230.552287
13  3840.0  251.917998  222.519114
14  4096.0  263.172024  244.032234
15  4352.0  249.595626  232.307632
16  4608.0  276.560014  254.803966
17  4864.0  266.614125  245.366501
18  5120.0  257.003930  238.096276
19  5376.0  252.676487  236.527241
20  5632.0  270.057027  248.514009
21  5888.0  264.206935  242.511113
22  6144.0  259.441481  241.205983
23  6400.0  257.157204  235.078047
24  6656.0  254.161678  232.699140
25  6912.0  251.844029  233.178785
26  7168.0  253.282797  231.740709
27  7424.0  251.868505  230.377264
28  7680.0  250.988932  231.606284
29  7936.0  253.293068  229.692102
30  8192.0  253.002304  231.360005
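
As a quick sanity check on these numbers: the benchmark counts 2 * M * N * K floating-point operations per matmul, so at M = N = K = 4096 that is 2 * 4096^3 ≈ 137 GFLOP, and the ~244 TFLOPS reported for Triton therefore corresponds to a runtime of roughly 0.56 ms per multiplication.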

Total running time of the script: ( 0 minutes 32.933 seconds)
