.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "getting-started/tutorials/03-matrix-multiplication.py"
.. LINE NUMBERS ARE GIVEN BELOW.
.. only:: html
.. note::
:class: sphx-glr-download-link-note
Click :ref:`here <sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py>`
to download the full example code
.. rst-class:: sphx-glr-example-title
.. _sphx_glr_getting-started_tutorials_03-matrix-multiplication.py:
Matrix Multiplication
======================
In this tutorial, you will write a 25-line high-performance FP16 matrix multiplication
kernel that achieves performance on par with cuBLAS.

You will specifically learn about:

- Block-level matrix multiplications
- Multi-dimensional pointer arithmetic
- Program re-ordering for improved L2 cache hit rate
- Automatic performance tuning
.. GENERATED FROM PYTHON SOURCE LINES 15-42
Motivations
-------------

Matrix multiplications are a key building block of most modern high-performance computing systems.
They are notoriously hard to optimize, hence their implementation is generally done by
hardware vendors themselves as part of so-called "kernel libraries" (e.g., cuBLAS).
Unfortunately, these libraries are often proprietary and cannot be easily customized
to accommodate the needs of modern deep learning workloads (e.g., fused activation functions).
In this tutorial, you will learn how to implement efficient matrix multiplications by
yourself with Triton, in a way that is easy to customize and extend.

Roughly speaking, the kernel that we will write will implement the following blocked
algorithm to multiply an (M, K) by a (K, N) matrix:

.. code-block:: python

    # do in parallel
    for m in range(0, M, BLOCK_SIZE_M):
        # do in parallel
        for n in range(0, N, BLOCK_SIZE_N):
            acc = zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=float32)
            for k in range(0, K, BLOCK_SIZE_K):
                a = A[m : m+BLOCK_SIZE_M, k : k+BLOCK_SIZE_K]
                b = B[k : k+BLOCK_SIZE_K, n : n+BLOCK_SIZE_N]
                acc += dot(a, b)
            C[m : m+BLOCK_SIZE_M, n : n+BLOCK_SIZE_N] = acc

where each iteration of the doubly-nested for-loop is performed by a dedicated Triton program instance.
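As a quick sanity check, this blocked algorithm can also be emulated on the CPU with NumPy.
The sketch below is purely illustrative (the shapes and block sizes are arbitrary choices,
and NumPy is not used anywhere else in this tutorial):

.. code-block:: python

    import numpy as np

    # Arbitrary illustration sizes; here the shapes are divisible by the block sizes
    M, N, K = 128, 128, 128
    BLOCK_SIZE_M, BLOCK_SIZE_N, BLOCK_SIZE_K = 32, 32, 32

    A = np.random.randn(M, K).astype(np.float32)
    B = np.random.randn(K, N).astype(np.float32)
    C = np.empty((M, N), dtype=np.float32)

    # Sequential emulation of the blocked algorithm above
    for m in range(0, M, BLOCK_SIZE_M):
        for n in range(0, N, BLOCK_SIZE_N):
            acc = np.zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=np.float32)
            for k in range(0, K, BLOCK_SIZE_K):
                a = A[m : m + BLOCK_SIZE_M, k : k + BLOCK_SIZE_K]
                b = B[k : k + BLOCK_SIZE_K, n : n + BLOCK_SIZE_N]
                acc += a @ b
            C[m : m + BLOCK_SIZE_M, n : n + BLOCK_SIZE_N] = acc

    assert np.allclose(C, A @ B, atol=1e-4)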
.. GENERATED FROM PYTHON SOURCE LINES 44-137
Compute Kernel
----------------

The above algorithm is actually fairly straightforward to implement in Triton.
The main difficulty comes from the computation of the memory locations at which blocks
of :code:`A` and :code:`B` must be read in the inner loop. For that, we need
multi-dimensional pointer arithmetic.
Pointer Arithmetics
~~~~~~~~~~~~~~~~~~~~
For a row-major 2D tensor :code:`X`, the memory location of :code:`X[i, j]` is given by
:code:`&X[i, j] = X + i*stride_xi + j*stride_xj`.
Therefore, blocks of pointers for :code:`A[m : m+BLOCK_SIZE_M, k:k+BLOCK_SIZE_K]` and
:code:`B[k : k+BLOCK_SIZE_K, n : n+BLOCK_SIZE_N]` can be defined in pseudo-code as:

.. code-block:: python

    &A[m : m+BLOCK_SIZE_M, k : k+BLOCK_SIZE_K] = a_ptr + (m : m+BLOCK_SIZE_M)[:, None]*A.stride(0) + (k : k+BLOCK_SIZE_K)[None, :]*A.stride(1)
    &B[k : k+BLOCK_SIZE_K, n : n+BLOCK_SIZE_N] = b_ptr + (k : k+BLOCK_SIZE_K)[:, None]*B.stride(0) + (n : n+BLOCK_SIZE_N)[None, :]*B.stride(1)
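These strides are exactly what :code:`Tensor.stride()` reports for a contiguous PyTorch tensor.
As a small illustrative sketch (not part of the tutorial code), the address formula can be
checked on the host with :code:`data_ptr()` and :code:`element_size()`:

.. code-block:: python

    import torch

    X = torch.randn(64, 128)              # row-major (contiguous) 2D tensor
    stride_xi, stride_xj = X.stride()     # (128, 1), in units of elements
    i, j = 3, 5
    # &X[i, j] = X + i*stride_xi + j*stride_xj
    addr = X.data_ptr() + (i * stride_xi + j * stride_xj) * X.element_size()
    assert addr == X[i, j].data_ptr()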
This means that pointers for blocks of A and B can be initialized (i.e., at :code:`k=0`) in Triton as:

.. code-block:: python

    offs_am = pid_m * BLOCK_SIZE_M + tl.arange(0, BLOCK_SIZE_M)
    offs_bn = pid_n * BLOCK_SIZE_N + tl.arange(0, BLOCK_SIZE_N)
    offs_k = tl.arange(0, BLOCK_SIZE_K)
    a_ptrs = a_ptr + (offs_am[:, None]*stride_am + offs_k [None, :]*stride_ak)
    b_ptrs = b_ptr + (offs_k [:, None]*stride_bk + offs_bn[None, :]*stride_bn)

And then advanced in the inner loop as follows:

.. code-block:: python

    a_ptrs += BLOCK_SIZE_K * stride_ak
    b_ptrs += BLOCK_SIZE_K * stride_bk
L2 Cache Optimizations
~~~~~~~~~~~~~~~~~~~~~~~~
As mentioned above, each program instance computes a :code:`[BLOCK_SIZE_M, BLOCK_SIZE_N]`
block of :code:`C`.
It is important to remember that the order in which these blocks are computed does
matter, since it affects the L2 cache hit rate of our program, and, unfortunately, a
simple row-major ordering

.. code-block:: python

    pid = tl.program_id(axis=0)
    grid_m = (M + BLOCK_SIZE_M - 1) // BLOCK_SIZE_M
    grid_n = (N + BLOCK_SIZE_N - 1) // BLOCK_SIZE_N
    pid_m = pid // grid_n
    pid_n = pid % grid_n

is just not going to cut it.
One possible solution is to launch blocks in an order that promotes data reuse.
This can be done by 'super-grouping' blocks in groups of :code:`GROUP_SIZE_M` rows before
switching to the next column:

.. code-block:: python

    # program ID
    pid = tl.program_id(axis=0)
    # number of program ids along the M axis
    num_pid_m = tl.cdiv(M, BLOCK_SIZE_M)
    # number of program ids along the N axis
    num_pid_n = tl.cdiv(N, BLOCK_SIZE_N)
    # number of programs in group
    num_pid_in_group = GROUP_SIZE_M * num_pid_n
    # id of the group this program is in
    group_id = pid // num_pid_in_group
    # row-id of the first program in the group
    first_pid_m = group_id * GROUP_SIZE_M
    # if `num_pid_m` isn't divisible by `GROUP_SIZE_M`, the last group is smaller
    group_size_m = min(num_pid_m - first_pid_m, GROUP_SIZE_M)
    # *within groups*, programs are ordered in a column-major order
    # row-id of the program in the *launch grid*
    pid_m = first_pid_m + (pid % group_size_m)
    # col-id of the program in the *launch grid*
    pid_n = (pid % num_pid_in_group) // group_size_m
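To build intuition for this re-ordering, the index arithmetic above can be simulated in plain
Python on the host. The following sketch is purely illustrative (a hypothetical 4x4 grid of
blocks with :code:`GROUP_SIZE_M = 2`) and prints the :code:`(pid_m, pid_n)` pairs in launch order:

.. code-block:: python

    # Host-side simulation of the grouped ordering above (no Triton involved)
    num_pid_m, num_pid_n, GROUP_SIZE_M = 4, 4, 2

    def grouped_order(pid):
        num_pid_in_group = GROUP_SIZE_M * num_pid_n
        group_id = pid // num_pid_in_group
        first_pid_m = group_id * GROUP_SIZE_M
        group_size_m = min(num_pid_m - first_pid_m, GROUP_SIZE_M)
        pid_m = first_pid_m + (pid % group_size_m)
        pid_n = (pid % num_pid_in_group) // group_size_m
        return pid_m, pid_n

    print([grouped_order(pid) for pid in range(num_pid_m * num_pid_n)])
    # [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 0), ...]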
For example, in the following matmul where each matrix is 9 blocks by 9 blocks,
we can see that if we compute the output in row-major ordering, we need to load 90
blocks into SRAM to compute the first 9 output blocks (one block-row of A plus all 81
blocks of B), but if we do it in grouped ordering, we only need to load 54 blocks
(3 block-rows of A plus 3 block-columns of B).

.. image:: grouped_vs_row_major_ordering.png

In practice, this can improve the performance of our matrix multiplication kernel by
more than 10\% on some hardware architectures (e.g., 220 to 245 TFLOPS on A100).
.. GENERATED FROM PYTHON SOURCE LINES 139-142
Final Result
-------------

.. GENERATED FROM PYTHON SOURCE LINES 142-262
.. code-block:: default
import torch
import triton
import triton.language as tl
# %
# :code:`triton.jit`'ed functions can be auto-tuned by using the `triton.autotune`
# decorator, which consumes:
# - A list of :code:`triton.Config` objects that define different configurations of
# meta-parameters (e.g., BLOCK_SIZE_M) and compilation options (e.g., num_warps) to try
# - An autotuning *key* whose change in values will trigger evaluation of all the
# provided configs
@triton.autotune(
configs=[
triton.Config({'BLOCK_SIZE_M': 128, 'BLOCK_SIZE_N': 256, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}, num_stages=3, num_warps=8),
triton.Config({'BLOCK_SIZE_M': 256, 'BLOCK_SIZE_N': 128, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}, num_stages=3, num_warps=8),
triton.Config({'BLOCK_SIZE_M': 256, 'BLOCK_SIZE_N': 64, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}, num_stages=4, num_warps=4),
triton.Config({'BLOCK_SIZE_M': 64 , 'BLOCK_SIZE_N': 256, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}, num_stages=4, num_warps=4),
triton.Config({'BLOCK_SIZE_M': 128, 'BLOCK_SIZE_N': 128, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}, num_stages=4, num_warps=4),
triton.Config({'BLOCK_SIZE_M': 128, 'BLOCK_SIZE_N': 64 , 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}, num_stages=4, num_warps=4),
triton.Config({'BLOCK_SIZE_M': 64 , 'BLOCK_SIZE_N': 128, 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}, num_stages=4, num_warps=4),
triton.Config({'BLOCK_SIZE_M': 128, 'BLOCK_SIZE_N': 32 , 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}, num_stages=4, num_warps=4),
triton.Config({'BLOCK_SIZE_M': 64 , 'BLOCK_SIZE_N': 32 , 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}, num_stages=5, num_warps=2),
triton.Config({'BLOCK_SIZE_M': 32 , 'BLOCK_SIZE_N': 64 , 'BLOCK_SIZE_K': 32, 'GROUP_SIZE_M': 8}, num_stages=5, num_warps=2),
],
key=['M', 'N', 'K'],
)
# %
# We can now define our kernel as normal, using all the techniques presented above
@triton.jit
def matmul_kernel(
# Pointers to matrices
a_ptr, b_ptr, c_ptr,
# Matrix dimensions
M, N, K,
# The stride variables represent how much to increase the ptr by when moving by 1
# element in a particular dimension. E.g. stride_am is how much to increase a_ptr
# by to get the element one row down (A has M rows)
stride_am, stride_ak,
stride_bk, stride_bn,
stride_cm, stride_cn,
# Meta-parameters
**meta,
):
"""Kernel for computing the matmul C = A x B.
A has shape (M, K), B has shape (K, N) and C has shape (M, N)
"""
    # extract meta-parameters
    BLOCK_SIZE_M = meta['BLOCK_SIZE_M']
    BLOCK_SIZE_N = meta['BLOCK_SIZE_N']
    BLOCK_SIZE_K = meta['BLOCK_SIZE_K']
    GROUP_SIZE_M = meta['GROUP_SIZE_M']
# -----------------------------------------------------------
# Map program ids `pid` to the block of C it should compute.
# This is done in a grouped ordering to promote L2 data reuse
# See above `L2 Cache Optimizations` section for details
pid = tl.program_id(axis=0)
num_pid_m = tl.cdiv(M, BLOCK_SIZE_M)
num_pid_n = tl.cdiv(N, BLOCK_SIZE_N)
num_pid_in_group = GROUP_SIZE_M * num_pid_n
group_id = pid // num_pid_in_group
first_pid_m = group_id * GROUP_SIZE_M
group_size_m = min(num_pid_m - first_pid_m, GROUP_SIZE_M)
pid_m = first_pid_m + (pid % group_size_m)
pid_n = (pid % num_pid_in_group) // group_size_m
# ----------------------------------------------------------
# Create pointers for the first blocks of A and B.
# We will advance this pointer as we move in the K direction
# and accumulate
# a_ptrs is a block of [BLOCK_SIZE_M, BLOCK_SIZE_K] pointers
    # b_ptrs is a block of [BLOCK_SIZE_K, BLOCK_SIZE_N] pointers
# see above `Pointer Arithmetics` section for details
offs_am = pid_m * BLOCK_SIZE_M + tl.arange(0, BLOCK_SIZE_M)
offs_bn = pid_n * BLOCK_SIZE_N + tl.arange(0, BLOCK_SIZE_N)
offs_k = tl.arange(0, BLOCK_SIZE_K)
a_ptrs = a_ptr + (offs_am[:, None]*stride_am + offs_k [None, :]*stride_ak)
b_ptrs = b_ptr + (offs_k [:, None]*stride_bk + offs_bn[None, :]*stride_bn)
# -----------------------------------------------------------
# Iterate to compute a block of the C matrix
# We accumulate into a `[BLOCK_SIZE_M, BLOCK_SIZE_N]` block
# of fp32 values for higher accuracy.
# `accumulator` will be converted back to fp16 after the loop
accumulator = tl.zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=tl.float32)
for k in range(0, K, BLOCK_SIZE_K):
# Note that for simplicity, we don't apply a mask here.
# This means that if K is not a multiple of BLOCK_SIZE_K,
# this will access out-of-bounds memory and produce an
# error or (worse!) incorrect results.
a = tl.load(a_ptrs)
b = tl.load(b_ptrs)
# We accumulate along the K dimension
accumulator += tl.dot(a, b)
# Advance the ptrs to the next K block
a_ptrs += BLOCK_SIZE_K * stride_ak
b_ptrs += BLOCK_SIZE_K * stride_bk
    # You can fuse arbitrary activation functions here
    # while the accumulator is still in FP32!
if meta['ACTIVATION']:
accumulator = meta['ACTIVATION'](accumulator)
c = accumulator.to(tl.float16)
# -----------------------------------------------------------
# Write back the block of the output matrix C
offs_cm = pid_m * BLOCK_SIZE_M + tl.arange(0, BLOCK_SIZE_M)
offs_cn = pid_n * BLOCK_SIZE_N + tl.arange(0, BLOCK_SIZE_N)
c_ptrs = c_ptr + stride_cm * offs_cm[:, None] + stride_cn * offs_cn[None, :]
c_mask = (offs_cm[:, None] < M) & (offs_cn[None, :] < N)
tl.store(c_ptrs, c, mask=c_mask)
# we can fuse `leaky_relu` by providing it as an `ACTIVATION` meta-parameter in `matmul_kernel`
@triton.jit
def leaky_relu(x):
return tl.where(x >= 0, x, 0.01 * x)
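Any other :code:`triton.jit`'ed elementwise function can be fused in the same way.
Purely as an illustrative sketch (it is not used in the rest of this tutorial), a plain
ReLU would look like:

.. code-block:: python

    @triton.jit
    def relu(x):
        return tl.where(x > 0, x, 0.0)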
.. GENERATED FROM PYTHON SOURCE LINES 263-265
We can now create a convenience wrapper function that only takes two input tensors,
and (1) checks any shape constraints; (2) allocates the output; (3) launches the above kernel.
.. GENERATED FROM PYTHON SOURCE LINES 265-294
.. code-block:: default
def matmul(a, b, activation=None):
# checks constraints
assert a.shape[1] == b.shape[0], "incompatible dimensions"
assert a.is_contiguous(), "matrix A must be contiguous"
assert b.is_contiguous(), "matrix B must be contiguous"
M, K = a.shape
K, N = b.shape
assert (
K % 32 == 0
), "We don't check memory-out-of-bounds with K so K must be divisible by BLOCK_SIZE_K"
# allocates output
c = torch.empty((M, N), device=a.device, dtype=a.dtype)
# 1D launch kernel where each block gets its own program.
grid = lambda META: (
triton.cdiv(M, META['BLOCK_SIZE_M']) * triton.cdiv(N, META['BLOCK_SIZE_N']),
)
matmul_kernel[grid](
a, b, c,
M, N, K,
a.stride(0), a.stride(1),
b.stride(0), b.stride(1),
c.stride(0), c.stride(1),
ACTIVATION=activation,
)
return c
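Note that the wrapper asserts :code:`K % 32 == 0` because the kernel above loads its blocks
of :code:`A` and :code:`B` without a mask along the K dimension. Purely as a hedged sketch
(not used in this tutorial), the inner loop could guard those loads so that arbitrary
:code:`K` values are supported, at a small cost:

.. code-block:: python

    for k in range(0, K, BLOCK_SIZE_K):
        # Out-of-range lanes along K read 0.0, which leaves the dot product unchanged
        k_mask = (k + offs_k) < K
        a = tl.load(a_ptrs, mask=k_mask[None, :], other=0.0)
        b = tl.load(b_ptrs, mask=k_mask[:, None], other=0.0)
        accumulator += tl.dot(a, b)
        # Advance the ptrs to the next K block
        a_ptrs += BLOCK_SIZE_K * stride_ak
        b_ptrs += BLOCK_SIZE_K * stride_bk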
.. GENERATED FROM PYTHON SOURCE LINES 295-299
Unit Test
-----------
We can test our custom matrix multiplication operation against a native torch implementation (i.e., cuBLAS)
.. GENERATED FROM PYTHON SOURCE LINES 299-312
.. code-block:: default
torch.manual_seed(0)
a = torch.randn((512, 512), device='cuda', dtype=torch.float16)
b = torch.randn((512, 512), device='cuda', dtype=torch.float16)
triton_output = matmul(a, b, activation=None)
torch_output = torch.matmul(a, b)
print(f"triton_output={triton_output}")
print(f"torch_output={torch_output}")
if triton.testing.allclose(triton_output, torch_output):
print("✅ Triton and Torch match")
else:
print("❌ Triton and Torch differ")
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
triton_output=tensor([[ 1.1045, -36.9688, 31.4688, ..., -11.3984, 24.4531, -32.3438],
[ 6.3555, -19.6094, 34.0938, ..., -5.8945, 5.2891, 6.8867],
[-32.0625, 5.9492, 15.3984, ..., -21.3906, -23.9844, -10.1328],
...,
[ -5.7031, 7.4492, 8.2656, ..., -10.6953, -40.0000, 17.7500],
[ 25.5000, 24.3281, -8.4688, ..., -18.9375, 32.5312, -29.9219],
[ -5.3477, 4.9844, 11.8906, ..., 5.5898, 6.4023, -17.3125]],
device='cuda:0', dtype=torch.float16)
torch_output=tensor([[ 1.1045, -36.9688, 31.4688, ..., -11.3906, 24.4531, -32.3438],
[ 6.3516, -19.6094, 34.0938, ..., -5.8906, 5.2812, 6.8828],
[-32.0625, 5.9531, 15.3984, ..., -21.4062, -23.9844, -10.1328],
...,
[ -5.7070, 7.4492, 8.2656, ..., -10.6953, -40.0000, 17.7500],
[ 25.5000, 24.3438, -8.4609, ..., -18.9375, 32.5312, -29.9219],
[ -5.3477, 4.9805, 11.8828, ..., 5.5859, 6.4023, -17.3125]],
device='cuda:0', dtype=torch.float16)
✅ Triton and Torch match
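If :code:`triton.testing.allclose` is not available in your installation, an ordinary
:code:`torch.allclose` check with tolerances loose enough for FP16 outputs (the values
below are an assumption, not something prescribed by Triton) is a reasonable substitute:

.. code-block:: python

    assert torch.allclose(triton_output, torch_output, atol=1e-2, rtol=0)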
.. GENERATED FROM PYTHON SOURCE LINES 313-319
Benchmark
--------------
Square Matrix Performance
~~~~~~~~~~~~~~~~~~~~~~~~~~
We can now compare the performance of our kernel against that of cuBLAS. Here we focus on square matrices, but feel free to arrange this script as you wish to benchmark any other matrix shape.
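The TFLOPS numbers reported below follow from the usual :code:`2 * M * N * K` flop count of a
matrix multiplication (one multiply and one add per term of each inner product), divided by the
measured runtime. For instance, with hypothetical numbers chosen only for illustration:

.. code-block:: python

    M = N = K = 4096
    flop = 2 * M * N * K            # ~1.37e11 floating-point operations
    ms = 1.5                        # hypothetical runtime in milliseconds
    tflops = flop * 1e-12 / (ms * 1e-3)
    print(f"{tflops:.1f} TFLOPS")   # ~91.6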
.. GENERATED FROM PYTHON SOURCE LINES 319-360
.. code-block:: default
@triton.testing.perf_report(
triton.testing.Benchmark(
x_names=['M', 'N', 'K'], # argument names to use as an x-axis for the plot
x_vals=[
128 * i for i in range(1, 33)
], # different possible values for `x_name`
line_arg='provider', # argument name whose value corresponds to a different line in the plot
        # possible values for `line_arg`
line_vals=['cublas', 'cublas + relu', 'triton', 'triton + relu'],
# label name for the lines
        line_names=["cuBLAS", "cuBLAS (+ torch.nn.ReLU)", "Triton", "Triton (+ LeakyReLU)"],
# line styles
styles=[('green', '-'), ('green', '--'), ('blue', '-'), ('blue', '--')],
ylabel="TFLOPS", # label name for the y-axis
plot_name="matmul-performance", # name for the plot. Used also as a file name for saving the plot.
args={},
)
)
def benchmark(M, N, K, provider):
a = torch.randn((M, K), device='cuda', dtype=torch.float16)
b = torch.randn((K, N), device='cuda', dtype=torch.float16)
if provider == 'cublas':
ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.matmul(a, b))
if provider == 'triton':
ms, min_ms, max_ms = triton.testing.do_bench(lambda: matmul(a, b))
if provider == 'cublas + relu':
torch_relu = torch.nn.ReLU(inplace=True)
ms, min_ms, max_ms = triton.testing.do_bench(
lambda: torch_relu(torch.matmul(a, b))
)
if provider == 'triton + relu':
ms, min_ms, max_ms = triton.testing.do_bench(
lambda: matmul(a, b, activation=leaky_relu)
)
perf = lambda ms: 2 * M * N * K * 1e-12 / (ms * 1e-3)
return perf(ms), perf(max_ms), perf(min_ms)
benchmark.run(show_plots=True, print_data=True)
.. image:: /getting-started/tutorials/images/sphx_glr_03-matrix-multiplication_001.png
:alt: 03 matrix multiplication
:class: sphx-glr-single-img
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
matmul-performance:
M cuBLAS ... Triton Triton (+ LeakyReLU)
0 128.0 0.455111 ... 0.512000 0.512000
1 256.0 2.978909 ... 2.978909 2.978909
2 384.0 7.372800 ... 8.507077 8.507077
3 512.0 14.563555 ... 16.384000 15.420235
4 640.0 22.260869 ... 24.380953 24.380953
5 768.0 32.768000 ... 34.028308 34.028308
6 896.0 37.971025 ... 40.140799 39.025776
7 1024.0 49.932191 ... 52.428801 52.428801
8 1152.0 44.566925 ... 46.656000 46.656000
9 1280.0 51.200001 ... 56.888887 56.109587
10 1408.0 64.138541 ... 64.902096 64.902096
11 1536.0 80.430545 ... 76.933564 76.106321
12 1664.0 63.372618 ... 62.492442 62.492442
13 1792.0 72.983276 ... 70.246402 69.810085
14 1920.0 69.467336 ... 70.892307 70.530615
15 2048.0 73.908442 ... 75.234154 74.898285
16 2176.0 83.500614 ... 80.817862 80.173899
17 2304.0 68.446623 ... 73.501144 73.051599
18 2432.0 71.125224 ... 80.499895 79.587714
19 2560.0 77.833728 ... 77.283019 76.740048
20 2688.0 84.108772 ... 83.552988 84.108772
21 2816.0 81.674548 ... 77.882512 79.733474
22 2944.0 81.832567 ... 78.235527 77.990663
23 3072.0 81.121923 ... 83.761985 80.544956
24 3200.0 84.768213 ... 89.635851 89.635851
25 3328.0 79.812967 ... 84.200347 87.580655
26 3456.0 81.189898 ... 84.420490 85.404201
27 3584.0 86.707226 ... 95.047985 90.549237
28 3712.0 84.159518 ... 84.301560 82.423549
29 3840.0 83.655065 ... 87.562949 87.493673
30 3968.0 93.076994 ... 88.040360 87.913500
31 4096.0 93.596744 ... 86.816123 83.571059
[32 rows x 5 columns]
.. rst-class:: sphx-glr-timing
**Total running time of the script:** ( 2 minutes 2.006 seconds)
.. _sphx_glr_download_getting-started_tutorials_03-matrix-multiplication.py:
.. only :: html
.. container:: sphx-glr-footer
:class: sphx-glr-footer-example
.. container:: sphx-glr-download sphx-glr-download-python
:download:`Download Python source code: 03-matrix-multiplication.py <03-matrix-multiplication.py>`
.. container:: sphx-glr-download sphx-glr-download-jupyter
:download:`Download Jupyter notebook: 03-matrix-multiplication.ipynb <03-matrix-multiplication.ipynb>`
.. only:: html
.. rst-class:: sphx-glr-signature
`Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_