[DOCS] Various improvements and typo fixes

@@ -18,19 +18,21 @@ Getting Started

   getting-started/tutorials/index

Programming Guide
--------------
------------------

Check out the following documents to learn more about Triton and how it compares against other DSLs for DNNs:

- Chapter 1: :doc:`Introduction <programming-guide/introduction>`
- Chapter 2: :doc:`Related Work <programming-guide/related-work>`
- Chapter 3: :doc:`The Triton-C Kernel Language <programming-guide/triton-c>`
- Chapter 1: :doc:`Introduction <programming-guide/chapter-1/introduction>`
- Chapter 2: :doc:`Related Work <programming-guide/chapter-2/related-work>`
- Chapter 3: :doc:`The Triton-C Language <programming-guide/chapter-3/triton-c>`
- Chapter 4: :doc:`The Triton-IR Intermediate Representation <programming-guide/chapter-4/triton-ir>`

.. toctree::
   :maxdepth: 1
   :caption: Programming Guide
   :hidden:

   programming-guide/introduction
   programming-guide/related-work
   programming-guide/triton-c
   programming-guide/chapter-1/introduction
   programming-guide/chapter-2/related-work
   programming-guide/chapter-3/triton-c
   programming-guide/chapter-4/triton-ir
@@ -6,13 +6,13 @@ Introduction

Motivations
--------------

Over the past decade, Deep Neural Networks (DNNs) have emerged as an important class of Machine Learning (ML) models, capable of achieving state-of-the-art performance across many domains ranging from natural language processing [1]_ to computer vision [2]_ to computational neuroscience [3]_. The strength of these models lies in their hierarchical structure, composed of a sequence of parametric (e.g., convolutional) and non-parametric (e.g., rectified linearity) *layers*. This pattern, though notoriously computationally expensive, also generates a large amount of highly parallelizable work particularly well suited for multi- and many- core processors.
Over the past decade, Deep Neural Networks (DNNs) have emerged as an important class of Machine Learning (ML) models, capable of achieving state-of-the-art performance across many domains ranging from natural language processing [SUTSKEVER2014]_ to computer vision [REDMON2016]_ to computational neuroscience [LEE2017]_. The strength of these models lies in their hierarchical structure, composed of a sequence of parametric (e.g., convolutional) and non-parametric (e.g., rectified linearity) *layers*. This pattern, though notoriously computationally expensive, also generates a large amount of highly parallelizable work particularly well suited for multi- and many-core processors.

As a consequence, Graphics Processing Units (GPUs) have become a cheap and accessible resource for exploring and/or deploying novel research ideas in the field. This trend has been accelerated by the release of several frameworks for General-Purpose GPU (GPGPU) computing, such as CUDA and OpenCL, which have made the development of high-performance programs easier. Yet, GPUs remain incredibly challenging to optimize for locality and parallelism, especially for computations that cannot be efficiently implemented using a combination of pre-existing optimized primitives. To make matters worse, GPU architectures are also rapidly evolving and specializing, as evidenced by the addition of tensor cores to NVIDIA (and more recently AMD) micro-architectures.

This tension between the computational opportunities offered by DNNs and the practical difficulty of GPU programming has created substantial academic and industrial interest for Domain-Specific Languages (DSLs) and compilers. Regrettably, these systems -- whether they be based on polyhedral machinery (*e.g.*, Tiramisu [4]_, Tensor Comprehensions [5]_) or scheduling languages (*e.g.*, Halide [6]_, TVM [7]_) -- remain less flexible and (for the same algorithm) markedly slower than the best handwritten compute kernels available in libraries like `cuBLAS <https://docs.nvidia.com/cuda/cublas/index.html>`_, `cuDNN <https://docs.nvidia.com/deeplearning/cudnn/api/index.html>`_ or `TensorRT <https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html>`_.
This tension between the computational opportunities offered by DNNs and the practical difficulty of GPU programming has created substantial academic and industrial interest in Domain-Specific Languages (DSLs) and compilers. Regrettably, these systems -- whether they be based on polyhedral machinery (*e.g.*, Tiramisu [BAGHDADI2021]_, Tensor Comprehensions [VASILACHE2018]_) or scheduling languages (*e.g.*, Halide [JRK2013]_, TVM [CHEN2018]_) -- remain less flexible and (for the same algorithm) markedly slower than the best handwritten compute kernels available in libraries like `cuBLAS <https://docs.nvidia.com/cuda/cublas/index.html>`_, `cuDNN <https://docs.nvidia.com/deeplearning/cudnn/api/index.html>`_ or `TensorRT <https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html>`_.

The main premise of this project is the following: programming paradigms based on blocked algorithms [8]_ can facilitate the construction of high-performance compute kernels for neural networks. We specifically revisit traditional "Single Program, Multiple Data" (SPMD [9]_) execution models for GPUs, and propose a variant in which programs -- rather than threads -- are blocked. For example, in the case of matrix multiplication, CUDA and Triton differ as follows:
The main premise of this project is the following: programming paradigms based on blocked algorithms [LAM1991]_ can facilitate the construction of high-performance compute kernels for neural networks. We specifically revisit traditional "Single Program, Multiple Data" (SPMD [AUGUIN1983]_) execution models for GPUs, and propose a variant in which programs -- rather than threads -- are blocked. For example, in the case of matrix multiplication, CUDA and Triton differ as follows:

.. table::
   :widths: 50 50
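
As a rough illustration of this contrast -- hypothetical Python/NumPy pseudocode rather than actual CUDA or Triton-C -- a scalar program instance owns a single element of the output, whereas a blocked program instance owns a whole tile:

.. code-block:: python

   import numpy as np

   M, N, K = 128, 128, 128
   TM, TN = 32, 32                    # tile sizes (illustrative)
   A = np.random.rand(M, K).astype(np.float32)
   B = np.random.rand(K, N).astype(np.float32)
   C = np.zeros((M, N), dtype=np.float32)

   # Scalar (CUDA-like) view: one "program" per element of C.
   for m in range(M):
       for n in range(N):
           C[m, n] = np.dot(A[m, :], B[:, n])

   # Blocked (Triton-like) view: one "program" per TM x TN tile of C.
   for m in range(0, M, TM):
       for n in range(0, N, TN):
           C[m:m + TM, n:n + TN] = A[m:m + TM, :] @ B[:, n:n + TN]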
@@ -58,12 +58,12 @@ The main challenge posed by our proposed paradigm is that of work scheduling, i.

References
--------------

.. [1] Sutskever et al., "Sequence to Sequence Learning with Neural Networks", NIPS 2014
.. [2] Redmon et al., "You Only Look Once: Unified, Real-Time Object Detection", CVPR 2016
.. [3] Lee et al., "Superhuman Accuracy on the SNEMI3D Connectomics Challenge", ArXiV 2017
.. [4] Baghdadi et al., "Tiramisu: A Polyhedral Compiler for Expressing Fast and Portable Code", CGO 2021
.. [5] Vasilache et al., "Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions", ArXiV 2018
.. [6] Ragan-Kelley et al., "Halide: A Language and Compiler for Optimizing Parallelism, Locality, and Recomputation in Image Processing Pipelines", PLDI 2013
.. [7] Chen et al., "TVM: An Automated End-to-End Optimizing Compiler for Deep Learning", OSDI 2018
.. [8] Lam et al., "The Cache Performance and Optimizations of Blocked Algorithms", ASPLOS 1991
.. [9] Auguin et al., "Opsila: an advanced SIMD for numerical analysis and signal processing", EUROMICRO 1983
.. [SUTSKEVER2014] I. Sutskever et al., "Sequence to Sequence Learning with Neural Networks", NIPS 2014
.. [REDMON2016] J. Redmon et al., "You Only Look Once: Unified, Real-Time Object Detection", CVPR 2016
.. [LEE2017] K. Lee et al., "Superhuman Accuracy on the SNEMI3D Connectomics Challenge", arXiv 2017
.. [BAGHDADI2021] R. Baghdadi et al., "Tiramisu: A Polyhedral Compiler for Expressing Fast and Portable Code", CGO 2021
.. [VASILACHE2018] N. Vasilache et al., "Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions", arXiv 2018
.. [JRK2013] J. Ragan-Kelley et al., "Halide: A Language and Compiler for Optimizing Parallelism, Locality, and Recomputation in Image Processing Pipelines", PLDI 2013
.. [CHEN2018] T. Chen et al., "TVM: An Automated End-to-End Optimizing Compiler for Deep Learning", OSDI 2018
.. [LAM1991] M. Lam et al., "The Cache Performance and Optimizations of Blocked Algorithms", ASPLOS 1991
.. [AUGUIN1983] M. Auguin et al., "Opsila: an advanced SIMD for numerical analysis and signal processing", EUROMICRO 1983
@@ -8,7 +8,7 @@ At first sight, Triton may seem like just yet another DSL for DNNs. The purpose

Polyhedral Compilation
-----------------------

Traditional compilers typically rely on intermediate representations, such as LLVM-IR [1]_, that encode control flow information using (un)conditional branches. This relatively low-level format makes it difficult to statically analyze the runtime behavior (e.g., cache misses) of input programs, and to automatically optimize loops accordingly through the use of tiling [2]_, fusion [3]_ and interchange [4]_. To solve this issue, polyhedral compilers [5]_ rely on program representations that have statically predictable control flow, thereby enabling aggressive compile-time program transformations for data locality and parallelism. Though this strategy has been adopted by many languages and compilers for DNNs such as Tiramisu [6]_, Tensor Comprehensions [7]_, Diesel [8]_ and the Affine dialect in MLIR [9]_, it also comes with a number of limitations that will be described later.
Traditional compilers typically rely on intermediate representations, such as LLVM-IR [LATTNER2004]_, that encode control flow information using (un)conditional branches. This relatively low-level format makes it difficult to statically analyze the runtime behavior (e.g., cache misses) of input programs, and to automatically optimize loops accordingly through the use of tiling [WOLFE1989]_, fusion [DARTE1999]_ and interchange [ALLEN1984]_. To solve this issue, polyhedral compilers [ANCOURT1991]_ rely on program representations that have statically predictable control flow, thereby enabling aggressive compile-time program transformations for data locality and parallelism. Though this strategy has been adopted by many languages and compilers for DNNs such as Tiramisu [BAGHDADI2021]_, Tensor Comprehensions [VASILACHE2018]_, Diesel [ELANGO2018]_ and the Affine dialect in MLIR [LATTNER2019]_, it also comes with a number of limitations that will be described later in this section.

+++++++++++++++++++++++
Program Representation

@@ -111,9 +111,9 @@ Advantages

Programs amenable to polyhedral compilation can be aggressively transformed and optimized. Most of these transformations actually boil down to the production of schedules and iteration domains that enable loop transformations promoting parallelism and spatial/temporal data locality (e.g., fusion, interchange, tiling, parallelization).

Polyhedral compilers can also automatically go through complex verification processes to ensure that the semantics of their input program is preserved throughout this optimization phase. Note that polyhedral optimizers are not incompatible with more standard optimization techniques. In fact, it is not uncommon for these systems to be implemented as a set of LLVM passes that can be run ahead of more traditional compilation techniques [10]_.
Polyhedral compilers can also automatically go through complex verification processes to ensure that the semantics of their input program is preserved throughout this optimization phase. Note that polyhedral optimizers are not incompatible with more standard optimization techniques. In fact, it is not uncommon for these systems to be implemented as a set of LLVM passes that can be run ahead of more traditional compilation techniques [GROSSER2012]_.

All in all, polyhedral machinery is extremely powerful, when applicable. It has been shown to support most common loop transformations, and has indeed achieved performance comparable to state-of-the-art GPU libraries for dense matrix multiplication [8]_. Additionally, it is also fully automatic and doesn't require any hint from programmers apart from source-code in a C-like format.
All in all, polyhedral machinery is extremely powerful, when applicable. It has been shown to support most common loop transformations, and has indeed achieved performance comparable to state-of-the-art GPU libraries for dense matrix multiplication [ELANGO2018]_. Additionally, it is also fully automatic and doesn't require any hint from programmers apart from source code in a C-like format.

++++++++++++
Limitations

@@ -121,9 +121,9 @@ Limitations

Unfortunately, polyhedral compilers suffer from two major limitations that have prevented their adoption as a universal method for code generation in neural networks.

First, the set of possible program transformations $\Omega = \{ \Theta_S ~|~ S \in \text{program} \}$ is large, and grows with the number of statements in the program as well as with the size of their iteration domain. Verifying the legality of each transformation can also require the resolution of complex integer linear programs, making polyhedral compilation very computationally expensive. To make matters worse, hardware properties (e.g., cache size, number of SMs) and contextual characteristics (e.g., input tensor shapes) also have to be taken into account by this framework, leading to expensive auto-tuning procedures [11]_.
First, the set of possible program transformations :math:`\Omega = \{ \Theta_S ~|~ S \in \text{program} \}` is large, and grows with the number of statements in the program as well as with the size of their iteration domain. Verifying the legality of each transformation can also require the resolution of complex integer linear programs, making polyhedral compilation very computationally expensive. To make matters worse, hardware properties (e.g., cache size, number of SMs) and contextual characteristics (e.g., input tensor shapes) also have to be taken into account by this framework, leading to expensive auto-tuning procedures [SATO2019]_.

Second, the polyhedral framework is not very generally applicable; SCoPs are relatively common [12]_ but require loop bounds and array subscripts to be affine functions of loop indices, which typically only occurs in regular, dense computations. For this reason, this framework still has to be successfully applied to sparse -- or even structured-sparse -- neural networks, whose importance has been rapidly rising over the past few years.
Second, the polyhedral framework is not very generally applicable; SCoPs are relatively common [GIRBAL2006]_ but require loop bounds and array subscripts to be affine functions of loop indices, which typically only occurs in regular, dense computations. For this reason, this framework has yet to be successfully applied to sparse -- or even structured-sparse -- neural networks, whose importance has been rapidly rising over the past few years.
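
To make the affine restriction concrete, here is a small, purely illustrative Python sketch (not taken from any of the systems above) contrasting a subscript the polyhedral model can represent with an indirect one that it cannot:

.. code-block:: python

   import numpy as np

   n = 16
   A = np.zeros(4 * n)
   idx = np.random.randint(0, A.size, size=n)

   # Affine subscript: 3*i + 2*j + 1 is a linear function of the loop
   # indices (i, j), so the access pattern fits the SCoP model.
   for i in range(n):
       for j in range(2):
           A[3 * i + 2 * j + 1] += 1.0

   # Non-affine (indirect) subscript: the accessed location depends on
   # data (idx[i]), so the iteration/access sets are no longer polyhedra.
   for i in range(n):
       A[idx[i]] += 1.0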

On the other hand, blocked program representations advocated by this dissertation are less restricted in scope and can achieve close to peak performance using standard dataflow analysis.
@@ -154,7 +154,7 @@ Separation of concerns \cite{dijkstra82} is a well-known design principle in com

      .parallel(y).vectorize(xii).unroll(xi).unroll(yii);

The resulting code may however not be completely portable, as schedules can sometimes rely on execution models (e.g., SPMD) or hardware intrinsics (e.g., matrix-multiply-accumulate) that are not widely available. This issue can be mitigated by auto-scheduling mechanisms [13]_.
The resulting code may however not be completely portable, as schedules can sometimes rely on execution models (e.g., SPMD) or hardware intrinsics (e.g., matrix-multiply-accumulate) that are not widely available. This issue can be mitigated by auto-scheduling mechanisms [MULLAPUDI2016]_.

+++++++++++
Advantages

@@ -194,16 +194,16 @@ On the other hand, the block-based program representation that we advocate for t

References
--------------

.. [1] Lattner et al., "LLVM: a compilation framework for lifelong program analysis transformation"
.. [2] Wolfe, "More Iteration Space Tiling", SC 1989
.. [3] Darte, "On the Complexity of Loop Fusion", PACT 1999
.. [4] Allen et al., "Automatic Loop Interchange", SIGPLAN Notices 1984
.. [5] Ancourt et al., "Scanning Polyhedra with DO Loops", PPoPP 1991
.. [6] Baghdadi et al., "Tiramisu: A Polyhedral Compiler for Expressing Fast and Portable Code", CGO 2021
.. [7] Vasilache et al., "Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions", ArXiV 2018
.. [8] Elango et al. "Diesel: DSL for Linear Algebra and Neural Net Computations on GPUs", MAPL 2018
.. [9] Lattner et al., "MLIR Primer: A Compiler Infrastructure for the End of Moore’s Law", Arxiv 2019
.. [10] Grosser et al., "Polly - Performing Polyhedral Optimizations on a Low-Level Intermediate Representation", Parallel Processing Letters 2012
.. [11] Sato et al., "An Autotuning Framework for Scalable Execution of Tiled Code via Iterative Polyhedral Compilation", TACO 2019
.. [12] Girbal et al., "Semi-Automatic Composition of Loop Transformations for Deep Parallelism and Memory Hierarchies", International Journal of Parallel Programming 2006
.. [13] Mullapudi et al., "Automatically scheduling halide image processing pipelines", TOG 2016
.. [LATTNER2004] C. Lattner et al., "LLVM: A Compilation Framework for Lifelong Program Analysis & Transformation", CGO 2004
.. [WOLFE1989] M. Wolfe, "More Iteration Space Tiling", SC 1989
.. [DARTE1999] A. Darte, "On the Complexity of Loop Fusion", PACT 1999
.. [ALLEN1984] J. Allen et al., "Automatic Loop Interchange", SIGPLAN Notices 1984
.. [ANCOURT1991] C. Ancourt et al., "Scanning Polyhedra with DO Loops", PPoPP 1991
.. [BAGHDADI2021] R. Baghdadi et al., "Tiramisu: A Polyhedral Compiler for Expressing Fast and Portable Code", CGO 2021
.. [VASILACHE2018] N. Vasilache et al., "Tensor Comprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions", arXiv 2018
.. [ELANGO2018] V. Elango et al., "Diesel: DSL for Linear Algebra and Neural Net Computations on GPUs", MAPL 2018
.. [LATTNER2019] C. Lattner et al., "MLIR Primer: A Compiler Infrastructure for the End of Moore’s Law", arXiv 2019
.. [GROSSER2012] T. Grosser et al., "Polly - Performing Polyhedral Optimizations on a Low-Level Intermediate Representation", Parallel Processing Letters 2012
.. [SATO2019] Y. Sato et al., "An Autotuning Framework for Scalable Execution of Tiled Code via Iterative Polyhedral Compilation", TACO 2019
.. [GIRBAL2006] S. Girbal et al., "Semi-Automatic Composition of Loop Transformations for Deep Parallelism and Memory Hierarchies", International Journal of Parallel Programming 2006
.. [MULLAPUDI2016] R. Mullapudi et al., "Automatically Scheduling Halide Image Processing Pipelines", TOG 2016
@@ -25,6 +25,7 @@ Extensions

**Masked pointer dereference**: Block-level operations in Triton-C are "atomic", in the sense that they execute either completely or not at all. Basic element-wise control flow for block-level operations can nonetheless be achieved using ternary operators and the *masked pointer dereference* operator exemplified below:

.. code-block:: C
   :force:

   // create mask
   bool mask[16, 16] = ...;
BIN docs/programming-guide/chapter-4/broadcast-1.png (new file)
BIN docs/programming-guide/chapter-4/broadcast-2.png (new file)
docs/programming-guide/chapter-4/triton-ir.rst (new file, 82 lines)
@@ -0,0 +1,82 @@

==========================================
The Triton-IR Intermediate Representation
==========================================

Triton-IR is an LLVM-based Intermediate Representation (IR) whose purpose is to provide an environment suitable for block-level program analysis, transformation and optimization.
In our implementation, Triton-IR programs are constructed directly from Triton-C after parsing, but they could also be formed directly by higher-level DSLs in the future.
Triton-IR and LLVM-IR programs share the same high-level structure, but the former also includes a number of extensions necessary for block-level data-flow analysis.
These extensions are crucial for carrying out the optimizations outlined in the next chapter of this document.

---------------------------------
Structure of a Triton-IR Program
---------------------------------

++++++++
Modules
++++++++

At the highest level, Triton-IR programs consist of one or multiple basic units of compilation known as *modules*. These modules are compiled independently from one another, and eventually aggregated by a linker whose role is to resolve forward declarations and adequately merge global definitions. Each module itself is composed of functions, global variables, constants and other miscellaneous symbols such as metadata and attributes.

++++++++++
Functions
++++++++++

Triton-IR function definitions consist of a return type, a name and a potentially empty arguments list. Additional visibility, alignment and linkage specifiers can be added if desired. Function attributes (such as inlining hints) and parameter attributes (such as "readonly", aliasing hints) can also be specified, allowing compiler backends to perform more aggressive optimizations by, for instance, making better use of non-coherent caches found on NVIDIA GPUs. This header is followed by a body composed of a list of basic blocks whose interdependencies form the Control Flow Graph (CFG) of the function.

+++++++++++++
Basic Blocks
+++++++++++++

Basic blocks are straight-line code sequences that may only contain so-called *terminator* instructions (i.e., branching, return) at their end. To simplify program analysis, Triton-IR uses the Static Single Assignment (SSA) form, meaning that each variable in each basic block must be (1) assigned to only once and (2) defined before being used. In so doing, each basic block implicitly defines a Data-Flow Graph (DFG). In our case, the SSA form is created directly from Triton-C's Abstract Syntax Trees (ASTs) using an algorithm from the literature [BRAUN13]_.

---------------------------------
Block-Level Dataflow Analysis
---------------------------------

+++++++
Types
+++++++

Multi-dimensional blocks are at the center of data-flow analysis in Triton-JIT. They can be declared using syntax similar to vector declarations in LLVM-IR. For example, :code:`i32<8, 8>` is the type corresponding to :math:`8 \times 8` blocks of 32-bit integers. Note that there is no preprocessor in Triton-IR, hence parametric shape values must be resolved before programs are generated. In our case, this is done by Triton-JIT's auto-tuner.

+++++++++++++
Instructions
+++++++++++++

Triton-IR introduces a set of *reblocking* instructions whose purpose is to support broadcasting semantics as described in the previous chapter. The :code:`reshape` instruction creates a block of the specified shape using the raw data from its input argument. This is particularly useful to re-interpret variables as higher-dimensional arrays by padding their input shapes with ones in preparation for broadcasting. The :code:`broadcast` instruction creates a block of the specified shape by replicating its input argument as many times as necessary along dimensions of size 1 -- as shown below for the :code:`broadcast<3,3>` instruction.

|pic1| and |pic2|

.. |pic1| image:: broadcast-1.png
   :width: 40%

.. |pic2| image:: broadcast-2.png
   :width: 40%

Usual scalar instructions (:code:`cmp`, :code:`getelementptr`, :code:`add`, :code:`load`...) were preserved and extended to signify element-wise operations when applicable. Finally, Triton-IR also exposes specialized arithmetic instructions for reductions (:code:`reduce`) and matrix multiplications (:code:`dot`).
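
For intuition, the reblocking semantics above mirror NumPy-style broadcasting. The following sketch uses NumPy purely as an analogy (it is not Triton-IR), padding a shape with a leading one and then replicating along the size-1 dimension:

.. code-block:: python

   import numpy as np

   # A block of 3 elements, i.e. roughly what Triton-IR would write as i32<3>.
   x = np.array([1, 2, 3], dtype=np.int32)

   # "reshape": reinterpret the same data as a 1 x 3 block by padding the
   # shape with a leading 1 (no data movement).
   x_row = x.reshape(1, 3)

   # "broadcast<3,3>": replicate along the size-1 dimension to get a 3 x 3 block.
   x_bcast = np.broadcast_to(x_row, (3, 3))

   print(x_bcast)
   # [[1 2 3]
   #  [1 2 3]
   #  [1 2 3]]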

----------------------------------
Block-Level Control Flow Analysis
----------------------------------

In Triton-IR, operations on block variables are atomic: they execute either in full or not at all. As a result, traditional control flow structures (e.g., conditionals, loops) are not applicable to individual block elements. This is problematic, since a program may need to, e.g., partially guard blocked loads against memory access violations.

This could be potentially solved through the use of the Predicated SSA (PSSA) [CARTER99]_ [STOUTCHININ01]_ form for Triton-IR. However, this would create a lot of unnecessary complexity for GPUs, where the benefits of PSSA are close to none as divergent program paths within warps are serialized anyway. Therefore, recent versions of Triton handle intra-block control flow in a much simpler way, using conditional instructions such as :code:`select`, :code:`masked_load` and :code:`masked_store`:

.. code-block:: C

   // For all indices [idx], return cond[idx] ? true_value[idx] : false_value[idx];
   select TYPE<TS1, ..., TSN> cond, true_value, false_value;
   // For all indices [idx], return cond[idx] ? *true_addr[idx] : false_value[idx];
   masked_load TYPE<TS1, ..., TSN> cond, true_addr, false_value;
   // For all indices [idx], execute *true_addr[idx] = true_value[idx] if cond[idx]
   masked_store TYPE<TS1, ..., TSN> cond, true_addr, true_value;
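
The element-wise semantics can be pictured with a NumPy analogy (an illustration only, not Triton-IR): :code:`select` behaves like :code:`np.where`, a masked load falls back to :code:`false_value` wherever the condition is false, and a masked store only writes the enabled lanes:

.. code-block:: python

   import numpy as np

   cond = np.array([True, False, True])
   true_value = np.array([1.0, 2.0, 3.0])
   false_value = np.array([-1.0, -1.0, -1.0])
   memory = np.array([10.0, 20.0, 30.0])                   # stands in for *true_addr

   out_select = np.where(cond, true_value, false_value)    # [ 1., -1.,  3.]
   out_load = np.where(cond, memory, false_value)          # [10., -1., 30.]
   memory[cond] = true_value[cond]                         # memory -> [ 1., 20.,  3.]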

------------
References
------------

.. [BRAUN13] M. Braun et al., "Simple and Efficient Construction of Static Single Assignment Form", CC 2013
.. [CARTER99] L. Carter et al., "Predicated Static Single Assignment", PACT 1999
.. [STOUTCHININ01] A. Stoutchinin et al., "Efficient Static Single Assignment Form for Predication", MICRO 2001
@@ -17,15 +17,11 @@ square_confs = [
        x_names=["M", "N", "K"],
        x_vals=rounded_linspace(512, 8192, 32, 128),
        y_name="provider",
        y_vals=["torch", "triton", "cutlass"],
        y_lines=["Torch", "Triton", "CUTLASS"],
        y_vals=["cublas", "triton", "cutlass"],
        y_lines=["cuBLAS", "Triton", "CUTLASS"],
        ylabel="TFLOPS",
        plot_name=f"matmul-square-{nt[AT]}{nt[BT]}",
        args={
            "AT": AT,
            "BT": BT,
            "dtype": torch.float16
        },
        args={"AT": AT, "BT": BT, "dtype": torch.float16},
    ) for AT in [False] for BT in [False]
]

@@ -35,8 +31,8 @@ transformer_confs = [
        x_names=[x],
        x_vals=rounded_linspace(NK // 16, NK, 32, 128),
        y_name="provider",
        y_vals=["torch", "triton", "cutlass"],
        y_lines=["Torch", "Triton", "CUTLASS"],
        y_vals=["cublas", "triton", "cutlass"],
        y_lines=["cuBLAS", "Triton", "CUTLASS"],
        ylabel="TFLOPS",
        plot_name=f"matmul-M{M}-{'NK'.replace(x, '')}{NK}",
        args={"M": M, 'NK'.replace(x, ''): NK, "AT": False, "BT": False, "dtype": torch.float16}

@@ -46,7 +42,7 @@ transformer_confs = [
]

@triton.testing.perf_report(transformer_confs)
@triton.testing.perf_report(square_confs)
def bench_op(M, N, K, AT, BT, dtype, provider, warmup=25, rep=75):
    a = torch.rand((K, M) if AT else (M, K), device="cuda", dtype=dtype)
    b = torch.rand((N, K) if BT else (K, N), device="cuda", dtype=dtype)

@@ -54,7 +50,7 @@ def bench_op(M, N, K, AT, BT, dtype, provider, warmup=25, rep=75):
    if BT: b = b.t()
    num_flops = 2 * M * N * K
    tflops = lambda ms: 2. * M * N * K / ms * 1e-9
    if provider == "torch":
    if provider == "cublas":
        ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.matmul(a, b), warmup=warmup, rep=rep)
        return tflops(ms), tflops(max_ms), tflops(min_ms)
    if provider == "triton":
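
Aside (illustrative, not part of this patch): the `tflops` lambda above folds the unit conversions into a single constant -- 2*M*N*K floating-point operations executed in `ms` milliseconds give 2*M*N*K / ms * 1e-9 TFLOPS:

# Sanity check of the conversion, with made-up numbers.
M = N = K = 4096
ms = 10.0
flops = 2 * M * N * K                   # total floating-point operations of a matmul
tflops = flops / (ms * 1e-3) / 1e12     # FLOPs per second, in units of 10**12
assert abs(tflops - 2. * M * N * K / ms * 1e-9) < 1e-6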
@@ -59,7 +59,7 @@ class CMakeBuild(build_ext):
        if not os.path.exists(llvm_build_dir):
            os.makedirs(llvm_build_dir)
        # python directories
        python_include_dirs = distutils.sysconfig.get_python_inc()
        python_include_dirs = [distutils.sysconfig.get_python_inc()] + ['/usr/local/cuda/include']
        python_lib_dirs = distutils.sysconfig.get_config_var("LIBDIR")
        cmake_args = [
            "-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=" + extdir,

@@ -68,7 +68,7 @@ class CMakeBuild(build_ext):
            #'-DPYTHON_EXECUTABLE=' + sys.executable,
            #'-DCMAKE_VERBOSE_MAKEFILE:BOOL=ON,
            "-DTRITON_LLVM_BUILD_DIR=" + llvm_build_dir,
            "-DPYTHON_INCLUDE_DIRS=" + ";".join([python_include_dirs])
            "-DPYTHON_INCLUDE_DIRS=" + ";".join(python_include_dirs)
        ]
        # configuration
        cfg = "Debug" if self.debug else "Release"
@@ -1,32 +1,44 @@
import os
import struct
from typing import Optional, Dict, List
from typing import Optional, Dict, List, Callable
import torch
# C bindings
import triton._C.libtriton.triton as _triton

codes = {
    _triton.runtime.arg_type.int1: 'B', _triton.runtime.arg_type.int8: 'B', _triton.runtime.arg_type.int32: 'I',
    _triton.runtime.arg_type.int64: 'Q', _triton.runtime.arg_type.half: 'H', _triton.runtime.arg_type.float: 'f',
    _triton.runtime.arg_type.double: 'd', _triton.runtime.arg_type.buffer: 'P'
    _triton.runtime.arg_type.int1: 'B',
    _triton.runtime.arg_type.int8: 'B',
    _triton.runtime.arg_type.int32: 'I',
    _triton.runtime.arg_type.int64: 'Q',
    _triton.runtime.arg_type.half: 'H',
    _triton.runtime.arg_type.float: 'f',
    _triton.runtime.arg_type.double: 'd',
    _triton.runtime.arg_type.buffer: 'P',
}
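
# Illustration (not part of this patch): the single-character values above are
# `struct` format codes, so a kernel whose signature() yields, say,
# (int32, int64, buffer) gets its arguments packed into a byte buffer like so:
import struct
import torch

_example_tensor = torch.empty(8)
_example_params = struct.pack('IQP', 42, 2 ** 40, _example_tensor.data_ptr())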

def th_to_triton(obj):
    """ Convert a `torch.dtype` to a Triton-C type string. """
    tys = {
        torch.int8: 'char', torch.int16: 'short', torch.int32: 'int', torch.int64: 'long',\
        torch.float16: 'half', torch.float32: 'float', torch.float64: 'double'
        torch.int8: 'char',
        torch.int16: 'short',
        torch.int32: 'int',
        torch.int64: 'long',
        torch.float16: 'half',
        torch.float32: 'float',
        torch.float64: 'double',
    }
    if isinstance(obj, torch.dtype):
        return tys[obj]
    return str(obj)


def cdiv(a, b):
def cdiv(a: int, b: int) -> int:
    """ Ceil division (a + b - 1) // b """
    return (a + b - 1) // b


def read(path, kernel_names: Optional[List] = None):
def read(path: str, kernel_names: Optional[List] = None) -> str:
    """ Extracts the source code for `kernel_names` from the given `path` file. """
    if kernel_names is None:
        kernel_names = []
    with open(path, 'r') as f:

@@ -39,19 +51,31 @@ config = _triton.runtime.config


class kernel:
    """
    A class used to represent a Triton kernel.
    """
    def __init__(
        self,
        src,
        device,
        src: str,
        device: torch.device,
        defines: Optional[Dict] = None,
        num_warps: int = 4,
        autotune_vals: Optional[List] = None,
        autotune_configs: Optional[List] = None,
        autotune_key: Optional[List] = None
    ):
        """
        :param src: The source code of the kernel.
        :param device: The device to compile the kernel for.
        :param defines: A dictionary of preprocessor #define for the compiler.
        :param num_warps: Optimization flag for the compiler's internal auto-parallelization engine.
        :param autotune_configs: A list of triton.config objects for the autotuner to try.
        :param autotune_key: A list of kernel argument names whose change in value should trigger the autotuner to re-run.
        """

        if defines is None:
            defines = {}
        if autotune_vals is None:
            autotune_vals = []
        if autotune_configs is None:
            autotune_configs = []
        if autotune_key is None:
            autotune_key = []
        # check if src is empty

@@ -74,11 +98,17 @@ class kernel:
        self.opt = _triton.runtime.options()
        self.opt.defines = {k: th_to_triton(v) for k, v in defines.items()}
        self.opt.num_warps = num_warps
        # autotune_vals = [({}, 4)]
        self.fn = _triton.runtime.function(self.src, self.opt, self.device, autotune_vals, autotune_key)
        # autotune_configs = [({}, 4)]
        self.fn = _triton.runtime.function(self.src, self.opt, self.device, autotune_configs, autotune_key)
        self.tys = ''.join([codes[x] for x in self.fn.signature()])

    def __call__(self, *args, grid):
    def __call__(self, *args, grid: Callable[[_triton.runtime.options], tuple]):
        """
        Runs the kernel on the given arguments and launch grid.
        :param args: The arguments to the kernel in the order that they appear in the Triton-C source.
        :param grid: The launch grid for the kernel, i.e., a callable that transforms compilation options into a tuple of at most 3 integers.
        :return: None
        """
        # make sure that the executing thread is on the right device
        torch.cuda.set_device(self.device_id)
        # pack parameters into a byte buffer
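
For orientation, a sketch of how the renamed kernel API is meant to be used, modelled on the matmul op changed further down in this diff; the kernel source file, its arguments, and the `TM` option are assumptions for illustration only:

import torch
import triton

src = triton.read("kernel.c")               # hypothetical Triton-C source file
kernel = triton.kernel(
    src,
    device=torch.device("cuda"),
    defines={"TYPE": torch.float16},        # converted by th_to_triton()
    num_warps=4,
    autotune_configs=[],                    # formerly `autotune_vals`
    autotune_key=[],
)
# `grid` maps compilation options to a launch grid of at most 3 integers.
grid = lambda opt: [triton.cdiv(1024, opt.TM), 1, 1]
# kernel(x.data_ptr(), y.data_ptr(), 1024, grid=grid)  # args in Triton-C source order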
@@ -2,6 +2,7 @@ import torch
import triton
import os


class _matmul(torch.autograd.Function):
    src = triton.read(os.path.join(os.path.dirname(__file__), "matmul.c"))

@@ -83,7 +84,7 @@ class _matmul(torch.autograd.Function):
                _matmul.src,
                device,
                defines=defines,
                autotune_vals=_matmul._CONFIGS,
                autotune_configs=_matmul._CONFIGS,
                autotune_key=["M", "N", "K"],
            )
        kernel = _matmul._kernels[key]

@@ -93,24 +94,8 @@ class _matmul(torch.autograd.Function):
        locks = _matmul._locks[device]
        # enqueue
        alpha = 1.0
        args = [
            a.data_ptr(),
            b.data_ptr(),
            c.data_ptr(),
            alpha,
            M,
            N,
            K,
            lda,
            ldb,
            ldc,
            locks.data_ptr(),
        ]
        grid = lambda opt: [
            triton.cdiv(M, opt.TM) * triton.cdiv(N, opt.TN),
            1,
            opt.SPLITK,
        ]
        args = [a.data_ptr(), b.data_ptr(), c.data_ptr(), alpha, M, N, K, lda, ldb, ldc, locks.data_ptr()]
        grid = lambda opt: [triton.cdiv(M, opt.TM) * triton.cdiv(N, opt.TN), 1, opt.SPLITK]
        kernel(*args, grid=grid)
        return c

@@ -119,4 +104,5 @@ class _matmul(torch.autograd.Function):
        c = _matmul._call(a, b)
        return c


matmul = _matmul.apply
@@ -108,9 +108,10 @@ class Benchmark:
        y_name,
        y_vals,
        y_lines,
        ylabel,
        plot_name,
        args,
        xlabel='',
        ylabel='',
        x_log=False,
        y_log=False,
    ):

@@ -121,6 +122,8 @@ class Benchmark:
        self.y_vals = y_vals
        self.y_lines = y_lines
        self.y_log = y_log
        # plot info
        self.xlabel = xlabel
        self.ylabel = ylabel
        self.plot_name = plot_name
        self.args = args

@@ -131,7 +134,7 @@ class Mark:
        self.fn = fn
        self.benchmarks = benchmarks

    def _run(self, bench, save_path, show_plots):
    def _run(self, bench, save_path, show_plots, print_data):
        import matplotlib.pyplot as plt
        import pandas as pd
        import os

@@ -155,7 +158,6 @@ class Mark:
        if bench.plot_name:
            plt.figure()
            ax = plt.subplot()
            xlabel = " = ".join(bench.x_names)
            x = bench.x_names[0]
            for y in bench.y_lines:
                y_min, y_max = df[y + '-min'], df[y + '-max']

@@ -163,27 +165,30 @@ class Mark:
                if y_min is not None and y_max is not None:
                    ax.fill_between(df[x], y_min, y_max, alpha=0.5)
            ax.legend()
            xlabel = bench.xlabel if bench.xlabel else " = ".join(bench.x_names)
            ax.set_xlabel(xlabel)
            ax.set_ylabel(bench.ylabel)
            ax.set_title(bench.plot_name)
            #ax.set_title(bench.plot_name)
            ax.set_xscale("log" if bench.x_log else "linear")
            ax.set_yscale("log" if bench.y_log else "linear")
            if show_plots:
                plt.show()
            if save_path:
                plt.savefig(os.path.join(save_path, f"{bench.plot_name}.png"))
        df = df[[bench.x_names[0]] + bench.y_lines]
        if print_data:
            print(df)
        if save_path:
            df = df[[bench.x_names[0]] + bench.y_lines]
            df.to_csv(os.path.join(save_path, f"{bench.plot_name}.csv"), float_format='%.1f', index=False)

    def run(self, show_plots=False, save_path=''):
    def run(self, show_plots=False, print_data=False, save_path=''):
        has_single_bench = isinstance(self.benchmarks, Benchmark)
        benchmarks = [self.benchmarks] if has_single_bench else self.benchmarks
        if save_path:
            html = open(os.path.join(save_path, "results.html"), "w")
            html.write("<html><body>\n")
        for bench in benchmarks:
            self._run(bench, save_path, show_plots)
            self._run(bench, save_path, show_plots, print_data)
            if save_path:
                html.write(f"<image src=\"{bench.plot_name}.png\"/>\n")
        if save_path:
@@ -229,7 +229,13 @@ def make_kernel(device, dtype):
    cache = make_kernel.cache
    if key not in cache:
        defines = {'TYPE': dtype}
        cache[key] = triton.kernel(src, device=device, defines=defines, autotune_vals=autotune_configs, autotune_key=autotune_key)
        cache[key] = triton.kernel(
            src,
            device=device,
            defines=defines,
            autotune_configs=autotune_configs,
            autotune_key=autotune_key,
        )
    return cache[key]

@@ -319,7 +325,7 @@ print(torch.allclose(c_0, c_1, rtol=1e-3, atol=1e-3))
# .. code-block:: bash
#
#     export CUTLASS_INCLUDE_DIR=/tmp/cutlass/build/install/include/
#     export CUTLASS_LIBRARY_DIR=/tmp/cutlass/build/install/lib/a
#     export CUTLASS_LIBRARY_DIR=/tmp/cutlass/build/install/lib/
#     pip uninstall -y triton
#     pip install -e "git+https://github.com/ptillet/triton.git#egg=triton&subdirectory=python"
#

@@ -343,8 +349,8 @@ print(torch.allclose(c_0, c_2, rtol=1e-3, atol=1e-3))
        x_names=['M', 'N', 'K'],  # argument names to use as an x-axis for the plot
        x_vals=[256 * i for i in range(2, 33)],  # different possible values for `x_name`
        y_name='provider',  # argument name whose value corresponds to a different line in the plot
        y_vals=['torch', 'triton', 'cutlass'],  # possible keys for `y_name`
        y_lines=["Torch", "Triton", 'CUTLASS'],  # label name for the lines
        y_vals=['cublas', 'triton', 'cutlass'],  # possible keys for `y_name`
        y_lines=["cuBLAS", "Triton", 'CUTLASS'],  # label name for the lines
        ylabel="TFLOPS",  # label name for the y-axis
        plot_name="matmul-performance",  # name for the plot. Used also as a file name for saving the plot.
        args={}

@@ -353,7 +359,7 @@ print(torch.allclose(c_0, c_2, rtol=1e-3, atol=1e-3))
def benchmark(M, N, K, provider):
    a = torch.randn((M, K), device='cuda', dtype=torch.float16)
    b = torch.randn((K, N), device='cuda', dtype=torch.float16)
    if provider == 'torch':
    if provider == 'cublas':
        ms, min_ms, max_ms = triton.testing.do_bench(lambda: torch.matmul(a, b))
    if provider == 'triton':
        ms, min_ms, max_ms = triton.testing.do_bench(lambda: dot(a, b))