Triton
This is the development repository of Triton, a language and compiler for writing highly efficient custom Deep-Learning primitives. The aim of Triton is to provide an open-source environment to write fast code at higher productivity than CUDA, but also with higher flexibility than other existing DSLs.
The foundations of this project are described in the following MAPL2019 publication: Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations. Please consider citing us if you use our work!
The official documentation contains installation instructions and tutorials.
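To give a flavor of the language, below is a minimal sketch of an element-wise vector-add kernel in the style of the official tutorials. The kernel name, the BLOCK_SIZE value, and the grid lambda are illustrative choices, not fixed requirements.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Launch one program per block of 1024 elements (an illustrative choice).
    out = torch.empty_like(x)
    n_elements = out.numel()
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out
```

The kernel is written as a single-program body; Triton launches many instances of it over the grid, and the compiler handles vectorization and memory-access details that would otherwise be managed by hand in CUDA.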
Compatibility
Supported Platforms:
- Linux
Supported Hardware:
- NVIDIA GPUs (Compute Capability 7.0+)
- Under development: AMD GPUs, CPUs