Triton

This is the development repository of Triton, a language and compiler for writing highly efficient custom deep-learning primitives. The aim of Triton is to provide an open-source environment for writing fast code with higher productivity than CUDA and greater flexibility than other existing DSLs.


The foundations of this project are described in the following MAPL 2019 publication: Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations. Please consider citing us if you use our work!
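To illustrate the tiled-computation model that the paper describes, here is a plain-NumPy sketch of a block-tiled matrix multiplication. This is only an illustration of the concept; real Triton kernels are written in Triton's own Python DSL and compiled to run on the GPU, and the tile size below is an arbitrary choice for the example.

```python
import numpy as np

def tiled_matmul(A, B, TILE=32):
    """Block-tiled matrix multiply: C = A @ B, computed one tile at a time.

    Each (m, n) output tile is accumulated over K in TILE-sized chunks,
    mirroring how a tiled GPU kernel keeps one output block in fast memory
    while streaming blocks of A and B through it.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=A.dtype)
    for m in range(0, M, TILE):
        for n in range(0, N, TILE):
            # Accumulator for one output tile (handles ragged edges).
            acc = np.zeros((min(TILE, M - m), min(TILE, N - n)), dtype=A.dtype)
            for k in range(0, K, TILE):
                acc += A[m:m + TILE, k:k + TILE] @ B[k:k + TILE, n:n + TILE]
            C[m:m + TILE, n:n + TILE] = acc
    return C
```

The point of the tiling is locality: the inner loop reuses one small output block many times, which is the access pattern Triton's compiler optimizes for on GPUs.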

Installation

You can install the latest release with pip as follows:

sudo apt-get install llvm-10-dev
pip install triton

or the latest development version with:

pip install -e "git+https://github.com/ptillet/triton.git#egg=triton&subdirectory=python"

To build the C++ package from source:

git clone https://github.com/ptillet/triton.git
cd triton
mkdir build
cd build
cmake ..
make -j8

Getting Started

Tutorials are available for both the Python and C++ APIs.
