[DOCS] Fixed README.md

Author:    Philippe Tillet
Date:      2020-02-10 03:47:49 -05:00
Committer: Philippe Tillet
Parent:    97293440fc
Commit:    d7a781dd40

--- a/README.md
+++ b/README.md

@@ -2,19 +2,22 @@
 This is the development repository of Triton, a language and compiler for writing highly efficient custom Deep-Learning primitives. The aim of Triton is to provide an open-source environment to write fast code at higher productivity than CUDA, but also with much higher flexibility than [TVM](https://github.com/apache/incubator-tvm) and without having to manually specify compute schedules.
-The main scope of Triton at the moment are:
-- **Triton-C**: An imperative, single-threaded language for writing highly efficient compute-kernels at a relatively high abstraction level using numpy-like extensions of the C language.
-- **Triton-IR**: An intermediate-representation for optimizing multi-dimensional array operations in linear algebra programs
-- **Triton-JIT**: An optimizing just-in-time compiler for Triton-C, which generates GPU code on par with state-of-the-art CUDA-C (e.g., [CUTLASS](https://github.com/NVIDIA/cutlass)) and PTX (e.g., [ISAAC](https://github.com/ptillet/isaac)). This includes transparent support for mixed-precision and Tensor Cores.
+The main components of Triton at the moment are:
-Bindings for **automatic** PyTorch custom op generations are included in - **PyTriton**, along with a small DSL based on einsum that supports convolutions, shift-convolutions, direct einsums, etc.
+- **Triton-C**: An imperative, single-threaded language for writing highly efficient compute-kernels at a relatively high abstraction level (think numpy-like array operations in a C-like language).
+- **Triton-IR**: A special-purpose intermediate representation (Triton-IR) for aiding array-level program analysis and optimizations in Triton-C programs.
+- **Triton-JIT**: An optimizing just-in-time compiler for Triton-IR, which generates GPU code on par with state-of-the-art CUDA-C (e.g., [CUTLASS](https://github.com/NVIDIA/cutlass)). This includes transparent support for mixed-precision and Tensor Cores.
+Bindings for **automatic** PyTorch custom op generations are included in **PyTriton**, along with a small DSL based on einsum that supports convolutions, shift-convolutions, direct einsums, etc.
 The formal foundations of this project are described in the following MAPL2019 publication: [Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations](http://www.eecs.harvard.edu/~htk/publication/2019-mapl-tillet-kung-cox.pdf). Please cite us if you use our work!
 ## Installation
-Triton is a fairly self-contained package and uses its own parser (forked from [wgtcc](https://github.com/wgtdkp/wgtcc)) and LLVM code-generator. However, at the moment it relies on LLVM-8.0+ for PTX code generation. The whole compiler stack (~30k lines of C++ code) should take around 15 secs to compile.
+Triton is a fairly self-contained package and uses its own parser (forked from [wgtcc](https://github.com/wgtdkp/wgtcc)) and LLVM-8.0+ for code generation.
 ```
 sudo apt-get install llvm-8-dev
@@ -25,13 +28,14 @@ cd examples;
 python einsum.py
 ```
-## Tutorials
+## Getting Started
 - [The Triton-C language](https://github.com/ptillet/triton/blob/master/docs/triton-c.md)
 - [The PyTriton API](https://github.com/ptillet/triton/blob/master/docs/pytriton.md)
 - Extended Einstein Summations (coming soon...)
 - The Triton-IR representation (coming soon...)
 - The Triton-JIT compiler (coming soon...)
+Please visit the [documentation](https://docs.triton-lang.org) to get started with Triton
 ## Contributing
 Please keep in mind that this is a project I have been carrying out completely on my own as part of my Ph.D. thesis. While I am confident in the approach, there are still many things to fix and to polish. Please contact me (ptillet AT g.harvard.edu) or raise an issue if you want to contribute!
 ## ISAAC (deprecated) for fast inference
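
For context on the **PyTriton** einsum DSL that the README text in this diff describes, here is a minimal sketch of what a call might look like. This is a hypothetical illustration only: the `triton.ops.einsum` entry point and its signature are assumptions based on the README's wording, not a verified API; the authoritative usage is in `examples/einsum.py` and [docs/pytriton.md](https://github.com/ptillet/triton/blob/master/docs/pytriton.md).

```python
# Hypothetical sketch of the PyTriton einsum DSL -- `triton.ops.einsum` and
# its signature are assumptions; see examples/einsum.py for the real usage.
import torch
import triton

# Two half-precision operands on the GPU. PyTriton generates and JIT-compiles
# a kernel for the requested contraction the first time it is called.
a = torch.randn(64, 128, 32, device='cuda', dtype=torch.float16)
b = torch.randn(64, 32, 256, device='cuda', dtype=torch.float16)

# A batched matrix product expressed as a direct einsum.
c = triton.ops.einsum('bmk,bkn->bmn', a, b)

# Sanity-check against PyTorch's reference einsum.
ref = torch.einsum('bmk,bkn->bmn', a.float(), b.float())
assert torch.allclose(c.float(), ref, atol=1e-2, rtol=1e-2)
```

Per the README, the same DSL also covers convolutions and shift-convolutions through extended einsum expressions, which the `cd examples; python einsum.py` step in the installation snippet above demonstrates.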