Commit Graph

411 Commits

Author SHA1 Message Date
Nicholas Joseph
68f7eeba92 [DOCS] Improve matmul tutorial readability (#188) 2021-08-05 16:05:56 -07:00
Nicholas Joseph
4e6f667c2f [DOCS] Improve readability of 02-fused-softmax.py (#186) 2021-08-05 09:39:07 -07:00
Nicholas Joseph
23c71538fc [DOCS] Improve tutorial readability (#185) 2021-08-05 09:27:06 -07:00
Xiangru Lian
9967e9d4b4 [DOCS] Fix fused softmax example script naive softmax implementation (#178) 2021-08-02 09:37:31 -07:00
Philippe Tillet
a34c57402f [PYTHON] Improved error message for CPU (#167) 2021-07-30 09:47:27 -07:00
Reid Draper
2322d6df2a [CI] Update ptillet to openai (#152) 2021-07-29 11:39:50 -07:00
Philippe Tillet
4b9df06568 [CI] Bumped dev version to 1.0.1 and fixed permissions in documentation.yml (#149) 2021-07-28 04:35:14 -07:00
Philippe Tillet
acd5e44611 [GENERAL] Some minor improvements here and there to build systems and docs (#148) 2021-07-28 01:51:17 -07:00
Philippe Tillet
57c1fd3366 [BUILD] Now downloading LLVM from web if system does not have llvm-config-11 (#142) 2021-07-28 01:02:31 -07:00
Philippe Tillet
b253b77c71 [DOCS] Improved documentation and integration in CI (#139) 2021-07-27 12:38:49 -07:00
Philippe Tillet
76c6f24fb6 [CI] Made build-wheels compatible with system LLVM setup (#138)
This speeds up wheelhouse build time by ~10x
2021-07-27 12:38:49 -07:00
Philippe Tillet
8eb63bcb01 [CI] Various improvements to CI (#137)
Adds clean-up before CI runs. Now using static LLVM-11 libraries from the system rather than recompiling LLVM. Still no run-time LLVM dependencies.
2021-07-27 12:38:49 -07:00
Philippe Tillet
298aead378 [FRONTEND] Fixed bugs in global symbols resolution of @triton.jit'd functions (#136) 2021-07-27 12:38:49 -07:00
Philippe Tillet
01276b5153 [FRONTEND] Added compilation flag to force use of .nc cache modifier (#134)
in DRAM loads. /!\ USE CAREFULLY - THIS CAN BREAK CORRECTNESS IF MISUSED /!\
2021-07-27 12:38:49 -07:00
Philippe Tillet
2824345065 [LANGUAGE] Added cos/sin (#132) 2021-07-27 12:38:49 -07:00
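For context, a minimal sketch of the new math builtins in use. The kernel below and its block size are illustrative rather than taken from the PR, and it is written in current triton.language syntax; the 1.0-era way of passing meta-parameters differed slightly.

```python
import triton
import triton.language as tl

@triton.jit
def cos_sin_kernel(x_ptr, cos_ptr, sin_ptr, n_elements, BLOCK: tl.constexpr):
    # Illustrative elementwise kernel: each program handles one block of elements.
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    x = tl.load(x_ptr + offs, mask=mask)
    # The two new builtins operate elementwise on the loaded block.
    tl.store(cos_ptr + offs, tl.cos(x), mask=mask)
    tl.store(sin_ptr + offs, tl.sin(x), mask=mask)
```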
Philippe Tillet
3169e4355c [PYTHON] Bugfix in trans_c for dsd in blocksparse matmul (#131) 2021-07-27 12:38:49 -07:00
Philippe Tillet
8cea583109 [IR] Preliminary support for BF16 (#129)
This PR adds a BF16 data-type, along with FP32 <-> BF16 conversion instructions in the LLVM codegen. Other kinds of ops on bfloat16 are not yet supported.
2021-07-27 12:38:49 -07:00
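To make the stated scope concrete, a hedged sketch of the FP32 <-> BF16 round trip inside a kernel, using current triton.language dtype and cast syntax; the kernel itself is illustrative and not part of the PR.

```python
import triton
import triton.language as tl

@triton.jit
def scale_bf16_kernel(x_ptr, out_ptr, scale, n_elements, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    x = tl.load(x_ptr + offs, mask=mask)                     # bfloat16 elements in memory
    y = x.to(tl.float32) * scale                             # BF16 -> FP32, compute in fp32
    tl.store(out_ptr + offs, y.to(tl.bfloat16), mask=mask)   # FP32 -> BF16 on the way out
```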
Philippe Tillet
9b4e2cae2d [PYTHON] Added isinstance to list of builtin symbols exposed (#128) 2021-07-27 12:38:49 -07:00
daadaada
d8d6b715c8 [CODEGEN] Performance improvement on A100 (#125)
Improved codegen for the Ampere GPUs.

    * Make the layout pass recognize the multistage pipelined pattern.
    * Now the pipeline pass can automate the multistage pipelining transformation.
    * Remove extra barriers (from the prefetch pass & WAR) on Ampere.
    * Update the code generator (generator.cc) to make Triton generate n-buffered shared memory loads/stores.
2021-07-27 12:38:49 -07:00
Philippe Tillet
b7b05a560e [DRIVER] Now giving the option to use system ptxas through environment variable (#123) 2021-07-27 12:38:49 -07:00
Philippe Tillet
0274429429 [IR] Added IR and Codegen support for atomic_rmw (#120) 2021-07-27 12:38:49 -07:00
Philippe Tillet
59b0ac672a [LANGUAGE] Added support for bitcast (#119) 2021-07-27 12:38:49 -07:00
Philippe Tillet
3ab121dbdb [PYTHON] Added support for tuples (#116) 2021-07-27 12:38:49 -07:00
Philippe Tillet
f81012a8cf [CODEGEN] Fixed atomic_add issue (#112)
* [CODEGEN] Fixed atomic_add issue

* [CODEGEN] Fixed liveness analysis bug for instructions that are not
DCE'd but have no users (e.g., atomic_cas)
2021-07-27 12:38:49 -07:00
Philippe Tillet
325ee38581 [PYTHON] Fixed bug in scoping mechanism (#111)
Inlined functions didn't restore the scope of their parents. Also, some control-flow
structures still had the scoping semantics of C++.
2021-07-27 12:38:49 -07:00
Philippe Tillet
9f30af76fb [GENERAL] Minor improvements: (#110)
* Load libcuda.so.1 if libcuda.so is not there; error out if neither is there.
* Support for multiple grad_to_none in triton.testing.do_bench
* Benchmark dataframe printed along with name
2021-07-27 12:38:49 -07:00
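For reference, a hedged sketch of calling `triton.testing.do_bench` with several `grad_to_none` tensors, as this commit enables. The workload and shapes are made up, and the exact return format of `do_bench` has varied across Triton versions.

```python
import torch
import triton

# Hypothetical backward-pass benchmark: both `a` and `b` accumulate gradients,
# so both are listed in grad_to_none so that their .grad is reset before each timed run.
a = torch.randn(1024, 1024, device="cuda", requires_grad=True)
b = torch.randn(1024, 1024, device="cuda", requires_grad=True)
loss = (a @ b).sum()

ms = triton.testing.do_bench(
    lambda: loss.backward(retain_graph=True),
    grad_to_none=[a, b],
)
print(f"backward pass: {ms} ms")  # return format differs across Triton versions
```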
Philippe Tillet
288b4f7f58 [PYTHON] Added frontend to print sass using turingas disasm.py (#109) 2021-07-27 12:38:49 -07:00
Philippe Tillet
c91dd56a92 [CI] Made setup.py more backwards-compatible (#108) 2021-07-27 12:38:49 -07:00
Philippe Tillet
147675923e [triton-ops] Minor build improvements (#106) 2021-07-27 12:38:49 -07:00
Szymon Sidor
ac57812bdc [PYTHON] Update TensorWrapper with device attribute (#102)
Bugfix for `triton.reinterpret`.
2021-07-27 12:38:49 -07:00
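For context, a hedged sketch of the `triton.reinterpret` use case this fixes: wrapping an existing tensor so that a kernel sees its memory under a different element type. The dtype chosen here is illustrative.

```python
import torch
import triton
import triton.language as tl

# A tensor allocated as uint8, to be consumed by a kernel under a different element type.
raw = torch.empty(1024, device="cuda", dtype=torch.uint8)

# triton.reinterpret wraps `raw` in a TensorWrapper carrying the new element type;
# the wrapper's .device attribute (added here) lets kernels be launched on it
# just like on a regular tensor.
wrapped = triton.reinterpret(raw, tl.int8)
```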
Philippe Tillet
840140bf26 [CODEGEN] Removed dedicated reassociate pass to merge it into LLVM isel (#101)
This massively simplifies the implementation of `reassociate` and also fixes
a bunch of bugs. The pass could still be improved, but it can already be used
to generate constant pointer offsets in, e.g., the matmul epilogue.
2021-07-27 12:38:49 -07:00
Philippe Tillet
7355efa745 [LANG] Preliminary FP8 support (#96) 2021-07-27 12:38:49 -07:00
Philippe Tillet
4290be1ae8 [PYTHON] Various minor codegen fixes (#95) 2021-07-27 12:38:49 -07:00
Philippe Tillet
2b75158426 [PYTHON] Added atomic_add (#94) 2021-07-27 12:38:49 -07:00
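As an illustration, a hedged sketch of a histogram-style kernel built on the new `tl.atomic_add` builtin; the kernel and block size are illustrative and written in current triton.language syntax.

```python
import triton
import triton.language as tl

@triton.jit
def histogram_kernel(idx_ptr, hist_ptr, n_elements, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    bins = tl.load(idx_ptr + offs, mask=mask, other=0)
    # Different programs may update the same bin concurrently, hence the atomic add.
    tl.atomic_add(hist_ptr + bins, 1, mask=mask)
```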
Nora Belrose
d7f87929fa [PYTHON] Added shape & device checks for inputs to sparse matmul op (#93) 2021-07-27 12:38:49 -07:00
Philippe Tillet
bfc0a7587d [PYTHON] Renamed triton.core -> triton.language (#92) 2021-07-27 12:38:49 -07:00
Nora Belrose
41410012e8 [PYTHON] Fix crash in sparse matmul for tensors w/ less than 4 dims (#91) 2021-07-27 12:38:49 -07:00
daadaada
f6688372db [PYTHON] Allow triton.code_gen.Binary to print Triton-IR asm. (#89) 2021-07-27 12:38:49 -07:00
Nora Belrose
1112e2526e [PYTHON] Fix variable referenced before assignment error in blocksparse matmul backward (#90) 2021-07-27 12:38:49 -07:00
Philippe Tillet
d9112144b4 [PYTHON] Now triton.code_gen.Binary can print PTX and LLIR (#88) 2021-07-27 12:38:49 -07:00
Philippe Tillet
29e33e50b7 [DOCS] Updates and improvements (#87) 2021-07-27 12:38:49 -07:00
Philippe Tillet
39f4730305 Deprecation of Triton-C and Replacement by decorated Python functions (#86)
This PR implements a major overhaul of the frontend for Triton, and replaces Triton-C with a pure Python API in which kernels are defined as @triton.jit decorated functions. The documentation and tutorials have also been updated to accommodate these changes.

See the documentation for more information on the new API.
2021-07-27 12:38:49 -07:00
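For readers unfamiliar with the new frontend, a hedged sketch of the decorated-function style this PR introduces, written in current syntax; the exact 1.0-era way of passing meta-parameters such as the block size differed slightly.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    # Each program instance adds one block of elements.
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, x + y, mask=mask)

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK=1024)
```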
Philippe Tillet
1fdb465b71 [DOCS] Various improvements and typo fixes 2021-07-27 12:38:49 -07:00
Philippe Tillet
167a2e4b1a [PYTHON] Fixed formatting issue in conv.c 2021-07-27 12:38:49 -07:00
Philippe Tillet
5ba5a77561 [BUILD] Remove compilation warnings 2021-07-27 12:38:49 -07:00
Philippe Tillet
b352bc79e3 [CI] Changed triton-nightly to --pre triton (#78)
The solution proposed in #77 can create namespace conflicts when triton and triton-nightly have both been pip installed. Therefore, this PR is moving nightly releases to pre-releases in the main triton index.
2021-07-27 12:38:49 -07:00
Philippe Tillet
2f80a98776 [BUILD] Added automatic nightly build releases to pip in CI; removed build-time dependence on LLVM and PyTorch (#77)
Recently there have been more and more reports of installation issues:

    - Installing Triton before upgrading pytorch can create some issues because Triton uses some torch headers

    - llvm-10-dev not available on some platforms; llvm-11-dev not available on, e.g., Ubuntu
    - absence of nightly builds

This PR should fix all these issues. Some CMake tricks are used to download and install LLVM at build time. The Triton Python bindings were modified to remove the dependence on PyTorch ops. A midnight CI job was added to generate binary wheels for all Triton versions and upload them to PyPI's new triton-nightly project.

This PR will also make it very easy to use LLVM forks in the future for whatever needs we have.
2021-07-27 12:38:49 -07:00
Philippe Tillet
183878dce5 [DOCS] Added matrix multiplication tutorial 2021-07-27 12:38:49 -07:00
Philippe Tillet
50e58d73db [DOCS] Improved plots in tutorials 2021-07-27 12:38:49 -07:00
Philippe Tillet
eacbb73968 [PYTHON] CUTLASS wrapper for fair benchmarks (#75)
Before this commit, the benchmarking infrastructure used heterogeneous protocols across libraries (e.g., CUTLASS used a C++ binary that reports mean TFLOPS, while torch and triton used Python calls and reported the 10th, 50th and 90th percentiles). For the sake of uniformity and fair benchmarking practices, this PR adds a Python wrapper for auto-tuned CUTLASS matrix multiplication. Benchmarks have been rewritten to use this wrapper with `triton.testing.do_bench` rather than system calls to the CUTLASS profiler. Importantly, this also ensures that all the matmuls are run on the *same* input data, which should stabilize clocks across providers.
2021-07-27 12:38:49 -07:00
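To illustrate the uniform protocol, a hedged sketch of timing a provider with `triton.testing.do_bench` on fixed inputs and converting to TFLOPS. The CUTLASS call is only indicated as a hypothetical placeholder, and `do_bench`'s return format has varied across Triton versions.

```python
import torch
import triton

M = N = K = 4096
a = torch.randn(M, K, device="cuda", dtype=torch.float16)
b = torch.randn(K, N, device="cuda", dtype=torch.float16)

# Every provider is timed on the *same* input tensors.
ms = triton.testing.do_bench(lambda: torch.matmul(a, b))
# ms = triton.testing.do_bench(lambda: cutlass_matmul(a, b))  # hypothetical CUTLASS wrapper

tflops = 2 * M * N * K / (ms * 1e-3) / 1e12
print(f"{tflops:.1f} TFLOPS")
```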