418 Commits

Author SHA1 Message Date
Philippe Tillet
167a2e4b1a [PYTHON] Fixed formatting issue in conv.c 2021-07-27 12:38:49 -07:00
Philippe Tillet
5ba5a77561 [BUILD] Remove compilation warnings 2021-07-27 12:38:49 -07:00
Philippe Tillet
b352bc79e3 [CI] Changed triton-nightly to --pre triton (#78)
The solution proposed in #77 can create namespace conflicts when triton and triton-nightly have both been pip-installed. Therefore, this PR moves nightly releases to pre-releases in the main triton index.
2021-07-27 12:38:49 -07:00
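With this change, nightly builds come from the main index as pre-releases, so installing one should look like the following (command inferred from the --pre flag named in the title):

    pip install --pre triton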
Philippe Tillet
2f80a98776 [BUILD] Added automatic nightly build releases to pip in CI; removed build-time dependence on LLVM and PyTorch (#77)
Recently there have been more and more reports about installation issues:

    - Installing Triton before upgrading PyTorch can create some issues because Triton uses some torch headers

    - llvm-10-dev not available on some platforms; llvm-11-dev not available on others (e.g., Ubuntu)

    - absence of nightly builds

This PR should fix all these issues. Some CMake tricks are used to download and install LLVM at build time. The Triton Python bindings were modified to remove the dependence on PyTorch ops. A midnight CI job was added to generate binary wheels for all Triton versions and upload them to PyPI's new triton-nightly project.

This PR will also make it very easy to use LLVM forks in the future for whatever needs we have.
2021-07-27 12:38:49 -07:00
Philippe Tillet
183878dce5 [DOCS] Added matrix multiplication tutorial 2021-07-27 12:38:49 -07:00
Philippe Tillet
50e58d73db [DOCS] Improved plots in tutorials 2021-07-27 12:38:49 -07:00
Philippe Tillet
eacbb73968 [PYTHON] CUTLASS wrapper for fair benchmarks (#75)
Before this commit, the benchmarking infrastructure used heterogeneous protocols across libraries (e.g., CUTLASS used a C++ binary that reports mean TFLOPS, while torch and triton used Python calls that report the 10th, 50th and 90th quantiles). For the sake of uniformity and fair benchmarking practices, this PR adds a Python wrapper for auto-tuned CUTLASS matrix multiplication. Benchmarks have been rewritten to use this wrapper with `triton.testing.do_bench` rather than system calls to the CUTLASS profiler. Importantly, this also ensures that all the matmuls are run on the *same* input data, which should stabilize clocks across providers.
2021-07-27 12:38:49 -07:00
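For reference, a minimal sketch of timing one provider with `triton.testing.do_bench` (shapes and dtypes are illustrative, and the exact return format varies across Triton versions):

    import torch
    import triton

    # Illustrative inputs; the point above is that every provider is
    # benchmarked on the *same* tensors.
    a = torch.randn(4096, 4096, device='cuda', dtype=torch.float16)
    b = torch.randn(4096, 4096, device='cuda', dtype=torch.float16)

    # Runs the callable repeatedly and reports timings in milliseconds
    # (a scalar or a tuple of quantiles, depending on the version).
    timings = triton.testing.do_bench(lambda: torch.matmul(a, b))
    print(timings)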
Philippe Tillet
58a5c87c53 [PYTHON] Made bench_blocksparse and bench_cross_entropy compatible
with the new performance report API
2021-07-27 12:38:49 -07:00
Philippe Tillet
5b9afaa688 [CODEGEN] Fixed bug that caused conditional operator to not always
properly mask load operations

Also includes minor improvement to benchmarking infrastructure
2021-07-27 12:38:49 -07:00
Philippe Tillet
d1d09566b1 [DOCS] Improved tutorials documentation 2021-07-27 12:38:49 -07:00
Philippe Tillet
85752037eb [PYTHON] Changed benchmarking strategy. Instead of enqueueing many
kernels before synchronizing, the kernels are now enqueued one by one.

This makes it possible to clear the L2 cache before running the workload, and also potentially to collect some variance data for error bars in plots
2021-07-27 12:38:49 -07:00
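A sketch of the idea (not the actual implementation, and the buffer size is an assumption): time each enqueued run individually with CUDA events, zeroing a buffer larger than L2 in between so no stale input data stays cached, and keep per-run samples for error bars:

    import torch

    def bench_one_by_one(fn, n_repeat=100, cache_bytes=256 * 1024 * 1024):
        # Assumed buffer size, larger than L2 on most GPUs; zeroing it
        # between runs evicts any input data still resident in the cache.
        cache = torch.empty(cache_bytes, dtype=torch.int8, device='cuda')
        times = []
        for _ in range(n_repeat):
            cache.zero_()
            start = torch.cuda.Event(enable_timing=True)
            end = torch.cuda.Event(enable_timing=True)
            start.record()
            fn()
            end.record()
            torch.cuda.synchronize()
            times.append(start.elapsed_time(end))  # milliseconds
        # Per-run samples: median for the plot, quantiles for error bars.
        return torch.tensor(times)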
Philippe Tillet
92242ace2c [DOCS] Re-structured documentation hierarchy 2021-07-27 12:38:49 -07:00
Philippe Tillet
ca04da3575 [DOCS] Switched tutorials to Python and use Sphinx Gallery 2021-07-27 12:38:49 -07:00
Philippe Tillet
5172792543 [DOCS] Added .ipynb tutorials in docs 2021-07-27 12:38:49 -07:00
Philippe Tillet
3ecf834a69 [PYTHON] Deleted 01-vector-add.py: it is an unnecessary duplicate of
01-vector-add.ipynb
2021-07-27 12:38:49 -07:00
Philippe Tillet
62835a0979 [RUNTIME] Added auto-alignment mechanism (#71)
This PR adds an automatic memory alignment mechanism to the Triton runtime. Specifically, the JIT compiler detects the alignment (in bytes) of each pointer argument, as well as the largest power-of-two divisor (between 1 and 16) of each integer argument. Proper .aligned and .multipleof attributes are then added to the Triton-IR on the fly for all auto-tunable kernels. A cache remembers all the kernels compiled for each possible configuration.

This PR also includes substantial cleaning of the Python API. This adds 2-3us overhead, mostly due to accessing integer #defines from the auto-tuned compilation options. The previous solution was slightly faster but hacky and potentially unsafe, so this is preferred for now.
2021-07-27 12:38:49 -07:00
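The integer-divisor detection described above amounts to a largest power-of-two divisor computation; a small illustrative helper (a sketch, not the runtime's actual code, which runs at kernel-launch time):

    def pow2_divisor(n, max_align=16):
        # n & -n isolates the lowest set bit of n, which is exactly the
        # largest power of two dividing n; cap the result at max_align.
        if n == 0:
            return max_align  # zero is divisible by any power of two
        return min(n & -n, max_align)

    assert pow2_divisor(24) == 8     # 24 = 8 * 3
    assert pow2_divisor(10) == 2
    assert pow2_divisor(4096) == 16  # capped at 16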
Philippe Tillet
ff62f7fffc [PYTHON] bugfix in bench_cross_entropy 2021-07-27 12:38:49 -07:00
Philippe Tillet
50ff1aea86 [DOCS] Added Python 02-fused-softmax.ipynb tutorial 2021-07-27 12:38:49 -07:00
Philippe Tillet
f64b779b0d [PYTHON] Bugfix on FP32 blocksparse matmul 2021-07-27 12:38:49 -07:00
Philippe Tillet
567a1a3d17 [CODEGEN] Bugfixes with FP32 async copy 2021-07-27 12:38:49 -07:00
Philippe Tillet
5b83259592 [CODEGEN] Major performance improvements on A100 (#70)
Improved handling of asynchronous copy, scheduling and synchronization for A100. Now achieving CUTLASS-like performance on large square dense matrix multiplication tasks
2021-07-27 12:38:49 -07:00
Jared Kaplan
045ab5d62a [PYTHON] Add Blocksparse Attention Fwd/Bwd Test (#69)
Also includes a small bugfix for block-sparse softmax
2021-07-27 12:38:49 -07:00
Tom B Brown
7aa4d080b3 [PYTHON] Avoid dangerous global variables in kwarg default values (#68) 2021-07-27 12:38:49 -07:00
Philippe Tillet
d190285d89 [PYTHON][OPS] Added compiler hints to improve performance of
cross-entropy
2021-07-27 12:38:49 -07:00
Philippe Tillet
ce8aa2a41a [CI] Added benchmarking to CI script (#65) 2021-07-27 12:38:49 -07:00
Philippe Tillet
5e3c7f5a60 [PYTHON] Added automated benchmark script (#63)
This adds a bench command to setup.py that can be used to run the benchmark suite and generate a bunch of CSV files (and optionally plots):

python setup.py bench
python setup.py bench --with-plots
python setup.py bench --filter=cross_entropy
2021-07-27 12:38:48 -07:00
Philippe Tillet
66c94f21d7 [PYTHON] Removed .softmax from ops/__init__.py following previous commit 2021-07-27 12:38:48 -07:00
Philippe Tillet
b0647cfd52 [PYTHON] Removed support for dense softmax
Interest seems limited now that it is fused in cross_entropy. Will
likely re-add once it's easier to share code between ops
2021-07-27 12:38:48 -07:00
Jared Kaplan
682ac4c60e Added a Softmax Xent Op (#53)
Also includes a bugfix in kernel.py to set the device before registering the C++ function object
2021-07-27 12:38:48 -07:00
Philippe Tillet
dffd66bc83 [PYTHON] Made codebase pep8 compliant 2021-07-27 12:38:48 -07:00
Philippe Tillet
2a02fabdac [PYTHON] Some cleaning of the PyBind11 wrappers (#62) 2021-07-27 12:38:48 -07:00
Philippe Tillet
80e8a2f1f2 [PYTHON][OPS][BLOCKSPARSE] Now rounding softmax tile sizes to next power
of 2
2021-07-27 12:38:48 -07:00
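Rounding to the next power of two is the standard bit-length idiom; a sketch (the commit's actual helper may differ):

    def next_power_of_2(n):
        # Smallest power of two >= n, for n >= 1.
        return 1 << (n - 1).bit_length()

    assert next_power_of_2(1) == 1
    assert next_power_of_2(1000) == 1024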
Philippe Tillet
cc84a476a3 [TESTS] test_matmul.py now plots benchmarks 2021-07-27 12:38:48 -07:00
Philippe Tillet
fedbe6f439 [PYTHON] Added triton.__version__ string 2021-07-27 12:38:48 -07:00
Philippe Tillet
6fb4800f57 Improvements w/ Auto-Tuning and standard benchmarks (#57)
[PYTHON] Bug fixes in the auto-tuning module and improvements to its existing API
2021-07-27 12:38:48 -07:00
Philippe Tillet
ad005d49ac [PYTHON] Added benchmark code for CUTLASS 2021-07-27 12:38:48 -07:00
Philippe Tillet
3fde4b8f5b [RUNTIME] Auto-tuning now works as expected when the values of
autotune_key change
2021-07-27 12:38:48 -07:00
Philippe Tillet
52af8cda34 [PYTHON] Fixed issue with IS_TK_DIV_K 2021-07-27 12:38:48 -07:00
Philippe Tillet
7cf358a352 [TUTORIALS] Fixed typo in CMakeLists.txt 2021-07-27 12:38:48 -07:00
Philippe Tillet
9b31244897 [PYTHON] Added benchmarking code 2021-07-27 12:38:48 -07:00
Philippe Tillet
7ba242fcce [PYTHON][OPS] Added block-sparse softmax 2021-07-27 12:38:48 -07:00
Philippe Tillet
f81da73b6a [PYTHON] Added triton.read, a utility to read a single Triton kernel from a provided file
2021-07-27 12:38:48 -07:00
Philippe Tillet
269ebc12e5 [PYTHON][TESTS][DOC] Various improvements of the API and code quality:
* Simplified `triton.kernel` API to achieve lower latency:
  > .data_ptr() must now be passed as a kernel argument. No more implicit conversion from torch.tensor
  > compilation options are now constant attributes, i.e., opt.d('VAR') becomes opt.VAR
  > torch.device must now be passed explicitly to triton.kernel (no longer inferred from torch.tensor arguments)
* C++ tests moved to `python/tests/`
* C++ tutorial created in `tutorials/`
* Python tutorial created in `python/tutorials/`
* Version changed to 1.0alpha
* No longer copying C++ headers into the Python package
* Added `python/triton/ops/` package for pre-written Triton ops
2021-07-27 12:38:48 -07:00
Philippe Tillet
083bbd1e8d [GENERAL] Merged v1.0alpha into master. Added features:
- A100 support via mma.16816
- Thread swizzling for conflict-free shared memory accesses without padding
- Complete overhaul of the LLVM code generation in codegen/selection/generator.cc to remove overengineering
- Added debugging capabilities in the Python binding
- Compilation error for kernels that spill
2021-07-27 12:38:48 -07:00
Philippe Tillet
c0bc7ed8b0 [PYTHON] Added TRITON_DEBUG_MODE, which reallocates input tensors outside of the PyTorch memory pool to spot out-of-bounds accesses more easily 2021-07-27 12:38:48 -07:00
Philippe Tillet
547a99a5d4 [VERSION] 0.2.3 -> 0.3.0 2021-07-27 12:38:48 -07:00
Philippe Tillet
8ab62803db [PYTHON] Context switching logic moved to PyTorch 2021-07-27 12:38:48 -07:00
Philippe Tillet
4f08d87fed [DRIVER] Simplified Driver API by removing most of the reliance on driver::context 2021-07-27 12:38:48 -07:00
Philippe Tillet
073fddffc1 [PYTHON] Compiling Triton in Release mode now... 2021-07-27 12:38:48 -07:00
Philippe Tillet
a77c925dfd [DRIVER] Improved performance of Host driver code 2021-07-27 12:38:48 -07:00