Commit Graph

459 Commits

Author SHA1 Message Date
Philippe Tillet
d1d09566b1 [DOCS] Improved tutorials documentation 2021-07-27 12:38:49 -07:00
Philippe Tillet
85752037eb [PYTHON] Changed benchmarking strategy. Instead of enqueueing many
kernels before synchronizing, the kernels are now enqueued one by one.

This makes it possible to clear the L2 cache before running the
workload, and, potentially, to collect variance data for error bars
in plots.
2021-07-27 12:38:49 -07:00
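A minimal sketch of the strategy described above, assuming PyTorch and a CUDA device; run_kernel and the buffer size are placeholders rather than Triton's actual benchmarking API:

import torch

def bench(run_kernel, n_repeat=100):
    # one large buffer; zeroing it evicts the L2 cache between runs
    cache = torch.empty(int(256e6), dtype=torch.int8, device='cuda')
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    times = []
    for _ in range(n_repeat):
        cache.zero_()              # clear L2 before each run
        start.record()
        run_kernel()               # enqueue a single kernel ...
        end.record()
        torch.cuda.synchronize()   # ... and synchronize immediately
        times.append(start.elapsed_time(end))  # milliseconds
    times = torch.tensor(times)
    return times.mean().item(), times.std().item()  # mean + error bar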
Philippe Tillet
92242ace2c [DOCS] Re-structured documentation hierarchy 2021-07-27 12:38:49 -07:00
Philippe Tillet
ca04da3575 [DOCS] Switched tutorials to Python and use Sphinx Gallery 2021-07-27 12:38:49 -07:00
Philippe Tillet
5172792543 [DOCS] Added .ipynb tutorials in docs 2021-07-27 12:38:49 -07:00
Philippe Tillet
3ecf834a69 [PYTHON] Deleted 01-vector-add.py: it is an unnecessary duplicate of
01-vector-add.ipynb
2021-07-27 12:38:49 -07:00
Philippe Tillet
62835a0979 [RUNTIME] Added auto-alignment mechanism (#71)
This PR adds an automatic memory alignment mechanism to the Triton runtime. Specifically, the JIT compiler detects the alignment (in bytes) of each pointer argument, as well as the largest power-of-two divisor (between 1 and 16) of each integer argument. Proper .aligned and .multipleof attributes are then added to the Triton-IR on the fly for all auto-tunable kernels, and a cache remembers the kernels compiled for each possible configuration.

This PR also includes a substantial cleanup of the Python API. It adds 2-3 µs of overhead, mostly due to accessing integer #defines from the auto-tuned compilation options. The previous solution was slightly faster but hacky and potentially unsafe, so this approach is preferred for now.
2021-07-27 12:38:49 -07:00
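The divisor detection amounts to a small computation; a sketch inferred from the message above (helper names are illustrative, not the runtime's internals):

def largest_pow2_divisor(n, cap=16):
    # largest power of two between 1 and cap that divides n; applied to
    # integer arguments, and to pointer addresses to obtain byte alignment
    d = 1
    while d < cap and n % (2 * d) == 0:
        d *= 2
    return d

# e.g. a pointer at address 0x7f0000001230 is 16-byte aligned:
# largest_pow2_divisor(0x7f0000001230) == 16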
Philippe Tillet
ff62f7fffc [PYTHON] bugfix in bench_cross_entropy 2021-07-27 12:38:49 -07:00
Philippe Tillet
50ff1aea86 [DOCS] Added Python 02-fused-softmax.ipynb tutorial 2021-07-27 12:38:49 -07:00
Philippe Tillet
f64b779b0d [PYTHON] Bugfix on FP32 blocksparse matmul 2021-07-27 12:38:49 -07:00
Philippe Tillet
567a1a3d17 [CODEGEN] Bugfixes with FP32 async copy 2021-07-27 12:38:49 -07:00
Philippe Tillet
5b83259592 [CODEGEN] Major performance improvements on A100 (#70)
Improved handling of asynchronous copy, scheduling and synchronization for A100. Now achieving CUTLASS-like performance on large square dense matrix multiplication tasks
2021-07-27 12:38:49 -07:00
Jared Kaplan
045ab5d62a [PYTHON] Add Blocksparse Attention Fwd/Bwd Test (#69)
Also includes a small bugfix for block-sparse softmax
2021-07-27 12:38:49 -07:00
Tom B Brown
7aa4d080b3 [PYTHON] Avoid dangerous global variables in kwarg default values (#68) 2021-07-27 12:38:49 -07:00
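The hazard fixed here is presumably the usual Python one: default values are evaluated once, at definition time, so global state captured in a kwarg default is frozen at import. An illustrative example, not the actual Triton code:

import torch

def launch(x, device=torch.cuda.current_device()):  # dangerous: frozen at import
    ...

def launch_fixed(x, device=None):
    # safe: resolve the global state at call time instead
    device = torch.cuda.current_device() if device is None else device
    ...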
Philippe Tillet
d190285d89 [PYTHON][OPS] Added compiler hints to improve performance of
cross-entropy
2021-07-27 12:38:49 -07:00
Philippe Tillet
ce8aa2a41a [CI] Added benchmarking to CI script (#65) 2021-07-27 12:38:49 -07:00
Philippe Tillet
5e3c7f5a60 [PYTHON] Added automated benchmark script (#63)
This adds a `bench` command to setup.py that runs the benchmark suite and generates a set of CSV files (and, optionally, plots):

python setup.py bench
python setup.py bench --with-plots
python setup.py bench --filter=cross_entropy
2021-07-27 12:38:48 -07:00
Philippe Tillet
66c94f21d7 [PYTHON] Removed .softmax from ops/__init__.py following the previous commit 2021-07-27 12:38:48 -07:00
Philippe Tillet
b0647cfd52 [PYTHON] Removed support for dense softmax
Interest seems limited now that it is fused in cross_entropy. Will
likely re-add once it's easier to share code between ops
2021-07-27 12:38:48 -07:00
Jared Kaplan
682ac4c60e Added a Softmax Xent Op (#53)
Also includes a bugfix in kernel.py to set the device before registering the C++ function object
2021-07-27 12:38:48 -07:00
Philippe Tillet
dffd66bc83 [PYTHON] Made codebase pep8 compliant 2021-07-27 12:38:48 -07:00
Philippe Tillet
2a02fabdac [PYTHON] Some cleaning of the PyBind11 wrappers (#62) 2021-07-27 12:38:48 -07:00
Philippe Tillet
80e8a2f1f2 [PYTHON][OPS][BLOCKSPARSE] Now rounding softmax tile sizes to the next power
of 2
2021-07-27 12:38:48 -07:00
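Rounding up to a power of two is the usual bit trick; a small sketch (the helper name is illustrative):

def next_power_of_2(n):
    # smallest power of two >= n, for 32-bit n >= 1; e.g. 1000 -> 1024
    n -= 1
    n |= n >> 1
    n |= n >> 2
    n |= n >> 4
    n |= n >> 8
    n |= n >> 16
    return n + 1

Power-of-two tile sizes keep the generated code uniform across nearby input shapes; out-of-range lanes are then presumably masked off, as is usual for such kernels.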
Philippe Tillet
cc84a476a3 [TESTS] test_matmul.py now plots benchmarks 2021-07-27 12:38:48 -07:00
Philippe Tillet
fedbe6f439 [PYTHON] Added triton.__version__ string 2021-07-27 12:38:48 -07:00
Philippe Tillet
6fb4800f57 Improvements with auto-tuning and standard benchmarks (#57)
[PYTHON] Bug fixes in the auto-tuning module and improvements to its existing API
2021-07-27 12:38:48 -07:00
Philippe Tillet
ad005d49ac [PYTHON] Added benchmark code for CUTLASS 2021-07-27 12:38:48 -07:00
Philippe Tillet
3fde4b8f5b [RUNTIME] Auto-tuning now works as expected when the values of
autotune_key change
2021-07-27 12:38:48 -07:00
Philippe Tillet
52af8cda34 [PYTHON] Fixed issue with IS_TK_DIV_K 2021-07-27 12:38:48 -07:00
Philippe Tillet
7cf358a352 [TUTORIALS] Fixed typo in CMakeLists.txt 2021-07-27 12:38:48 -07:00
Philippe Tillet
9b31244897 [PYTHON] Added benchmarking code 2021-07-27 12:38:48 -07:00
Philippe Tillet
7ba242fcce [PYTHON][OPS] Added block-sparse softmax 2021-07-27 12:38:48 -07:00
Philippe Tillet
f81da73b6a [PYTHON] Added triton.read, a utility to read a single Triton kernel
from a provided file
2021-07-27 12:38:48 -07:00
Philippe Tillet
269ebc12e5 [PYTHON][TESTS][DOC] Various improvements to the API and code quality:
* Simplified the `triton.kernel` API to achieve lower latency (see the sketch after this entry):
  > .data_ptr() must now be passed as a kernel argument; there is no more
implicit conversion from torch.tensor
  > compilation options are now constant attributes, i.e., opt.d('VAR')
becomes opt.VAR
  > torch.device must now be passed explicitly to triton.kernel (no
longer inferred from torch.tensor arguments)
* C++ tests moved to `python/tests/`
* C++ tutorial created in `tutorials/`
* Python tutorial created in `python/tutorials/`
* Version changed to 1.0alpha
* C++ headers are no longer copied into the Python package
* Added a python/triton/ops/ package for pre-written Triton ops
2021-07-27 12:38:48 -07:00
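An illustrative before/after of the calling convention changes listed above; the exact signatures are inferred from the bullet points, not checked against this revision:

import torch
import triton

x = torch.randn(1024, device='cuda')

# before: tensors passed directly, device inferred, options via opt.d('VAR')
#   kernel(x, grid=grid)
# after: explicit device at construction, raw pointers at call time,
# and options as constant attributes (opt.VAR)
#   kernel = triton.kernel(src, device=torch.device('cuda'))
#   kernel(x.data_ptr(), grid=grid)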
Philippe Tillet
083bbd1e8d [GENERAL] Merged v1.0alpha into master. Added features are:
- A100 support via mma.16816
- Thread swizzling for conflict-free shared memory accesses without
padding (see the sketch after this entry)
- Complete overhaul of the LLVM code generation in
codegen/selection/generator.cc to remove overengineering
- Added debugging capabilities in the Python binding
- Compilation error for kernels that spill
2021-07-27 12:38:48 -07:00
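"Thread swizzling" here refers to permuting shared-memory columns so that the threads of a warp hit distinct banks; an XOR-based toy sketch of the general pattern (an assumption about the scheme, not this codebase's exact formula):

def swizzled_column(row, col, width=8):
    # XOR the column with the low bits of the row: accesses that would
    # collide on one bank land in different columns, with no padding
    return col ^ (row % width)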
Philippe Tillet
c0bc7ed8b0 [PYTHON] Added TRITON_DEBUG_MODE, which reallocates input tensors outside of the PyTorch memory pool to spot out-of-bounds accesses more easily 2021-07-27 12:38:48 -07:00
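A rough sketch of the idea, assuming the CUDA runtime is reachable via ctypes on Linux; everything here is illustrative (real code must also free the buffer and manage lifetimes), not the actual implementation:

import ctypes
import torch

_cudart = ctypes.CDLL('libcudart.so')

def debug_ptr(x):
    # allocate a fresh buffer with cudaMalloc, outside PyTorch's caching
    # pool, so an out-of-bounds access faults instead of hitting pool slack
    nbytes = x.numel() * x.element_size()
    ptr = ctypes.c_void_p()
    assert _cudart.cudaMalloc(ctypes.byref(ptr), ctypes.c_size_t(nbytes)) == 0
    kind = 3  # cudaMemcpyDeviceToDevice
    assert _cudart.cudaMemcpy(ptr, ctypes.c_void_p(x.data_ptr()),
                              ctypes.c_size_t(nbytes), kind) == 0
    return ptr.value  # pass this in place of x.data_ptr()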
Philippe Tillet
547a99a5d4 [VERSION] 0.2.3 -> 0.3.0 2021-07-27 12:38:48 -07:00
Philippe Tillet
8ab62803db [PYTHON] Context switching logic moved to PyTorch 2021-07-27 12:38:48 -07:00
Philippe Tillet
4f08d87fed [DRIVER] Simplified Driver API by substantially removing reliance on driver::context 2021-07-27 12:38:48 -07:00
Philippe Tillet
073fddffc1 [PYTHON] Compiling Triton in Release mode now... 2021-07-27 12:38:48 -07:00
Philippe Tillet
a77c925dfd [DRIVER] Improved performance of Host driver code 2021-07-27 12:38:48 -07:00
Philippe Tillet
8f8d36c7a4 [GENERAL] Various bugfixes 2021-07-27 12:38:48 -07:00
Philippe Tillet
50587bbf4b [GENERAL] LLVM-9 -> LLVM-10 2021-07-27 12:38:48 -07:00
Philippe Tillet
8f3ee53f24 [PYTHON] Added option to show PTX source code in Python 2021-07-27 12:38:48 -07:00
Philippe Tillet
cf80ccc798 [PYTHON] Fixed torch ABI issue 2021-07-27 12:38:48 -07:00
Philippe Tillet
06abc8cb40 [GENERAL] Fix compatibility issue with older Torch versions 2021-07-27 12:38:48 -07:00
Philippe Tillet
f152150e7d [LANG] Added log intrinsic 2021-07-27 12:38:48 -07:00
Philippe Tillet
02a6e81b88 [PYTHON] Cleaning C++ bindings 2021-07-27 12:38:48 -07:00
Philippe Tillet
049ab989b5 [GENERAL] Various improvements:
* Sparse einsum in triton.ops.einsum
* Hacky support for fixed-tile-size atomic-add
* Various bugfixes in the parser
2021-07-27 12:38:48 -07:00
Philippe Tillet
840308ab5d [CODEGEN] More work on the CPU backend 2021-07-27 12:38:48 -07:00