Commit Graph

1392 Commits

Author SHA1 Message Date
Philippe Tillet
ec51a2e9a5 [DOCS] Added non-tutorial documentation pages 2021-03-19 15:27:19 -04:00
Philippe Tillet
2f8f0042a9 [DOCS] Added matrix multiplication tutorial 2021-03-15 13:57:41 -04:00
Philippe Tillet
d1c0bf2bea [DOCS] Removed pip installation instruction as version on Pip is not
up-to-date
2021-03-11 12:05:34 -05:00
Philippe Tillet
134e246117 [DOCS] Improved plots in tutorials 2021-03-11 00:42:29 -05:00
Philippe Tillet
58207d4647 [PYTHON] CUTLASS wrapper for fair benchmarks (#75)
Before this commit, the benchmarking infrastructure used heterogeneous protocols across libraries (e.g., CUTLASS used a C++ binary that reports mean TFLOPS, while torch and triton used Python calls and report the 10th, 50th and 90th percentiles). For the sake of uniformity and fair benchmarking practices, this PR adds a Python wrapper for auto-tuned CUTLASS matrix multiplication. Benchmarks have been rewritten to use this wrapper with `triton.testing.do_bench` rather than system calls to the CUTLASS profiler. Importantly, this also ensures that all the matmuls run on the *same* input data, which should stabilize clocks across providers.
2021-03-09 16:32:44 -05:00
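The quantile-based reporting mentioned above (10th, 50th and 90th percentiles of repeated runs) can be sketched in plain Python. This is a hypothetical CPU-only stand-in, not the actual `triton.testing.do_bench`, which synchronizes and times CUDA kernels on-device:

```python
import time
import statistics

def bench_quantiles(fn, warmup=5, rep=50):
    """Time fn() repeatedly and report the 10th, 50th and 90th
    percentiles of the measured runtimes, in milliseconds."""
    for _ in range(warmup):
        fn()  # warm caches / JIT before measuring
    times = []
    for _ in range(rep):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1e3)
    # statistics.quantiles with n=10 yields the 9 decile cut points:
    # index 0 is the 10th percentile, index -1 the 90th.
    deciles = statistics.quantiles(times, n=10)
    return deciles[0], statistics.median(times), deciles[-1]
```

The same three-number summary is what lets the plots carry error bars instead of a single mean.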
Philippe Tillet
d25b7bc115 [README] Now linking to the documentation 2021-03-08 20:22:32 -05:00
Philippe Tillet
4781f979b2 [PYTHON] Made bench_blocksparse and bench_cross_entropy compatible
with the new performance report API
2021-03-08 20:19:10 -05:00
Philippe Tillet
061ef3920e [CODEGEN] Fixed bug that caused conditional operator to not always
properly mask load operations

Also includes minor improvement to benchmarking infrastructure
2021-03-08 20:04:26 -05:00
Philippe Tillet
dfa0d45ffe [DOCS] Improved tutorials documentation 2021-03-06 22:04:00 -05:00
Philippe Tillet
b8f2875d28 [PYTHON] Changed benchmarking strategy. Instead of enqueueing many
kernels before synchronizing, the kernels are now enqueued one by one.

This makes it possible to clear the L2 cache before running the
workload, and also to collect variance data for error bars in plots.
2021-03-06 22:02:18 -05:00
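The "clear the cache, then time one launch at a time" pattern described above can be sketched as follows. On a GPU this would zero a device buffer larger than the L2 cache between runs; the helper below is a hypothetical CPU analogue where a large bytearray plays that role:

```python
import time

CACHE_FLUSH_BYTES = 8 * 1024 * 1024  # assumed larger than the cache to evict

def bench_one_by_one(kernel, rep=20):
    """Enqueue (here: call) the kernel one launch per measurement,
    flushing the cache before each run, and keep per-run samples
    so the caller can compute variance for error bars."""
    flush = bytearray(CACHE_FLUSH_BYTES)
    samples = []
    for _ in range(rep):
        flush[:] = bytes(CACHE_FLUSH_BYTES)  # touch a large buffer: evict cached data
        start = time.perf_counter()
        kernel()                             # exactly one launch per timing
        samples.append(time.perf_counter() - start)
    return samples
```

Returning the raw samples, rather than a single aggregate, is what makes the variance data for error bars available downstream.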
Philippe Tillet
e78211c8f5 [DOCS] Re-structured documentation hierarchy 2021-03-06 17:26:49 -05:00
Philippe Tillet
85d1b02e16 [DOCS] Switched tutorials to Python and use Sphinx Gallery 2021-03-06 14:03:01 -05:00
Philippe Tillet
5dd4cfc077 [DOCS] Added .ipynb tutorials in docs 2021-03-06 02:57:41 -05:00
Philippe Tillet
90f953931e [DOCS] Updated and improved docs (#73) 2021-03-05 22:33:50 -05:00
Philippe Tillet
2b9b284026 [PYTHON] Deleted 01-vector-add.py: it is an unnecessary duplicate of
01-vector-add.ipynb
2021-03-04 02:06:57 -05:00
Philippe Tillet
a7437e14c5 [RUNTIME] Added auto-alignment mechanism (#71)
This PR adds an automatic memory alignment mechanism to the Triton runtime. Specifically, the JIT compiler detects the alignment (in bytes) of each pointer argument, as well as the largest power-of-two divisor (between 1 and 16) of each integer argument. Proper .aligned and .multipleof attributes are then added to the Triton-IR on the fly for all auto-tunable kernels. A cache remembers all the kernels compiled for each possible configuration.

This PR also includes substantial cleaning of the Python API. This adds 2-3us overhead, mostly due to accessing integer #defines from the auto-tuned compilation options. The previous solution was slightly faster but hacky and potentially unsafe, so this is preferred for now.
2021-03-04 01:51:11 -05:00
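The divisor detection described above can be illustrated with a small helper. This is a hypothetical sketch of the idea, not the runtime's actual code: for a nonzero integer `n`, the bit trick `n & -n` isolates the lowest set bit, which is exactly the largest power-of-two divisor of `n`, and the runtime caps it at 16:

```python
def pow2_divisor(n, cap=16):
    """Largest power-of-two divisor of n, capped at `cap`.
    Mirrors the detection described above: a JIT could use this
    to emit .multipleof attributes for integer kernel arguments."""
    if n == 0:
        return cap           # 0 is divisible by anything; report the cap
    return min(n & -n, cap)  # n & -n isolates the lowest set bit
```

For example, a stride of 24 yields a divisor of 8, while any multiple of 16 (or more) is capped at 16.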
Philippe Tillet
4e6fe6329f [PYTHON] bugfix in bench_cross_entropy 2021-02-26 02:37:46 -05:00
Philippe Tillet
8e8e65023b [DOCS] Added Python 02-fused-softmax.ipynb tutorial 2021-02-25 14:49:47 -05:00
Philippe Tillet
b0f37346b0 [PYTHON] Bugfix on FP32 blocksparse matmul 2021-02-24 14:44:23 -05:00
Philippe Tillet
db6bf71564 [CODEGEN] Bugfixes with FP32 async copy 2021-02-24 14:44:09 -05:00
Philippe Tillet
bfe92a5d91 [CODEGEN] Now initializing cp.async to zero when predicate is false
WARNING: case for non-zero initialization is still not handled. Will
require manual copy to shared
2021-02-22 17:14:33 -05:00
Philippe Tillet
15f8e8c3b7 [CODEGEN] Major performance improvements on A100 (#70)
Improved handling of asynchronous copy, scheduling and synchronization for A100. Now achieving CUTLASS-like performance on large square dense matrix multiplication tasks
2021-02-21 18:19:39 -05:00
Jared Kaplan
b10e9b89e9 [PYTHON] Add Blocksparse Attention Fwd/Bwd Test (#69)
Also includes small bugfix for block-sparse softmax
2021-02-19 17:46:05 -05:00
Tom B Brown
cf5a1ee79e [PYTHON] Avoid dangerous global variables in kwarg default values (#68) 2021-02-18 17:56:54 -05:00
Jason Ansel
c43535c219 [DOCS] Fix links in README.md (#66) 2021-02-16 16:23:39 -05:00
Philippe Tillet
9c7bf0b75d [CI] Continuous integration now publishes performance plots 2021-02-10 16:49:10 -05:00
Philippe Tillet
f8846d95ff [PYTHON][OPS] Added compiler hints to improve performance of
cross-entropy
2021-02-10 16:47:50 -05:00
Philippe Tillet
f07995d6f8 [CI] Updated build status URL 2021-02-10 14:04:53 -05:00
Philippe Tillet
0125ab1740 [CI] Added benchmarking to CI script (#65) 2021-02-10 14:01:47 -05:00
Philippe Tillet
c847cc6320 [DRIVER] Added options for developers to cache the PTX file so that it
can be manually modified
2021-02-09 00:09:10 -05:00
Philippe Tillet
53fd9631ef [PYTHON] Added automated benchmark script (#63)
This adds a `bench` command to setup.py that can be used to run the benchmark suite; it generates a set of CSV files (and optionally plots):

python setup.py bench
python setup.py bench --with-plots
python setup.py bench --filter=cross_entropy
2021-02-08 15:16:41 -05:00
Philippe Tillet
ae3c6a1022 [PYTHON] Removed .softmax from ops/__init__.py following previous commit 2021-02-07 17:02:11 -05:00
Philippe Tillet
9ed392db9c [PYTHON] Removed support for dense softmax
Interest seems limited now that it is fused in cross_entropy. Will
likely re-add once it's easier to share code between ops
2021-02-07 16:46:47 -05:00
Jared Kaplan
db55ef4fa7 Added a Softmax Xent Op (#53)
Also includes a bugfix in kernel.py to set the device before registering the c++ function object
2021-02-07 12:53:42 -08:00
Philippe Tillet
c8b5cb4ad5 [PYTHON] Made codebase pep8 compliant 2021-02-07 15:06:57 -05:00
Philippe Tillet
14fee16886 [PYTHON] Some cleaning of the PyBind11 wrappers (#62) 2021-02-06 20:10:44 -05:00
Philippe Tillet
d5e1337782 [CI] Fixed unmerged YAML file 2021-02-06 17:17:29 -05:00
Philippe Tillet
80f03f2a76 [CI] Updated trigger rules and status badge link 2021-02-06 17:10:00 -05:00
Philippe Tillet
c7d4085f3d [CI] Improvement of CI pipeline (#60) 2021-02-06 16:59:46 -05:00
Philippe Tillet
fbcf36d40a [GITHUB] Modified run-on property of run-pipeline.yaml 2021-02-06 15:49:58 -05:00
Philippe Tillet
dae6035b5a [CI] Added Github Actions (#59) 2021-02-06 15:47:52 -05:00
Philippe Tillet
4165e574a4 [GENERAL] Added continuous integration 2021-02-05 14:31:58 -05:00
Philippe Tillet
d1e39d7f98 [PYTHON][OPS][BLOCKSPARSE] Now rounding softmax tile sizes to next power
of 2
2021-02-04 16:13:45 -05:00
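Rounding a tile size up to the next power of two, as this commit does for the block-sparse softmax, can be expressed with a standard bit-length trick. This is a generic sketch, not the commit's code:

```python
def next_power_of_2(n):
    """Smallest power of two >= n, for a positive integer n."""
    return 1 << (n - 1).bit_length()
```

Rounding sizes this way dovetails with the companion commit below, which requires tiles to have a power-of-two number of elements.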
Philippe Tillet
4a784ff13a [LANG] Now requiring tiles have power of 2 number of elements 2021-02-04 16:13:33 -05:00
Philippe Tillet
8ab5498d26 [TESTS] test_matmul.py now plots benchmarks 2021-02-04 15:35:53 -05:00
Philippe Tillet
7116df3a32 [PYTHON] Added triton.__version__ string 2021-02-03 17:39:55 -05:00
Philippe Tillet
1726197bb4 Improvements w/ Auto-Tuning and standard benchmarks (#57)
[PYTHON] Bug fixes in the auto-tuning module and improvements to its existing API
2021-02-03 16:37:21 -05:00
Philippe Tillet
8ab68f5424 [PYTHON] Added benchmark code for CUTLASS 2021-01-31 21:23:01 -05:00
Philippe Tillet
4a61e65fc9 [LANG] Added __debug_barrier() call to force insertion of a CUDA
__syncthreads
2021-01-31 20:09:36 -05:00
Philippe Tillet
6e77538087 [RUNTIME] Auto-tuning now works as expected when the values of
autotune_key change
2021-01-31 19:23:51 -05:00