Commit Graph

66 Commits

Author SHA1 Message Date
Philippe Tillet
9be1d5afc2 [GENERAL] Various improvements:
* Sparse einsum in triton.ops.einsum
* Hacky support for fixed-tile-size atomic-add
* Various bugfixes in parser
2020-10-25 12:16:40 -07:00
Philippe Tillet
30ac1359b9 [RUNTIME] Lower-level interface for executing functions 2020-08-12 18:33:35 -04:00
Philippe Tillet
2d6484482f [CODEGEN][ANALYSIS] Fixed issue in layout inference 2020-08-10 11:53:11 -04:00
Philippe Tillet
bd2067606c [EXAMPLES] Improved mat_mul example 2020-08-06 17:29:52 -04:00
Philippe Tillet
f01bdd6207 [EXAMPLES] Added conv2d example 2020-08-06 17:29:52 -04:00
Philippe Tillet
cc7c77246b [EXAMPLES][TUTORIAL] Changed to new triton.kernel API 2020-07-08 13:39:19 -04:00
jack-willturner
5fddc2062e [DOCS] Transposition fix 2020-05-07 14:02:18 +01:00
jack-willturner
be02315168 [DOCS] Matrix copy and transpose 2020-05-05 14:30:49 +01:00
jack-willturner
f5d47536c5 [DOCS] Matmul and vecadd working examples 2020-05-04 16:25:17 +01:00
Philippe Tillet
eaabfb1d8e [PYTHON][EXAMPLES][EINSUM] Updated configs for matmul 2020-04-10 12:42:48 -04:00
Philippe Tillet
28f845eab1 [PYTHON][EXAMPLES][EINSUM] Added stride in CONV2D example 2020-04-10 00:14:31 -04:00
Philippe Tillet
8e276484ea [PYTHON][EXAMPLES][EINSUM] Added group-convolution test/benchmark 2020-04-09 23:37:39 -04:00
Philippe Tillet
840af73c8c [PYTHON][EINSUM] re-established auto-tuning 2020-04-09 11:01:57 -04:00
Philippe Tillet
7c09ff80eb [CORE] Fixed several issues that arose in the development of the
torch-blocksparse package:

* Now using warp shuffle in reductions when possible
* Various bugfixes in layout inference
* Added INFINITY, exponential and select
* Better error messages for unimplemented constructs
2020-03-31 18:57:28 -04:00
Philippe Tillet
b7895c653f [PYTHON][EXAMPLES] Removed BlockSparse examples; see
https://github.com/ptillet/torch-blocksparse.git
2020-03-05 13:32:42 -05:00
Philippe Tillet
1f1e4ee9ec [PYTHON] Merged blocksparse branch:
* Example for blocksparse matrix multiplication
* Simplified Triton kernel API
* Revived auto-tuning in einsum
2020-03-05 13:08:07 -05:00
Philippe Tillet
f2daff85d2 [GENERAL] Improved caching mechanism:
* Now computing hash in libtriton
* Now only compiling a single pytorch hook per function signature
2020-02-24 16:36:50 -05:00
Philippe Tillet
304b003969 [PYTHON][EXAMPLES] Removed obsolete files 2020-02-18 12:26:06 -05:00
Philippe Tillet
d11d2db6ee [PYTHON][EINSUM] Now handling reduction sizes that are not a multiple of
TK
2020-02-17 13:52:58 -05:00
Philippe Tillet
6a4d42c1b8 [PYTHON][CORE] Deprecating Tensorflow support 2020-02-10 04:20:33 -05:00
Philippe Tillet
5a3c30148e [PYTHON][EXAMPLES] Changed shape of einsum examples 2020-02-06 13:57:30 -05:00
Philippe Tillet
3e92901bd5 [TRITON][PYTHON] Cleaned up API 2020-02-05 19:44:19 -05:00
Philippe Tillet
db941161ed [PYTHON][EXAMPLES] Cleaned self-attention benchmarks 2020-01-22 18:09:00 -05:00
Philippe Tillet
ce7a00674a [PYTHON][EXAMPLES] Added self-attention example using triton.ops.einsum 2020-01-21 16:45:04 -05:00
Philippe Tillet
78b98fb7cf [GENERAL] Cleaned polymorphic structure of layouts analysis pass 2020-01-21 11:38:39 -05:00
Philippe Tillet
fbf2a3f56f [CODEGEN][TRANSFORM] some bug-fixes for FP32 einsum 2020-01-20 12:42:53 -05:00
Philippe Tillet
f278d9741a [GENERAL] Merged einsum feature branch. Various feature, performance
improvements and bugfixes:

* Added preliminary support for extended Einstein summation in PyTriton
* Significant performance improvement on FP32 kernels containing matrix
multiplication
* Added re-coalescing pass for FP16 kernels containing matrix
multiplication
* Various bugfixes
2020-01-20 12:42:48 -05:00
Philippe Tillet
50a52df489 [PYTHON][OPS] Convolution: Some cleaning of Triton-C kernel 2019-11-01 11:21:30 -04:00
Philippe Tillet
f4bbbbe5e4 [PYTHON][OPS] Bugfix in conv fprop 2019-11-01 00:43:02 -04:00
Philippe Tillet
739a8d9061 some work on conv 2019-10-31 18:08:27 -04:00
Philippe Tillet
e0fe8d9058 [PYTHON][TENSORFLOW] More work 2019-10-30 18:39:58 -04:00
Philippe Tillet
9b0f1a0807 more stuff 2019-10-30 13:44:31 -04:00
Philippe Tillet
bf3dc63858 [PYTHON] Removed dead code for alloc_empty and register_scalar 2019-10-30 10:37:30 -04:00
Philippe Tillet
f4fcaf84df [PYTHON][EXAMPLES] Added example for batchnorm 2019-10-30 01:49:42 -04:00
Philippe Tillet
76651a065f [PYTHON][EXAMPLES] Better einsum example 2019-10-29 12:56:58 -04:00
Philippe Tillet
448f4433d9 [PYTHON][KERNEL] Enforcing shapes to be known at compile-time for
TensorFlow Graph Execution
2019-10-29 00:48:53 -04:00
Philippe Tillet
e9c787ef05 [PYTHON][EINSUM] Added support for FP16 2019-10-28 14:07:17 -04:00
Philippe Tillet
0ec213547c [PYTHON][KERNEL] Added benchmarking functionalities for kernels 2019-10-28 00:30:04 -04:00
Philippe Tillet
e11557855f [PYTHON] [OPS] Added einsum implementation 2019-10-26 22:14:50 -04:00
Philippe Tillet
943bf41b5c [python] [op] added Triton NT einsum 2019-10-21 23:37:39 -04:00
Philippe Tillet
099918b3c0 [python] [ops] added skeleton for einsum op 2019-10-21 18:58:02 -04:00
Philippe Tillet
4b0c43bb7b [python][example] added test for einsum 2019-10-21 17:13:12 -04:00
Philippe Tillet
96bdae25d5 [python][example] now executing tensorflow and/or pytorch example
automatically
2019-09-05 21:35:23 -04:00
Philippe Tillet
9ab2880fba [python][examples] cleaned up dot example 2019-09-05 12:54:35 -04:00
Philippe Tillet
2d6c8311e8 [python] upgraded pybind11; forcing torch tensors to be contiguous() 2019-09-05 12:30:51 -04:00
Philippe Tillet
44896ee777 [pytorch] clean-up of dynamic framework load 2019-09-05 02:16:27 -04:00
Philippe Tillet
65133cdf33 [python] basic support for pytorch seems to be working 2019-09-05 01:32:21 -04:00
Philippe Tillet
ed0f706005 [python] fixed various issues in pytorch support 2019-09-05 00:19:42 -04:00
Philippe Tillet
945b5d0de9 [python] modularized triton package 2019-09-04 21:55:47 -04:00
Philippe Tillet
f6e9c24fe8 [python] more progress towards tensorflow/pytorch unification 2019-09-04 19:45:50 -04:00