Yan Chunwei
3a84278530
[Triton-MLIR][BACKEND] Refine dot conversion ( #710 )
...
This PR does the following:
1. Refines the dot conversion
2. Makes some other tiny code refinements
2022-09-27 14:38:34 +08:00
goostavz
61b61755e5
[Triton-MLIR][Backend] Support layout conversion between mmaLayout and blockedLayout ( #693 )
2022-09-27 03:58:47 +00:00
Keren Zhou
ecd1bc33df
[Triton-MLIR] Keren/code gen for extract slice and alloc tensor ( #692 )
...
Co-authored-by: gzhu <goostavz@outlook.com>
2022-09-23 19:38:14 +00:00
Yan Chunwei
922155f1d2
[BACKEND] add dot conversion (mma version=2) ( #672 )
...
LLVM Conversion for Dot op.
Due to the lack of `convert_layout`, the dot conversion currently supports
only the following combination of operands:
- `$a` in shared layout
- `$b` in shared layout
- `$c` in MMA layout (but only splat-like, leaving the generic cases to
`convert_layout`)
This PR focuses on the `mma.16816`-related logic, leaving the other
cases to a following PR.
Co-authored-by: Philippe Tillet <phil@openai.com>
2022-09-22 20:43:54 -07:00
Shintaro Iwasaki
940ef3f0ac
[BACKEND] llvm::dyn_cast -> llvm::dyn_cast_or_null ( #689 )
2022-09-22 03:26:40 +00:00
goostavz
15bfd0cb79
[BACKEND] Support of ConvertLayoutOp from blocked to blocked and SliceLayout with blocked parent ( #658 )
2022-09-17 14:58:42 -07:00
Shintaro Iwasaki
13669b46a6
[DOCS] Correct spelling ( #665 )
...
This PR corrects spelling for Triton-MLIR, as #664 did. It should not break anything.
2022-09-16 15:07:34 -07:00
Shintaro Iwasaki
43be75ad42
[FRONTEND] Add scalar type support for some ops ( #661 )
...
This PR adds basic support for scalar-type inputs to some ops (cast and pointer arithmetic) for Triton-MLIR. It also renames `getelementptr` -> `addptr`.
2022-09-15 16:12:52 -07:00
Da Yan
2e08450c80
[OPTIMIZER] Better pipeline tests ( #660 )
2022-09-14 23:26:40 -07:00
Keren Zhou
16aed94ff5
[Analysis/Allocation] Allocation passes now assume that slices always alias ( #108 )
...
The code in this branch assumes the `src` operand of
`insert_slice_async` always aliases the result. This doesn't hold in
general; it is just a workaround to make the pipeline pass work.
The complete analysis is in progress on another
[branch](https://github.com/openai/triton-mlir/tree/keren/analyze-slice).
2022-09-09 12:03:41 -07:00
Philippe Tillet
9bd5a3dcd2
[OPTIMIZER] Pipeline async buffer ( #110 )
2022-09-09 11:01:14 -07:00
Yan Chunwei
a9464f4993
[Backend] Vectorize Load/Store Ops ( #86 )
...
This PR does the following:
- Refactors the Load and Store op codegen so both are rewritten with the same logic and share most of the code
- Supports vectorized load/store
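The vector-width choice behind vectorized load/store can be sketched as follows. The helper below is an illustration of the general idea (a power-of-two access width bounded by pointer contiguity, alignment, and the widest memory transaction), not the PR's actual code:

```python
def vector_width(contiguity: int, align_elems: int, elem_bits: int, max_bits: int = 128) -> int:
    """Pick the number of elements per vectorized memory access.

    contiguity: how many consecutive elements the per-thread pointers cover
    align_elems: alignment of the base pointer, in elements
    elem_bits: element size in bits
    max_bits: widest memory transaction (128 bits on NVIDIA GPUs)
    """
    width = 1
    # Grow by powers of two while staying contiguous, aligned,
    # and within the maximum transaction size.
    while (width * 2 <= contiguity
           and width * 2 <= align_elems
           and width * 2 * elem_bits <= max_bits):
        width *= 2
    return width

# 8 contiguous fp32 elements with 16-element alignment -> 128-bit (4-wide) access
print(vector_width(8, 16, 32))  # -> 4
```

With this rule, fp16 data can reach 8-wide accesses while fp32 tops out at 4-wide, matching the 128-bit transaction limit.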
2022-09-06 12:28:09 -07:00
Da Yan
35e346bcff
[OPTIMIZER] Better pipeline pass ( #100 )
...
* Use `insert_slice_async` instead of `CopyAsync`
* Move async.wait to loop header
Co-authored-by: Jokeren <kerenzhou@openai.com>
2022-09-06 08:31:13 -07:00
Philippe Tillet
a0bab9748e
[OPTIMIZER] Coalesce pass no longer takes a num-warps argument ( #99 )
...
Improved design to avoid an inconsistent `num-warps` value between the pass and the parent module of the operation it processes.
2022-09-05 18:09:02 -07:00
Philippe Tillet
d0b4c67b05
[OPTIMIZER] Improved layout conversion simplification algorithm ( #97 )
...
This PR both simplifies the layout conversion simplification algorithm and improves it to work with vectorized element-wise ops. The conversion optimizer still has a lot of room for improvement, and other PRs will address its limitations (ideally via some sort of explicit cost model).
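The core of such a simplification can be sketched as two rewrite rules; the representation below is a hypothetical stand-in, not the pass's actual implementation. A conversion into the layout a value already has is a no-op, and back-to-back conversions collapse into one:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Value:
    layout: str
    producer: Optional["Convert"] = None  # the convert_layout that made this value, if any


@dataclass
class Convert:
    src: Value
    dst_layout: str


def simplify(conv: Convert):
    # Rule 1: convert(x, layout(x)) -> x
    if conv.src.layout == conv.dst_layout:
        return conv.src
    # Rule 2: convert(convert(x, L1), L2) -> convert(x, L2)
    if conv.src.producer is not None:
        return simplify(Convert(conv.src.producer.src, conv.dst_layout))
    return conv


x = Value("#blocked")
y = Value("#mma", producer=Convert(x, "#mma"))
r = simplify(Convert(y, "#blocked"))
assert r is x  # a round-trip blocked -> mma -> blocked conversion folds away entirely
```

Vectorized element-wise ops complicate this picture because the profitable layout depends on the access width, which is one reason an explicit cost model is attractive.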
2022-09-02 16:52:44 -07:00
Shintaro Iwasaki
3c635449e5
[Triton] Support math and libdevice ops ( #91 )
...
This PR adds basic math ops via `MathDialect` and `libdevice` ops via `extern_elementwise`. This is needed to compile some tutorial code (e.g., `softmax`). This PR implements only the frontend-to-TritonGPU-MLIR interface:
- Currently stops at TritonGPU; it cannot be lowered to PTX yet.
- No special optimizations (e.g., constant folding) are applied.
- LLVM 14.x does not define folders for many math ops, but 15.x seems to increase the coverage: https://github.com/llvm/llvm-project/blob/llvmorg-15.0.0-rc3/mlir/include/mlir/Dialect/Math/IR/MathOps.td
- No constant folding etc. for `libdevice` ops.
```py
import triton
import triton.language as tl
import sys


@triton.jit
def add_kernel(
    x_ptr,
    y_ptr,
    BLOCK_SIZE: tl.constexpr,
):
    offsets = tl.arange(0, BLOCK_SIZE)
    x = tl.load(x_ptr + offsets)
    x = tl.sin(x)
    output = tl.libdevice.sin(x)
    output = tl.libdevice.fdiv_rn(output, output)
    output = tl.libdevice.fmaf_rd(output, output, output)
    tl.store(y_ptr + offsets, output)


if __name__ == "__main__" and len(sys.argv) >= 2:
    signature = "*fp32,*fp32"
    constants = {'BLOCK_SIZE': 1024}
    output = triton.compile(add_kernel, signature, device=0, constants=constants, output="ttgir")
    print(output)
```
->
```llvm
#blocked = #triton_gpu.blocked<{sizePerThread = [1], threadsPerWarp = [32], warpsPerCTA = [4], order = [0]}>
module attributes {"triton_gpu.num-warps" = 4 : i32} {
  func @add_kernel__Pfp32_Pfp32__2c1024(%arg0: !tt.ptr<f32>, %arg1: !tt.ptr<f32>) {
    %0 = tt.make_range {end = 1024 : i32, start = 0 : i32} : tensor<1024xi32, #blocked>
    %1 = tt.splat %arg0 : (!tt.ptr<f32>) -> tensor<1024x!tt.ptr<f32>, #blocked>
    %2 = tt.getelementptr %1, %0 : tensor<1024x!tt.ptr<f32>, #blocked>
    %3 = tt.load %2 {cache = 1 : i32, evict = 1 : i32, isVolatile = false} : tensor<1024xf32, #blocked>
    %4 = math.sin %3 : tensor<1024xf32, #blocked>
    %5 = tt.ext_elemwise %4 {libname = "libdevice", libpath = "/home/siwasaki/triton/python/triton/language/libdevice.10.bc", symbol = "__nv_sinf"} : tensor<1024xf32, #blocked> -> tensor<1024xf32, #blocked>
    %6 = tt.ext_elemwise %5, %5 {libname = "libdevice", libpath = "/home/siwasaki/triton/python/triton/language/libdevice.10.bc", symbol = "__nv_fdiv_rn"} : tensor<1024xf32, #blocked>, tensor<1024xf32, #blocked> -> tensor<1024xf32, #blocked>
    %7 = tt.ext_elemwise %6, %6, %6 {libname = "libdevice", libpath = "/home/siwasaki/triton/python/triton/language/libdevice.10.bc", symbol = "__nv_fmaf_rd"} : tensor<1024xf32, #blocked>, tensor<1024xf32, #blocked>, tensor<1024xf32, #blocked> -> tensor<1024xf32, #blocked>
    %8 = tt.splat %arg1 : (!tt.ptr<f32>) -> tensor<1024x!tt.ptr<f32>, #blocked>
    %9 = tt.getelementptr %8, %0 : tensor<1024x!tt.ptr<f32>, #blocked>
    tt.store %9, %7 : tensor<1024xf32, #blocked>
    return
  }
}
```
2022-09-01 16:34:27 -07:00
Keren Zhou
328b87aec6
Keren/tensor slice insert alloc ( #94 )
...
This branch defines three new triton_gpu operations to partially solve #87 . Below is an overview:
```
%tensor = triton_gpu.alloc_tensor : tensor<2x16x16xf16, #A>
%b = triton_gpu.insert_slice_async %a_ptr, %tensor, %offset {axis = 0 : i32, cache = 1 : i32, evict = 1 : i32, isVolatile = false} : tensor<16x16x!tt.ptr<f16>, #AL> -> tensor<2x16x16xf16, #A>
%c = triton_gpu.extract_slice %b, %offset {axis = 0 : i32} : tensor<2x16x16xf16, #A> -> tensor<16x16xf16, #A>
```
We plan to fully replace `copy_async` with `insert_slice_async`. **This hasn't been done yet.**
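A hedged Python sketch of how these ops enable double buffering in a pipelined loop (buffer-index bookkeeping only; the function and names are illustrative, not from the branch):

```python
NUM_BUFFERS = 2  # matches the leading dimension of the 2x16x16 alloc_tensor above


def pipeline_schedule(num_iters: int, num_buffers: int = NUM_BUFFERS):
    """Yield (insert_index, extract_index) pairs per loop iteration:
    insert_slice_async prefetches into one slice of the allocated tensor
    while extract_slice reads the slice filled on the previous iteration."""
    for i in range(num_iters):
        insert_idx = (i + 1) % num_buffers   # slice being filled for the next iteration
        extract_idx = i % num_buffers        # slice consumed by this iteration
        yield insert_idx, extract_idx


print(list(pipeline_schedule(4)))  # -> [(1, 0), (0, 1), (1, 0), (0, 1)]
```

The two slice indices alternate, so the async copy into one slice overlaps with the compute that consumes the other.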
2022-09-01 12:37:17 -07:00
Shintaro Iwasaki
84aa7d025a
[TritonIR] simplify Load/StoreOps when mask is true/false ( #79 )
...
* [TritonIR] fix Load/Store/CopyAsyncOp's parsers
* [TritonIR] simplify Load/StoreOps when mask is true/false
* [TEST] adds tests to check load/store simplification
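The constant-mask simplification can be sketched as follows; the tuple representation is a hypothetical stand-in for the IR, not the actual folder code:

```python
def simplify_masked_load(ptr, mask, other):
    """Fold a masked load when the mask is a known constant:
    mask == True  -> plain load(ptr)   (mask and `other` drop out)
    mask == False -> other             (the load never happens)
    Otherwise the masked load is kept as-is."""
    if mask is True:
        return ("load", ptr)
    if mask is False:
        return other
    return ("load", ptr, mask, other)


assert simplify_masked_load("%p", True, 0.0) == ("load", "%p")
assert simplify_masked_load("%p", False, 0.0) == 0.0
```

The same reasoning applies to stores: a store with an all-false mask can be deleted outright.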
2022-08-24 12:55:49 -07:00
Shintaro Iwasaki
0ebef11c77
[TritonIR] Make mask operand optional ( #74 )
2022-08-22 22:00:17 -07:00
Da Yan
92ef552a54
[OPTIMIZER] Fix Num in AsyncWaitOp generated by the pipeline pass ( #72 )
2022-08-22 15:58:10 -07:00
Shintaro Iwasaki
9aa00249a6
[TritonIR] make other optional and remove isOtherUnspecified ( #67 )
...
[Triton] make other optional and remove isOtherUnspecified
2022-08-18 18:19:55 -07:00
Philippe Tillet
192be76b3c
[OPTIMIZER] Rewrite patterns for layout conversions ( #64 )
2022-08-18 12:49:37 -07:00
Da Yan
8776ad1a0e
[OPTIMIZER] Let the pipeline pass insert async wait. ( #63 )
2022-08-18 10:31:57 -07:00
Shintaro Iwasaki
d69ce77b19
[FRONTEND] add an attr for masked load without explicit other ( #55 )
2022-08-18 09:51:37 -07:00
Shintaro Iwasaki
2ba9a83465
[BUILD] fix minor issues with MLIR assert enabled ( #46 )
2022-08-11 21:20:47 -07:00
Philippe Tillet
3236642e8f
[OPTIMIZER] Added memory coalescing pass ( #31 )
2022-07-31 20:59:31 -07:00
Philippe Tillet
d1593e6ca8
[TritonGPU] Improved documentation and semantics of layout encodings ( #30 )
2022-07-31 13:59:44 -07:00
Philippe Tillet
432c3df265
[BUILD] MacOS can now build compiler and run MLIR tests ( #25 )
2022-07-27 01:32:10 -07:00
Philippe Tillet
6d62d88d4f
[CI] run clang-format ( #24 )
2022-07-26 17:25:03 -07:00
Keren Zhou
96cc6fb563
[TritonGPU] Pretty printer for layouts ( #21 )
2022-07-26 10:50:11 -07:00
Yan Da
63e6a85901
Fix blocked layout parser
2022-07-15 15:19:11 +08:00
Yan Da
9d1b5e3f79
special encoding for broadcast
2022-06-18 21:16:45 +08:00
Yan Da
53cf93ce6a
Revert "Remove TypeConverter from TritonToTritonGPU conversion"
...
This reverts commit 64d0b87ef0.
2022-06-18 14:57:41 +08:00
Yan Da
64d0b87ef0
Remove TypeConverter from TritonToTritonGPU conversion
2022-06-18 14:34:59 +08:00
Yan Da
117a402c1b
more comments to TypeConverter & update warpTileSize
2022-06-08 16:20:07 +08:00
Yan Da
7b09b5f9e9
the pipeline pass now generates and accepts valid IR
2022-06-07 19:34:59 +08:00
Yan Da
366dddc3bc
update mma encoding & triton-opt
2022-06-06 21:03:58 +08:00
Yan Da
7807f64ef3
rename sharded_layout => blocked_layout
2022-06-05 16:14:59 +08:00
Yan Da
d5eca56cf3
more TritonGPU unit tests
2022-06-05 14:25:09 +08:00
Da Yan
e36a54eb86
more progress on the definition of layouts
2022-05-31 11:43:21 +00:00
Yan Da
41d338d848
Fix op mapping in pipeline.cpp
2022-05-26 13:57:01 +08:00
Yan Da
c529b462f5
more fixes on pipeline.cpp
2022-05-26 13:14:41 +08:00
Yan Da
71d1c10e19
Remove weird includes
2022-05-25 21:54:06 +08:00
Yan Da
9308e9c90c
A more general pipeliner
2022-05-25 21:52:51 +08:00
Yan Da
441fd7c3cc
assembly format
2022-05-25 17:53:24 +08:00
Yan Da
9b670cfb9f
Add ReduceOp
2022-05-25 14:15:36 +08:00
Yan Da
a2c9f919a8
TritonGPU verifier
2022-05-24 19:48:56 +08:00
Yan Da
36c45ec687
make numStages an option in PipelinePass
2022-05-23 12:47:55 +08:00
Yan Da
79298d61bc
fix a pipeline issue
2022-05-16 19:38:40 +08:00
Yan Da
c3c4ac3733
TritonGPU combiner
2022-05-16 19:17:15 +08:00