From e318185eb4561908327a35861f9dc14f3918fdd3 Mon Sep 17 00:00:00 2001
From: Philippe Tillet
Date: Tue, 20 Sep 2022 18:09:43 -0700
Subject: [PATCH] [DOCS] Improved README.md wording (#683)

Initial wording dates from a time when nobody knew Triton, and comparing
it to CUDA helped differentiate it from other existing DSLs. But nowadays
this comparison doesn't make much sense; Triton is its own thing, and some
people may even still be more productive in CUDA than Triton -- language
preferences are subjective after all.
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index f4b6ef41c..ed0fc71b1 100644
--- a/README.md
+++ b/README.md
@@ -12,7 +12,7 @@
 
 # Triton
 
-This is the development repository of Triton, a language and compiler for writing highly efficient custom Deep-Learning primitives. The aim of Triton is to provide an open-source environment to write fast code at higher productivity than CUDA, but also with higher flexibility than other existing DSLs.
+This is the development repository of Triton, a language and compiler for writing highly efficient custom Deep-Learning primitives. The aim of Triton is to provide an open-source environment for expressing tensor math workloads that offers high flexibility, developer productivity and end to end performance.
 
 The foundations of this project are described in the following MAPL2019 publication: [Triton: An Intermediate Language and Compiler for Tiled Neural Network Computations](http://www.eecs.harvard.edu/~htk/publication/2019-mapl-tillet-kung-cox.pdf). Please consider citing this work if you use Triton!