From e118a89d54f7c247d36a8e68fcaa4be51e277587 Mon Sep 17 00:00:00 2001
From: GENTELMAN23 <44404103+GENTELMAN23@users.noreply.github.com>
Date: Sun, 6 Jan 2019 10:44:12 -0600
Subject: [PATCH] spacing fix (#27018)

---
 guide/english/natural-language-processing/index.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/guide/english/natural-language-processing/index.md b/guide/english/natural-language-processing/index.md
index 783afd9f21..3f002664f8 100644
--- a/guide/english/natural-language-processing/index.md
+++ b/guide/english/natural-language-processing/index.md
@@ -17,10 +17,9 @@ It has always been our dream to make machines understand our language. Ever sinc
 People have used this concept in a lot of interesting applications. A few of the exciting ones include Google Translate, Siri, or Gmail auto reply suggestions. People are however working on ways to improve these predictions, and state of the art research is being done on how to make machines answer questions more reliably.
 
 ## How Natural Language Processing works
-
 Early on, NLP employed a rule based approach, i.e. all the rules were hard coded (E.g. writing grammar). However this wasn't very effective due to the variations in the language models. Currently, NLP processes are carried on using Artificial Intelligence. They rely mainly on Deep Learning, an AI that determines patterns in the data and uses them to train the model. This method is better than the earlier methods because when learning through the huge data sets, the machine can focus on the most common cases. This is not easy with hand-written rules because it is not obvious as to where the efforts are to be put. Also, these models become more reliable with the increase in data, but in earlier approaches it could only be made accurate by increasing the complexity of the rules, which is a more difficult task.
 
-The model learns the rules of the language through the analysis of large corpora of typical real-world examples. This method requires enormous amount of labelled data , which is a big hurdle for NLP.
+The model learns the rules of the language through the analysis of large corpora of typical real-world examples. This method requires enormous amount of labelled data, which is a big hurdle for NLP.
 
 ## Use cases
 People have used this concept in a lot of interesting applications. Few of the exciting ones include Google Translate, Siri, or Gmail auto reply suggestions. People are however working on ways to improve these predictions, and state of the art research is being done on how to make machines answer questions more reliably.