From a7616e7b2398ee6c1fb7c8dff9ad03921e7b3d8f Mon Sep 17 00:00:00 2001
From: Ritul Srivastava
Date: Mon, 19 Nov 2018 19:33:11 +0530
Subject: [PATCH] Updated index.md (#22280)

* Updated index.md

Added more resources to the article

* Updated index.md

* Added more resources to the article

Moved the lines out of the code block as requested
---
 guide/english/machine-learning/linear-regression/index.md   | 6 ++++++
 guide/english/machine-learning/logistic-regression/index.md | 3 +++
 2 files changed, 9 insertions(+)

diff --git a/guide/english/machine-learning/linear-regression/index.md b/guide/english/machine-learning/linear-regression/index.md
index 2ec8b0de36..c0daee1752 100644
--- a/guide/english/machine-learning/linear-regression/index.md
+++ b/guide/english/machine-learning/linear-regression/index.md
@@ -37,6 +37,8 @@ def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_i
     return [b, m]
 
 gradient_descent_runner(wheat_and_bread, 1, 1, 0.01, 100)
+
+
 ```
 Code example is from this article. It also explains gradient descent and other essential concepts for deep learning.
 
@@ -66,3 +68,7 @@ print(model.intercept_)
 from sklearn import metrics
 print(metrics.accuracy_score(y_test, y_pred_class))
 ```
+
+
+You can refer to this article for deeper insight into regression:
+https://www.analyticsvidhya.com/blog/2017/06/a-comprehensive-guide-for-linear-ridge-and-lasso-regression/
diff --git a/guide/english/machine-learning/logistic-regression/index.md b/guide/english/machine-learning/logistic-regression/index.md
index 887b5528c3..9aaeb89272 100644
--- a/guide/english/machine-learning/logistic-regression/index.md
+++ b/guide/english/machine-learning/logistic-regression/index.md
@@ -36,6 +36,9 @@ penalizing the learning algorithms are used.
 Cost(hθ(x),y)=−log(hθ(x)) if y = 1
 Cost(hθ(x),y)=−log(1−hθ(x)) if y = 0
 
+Refer to this article to brush up on the basics: https://www.analyticsvidhya.com/blog/2017/06/a-comprehensive-guide-for-linear-ridge-and-lasso-regression/
+
+
 #### Predictions using logistic regression:
 Logistic regression models the probability of the default class (i.e. the first class).
 You can classify results given by:
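A note for reviewers (not part of the patch): the logistic-regression cost shown in the second file's context lines can be sketched directly in Python. This is a minimal illustration of that piecewise formula only; the function name `logistic_cost` and the sample probabilities are mine, not from the guide.

```python
import math

def logistic_cost(h, y):
    """Per-example logistic regression cost, matching the guide's formula:
    Cost(h, y) = -log(h)     if y == 1
    Cost(h, y) = -log(1 - h) if y == 0
    where h = h_theta(x) is the predicted probability, 0 < h < 1."""
    return -math.log(h) if y == 1 else -math.log(1 - h)

# A confident correct prediction incurs a small cost;
# the same prediction against the wrong label incurs a large one.
print(logistic_cost(0.9, 1))  # small cost
print(logistic_cost(0.9, 0))  # large cost
```

The asymmetry is the point of this cost: it penalizes confident wrong predictions heavily while barely penalizing confident correct ones.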