From c900cec85e4110b8610f49f8fd9990bffd51e360 Mon Sep 17 00:00:00 2001
From: Varian Caesar
Date: Fri, 15 Feb 2019 03:45:14 +0700
Subject: [PATCH] edit(guide): simplify cost function (#29668)

Simplify the cost function for logistic regression

---
 .../machine-learning/logistic-regression/index.md | 27 +++++++++++++++++++++++++++----
 1 file changed, 23 insertions(+), 4 deletions(-)

diff --git a/guide/english/machine-learning/logistic-regression/index.md b/guide/english/machine-learning/logistic-regression/index.md
index 9aaeb89272..b8996854e9 100644
--- a/guide/english/machine-learning/logistic-regression/index.md
+++ b/guide/english/machine-learning/logistic-regression/index.md
@@ -31,10 +31,29 @@
 J(θ)=(1/m)∑Cost(hθ(x(i)),y(i)) , where summation is from i=1 to m.
 Where hθ(x) is = hypothetic value calculated in accordance with attributes and weights which are calculated and balanced via algorithm such as gradient descent.
 y = is the corresponding value from observation data set
 
-Here cost function is not a proper sigmoid function in use but in place, two log functions which performs with greater efficiency without
-penalizing the learning algorithms are used.
-Cost(hθ(x),y)=−log(hθ(x)) if y = 1
-Cost(hθ(x),y)=−log(1−hθ(x)) if y = 0
+Here the cost function is not the sigmoid function itself; in its place, two log functions are used, which perform with greater efficiency without penalizing the learning algorithm:
+* Cost(hθ(x),y)=−log(hθ(x)) if y = 1
+* Cost(hθ(x),y)=−log(1−hθ(x)) if y = 0
+
+Since y is always either 0 or 1, the two cases can be combined into a single expression:
+
+Cost(hθ(x),y) = −y log(hθ(x)) − (1 − y) log(1 − hθ(x))
 
 Refer to this article for clearing your basics https://www.analyticsvidhya.com/blog/2017/06/a-comprehensive-guide-for-linear-ridge-and-lasso-regression/
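+
+As an illustration, here is a minimal Python sketch of this cost function, vectorized over a whole training set (the helper names `sigmoid` and `cost` are illustrative, not part of any particular library; X is assumed to hold one example per row and theta the weight vector):
+
+```python
+import numpy as np
+
+def sigmoid(z):
+    # Logistic function: maps any real-valued z into the interval (0, 1)
+    return 1 / (1 + np.exp(-z))
+
+def cost(theta, X, y):
+    # h is the hypothesis value hθ(x) for each training example
+    h = sigmoid(X @ theta)
+    # Average per-example cost: −y·log(h) − (1−y)·log(1−h)
+    return np.mean(-y * np.log(h) - (1 - y) * np.log(1 - h))
+```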