Updated index.md (#22280)
* Updated index.md: added more resources to the article
* Updated index.md: moved the lines out of the code block as requested
@ -37,6 +37,8 @@ def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_i
return [b, m]

gradient_descent_runner(wheat_and_bread, 1, 1, 0.01, 100)
```
The code example is from <a href='http://blog.floydhub.com/coding-the-history-of-deep-learning/' target='_blank' rel='nofollow'>this article</a>, which also explains gradient descent and other essential concepts for deep learning.
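For readers who want to run the snippet above end to end, here is a minimal, self-contained sketch of batch gradient descent for a line `y = m*x + b`. The `wheat_and_bread` data points and the `step_gradient` helper are illustrative assumptions, not the guide's exact code:

```python
# A minimal sketch of batch gradient descent for simple linear regression.
# The dataset below is an illustrative stand-in, not the guide's data.
wheat_and_bread = [[0.5, 5.0], [0.6, 5.5], [0.8, 6.0], [1.1, 6.8], [1.4, 7.0]]

def step_gradient(points, b, m, learning_rate):
    # One update step: move b and m against the gradient of mean squared error.
    b_grad, m_grad = 0.0, 0.0
    n = float(len(points))
    for x, y in points:
        error = y - (m * x + b)
        b_grad += -(2 / n) * error
        m_grad += -(2 / n) * error * x
    return b - learning_rate * b_grad, m - learning_rate * m_grad

def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
    # Repeatedly apply step_gradient for a fixed number of iterations.
    b, m = starting_b, starting_m
    for _ in range(num_iterations):
        b, m = step_gradient(points, b, m, learning_rate)
    return [b, m]

print(gradient_descent_runner(wheat_and_bread, 1, 1, 0.01, 100))
```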
@ -66,3 +68,7 @@ print(model.intercept_)
from sklearn import metrics
print(metrics.accuracy_score(y_test, y_pred_class))
```
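The accuracy snippet assumes `model`, `y_test`, and `y_pred_class` already exist. Here is a minimal, self-contained sketch of how they could be produced; the bundled dataset and the `LogisticRegression` setup are illustrative assumptions, not the guide's exact code:

```python
from sklearn import metrics
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative setup: a bundled binary classification dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the model and inspect the learned intercept, as the snippet above does.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
print(model.intercept_)

# Predict classes on held-out data and score the accuracy.
y_pred_class = model.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred_class))
```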
You can refer to this article for deeper insight into regression:
https://www.analyticsvidhya.com/blog/2017/06/a-comprehensive-guide-for-linear-ridge-and-lasso-regression/
@ -36,6 +36,9 @@ penalizing the learning algorithms are used.
Cost(hθ(x), y) = −log(hθ(x))      if y = 1

Cost(hθ(x), y) = −log(1 − hθ(x))  if y = 0
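To make the two cases concrete, here is a tiny worked example of the per-example cost; the function name `logistic_cost` is an illustrative assumption, not from the guide:

```python
import math

def logistic_cost(h, y):
    # Per-example logistic regression cost (log loss).
    # h is the predicted probability hθ(x); y is the true label (0 or 1).
    return -math.log(h) if y == 1 else -math.log(1 - h)

print(logistic_cost(0.9, 1))  # confident and correct -> small cost (~0.105)
print(logistic_cost(0.9, 0))  # confident but wrong   -> large cost (~2.303)
```

Note how the cost grows without bound as a confident prediction moves toward the wrong label, which is exactly the penalty the two branches encode.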
Refer to this article to brush up on the basics:
https://www.analyticsvidhya.com/blog/2017/06/a-comprehensive-guide-for-linear-ridge-and-lasso-regression/
#### Predictions using logistic regression:
Logistic regression models the probability of the default class (i.e. the first class).
You can classify results given by:
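As a minimal sketch of one conventional decision rule, assuming a 0.5 threshold on the modeled probability (the threshold and the `classify` helper are illustrative assumptions, not necessarily the rule this passage goes on to give):

```python
def classify(probability, threshold=0.5):
    # Predict the default (first) class when its modeled probability
    # meets the threshold; illustrative helper, not from the guide.
    return 1 if probability >= threshold else 0

print(classify(0.73))  # -> 1 (default class)
print(classify(0.31))  # -> 0 (other class)
```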