fix(guide): simplify directory structure

This commit is contained in:
Mrugesh Mohapatra
2018-10-16 21:26:13 +05:30
parent f989c28c52
commit da0df12ab7
35752 changed files with 0 additions and 317652 deletions

View File

@ -0,0 +1,11 @@
---
title: Convolutional Neural Networks
---
Convolutional Neural Networks (ConvNets or CNNs) are a category of Neural Networks that have proven very effective in areas such as image recognition and classification. ConvNets have been successful in identifying faces, objects, and traffic signs, apart from powering vision in robots and self-driving cars.
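As a concrete illustration, here is a minimal sketch of a small ConvNet for 10-class image classification, written with the Keras API (TensorFlow is assumed to be installed; the layer sizes are illustrative, not prescriptive):
```python
# A minimal sketch of a small ConvNet for 10-class image classification
# (Keras API assumed available; all layer sizes here are illustrative).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),                      # 28x28 grayscale images
    layers.Conv2D(32, kernel_size=3, activation="relu"),  # learn local image filters
    layers.MaxPooling2D(pool_size=2),                     # downsample the feature maps
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),               # probabilities over 10 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```
Stacked convolution and pooling layers learn increasingly abstract visual features, and the final dense layer turns them into class probabilities.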
### Suggested links:
- Stanford CS231n [Lecture 5 Convolutional Neural Networks](https://www.youtube.com/watch?v=bNb2fEVKeEo)
- Stanford CS231n [Lecture 9 CNN Architectures](https://www.youtube.com/watch?v=DAOcjicFr1Y&t=2384s)
- Udacity Deep Learning: [Convolutional Networks](https://www.youtube.com/watch?v=jajksuQW4mc)
- Andrew Ng's DeepLearning.ai: [Convolutional Neural Networks](https://www.coursera.org/learn/convolutional-neural-networks/)

View File

@ -0,0 +1,19 @@
---
title: Generative Adversarial Networks
---
## Generative Adversarial Networks
## Overview
Generative adversarial networks (GANs) are a class of [artificial intelligence](https://en.wikipedia.org/wiki/Artificial_intelligence) algorithms used in [unsupervised machine learning](https://en.wikipedia.org/wiki/Unsupervised_machine_learning), implemented by a system of two [neural networks](https://en.wikipedia.org/wiki/Neural_network) contesting with each other in a zero-sum game framework. They were introduced by Ian Goodfellow et al. in 2014. This technique can generate photographs that look at least superficially authentic to human observers, having many realistic characteristics (though in tests people can tell real from generated in many cases).
## Method
One network generates candidates (generative) and the other [evaluates them](https://en.wikipedia.org/wiki/Turing_test) (discriminative). Typically, the generative network learns to map from a [latent space](https://en.wikipedia.org/wiki/Latent_variable) to a particular data distribution of interest, while the discriminative network discriminates between instances from the true data distribution and candidates produced by the generator. The generative network's training objective is to increase the error rate of the discriminative network (i.e., "fool" the discriminator network by producing novel synthesized instances that appear to have come from the true data distribution).
In practice, a known dataset serves as the initial training data for the discriminator. Training the discriminator involves presenting it with samples from the dataset, until it reaches some level of accuracy. Typically the generator is seeded with a randomized input that is sampled from a predefined latent space (e.g. a [multivariate normal distribution](https://en.wikipedia.org/wiki/Multivariate_normal_distribution)). Thereafter, samples synthesized by the generator are evaluated by the discriminator. [Backpropagation](https://en.wikipedia.org/wiki/Backpropagation) is applied in both networks so that the generator produces better images, while the discriminator becomes more skilled at flagging synthetic images. The generator is typically a deconvolutional neural network, and the discriminator is a [convolutional neural network](https://en.wikipedia.org/wiki/Convolutional_neural_network).
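To make the training procedure above concrete, here is a minimal sketch of a single GAN training step in PyTorch. The library choice, network shapes, batch size, and learning rates are illustrative assumptions rather than details from the original work:
```python
# A minimal single GAN training step in PyTorch (assumed library; the
# 784-dimensional "images" and layer sizes are illustrative stand-ins).
import torch
import torch.nn as nn

latent_dim = 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, 784), nn.Tanh())        # generator: latent z -> sample
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())       # discriminator: sample -> P(real)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(32, 784) * 2 - 1       # stand-in for a batch of real samples
z = torch.randn(32, latent_dim)          # latent vectors from a normal distribution
fake = G(z)

# Discriminator step: push D(real) toward 1 and D(fake) toward 0.
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: push D(fake) toward 1, i.e. try to fool the discriminator.
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```
The discriminator's loss rewards telling real samples from synthesized ones, while the generator's loss rewards fooling the discriminator, which is exactly the adversarial objective described above.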
The idea of inferring models in a competitive setting (model versus discriminator) was proposed by Li, Gauci and Gross in 2013, where it was used for behavioral inference. It is termed Turing Learning, as the setting is akin to that of a [Turing test](https://en.wikipedia.org/wiki/Turing_test). Turing Learning is a generalization of GANs: models other than neural networks can be considered, and the discriminators are allowed to influence the processes from which the datasets are obtained, making them active interrogators as in the Turing test. The idea of adversarial training can also be found in earlier works, such as Schmidhuber's in 1992.
## Application
GANs have been used to produce samples of [photorealistic](https://en.wikipedia.org/wiki/Photorealistic) images for the purposes of visualizing new interior/industrial designs, shoes, bags, and clothing items, or items for scenes in computer games. These networks were reported to be used by Facebook. Recently, GANs have modeled patterns of motion in video. They have also been used to reconstruct 3D models of objects from images and to improve astronomical images. In 2017 a fully convolutional feedforward GAN was used for image enhancement using automated texture synthesis in combination with a perceptual loss. The system focused on realistic textures rather than pixel accuracy, resulting in higher image quality at high magnification.

View File

@ -0,0 +1,66 @@
---
title: Neural Networks
---
## Neural Networks
![Feed-forward neural network](http://ufldl.stanford.edu/tutorial/images/SingleNeuron.png)
An artificial neural network is a computing system inspired by the biological neural networks that constitute animal brains.
To train a neural network, we need an input vector and a corresponding output vector.
Training works by minimizing an error term; a common choice is the squared difference between the predicted output and the target output.
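For example, the squared-error term can be computed in a few lines of numpy (the target and predicted vectors below are made-up illustrative values):
```python
# The squared-error term described above, in numpy.
import numpy as np

y_true = np.array([1.0, 0.0, 1.0])        # original (target) outputs
y_pred = np.array([0.9, 0.2, 0.7])        # the network's predicted outputs
error = np.mean((y_pred - y_true) ** 2)   # mean squared error to minimize
print(error)                              # 0.0466...
```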
The basic principle underlying the remarkable success of neural networks is the Universal Approximation Theorem: it has been mathematically proven that a feed-forward neural network with enough hidden units can approximate, to arbitrary accuracy, any continuous function between the given inputs and outputs.
Neural networks initially became popular in the 1980s, but limitations in computational power prohibited their widespread acceptance until the past decade.
Innovations in processing power allow for neural network implementation at scale, though other machine learning paradigms still outperform neural networks in terms of efficiency on some tasks.
The most basic element of a neural network is a neuron. Its input is a vector, say `x`, and its output is a real-valued variable, say `y`. Thus, the neuron acts as a mapping between the vector `x` and a real number `y`.
Neural networks perform regression iteratively across multiple layers, resulting in a more nuanced prediction model.
A single node in a neural network computes the exact same function as [logistic regression](../logistic-regression/index.md).
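Here is a minimal numpy sketch of that single node: a weighted sum of the inputs passed through a sigmoid, the same function logistic regression computes (the weights and inputs are illustrative values):
```python
# A single neuron in numpy: a weighted sum of the input vector x
# passed through a sigmoid, as in logistic regression.
import numpy as np

def neuron(x, w, b):
    z = np.dot(w, x) + b                 # weighted sum of inputs plus a bias
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid squashes z into (0, 1)

x = np.array([0.5, -1.2, 3.0])           # example input vector
w = np.array([0.4, 0.6, -0.1])           # weights (illustrative values)
b = 0.1
print(neuron(x, w, b))                   # a single real-valued output y
```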
All these layers, aside from the input and output, are hidden; that is, the specific traits represented by these layers are not chosen or modified by the programmer.
![Four Layered Neural Network](http://cs231n.github.io/assets/nn1/neural_net2.jpeg)
In any given layer, each node takes all values stored in the previous layer as input and makes predictions on them based on a logistic regression analysis.
The power of neural networks lies in their ability to "discover" patterns and traits unseen by programmers.
As mentioned earlier, the middle layers are "hidden," meaning the weights given to the transitions are determined exclusively by the training of the algorithm.
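Putting the layers together, the following is a small numpy sketch of a forward pass through one hidden layer (the layer sizes and random weights are illustrative; a trained network would have learned its weights rather than drawn them at random):
```python
# A forward pass through one hidden layer and an output layer in numpy
# (layer sizes and random weights are illustrative placeholders).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.random.rand(4)          # input layer: 4 features
W1 = np.random.rand(5, 4)      # hidden layer: 5 nodes, each reading all 4 inputs
b1 = np.zeros(5)
W2 = np.random.rand(1, 5)      # output layer: 1 node over the 5 hidden values
b2 = np.zeros(1)

hidden = sigmoid(W1 @ x + b1)  # every hidden node sees the whole previous layer
output = sigmoid(W2 @ hidden + b2)
print(output)
```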
Neural networks are used for a variety of tasks, including computer vision, speech recognition, translation, social network filtering, playing video games, and medical diagnosis, among others.
### Visualization
There's an awesome tool to help you grasp the idea of neural networks without any hard math: <a href='http://playground.tensorflow.org' target='_blank' rel='nofollow'>TensorFlow Playground</a>, a web app that lets you play with a real neural network running in your browser, clicking buttons and tweaking parameters to see how it works.
### Problems solved using Neural Networks
- Classification
- Clustering
- Regression
- Anomaly detection
- Association rules
- Reinforcement learning
- Structured prediction
- Feature engineering
- Feature learning
- Learning to rank
- Grammar induction
- Weather prediction
- Generating images
### Common Neural Network Systems
The most common Neural Networks used today fall into the [deep learning](https://github.com/freeCodeCamp/guides/blob/master/src/pages/machine-learning/deep-learning/index.md) category. Deep learning is the process of chaining multiple layers of neurons to allow a network to create increasingly abstract mappings between input and output vectors. Deep neural networks most commonly use [backpropagation](https://github.com/freeCodeCamp/guides/blob/master/src/pages/machine-learning/backpropagation/index.md) in order to converge upon the most accurate mapping.
The second most common form of neural network is neuroevolution. In this system multiple neural networks are randomly generated as initial guesses. Then, over multiple generations, the most accurate networks are combined and randomly mutated to converge upon a more accurate mapping, as in the toy sketch below.
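As a toy sketch of that loop (the fitness function, population size, and mutation scale are all illustrative assumptions, and crossover between parents is omitted for brevity): keep the fittest weight vectors, then build the next generation by mutating them.
```python
# A toy neuroevolution loop in numpy: keep the fittest weight vectors,
# then build the next generation by randomly mutating them.
import numpy as np

def fitness(w):
    # Stand-in for "how accurate is this network": best when weights are 0.5.
    return -np.sum((w - 0.5) ** 2)

pop = [np.random.rand(10) for _ in range(20)]    # 20 randomly generated "networks"
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]                            # keep the most accurate networks
    pop = [p + np.random.normal(0, 0.05, 10)     # random mutations of the parents
           for p in parents for _ in range(4)]
print(fitness(max(pop, key=fitness)))            # fitness approaches 0 over generations
```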
### Types of Neural Networks
- Recurrent Neural Network (RNN)
- Long Short-Term Memory (LSTM), a type of RNN
- Convolutional Neural Network (CNN)
### More Information:
- [Neural Networks - Wikipedia](https://en.wikipedia.org/wiki/Artificial_neural_network#Components_of_an_artificial_neural_network)
- [Daniel Shiffman's Nature of Code](http://natureofcode.com/book/chapter-10-neural-networks/)
- [Stanford University, Multilayer Neural Networks](http://ufldl.stanford.edu/tutorial/supervised/MultiLayerNeuralNetworks/)
- [3Blue1Brown, YouTube channel with Neural Network content](https://youtu.be/aircAruvnKk)
- [Siraj Raval, YouTube channel with Neural Network content](https://youtu.be/h3l4qz76JhQ)
- [Neuroevolution - Wikipedia](https://en.wikipedia.org/wiki/Neuroevolution)

View File

@ -0,0 +1,15 @@
---
title: Multi Layer Perceptron
---
## Multi Layer Perceptron
This is a stub. <a href='https://github.com/freecodecamp/guides/tree/master/src/pages/machine-learning/neural-networks/multi-layer-perceptron/index.md' target='_blank' rel='nofollow'>Help our community expand it</a>.
<a href='https://github.com/freecodecamp/guides/blob/master/README.md' target='_blank' rel='nofollow'>This quick style guide will help ensure your pull request gets accepted</a>.
<!-- The article goes here, in GitHub-flavored Markdown. Feel free to add YouTube videos, images, and CodePen/JSBin embeds -->
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->

View File

@ -0,0 +1,13 @@
---
title: Perceptron
---
## Perceptron
This is a stub. <a href='https://github.com/freecodecamp/guides/tree/master/src/pages/machine-learning/neural-networks/perceptron/index.md' target='_blank' rel='nofollow'>Help our community expand it</a>.
<a href='https://github.com/freecodecamp/guides/blob/master/README.md' target='_blank' rel='nofollow'>This quick style guide will help ensure your pull request gets accepted</a>.
<!-- The article goes here, in GitHub-flavored Markdown. Feel free to add YouTube videos, images, and CodePen/JSBin embeds -->
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->

View File

@ -0,0 +1,15 @@
---
title: Recurrent Neural Networks
---
## Recurrent Neural Networks
This is a stub. <a href='https://github.com/freecodecamp/guides/tree/master/src/pages/machine-learning/neural-networks/recurrent-neural-networks/index.md' target='_blank' rel='nofollow'>Help our community expand it</a>.
<a href='https://github.com/freecodecamp/guides/blob/master/README.md' target='_blank' rel='nofollow'>This quick style guide will help ensure your pull request gets accepted</a>.
<!-- The article goes here, in GitHub-flavored Markdown. Feel free to add YouTube videos, images, and CodePen/JSBin embeds -->
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->