fix(guide): simplify directory structure

This commit is contained in:
Mrugesh Mohapatra
2018-10-16 21:26:13 +05:30
parent f989c28c52
commit da0df12ab7
35752 changed files with 0 additions and 317652 deletions

View File

@ -0,0 +1,30 @@
---
title: Behavioral patterns
---
## Behavioral patterns
Behavioral design patterns are design patterns that identify common communication patterns between objects and realize these patterns. By doing so, these patterns increase flexibility in carrying out this communication, making the software more reliable and easier to maintain.
Examples of this type of design pattern include:
1. **Chain of responsibility pattern**: Command objects are handled or passed on to other objects by logic-containing processing objects.
2. **Command pattern**: Command objects encapsulate an action and its parameters.
3. **Interpreter pattern**: Implement a specialized computer language to rapidly solve a specific set of problems.
4. **Iterator pattern**: Iterators are used to access the elements of an aggregate object sequentially without exposing its underlying representation.
5. **Mediator pattern**: Defines an object that encapsulates how a set of objects interact, promoting loose coupling by keeping them from referring to each other explicitly.
6. **Memento pattern**: Provides the ability to restore an object to its previous state (rollback).
7. **Null Object pattern**: Designed to act as a default value of an object.
8. **Observer pattern**: a.k.a. **Publish/Subscribe** or **Event Listener**. Objects register to observe an event that may be raised by another object.
9. **Weak reference pattern**: De-couple an observer from an observable.
10. **Protocol stack**: Communications are handled by multiple layers, which form an encapsulation hierarchy.
11. **Scheduled-task pattern**: A task is scheduled to be performed at a particular interval or clock time (used in real-time computing).
12. **Single-serving visitor pattern**: Optimize the implementation of a visitor that is allocated, used only once, and then deleted.
13. **Specification pattern**: Recombinable business logic in a boolean fashion.
14. **State pattern**: A clean way for an object to partially change its type at runtime.
15. **Strategy pattern**: Algorithms can be selected on the fly (see the sketch after this list).
16. **Template method pattern**: Describes the skeleton of an algorithm, letting subclasses redefine certain steps.
17. **Visitor pattern**: A way to separate an algorithm from an object.
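In C-style code, the Strategy pattern is often realized with function pointers: the algorithm is written once, and the caller selects the behavior at runtime. Below is a minimal sketch; the `compare_strategy` type and the `ascending`/`descending` comparators are hypothetical names chosen for this example:
```c
#include <stdio.h>

// A "strategy" is just a comparison function the caller can swap at runtime.
typedef int (*compare_strategy)(int a, int b);

int ascending(int a, int b)  { return a - b; }
int descending(int a, int b) { return b - a; }

// The sorting algorithm is written once; the comparison strategy is
// selected on the fly by whoever calls it.
void sort_with(int *arr, int n, compare_strategy cmp) {
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (cmp(arr[j], arr[i]) < 0) {
                int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
            }
}

int main(void) {
    int data[] = {3, 1, 2};
    sort_with(data, 3, descending); // select the strategy at call time
    printf("%d %d %d\n", data[0], data[1], data[2]); // prints: 3 2 1
    return 0;
}
```
Swapping the comparator changes the behavior at runtime without modifying `sort_with` itself.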
### Sources
[https://en.wikipedia.org/wiki/Behavioral_pattern](https://en.wikipedia.org/wiki/Behavioral_pattern)

View File

@ -0,0 +1,21 @@
---
title: Creational patterns
---
## Creational patterns
Creational design patterns are design patterns that deal with object creation mechanisms, trying to create objects in a manner suitable to the situation. The basic form of object creation could result in design problems or in added complexity to the design. Creational design patterns solve this problem by somehow controlling this object creation.
Creational design patterns are composed of two dominant ideas. One is encapsulating knowledge about which concrete classes the system uses. Another is hiding how instances of these concrete classes are created and combined.<sup>1</sup>
Five well-known design patterns that are parts of creational patterns are:
1. **Abstract factory pattern**, which provides an interface for creating related or dependent objects without specifying the objects' concrete classes.
2. **Builder pattern**, which separates the construction of a complex object from its representation so that the same construction process can create different representations.
3. **Factory method pattern**, which allows a class to defer instantiation to subclasses.
4. **Prototype pattern**, which specifies the kind of object to create using a prototypical instance, and creates new objects by cloning this prototype.
5. **Singleton pattern**, which ensures that a class only has one instance, and provides a global point of access to it (a minimal sketch follows this list).
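As an illustration, here is a minimal, non-thread-safe sketch of the Singleton pattern in C; the `Config` type and `get_instance` name are hypothetical choices for this example:
```c
typedef struct {
    int setting;
} Config;

// The single shared instance lives in static storage; callers can only
// reach it through get_instance(), the global point of access.
Config* get_instance(void) {
    static Config instance;      // created once, in static storage
    static int initialized = 0;
    if (!initialized) {
        instance.setting = 42;   // one-time initialization
        initialized = 1;
    }
    return &instance;
}
```
A production version would guard the first initialization against concurrent callers.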
### Sources
1. [Gamma, Erich; Helm, Richard; Johnson, Ralph; Vlissides, John (1995). Design Patterns. Massachusetts: Addison-Wesley. p. 81. ISBN 978-0-201-63361-0. Retrieved 2015-05-22.](http://www.pearsoned.co.uk/bookshop/detail.asp?item=171742)

View File

@ -0,0 +1,27 @@
---
title: Algorithm Design Patterns
---
## Algorithm Design Patterns
In software engineering, a design pattern is a general repeatable solution to a commonly occurring problem in software design. A design pattern isn't a finished design that can be transformed directly into code. It is a description or template for how to solve a problem that can be used in many different situations.
Design patterns can speed up the development process by providing tested, proven development paradigms.
These patterns are divided into three major categories:
### Creational patterns
These are design patterns that deal with object creation mechanisms, trying to create objects in a manner suitable to the situation. The basic form of object creation could result in design problems or in added complexity to the design. Creational design patterns solve this problem by somehow controlling this object creation.
### Structural patterns
These are design patterns that ease the design by identifying a simple way to realize relationships between entities.
### Behavioral patterns
These are design patterns that identify common communication patterns between objects and realize these patterns. By doing so, these patterns increase flexibility in carrying out this communication.
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->
[Design patterns - Wikipedia](https://en.wikipedia.org/wiki/Design_Patterns)

View File

@ -0,0 +1,28 @@
---
title: Structural patterns
---
## Structural patterns
Structural design patterns are design patterns that ease the design by identifying a simple way to realize relationships between entities and are responsible for building simple and efficient class hierarchies between different classes.
Examples of Structural Patterns include:
1. **Adapter pattern**: 'adapts' one interface for a class into one that a client expects (see the sketch after this list).
2. **Adapter pipeline**: Use multiple adapters for debugging purposes.
3. **Retrofit Interface Pattern**: An adapter used as a new interface for multiple classes at the same time.
4. **Aggregate pattern**: a version of the Composite pattern with methods for aggregation of children.
5. **Bridge pattern**: decouple an abstraction from its implementation so that the two can vary independently.
6. **Tombstone**: An intermediate "lookup" object contains the real location of an object.
7. **Composite pattern**: a tree structure of objects where every object has the same interface.
8. **Decorator pattern**: add additional functionality to a class at runtime where subclassing would result in an exponential rise of new classes.
9. **Extensibility pattern**: a.k.a. Framework - hide complex code behind a simple interface.
10. **Facade pattern**: create a simplified interface of an existing interface to ease usage for common tasks.
11. **Flyweight pattern**: a large quantity of objects share a common properties object to save space.
12. **Marker pattern**: an empty interface to associate metadata with a class.
13. **Pipes and filters**: a chain of processes where the output of each process is the input of the next.
14. **Opaque pointer**: a pointer to an undeclared or private type, to hide implementation details.
15. **Proxy pattern**: a class functioning as an interface to another thing.
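As a small illustration of the Adapter pattern in C, the wrapper below adapts a hypothetical legacy function (`legacy_print_ascii`, a name made up for this example) to the interface a client expects:
```c
#include <stdio.h>

// Existing ("adaptee") function whose interface the client cannot use directly.
void legacy_print_ascii(const char *bytes, int length) {
    for (int i = 0; i < length; i++)
        putchar(bytes[i]);
    putchar('\n');
}

// The interface the client expects: print a NUL-terminated string.
typedef void (*printer)(const char *text);

// Adapter: converts the expected call into a call on the adaptee.
void print_string_adapter(const char *text) {
    int length = 0;
    while (text[length] != '\0')
        length++;
    legacy_print_ascii(text, length); // delegate to the adapted function
}

int main(void) {
    printer p = print_string_adapter; // the client sees only the expected interface
    p("hello adapter");
    return 0;
}
```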
### Sources
[https://en.wikipedia.org/wiki/Structural_pattern](https://en.wikipedia.org/wiki/Structural_pattern)

View File

@ -0,0 +1,90 @@
---
title: Algorithm Performance
---
In mathematics, big-O notation is a symbolism used to describe and compare the _limiting behavior_ of a function.
A function's limiting behavior is how the function acts as it tends towards a particular value; in big-O notation, this is usually as it tends towards infinity.
In short, big-O notation is used to describe the growth or decline of a function, usually with respect to another function.
In algorithm design, we usually use big-O notation because it tells us how an algorithm will perform in the worst case. Keep in mind that the worst case isn't always what matters: it may be extremely rare, and in those cases we analyze the average case instead. For now, let's discuss big-O notation.
NOTE: x^2 is equivalent to x * x, or 'x-squared'.
For example, we say that x = O(x^2) for all x > 1; in other words, x^2 is an upper bound on x and therefore grows faster.
A claim like x = O(x^2) for all x > _n_ can be read as x <= x^2 for all x > _n_, where _n_ is the smallest number that satisfies the claim, in this case 1.
Effectively, we say that a function f(x) that is O(g(x)) grows slower than g(x) does.
Comparatively, in computer science and software development we can use big-O notation to describe the efficiency of algorithms in terms of their time and space complexity.
**Space Complexity** of an algorithm refers to its memory footprint with respect to the input size.
Specifically when using big-O notation we are describing the efficiency of the algorithm with respect to an input: _n_, usually as _n_ approaches infinity.
When examining algorithms, we generally want lower time and space complexity. A time complexity of O(1) indicates constant time.
Through the comparison and analysis of algorithms we are able to create more efficient applications.
For algorithm performance we have two main factors:
- **Time**: We need to know how much time it takes to run an algorithm on our data and how that time grows with the input size (or, in some cases, with other factors such as the number of digits).
- **Space**: Memory is finite, so we need to know how much free space an algorithm requires and, as with time, to be able to trace how that requirement grows.
The following 3 notations are mostly used to represent time complexity of algorithms:
1. **Θ Notation**: Theta notation bounds a function from above and below, so it defines exact asymptotic behavior. We can use theta notation when the worst case and the best case are the same (a small worked example follows this list).
>Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
2. **Big O Notation**: The Big O notation defines an upper bound of an algorithm. For example Insertion Sort takes linear time in best case and quadratic time in worst case. We can safely say that the time complexity of Insertion sort is *O*(*n^2*).
>O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 <= f(n) <= cg(n) for all n >= n0}
3. **Ω Notation**: Ω notation provides a lower bound on an algorithm; it describes the fastest possible behavior of that algorithm.
>Ω (g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= cg(n) <= f(n) for all n >= n0}.
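As a small worked example of these definitions, take f(n) = 3n^2 + 2n and g(n) = n^2. Choosing c1 = 3, c2 = 5 and n0 = 1 gives c1*g(n) <= f(n) <= c2*g(n) for all n >= 1 (since 2n <= 2n^2 whenever n >= 1), so f(n) = Θ(n^2); the same constants also witness f(n) = O(n^2) and f(n) = Ω(n^2).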
## Examples
As an example, we can examine the time complexity of the <a href='https://github.com/FreeCodeCamp/wiki/blob/master/Algorithms-Bubble-Sort.md#algorithm-bubble-sort' target='_blank' rel='nofollow'>bubble sort</a> algorithm and express it using big-O notation.
#### Bubble Sort:
```c
// Function to implement bubble sort
void bubble_sort(int array[], int n)
{
// Here n is the number of elements in array
int temp;
for(int i = 0; i < n-1; i++)
{
// Last i elements are already in place
for(int j = 0; j < n-i-1; j++)
{
if (array[j] > array[j+1])
{
// swap elements at index j and j+1
temp = array[j];
array[j] = array[j+1];
array[j+1] = temp;
}
}
}
}
```
Looking at this code, we can see that in the best case scenario, where the array is already sorted, no swaps will occur; with the standard 'swapped' flag added to the outer loop, the program could stop after a single pass of roughly _n_ comparisons.
Therefore we can say that the best case time complexity of bubble sort is O(_n_).
Examining the worst case scenario, where the array is in reverse order, the first pass will make _n_ - 1 comparisons, the next _n_ - 2, and so on until only 1 comparison must be made.
The total is therefore (_n_ - 1) + (_n_ - 2) + ... + 1 = (_n_ * (_n_ - 1)) / 2 = 0.5*n*^2 - 0.5*n* = O(_n_^2), as the _n_^2 term dominates the function, which allows us to ignore the other term.
We can confirm this analysis using <a href='http://bigocheatsheet.com/' target='_blank' rel='nofollow'>this handy big-O cheat sheet</a>, which features the big-O time complexity of many commonly used data structures and algorithms.
It is very apparent that while for small use cases this time complexity might be alright, at a large scale bubble sort is simply not a good solution for sorting.
This is the power of big-O notation: it allows developers to easily see the potential bottlenecks of their application, and take steps to make these more scalable.
For more information on why big-O notation and algorithm analysis is important visit this <a href='https://www.freecodecamp.com/videos/big-o-notation-what-it-is-and-why-you-should-care' target='_blank' rel='nofollow'>video challenge</a>!

View File

@ -0,0 +1,63 @@
---
title: AVL Trees
---
## AVL Trees
An AVL tree is a subtype of binary search tree.
A BST is a data structure composed of nodes. It has the following guarantees:
1. Each tree has a root node (at the top).
2. The root node has zero or more child nodes.
3. Each child node has zero or more child nodes, and so on.
4. Each node has up to two children.
5. For each node, its left descendants are less than the current node, which is less than its right descendants.
AVL trees have an additional guarantee:
6. The difference between the depths of the right and left subtrees cannot be more than one. In order to maintain this guarantee, an implementation of an AVL tree will include an algorithm to rebalance the tree when adding an additional element would upset it.
AVL trees have a worst case lookup, insert and delete time of O(log n).
### Right Rotation
![AVL Tree Right Rotation](https://raw.githubusercontent.com/HebleV/valet_parking/master/images/avl_right_rotation.jpg)
### Left Rotation
![AVL Tree Left Rotation](https://raw.githubusercontent.com/HebleV/valet_parking/master/images/avl_left_rotation.jpg)
### AVL Insertion Process
You will do an insertion similar to a normal Binary Search Tree insertion. After inserting, you fix the AVL property using left or right rotations; a sketch of one rotation follows the list below.
- If there is an imbalance in left child of right subtree, then you perform a left-right rotation.
- If there is an imbalance in left child of left subtree, then you perform a right rotation.
- If there is an imbalance in right child of right subtree, then you perform a left rotation.
- If there is an imbalance in right child of left subtree, then you perform a right-left rotation.
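As a rough illustration, here is a minimal sketch of a right rotation in C, with height fields updated after relinking; the node layout and helper names (`avl_node`, `node_height`, `right_rotate`) are assumptions made for this example:
```c
struct avl_node {
    int data;
    int height;                      // height of the subtree rooted here
    struct avl_node *left, *right;
};

static int node_height(struct avl_node *n) { return n ? n->height : 0; }
static int max_int(int a, int b)           { return a > b ? a : b; }

// Right rotation around y, used when y's left subtree is too tall.
struct avl_node* right_rotate(struct avl_node *y) {
    struct avl_node *x  = y->left;   // x becomes the new subtree root
    struct avl_node *t2 = x->right;  // t2 is re-parented from x to y
    x->right = y;
    y->left  = t2;
    y->height = 1 + max_int(node_height(y->left), node_height(y->right));
    x->height = 1 + max_int(node_height(x->left), node_height(x->right));
    return x;
}
```
A left rotation is the mirror image, and the LR/RL cases chain the two together.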
#### More Information:
[YouTube - AVL Tree](https://www.youtube.com/watch?v=7m94k2Qhg68)
An AVL tree is a self-balancing binary search tree.
An AVL tree is a binary search tree which has the following properties:
- The sub-trees of every node differ in height by at most one.
- Every sub-tree is an AVL tree.
The AVL tree checks the heights of the left and right sub-trees and ensures that the difference is not more than 1. This difference is called the Balance Factor.
The height of an AVL tree is always O(log n), where n is the number of nodes in the tree.
### AVL Tree Rotations
In an AVL tree, after performing an operation like insertion or deletion, we need to check the balance factor of every node in the tree. If every node satisfies the balance factor condition, we conclude the operation; otherwise we must rebalance the tree. We use rotation operations to rebalance the tree whenever it becomes imbalanced due to an operation.
Rotation operations are used to make a tree balanced. There are four rotations, classified into two types:
- **Single Left Rotation (LL Rotation)**: every node moves one position to the left from its current position.
- **Single Right Rotation (RR Rotation)**: every node moves one position to the right from its current position.
- **Left Right Rotation (LR Rotation)**: a combination of a single left rotation followed by a single right rotation; every node first moves one position to the left, then one position to the right.
- **Right Left Rotation (RL Rotation)**: a combination of a single right rotation followed by a single left rotation; every node first moves one position to the right, then one position to the left.

View File

@ -0,0 +1,24 @@
---
title: B Trees
---
## B Trees
### Introduction
A B-Tree is a self-balancing search tree. In most other self-balancing search trees (like AVL and Red-Black Trees), it is assumed that everything is in main memory. To understand the use of B-Trees, we must think of the huge amounts of data that cannot fit in main memory. When the number of keys is high, the data is read from disk in the form of blocks. Disk access time is very high compared to main memory access time. The main idea of using B-Trees is to reduce the number of disk accesses. Most tree operations (search, insert, delete, max, min, etc.) require O(h) disk accesses, where h is the height of the tree. The B-tree is a "fat" tree: its height is kept low by putting the maximum possible number of keys in each node. Generally, the B-Tree node size is kept equal to the disk block size. Since h is low for a B-Tree, the total disk accesses for most operations are reduced significantly compared to balanced binary search trees like AVL and Red-Black Trees.
Properties of B-Tree:
1) All leaves are at the same level.
2) A B-Tree is defined by the term minimum degree t. The value of t depends upon the disk block size.
3) Every node except the root must contain at least t-1 keys. The root may contain a minimum of 1 key.
4) All nodes (including the root) may contain at most 2t - 1 keys.
5) The number of children of a node is equal to the number of keys in it plus 1.
6) All keys of a node are sorted in increasing order. The child between two keys k1 and k2 contains all keys in the range from k1 to k2.
7) A B-Tree grows and shrinks from the root, unlike a Binary Search Tree, which grows and shrinks downward.
8) Like other balanced Binary Search Trees, the time complexity to search, insert and delete is O(log n).
**Search:**
Search is similar to search in a Binary Search Tree. Let the key to be searched be k. We start from the root and recursively traverse down. For every visited non-leaf node, if the node contains the key, we simply return the node. Otherwise, we recur down to the appropriate child (the child just before the first greater key) of the node. If we reach a leaf node and don't find k in it, we return NULL. A sketch of this search is shown below.
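Here is a minimal sketch of that search in C; the node layout (`n` keys in `keys[0..n-1]`, children in `child[0..n]`, and a `leaf` flag) is an assumption made for this example:
```c
#include <stddef.h>

struct btree_node {
    int n;                      // number of keys currently stored
    int *keys;                  // keys, sorted in increasing order
    struct btree_node **child;  // n + 1 child pointers (unused in a leaf)
    int leaf;                   // 1 if this node is a leaf
};

struct btree_node* btree_search(struct btree_node *node, int k) {
    int i = 0;
    while (i < node->n && k > node->keys[i])  // find the first key >= k
        i++;
    if (i < node->n && node->keys[i] == k)    // key found in this node
        return node;
    if (node->leaf)                           // reached a leaf without finding k
        return NULL;
    return btree_search(node->child[i], k);   // recurse into the child just
                                              // before the first greater key
}
```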
**Traverse:**
Traversal is similar to inorder traversal of a Binary Tree. We start from the leftmost child, recursively print the leftmost child, then repeat the same process for the remaining children and keys. In the end, recursively print the rightmost child.

View File

@ -0,0 +1,56 @@
---
title: Backtracking Algorithms
---
# Backtracking Algorithms
Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons each partial candidate *("backtracks")* as soon as it determines that the candidate cannot possibly be completed to a valid solution.
### Example Problem (The Knight's tour problem)
*The knight is placed on the first block of an empty board and, moving according to the rules of chess, must visit each square exactly once.*
### Path followed by Knight to cover all the cells
Following is a chessboard with 8 x 8 cells. Numbers in the cells indicate the move number of the Knight.
[![The knight's tour solution - by Euler](https://upload.wikimedia.org/wikipedia/commons/d/df/Knights_tour_%28Euler%29.png)](https://commons.wikimedia.org/wiki/File:Knights_tour_(Euler).png)
### Naive Algorithm for the Knight's tour
The Naive Algorithm is to generate all tours one by one and check if the generated tour satisfies the constraints.
```
while there are untried tours
{
generate the next tour
if this tour covers all squares
{
print this path;
}
}
```
### Backtracking Algorithm for the Knight's tour
Following is the Backtracking algorithm for the Knight's tour problem.
```
If all squares are visited
print the solution
Else
a) Add one of the next moves to solution vector and recursively
check if this move leads to a solution. (A Knight can make maximum
eight moves. We choose one of the 8 moves in this step).
b) If the move chosen in the above step doesn't lead to a solution
then remove this move from the solution vector and try other
alternative moves.
c) If none of the alternatives work then return false (Returning false
will remove the previously added item in recursion and if false is
returned by the initial call of recursion then "no solution exists" )
```
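Below is a hedged sketch of these steps in C, assuming the board is initialized to -1 everywhere except the starting square (which holds move number 0); the names `is_safe` and `solve_util` are choices made for this example:
```c
#define N 8

// The eight possible knight moves.
static const int dx[8] = { 2, 1, -1, -2, -2, -1,  1,  2 };
static const int dy[8] = { 1, 2,  2,  1, -1, -2, -2, -1 };

// A square is usable if it is on the board and not yet visited (-1).
static int is_safe(int x, int y, int board[N][N]) {
    return x >= 0 && x < N && y >= 0 && y < N && board[x][y] == -1;
}

// Try to extend a partial tour; movei is the number of squares visited so far.
static int solve_util(int x, int y, int movei, int board[N][N]) {
    if (movei == N * N)
        return 1;                      // all squares visited: solution found
    for (int k = 0; k < 8; k++) {      // step (a): try one of the 8 moves
        int nx = x + dx[k], ny = y + dy[k];
        if (is_safe(nx, ny, board)) {
            board[nx][ny] = movei;
            if (solve_util(nx, ny, movei + 1, board))
                return 1;
            board[nx][ny] = -1;        // step (b): undo the move, try another
        }
    }
    return 0;                          // step (c): no alternative worked
}
```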
### More Information
[Wikipedia](https://en.wikipedia.org/wiki/Backtracking)
[Geeks 4 Geeks](http://www.geeksforgeeks.org/backtracking-set-1-the-knights-tour-problem/)
[A very interesting introduction to backtracking](https://www.hackerearth.com/practice/basic-programming/recursion/recursion-and-backtracking/tutorial/)

View File

@ -0,0 +1,283 @@
---
title: Binary Search Trees
---
## Binary Search Trees
![Binary Search Tree](https://cdn-images-1.medium.com/max/1320/0*x5o1G1UpM1RfLpyx.png)
A tree is a data structure composed of nodes that has the following characteristics:
1. Each tree has a root node (at the top) having some value.
2. The root node has zero or more child nodes.
3. Each child node has zero or more child nodes, and so on. This creates a subtree in the tree. Every node has its own subtree made up of its children and their children, etc. This means that every node on its own can be a tree.
A binary search tree (BST) adds these two characteristics:
1. Each node has a maximum of up to two children.
2. For each node, the values of its left descendent nodes are less than that of the current node, which in turn is less than the right descendent nodes (if any).
The BST is built up on the idea of the <a href='https://guide.freecodecamp.org/algorithms/search-algorithms/binary-search' target='_blank' rel='nofollow'>binary search</a> algorithm, which allows for fast lookup, insertion and removal of nodes. The way that they are set up means that, on average, each comparison allows the operations to skip about half of the tree, so that each lookup, insertion or deletion takes time proportional to the logarithm of the number of items stored in the tree, `O(log n)`. However, sometimes the worst case can happen, when the tree isn't balanced and the time complexity is `O(n)` for all three of these functions. That is why self-balancing trees (AVL, red-black, etc.) are a lot more effective than the basic BST.
**Worst case scenario example:** This can happen when you keep adding nodes that are *always* larger than the node before (their parent); the same can happen when you always add nodes with values lower than their parents.
### Basic operations on a BST
- Create: creates an empty tree.
- Insert: inserts a node into the tree.
- Search: searches for a node in the tree.
- Delete: deletes a node from the tree.
#### Create
Initially an empty tree without any nodes is created. The variable/identifier which must point to the root node is initialized with a `NULL` value.
#### Search
You always start searching the tree at the root node and go down from there. You compare the data in each node with the data you are looking for. If the compared node doesn't match, you proceed to either the right child or the left child, depending on the outcome of the comparison: if the node you are searching for is lower than the one you were comparing it with, you proceed to the left child; otherwise (if it's larger) you go to the right child. Why? Because the BST is structured (as per its definition) so that the right child is always larger than the parent and the left child is always smaller.
#### Insert
It is very similar to the search function. You again start at the root of the tree and go down recursively, searching for the right place to insert our new node, in the same way as explained in the search function. If a node with the same value is already in the tree, you can choose to either insert the duplicate or not. Some trees allow duplicates, some don't; it depends on the implementation.
#### Deletion
There are 3 cases that can happen when you are trying to delete a node. If it has:
1. No subtree (no children): This one is the easiest one. You can simply just delete the node, without any additional actions required.
2. One subtree (one child): You have to make sure that after the node is deleted, its child is then connected to the deleted node's parent.
3. Two subtrees (two children): You have to find and replace the node you want to delete with its successor (the leftmost node in the right subtree).
The time complexity for creating a tree is `O(1)`. The time complexity for searching, inserting or deleting a node depends on the height of the tree `h`, so the worst case is `O(h)`.
#### Predecessor of a node
Predecessors can be described as the node that would come right before the node you are currently at. To find the predecessor of the current node, look at the right-most/largest leaf node in the left subtree.
#### Successor of a node
Successors can be described as the node that would come right after the node you are currently at. To find the successor of the current node, look at the left-most/smallest leaf node in the right subtree.
### Special types of BT
- Heap
- Red-black tree
- B-tree
- Splay tree
- N-ary tree
- Trie (Radix tree)
### Runtime
**Data structure: BST**
- Worst-case performance: `O(n)` (a degenerate, unbalanced tree)
- Best-case performance: `O(1)`
- Average performance: `O(log n)`
- Worst-case space complexity: `O(n)`
Where `n` is the number of nodes in the BST.
### Implementation of BST
Here's a definition for a BST node holding some data, with references to its left and right child nodes.
```c
struct node {
int data;
struct node *leftChild;
struct node *rightChild;
};
```
#### Search Operation
Whenever an element is to be searched, start searching from the root node. Then if the data is less than the key value, search for the element in the left subtree. Otherwise, search for the element in the right subtree. Follow the same algorithm for each node.
```c
struct node* search(int data) {
    struct node *current = root;
    printf("Visiting elements: ");
    // walk down the tree until we either find the key or fall off a leaf
    while(current != NULL && current->data != data) {
        printf("%d ", current->data);
        if(current->data > data) {
            current = current->leftChild;   // go to the left subtree
        } else {
            current = current->rightChild;  // go to the right subtree
        }
    }
    return current; // NULL if the key was not found
}
```
#### Insert Operation
Whenever an element is to be inserted, first locate its proper location. Start searching from the root node, then if the data is less than the key value, search for the empty location in the left subtree and insert the data. Otherwise, search for the empty location in the right subtree and insert the data.
```c
void insert(int data) {
struct node *tempNode = (struct node*) malloc(sizeof(struct node));
struct node *current;
struct node *parent;
tempNode->data = data;
tempNode->leftChild = NULL;
tempNode->rightChild = NULL;
//if tree is empty
if(root == NULL) {
root = tempNode;
} else {
current = root;
parent = NULL;
while(1) {
parent = current;
//go to left of the tree
if(data < parent->data) {
current = current->leftChild;
//insert to the left
if(current == NULL) {
parent->leftChild = tempNode;
return;
}
}//go to right of the tree
else {
current = current->rightChild;
//insert to the right
if(current == NULL) {
parent->rightChild = tempNode;
return;
}
}
}
}
}
```
#### Delete Operation
```c
// Returns the node with the minimum value found in a (non-empty) subtree
struct node* minValueNode(struct node* node) {
    struct node *current = node;
    while (current->leftChild != NULL)
        current = current->leftChild;
    return current;
}

struct node* deleteNode(struct node* root, int data) {
    if (root == NULL) return root;
    if (data < root->data)
        root->leftChild = deleteNode(root->leftChild, data);
    else if (data > root->data)
        root->rightChild = deleteNode(root->rightChild, data);
    else {
        // node with no child or only one child
        if (root->leftChild == NULL) {
            struct node *temp = root->rightChild;
            free(root);
            return temp;
        } else if (root->rightChild == NULL) {
            struct node *temp = root->leftChild;
            free(root);
            return temp;
        }
        // node with two children: copy the inorder successor's value,
        // then delete the successor from the right subtree
        struct node *temp = minValueNode(root->rightChild);
        root->data = temp->data;
        root->rightChild = deleteNode(root->rightChild, temp->data);
    }
    return root;
}
```
### Let's look at a couple of procedures operating on trees.
Since trees are recursively defined, it's very common to write routines that operate on trees that are themselves recursive.
So for instance, if we want to calculate the height of a tree, that is the height from its root node, we can do so recursively, going through the tree. So we can say:
* If we have a nil tree, then its height is 0.
* Otherwise, the height is 1 plus the maximum of the heights of the left and right child trees.
* So if we look at a leaf, for example, its height would be 1: the height of the nil left child is 0, the height of the nil right child is also 0, the max of those is 0, and 1 plus 0 is 1.
#### Height(tree) algorithm
```
if tree = nil:
return 0
return 1 + Max(Height(tree.left),Height(tree.right))
```
#### Here is the code in C++
```cpp
int maxDepth(struct node* node)
{
if (node==NULL)
return 0;
else
{
int rDepth = maxDepth(node->right);
int lDepth = maxDepth(node->left);
if (lDepth > rDepth)
{
return(lDepth+1);
}
else
{
return(rDepth+1);
}
}
}
```
We could also look at calculating the size of a tree, that is, the number of nodes.
* Again, if we have a nil tree, we have zero nodes.
* Otherwise, we have the number of nodes in the left child plus 1 for ourselves plus the number of nodes in the right child. So 1 plus the size of the left tree plus the size of the right tree.
#### Size(tree) algorithm
```
if tree = nil
return 0
return 1 + Size(tree.left) + Size(tree.right)
```
#### Here is the code in C++
```cpp
int treeSize(struct node* node)
{
if (node==NULL)
return 0;
else
return 1+(treeSize(node->left) + treeSize(node->right));
}
```
### Relevant videos on freeCodeCamp YouTube channel
* [Binary Search Tree](https://youtu.be/5cU1ILGy6dM)
* [Binary Search Tree: Traversal and Height](https://youtu.be/Aagf3RyK3Lw)
### Following are common types of Binary Trees:
Full Binary Tree/Strict Binary Tree: A Binary Tree is full or strict if every node has exactly 0 or 2 children.
```
        18
       /  \
     15    30
    /  \   /  \
   40   50 100  40
```
In a Full Binary Tree, the number of leaf nodes is equal to the number of internal nodes plus one.
Complete Binary Tree: A Binary Tree is a complete Binary Tree if all levels are completely filled except possibly the last level, and the last level has all keys as far left as possible.
```
           18
          /  \
        15    30
       /  \   /  \
     40   50 100  40
    /  \  /
   8   7 9
```

View File

@ -0,0 +1,39 @@
---
title: Boundary Fill
---
## Boundary Fill
Boundary fill is an algorithm used frequently in computer graphics to fill a desired color inside a closed polygon that has the same boundary color on all of its sides.
The most common implementation of the algorithm is a stack-based recursive function.
### Working:
The problem is pretty simple and usually follows these steps:
1. Take the position of the starting point and the boundary color.
2. Decide whether you want to go in 4 directions (N, S, W, E) or 8 directions (N, S, W, E, NW, NE, SW, SE).
3. Choose a fill color.
4. Travel in those directions.
5. If the pixel you land on is not the fill color or the boundary color, replace it with the fill color.
6. Repeat 4 and 5 until you've been everywhere within the boundaries.
### Certain Restrictions:
- The boundary color should be the same for all the edges of the polygon.
- The starting point should be within the polygon.
### Code Snippet:
```c
void boundary_fill(int pos_x, int pos_y, int boundary_color, int fill_color)
{
    int current_color = getpixel(pos_x, pos_y); // get the color of the current pixel position
    // if the pixel is neither on the boundary nor already filled
    if(current_color != boundary_color && current_color != fill_color)
    {
        putpixel(pos_x, pos_y, fill_color); // change this pixel to the desired fill_color
        boundary_fill(pos_x + 1, pos_y, boundary_color, fill_color); // repeat for the east pixel
        boundary_fill(pos_x - 1, pos_y, boundary_color, fill_color); // repeat for the west pixel
        boundary_fill(pos_x, pos_y + 1, boundary_color, fill_color); // repeat for the north pixel
        boundary_fill(pos_x, pos_y - 1, boundary_color, fill_color); // repeat for the south pixel
    }
}
```
From the given code, you can see that for any pixel you land on, you first check whether it can be changed to the fill_color and then do so for its neighbours, until all the pixels within the boundary have been checked.

View File

@ -0,0 +1,16 @@
---
title: Brute Force Algorithms
---
## Brute Force Algorithms
Brute Force Algorithms refers to a programming style that does not include any shortcuts to improve performance, but instead relies on sheer computing power to try all possibilities until the solution to a problem is found.
A classic example is the traveling salesman problem (TSP). Suppose a salesman needs to visit 10 cities across the country. How does one determine the order in which cities should be visited such that the total distance traveled is minimized? The brute force solution is simply to calculate the total distance for every possible route and then select the shortest one. This is not particularly efficient because it is possible to eliminate many possible routes through clever algorithms.
Another example: a 5-digit numeric password would, in the worst case, take 10<sup>5</sup> tries to crack.
For brute-force string matching, the time complexity is <b>O(n*m)</b>: searching for a pattern of 'm' characters in a string of 'n' characters takes on the order of n * m character comparisons in the worst case. A sketch of this search is shown below.
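As a small illustration, here is a naive (brute-force) substring-search sketch in C; the function name is a choice made for this example:
```c
#include <string.h>

// Naive substring search: returns the index of the first occurrence of
// pattern (length m) in text (length n), or -1 if there is none.
int brute_force_search(const char *text, const char *pattern) {
    int n = strlen(text), m = strlen(pattern);
    for (int i = 0; i + m <= n; i++) {    // try every possible alignment
        int j = 0;
        while (j < m && text[i + j] == pattern[j])
            j++;                          // extend the match as far as it goes
        if (j == m)                       // all m characters matched
            return i;
    }
    return -1;                            // no match found
}
```
In the worst case every alignment is tried and almost fully compared, giving the O(n*m) bound above.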
#### More Information:
<a href="https://en.wikipedia.org/wiki/Brute-force_search"> Wikipedia </a>

View File

@ -0,0 +1,35 @@
---
title: Divide and Conquer Algorithms
---
## Divide and Conquer Algorithms
Like Greedy and Dynamic Programming, Divide and Conquer is an algorithmic paradigm. A typical Divide and Conquer algorithm solves a problem using the following three steps:
1. Divide: Break the given problem into subproblems of the same type.
2. Conquer: Recursively solve these subproblems.
3. Combine: Appropriately combine the answers.
Following are some standard algorithms that are Divide and Conquer algorithms.
1) Binary Search is a searching algorithm. In each step, the algorithm compares the input element x with the value of the middle element of the array. If the values match, it returns the index of the middle element. Otherwise, if x is less than the middle element, the algorithm recurs on the left side of the middle element, else it recurs on the right side (see the sketch after this list).
2) Quicksort is a sorting algorithm. The algorithm picks a pivot element, rearranges the array elements in such a way that all elements smaller than the picked pivot element move to left side of pivot, and all greater elements move to right side. Finally, the algorithm recursively sorts the subarrays on left and right of pivot element.
3) Merge Sort is also a sorting algorithm. The algorithm divides the array in two halves, recursively sorts them and finally merges the two sorted halves.
4) Closest Pair of Points: The problem is to find the closest pair of points in a set of points in the x-y plane. The problem can be solved in O(n^2) time by calculating the distance of every pair of points and comparing the distances to find the minimum. The Divide and Conquer algorithm solves the problem in O(n log n) time.
5) Strassen's Algorithm is an efficient algorithm to multiply two matrices. A simple method to multiply two matrices needs 3 nested loops and is O(n^3). Strassen's algorithm multiplies two matrices in O(n^2.8074) time.
6) The Cooley-Tukey Fast Fourier Transform (FFT) algorithm is the most common algorithm for FFT. It is a divide and conquer algorithm that works in O(n log n) time.
7) The Karatsuba algorithm was the first multiplication algorithm asymptotically faster than the quadratic "grade school" algorithm. It reduces the multiplication of two n-digit numbers to at most n^1.585 (where 1.585 is an approximation of log of 3 in base 2) single-digit products, and is therefore faster than the classical algorithm, which requires n^2 single-digit products.
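As an illustration of the first item above, here is a minimal divide-and-conquer binary search sketch in C (the function name and signature are choices made for this example):
```c
// Binary search on a sorted array: returns the index of x in arr[lo..hi],
// or -1 if x is not present.
int binary_search(const int arr[], int lo, int hi, int x) {
    if (lo > hi)
        return -1;                                 // empty range: not found
    int mid = lo + (hi - lo) / 2;                  // divide: pick the middle
    if (arr[mid] == x)
        return mid;
    if (x < arr[mid])
        return binary_search(arr, lo, mid - 1, x); // conquer the left half
    return binary_search(arr, mid + 1, hi, x);     // conquer the right half
}
```
Each call discards half of the remaining range, so the running time is O(log n).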
### Divide and Conquer (D & C) vs Dynamic Programming (DP)
Both paradigms (D & C and DP) divide the given problem into subproblems and solve those subproblems. How do you choose between them for a given problem? Divide and Conquer should be used when the same subproblems are not evaluated many times. Otherwise, Dynamic Programming or memoization should be used.
For example, Binary Search is a Divide and Conquer algorithm: we never evaluate the same subproblem twice. On the other hand, for calculating the nth Fibonacci number, Dynamic Programming should be preferred.

View File

@ -0,0 +1,15 @@
---
title: Embarrassingly Parallel Algorithms
---
## Embarrassingly Parallel Algorithms
In parallel programming, an embarrassingly parallel algorithm is one that requires no communication or dependency between the processes. Unlike distributed computing problems that need communication between tasks (especially on intermediate results), embarrassingly parallel algorithms are easy to perform on server farms that lack the special infrastructure used in a true supercomputer cluster. Due to their nature, embarrassingly parallel algorithms are well suited to large, internet-based distributed platforms, and do not suffer from parallel slowdown. The opposite of embarrassingly parallel problems are inherently serial problems, which cannot be parallelized at all.
The ideal case of embarrassingly parallel algorithms can be summarized as follows:
* All the sub-problems or tasks are defined before the computations begin.
* All the sub-solutions are stored in independent memory locations (variables, array elements).
* Thus, the computation of the sub-solutions is completely independent.
* If the computations require some initial or final communication, then we call it nearly embarrassingly parallel.
Many may wonder about the etymology of the term “embarrassingly”. In this case, it has nothing to do with embarrassment; in fact, it means an overabundance, here referring to parallelization problems which are “embarrassingly easy”.
A common example of an embarrassingly parallel problem is 3D video rendering handled by a graphics processing unit, where each frame or pixel can be handled with no interdependency. Other examples include protein folding software that can run on any computer with each machine doing a small piece of the work, generation of all subsets, random number generation, and Monte Carlo simulations. A small sketch follows.
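As a minimal sketch of the idea in C: each iteration of the loop below reads and writes only its own array element, so the iterations are fully independent sub-problems. The function name is illustrative, and the OpenMP pragma (ignored by compilers built without OpenMP support) is one conventional way to distribute such a loop across threads:
```c
// An embarrassingly parallel loop: iteration i touches only in[i] and out[i],
// so no communication between tasks is needed.
void square_all(const double *in, double *out, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        out[i] = in[i] * in[i]; // independent sub-problem i
}
```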

View File

@ -0,0 +1,15 @@
---
title: Evaluating Polynomials Direct Analysis
---
## Evaluating Polynomials Direct Analysis
This is a stub. <a href='https://github.com/freecodecamp/guides/tree/master/src/pages/algorithms/evaluating-polynomials-direct-analysis/index.md' target='_blank' rel='nofollow'>Help our community expand it</a>.
<a href='https://github.com/freecodecamp/guides/blob/master/README.md' target='_blank' rel='nofollow'>This quick style guide will help ensure your pull request gets accepted</a>.
<!-- The article goes here, in GitHub-flavored Markdown. Feel free to add YouTube videos, images, and CodePen/JSBin embeds -->
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->

View File

@ -0,0 +1,15 @@
---
title: Evaluating Polynomials Synthetic Division
---
## Evaluating Polynomials Synthetic Division
This is a stub. <a href='https://github.com/freecodecamp/guides/tree/master/src/pages/algorithms/evaluating-polynomials-synthetic-division/index.md' target='_blank' rel='nofollow'>Help our community expand it</a>.
<a href='https://github.com/freecodecamp/guides/blob/master/README.md' target='_blank' rel='nofollow'>This quick style guide will help ensure your pull request gets accepted</a>.
<!-- The article goes here, in GitHub-flavored Markdown. Feel free to add YouTube videos, images, and CodePen/JSBin embeds -->
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->

View File

@ -0,0 +1,58 @@
---
title: Exponentiation
---
## Exponentiation
Given two integers x and n, write a function to compute x^n.
#### Code
Algorithmic Paradigm: Divide and conquer.
```C
int power(int x, unsigned int y) {
if (y == 0)
return 1;
else if (y%2 == 0)
return power(x, y/2)*power(x, y/2);
else
return x*power(x, y/2)*power(x, y/2);
}
```
Time Complexity: O(n), since power(x, y/2) is computed twice at each level | Space Complexity: O(log n) for the recursion stack
#### Optimized Solution: O(log n)
```C
int power(int x, unsigned int y) {
int temp;
if( y == 0)
return 1;
temp = power(x, y/2);
if (y%2 == 0)
return temp*temp;
else
return x*temp*temp;
}
```
## Modular Exponentiation
Given three numbers x, y and p, compute (x^y) % p
```C
int power(int x, unsigned int y, int p) {
int res = 1;
x = x % p;
while (y > 0) {
if (y & 1)
res = (res*x) % p;
// halve y for the next squaring step
y = y >> 1;
x = (x*x) % p;
}
return res;
}
```
Time Complexity: O(log y).

View File

@ -0,0 +1,112 @@
---
title: Flood Fill Algorithm
---
## Flood Fill Algorithm
Flood fill is an algorithm mainly used to determine a bounded area connected to a given node in a multi-dimensional array. It closely resembles the bucket tool in paint programs.
The most common implementation of the algorithm is a stack-based recursive function, and that's what we'll discuss next.
### How does it work?
The problem is pretty simple and usually follows these steps:
1. Take the position of the starting point.
2. Decide whether you want to go in 4 directions (**N, S, W, E**) or 8 directions (**N, S, W, E, NW, NE, SW, SE**).
3. Choose a replacement color and a target color.
4. Travel in those directions.
5. If the tile you land on is a target, replace it with the chosen color.
6. Repeat 4 and 5 until you've been everywhere within the boundaries.
Let's take the following array as an example:
![alt text](https://github.com/firealex2/Codingame/blob/master/small%208%20grid%20paintefffd.png)
The red square is the starting point and the gray squares are the so-called walls.
For further details, here's a piece of code describing the function:
```c++
int wall = -1;
void flood_fill(int pos_x, int pos_y, int target_color, int color)
{
    if(a[pos_x][pos_y] == wall || a[pos_x][pos_y] == color) // if we hit a wall or a cell we already colored,
        return;                                             // go back
    if(a[pos_x][pos_y] != target_color) // if it's not the target color, go back
        return;
    a[pos_x][pos_y] = color; // mark the point so that we know we passed through it
    flood_fill(pos_x + 1, pos_y, target_color, color); // then we can either go south
    flood_fill(pos_x - 1, pos_y, target_color, color); // or north
    flood_fill(pos_x, pos_y + 1, target_color, color); // or east
    flood_fill(pos_x, pos_y - 1, target_color, color); // or west
return;
}
```
As seen above, the starting point is (4,4). After calling the function for the start coordinates **x = 4** and **y = 4**, I can check that there is no wall or existing color on the spot. If that check passes, I mark the spot with the **"color"** and start checking the other adjacent squares.
Going south we will get to point (5,4) and the function runs again.
### Exercise problem
I have always considered that solving one or more problems using a newly learned algorithm is the best way to fully understand the concept.
So here's one:
**Statement:**
In a two-dimensional array you are given n **"islands"**. Try to find the largest island area and the corresponding island number. 0 marks water, and any other x between 1 and n marks one square of the surface belonging to island x.
**Input**
* **n** - the number of islands.
* **l,c** - the dimensions of the matrix.
* each of the next **l** lines contains **c** numbers, giving one row of the matrix.
**Output**
* **i** - the number of the island with the largest area.
* **A** - the area of the **i**'th island.
**Ex:**
You have the following input:
```c++
2 4 4
0 0 0 1
0 0 1 1
0 0 0 2
2 2 2 2
```
For which you will get island no. 2 as the biggest island with the area of 5 squares.
### Hints
The problem is quite easy, but here are some hints:
1. Use the flood-fill algorithm whenever you encounter a new island.
2. As opposed to the sample code, you should go through the area of the island and not on the ocean (0 tiles).

View File

@ -0,0 +1,128 @@
---
title: Breadth First Search (BFS)
---
## Breadth First Search (BFS)
Breadth First Search is one of the simplest graph algorithms. It traverses the graph by first checking the current node and then expanding it by adding its successors to the next level. The process is repeated for all nodes in the current level before moving on to the next level. If the solution is found, the search stops.
### Visualisation
![](https://upload.wikimedia.org/wikipedia/commons/4/46/Animated_BFS.gif)
### Evaluation
Space Complexity: O(V)
Worst Case Time Complexity: O(V + E), where V is the number of vertices and E is the number of edges
Breadth First Search is complete on a finite set of nodes and optimal if the cost of moving from one node to another is constant.
### C++ code for BFS implementation
```cpp
// Program to print BFS traversal from a given
// source vertex. BFS(int s) traverses vertices
// reachable from s.
#include<iostream>
#include <list>
using namespace std;
// This class represents a directed graph using
// adjacency list representation
class Graph
{
int V; // No. of vertices
// Pointer to an array containing adjacency
// lists
list<int> *adj;
public:
Graph(int V); // Constructor
// function to add an edge to graph
void addEdge(int v, int w);
// prints BFS traversal from a given source s
void BFS(int s);
};
Graph::Graph(int V)
{
this->V = V;
adj = new list<int>[V];
}
void Graph::addEdge(int v, int w)
{
adj[v].push_back(w); // Add w to v's list.
}
void Graph::BFS(int s)
{
// Mark all the vertices as not visited
bool *visited = new bool[V];
for(int i = 0; i < V; i++)
visited[i] = false;
// Create a queue for BFS
list<int> queue;
// Mark the current node as visited and enqueue it
visited[s] = true;
queue.push_back(s);
// 'i' will be used to get all adjacent
// vertices of a vertex
list<int>::iterator i;
while(!queue.empty())
{
// Dequeue a vertex from queue and print it
s = queue.front();
cout << s << " ";
queue.pop_front();
// Get all adjacent vertices of the dequeued
// vertex s. If an adjacent vertex has not been visited,
// then mark it visited and enqueue it
for (i = adj[s].begin(); i != adj[s].end(); ++i)
{
if (!visited[*i])
{
visited[*i] = true;
queue.push_back(*i);
}
}
}
}
// Driver program to test methods of graph class
int main()
{
// Create a sample graph
Graph g(4);
g.addEdge(0, 1);
g.addEdge(0, 2);
g.addEdge(1, 2);
g.addEdge(2, 0);
g.addEdge(2, 3);
g.addEdge(3, 3);
cout << "Following is Breadth First Traversal "
<< "(starting from vertex 2) \n";
g.BFS(2);
return 0;
}
```
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->
<a href='https://github.com/freecodecamp/guides/computer-science/data-structures/graphs/index.md' target='_blank' rel='nofollow'>Graphs</a>
<a href='https://github.com/freecodecamp/guides/tree/master/src/pages/algorithms/graph-algorithms/depth-first-search/index.md' target='_blank' rel='nofollow'>Depth First Search (DFS)</a>

View File

@ -0,0 +1,160 @@
---
title: Depth First Search (DFS)
---
## Depth First Search (DFS)
Depth First Search is one of the simplest graph algorithms. It traverses the graph by first checking the current node and then moving to one of its successors to repeat the process. If the current node has no successor to check, we move back to its predecessor and the process continues (by moving to another successor). If the solution is found, the search stops.
### Visualisation
![](https://upload.wikimedia.org/wikipedia/commons/7/7f/Depth-First-Search.gif)
### Implementation (C++14)
```c++
#include <iostream>
#include <vector>
#include <queue>
#include <algorithm>
using namespace std;
class Graph{
int v; // number of vertices
// pointer to a vector containing adjacency lists
vector < int > *adj;
public:
Graph(int v); // Constructor
// function to add an edge to graph
void add_edge(int v, int w);
// prints the dfs traversal of the whole graph
void dfs();
void dfs_util(int s, vector < bool> &visited);
};
Graph::Graph(int v){
this -> v = v;
adj = new vector < int >[v];
}
void Graph::add_edge(int u, int v){
adj[u].push_back(v); // add v to u's list
adj[v].push_back(u); // add u to v's list (remove this statement if the graph is directed!)
}
void Graph::dfs(){
// visited vector - to keep track of nodes visited during DFS
vector < bool > visited(v, false); // marking all nodes/vertices as not visited
for(int i = 0; i < v; i++)
if(!visited[i])
dfs_util(i, visited);
}
// notice the usage of call-by-reference here!
void Graph::dfs_util(int s, vector < bool > &visited){
// mark the current node/vertex as visited
visited[s] = true;
// output it to the standard output (screen)
cout << s << " ";
// traverse its adjacency list and recursively call dfs_util for all of its neighbours!
// (only if the neighbour has not been visited yet!)
for(vector < int > :: iterator itr = adj[s].begin(); itr != adj[s].end(); itr++)
if(!visited[*itr])
dfs_util(*itr, visited);
}
int main()
{
// create a graph using the Graph class we defined above
Graph g(4);
g.add_edge(0, 1);
g.add_edge(0, 2);
g.add_edge(1, 2);
g.add_edge(2, 0);
g.add_edge(2, 3);
g.add_edge(3, 3);
cout << "Following is the Depth First Traversal of the provided graph"
<< "(starting from vertex 0): ";
g.dfs();
// output would be: 0 1 2 3
return 0;
}
```
### Evaluation
Space Complexity: O(V)
Worst Case Time Complexity: O(V + E), where V is the number of vertices and E is the number of edges
Depth First Search is complete on a finite set of nodes. It works better on shallow trees.
### Implementation of DFS in C++
```c++
#include<iostream>
#include<vector>
#include<queue>
using namespace std;
struct Graph{
int v;
bool **adj;
public:
Graph(int vcount);
void addEdge(int u,int v);
void deleteEdge(int u,int v);
vector<int> DFS(int s);
void DFSUtil(int s,vector<int> &dfs,vector<bool> &visited);
};
Graph::Graph(int vcount){
this->v = vcount;
this->adj=new bool*[vcount];
for(int i=0;i<vcount;i++)
this->adj[i]=new bool[vcount];
for(int i=0;i<vcount;i++)
for(int j=0;j<vcount;j++)
adj[i][j]=false;
}
void Graph::addEdge(int u,int w){
this->adj[u][w]=true;
this->adj[w][u]=true;
}
void Graph::deleteEdge(int u,int w){
this->adj[u][w]=false;
this->adj[w][u]=false;
}
void Graph::DFSUtil(int s, vector<int> &dfs, vector<bool> &visited){
visited[s]=true;
dfs.push_back(s);
for(int i=0;i<this->v;i++){
if(this->adj[s][i]==true && visited[i]==false)
DFSUtil(i,dfs,visited);
}
}
vector<int> Graph::DFS(int s){
vector<bool> visited(this->v);
vector<int> dfs;
DFSUtil(s,dfs,visited);
return dfs;
}
```
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->
<a href='https://github.com/freecodecamp/guides/computer-science/data-structures/graphs/index.md' target='_blank' rel='nofollow'>Graphs</a>
<a href='https://github.com/freecodecamp/guides/tree/master/src/pages/algorithms/graph-algorithms/breadth-first-search/index.md' target='_blank' rel='nofollow'>Breadth First Search (BFS)</a>
[Depth First Search (DFS) - Wikipedia](https://en.wikipedia.org/wiki/Depth-first_search)

View File

@ -0,0 +1,93 @@
---
title: Dijkstra's Algorithm
---
# Dijkstra's Algorithm
Dijkstra's Algorithm is a graph algorithm presented by E.W. Dijkstra. It finds the single-source shortest paths in a graph with non-negative edge weights. (Non-negative weights matter because the algorithm greedily finalizes the closest unvisited vertex, which is only safe if no negative edge can later shorten an already-finalized path.)
We create two arrays: visited and distance, which record whether a vertex has been visited and its minimum known distance from the source vertex, respectively. Initially, every vertex is marked as not visited and its distance is set to infinity.
We start from the source vertex. Let the current vertex be u and its adjacent vertices be v. Now, for every v adjacent to u, the distance is updated if v has not been visited before and the distance through u is less than v's current distance. Then we select as the next vertex the unvisited vertex with the least distance.
A priority queue is often used to meet this last requirement in the least amount of time. Below is an implementation of the same idea using a priority queue in Java.
```java
import java.util.*;
public class Dijkstra {
class Graph {
LinkedList<Pair<Integer>> adj[];
int n; // Number of vertices.
Graph(int n) {
this.n = n;
adj = new LinkedList[n];
for(int i = 0;i<n;i++) adj[i] = new LinkedList<>();
}
// add a directed edge between vertices a and b with cost as weight
public void addEdgeDirected(int a, int b, int cost) {
adj[a].add(new Pair(b, cost));
}
public void addEdgeUndirected(int a, int b, int cost) {
addEdgeDirected(a, b, cost);
addEdgeDirected(b, a, cost);
}
}
class Pair<E> {
E first;
E second;
Pair(E f, E s) {
first = f;
second = s;
}
}
// Comparator to sort Pairs in Priority Queue
class PairComparator implements Comparator<Pair<Integer>> {
public int compare(Pair<Integer> a, Pair<Integer> b) {
return a.second - b.second;
}
}
// Calculates shortest path to each vertex from source and returns the distance
public int[] dijkstra(Graph g, int src) {
int distance[] = new int[g.n]; // shortest distance of each vertex from src
boolean visited[] = new boolean[g.n]; // vertex is visited or not
Arrays.fill(distance, Integer.MAX_VALUE);
Arrays.fill(visited, false);
PriorityQueue<Pair<Integer>> pq = new PriorityQueue<>(100, new PairComparator());
pq.add(new Pair<Integer>(src, 0));
distance[src] = 0;
while(!pq.isEmpty()) {
Pair<Integer> x = pq.remove(); // Extract vertex with shortest distance from src
int u = x.first;
visited[u] = true;
Iterator<Pair<Integer>> iter = g.adj[u].listIterator();
// Iterate over neighbours of u and update their distances
while(iter.hasNext()) {
Pair<Integer> y = iter.next();
int v = y.first;
int weight = y.second;
// Check if vertex v is not visited
// If new path through u offers less cost then update distance array and add to pq
if(!visited[v] && distance[u]+weight<distance[v]) {
distance[v] = distance[u]+weight;
pq.add(new Pair(v, distance[v]));
}
}
}
return distance;
}
public static void main(String args[]) {
Dijkstra d = new Dijkstra();
Dijkstra.Graph g = d.new Graph(4);
g.addEdgeUndirected(0, 1, 2);
g.addEdgeUndirected(1, 2, 1);
g.addEdgeUndirected(0, 3, 6);
g.addEdgeUndirected(2, 3, 1);
g.addEdgeUndirected(1, 3, 3);
int dist[] = d.dijkstra(g, 0);
System.out.println(Arrays.toString(dist));
}
}
```

View File

@ -0,0 +1,48 @@
---
title: Floyd Warshall Algorithm
---
## Floyd Warshall Algorithm
The Floyd Warshall algorithm finds the shortest distances between all pairs of vertices in a graph. It has a very concise implementation and O(V^3) time complexity (where V is the number of vertices). It can be used with negative edge weights, although negative-weight cycles must not be present in the graph.
### Evaluation
Space Complexity: O(V^2)
Worst Case Time Complexity: O(V^3)
### Python implementation
```python
# A large value as infinity
inf = 1e10
def floyd_warshall(weights):
V = len(weights)
distance_matrix = weights
for k in range(V):
next_distance_matrix = [list(row) for row in distance_matrix] # make a copy of distance matrix
for i in range(V):
for j in range(V):
# Choose if the k vertex can work as a path with shorter distance
next_distance_matrix[i][j] = min(distance_matrix[i][j], distance_matrix[i][k] + distance_matrix[k][j])
distance_matrix = next_distance_matrix # update
return distance_matrix
# A graph represented as Adjacency matrix
graph = [
[0, inf, inf, -3],
[inf, 0, inf, 8],
[inf, 4, 0, -2],
[5, inf, 3, 0]
]
print(floyd_warshall(graph))
```
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->
<a href='https://github.com/freecodecamp/guides/computer-science/data-structures/graphs/index.md' target='_blank' rel='nofollow'>Graphs</a>
<a href='https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm' target='_blank' rel='nofollow'>Floyd Warshall - Wikipedia</a>

View File

@ -0,0 +1,25 @@
---
title: Graph algorithms
---
## Graph algorithms
Graph algorithms are a set of instructions that traverse (visit the nodes of) a graph.
Some algorithms are used to find a specific node or the path between two given nodes.
### Why Graph Algorithms are Important
Graphs are very useful data structures which can be used to model various problems. Graph algorithms have direct applications in social networking sites, state machine modeling, and many more.
### Some Common Graph Algorithms
Some of the most common graph algorithms are:
<a href='https://github.com/freecodecamp/guides/computer-science/data-structures/graphs/index.md' target='_blank' rel='nofollow'>Graphs</a>
<a href='https://github.com/freecodecamp/guides/tree/master/src/pages/algorithms/graph-algorithms/breadth-first-search/index.md' target='_blank' rel='nofollow'>Breadth First Search (BFS)</a>
<a href='https://github.com/freecodecamp/guides/tree/master/src/pages/algorithms/graph-algorithms/depth-first-search/index.md' target='_blank' rel='nofollow'>Depth First Search (DFS)</a>
<a href='https://github.com/freecodecamp/guides/tree/master/src/pages/algorithms/graph-algorithms/dijkstra/index.md' target='_blank' rel='nofollow'>Dijkstra</a>
<a href='https://github.com/freecodecamp/guides/tree/master/src/pages/algorithms/graph-algorithms/floyd-warshall-algorithm/index.md' target='_blank' rel='nofollow'>Floyd-Warshall Algorithm</a>

View File

@ -0,0 +1,15 @@
---
title: Greatest Common Divisor Direct Analysis
---
## Greatest Common Divisor Direct Analysis
This is a stub. <a href='https://github.com/freecodecamp/guides/tree/master/src/pages/algorithms/greatest-common-divisor-direct-analysis/index.md' target='_blank' rel='nofollow'>Help our community expand it</a>.
<a href='https://github.com/freecodecamp/guides/blob/master/README.md' target='_blank' rel='nofollow'>This quick style guide will help ensure your pull request gets accepted</a>.
<!-- The article goes here, in GitHub-flavored Markdown. Feel free to add YouTube videos, images, and CodePen/JSBin embeds -->
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->

View File

@ -0,0 +1,74 @@
---
title: Greatest Common Divisor Euclidean
---
## Greatest Common Divisor Euclidean
For this topic you must know about Greatest Common Divisor (GCD) and the MOD operation first.
#### Greatest Common Divisor (GCD)
The GCD of two or more integers is the largest integer that divides each of the integers such that their remainder is zero.
Example-
GCD of 20, 30 = 10 *(10 is the largest number which divides 20 and 30 with remainder as 0)*
GCD of 42, 120, 285 = 3 *(3 is the largest number which divides 42, 120 and 285 with remainder as 0)*
#### "mod" Operation
The mod operation gives you the remainder when two positive integers are divided.
We write it as follows-
`A mod B = R`
This means that dividing A by B gives you the remainder R. This is different from the division operation, which gives you the quotient.
Example-
7 mod 2 = 1 *(Dividing 7 by 2 gives the remainder 1)*
42 mod 7 = 0 *(Dividing 42 by 7 gives the remainder 0)*
With the above two concepts understood you will easily understand the Euclidean Algorithm.
### Euclidean Algorithm for Greatest Common Divisor (GCD)
The Euclidean Algorithm finds the GCD of 2 numbers.
You will better understand this Algorithm by seeing it in action.
Assuming you want to calculate the GCD of 1220 and 516, let's apply the Euclidean Algorithm-
![Euclidean Example](https://i.imgur.com/aa8oGgP.png)
Pseudo Code of the Algorithm-
Step 1: **Let `a, b` be the two numbers**
Step 2: **`a mod b = R`**
Step 3: **Let `a = b` and `b = R`**
Step 4: **Repeat Steps 2 and 3 as long as `a mod b` is greater than 0**
Step 5: **GCD = b**
Step 6: Finish
Javascript Code to Perform GCD-
```javascript
function gcd(a, b) {
var R;
while ((a % b) > 0) {
R = a % b;
a = b;
b = R;
}
return b;
}
```
Javascript Code to Perform GCD using Recursion-
```javascript
function gcd(a, b) {
if (b == 0)
return a;
else
return gcd(b, (a % b));
}
```
You can also use the Euclidean Algorithm to find GCD of more than two numbers.
Since GCD is associative, the following operation is valid- `GCD(a,b,c) == GCD(GCD(a,b), c)`
Calculate the GCD of the first two numbers, then find GCD of the result and the next number.
Example- `GCD(203,91,77) == GCD(GCD(203,91),77) == GCD(7, 77) == 7`
You can find GCD of `n` numbers in the same way.
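As a sketch, the associative property makes this a one-line fold in JavaScript, reusing the `gcd` function defined above (`gcdOfList` is a name chosen here for illustration):
```javascript
// Hypothetical helper: folds gcd over a list, using GCD(a,b,c) == GCD(GCD(a,b), c)
function gcdOfList(numbers) {
  return numbers.reduce(function (acc, n) {
    return gcd(acc, n);
  });
}

gcdOfList([203, 91, 77]); // 7
```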

View File

@ -0,0 +1,91 @@
---
title: Greedy Algorithms
---
## What is a Greedy Algorithm
You must have heard about a lot of algorithmic design techniques while sifting through some of the articles here. Some of them are:
* Brute Force
* Divide and Conquer
* Greedy Programming
* Dynamic Programming
to name a few. In this article, you will learn about what a greedy algorithm is and how you can use this technique to solve a lot of programming problems that otherwise do not seem trivial.
Imagine you are going hiking and your goal is to reach the highest peak possible. You already have the map before you start, but there are thousands of possible paths shown on the map. You are too lazy and simply don't have the time to evaluate each of them. Screw the map! You start hiking with a simple strategy: be greedy and short-sighted. Just take paths that slope upwards the most. This seems like a good strategy for hiking. But is it always the best?
After the trip ends and your whole body is sore and tired, you look at the hiking map for the first time. Oh my god! There's a muddy river you should have crossed instead of walking ever upwards. This means that a greedy algorithm picks the best immediate choice and never reconsiders its choices. In terms of optimizing a solution, this simply means that the greedy solution will try and find local optimum solutions - which can be many - and might miss out on a global optimum solution.
## Formal Definition
Assume that you have an objective function that needs to be optimized (either maximized or minimized) at a given point. A Greedy algorithm makes greedy choices at each step to ensure that the objective function is optimized. The Greedy algorithm has only one shot to compute the optimal solution so that it never goes back and reverses the decision.
### Greedy algorithms have some advantages and disadvantages:
* It is quite easy to come up with a greedy algorithm (or even multiple greedy algorithms) for a problem.
* Analyzing the run time for greedy algorithms will generally be much easier than for other techniques (like Divide and conquer). For the Divide and conquer technique, it is not clear whether the technique is fast or slow. This is because at each level of recursion the size of the problem gets smaller and the number of sub-problems increases.
* The difficult part is that for greedy algorithms you have to work much harder to understand correctness issues. Even with the correct algorithm, it is hard to prove why it is correct. Proving that a greedy algorithm is correct is more of an art than a science. It involves a lot of creativity. Usually, coming up with an algorithm might seem to be trivial, but proving that it is actually correct, is a whole different problem.
## Interval Scheduling Problem
Let's dive into an interesting problem that you can encounter in almost any industry or any walk of life. Some instances of the problem are as follows :
* You are given a set of N schedules of lectures for a single day at a university. The schedule for a specific lecture is of the form (s_time, f_time) where s_time represents the start time for that lecture and similarly the f_time represents the finishing time. Given a list of N lecture schedules, we need to select the maximum set of lectures to be held out during the day such that **none of the lectures overlap with one another, i.e. if lectures Li and Lj are included in our selection then the start time of Lj >= finish time of Li, or vice versa**.
* Your friend is working as a camp counselor, and he is in charge of organizing activities for a set of campers. One of his plans is the following mini-triathlon exercise: each contestant must swim 20 laps of a pool, then bike 10 miles, then run 3 miles.
* The plan is to send the contestants out in a staggered fashion, via the following rule: the contestants must use the pool one at a time. In other words, first one contestant swims the 20 laps, gets out, and starts biking.
* As soon as this first person is out of the pool, a second contestant begins swimming the 20 laps; as soon as he or she is out and starts biking, a third contestant begins swimming, and so on.
* Each contestant has a projected swimming time, a projected biking time, and a projected running time. Your friend wants to decide on a schedule for the triathlon: an order in which to sequence the starts of the contestants.
* Let's say that the completion time of a schedule is the earliest time at which all contestants will be finished with all three legs of the triathlon, assuming the time projections are accurate. What is the best order for sending people out, if one wants the whole competition to be over as soon as possible? More precisely, give an efficient algorithm that produces a schedule whose completion time is as small as possible. (A sketch of one classical answer follows this list.)
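One classical greedy answer to this puzzle (it is a well-known textbook exercise) is to send contestants out in decreasing order of their combined biking and running time, which can be proved optimal by an exchange argument. A minimal JavaScript sketch, assuming each contestant is an object with `swim`, `bike` and `run` fields (names chosen here for illustration):
```javascript
// Greedy rule: the contestant with the largest bike + run time starts first,
// so the long "tail" work overlaps with everyone else's swimming.
function triathlonSchedule(contestants) {
  return contestants
    .slice() // avoid mutating the input
    .sort(function (a, b) {
      return (b.bike + b.run) - (a.bike + a.run);
    });
}

// Completion time of a schedule: the pool is used one at a time, so the
// i-th starter leaves the pool only after the first i+1 swim times elapse.
function completionTime(schedule) {
  var poolExit = 0, finish = 0;
  schedule.forEach(function (c) {
    poolExit += c.swim;
    finish = Math.max(finish, poolExit + c.bike + c.run);
  });
  return finish;
}
```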
### The Lecture Scheduling Problem
Let's look at the various approaches for solving this problem.
1. **Earliest Start Time First** i.e. select the interval that has the earliest start time. Take a look at the following example that breaks this solution. This solution failed because there could be an interval that starts very early but that is very long. This means the next strategy that we could try would be where we look at smaller intervals first.
![Earliest Starting Time First](https://algorithmsandme.files.wordpress.com/2015/03/f268b-jobs.png?w=840)
2. **Smallest Interval First** i.e. you end up selecting the lectures in order of their overall interval which is nothing but their `finish time - start time`. Again, this solution is not correct. Look at the following case.
![Shortest Interval First](https://i.stack.imgur.com/4bz2N.png)
You can clearly see that the shortest interval lecture is the one in the middle, but that is not the optimal solution here. Let's look at yet another solution for this problem deriving insights from this solution.
3. **Least Conflicting Interval First** i.e. you should look at intervals that cause the least number of conflicts. Yet again we have an example where this approach fails to find an optimal solution.
![Least Conflicting Interval First](https://i.stack.imgur.com/5LZ9V.png)
The diagram shows us that the least conflicting interval is the one in the middle with just 2 conflicts. After that, we can only pick the two intervals at the very ends, with 3 conflicts each. But the optimal solution is to pick the 4 intervals on the topmost level.
4. **Earliest Finishing Time First**. This is the approach that always gives us an optimal solution to this problem. We derived a lot of insights from the previous approaches and finally came upon this one. We sort the intervals in increasing order of their finishing times and then select intervals from the very beginning. Look at the following pseudo code for more clarity.
```
function interval_scheduling_problem(requests)
    schedule ← {}
    while requests is not yet empty
        choose a request i_r in requests that has the lowest finishing time
        schedule ← schedule ∪ {i_r}
        delete all requests in requests that are not compatible with i_r
    end
    return schedule
end
```
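A minimal JavaScript sketch of this strategy, assuming each lecture is an object with numeric `start` and `finish` fields (names chosen here for illustration):
```javascript
// Earliest-finishing-time-first interval scheduling.
function intervalScheduling(lectures) {
  var sorted = lectures.slice().sort(function (a, b) {
    return a.finish - b.finish; // increasing order of finishing time
  });
  var schedule = [];
  var lastFinish = -Infinity;
  sorted.forEach(function (lecture) {
    if (lecture.start >= lastFinish) { // compatible with everything chosen so far
      schedule.push(lecture);
      lastFinish = lecture.finish;
    }
  });
  return schedule;
}

intervalScheduling([
  { start: 1, finish: 3 },
  { start: 2, finish: 5 },
  { start: 4, finish: 7 }
]); // picks { 1, 3 } and { 4, 7 }
```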
## When do we use Greedy Algorithms
Greedy algorithms can help you find solutions to a lot of seemingly tough problems. The only problem with them is that you might come up with the correct solution but you might not be able to verify that it's the correct one. All the greedy problems share a common property: a local optimum can eventually lead to a global optimum without reconsidering the set of choices already made.
Greedy Algorithms help us solve a lot of different kinds of problems. Stay tuned for upcoming tutorials on each one of these.
1. Shortest Path Problem.
2. Minimum Spanning Tree Problem in a Graph.
3. Huffman Encoding Problem.
4. K Centers Problem
#### More Information:
<a href="https://www.youtube.com/watch?v=HzeK7g8cD0Y" target="_blank">
<img src="http://img.youtube.com/vi/HzeK7g8cD0Y/0.jpg" alt="Greedy Problems" width="240" height="180" border="10" />
</a>
<a href="https://www.youtube.com/watch?v=poWB2UCuozA" target="_blank">
<img src="http://img.youtube.com/vi/poWB2UCuozA/0.jpg" alt="Greedy Problems" width="240" height="180" border="10" />
</a>

View File

@ -0,0 +1,60 @@
---
title: Algorithms
---
## Algorithms
In computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculations, data processing and automated reasoning tasks.
An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
There are certain requirements that an algorithm must abide by:
<ol>
<li>Definiteness: Each step in the process is precisely stated.</li>
<li>Effective Computability: Each step in the process can be carried out by a computer.</li>
<li>Finiteness: The program will eventually successfully terminate.</li>
</ol>
Some common types of algorithms include sorting algorithms, search algorithms, and compression algorithms. Classes of algorithms include Graph, Dynamic Programming, Sorting, Searching, Strings, Math, Computational Geometry, Optimization, and Miscellaneous. Although technically not a class of algorithms, Data Structures are often grouped with them.
### Efficiency
Algorithms are most commonly judged by their efficiency and the amount of computing resources they require to complete their task. A common way to evaluate an algorithm is to look at its time complexity. This shows how the running time of the algorithm grows as the input size grows. Since algorithms today have to operate on large data inputs, it is essential for our algorithms to have a reasonably fast running time.
### Sorting Algorithms
Sorting algorithms come in various flavors depending on your necessity.
Some, very common and widely used are:
#### Quick Sort
No discussion of sorting can finish without quick sort. The basic concept is in the link below.
[Quick Sort](http://me.dt.in.th/page/Quicksort/)
#### Merge Sort
It is a sorting algorithm that relies on the concept of merging two sorted arrays into one sorted array. Read more about it here-
[Merge Sort](https://www.geeksforgeeks.org/merge-sort/)
freeCodeCamp's curriculum heavily emphasizes creating algorithms. This is because learning algorithms is a good way to practice programming skills. Interviewers most commonly test candidates on algorithms during developer job interviews.
### Further Resources
[Intro to Algorithms | Crash Course: Computer Science](https://www.youtube.com/watch?v=rL8X2mlNHPM)
This video gives an accessible and lively introduction to algorithms focusing on sorting and graph search algorithms.
[What is an Algorithm and Why Should you Care? | Khan Academy](https://www.youtube.com/watch?v=CvSOaYi89B4)
This video introduces algorithms and briefly discusses some high profile uses of them.
[15 Sorting Algorithms in 6 Minutes | Timo Bingmann](https://www.youtube.com/watch?v=kPRA0W1kECg)
This video visually demonstrates some popular sorting algorithms that are commonly taught in programming and Computer Science courses.
[Algorithm Visualizer](http://algo-visualizer.jasonpark.me)
This is also a really good open source project that helps you visualize algorithms.
[Infographic on how Machine Learning Algorithms Work](https://www.boozallen.com/content/dam/boozallen_site/sig/pdf/infographic/how-do-machines-learn.pdf)
This infographic shows you how unsupervised and supervised machine learning algorithms work.

View File

@ -0,0 +1,66 @@
---
title: Lee's Algorithm
---
## Lee's Algorithm
The Lee algorithm is one possible solution for maze routing problems. It always gives an optimal solution, if one exists, but is
slow and requires large memory for dense layouts.
### Understanding how it works
The algorithm is a `breadth-first` based algorithm that uses `queues` to store the steps. It usually uses the following steps:
1. Choose a starting point and add it to the queue.
2. Add the valid neighboring cells to the queue.
3. Remove the position you are on from the queue and continue to the next element.
4. Repeat steps 2 and 3 until the queue is empty.
### Implementation
C++ already provides a queue in the `<queue>` header, but if you are using something else you are welcome to implement
your own version of a queue.
C++ code:
```c++
int dl[] = {-1, 0, 1, 0}; // these arrays will help you travel in the 4 directions more easily
int dc[] = {0, 1, 0, -1};

queue<int> X, Y; // the queues used to get the positions in the matrix

void lee(int start_x, int start_y)
{
    X.push(start_x); // initialize the queues with the start position
    Y.push(start_y);
    int x, y, xx, yy;
    while(!X.empty()) // while there are still positions in the queue
    {
        x = X.front(); // set the current position
        y = Y.front();
        for(int i = 0; i < 4; i++)
        {
            xx = x + dl[i]; // travel to an adjacent cell from the current position
            yy = y + dc[i];
            if(is_valid(xx, yy)) // placeholder: replace with whatever validity conditions apply for your position (xx, yy)
            {
                X.push(xx); // add the position to the queue
                Y.push(yy);
                mat[xx][yy] = -1; // you usually mark that you have been to this position in the matrix
            }
        }
        X.pop(); // eliminate the first position, as you have no more use for it
        Y.pop();
    }
}
```

View File

@ -0,0 +1,87 @@
---
title: QuickSelect
---
## QuickSelect
QuickSelect is a selection algorithm to find the k-th smallest element in an unsorted list.
### Algorithm
After finding the pivot (the pivot is a position that partitions the list into two parts: every element on the left is less than the pivot and every element on the right is greater than the pivot) the algorithm recurs only for the part that contains the k-th smallest element.
If the index of the partitioned element (pivot) is more than k, the algorithm recurs for the left part. If the index (pivot) is the same as k, we have found the k-th smallest element and we return it. If the index is less than k, the algorithm recurs for the right part.
#### Selection Pseudocode
```
Input : list; left, the first position of the list; right, the last position of the list; and k, the rank of the element to find.
Output : the k-th smallest element of the list.

quickSelect(list, left, right, k)
   loop
      if left = right
         return list[left]
      // Select a pivotIndex between left and right
      pivotIndex := partition(list, left, right)
      if k = pivotIndex
         return list[k]
      else if k < pivotIndex
         right := pivotIndex - 1
      else
         left := pivotIndex + 1
```
### Partition
Partition is what finds the pivot, as mentioned above. (Every element on the left is less than the pivot and every element on the right is greater than the pivot.)
There are two common algorithms for finding the pivot of the partition:
- Lomuto Partition
- Hoare Partition
#### Lomuto Partition
This partition chooses a pivot that is typically the last element in the array. The algorithm maintains index i as it scans the array using another index j such that the elements lo through i (inclusive) are less than or equal to the pivot, and the elements i+1 through j-1 (inclusive) are greater than the pivot.
This scheme degrades to O(n^2) when the array is already in order.
```
algorithm Lomuto(A, lo, hi) is
pivot := A[hi]
i := lo
for j := lo to hi - 1 do
if A[j] < pivot then
if i != j then
swap A[i] with A[j]
i := i + 1
swap A[i] with A[hi]
return i
```
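To make the two pieces concrete, here is a runnable JavaScript sketch that combines the selection loop with the Lomuto scheme above (0-indexed, so `k = 0` asks for the smallest element):
```javascript
function lomutoPartition(a, lo, hi) {
  var pivot = a[hi], i = lo, tmp;
  for (var j = lo; j < hi; j++) {
    if (a[j] < pivot) {
      tmp = a[i]; a[i] = a[j]; a[j] = tmp; // move the smaller element left
      i++;
    }
  }
  tmp = a[i]; a[i] = a[hi]; a[hi] = tmp; // place the pivot at its final index
  return i;
}

function quickSelect(a, k) {
  var left = 0, right = a.length - 1;
  while (left < right) {
    var pivotIndex = lomutoPartition(a, left, right);
    if (k === pivotIndex) return a[k];
    if (k < pivotIndex) right = pivotIndex - 1;
    else left = pivotIndex + 1;
  }
  return a[left];
}

quickSelect([7, 10, 4, 3, 20, 15], 2); // 7, the 3rd smallest element
```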
#### Hoare Partition
Hoare uses two indices that start at the ends of the array being partitioned, then move toward each other, until they detect an inversion: a pair of elements, one greater than or equal to the pivot, one lesser or equal, that are in the wrong order relative to each other. The inverted elements are then swapped. When the indices meet, the algorithm stops and returns the final index. There are many variants of this algorithm.
```
algorithm Hoare(A, lo, hi) is
pivot := A[lo]
i := lo - 1
j := hi + 1
loop forever
do
i := i + 1
while A[i] < pivot
do
j := j - 1
while A[j] > pivot
if i >= j then
return j
swap A[i] with A[j]
```
## Time complexity
Like quicksort, the quickselect has good average performance, but is sensitive to the pivot that is chosen. If good pivots are chosen, meaning ones that consistently decrease the search set by a given fraction, then the search set decreases in size exponentially and by induction (or summing the geometric series) one sees that performance is linear, as each step is linear and the overall time is a constant times this (depending on how quickly the search set reduces). However, if bad pivots are consistently chosen, such as decreasing by only a single element each time, then worst-case performance is quadratic: O(n^2). This occurs for example in searching for the maximum element of a set, using the first element as the pivot, and having sorted data.

View File

@ -0,0 +1,36 @@
---
title: Red Black Trees
---
## Red Black Trees
Red-Black Tree is a self-balancing Binary Search Tree (BST) where every node follows these rules:
1. Every node has two children, colored either red or black.
2. Every tree leaf node is always black.
3. Every red node has both of its children colored black.
4. There are no two adjacent red nodes (a red node cannot have a red parent or red child).
5. Every path from root to a tree leaf node has the same number of black nodes (called "black height").
![alt text][fibonacci]
[fibonacci]: https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Fibonacci_Tree_as_Red-Black_Tree.svg/2000px-Fibonacci_Tree_as_Red-Black_Tree.svg.png "Fibonacci example of red black trees"
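As a sketch of how these rules can be checked mechanically, here is a small JavaScript validator; the `{ color, left, right }` node shape is an assumption made for illustration, and `null` children stand in for the black leaf nodes:
```javascript
// Returns the black height of the subtree, or -1 if a rule is violated.
function checkRedBlack(node) {
  if (node === null) return 1; // null leaves count as black (rule 2)
  if (node.color === 'red' &&
      ((node.left && node.left.color === 'red') ||
       (node.right && node.right.color === 'red'))) {
    return -1; // rules 3 and 4: a red node must not have a red child
  }
  var leftHeight = checkRedBlack(node.left);
  var rightHeight = checkRedBlack(node.right);
  if (leftHeight === -1 || rightHeight === -1 || leftHeight !== rightHeight) {
    return -1; // rule 5: equal black height on every root-to-leaf path
  }
  return leftHeight + (node.color === 'black' ? 1 : 0);
}
```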
### Why Red-Black Trees?
Most of the BST operations (e.g., search, max, min, insert, delete.. etc) take O(h) time where h is the height of the BST. The cost of these operations may become O(n) for a skewed Binary tree. If we make sure that height of the tree remains O(Logn) after every insertion and deletion, then we can guarantee an upper bound of O(Logn) for all these operations. The height of a Red Black tree is always O(Logn) where n is the number of nodes in the tree.
### Comparison with AVL Tree
The AVL trees are more balanced compared to Red Black Trees, but they may cause more rotations during insertion and deletion. So if your application involves many frequent insertions and deletions, then Red Black trees should be preferred. And if the insertions and deletions are less frequent and search is more frequent operation, then AVL tree should be preferred over Red Black Tree.
### Left-Leaning Red-Black Tree
A left-leaning red-black (LLRB) tree is a type of self-balancing binary search tree. It is a variant of the red-black tree and guarantees the same asymptotic complexity for operations, but is designed to be easier to implement.
### Properties of Left Leaning Red-Black Trees
All of the red-black tree algorithms that have been proposed are characterized by a worst-case search time bounded by a small constant multiple of log N in a tree of N keys, and the behavior observed in practice is typically that same multiple faster than the worst-case bound, close to the optimal log N nodes examined that would be observed in a perfectly balanced tree.
Specifically, in a left-leaning red-black 2-3 tree built from N random keys:
* A random successful search examines about log2 N - 0.5 nodes.
* The average tree height is about 2 log2 N.
#### More Information:
* [Video from Algorithms and Data Structures](https://www.youtube.com/watch?v=2Ae0D6EXBV4)

View File

@ -0,0 +1,342 @@
---
title: Binary Search
---
## Binary Search
A binary search locates an item in a sorted array by repeatedly dividing the search interval in half.
How do you search a name in a telephone directory?
One way would be to start from the first page and look at each name in the phonebook till we find what we are looking for. But that would be an extremely laborious and inefficient way to search.
Because we know that names in the phonebook are sorted alphabetically, we could probably work along the following steps:
1. Open the middle page of the phonebook
2. If it has the name we are looking for, we are done!
3. Otherwise, throw away the half of the phonebook that does not contain the name
4. Repeat until you find the name or there are no more pages left in the phonebook
Time complexity: As we discard one half of the search space during every step of binary search, and perform the search operation on the other half, this results in a worst case time complexity of *O*(*log<sub>2</sub>N*).
Space complexity: Binary search takes constant or *O*(*1*) space, meaning that the extra space used does not grow with the size of the input.
For small sets linear search is better, but for larger ones it is way more efficient to use binary search.
In detail, how many times can you divide N by 2 until you have 1? This is essentially saying, do a binary search (halve the elements) until you find it. In a formula this would be this:
```
1 = N / 2^x
```
Multiply by 2^x:
```
2^x = N
```
Now take the log2:
```
log2(2^x) = log2 N
x * log2(2) = log2 N
x * 1 = log2 N
```
This means you can halve N at most log2 N times until you get down to 1, which means you have to do at most log2 N binary search steps until you find your element.
The complexity is *O*(*log<sub>2</sub>N*) because at every step half of the elements in the data set are eliminated, which is reflected by the base of the logarithmic function.
This is the binary search algorithm. It is elegant and efficient but for it to work correctly, the array must be **sorted**.
---
Find 5 in the given array of numbers using binary search.
![Binary Search 1](https://i.imgur.com/QAuugOL.jpg)
Mark low, high and mid positions in the array.
![Binary Search 2](https://i.imgur.com/1710fEx.jpg)
Compare the item you are looking for with the middle element.
![Binary Search 3](https://i.imgur.com/jr4icze.jpg)
Throw away the left half and look in the right half.
![Binary Search 4](https://i.imgur.com/W57lGsk.jpg)
Again compare with the middle element.
![Binary Search 5](https://i.imgur.com/5Twm8NE.jpg)
Now, move to the left half.
![Binary Search 6](https://i.imgur.com/01xetay.jpg)
The middle element is the item we were looking for!
The binary search algorithm takes a divide-and-conquer approach where the array is continuously divided until the item is found or until there are no more elements left for checking. Hence, this algorithm can be defined recursively to generate an elegant solution.
The two base cases for recursion would be:
* No more elements left in the array
* Item is found
The Power of Binary Search in Data Systems (B+ trees):
Binary Search Trees are very powerful because of their O(log n) search times, second to the hashmap data structure which uses a hashing key to search for data in O(1). It is important to understand how the log n run time comes from the height of a binary search tree. If each node splits into two nodes (binary), then the depth of the tree is log n (base 2). In order to improve this speed in data systems, we use B+ trees because they have a larger branching factor, and therefore smaller height. I hope this short article helps expand your mind about how binary search is used in practical systems.
The code for recursive binary search is shown below:
### Javascript implementation
```javascript
function binarySearch(arr, item, low, high) {
if (low > high) { // No more elements in the array.
return null;
}
// Find the middle of the array.
var mid = Math.ceil((low + high) / 2);
if (arr[mid] === item) { // Found the item!
return mid;
}
if (item < arr[mid]) { // Item is in the half from low to mid-1.
return binarySearch(arr, item, low, mid-1);
}
else { // Item is in the half from mid+1 to high.
return binarySearch(arr, item, mid+1, high);
}
}
var numbers = [1,2,3,4,5,6,7];
console.log(binarySearch(numbers, 5, 0, numbers.length-1));
```
Here is another implementation in Javascript:
```Javascript
function binary_search(a, v) {
    function search(low, high) {
        if (low > high) {
            return false; // empty range: v is not in a
        }
        var mid = Math.floor((low + high) / 2);
        if (v === a[mid]) {
            return true;
        }
        return v < a[mid]
               ? search(low, mid - 1)
               : search(mid + 1, high);
    }
    return search(0, a.length - 1);
}
```
### Ruby implementation
```ruby
def binary_search(target, array)
sorted_array = array.sort
low = 0
high = (sorted_array.length) - 1
while high >= low
middle = (low + high) / 2
if target > sorted_array[middle]
low = middle + 1
elsif target < sorted_array[middle]
high = middle - 1
else
return middle
end
end
return nil
end
```
### Example in C
```C
int binarySearch(int arr[], int l, int r, int x) {
   if (r >= l){
        int mid = l + (r - l)/2;
        if (arr[mid] == x)
            return mid;
        if (arr[mid] > x)
            return binarySearch(arr, l, mid-1, x);
        return binarySearch(arr, mid+1, r, x);
   }
   return -1;
}
```
### C/C++ implementation
```C++
int binary_search(int arr[], int l, int r, int target)
{
while (r >= l)
{
int mid = l + (r - l)/2;
if (arr[mid] == target)
return mid;
if (arr[mid] > target) r = mid - 1;
else l = mid + 1;
}
return -1;
}
```
### Python implementation
```Python
def binary_search(arr, l, r, target):
if r >= l:
        mid = l + (r - l)//2  # integer division
if arr[mid] == target:
return mid
elif arr[mid] > target:
return binary_search(arr, l, mid-1, target)
else:
return binary_search(arr, mid+1, r, target)
else:
return -1
```
### Example in C++
```c++
// Binary Search using iteration
int binary_search(int arr[], int beg, int end, int num)
{
while(beg <= end){
int mid = (beg + end) / 2;
if(arr[mid] == num)
return mid;
else if(arr[mid] < num)
beg = mid + 1;
else
end = mid - 1;
}
return -1;
}
```
```c++
// Binary Search using recursion
int binary_search(int arr[], int beg, int end, int num)
{
if(beg <= end){
int mid = (beg + end) / 2;
if(arr[mid] == num)
return mid;
else if(arr[mid] < num)
return binary_search(arr, mid + 1, end, num);
else
return binary_search(arr, beg, mid - 1, num);
}
return -1;
}
```
### Example in C++
Recursive approach!
```c++
int binarySearch(int arr[], int start, int end, int x)
{
if (end >= start)
{
int mid = start + (end - start)/2;
if (arr[mid] == x)
return mid;
if (arr[mid] > x)
return binarySearch(arr, start, mid-1, x);
return binarySearch(arr, mid+1, end, x);
}
return -1;
}
```
Iterative approach!
```c++
int binarySearch(int arr[], int start, int end, int x)
{
while (start <= end)
{
int mid = start + (end - start)/2;
if (arr[mid] == x)
return mid;
if (arr[mid] < x)
start = mid + 1;
else
end = mid - 1;
}
return -1;
}
```
### Example in Swift
```Swift
func binarySearch(for number: Int, in numbers: [Int]) -> Int? {
var lowerBound = 0
var upperBound = numbers.count
while lowerBound < upperBound {
let index = lowerBound + (upperBound - lowerBound) / 2
if numbers[index] == number {
return index // we found the given number at this index
} else if numbers[index] < number {
lowerBound = index + 1
} else {
upperBound = index
}
}
return nil // the given number was not found
}
```
### Example in Java
```Java
// Iterative Approach in Java
int binarySearch(int[] arr, int start, int end, int element)
{
while(start <= end)
{
int mid = ( start + end ) / 2;
if(arr[mid] == element)
return mid;
if(arr[mid] < element)
start = mid+1;
else
end = mid-1;
}
return -1;
}
```
```Java
// Recursive Approach in Java
int binarySearch(int[] arr, int start, int end, int element)
{
    if(start > end) // base case: element is not present
        return -1;
    int mid = ( start + end ) / 2;
    if(arr[mid] == element)
        return mid;
    if(arr[mid] < element)
        return binarySearch( arr , mid + 1 , end , element );
    else
        return binarySearch( arr, start, mid - 1 , element);
}
```
### More Information
* [Binary search (YouTube video)](https://youtu.be/P3YID7liBug)
* [Binary Search - CS50](https://www.youtube.com/watch?v=5xlIPT1FRcA)
* [Binary Search - MyCodeSchool](https://www.youtube.com/watch?v=j5uXyPJ0Pew&list=PL2_aWCzGMAwL3ldWlrii6YeLszojgH77j)

View File

@ -0,0 +1,92 @@
---
title: Exponential Search
---
## Exponential Search
Exponential Search, also known as finger search, searches for an element in a sorted array by jumping `2^i` elements every iteration, where i represents the
value of the loop control variable, and then verifying if the search element is present between the last jump and the current jump.
# Complexity Worst Case
O(log(N))
Often confused because of the name, the algorithm is named so not because of the time complexity.
The name arises as a result of the algorithm jumping elements with steps equal to exponents of 2.
# Works
1. Jump the array `2^i` elements at a time searching for the condition `Array[2^(i-1)] < valueWanted < Array[2^i]`. If `2^i` is greater than the length of the array, then set the upper bound to the length of the array.
2. Do a binary search between `Array[2^(i-1)]` and `Array[2^i]`
# Code
```c++
// C++ program to find an element x in a
// sorted array using Exponential search.
#include <bits/stdc++.h>
using namespace std;
int binarySearch(int arr[], int, int, int);
// Returns the position of the first occurrence of
// x in array
int exponentialSearch(int arr[], int n, int x)
{
    // If x is present at the first location itself
if (arr[0] == x)
return 0;
// Find range for binary search by
// repeated doubling
int i = 1;
while (i < n && arr[i] <= x)
i = i*2;
// Call binary search for the found range.
    return binarySearch(arr, i/2, min(i, n-1), x);
}
// A recursive binary search function. It returns
// location of x in given array arr[l..r] is
// present, otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
if (r >= l)
{
int mid = l + (r - l)/2;
// If the element is present at the middle
// itself
if (arr[mid] == x)
return mid;
// If element is smaller than mid, then it
        // can only be present in the left subarray
if (arr[mid] > x)
return binarySearch(arr, l, mid-1, x);
// Else the element can only be present
// in right subarray
return binarySearch(arr, mid+1, r, x);
}
// We reach here when element is not present
// in array
return -1;
}
int main(void)
{
int arr[] = {2, 3, 4, 10, 40};
int n = sizeof(arr)/ sizeof(arr[0]);
int x = 10;
int result = exponentialSearch(arr, n, x);
(result == -1)? printf("Element is not present in array")
: printf("Element is present at index %d", result);
return 0;
}
```
# More Information
- <a href='https://en.wikipedia.org/wiki/Exponential_search' target='_blank' rel='nofollow'>Wikipedia</a>
- <a href='https://www.geeksforgeeks.org/exponential-search/' target='_blank' rel='nofollow'>GeeksForGeeks</a>
# Credits
[C++ Implementation](https://www.wikitechy.com/technology/exponential-search/)

View File

@ -0,0 +1,20 @@
---
title: Search Algorithms
---
## Search Algorithms
In computer science, a search algorithm is any algorithm which solves the Search problem, namely, to retrieve information stored within some data structure or calculated in the search space of a problem domain. Examples of such structures include Linked Lists, Array data structures, Search trees and many more. The appropriate search algorithm often depends on the data structure being searched but also on prior knowledge about the data.
<a href='https://en.wikipedia.org/wiki/Search_algorithm' target='_blank' rel='nofollow'>More on wikipedia</a>.
A closely related kind of algorithm looks at the problem of re-arranging an array of items into ascending order, since many search algorithms (such as binary search) rely on sorted input; merge sort is a classical example of such a sorting algorithm.
In the following links you can also find more information about:
* <a href="">Binary</a> Search
* <a href="">Linear</a> Search
* Searching <a href="">linked lists vs arrays</a>
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->
* MIT OCW Introduction to <a href="https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-01sc-introduction-to-electrical-engineering-and-computer-science-i-spring-2011/unit-4-probability-and-planning/search-algorithms/">search</a> algorithms.
* Princeton University: <a href="https://introcs.cs.princeton.edu/java/42sort/">Sorting and Searching.</a>
* The anatomy of a search engine <a href="http://infolab.stanford.edu/~backrub/google.html">(Google).</a>

View File

@ -0,0 +1,26 @@
---
title: Jump Search
---
## Jump Search
A jump search locates an item in a sorted array by jumping k items at a time and then verifying if the item wanted is between
the previous jump and the current jump.
# Complexity Worst Case
O(√N)
# Works
1. Define the value of k, the jump size: the optimal jump size is √N, where N is the length of the array
2. Jump the array k by k, searching for the condition `Array[i] < valueWanted < Array[i+k]`
3. Do a linear search between `Array[i]` and `Array[i + k]`
![Jumping Search 1](https://i1.wp.com/theoryofprogramming.com/wp-content/uploads/2016/11/jump-search-1.jpg?resize=676%2C290)
# Code
To view examples of code implementation of this method access this link below:
[Jump Search - OpenGenus/cosmos](https://github.com/OpenGenus/cosmos/tree/master/code/search/jump_search)
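Alternatively, here is a minimal JavaScript sketch of the steps above (returns the index of the item, or -1 if it is absent):
```javascript
function jumpSearch(arr, target) {
  var n = arr.length;
  var step = Math.floor(Math.sqrt(n)); // optimal jump size is sqrt(N)
  var prev = 0;
  // Jump block by block until we reach a block whose last element is >= target.
  while (prev < n && arr[Math.min(prev + step, n) - 1] < target) {
    prev += step;
  }
  // Linear search inside the block [prev, prev + step).
  for (var i = prev; i < Math.min(prev + step, n); i++) {
    if (arr[i] === target) return i;
  }
  return -1;
}

jumpSearch([1, 3, 5, 7, 9, 11, 13], 9); // 4
```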
# Credits
[The logic's array image](http://theoryofprogramming.com/2016/11/10/jump-search-algorithm/)

View File

@ -0,0 +1,162 @@
---
title: Linear Search
---
## Linear Search
Suppose you are given a list or an array of items. You are searching for a particular item. How do you do that?
Find the number 13 in the given list.
![Linear Search 1](https://i.imgur.com/ThkzYEV.jpg)
You just look at the list and there it is!
![Linear Search 2](https://i.imgur.com/K7HfCly.jpg)
Now, how do you tell a computer to find it?
A computer cannot look at more than one value at a given instant of time. So it takes one item from the array and checks if it is the same as what you are looking for.
![Linear Search 3](https://i.imgur.com/ZOSxeZD.jpg)
The first item did not match. So move onto the next one.
![Linear Search 4](https://i.imgur.com/SwKsPxD.jpg)
And so on...
This is done till a match is found or until all the items have been checked.
![Linear Search 5](https://i.imgur.com/3AaViff.jpg)
In this algorithm, you can stop when the item is found and then there is no need to look further.
So how long would it take to do the linear search operation?
In the best case, you could get lucky and the item you are looking for may be at the first position in the array!
But in the worst case, you would have to look at each and every item before you find the item at the last place or before you realize that the item is not in the array.
The complexity of the linear search is therefore O(n).
If the element to be searched resides in the first memory block, the complexity would be O(1).
The code for a linear search function in JavaScript is shown below. This function returns the position of the item we are looking for in the array. If the item is not present in the array, the function would return null.
### Example in Javascript
```javascript
function linearSearch(arr, item) {
// Go through all the elements of arr to look for item.
for (var i = 0; i < arr.length; i++) {
if (arr[i] === item) { // Found it!
return i;
}
}
// Item not found in the array.
return null;
}
```
### Example in Ruby
```ruby
def linear_search(target, array)
counter = 0
while counter < array.length
if array[counter] == target
return counter
else
counter += 1
end
end
return nil
end
```
### Example in C++
```c++
int linear_search(int arr[],int n,int num)
{
for(int i=0;i<n;i++){
if(arr[i]==num)
return i;
}
// Item not found in the array
return -1;
}
```
### Example in Python
```python
def linear_search(array, num):
for i in range(len(array)):
if (array[i]==num):
return i
return -1
```
### Example in Swift
```swift
func linearSearch(for number: Int, in array: [Int]) -> Int? {
for (index, value) in array.enumerated() {
if value == number { return index } // return the index of the number
}
return nil // the number was not found in the array
}
```
### Example in Java
```java
int linearSearch(int[] arr, int element)
{
for(int i=0;i<arr.length;i++)
{
if(arr[i] == element)
return i;
}
return -1;
}
```
## Global Linear Search
What if you are searching for multiple occurrences of an element? For example, you want to see how many 5s are in an array.
Target = 5
Array = [ 1, 2, 3, 4, 5, 6, 5, 7, 8, 9, 5]
This array has 3 occurrences of 5 and we want to return the indexes (where they are in the array) of all of them. This is called global linear search, and you will need to adjust your code to return an array of the index points at which it finds your target element. When you find an index element that matches your target, the index point (counter) will be added to the results array. If it doesn't match, the code will continue to move on to the next element in the array by adding 1 to the counter.
```ruby
def global_linear_search(target, array)
counter = 0
results = []
while counter < array.length
if array[counter] == target
results << counter
counter += 1
else
counter += 1
end
end
if results.empty?
return nil
else
return results
end
end
```
## Why linear search is not efficient
There is no doubt that linear search is simple, but because it compares each element one by one, it is time consuming and hence not very efficient. If we have to find a number among, say, 1,000,000 numbers and the number is at the last location, the linear search technique would become quite tedious. So, also learn about bubble sort, quick sort, etc.
#### Other Resources
<!-- Please add any articles you think might be helpful to read before writing the article -->
<a href='https://www.youtube.com/watch?v=vZWfKBdSgXI' target='_blank' rel='nofollow'>Linear Search - CS50</a>

View File

@ -0,0 +1,19 @@
---
title: Searching Linked Lists Versus Arrays
---
## Searching Linked Lists Versus Arrays
Suppose you have to search for an element in an *unsorted* linked list and array. In that case, you need to do a linear search (remember, unsorted). Doing a linear search for an element in either data structure will be an O(n) operation.
Now if you have a *sorted* array, you can search for an element in O(log n) time using Binary Search. A *sorted* linked list can be binary searched too, but without random access, reaching the middle element takes O(n) steps, so it is tedious to code and loses most of the speed advantage.
Linked lists are usually preferred over arrays where insertion is a frequent operation. It's easier to insert in linked lists as only a pointer changes. But to insert in an array (the middle or beginning), you need to move all the elements after the one that you insert. Another place where you should use linked lists is where size is uncertain (you don't know the size when you are starting out), because arrays have fixed size.
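To make the insertion trade-off concrete, here is a small JavaScript sketch (the `Node` shape is just an illustration):
```javascript
// A minimal singly linked list node.
function Node(value, next) {
  this.value = value;
  this.next = next || null;
}

// Linked list: inserting after a known node is O(1); only one pointer changes.
function insertAfter(node, value) {
  node.next = new Node(value, node.next);
}

// Array: inserting in the middle is O(n); every later element must shift right.
function insertAt(arr, index, value) {
  for (var i = arr.length; i > index; i--) {
    arr[i] = arr[i - 1];
  }
  arr[index] = value;
}
```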
Arrays do provide a few advantages over linked lists:
1. Random access
2. Less memory as compared to linked lists
3. Arrays have better cache locality, thus providing better performance
It completely depends on the use case for whether arrays or linked lists are better.
### More Information:
- A Programmer's Approach of Looking at Linked List vs Array: <a href='http://www.geeksforgeeks.org/programmers-approach-looking-array-vs-linked-list/' target='_blank' rel='nofollow'>Geeks for Geeks</a>

View File

@ -0,0 +1,172 @@
---
title: Bubble Sort
---
## Bubble Sort
Bubble Sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent elements that are in the wrong order.
This is a very slow sorting algorithm compared to algorithms like quicksort, with worst-case complexity O(n^2). However, the tradeoff is that bubble sort is one of the easiest sorting algorithms to implement from scratch.
### Example:
#### First Pass:
( 5 1 4 2 8 ) -> ( 1 5 4 2 8 ), Here, the algorithm compares the first two elements and swaps them since 5 > 1.
( 1 5 4 2 8 ) -> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) -> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) -> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), the algorithm does not swap them.
#### Second Pass:
( 1 4 2 5 8 ) -> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) -> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) -> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) -> ( 1 2 4 5 8 )
Now, the array is already sorted, but our algorithm does not know if it is completed. The algorithm needs one whole pass without any swap to know it is sorted.
#### Third Pass:
( 1 2 4 5 8 ) -> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) -> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) -> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) -> ( 1 2 4 5 8 )
#### Properties
- Space complexity: O(1)
- Best case performance: O(n)
- Average case performance: O(n\*n)
- Worst case performance: O(n\*n)
- Stable: Yes
### Video Explanation
[Bubble sort in easy way](https://www.youtube.com/watch?v=Jdtq5uKz-w4)
-----
### Example in JavaScript
```js
let arr = [1, 4, 7, 45, 7,43, 44, 25, 6, 4, 6, 9];
let sorted = false
while(!sorted) {
sorted = true
  for(var i = 1; i < arr.length; i++) { // start at 1 so arr[i-1] is always defined
    if(arr[i] < arr[i-1]) {
let temp = arr[i];
arr[i] = arr[i-1];
arr[i-1] = temp;
sorted = false;
}
}
}
```
### Example in Java
```java
public class BubbleSort {
    static void bubbleSort(int[] arr) {
        int n = arr.length;
        int temp = 0;
        for(int i=0; i < n; i++){
            for(int x=1; x < (n-i); x++){
                if(arr[x-1] > arr[x]){
                    temp = arr[x-1];
                    arr[x-1] = arr[x];
                    arr[x] = temp;
                }
            }
        }
    }
    public static void main(String[] args) {
        int[] arr = new int[15];
        for(int i=0; i < arr.length; i++){
            arr[i] = (int)(Math.random() * 100 + 1);
        }
        System.out.println("array before sorting\n");
        for(int i=0; i < arr.length; i++){
            System.out.print(arr[i] + " ");
        }
        bubbleSort(arr);
        System.out.println("\n array after sorting\n");
        for(int i=0; i < arr.length; i++){
            System.out.print(arr[i] + " ");
        }
    }
}
```
### Example in C++
```c++
// Recursive Implementation
void bubblesort(int arr[], int n)
{
    if(n==1) // base case: a single element is already sorted
return;
bool swap_flag = false;
for(int i=0;i<n-1;i++) //After this pass the largest element will move to its desired location.
{
if(arr[i]>arr[i+1])
{
int temp=arr[i];
arr[i]=arr[i+1];
arr[i+1]=temp;
swap_flag = true;
}
}
    // If no two elements were swapped in the loop, then return, as the array is sorted
if(swap_flag == false)
return;
bubblesort(arr,n-1); //Recursion for remaining array
}
```
### Example in Swift
```swift
func bubbleSort(_ inputArray: [Int]) -> [Int] {
guard inputArray.count > 1 else { return inputArray } // make sure our input array has more than 1 element
var numbers = inputArray // function arguments are constant by default in Swift, so we make a copy
for i in 0..<(numbers.count - 1) {
for j in 0..<(numbers.count - i - 1) {
if numbers[j] > numbers[j + 1] {
numbers.swapAt(j, j + 1)
}
}
}
return numbers // return the sorted array
}
```
### Example in Python
```py
def bubblesort( A ):
for i in range( len( A ) ):
for k in range( len( A ) - 1, i, -1 ):
if ( A[k] < A[k - 1] ):
swap( A, k, k - 1 )
def swap( A, x, y ):
tmp = A[x]
A[x] = A[y]
A[y] = tmp
```
### More Information
<!-- Please add any articles you think might be helpful to read before writing the article -->
- [Wikipedia](https://en.wikipedia.org/wiki/Bubble_sort)
- [Bubble Sort Algorithm - CS50](https://youtu.be/Ui97-_n5xjo)
- [Bubble Sort Algorithm - GeeksForGeeks (article)](http://www.geeksforgeeks.org/bubble-sort)
- [Bubble Sort Algorithm - MyCodeSchool (video)](https://www.youtube.com/watch?v=Jdtq5uKz-w4)
- [Algorithms: Bubble Sort - HackerRank (video)](https://www.youtube.com/watch?v=6Gv8vg0kcHc)
- [Bubble Sort Algorithm - GeeksForGeeks (video)](https://www.youtube.com/watch?v=nmhjrI-aW5o)
- [Bubble Sort Visualization](https://www.hackerearth.com/practice/algorithms/sorting/bubble-sort/visualize/)

View File

@ -0,0 +1,51 @@
---
title: Bucket Sort
---
## What is Bucket Sort ?
Bucket sort is a distribution-based sorting algorithm that operates on elements by dividing them into different buckets and then sorting these buckets
individually. Each bucket is sorted individually using a separate sorting algorithm or by applying the bucket sort algorithm recursively.
Bucket sort is mainly useful when the input is uniformly distributed over a range.
## Assume one has the following problem in front of them:
One has been given a large array of floating point numbers, lying uniformly between the lower and upper bound. This array now needs to be
sorted. A simple way to solve this problem would be to use another sorting algorithm such as Merge sort, Heap Sort or Quick Sort. However,
being comparison-based, these algorithms take at least O(N logN) time.
However, using bucket sort, the above task can be completed, on average, in O(N) time.
Let's have a closer look at it.
Consider that one needs to create an array of lists, i.e. of buckets. Elements now need to be inserted into these buckets on the basis of their properties.
Each of these buckets can then be sorted individually using Insertion Sort.
### Pseudo Code for Bucket Sort:
```
void bucketSort(float[] a,int n)
{
    for(each floating point number 'x' in a)
{
        insert x into bucket[n*x]; // assumes x lies in [0, 1)
}
for(each bucket)
{
sort(bucket);
}
}
```
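Here is a runnable JavaScript sketch of the same idea, under the assumption that the input values are uniformly distributed in [0, 1):
```javascript
function bucketSort(arr) {
  var n = arr.length;
  if (n === 0) return arr;
  // Create n empty buckets.
  var buckets = [];
  for (var i = 0; i < n; i++) buckets.push([]);
  // Scatter: for x in [0, 1), the index n * x falls in [0, n).
  arr.forEach(function (x) {
    buckets[Math.floor(n * x)].push(x);
  });
  // Sort each bucket individually, then gather.
  var result = [];
  buckets.forEach(function (bucket) {
    bucket.sort(function (a, b) { return a - b; });
    result = result.concat(bucket);
  });
  return result;
}

bucketSort([0.42, 0.32, 0.33, 0.52, 0.37, 0.47, 0.51]);
// [0.32, 0.33, 0.37, 0.42, 0.47, 0.51, 0.52]
```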
### More Information:
- [Wikipedia](https://en.wikipedia.org/wiki/Bucket_sort)
- [GeeksForGeeks](http://www.geeksforgeeks.org/bucket-sort-2/)

View File

@ -0,0 +1,56 @@
---
title: Counting Sort
---
## Counting Sort
Counting Sort is a sorting technique based on keys between a specific range. It works by counting the number of objects having distinct key values (a kind of hashing), then doing some arithmetic to calculate the position of each object in the output sequence.
### Example:
```
For simplicity, consider the data in the range 0 to 9.
Input data: 1, 4, 1, 2, 7, 5, 2
1) Take a count array to store the count of each unique object.
Index: 0 1 2 3 4 5 6 7 8 9
Count: 0 2 2 0 1 1 0 1 0 0
2) Modify the count array such that each element at each index
stores the sum of previous counts.
Index: 0 1 2 3 4 5 6 7 8 9
Count: 0 2 4 4 5 6 6 7 7 7
The modified count array indicates the position of each object in
the output sequence.
3) Output each object from the input sequence followed by
decreasing its count by 1.
Process the input data: 1, 4, 1, 2, 7, 5, 2. Position of 1 is 2.
Put data 1 at index 2 in output. Decrease count by 1 to place
next data 1 at an index 1 smaller than this index.
```
### Implementation
```js
let numbers = [1, 4, 1, 2, 7, 5, 2];
let count = [];
let i, z = 0;
let max = Math.max(...numbers);
// initialize counter
for (i = 0; i <= max; i++) {
count[i] = 0;
}
for (i=0; i < numbers.length; i++) {
count[numbers[i]]++;
}
for (i = 0; i <= max; i++) {
while (count[i]-- > 0) {
numbers[z++] = i;
}
}
// output sorted array
for (i=0; i < numbers.length; i++) {
console.log(numbers[i]);
}
```
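The snippet above rebuilds the output directly from the counts, which is fine for plain numbers. A sketch of the stable, prefix-sum variant described in the example (the one that preserves the relative order of equal keys) might look like this:
```javascript
function countingSortStable(input, maxKey) {
  var count = new Array(maxKey + 1).fill(0);
  input.forEach(function (x) { count[x]++; }); // 1) count each key
  for (var i = 1; i <= maxKey; i++) {
    count[i] += count[i - 1]; // 2) prefix sums give final positions
  }
  var output = new Array(input.length);
  // 3) Walk the input backwards so equal keys keep their relative order.
  for (var j = input.length - 1; j >= 0; j--) {
    output[--count[input[j]]] = input[j];
  }
  return output;
}

countingSortStable([1, 4, 1, 2, 7, 5, 2], 9); // [1, 1, 2, 2, 4, 5, 7]
```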

View File

@ -0,0 +1,128 @@
---
title: Heapsort
---
## Heapsort
Heapsort is an efficient sorting algorithm based on the use of max/min heaps. A heap is a tree-based data structure that satisfies the heap property -- that is for a max heap, the key of any node is less than or equal to the key of its parent (if it has a parent). This property can be leveraged to access the maximum element in the heap in O(logn) time using the maxHeapify method. We perform this operation n times, each time moving the maximum element in the heap to the top of the heap and extracting it from the heap and into a sorted array. Thus, after n iterations we will have a sorted version of the input array. This algorithm runs in O(nlogn) time and O(1) additional space [O(n) including the space required to store the input data] since all operations are performed entirely in-place.
The best, worst, and average case time complexity of Heapsort is O(nlogn). Although heapsort has a better worst-case complexity than quicksort, a well-implemented quicksort runs faster in practice. This is a comparison-based algorithm, so it can be used for non-numerical data sets insofar as some relation (heap property) can be defined over the elements.
An implementation in Java is as shown below :
```java
import java.util.Arrays;
public class Heapsort {
public static void main(String[] args) {
//test array
Integer[] arr = {1, 4, 3, 2, 64, 3, 2, 4, 5, 5, 2, 12, 14, 5, 3, 0, -1};
String[] strarr = {"hope you find this helpful!", "wef", "rg", "q2rq2r", "avs", "erhijer0g", "ewofij", "gwe", "q", "random"};
arr = heapsort(arr);
strarr = heapsort(strarr);
System.out.println(Arrays.toString(arr));
System.out.println(Arrays.toString(strarr));
}
//O(nlogn) TIME, O(1) SPACE, NOT STABLE
public static <E extends Comparable<E>> E[] heapsort(E[] arr){
int heaplength = arr.length;
for(int i = arr.length/2; i>0;i--){
arr = maxheapify(arr, i, heaplength);
}
for(int i=arr.length-1;i>=0;i--){
E max = arr[0];
arr[0] = arr[i];
arr[i] = max;
heaplength--;
arr = maxheapify(arr, 1, heaplength);
}
return arr;
}
//Creates maxheap from array
public static <E extends Comparable<E>> E[] maxheapify(E[] arr, Integer node, Integer heaplength){
Integer left = node*2;
Integer right = node*2+1;
Integer largest = node;
if(left.compareTo(heaplength) <=0 && arr[left-1].compareTo(arr[node-1]) >= 0){
largest = left;
}
if(right.compareTo(heaplength) <= 0 && arr[right-1].compareTo(arr[largest-1]) >= 0){
largest = right;
}
if(largest != node){
E temp = arr[node-1];
arr[node-1] = arr[largest-1];
arr[largest-1] = temp;
maxheapify(arr, largest, heaplength);
}
return arr;
}
}
```
An implementation in C++:
```C++
#include <iostream>
using namespace std;
void heapify(int arr[], int n, int i)
{
int largest = i;
int l = 2*i + 1;
int r = 2*i + 2;
if (l < n && arr[l] > arr[largest])
largest = l;
if (r < n && arr[r] > arr[largest])
largest = r;
if (largest != i)
{
swap(arr[i], arr[largest]);
heapify(arr, n, largest);
}
}
void heapSort(int arr[], int n)
{
for (int i = n / 2 - 1; i >= 0; i--)
heapify(arr, n, i);
for (int i=n-1; i>=0; i--)
{
swap(arr[0], arr[i]);
heapify(arr, i, 0);
}
}
void printArray(int arr[], int n)
{
for (int i=0; i<n; ++i)
cout << arr[i] << " ";
cout << "\n";
}
int main()
{
int arr[] = {12, 11, 13, 5, 6, 7};
int n = sizeof(arr)/sizeof(arr[0]);
heapSort(arr, n);
cout << "Sorted array is \n";
printArray(arr, n);
}
```
### Visualization
* <a href='https://www.cs.usfca.edu/~galles/visualization/HeapSort.html'>USFCA</a>
* <a href='https://www.hackerearth.com/practice/algorithms/sorting/heap-sort/tutorial/'>HackerEarth</a>
#### More Information:
- <a href='https://en.wikipedia.org/wiki/Heapsort' target='_blank' rel='nofollow'>Wikipedia</a>

View File

@ -0,0 +1,55 @@
---
title: Sorting Algorithms
---
## Sorting Algorithms
Sorting algorithms are a set of instructions that take an array or list as an input and arrange the items into a particular order.
Sorts are most commonly in numerical or a form of alphabetical (called lexicographical) order, and can be in ascending (A-Z, 0-9) or descending (Z-A, 9-0) order.
### Why Sorting Algorithms are Important
Since sorting can often reduce the complexity of a problem, it is an important algorithm in Computer Science. These algorithms have direct applications in searching algorithms, database algorithms, divide and conquer methods, data structure algorithms, and many more.
### Some Common Sorting Algorithms
Some of the most common sorting algorithms are:
* Selection Sort
* Bubble Sort
* Insertion Sort
* Merge Sort
* Quick Sort
* Heap Sort
* Counting Sort
* Radix Sort
* Bucket Sort
### Classification of Sorting Algorithm
Sorting algorithms can be categorized based on the following parameters:
1. Based on Number of Swaps or Inversion
This is the number of times the algorithm swaps elements to sort the input. `Selection Sort` requires the minimum number of swaps.
2. Based on Number of Comparisons
This is the number of times the algorithm compares elements to sort the input. Using <a href='https://guide.freecodecamp.org/computer-science/notation/big-o-notation/' target='_blank' rel='nofollow'>Big-O notation</a>, the sorting algorithm examples listed above require at least `O(nlogn)` comparisons in the best case and `O(n^2)` comparisons in the worst case for most of the outputs.
3. Based on Recursion or Non-Recursion
Some sorting algorithms, such as `Quick Sort`, use recursive techniques to sort the input. Other sorting algorithms, such as `Selection Sort` or `Insertion Sort`, use non-recursive techniques. Finally, some sorting algorithms, such as `Merge Sort`, make use of both recursive as well as non-recursive techniques to sort the input.
4. Based on Stability
Sorting algorithms are said to be `stable` if the algorithm maintains the relative order of elements with equal keys. In other words, two equivalent elements remain in the same order in the sorted output as they were in the input.
* `Insertion sort`, `Merge Sort`, and `Bubble Sort` are stable
* `Heap Sort` and `Quick Sort` are not stable
5. Based on Extra Space Requirement
Sorting algorithms are said to be `in place` if they require a constant `O(1)` extra space for sorting.
* `Insertion Sort` and `Quick Sort` are `in place` sorts, since we move the elements about the pivot (or key) and do not use a separate array. This is NOT the case in `Merge Sort`, where an extra array the size of the input must be allocated beforehand to store the output during the sort.
* `Merge Sort` is an example of an `out of place` sort, as it requires extra memory space for its operations.
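To see stability in action, here is a minimal Python sketch; it relies only on the fact that Python's built-in `sorted` is stable, and the records are made-up sample data:
```python
# Two records share the key 1 and two share the key 2.
records = [("apple", 2), ("banana", 1), ("cherry", 2), ("date", 1)]

# Python's built-in sorted() is a stable sort.
by_count = sorted(records, key=lambda r: r[1])
print(by_count)
# [('banana', 1), ('date', 1), ('apple', 2), ('cherry', 2)]
# Records with equal keys keep their original relative order:
# 'banana' still comes before 'date', and 'apple' before 'cherry'.
```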
### Best possible time complexity for any comparison based sorting
Any comparison-based sorting algorithm must make at least `n log2 n` comparisons to sort the input array; Heap Sort and Merge Sort are asymptotically optimal comparison sorts. This can be proved by drawing the decision tree diagram: a decision tree for sorting n elements has n! leaves, so its height, which is the worst-case number of comparisons, is at least log2(n!) ≈ n log2 n.
### Algorithmic Paradigm
Merge Sort and Quick Sort are based on the divide and conquer paradigm.

View File

@ -0,0 +1,182 @@
---
title: Insertion Sort
---
## Insertion Sort
Insertion sort is one of the simplest sorting algorithms, and it is efficient for small numbers of elements.
### Example:
In insertion sort, you compare the `key` element with the previous elements. If the previous elements are greater than the `key` element, you move each of them one position ahead and insert the `key` into the gap.
Start from index 1 and continue to the end of the input array.
[ 8 3 5 1 4 2 ]
Step 1 :
![[ 8 3 5 1 4 2 ]](https://github.com/blulion/freecodecamp-resource/blob/master/insertion_sort/1.png?raw=true)
```
key = 3 //starting from 1st index.
Here `key` will be compared with the previous elements.
In this case, `key` is compared with 8. since 8 > 3, move the element 8
to the next position and insert `key` to the previous position.
Result: [ 3 8 5 1 4 2 ]
```
Step 2 :
![[ 3 8 5 1 4 2 ]](https://github.com/blulion/freecodecamp-resource/blob/master/insertion_sort/2.png?raw=true)
```
key = 5 //2nd index
8 > 5 //move 8 to 2nd index and insert 5 to the 1st index.
Result: [ 3 5 8 1 4 2 ]
```
Step 3 :
![[ 3 5 8 1 4 2 ]](https://github.com/blulion/freecodecamp-resource/blob/master/insertion_sort/3.png?raw=true)
```
key = 1 //3rd index
8 > 1 => [ 3 5 1 8 4 2 ]
5 > 1 => [ 3 1 5 8 4 2 ]
3 > 1 => [ 1 3 5 8 4 2 ]
Result: [ 1 3 5 8 4 2 ]
```
Step 4 :
![[ 1 3 5 8 4 2 ]](https://github.com/blulion/freecodecamp-resource/blob/master/insertion_sort/4.png?raw=true)
```
key = 4 //4th index
8 > 4 => [ 1 3 5 4 8 2 ]
5 > 4 => [ 1 3 4 5 8 2 ]
3 > 4 ≠> stop
Result: [ 1 3 4 5 8 2 ]
```
Step 5 :
![[ 1 3 4 5 8 2 ]](https://github.com/blulion/freecodecamp-resource/blob/master/insertion_sort/5.png?raw=true)
```
key = 2 //5th index
8 > 2 => [ 1 3 4 5 2 8 ]
5 > 2 => [ 1 3 4 2 5 8 ]
4 > 2 => [ 1 3 2 4 5 8 ]
3 > 2 => [ 1 2 3 4 5 8 ]
1 > 2 ≠> stop
Result: [1 2 3 4 5 8]
```
![[ 1 2 3 4 5 8 ]](https://github.com/blulion/freecodecamp-resource/blob/master/insertion_sort/6.png?raw=true)
The algorithm below is a slightly optimized version that avoids swapping the `key` element in every iteration. Instead, the `key` element is written into its final position at the end of each iteration (step).
```Algorithm
InsertionSort(arr[])
   for j = 1 to arr.length - 1
       key = arr[j]
       i = j - 1
       while i >= 0 and arr[i] > key
           arr[i+1] = arr[i]
           i = i - 1
       arr[i+1] = key
```
Here is a detailed implementation in JavaScript:
```javascript
function insertion_sort(A) {
    var len = A.length;
var i = 1;
while (i < len) {
var x = A[i];
var j = i - 1;
while (j >= 0 && A[j] > x) {
A[j + 1] = A[j];
j = j - 1;
}
A[j+1] = x;
i = i + 1;
}
}
```
A quick implementation in Swift is shown below:
```swift
var array = [8, 3, 5, 1, 4, 2]
func insertionSort(array:inout Array<Int>) -> Array<Int>{
    for j in 1..<array.count {
let key = array[j]
var i = j-1
        while (i >= 0 && array[i] > key){
array[i+1] = array[i]
i = i-1
}
array[i+1] = key
}
return array
}
```
The Java example is shown below:
```java
public int[] insertionSort(int[] arr) {
    for (int j = 1; j < arr.length; j++) {
        int key = arr[j];
        int i = j - 1;
        while (i >= 0 && arr[i] > key) {
            arr[i + 1] = arr[i];
            i -= 1;
        }
        arr[i + 1] = key;
    }
    return arr;
}
```
### Implementation in C
```C
void insertionSort(int arr[], int n)
{
int i, key, j;
for (i = 1; i < n; i++)
{
key = arr[i];
j = i-1;
while (j >= 0 && arr[j] > key)
{
arr[j+1] = arr[j];
j = j-1;
}
arr[j+1] = key;
}
}
```
### Properties:
* Space Complexity: O(1)
* Time Complexity: O(n), O(n^2), O(n^2) for best, average, and worst cases respectively
* Sorting In Place: Yes
* Stable: Yes
#### Other Resources:
- [Wikipedia](https://en.wikipedia.org/wiki/Insertion_sort)
- [CS50 - YouTube](https://youtu.be/TwGb6ohsvUU)
- [SortInsertion - GeeksforGeeks, YouTube](https://www.youtube.com/watch?v=wObxd4Kx8sE)
- [Insertion Sort Visualization](https://www.hackerearth.com/practice/algorithms/sorting/insertion-sort/visualize/)
- [Insertion Sort - MyCodeSchool](https://www.youtube.com/watch?v=i-SKeOcBwko)

View File

@ -0,0 +1,254 @@
---
title: Merge Sort
---
## Merge Sort
Merge Sort is a <a href='https://guide.freecodecamp.org/algorithms/divide-and-conquer-algorithms' target='_blank' rel='nofollow'>Divide and Conquer</a> algorithm. It divides the input array into two halves, calls itself for each half, and then merges the two sorted halves. The core of the algorithm is the merge step: given two sorted arrays, we have to merge them into a single sorted array. There is something known as the <a href='http://www.geeksforgeeks.org/merge-two-sorted-arrays/' target='_blank' rel='nofollow'>Two Finger Algorithm</a> that helps us merge two sorted arrays together. Using this subroutine and calling the merge sort function on the array halves recursively gives us the final sorted array we are looking for.
Since this is a recursion based algorithm, we have a recurrence relation for it. A recurrence relation is simply a way of representing a problem in terms of its subproblems.
``` T(n) = 2 * T(n / 2) + O(n) ```
Putting it in plain English, at every step we break the problem down into two halves, and we have a linear amount of work to do to merge the two sorted halves back together.
```
T(n) = 2T(n/2) + n
= 2(2T(n/4) + n/2) + n
= 4T(n/4) + n + n
= 4(2T(n/8) + n/4) + n + n
= 8T(n/8) + n + n + n
= nT(n/n) + n + ... + n + n + n
= n + n + ... + n + n + n
```
Counting the number of repetitions of n in the sum at the end, we see that there are lg n + 1 of them. Thus the running time is n(lg n + 1) = n lg n + n. We observe that n lg n + n < n lg n + n lg n = 2n lg n for n>0, so the running time is O(n lg n).
```Algorithm
MergeSort(arr[], left, right):
If right > left:
1. Find the middle point to divide the array into two halves:
mid = (left+right)/2
2. Call mergeSort for first half:
Call mergeSort(arr, left, mid)
3. Call mergeSort for second half:
Call mergeSort(arr, mid+1, right)
4. Merge the two halves sorted in step 2 and 3:
Call merge(arr, left, mid, right)
```
![Merge Sort Algorithm](https://upload.wikimedia.org/wikipedia/commons/thumb/e/e6/Merge_sort_algorithm_diagram.svg/300px-Merge_sort_algorithm_diagram.svg.png)
### Properties:
* Space Complexity: O(n)
* Time Complexity: O(n*log(n)). The time complexity for the Merge Sort might not be obvious from the first glance. The log(n) factor that comes in is because of the recurrence relation we have mentioned before.
* Sorting In Place: No in a typical implementation
* Stable: Yes
* Parallelizable: Yes (several parallel variants are discussed in the third edition of Cormen, Leiserson, Rivest, and Stein's Introduction to Algorithms.)
### Visualization:
* <a href='https://www.cs.usfca.edu/~galles/visualization/ComparisonSort.html'>USFCA</a>
* <a href='https://www.hackerearth.com/practice/algorithms/sorting/merge-sort/visualize/'>HackerEarth</a>
### Relevant videos on freeCodeCamp YouTube channel
* <a href='https://youtu.be/TzeBrDU-JaY'>Merge Sort algorithm - MyCodeSchool</a>
### Other Resources:
* <a href='https://en.wikipedia.org/wiki/Merge_sort' target='_blank' rel='nofollow'>Wikipedia</a>
* <a href='https://www.geeksforgeeks.org/merge-sort' target='_blank' rel='nofollow'>GeeksForGeeks</a>
* <a href='https://youtu.be/sWtYJv_YXbo' target='_blank' rel='nofollow'>Merge Sort - CS50</a>
### Implementation in JavaScript
```js
const list = [23, 4, 42, 15, 16, 8, 3]
const mergeSort = (list) =>{
if(list.length <= 1) return list;
    const middle = Math.floor(list.length / 2); // integer midpoint
const left = list.slice(0, middle);
const right = list.slice(middle, list.length);
return merge(mergeSort(left), mergeSort(right));
}
const merge = (left, right) => {
var result = [];
while(left.length || right.length) {
if(left.length && right.length) {
if(left[0] < right[0]) {
result.push(left.shift())
} else {
result.push(right.shift())
}
} else if(left.length) {
result.push(left.shift())
} else {
result.push(right.shift())
}
}
return result;
}
console.log(mergeSort(list)) // [ 3, 4, 8, 15, 16, 23, 42 ]
```
### Implementation in C
```C
#include<stdlib.h>
#include<stdio.h>
void merge(int arr[], int l, int m, int r)
{
int i, j, k;
int n1 = m - l + 1;
int n2 = r - m;
int L[n1], R[n2];
for (i = 0; i < n1; i++)
L[i] = arr[l + i];
for (j = 0; j < n2; j++)
R[j] = arr[m + 1+ j];
i = 0;
j = 0;
k = l;
while (i < n1 && j < n2)
{
if (L[i] <= R[j])
{
arr[k] = L[i];
i++;
}
else
{
arr[k] = R[j];
j++;
}
k++;
}
while (i < n1)
{
arr[k] = L[i];
i++;
k++;
}
while (j < n2)
{
arr[k] = R[j];
j++;
k++;
}
}
void mergeSort(int arr[], int l, int r)
{
if (l < r)
{
int m = l+(r-l)/2;
mergeSort(arr, l, m);
mergeSort(arr, m+1, r);
merge(arr, l, m, r);
}
}
void printArray(int A[], int size)
{
int i;
for (i=0; i < size; i++)
printf("%d ", A[i]);
printf("\n");
}
int main()
{
int arr[] = {12, 11, 13, 5, 6, 7};
int arr_size = sizeof(arr)/sizeof(arr[0]);
printf("Given array is \n");
printArray(arr, arr_size);
mergeSort(arr, 0, arr_size - 1);
printf("\nSorted array is \n");
printArray(arr, arr_size);
    return 0;
}
```
### Implementation in C++
Let us consider array A = {2,5,7,8,9,12,13} and array B = {3,5,6,9,15}, and we want array C to contain their elements merged in ascending order. (This function implements the merge step that merge sort relies on.)
```c++
void mergesort(int A[],int size_a,int B[],int size_b,int C[])
{
int token_a,token_b,token_c;
for(token_a=0, token_b=0, token_c=0; token_a<size_a && token_b<size_b; )
{
if(A[token_a]<=B[token_b])
C[token_c++]=A[token_a++];
else
C[token_c++]=B[token_b++];
}
if(token_a<size_a)
{
while(token_a<size_a)
C[token_c++]=A[token_a++];
}
else
{
while(token_b<size_b)
C[token_c++]=B[token_b++];
}
}
```
### Implementation in Python
```python
temp = None
def merge(arr, left, right):
    global temp
mid = (left + right) // 2
for i in range(left, right + 1):
temp[i] = arr[i]
k, L, R = left, left, mid + 1
while L <= mid and R <= right:
if temp[L] <= temp[R]:
arr[k] = temp[L]
L += 1
else:
arr[k] = temp[R]
R += 1
k += 1
while L <= mid:
arr[k] = temp[L]
L += 1
k += 1
while R <= right:
arr[k] = temp[R]
R += 1
k += 1
def merge_sort(arr, left, right):
if left >= right:
return
mid = (left + right) // 2
merge_sort(arr, left, mid)
merge_sort(arr, mid + 1, right)
merge(arr, left, right)
arr = [1,6,3,1,8,4,2,9,3]
temp = [None for _ in range(len(arr))]
merge_sort(arr, 0, len(arr) - 1)
print(arr)
```

View File

@ -0,0 +1,144 @@
---
title: Quick Sort
---
## Quick Sort
Quick sort is an efficient divide and conquer sorting algorithm. The average case time complexity of Quick Sort is O(n log(n)), with worst case time complexity being O(n^2).
The steps involved in Quick Sort are:
- Choose an element to serve as a pivot, in this case, the last element of the array is the pivot.
- Partitioning: Sort the array in such a manner that all elements less than the pivot are to the left, and all elements greater than the pivot are to the right.
- Call Quicksort recursively, taking into account the previous pivot to properly subdivide the left and right arrays. (A more detailed explanation can be found in the comments below)
A quick implementation in JavaScript:
```javascript
const arr = [6, 2, 5, 3, 8, 7, 1, 4]
const quickSort = (arr, start, end) => {
if(start < end) {
// You can learn about how the pivot value is derived in the comments below
let pivot = partition(arr, start, end)
// Make sure to read the below comments to understand why pivot - 1 and pivot + 1 are used
// These are the recursive calls to quickSort
quickSort(arr, start, pivot - 1)
quickSort(arr, pivot + 1, end)
}
}
const partition = (arr, start, end) => {
let pivot = end
// Set i to start - 1 so that it can access the first index in the event that the value at arr[0] is greater than arr[pivot]
// Succeeding comments will expound upon the above comment
let i = start - 1
let j = start
// Increment j up to the index preceding the pivot
while (j < pivot) {
// If the value is greater than the pivot increment j
if (arr[j] > arr[pivot]) {
j++
}
// When the value at arr[j] is less than the pivot:
// increment i (arr[i] will be a value greater than arr[pivot]) and swap the value at arr[i] and arr[j]
else {
i++
swap(arr, j, i)
j++
}
}
//The value at arr[i + 1] will be greater than the value of arr[pivot]
swap(arr, i + 1, pivot)
//You return i + 1, as the values to the left of it are less than arr[i+1], and values to the right are greater than arr[i + 1]
  // As such, when the recursive quicksorts are called, the new subarrays will not include the previously used pivot value
return i + 1
}
const swap = (arr, firstIndex, secondIndex) => {
let temp = arr[firstIndex]
arr[firstIndex] = arr[secondIndex]
arr[secondIndex] = temp
}
quickSort(arr, 0, arr.length - 1)
console.log(arr)
```
A quick sort implementation in C
```C
#include<stdio.h>
void swap(int* a, int* b)
{
int t = *a;
*a = *b;
*b = t;
}
int partition (int arr[], int low, int high)
{
int pivot = arr[high];
int i = (low - 1);
for (int j = low; j <= high- 1; j++)
{
if (arr[j] <= pivot)
{
i++;
swap(&arr[i], &arr[j]);
}
}
swap(&arr[i + 1], &arr[high]);
return (i + 1);
}
void quickSort(int arr[], int low, int high)
{
if (low < high)
{
int pi = partition(arr, low, high);
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}
void printArray(int arr[], int size)
{
int i;
for (i=0; i < size; i++)
printf("%d ", arr[i]);
printf("n");
}
int main()
{
int arr[] = {10, 7, 8, 9, 1, 5};
int n = sizeof(arr)/sizeof(arr[0]);
quickSort(arr, 0, n-1);
printf("Sorted array: n");
printArray(arr, n);
return 0;
}
```
The space complexity of quick sort is O(n) in the worst case (for the recursion stack). This is an improvement over other divide and conquer sorting algorithms, which take O(n log(n)) space. Quick sort achieves this by changing the order of elements within the given array. Compare this with the <a href='https://guide.freecodecamp.org/algorithms/sorting-algorithms/merge-sort' target='_blank' rel='nofollow'>merge sort</a> algorithm, which creates 2 arrays, each of length n/2, in each function call.
#### More Information:
- <a href='https://en.wikipedia.org/wiki/Quicksort' target='_blank' rel='nofollow'>Wikipedia</a>
- <a href='http://www.geeksforgeeks.org/quick-sort' target='_blank' rel='nofollow'>GeeksForGeeks</a>
- <a href='https://www.youtube.com/watch?v=MZaf_9IZCrc' target='_blank' rel='nofollow'>Youtube: A Visual Explanation of Quicksort</a>
- <a href='https://www.youtube.com/watch?v=SLauY6PpjW4' target='_blank' rel='nofollow'>Youtube: Gayle Laakmann McDowell (author of Cracking The Coding Interview) explains the basics of quicksort and show some implementations</a>
- <a href='https://www.youtube.com/watch?v=COk73cpQbFQ' target='_blank' rel='nofollow'>Quick Sort - MyCodeSchool</a>

View File

@ -0,0 +1,120 @@
---
title: Radix Sort
---
## Radix Sort
Prerequisite: Counting Sort
QuickSort, MergeSort, and HeapSort are comparison-based sorting algorithms.
CountSort is not a comparison-based algorithm. It has a complexity of O(n+k), where k is the maximum element of the input array.
So, if k is O(n), CountSort becomes linear sorting, which is better than comparison-based sorting algorithms that have O(n log n) time complexity.
The idea is to extend the CountSort algorithm to get a better time complexity when k is O(n^2).
Here comes the idea of Radix Sort.
Algorithm:
For each digit i, where i varies from the least significant digit to the most significant digit of a number,
sort the input array using the counting sort algorithm according to the i-th digit.
We use counting sort because it is a stable sort.
Example: Assume the input array is:
10,21,17,34,44,11,654,123
Based on the algorithm, we will sort the input array according to the ones digit (least significant digit).
0: 10 <br>
1: 21 11<br>
2:<br>
3: 123<br>
4: 34 44 654<br>
5:<br>
6:<br>
7: 17<br>
8:<br>
9:<br>
So, the array becomes 10,21,11,123,34,44,654,17
Now, we'll sort according to the tens digit:
0:<br>
1: 10 11 17<br>
2: 21 123<br>
3: 34<br>
4: 44<br>
5: 654<br>
6:<br>
7:<br>
8:<br>
9:
Now, the array becomes : 10,11,17,21,123,34,44,654
Finally, we sort according to the hundreds digit (most significant digit):
0: 010 011 017 021 034 044<br>
1: 123<br>
2:<br>
3:<br>
4:<br>
5:<br>
6: 654<br>
7:<br>
8:<br>
9:
The array becomes : 10,11,17,21,34,44,123,654 which is sorted. This is how our algorithm works.
An implementation in C:
```C
#define range 10 // digits range from 0-9, so the count array has 10 buckets

void countsort(int arr[], int n, int place){
    int i, freq[range] = {0};
int output[n];
for(i=0;i<n;i++)
freq[(arr[i]/place)%range]++;
for(i=1;i<range;i++)
freq[i]+=freq[i-1];
for(i=n-1;i>=0;i--){
output[freq[(arr[i]/place)%range]-1]=arr[i];
freq[(arr[i]/place)%range]--;
}
for(i=0;i<n;i++)
arr[i]=output[i];
}
void radixsort(int arr[],int n,int maxx){ //maxx is the maximum element in the array
int mul=1;
while(maxx){
countsort(arr,n,mul);
mul*=10;
maxx/=10;
}
}
```
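For comparison, here is a minimal Python sketch of the same idea; the function names are just for this example, and it assumes non-negative integers:
```python
def counting_sort_by_digit(arr, place):
    # stable counting sort on the digit at the given place value (1, 10, 100, ...)
    freq = [0] * 10
    output = [0] * len(arr)
    for x in arr:
        freq[(x // place) % 10] += 1
    for d in range(1, 10):
        freq[d] += freq[d - 1]  # prefix sums give each digit's final positions
    for x in reversed(arr):  # reverse scan keeps equal digits in their original order
        d = (x // place) % 10
        freq[d] -= 1
        output[freq[d]] = x
    return output

def radix_sort(arr):
    if not arr:
        return arr
    place = 1
    largest = max(arr)
    while largest // place > 0:
        arr = counting_sort_by_digit(arr, place)
        place *= 10
    return arr

print(radix_sort([10, 21, 17, 34, 44, 11, 654, 123]))
# [10, 11, 17, 21, 34, 44, 123, 654]
```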
### More Information:
- [Wikipedia](https://en.wikipedia.org/wiki/Radix_sort)
- [GeeksForGeeks](http://www.geeksforgeeks.org/radix-sort/)

View File

@ -0,0 +1,96 @@
---
title: Selection Sort
---
## Selection Sort
Selection Sort is one of the simplest sorting algorithms. It works in the following way:
1. Find the smallest element. Swap it with the first element.
2. Find the second smallest element. Swap it with the second element.
3. Find the third smallest element. Swap it with the third element.
4. Repeat finding the next smallest element and swapping it into the corresponding correct position till the array is sorted.
As you can guess, this algorithm is called Selection Sort because it repeatedly selects the next smallest element and swaps it into its place.
But, how would you write the code for finding the index of the second smallest value in an array?
* An easy way is to notice that the smallest value has already been swapped into index 0, so the problem reduces to finding the smallest element in the array starting at index 1.
### Implementation in C/C++
```C
void swap(int* a, int* b)
{
    int t = *a;
    *a = *b;
    *b = t;
}

void selectionSort(int a[], int n)
{
    for(int i = 0; i < n; i++)
    {
        int min_index = i;
        int min_element = a[i];
        for(int j = i + 1; j < n; j++)
        {
            if(a[j] < min_element)
            {
                min_element = a[j];
                min_index = j;
            }
        }
        swap(&a[i], &a[min_index]);
    }
}
```
### Implementation in JavaScript
```javascript
function selection_sort(A) {
    var len = A.length;
for (var i = 0; i < len - 1; i = i + 1) {
var j_min = i;
for (var j = i + 1; j < len; j = j + 1) {
            if (A[j] < A[j_min]) {
                j_min = j;
            }
        }
        if (j_min !== i) {
            swap(A, i, j_min);
        }
}
}
function swap(A, x, y) {
var temp = A[x];
A[x] = A[y];
A[y] = temp;
}
```
### Implementation in Python
```python
def selection_sort(arr):
if not arr:
return arr
for i in range(len(arr)):
min_i = i
for j in range(i + 1, len(arr)):
if arr[j] < arr[min_i]:
min_i = j
        arr[i], arr[min_i] = arr[min_i], arr[i]
    return arr
```
### Properties
* Space Complexity: <b>O(1)</b>
* Time Complexity: <b>O(n<sup>2</sup>)</b>
* Sorting in Place: <b>Yes</b>
* Stable: <b>No</b>
### Visualization
* [USFCA](https://www.cs.usfca.edu/~galles/visualization/ComparisonSort.html)
* [HackerEarth](https://www.hackerearth.com/practice/algorithms/sorting/selection-sort/visualize/)
### References
* [Wikipedia](https://en.wikipedia.org/wiki/Selection_sort)
* [KhanAcademy](https://www.khanacademy.org/computing/computer-science/algorithms#sorting-algorithms)
* [MyCodeSchool](https://www.youtube.com/watch?v=GUDLRan2DWM)

View File

@ -0,0 +1,111 @@
---
title: Timsort
---
## Timsort
Timsort is a fast, stable sorting algorithm with O(N log(N)) worst-case complexity.
Timsort is a blend of Insertion Sort and Merge Sort. This algorithm is implemented in Java's Arrays.sort() as well as Python's sorted() and sort().
The smaller parts are sorted using Insertion Sort and are later merged together using Merge Sort.
A quick implementation in Python:
```python
def binary_search(the_array, item, start, end):
if start == end:
if the_array[start] > item:
return start
else:
return start + 1
if start > end:
return start
mid = round((start + end)/ 2)
if the_array[mid] < item:
return binary_search(the_array, item, mid + 1, end)
elif the_array[mid] > item:
return binary_search(the_array, item, start, mid - 1)
else:
return mid
"""
Insertion sort that timsort uses if the array size is small or if
the size of the "run" is small
"""
def insertion_sort(the_array):
l = len(the_array)
for index in range(1, l):
value = the_array[index]
pos = binary_search(the_array, value, 0, index - 1)
the_array = the_array[:pos] + [value] + the_array[pos:index] + the_array[index+1:]
return the_array
def merge(left, right):
"""Takes two sorted lists and returns a single sorted list by comparing the
elements one at a time.
[1, 2, 3, 4, 5, 6]
"""
if not left:
return right
if not right:
return left
if left[0] < right[0]:
return [left[0]] + merge(left[1:], right)
return [right[0]] + merge(left, right[1:])
def timsort(the_array):
runs, sorted_runs = [], []
length = len(the_array)
new_run = [the_array[0]]
# for every i in the range of 1 to length of array
for i in range(1, length):
# if i is at the end of the list
if i == length - 1:
new_run.append(the_array[i])
runs.append(new_run)
break
        # if the i'th element of the array is less than the one before it,
        # the current run has ended: save it and start a new run with this element
        if the_array[i] < the_array[i-1]:
            if new_run:
                runs.append(new_run)
            new_run = [the_array[i]]
        # else it is greater than or equal to the previous element, so the run continues
        else:
            new_run.append(the_array[i])
# for every item in runs, append it using insertion sort
for item in runs:
sorted_runs.append(insertion_sort(item))
# for every run in sorted_runs, merge them
sorted_array = []
for run in sorted_runs:
sorted_array = merge(sorted_array, run)
print(sorted_array)
timsort([2, 3, 1, 5, 6, 7])
```
#### Complexity:
Timsort runs in O(N log(N)) time in the worst case and compares really well with Quicksort.
A comparison of complexities can be found on this [chart](https://cdn-images-1.medium.com/max/1600/1*1CkG3c4mZGswDShAV9eHbQ.png).
#### More Information:
- <a href='https://en.wikipedia.org/wiki/Timsort' target='_blank' rel='nofollow'>Wikipedia</a>
- <a href='https://www.geeksforgeeks.org/timsort/' target='_blank' rel='nofollow'>GeeksForGeeks</a>
- <a href='https://www.youtube.com/watch?v=jVXsjswWo44' target='_blank' rel='nofollow'>Youtube: A Visual Explanation of Quicksort</a>
#### Credits:
[Python Implementation](https://hackernoon.com/timsort-the-fastest-sorting-algorithm-youve-never-heard-of-36b28417f399)

View File

@ -0,0 +1,54 @@
# Knuth-Morris-Pratt Algorithm for Pattern Searching
Pattern searching is an important problem in computer science. When we search for a string in a notepad/word file, a browser, or a database, pattern searching algorithms are used to show the search results.
**Problem :**
Given a text _txt[0..n-1]_ and a pattern _pat[0..m-1]_, write a function _search(char pat[], char txt[])_ that prints all occurrences of _pat[]_ in _txt[]_. You may assume that _n > m_.
**Example :**
```
Input: txt[] = "AABAACAADAABAABA"
pat[] = "AABA"
Output: Pattern found at index 0
Pattern found at index 9
Pattern found at index 12
```
**Idea :**
The basic idea behind KMP's algorithm is: whenever we detect a mismatch (after some matches), we already know some of the characters in the text of the next window. We take advantage of this information to avoid matching the characters that we know will anyway match. Let us consider the example below to understand this.
**Preprocessing Pattern String :**
- KMP algorithm preprocesses pat[] and constructs an auxiliary **lps[]** of size m (same as size of pattern) which is used to skip characters while matching.
- The name lps indicates the **longest proper prefix** which is also a suffix. A proper prefix is a prefix that is **not** the whole string. For example, the prefixes of “ABC” are “”, “A”, “AB” and “ABC”; the proper prefixes are “”, “A” and “AB”. The suffixes of the string are “”, “C”, “BC” and “ABC”.
- We search for lps in sub-patterns. More precisely, we focus on sub-strings of the pattern that are both a prefix and a suffix.
- For each sub-pattern pat[0..i] where i = 0 to m-1, lps[i] stores length of the maximum matching proper prefix which is also a suffix of the sub-pattern pat[0..i].
`lps[i] = the longest proper prefix of pat[0..i] which is also a suffix of pat[0..i]. `
**Note :** lps[i] could also be defined as the longest prefix which is also a proper suffix. We need to apply “proper” in exactly one of the two places to make sure the whole substring is not considered.
**Examples of lps[] construction :**
```
For the pattern “ABCDE”,
lps[] is [0, 0, 0, 0, 0]
For the pattern “AABAACAABAA”,
lps[] is [0, 1, 0, 1, 2, 0, 1, 2, 3, 4, 5]
```
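As an illustration, here is a minimal Python sketch of lps[] construction (the name `compute_lps` is just for this example); it reproduces both arrays above:
```python
def compute_lps(pat):
    """lps[i] = length of the longest proper prefix of pat[0..i]
    that is also a suffix of pat[0..i]."""
    lps = [0] * len(pat)
    length = 0  # length of the previous longest prefix-suffix
    i = 1
    while i < len(pat):
        if pat[i] == pat[length]:
            length += 1
            lps[i] = length
            i += 1
        elif length > 0:
            length = lps[length - 1]  # fall back to a shorter prefix-suffix
        else:
            lps[i] = 0
            i += 1
    return lps

print(compute_lps("ABCDE"))        # [0, 0, 0, 0, 0]
print(compute_lps("AABAACAABAA"))  # [0, 1, 0, 1, 2, 0, 1, 2, 3, 4, 5]
```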
### Searching Algorithm :
In this algorithm, we use a value from lps[] to decide the next characters to be matched. The idea is to not match a character that we know will anyway match.
How do we use lps[] to decide the next positions (or to know the number of characters to be skipped)?
- We start comparison of pat[j] with j = 0 with characters of current window of text.
- We keep matching characters txt[i] and pat[j] and keep incrementing i and j while pat[j] and txt[i] keep **matching**.
- When we see a **mismatch**
  - We know that characters pat[0..j-1] match with txt[i-j…i-1] (note that j starts with 0 and is incremented only when there is a match).
  - We also know (from the above definition) that lps[j-1] is the count of characters of pat[0…j-1] that are both a proper prefix and a suffix.
  - From the above two points, we can conclude that we do not need to match these lps[j-1] characters with txt[i-j…i-1], because we know that these characters will anyway match. A sketch of the full search follows below.
<br>
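Putting the pieces together, here is a minimal Python sketch of the search, reusing the `compute_lps` sketch above; it assumes a non-empty pattern and reproduces the output from the problem statement:
```python
def kmp_search(pat, txt):
    lps = compute_lps(pat)  # see the sketch above
    i = j = 0  # i indexes txt, j indexes pat
    while i < len(txt):
        if txt[i] == pat[j]:
            i += 1
            j += 1
            if j == len(pat):
                print("Pattern found at index", i - j)
                j = lps[j - 1]  # keep searching for further occurrences
        elif j > 0:
            j = lps[j - 1]  # skip the lps[j-1] characters known to match
        else:
            i += 1

kmp_search("AABA", "AABAACAADAABAABA")
# Pattern found at index 0
# Pattern found at index 9
# Pattern found at index 12
```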
**More Information :**
- [KMP Algorithm for Pattern Searching](https://www.geeksforgeeks.org/kmp-algorithm-for-pattern-searching/)
- [Knuth-Morris-Pratt algorithm](https://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm)

View File

@ -0,0 +1,48 @@
---
title: Rabin Karp Algorithm
---
## Rabin-Karp Algorithm
* A string matching/searching algorithm developed by Michael O. Rabin and Richard M. Karp.
* Uses ***hashing*** technique and ***brute force*** for comparison.
#### Important terms
* ***pattern*** is the string to be searched.
Consider length of pattern as ***M*** characters.
* ***text*** is the whole text from which the pattern is to be searched.
Consider length of text as ***N*** characters.
#### What is brute force comparison?
In brute force comparison, each character of the pattern is compared with each character of the text until mismatching characters are found.
#### Working of Rabin-Karp Algorithm
1. Calculate hash value of *pattern*
2. Calculate hash value of first *M* characters of *text*
3. Compare both hash values
4. If they are unequal, calculate hash value for next *M* characters of *text* and compare again.
5. If they are equal, perform a brute force comparison.
```
hash_p = hash value of pattern
hash_t = hash value of first M letters in body of text
do
if (hash_p == hash_t)
brute force comparison of pattern and selected section of text
    hash_t = hash value of the next section of text, one character over
while (not at end of text)
```
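To make the steps concrete, here is a minimal Python sketch; the choices `base = 256` and `prime = 101` are arbitrary illustrative parameters, and the sample input is made up:
```python
def rabin_karp(pat, txt, base=256, prime=101):
    M, N = len(pat), len(txt)
    if M == 0 or M > N:
        return
    h = pow(base, M - 1, prime)  # weight of the leading character in a window's hash
    hash_p = hash_t = 0
    for i in range(M):  # hash of the pattern and of the first window of text
        hash_p = (base * hash_p + ord(pat[i])) % prime
        hash_t = (base * hash_t + ord(txt[i])) % prime
    for i in range(N - M + 1):
        # brute force comparison only when the hash values match
        if hash_p == hash_t and txt[i:i + M] == pat:
            print("Pattern found at index", i)
        if i < N - M:
            # roll the hash: drop txt[i], shift, add txt[i + M]
            hash_t = (base * (hash_t - ord(txt[i]) * h) + ord(txt[i + M])) % prime

rabin_karp("AABA", "AABAACAADAABAABA")
# Pattern found at index 0
# Pattern found at index 9
# Pattern found at index 12
```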
#### Advantage over Naive String Matching Algorithm
This technique results in only one comparison per text sub-sequence and brute force is only required when the hash values match.
#### Applications
* ***Plagiarism Detection***
#### More Information:
<a href='https://en.wikipedia.org/wiki/Rabin%E2%80%93Karp_algorithm' target='_blank' rel='nofollow'>Rabin-Karp on Wikipedia</a>