fix(guide): simplify directory structure

This commit is contained in:
Mrugesh Mohapatra
2018-10-16 21:26:13 +05:30
parent f989c28c52
commit da0df12ab7
35752 changed files with 0 additions and 317652 deletions


@@ -0,0 +1,20 @@
---
title: Asymptotic Notation
---
## Asymptotic Notation
How do we measure the performance of an algorithm?
Consider that time is one of our most valuable resources. In computing, we can measure performance as the amount of time a process takes to complete. If two algorithms process the same data, we can compare their running times and decide on the best implementation to solve a problem.
We do this by defining the mathematical limits of an algorithm: big-O, big-omega, and big-theta, the asymptotic notations of an algorithm. On a graph, big-O is the longest an algorithm could take for any given data set, its "upper bound". Big-omega is the opposite, the "lower bound": the fastest the algorithm can run on any data set. Big-theta is either the exact performance value of the algorithm or a useful range between narrow upper and lower bounds.
Some examples:
- "The delivery will be there within your lifetime." (big-O, upper-bound)
- "I can pay you at least one dollar." (big-omega, lower bound)
- "The high today will be 25ºC and the low will be 19ºC." (big-theta, narrow)
- "It's a kilometer walk to the beach." (big-theta, exact)
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->
- <a href='https://learnxinyminutes.com/docs/asymptotic-notation/' target='_blank' rel='nofollow'>Asymptotic Notation</a>


@@ -0,0 +1,75 @@
---
title: Big O Notation
---
## Big O Notation
*As a computer scientist working on an important piece of software, you will likely need to estimate how fast some algorithm is going to run.*
Big O notation is used in computer science to describe the performance or complexity of an algorithm. In essence, Big O notation is a special notation that tells you how fast an algorithm is. You will often use predefined algorithms, and when you do, it's vital to understand how fast or slow they are.
#### What does Big O notation look like?
<img align="left" src="https://user-images.githubusercontent.com/5860906/31781171-74c6b48a-b500-11e7-9626-f715b37b10f0.png">
This tells you the number of operations an algorithm will make. It's called Big O notation because you put a "Big O" in front of the number of operations.
<br clear="left"/>
#### Big O establishes a worst-case run time
Say you are a doctor treating Harry Abbit. You might look into the electronic records related to his medical history (he is the first person in the list). Let's consider a situation where his life depends on all available medical data.
Suppose you're using simple search to look for a person in the electronic records. You know that simple search takes O(n) time to run, so you'd expect to look through every single entry for Abbit. But Abbit is the first entry, so you didn't have to look at every entry: you found it on the first try.
*Did this algorithm take O(n) time? Or did it take O(1) time because you found the person on the first try?*
That's the best-case scenario. But Big O notation is about the worst-case scenario: simple search still takes O(n) time. It's a reassurance that simple search will never be slower than O(n) time.
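A minimal sketch of the simple search described above (the record list and names are made up for illustration):

```python
def simple_search(records, name):
    """Scan the list front to back; in the worst case all n entries are checked: O(n)."""
    for i, record in enumerate(records):
        if record == name:
            return i
    return -1

records = ["Harry Abbit", "Jane Doe", "John Smith"]
print(simple_search(records, "Harry Abbit"))  # found immediately: the best case, not the O(n) guarantee
```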
#### Algorithm running times grow at different rates
Let's assume it takes 1 millisecond to check one entry. With simple search, the doctor has to check 10 entries, so the search takes 10 ms to run. On the other hand, he only has to check about 3 elements with the *binary search algorithm* (log&#8322; 10 is roughly 3), so that search takes about 3 ms to run.
But realistically, the list could have far more elements: say, 1 billion.
*If it does, how long will simple search take? How long will binary search take?*
The run time for simple search with 1 billion items will be 1 billion ms, which is about 11 days, while binary search needs only about 30 checks (log&#8322; 1,000,000,000 &#8776; 30), or 30 ms. The problem is, the run times for binary search and simple search *don't grow at the same rate*.
<p align="center">
<img src="https://user-images.githubusercontent.com/5860906/31781165-723a053c-b500-11e7-937c-7b33db281efe.png">
</p>
So as the list of numbers gets bigger, binary search becomes a lot faster than simple search. That is, as the number of items increases, binary search takes a little more time to run, but simple search takes a *lot* more time to run.
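You can check this growth-rate difference directly by counting operations (a rough sketch; the times assume the 1 ms per check from the example above):

```python
import math

for n in [10, 1_000, 1_000_000_000]:
    simple_ops = n                         # simple search: one check per entry
    binary_ops = math.ceil(math.log2(n))   # binary search: halves the range each step
    print(f"n={n}: simple search ~{simple_ops} ms, binary search ~{binary_ops} ms")
```

The gap between the two columns widens dramatically as n grows, which is exactly what the chart above shows.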
*That's why it's not enough to know how long an algorithm takes to run: you need to know how the running time increases as the list size increases. That's where Big O notation comes in.*
#### Big O notation lets you compare the number of operations
For example, suppose you have a list of size n. Simple search needs to check each element, so it will take n operations. The run time in Big O notation is O(n). 
*Where are the seconds?*
There are none: Big O doesn't tell you the speed in seconds. *Big O notation lets you compare the number of operations.* It tells you how fast the algorithm grows.
<p align="center">
<img src="https://user-images.githubusercontent.com/5860906/31781175-768c208e-b500-11e7-9718-e632d1391e2d.png">
</p>
#### Most common running times for algorithms
A list of the most common running times for algorithms in terms of Big O notation. 
Here are five Big O run times that you'll encounter a lot, sorted from fastest to slowest:
1. O(log n), also known as *log time*. 
Example: Binary search.
2. O(n), also known as *linear time*. 
Example: Simple search.
3. O(n * log n)
Example: A fast sorting algorithm, like quicksort.
4. O(n&#178;)
Example: A slow sorting algorithm, like selection sort.
5. O(n!)
Example: A really slow algorithm, like the brute-force solution to the traveling salesperson problem.
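As a rough sketch of how two of these growth rates look in code (toy functions written for illustration, not taken from the guide):

```python
def linear(items):
    """O(n): touches each item exactly once."""
    total = 0
    for x in items:
        total += x
    return total

def quadratic(items):
    """O(n^2): a nested loop visits every pair of items."""
    pairs = 0
    for a in items:
        for b in items:
            pairs += 1
    return pairs

print(quadratic(range(10)))  # 100 operations for n = 10
```

Doubling n doubles the work for `linear` but quadruples it for `quadratic`, which is why the difference dominates for large inputs.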
*This article only covers the very basics of Big O. For a more in-depth explanation, take a look at the respective freeCodeCamp guides for algorithms.*
### More Information
- [Khan Academy](https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-o-notation)
- [Big O cheat sheet](http://bigocheatsheet.com/)


@@ -0,0 +1,28 @@
---
title: Big Omega Notation
---
## Big Omega Notation
This is a stub. <a href='https://github.com/freecodecamp/guides/tree/master/src/pages/computer-science/notation/big-omega-notation/index.md' target='_blank' rel='nofollow'>Help our community expand it</a>.
<a href='https://github.com/freecodecamp/guides/blob/master/README.md' target='_blank' rel='nofollow'>This quick style guide will help ensure your pull request gets accepted</a>.
<!-- The article goes here, in GitHub-flavored Markdown. Feel free to add YouTube videos, images, and CodePen/JSBin embeds -->
Similar to [big O](https://guide.freecodecamp.org/computer-science/notation/big-o-notation) notation, big Omega (Ω) notation is used in computer science to describe the performance or complexity of an algorithm.
If a running time is Ω(f(n)), then for large enough n, the running time is at least k⋅f(n) for some constant k. Here's how to think of a running time that is Ω(f(n)):
<img src="https://s3.amazonaws.com/ka-cs-algorithms/Omega_fn.png" alt="big-omega function"/>
We say that the running time is "big-Ω of f(n)." We use big-Ω notation for **asymptotic lower bounds**, since it bounds the growth of the running time from below for large enough input sizes.
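The verbal definition above can also be written out formally (the standard definition, stated here for reference):

```latex
T(n) = \Omega(f(n)) \iff \exists\, k > 0,\ \exists\, n_0 > 0 :\ T(n) \ge k \cdot f(n) \quad \text{for all } n \ge n_0
```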
### Difference between Big O and Big Ω
The difference is that Big O notation is used to describe the worst-case running time for an algorithm, while Big Ω notation is used to describe the best-case running time for a given algorithm.
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->
- [Big-Ω (Big-Omega) notation](https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-big-omega-notation)
- <a href="http://www.youtube.com/watch?feature=player_embedded&v=OpebHLAf99Y" target="_blank"><img src="http://img.youtube.com/vi/OpebHLAf99Y/0.jpg" alt="MYCODSCHOOL Time complexity analysis" width="240" height="180" border="10" /></a>


@@ -0,0 +1,47 @@
---
title: Big Theta Notation
---
## Big Theta Notation
Big Omega tells us the lower bound of the runtime of a function, and Big O tells us the upper bound. Often they are different, and we can't put a guarantee on the runtime: it will vary between the two bounds depending on the input. But what happens when they're the same? Then we can give a **theta** (Θ) bound: our function will run in that time, no matter what input we give it. In general, we always want to give a theta bound if possible because it is the most accurate and tightest bound. If we can't give a theta bound, the next best thing is the tightest O bound possible.
Take, for example, a function that searches an array for the value 0:
```python
def containsZero(arr): # assume a normal array of length n with no edge cases
    for x in arr:
        if x == 0:
            return True
    return False
```
1. What's the best case? Well, if the array we give it has 0 as the first value, it will take constant time: Ω(1)
2. What's the worst case? If the array doesn't contain 0, we will have iterated through the whole array: O(n)
We've given it an omega and O bound, so what about theta? We can't give it one! Depending on the array we give it, the runtime will be somewhere in between constant and linear.
Let's change our code a bit.
```python
def printNums(arr): # assume a normal array of length n with no edge cases
    for x in arr:
        print(x)
```
Can you think of a best case and a worst case?
I can't! No matter what array we give it, we have to iterate through every value in the array. So the function will take AT LEAST n time (Ω(n)), but we also know it won't take any longer than n time (O(n)). What does this mean? Our function will take **exactly** n time: Θ(n).
If the bounds are confusing, think about it like this. We have 2 numbers, x and y. We are given that x <= y and that y <= x. If x is less than or equal to y, and y is less than or equal to x, then x has to equal y!
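The same sandwich argument, written out as the standard rule relating the three bounds:

```latex
T(n) = O(f(n)) \ \text{and} \ T(n) = \Omega(f(n)) \iff T(n) = \Theta(f(n))
```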
If you're familiar with linked lists, test yourself and think about the runtimes for each of these functions!
1. get
2. remove
3. add
Things get even more interesting when you consider a doubly linked list!
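As a sketch to test yourself against (a minimal singly linked list written here for illustration, not a definitive implementation): `get` must walk from the head, so it is O(n), while adding at the head is Θ(1).

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def add(self, value):
        """Insert at the head: always one pointer update, so Theta(1)."""
        self.head = Node(value, self.head)

    def get(self, index):
        """Walk from the head to index: O(n) worst case, Omega(1) best case."""
        node = self.head
        for _ in range(index):
            node = node.next
        return node.value

lst = LinkedList()
lst.add(3); lst.add(2); lst.add(1)  # list is now 1 -> 2 -> 3
print(lst.get(2))  # prints 3
```

In a doubly linked list, removal of a known node becomes Θ(1) because the previous node is reachable without a traversal.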
<!-- The article goes here, in GitHub-flavored Markdown. Feel free to add YouTube videos, images, and CodePen/JSBin embeds -->
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->
- [Big-Θ (Big-Theta) notation - Khan Academy](https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-big-theta-notation)
- [What exactly does big-Ө notation represent? - Stack Overflow](https://stackoverflow.com/questions/10376740/what-exactly-does-big-%D3%A8-notation-represent)
- [Analysis of Algorithms: Asymptotic Notations - GeeksforGeeks](https://www.geeksforgeeks.org/analysis-of-algorithms-set-3asymptotic-notations/)


@@ -0,0 +1,15 @@
---
title: Notation
---
## Notation
This is a stub. <a href='https://github.com/freecodecamp/guides/tree/master/src/pages/computer-science/notation/index.md' target='_blank' rel='nofollow'>Help our community expand it</a>.
<a href='https://github.com/freecodecamp/guides/blob/master/README.md' target='_blank' rel='nofollow'>This quick style guide will help ensure your pull request gets accepted</a>.
<!-- The article goes here, in GitHub-flavored Markdown. Feel free to add YouTube videos, images, and CodePen/JSBin embeds -->
#### More Information:
<!-- Please add any articles you think might be helpful to read before writing the article -->