if you are here for a quick lesson, you can just go through this section rather than the whole post; but still, i believe you know what is best for you:
->Big O notation [a.k.a O(n) ] :
in computer science, this is a common way of describing the running time (or space) an algorithm requires. in textbook terms, it describes the order of growth of the running time of different algorithms in different cases.
{**not necessary for newbies: for monotonic functions f(n) and g(n) mapping positive integers to positive integers, we say f(n) = O(g(n)) when there exist constants c > 0 and n₀ > 0 such that f(n) ≤ c * g(n) for all n ≥ n₀. this means f(n) can't grow faster than g(n); in other words, g(n) is an upper bound for f(n) [when n gets very large, n → ∞] }
dramatic explanation of the f(n) = O(g(n)) relation -
example:
you can easily prove n² + 2n + 1 = O(n²). all you have to do is find c and n₀ such that:
n² + 2n + 1 ≤ c * n².
Let n₀ = 1; then for n ≥ 1:
1 + 2n + n² ≤ n + 2n + n² ≤ n² + 2n² + n² = 4n²
Therefore, c = 4.
**heads up: big O notation isn't symmetric, i.e. n = O(n²) but n² ≠ O(n)
Kinds:
a. linear time : T(n) = O(n): if the algorithm's execution time is directly proportional to the input size, i.e. the running time grows linearly as the input gets bigger.
examples:
- in arrays: linear search, traversal, finding the minimum
- in an ArrayList: the contains method
- in a queue: the contains method
for instance, linear search has:
- best-case runtime : O(1) [the shortest possible: the target is found immediately]
- average-case runtime : O(n/2) = O(n) [the expected value of the function in most cases]
- worst-case runtime : O(n) [the maximum value of the function for any possible input]
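a minimal sketch of linear search in Java (the method name linearSearch is my own, just for illustration), showing where the best and worst cases above come from:

```java
// linear search: scan the array front to back until the target is found.
// best case O(1): the target sits at index 0.
// worst case O(n): the target is last or absent, so all n elements are checked.
public class LinearSearch {
    static int linearSearch(int[] arr, int target) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == target) {
                return i; // found: return the index
            }
        }
        return -1; // not found after scanning all n elements
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 1, 5};
        System.out.println(linearSearch(data, 9));  // prints 2
        System.out.println(linearSearch(data, 42)); // prints -1
    }
}
```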
b. fixed (constant) time: T(n) = O(1): if an algorithm, regardless of its input size, runs in a constant amount of time, it has O(1) time complexity.
examples:
- a fixed-size stack (e.g. push, pop methods) or queue (e.g. enqueue, dequeue methods)
- accessing any element of an array
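a quick sketch of O(1) operations in Java; the FixedStack class here is a hypothetical minimal example of my own (no bounds checks), not a standard library class:

```java
// each operation below does a fixed amount of work, independent of n.
public class FixedStack {
    private final int[] items;
    private int top = 0; // index of the next free slot

    FixedStack(int capacity) {
        items = new int[capacity];
    }

    void push(int x) { items[top++] = x; }    // O(1): one write, one increment
    int pop()        { return items[--top]; } // O(1): one decrement, one read

    public static void main(String[] args) {
        FixedStack s = new FixedStack(10);
        s.push(4);
        s.push(8);
        System.out.println(s.pop()); // prints 8
        int[] arr = {10, 20, 30};
        System.out.println(arr[1]);  // array access by index is also O(1): prints 20
    }
}
```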
comparing growth rates:
- the logarithmic function log₂(n) grows the slowest
- the exponential function 2ⁿ grows the most rapidly
- a polynomial function n^c grows according to its exponent c
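to see these growth rates side by side, here is a small Java snippet of my own that tabulates log₂(n), n², and 2ⁿ for a few values of n:

```java
public class Growth {
    public static void main(String[] args) {
        System.out.println("n\tlog2(n)\tn^2\t2^n");
        for (int n = 1; n <= 20; n += 4) { // n = 1, 5, 9, 13, 17
            double log2 = Math.log(n) / Math.log(2); // Java has no Math.log2
            System.out.printf("%d\t%.2f\t%d\t%d%n", n, log2, n * n, 1L << n);
        }
    }
}
```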
c. logarithmic time: T(n) = O(log n): if the execution time grows in proportion to the logarithm of the input size, typically because each step cuts the remaining work in half.
examples:
- binary search
- guessing game / 20 questions game / "guess-what-number-i-am-thinking" game
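a minimal iterative binary search in Java (assumes the array is already sorted; the method name is my own):

```java
// binary search: each comparison halves the remaining range,
// so at most about log2(n) iterations are needed -> O(log n).
public class BinarySearch {
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1; // discard left half
            else hi = mid - 1;                      // discard right half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] data = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(data, 7)); // prints 3
        System.out.println(binarySearch(data, 4)); // prints -1
    }
}
```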
d. quadratic time: T(n) = O(n²): the execution time is proportional to the square of the input size.
examples:
- insertion sort
- bubble sort
- selection sort
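as a sketch, here is bubble sort in Java; the nested loops over n elements are what make it O(n²):

```java
// bubble sort: repeatedly swap adjacent out-of-order pairs.
// the outer loop runs n-1 times and the inner loop up to n-1 times,
// giving roughly n*(n-1)/2 comparisons -> O(n^2).
import java.util.Arrays;

public class BubbleSort {
    static void bubbleSort(int[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            for (int j = 0; j < arr.length - 1 - i; j++) {
                if (arr[j] > arr[j + 1]) { // swap the adjacent pair if out of order
                    int tmp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 1, 4, 2, 8};
        bubbleSort(data);
        System.out.println(Arrays.toString(data)); // prints [1, 2, 4, 5, 8]
    }
}
```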
->Big Omega notation [a.k.a Ω(n) ] :
Omega, or more precisely capital Omega, notation is used for indicating a lower bound. f(n) = Ω( g(n) ) when there exists a constant c > 0 such that f(n) ≥ c * g(n) for all sufficiently large n.
if it seems hard to get, you will still need it to build better algorithm skills, so you can just memorize the examples i've given below for now:
examples:
- n = Ω(1)
- n² = Ω(n)
- n² = Ω( n log(n) )
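for instance, to verify n² = Ω(n) from the definition: take c = 1; then n² ≥ 1 * n for all n ≥ 1, so n really is a lower bound for n².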
->Big Theta notation [a.k.a Θ(n) ] :
big Theta can be used for both upper and lower bounds: f(n) = Θ( g(n) ) if and only if f(n) = O( g(n) ) and f(n) = Ω( g(n) )
the same goes for big Theta as for big Omega: you can just memorize it for a quick start.
examples:
- 2n = Θ(n)
- n² + 2n + 1 = Θ( n² )
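for instance, n² + 2n + 1 = Θ(n²): we already showed n² + 2n + 1 ≤ 4n² for n ≥ 1 (the O(n²) part), and clearly n² + 2n + 1 ≥ n² for all n ≥ 1 (the Ω(n²) part, with c = 1), so both bounds hold.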
thank you :)