Space & time complexity

For more than a year now I have been itching to go back to studying, the way I did in my uni days: learning, un-learning, and revising. Having been in the corporate world for seven years, quitting to join a university seems hard, so I found my own way of teaching myself from the start. And what's better than revisiting the concepts from my bachelor's degree and finally making sense of the theory by relating it to actual scenarios? This article talks about the data-structures concepts of space and time complexity.

Algorithm:
An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to produce the desired output. Algorithms are generally created independent of the underlying language, i.e. an algorithm can be implemented in more than one programming language.

There are no well-defined standards for writing algorithms; rather, it is problem- and resource-dependent. Algorithms are never written to support a particular piece of programming code. Once written, an algorithm is analyzed for its efficiency, and this happens in 2 stages:

1) A priori analysis: a theoretical analysis of the algorithm.
2) A posteriori analysis: an empirical analysis of the algorithm.

Algorithm analysis deals with the execution or running time of the various operations involved. The running time of an operation can be defined as the number of computer instructions executed per operation. Knowing the complexity of an algorithm allows you to answer questions such as:

    How long will a program run on an input?
    How much space will it take?
    Is the problem solvable?

The two main factors that determine efficiency are:
a) Space: how much memory the algorithm takes.
b) Time: how much time the key operations take to complete.

The space required by an algorithm is equal to the sum of the following two components −

    A fixed part: the space required to store certain data and variables that are independent of the size of the problem. For example, simple variables and constants used, program size, etc.
    A variable part: the space required by variables whose size depends on the size of the problem. For example, dynamic memory allocation, recursion stack space, etc.

The space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and SP(I) is the variable part of the algorithm, which depends on instance characteristic I. Following is a simple example that tries to explain the concept −

Algorithm: SUM(A, B)
Step 1 -  START
Step 2 -  C ← A + B + 10
Step 3 -  Stop

Here we have three variables (A, B, and C) and one constant (10). Hence S(P) = 1 + 3.
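The fixed-versus-variable split can be seen in real code, too. Here is a minimal Python sketch (the function names and sizes are my own illustration, not from any standard library): the first function mirrors the SUM algorithm above and uses only a fixed handful of variables, while the second allocates a list that grows with the input size n.

```python
def sum_constant_space(a, b):
    """Fixed part only: a few scalar variables, regardless of input size."""
    c = a + b + 10
    return c

def cumulative_sums(values):
    """Variable part: the result list grows with the number of inputs n."""
    sums = []
    total = 0
    for v in values:
        total += v
        sums.append(total)  # one extra slot of memory per input element
    return sums

print(sum_constant_space(2, 3))           # 15 — O(1) auxiliary space
print(len(cumulative_sums(range(1000))))  # 1000 — O(n) auxiliary space
```

So `sum_constant_space` has only the fixed component C, while `cumulative_sums` has a variable component SP(I) proportional to the length of its input.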

Time complexity T(n) can be measured as the number of steps, provided each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the total computational time is T(n) = c ∗ n, where c is the time taken for the addition of two bits. Here, we observe that T(n) grows linearly as the input size increases.
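The n-bit addition example can be sketched in Python. This is a toy schoolbook adder (my own illustration): it adds two equal-length bit lists, least significant bit first, and counts one constant-time step per bit position, so the step count is exactly n — i.e., T(n) = c ∗ n.

```python
def add_n_bit(a_bits, b_bits):
    """Add two equal-length bit lists (LSB first), counting one step per bit."""
    steps = 0
    carry = 0
    result = []
    for x, y in zip(a_bits, b_bits):
        s = x + y + carry        # one constant-time single-bit addition
        result.append(s % 2)     # sum bit
        carry = s // 2           # carry into the next position
        steps += 1
    result.append(carry)         # final carry-out bit
    return result, steps

bits_5 = [1, 0, 1]   # 5 in binary, least significant bit first
bits_3 = [1, 1, 0]   # 3
total, steps = add_n_bit(bits_5, bits_3)
print(total, steps)  # [0, 0, 0, 1] (= 8), in 3 steps for 3 bits
```

Doubling the number of bits doubles the step count, which is the linear growth described above.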

Asymptotic Analysis: the mathematical analysis that gives us the worst-, average-, and best-case behavior of an algorithm.
A function f(n) = c*n will grow larger as n grows.
We typically ignore small values of n, since we are usually interested in estimating how slow the program will be on large inputs. A good rule of thumb is: the slower the asymptotic growth rate, the better the algorithm.

By this measure, a linear algorithm (i.e., f(n) = d*n + k) is always asymptotically better than a quadratic one (e.g., f(n) = c*n² + q). That is because for any given (positive) c, k, d, and q, there is always some n at which the magnitude of c*n² + q overtakes d*n + k.
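You can find that crossover point numerically. In this sketch the constants (d = 5, k = 100, c = 1, q = 1) are arbitrary choices of mine, picked so the linear cost starts out larger; the loop finds the first n where the quadratic cost overtakes it.

```python
def linear(n, d=5, k=100):
    """Cost of a linear algorithm: f(n) = d*n + k."""
    return d * n + k

def quadratic(n, c=1, q=1):
    """Cost of a quadratic algorithm: f(n) = c*n^2 + q."""
    return c * n * n + q

# Search for the first n where the quadratic cost exceeds the linear one.
n = 1
while quadratic(n) <= linear(n):
    n += 1
print(n)  # 13: from here on, the quadratic algorithm is always slower
```

For small n the quadratic function is actually cheaper (its constants are smaller), which is exactly why asymptotic analysis ignores small inputs.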

Following are the commonly used asymptotic notations for describing the running-time complexity of an algorithm.

    Ο Notation
    Ω Notation
    θ Notation
I will focus more on these in the next articles.



