Design and Analysis of Algorithms CHAPTER WISE QUESTIONS COLLECTION

GYAN WALLA
🔹 UNIT 1: Foundation of Algorithm Analysis

1. What is an algorithm? Explain the properties of an algorithm. Why do we need algorithms?
Solution
An algorithm is a finite sequence of well-defined steps used to solve a problem.
It takes input values, processes them, and produces output values.
Properties of an Algorithm
The properties of an algorithm define the conditions that make an algorithm valid and effective.
These properties ensure correctness, termination, and efficiency.

Properties
Input
An algorithm can have zero or more inputs.
Output
It must produce at least one output.
Correctness
It gives the correct output for every valid input.
Finiteness
The algorithm terminates after finite steps.
Definiteness
Every step is clear and unambiguous.
Effectiveness
Each operation is simple and executable in finite time.
Generality
The algorithm works for all instances of a problem.
Algorithms are needed to solve problems systematically.
They provide a logical and efficient method for computation.

Reasons
Efficiency
Reduces time and memory usage.
Scalability
Handles large input sizes effectively.
Consistency
Same input always produces the same output.
Automation
Enables automatic execution of tasks.

Example
Binary Search is faster than Linear Search for large datasets.
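A minimal Python sketch (added for illustration, not part of the original notes) makes the example concrete: linear search may scan every element, while binary search halves a sorted range each step.

```python
def linear_search(arr, key):
    # Check each element in turn: O(n) comparisons in the worst case.
    for i, value in enumerate(arr):
        if value == key:
            return i
    return -1

def binary_search(arr, key):
    # Repeatedly halve the sorted search range: O(log n) comparisons.
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == key:
            return mid
        elif arr[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return -1

data = list(range(1000))            # already sorted
print(linear_search(data, 999))     # found only after scanning all 1000 elements
print(binary_search(data, 999))     # found after about 10 halvings
```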

2. Explain the RAM model with a suitable example. Define time and space complexity.

Solution:

The Random Access Machine (RAM) Model

The RAM (Random Access Machine) model is a theoretical model used for analyzing algorithms in a machine-independent way. It assumes that an algorithm runs on an ideal computer with simple and uniform operations. In this model, every basic operation is assumed to take constant time.

Characteristics of the RAM Model:

Unit-Time Operations: Basic operations such as addition, subtraction, comparison, and assignment take one unit of time.

Direct Memory Access: Any memory location can be accessed in constant time.

Sequential Execution: Instructions are executed one after another, unless controlled by loops or conditions.

Uniform Cost Measure: Each basic instruction is counted as one step, and total running time is measured by counting these steps.

No Hardware Dependence: The model ignores hardware differences and focuses only on algorithmic efficiency.

Example: Fibonacci Number (Iterative Method)

fib(n) {                      // assumes n ≥ 1; returns the nth Fibonacci number with fib(1) = fib(2) = 1
  a = 0, b = 1, f = 1;
  for (i = 2; i <= n; i++) {
    f = a + b;
    a = b;
    b = f;
  }
  return f;
}

RAM Model Analysis:

Initialization statements take constant time.

The for loop runs (n − 1) times.

Inside each loop, there are:

One addition (a + b)

Three assignments (f = a + b, a = b, b = f)

Each of these takes constant time.
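The step counting above can be sketched in Python. This is an illustrative accounting under the RAM model's unit-cost assumption (the exact step totals are one reasonable convention, not the only one):

```python
def fib_with_steps(n):
    # RAM-model accounting: each assignment/addition counts as one unit step.
    steps = 3              # three initial assignments: a, b, f
    a, b, f = 0, 1, 1
    for i in range(2, n + 1):
        f = a + b          # one addition + one assignment
        a = b              # one assignment
        b = f              # one assignment
        steps += 4
    return f, steps

value, steps = fib_with_steps(10)
print(value, steps)        # steps = 3 + 4(n − 1), i.e. linear in n
```

Since the total step count is 3 + 4(n − 1), the running time under the RAM model is O(n).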

Time Complexity:

 Measures the time an algorithm takes to run as the input size increases.

 It focuses on the number of steps or operations performed.


Space Complexity: Measures the memory (RAM) an algorithm uses during its execution.

 It focuses on the extra storage required beyond the input itself.

3. Explain best, worst, and average case complexity.
Solution
Worst Case Complexity:
Worst case complexity is the maximum time an algorithm takes for any input of size n.
It gives the upper bound of running time
Occurs when the maximum number of operations are performed
Most commonly used for analysis
Notation: Big-O → O(n)

Example:
In linear search, when the element is not present, all elements are checked

Best Case Complexity:
Best case complexity is the minimum time an algorithm takes for any input.
It gives the lower bound of running time
Occurs when the minimum number of operations are performed
Rarely used in practical analysis
Notation: Omega → Ω(1)

Example:
In linear search, when the element is found at first position

Average Case Complexity:
Average case complexity is the expected time taken over all possible inputs.
Calculated by taking all possible cases and averaging them
Gives a realistic performance measure
Difficult to compute in many cases
Notation: Theta → Θ(n)

Example:
In linear search, element may be found at any position or not found

4. Explain aggregate analysis with example.
Solution 
Aggregate analysis is a method of amortized analysis used to bound the total worst-case cost of a sequence of operations and then average that cost over the operations.

It determines the overall running time for n operations.
The average cost per operation is then calculated.
It calculates the total cost of n operations and then divides by n.
No probability is used (unlike probabilistic analysis).
Even if some operations are expensive, the overall average cost remains small.
Helps prove that a sequence of operations is efficient in the long run.
Commonly used in analyzing data structures like stacks, dynamic arrays, etc.

Steps in Aggregate Analysis
Find the total cost T(n) of n operations in the worst case.
Divide the total cost by n to get average cost per operation.

Example: Stack Operations
Consider push and pop operations in a stack.
Each push takes constant time O(1).
Each pop also takes constant time O(1).
For n stack operations, total time = O(n).
Average time per operation = O(n) / n = O(1).
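A classic aggregate-analysis illustration (a sketch added here, assuming a stack extended with a multipop operation) shows why the method is useful: a single multipop may cost O(n), yet any sequence of n operations still totals O(n), so the amortized cost per operation is O(1).

```python
class Stack:
    def __init__(self):
        self.items = []
        self.total_cost = 0          # count of unit-cost element moves

    def push(self, x):
        self.items.append(x)
        self.total_cost += 1

    def multipop(self, k):
        # Pops up to k elements; its cost equals the number actually popped.
        while self.items and k > 0:
            self.items.pop()
            self.total_cost += 1
            k -= 1

s = Stack()
for i in range(100):
    s.push(i)          # 100 pushes, total cost 100
s.multipop(100)        # one "expensive" call, cost 100
# Total cost 200 for 101 operations → O(1) amortized per operation.
```

Each element is pushed once and popped at most once, so the total cost of n operations can never exceed 2n.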



5. Explain Big-O notation with example
Solution
Big-O notation is a concept in Algorithm Analysis used to describe the upper bound (worst-case growth rate) of an algorithm’s time or space complexity as input size increases.

A function f(n) is O(g(n)) if there exist constants c > 0 and n₀ ≥ 0 such that:

f(n) ≤ c · g(n) for all n ≥ n₀




It shows how execution time or memory grows with input size 𝑛.
Focuses on asymptotic behavior, not exact values.
Ignores constants and lower-order terms (only dominant term matters).
Represents the worst-case scenario of an algorithm.
Used to compare efficiency of algorithms.
Helps in predicting scalability for large inputs.
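A short worked example (added here, since the question asks for one; the function f(n) = 3n + 2 is illustrative):

```latex
% Claim: f(n) = 3n + 2 is O(n).
% Choose c = 4 and n_0 = 2. Then for all n \ge 2:
f(n) = 3n + 2 \le 3n + n = 4n = c \cdot g(n), \qquad g(n) = n
% Hence f(n) = O(n).
```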




6.Explain Big-Ω notation with example.
Solution
Big-Omega notation is a concept in Algorithm Analysis used to describe the lower bound (best-case growth rate) of an algorithm’s time or space complexity.

A function f(n) is Ω(g(n)) if there exist constants c > 0 and n₀ ≥ 0 such that:

f(n) ≥ c · g(n) for all n ≥ n₀
It represents the minimum time required by an algorithm.
Describes the best-case scenario.
Provides a lower limit of growth.
Ignores constants and lower-order terms.
Used to analyze guaranteed performance.
Helps in understanding algorithm efficiency from below.
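A short worked example (added here, since the question asks for one; the function f(n) = 3n + 2 is illustrative):

```latex
% Claim: f(n) = 3n + 2 is \Omega(n).
% Choose c = 3 and n_0 = 1. Then for all n \ge 1:
f(n) = 3n + 2 \ge 3n = c \cdot g(n), \qquad g(n) = n
% Hence f(n) = \Omega(n).
```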



7. Explain Big-Θ notation with example.
Solution
Big-Theta notation is a concept in Algorithm Analysis used to describe the tight bound (exact growth rate) of an algorithm’s time or space complexity.

A function f(n) is Θ(g(n)) if there exist constants c₁, c₂ > 0 and n₀ ≥ 0 such that:

c₁ · g(n) ≤ f(n) ≤ c₂ · g(n) for all n ≥ n₀

It gives both upper and lower bounds.
Represents the exact asymptotic behavior.
More precise than Big-O notation.
Used when growth rate is tightly bounded.
Ignores constants and lower-order terms.
Helps in accurate comparison of algorithms.
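A short worked example (added here, since the question asks for one; the function f(n) = 3n + 2 is illustrative):

```latex
% Claim: f(n) = 3n + 2 is \Theta(n).
% Choose c_1 = 3, c_2 = 4, and n_0 = 2. Then for all n \ge 2:
3n \le 3n + 2 \le 4n, \qquad g(n) = n
% Hence f(n) = \Theta(n): it is both O(n) and \Omega(n).
```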


Compare Big-O, Big-Ω, and Big-Θ.


8.Explain the geometrical interpretation of asymptotic notations.
Solution
The geometrical interpretation of asymptotic notations in Algorithm Analysis explains how functions behave graphically as input size 𝑛 becomes very large.
Big-O notation:
Big-O notation is a concept in Algorithm Analysis used to describe the upper bound (worst-case growth rate) of an algorithm’s time or space complexity as input size increases.

A function f(n) is O(g(n)) if there exist constants c > 0 and n₀ ≥ 0 such that:

f(n) ≤ c · g(n) for all n ≥ n₀

Geometrically, the curve of f(n) lies on or below the curve of c · g(n) everywhere to the right of n₀.




It shows how execution time or memory grows with input size 𝑛.
Focuses on asymptotic behavior, not exact values.
Ignores constants and lower-order terms (only dominant term matters).
Represents the worst-case scenario of an algorithm.
Used to compare efficiency of algorithms.
Helps in predicting scalability for large inputs.




Big-Ω notation;
Big-Omega notation is a concept in Algorithm Analysis used to describe the lower bound (best-case growth rate) of an algorithm’s time or space complexity.

A function f(n) is Ω(g(n)) if there exist constants c > 0 and n₀ ≥ 0 such that:

f(n) ≥ c · g(n) for all n ≥ n₀

Geometrically, the curve of f(n) lies on or above the curve of c · g(n) everywhere to the right of n₀.
It represents the minimum time required by an algorithm.
Describes the best-case scenario.
Provides a lower limit of growth.
Ignores constants and lower-order terms.
Used to analyze guaranteed performance.
Helps in understanding algorithm efficiency from below.



Big-Θ notation:
Big-Theta notation is a concept in Algorithm Analysis used to describe the tight bound (exact growth rate) of an algorithm’s time or space complexity.

A function f(n) is Θ(g(n)) if there exist constants c₁, c₂ > 0 and n₀ ≥ 0 such that:

c₁ · g(n) ≤ f(n) ≤ c₂ · g(n) for all n ≥ n₀

Geometrically, the curve of f(n) is sandwiched between the curves of c₁ · g(n) and c₂ · g(n) everywhere to the right of n₀.

It gives both upper and lower bounds.
Represents the exact asymptotic behavior.
More precise than Big-O notation.
Used when growth rate is tightly bounded.
Ignores constants and lower-order terms.
Helps in accurate comparison of algorithms.


9. What is a recurrence relation?
Solution
A recurrence relation is a concept in Algorithm Analysis used to define a function in terms of its previous values, commonly used to express the time complexity of recursive algorithms.
A recurrence relation expresses T(n) in terms of smaller inputs such as T(n − 1), T(n/2), etc.

It defines a problem recursively in terms of subproblems.
Commonly used to analyze recursive algorithms.
Requires a base case to stop recursion.
Helps in finding time complexity of divide-and-conquer algorithms.
Can be solved using methods like substitution, recursion tree, and master theorem.
Shows how problem size reduces step by step.



Solve recurrence using recursion tree method.
( T(n) = 2T(n/2) + 1 )
( T(n) = T(n-1) + 1 )
Solution
For T(n) = 2T(n/2) + 1:
The recursion tree has depth log₂ n. Level i contains 2^i nodes, each of cost 1, so level i costs 2^i.
Total cost = 1 + 2 + 4 + … + n ≈ 2n − 1, so T(n) = O(n).

For T(n) = T(n − 1) + 1:
The recursion tree is a chain of n nodes, each of cost 1.
Total cost = n, so T(n) = O(n).

Solve recurrence using substitution method.
( T(n) = 2T(n/2) + 1 )
( T(n) = T(n-1) + 1 )
Solution
For T(n) = 2T(n/2) + 1:
Guess T(n) ≤ cn − 1. Substituting, T(n) ≤ 2(c(n/2) − 1) + 1 = cn − 1, so the guess holds and T(n) = O(n).

For T(n) = T(n − 1) + 1:
Guess T(n) ≤ cn. Substituting, T(n) ≤ c(n − 1) + 1 = cn − (c − 1) ≤ cn for c ≥ 1, so T(n) = O(n).


Solve recurrence using Master’s theorem.
( T(n) = 2T(n/2) + 1 )
( T(n) = T(n-1) + 1 )
Solution
For T(n) = 2T(n/2) + 1:
Here a = 2, b = 2, f(n) = 1, and n^(log_b a) = n. Since f(n) = O(n^(1 − ε)), Case 1 of the Master theorem applies, giving T(n) = Θ(n).

For T(n) = T(n − 1) + 1:
The Master theorem does not apply, because the subproblem size is n − 1, not n/b. Unrolling the recurrence gives T(n) = T(0) + n = Θ(n).


🔹 UNIT 2: Iterative Algorithms

1. Write an iterative algorithm to find the factorial and analyze it.
Solution 
An iterative algorithm solves a problem by repeating a set of instructions using loops instead of recursion.

Algorithm:
Factorial(n)
1. fact ← 1
2. for i ← 1 to n do
3.     fact ← fact × i
4. return fact
Uses a loop (iteration) instead of recursion.
Multiplies numbers from 1 to n sequentially.
Requires only one variable (fact) for computation.
Easy to implement and understand.
Avoids function call overhead of recursion.
Suitable for large inputs compared to the recursive approach.

Analysis
Time Complexity: O(n), since the loop runs n times.
Space Complexity: O(1), since only a constant number of variables are used.
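The steps above can be sketched as a minimal Python function (an illustrative implementation of the pseudocode, not from the original notes):

```python
def factorial(n):
    # Iterative factorial: multiply 1..n in a loop; O(n) time, O(1) extra space.
    fact = 1
    for i in range(1, n + 1):
        fact *= i
    return fact

print(factorial(5))   # → 120
```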

2. Write an algorithm to find the nth Fibonacci number and analyze it.
Solution 
Fibonacci series is a sequence where each number is the sum of the previous two numbers.

Algorithm (Iterative Fibonacci)

Start
Read the term number n
Set first = 0, second = 1
If n = 1, print first and stop
If n = 2, print second and stop
Set i = 3
While (i ≤ n)
temp = first + second
first = second
second = temp
i = i + 1
Print temp
Stop

Analysis
Time Complexity: O(n), because the loop executes (n − 2) times.
Space Complexity: O(1), since only a fixed number of variables are used.
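The algorithm above can be sketched in Python (an illustrative version, 1-indexed so the series reads 0, 1, 1, 2, 3, 5, …):

```python
def fibonacci(n):
    # Returns the nth Fibonacci term; O(n) time, O(1) extra space.
    first, second = 0, 1
    if n == 1:
        return first
    for _ in range(3, n + 1):
        first, second = second, first + second
    return second

print(fibonacci(7))   # → 8
```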

3. Explain Euclid’s algorithm for GCD. 
Solution 
GCD (Greatest Common Divisor) is the largest positive integer that divides two numbers exactly.
Euclid’s algorithm is an iterative method to find the GCD using division.
It is based on the principle that GCD(m, n) = GCD(n, m % n).

Algorithm (Euclid’s Method)
Start
Read two numbers m and n
While (n ≠ 0)
r = m % n
m = n
n = r
Print m as the GCD
Stop

Analysis
Time Complexity: O(log(min(m, n))), because the remainder at least halves every two iterations.
Space Complexity: O(1), since only constant variables are used.
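The algorithm above translates directly into a short Python sketch:

```python
def gcd(m, n):
    # Euclid's algorithm: repeatedly replace (m, n) with (n, m mod n)
    # until the remainder becomes 0; the surviving m is the GCD.
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(48, 18))   # → 6
```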

4. Define sequential search. Write its algorithm and analyze it.
Solution 
Sequential search is a searching technique that checks elements one by one.
It is also known as linear search.
It compares the search key with each element until a match is found or the list ends.

Algorithm (Sequential Search)

Start
Read the search key
Compare the key with the first element of the array
If a match is found, print “Search Successful” and stop
Otherwise, compare the key with the next element
Repeat steps 4 and 5 until the last element is checked
If no match is found, print “Search Unsuccessful.”
Stop

Analysis
Time Complexity: O(n), because each element may be compared once.
Space Complexity: O(1) auxiliary space, since only the key and a loop counter are needed beyond the input array.
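The steps above can be sketched as a minimal Python function (illustrative; it returns the index on success and −1 on failure instead of printing):

```python
def sequential_search(arr, key):
    # Compare the key with each element in turn.
    for i in range(len(arr)):
        if arr[i] == key:
            return i       # search successful
    return -1              # search unsuccessful

print(sequential_search([4, 2, 7, 1], 7))   # → 2
```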

5. Write an algorithm for selection sort and analyze it.
Solution
Selection sort is a simple sorting technique.
It repeatedly selects the smallest element from the unsorted part.
The selected element is placed in its correct position.

Algorithm (Selection Sort)

Start
For i = 0 to n − 2
Set min = i
For j = i + 1 to n − 1
If A[j] < A[min], set min = j
Swap A[min] with A[i]
Stop

Analysis
Time Complexity: O(n²), due to nested loops.
Space Complexity: O(1) auxiliary space, since sorting is done in place.
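A minimal Python sketch of selection sort, tracking the index of the minimum as in the algorithm above:

```python
def selection_sort(a):
    n = len(a)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted part.
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]   # place it at position i
    return a

print(selection_sort([5, 2, 9, 1, 5]))   # → [1, 2, 5, 5, 9]
```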

6. Write an algorithm for insertion sort and analyze it.
Solution
Insertion sort is a comparison-based sorting algorithm.
It builds the sorted list one element at a time.
The first element is considered already sorted.

Algorithm (Insertion Sort)
Start
For i = 1 to n − 1
Set temp = A[i]
Set j = i − 1
While (j ≥ 0 and A[j] > temp)
Shift A[j] to A[j+1]
Decrease j by 1
Insert temp at position j + 1
Repeat until all elements are sorted
Stop

Analysis
Time Complexity: O(n²) in average and worst cases.
Space Complexity: O(1) auxiliary space, since sorting is done in place.
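The shifting steps above can be sketched in Python:

```python
def insertion_sort(a):
    for i in range(1, len(a)):
        temp = a[i]
        j = i - 1
        # Shift larger elements one slot right, then insert temp.
        while j >= 0 and a[j] > temp:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = temp
    return a

print(insertion_sort([5, 2, 9, 1]))   # → [1, 2, 5, 9]
```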

7. Compare bubble sort, selection sort, and insertion sort.

1. Bubble Sort

Advantages
Very simple and easy to understand
Easy to implement
Works well for small datasets

Disadvantages
Very slow for large datasets
High number of comparisons and swaps
Worst and average time complexity is O(n²)

2. Selection Sort

Advantages
Simple algorithm
Fewer swaps compared to bubble sort
Performance does not depend on data order

Disadvantages
Time complexity is always O(n²)
Not efficient for large datasets
Performs unnecessary comparisons even when the data is already sorted

3. Insertion Sort
Advantages
Faster than bubble and selection sort for small data
Efficient for nearly sorted lists
Simple and stable sorting method

Disadvantages
Slow for large datasets
Worst-case time complexity is O(n²)
More shifting operations in reverse order of data

Conclusion
Bubble sort is the least efficient.
Selection sort reduces swaps, but still slow.
Insertion sort is the best among the three for small or partially sorted data.





🔹 UNIT 3: Divide and Conquer Algorithms

  1. Explain divide and conquer strategy with example.

  2. Write recursive algorithm for binary search and analyze it.

  3. Write min-max algorithm using divide and conquer and analyze it.

  4. Write merge sort algorithm and analyze its complexity.

  5. Write quick sort algorithm and analyze best, worst, and average case.

  6. What is the worst case of quick sort? How does randomized quick sort handle it?

  7. Trace quick sort for a given array.

  8. Explain heapify operation with example.

  9. Write heap sort algorithm and analyze it.

  10. Trace heap sort for given data.

  11. Define order statistics problem.

  12. Explain selection in expected linear time.

  13. Explain selection in worst-case linear time.


🔹 UNIT 4: Greedy Algorithms

  1. What is an optimization problem?

  2. What is greedy strategy?

  3. When does greedy algorithm give optimal solution?

  4. Explain elements of greedy strategy.

  5. Solve fractional knapsack problem using greedy method.

  6. Write job sequencing with deadlines algorithm and analyze it.

  7. Find minimum spanning tree using Kruskal’s algorithm.

  8. Find minimum spanning tree using Prim’s algorithm.

  9. Explain Dijkstra’s algorithm with complexity.

  10. What is Huffman coding?

  11. Construct Huffman codes for given frequencies.

  12. Find total number of bits using Huffman coding.


🔹 UNIT 5: Dynamic Programming

  1. What is dynamic programming?

  2. Explain elements of dynamic programming.

  3. Compare greedy algorithm and dynamic programming.

  4. Compare recursion and dynamic programming.

  5. Explain memoization strategy.

  6. Compare dynamic programming and memoization.

  7. Solve matrix chain multiplication using DP.

  8. Find optimal parenthesization of matrix chain.

  9. Find edit distance between two strings using DP.

  10. Solve 0/1 knapsack problem using DP.

  11. Explain Floyd-Warshall algorithm.

  12. Find all-pairs shortest path using Floyd-Warshall algorithm.

  13. Explain travelling salesman problem using DP.


🔹 UNIT 6: Backtracking

  1. What is backtracking?

  2. How does backtracking differ from recursion?

  3. Solve subset sum problem using backtracking.

  4. Trace subset sum algorithm for given set and sum.

  5. Solve 0/1 knapsack problem using backtracking.

  6. Explain N-queen problem using backtracking.

  7. Analyze time complexity of backtracking algorithms.

  8. Does backtracking give multiple solutions? Explain.


🔹 UNIT 7: Number Theoretic Algorithms

  1. Explain Euclid’s algorithm and analyze it.

  2. Explain extended Euclid’s algorithm and analyze it.

  3. Why is extended Euclid’s algorithm used?

  4. Solve modular linear equations.

  5. Solve system of congruences using Chinese Remainder Theorem.

  6. Explain Miller-Rabin randomized primality test.

  7. Analyze complexity of Miller-Rabin test.


🔹 UNIT 8: NP-Completeness & Approximation Algorithms

  1. Define tractable and intractable problems.

  2. Explain polynomial time and super-polynomial time.

  3. Define class P.

  4. Define class NP.

  5. Define NP-Hard problems.

  6. Define NP-Complete problems.

  7. State Cook’s theorem.

  8. Explain problem reducibility.

  9. Prove SAT is NP-Complete.

  10. Prove vertex cover is NP-Complete.

  11. Prove subset sum is NP-Complete.

  12. What is approximation algorithm?

  13. Explain approximation algorithm for vertex cover.

  14. Explain approximation algorithm for subset sum.


