1. What is an algorithm? Explain the properties of an algorithm. Why do we need algorithms?
Solution
An algorithm is a finite sequence of well-defined steps used to solve a problem.
It takes input values, processes them, and produces output values.
Properties of an Algorithm
The properties of an algorithm define the conditions that make an algorithm valid and effective.
These properties ensure correctness, termination, and efficiency.
Properties
Input
An algorithm can have zero or more inputs.
Output
It must produce at least one output.
Correctness
It gives the correct output for every valid input.
Finiteness
The algorithm terminates after finite steps.
Definiteness
Every step is clear and unambiguous.
Effectiveness
Each operation is simple and executable in finite time.
Generality
The algorithm works for all instances of a problem.
Algorithms are needed to solve problems systematically.
They provide a logical and efficient method for computation.
Reasons
Efficiency
Reduces time and memory usage.
Scalability
Handles large input sizes effectively.
Consistency
Same input always produces the same output.
Automation
Enables automatic execution of tasks.
Example
Binary Search is faster than Linear Search for large datasets.
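As an illustration of why binary search scales better, a minimal Python sketch (assuming a sorted array; not part of the original answer):

```python
def binary_search(arr, key):
    """Return the index of key in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # middle of the current range
        if arr[mid] == key:
            return mid
        elif arr[mid] < key:
            lo = mid + 1            # discard the left half
        else:
            hi = mid - 1            # discard the right half
    return -1
```

Each iteration halves the search range, so at most about log₂ n comparisons are needed, versus up to n for linear search.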
2. Explain the RAM model with a suitable example. Define time and space complexity.
Solution:
The Random Access Machine (RAM) Model
The RAM (Random Access Machine) model is a theoretical model used for analyzing algorithms in a machine-independent way. It assumes that an algorithm runs on an ideal computer with simple and uniform operations. In this model, every basic operation is assumed to take constant time.
Characteristics of the RAM Model:
Unit-Time Operations: Basic operations such as addition, subtraction, comparison, and assignment take one unit of time.
Direct Memory Access: Any memory location can be accessed in constant time.
Sequential Execution: Instructions are executed one after another, unless controlled by loops or conditions.
Uniform Cost Measure: Each basic instruction is counted as one step, and total running time is measured by counting these steps.
No Hardware Dependence: The model ignores hardware differences and focuses only on algorithmic efficiency.
Example: Fibonacci Number (Iterative Method)
fib(n) {
    if (n == 0) return 0;       // base case: fib(0) = 0
    a = 0; b = 1; f = 1;        // f = 1 already covers n = 1
    for (i = 2; i <= n; i++) {
        f = a + b;
        a = b;
        b = f;
    }
    return f;
}
RAM Model Analysis:
Initialization statements take constant time.
The for loop runs (n − 1) times.
Inside each loop, there are:
One addition (a + b)
Three assignments (f = a + b, a = b, b = f)
Each of these takes constant time.
Time Complexity:
Measures the time an algorithm takes to run as the input size increases.
It focuses on the number of steps or operations performed.
Space Complexity: Measures the memory (RAM) an algorithm uses during its execution.
It focuses on the extra storage required beyond the input itself.
3. Explain best, worst, and average case complexity.
Solution
Worst Case Complexity:
Worst case complexity is the maximum time an algorithm takes for any input of size n.
It gives the upper bound of running time
Occurs when the maximum number of operations are performed
Most commonly used for analysis
Notation: Big-O → O(n)
Example:
In linear search, when the element is not present, all elements are checked
Best Case Complexity:
Best case complexity is the minimum time an algorithm takes for any input.
It gives the lower bound of running time
Occurs when the minimum number of operations are performed
Rarely used in practical analysis
Notation: Omega → Ω(1)
Example:
In linear search, when the element is found at first position
Average Case Complexity:
Average case complexity is the expected time taken over all possible inputs.
Calculated by taking all possible cases and averaging them
Gives a realistic performance measure
Difficult to compute in many cases
Notation: Theta → Θ(n)
Example:
In linear search, element may be found at any position or not found
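The three cases can be made concrete by counting comparisons in a small Python sketch of linear search (illustrative, not from the original):

```python
def linear_search_comparisons(arr, key):
    """Return (index, comparisons) for a linear search; index is -1 if absent."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == key:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
# Best case: key at the first position -> 1 comparison (Omega(1)).
# Worst case: key absent -> n comparisons (O(n)).
# Average case: key equally likely anywhere -> about n/2 comparisons (Theta(n)).
```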
4. Explain aggregate analysis with example.
Solution
Aggregate analysis is a method of amortized analysis used to bound the average cost per operation over a worst-case sequence of operations.
It determines the total running time T(n) for a sequence of n operations.
The average cost per operation is then T(n) / n.
No probability is used (unlike probabilistic analysis).
Even if some operations are expensive, the overall average cost remains small.
Helps prove that a sequence of operations is efficient in the long run.
Commonly used in analyzing data structures like stacks, dynamic arrays, etc.
Steps in Aggregate Analysis
Find the total cost T(n) of n operations in the worst case.
Divide the total cost by n to get average cost per operation.
Example: Stack Operations
Consider push and pop operations in a stack.
Each push takes constant time O(1).
Each pop also takes constant time O(1).
For n stack operations, total time = O(n).
Average time per operation = O(n) / n = O(1).
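The stack argument can be sketched in Python with an operation counter (the class name CountingStack is illustrative):

```python
class CountingStack:
    """Stack that counts primitive operations, to illustrate aggregate analysis."""
    def __init__(self):
        self.items = []
        self.ops = 0              # running total of primitive operations

    def push(self, x):
        self.items.append(x)      # one O(1) step
        self.ops += 1

    def pop(self):
        self.ops += 1
        return self.items.pop()   # one O(1) step

s = CountingStack()
n = 100
for i in range(n):
    s.push(i)
# Total cost T(n) = n steps, so average cost per operation = T(n) / n = 1, i.e. O(1).
```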
5. Explain Big-O notation with example
Solution
Big-O notation is a concept in Algorithm Analysis used to describe the upper bound (worst-case growth rate) of an algorithm’s time or space complexity as input size increases.
A function f(n) is said to be O(g(n)) if there exist positive constants c and n₀ such that f(n) ≤ c · g(n) for all n ≥ n₀.
It shows how execution time or memory grows with input size n.
Focuses on asymptotic behavior, not exact values.
Ignores constants and lower-order terms (only dominant term matters).
Represents the worst-case scenario of an algorithm.
Used to compare efficiency of algorithms.
Helps in predicting scalability for large inputs.
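As a concrete check of the definition, the sketch below verifies that f(n) = 3n² + 5n + 7 is O(n²) using witness constants c = 4 and n₀ = 8 (one valid choice among many; the function is illustrative):

```python
def f(n):
    # f(n) = 3n^2 + 5n + 7; the dominant term is 3n^2
    return 3 * n * n + 5 * n + 7

c, n0 = 4, 8   # witness constants for the Big-O definition
# f(n) <= c * n^2 holds for every n >= n0, so f(n) is O(n^2):
assert all(f(n) <= c * n * n for n in range(n0, 1000))
```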
1. Write an iterative algorithm to find the factorial and analyze it.
Solution
An iterative algorithm solves a problem by repeating a set of instructions using loops instead of recursion.
Algorithm:
Factorial(n)
1. fact ← 1
2. for i ← 1 to n do
3. fact ← fact × i
4. return fact
Uses a loop (iteration) instead of recursion.
Multiplies numbers from 1 to n sequentially.
Requires only one variable (fact) for computation.
Easy to implement and understand.
Avoids function call overhead of recursion.
Suitable for large inputs compared to recursive approach.
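The algorithm above translates directly into Python:

```python
def factorial(n):
    """Iterative factorial: multiply the numbers from 1 to n."""
    fact = 1
    for i in range(1, n + 1):
        fact *= i
    return fact
```

For example, factorial(5) returns 120. The loop runs n times, so the time complexity is O(n) with O(1) extra space.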
2. Write an algorithm to find the nth Fibonacci number and analyze it.
Solution
Fibonacci series is a sequence where each number is the sum of the previous two numbers.
Algorithm (Iterative Fibonacci)
Start
Set first = 0, second = 1
Read the term number n
Set i = 3
While (i ≤ n)
temp = first + second
first = second
second = temp
i = i + 1
Print second as the nth Fibonacci term (for n = 1, print first; for n = 2, print second directly, since the loop never runs)
Stop
Analysis
Time Complexity: O(n), because the loop executes (n − 2) times.
Space Complexity: O(1), since only a fixed number of variables are used.
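A Python version of the algorithm, handling the n = 1 and n = 2 base cases explicitly (the pseudocode leaves them implicit):

```python
def fibonacci(n):
    """Return the nth Fibonacci term, 1-indexed: 0, 1, 1, 2, 3, 5, 8, ..."""
    first, second = 0, 1
    if n == 1:
        return first
    for _ in range(3, n + 1):                   # runs n - 2 times
        first, second = second, first + second
    return second
```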
3. Explain Euclid’s algorithm for GCD.
Solution
GCD (Greatest Common Divisor) is the largest positive integer that divides two numbers exactly.
Euclid’s algorithm is an iterative method to find the GCD using division.
It is based on the principle that GCD(m, n) = GCD(n, m % n).
Algorithm (Euclid’s Method)
Start
Read two numbers m and n
While (n ≠ 0)
r = m % n
m = n
n = r
Print m as the GCD
Stop
Analysis
Time Complexity: O(log min(m, n)), because the smaller operand at least halves every two iterations.
Space Complexity: O(1), since only constant variables are used.
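A Python sketch of the loop above:

```python
def gcd(m, n):
    """Euclid's algorithm: repeatedly apply gcd(m, n) = gcd(n, m % n)."""
    while n != 0:
        m, n = n, m % n
    return m
```

For example, gcd(48, 18) returns 6.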
4. Define sequential search. Write its algorithm and analyze it.
Solution
Sequential search is a searching technique that checks elements one by one.
It is also known as linear search.
It compares the search key with each element until a match is found or the list ends.
Algorithm (Sequential Search)
Start
Read the search key
Compare the key with the first element of the array
If a match is found, print “Search Successful” and stop
Otherwise, compare the key with the next element
Repeat steps 4 and 5 until the last element is checked
If no match is found, print “Search Unsuccessful.”
Stop
Analysis
Time Complexity: O(n), because each element may be compared once.
Space Complexity: O(1) auxiliary space, since only the key and a loop index are needed beyond the input array itself.
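A minimal Python sketch of the steps above:

```python
def sequential_search(arr, key):
    """Compare key with each element in turn; return its index, or -1."""
    for i in range(len(arr)):
        if arr[i] == key:
            return i      # search successful
    return -1             # search unsuccessful
```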
5. Write an algorithm for selection sort and analyze it.
Solution
Selection sort is a simple sorting technique.
It repeatedly selects the smallest element from the unsorted part.
The selected element is placed in its correct position.
Algorithm (Selection Sort)
Start
For i = 0 to n − 1
Set min = i (assume A[i] is the minimum element)
For j = i + 1 to n − 1
If A[j] < A[min], set min = j
Swap A[min] with A[i]
Repeat until the array is sorted
Stop
Analysis
Time Complexity: O(n²), due to nested loops.
Space Complexity: O(1) auxiliary space, since sorting is done in place with only a few extra variables.
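The steps above can be sketched in Python:

```python
def selection_sort(a):
    """In-place selection sort: move the smallest remaining element
    to the front of the unsorted part on each pass."""
    n = len(a)
    for i in range(n - 1):
        min_idx = i                     # assume A[i] is the minimum
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j             # remember index of new minimum
        a[i], a[min_idx] = a[min_idx], a[i]
    return a
```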
6. Write an algorithm for insertion sort and analyze it.
Solution
Insertion sort is a comparison-based sorting algorithm.
It builds the sorted list one element at a time.
The first element is considered already sorted.
Algorithm (Insertion Sort)
Start
For i = 1 to n − 1
Set temp = A[i]
Set j = i − 1
While (j ≥ 0 and A[j] > temp)
Shift A[j] to A[j+1]
Decrease j by 1
Insert temp at position j + 1
Repeat until all elements are sorted
Stop
Analysis
Time Complexity: O(n²) in average and worst cases.
Space Complexity: O(1) auxiliary space, since elements are shifted within the same array.
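The steps above can be sketched in Python:

```python
def insertion_sort(a):
    """In-place insertion sort: grow a sorted prefix one element at a time."""
    for i in range(1, len(a)):
        temp = a[i]
        j = i - 1
        while j >= 0 and a[j] > temp:
            a[j + 1] = a[j]   # shift larger elements one place right
            j -= 1
        a[j + 1] = temp       # insert temp at its correct position
    return a
```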
7. Compare bubble sort, selection sort, and insertion sort.
1. Bubble Sort
Advantages
Very simple and easy to understand
Easy to implement
Works well for small datasets
Disadvantages
Very slow for large datasets
High number of comparisons and swaps
Worst and average time complexity is O(n²)
2. Selection Sort
Advantages
Simple algorithm
Fewer swaps compared to bubble sort
Performance does not depend on data order
Disadvantages
Time complexity is always O(n²)
Not efficient for large datasets
Performs unnecessary comparisons even when the data is already sorted
3. Insertion Sort
Advantages
Faster than bubble and selection sort for small data
Efficient for nearly sorted lists
Simple and stable sorting method
Disadvantages
Slow for large datasets
Worst-case time complexity is O(n²)
More shifting operations in reverse order of data
Conclusion
Bubble sort is the least efficient.
Selection sort reduces swaps, but still slow.
Insertion sort is the best among the three for small or partially sorted data.
🔹 UNIT 3: Divide and Conquer Algorithms
Explain divide and conquer strategy with example.
Write recursive algorithm for binary search and analyze it.
Write min-max algorithm using divide and conquer and analyze it.
Write merge sort algorithm and analyze its complexity.
Write quick sort algorithm and analyze best, worst, and average case.
What is the worst case of quick sort? How does randomized quick sort handle it?
Trace quick sort for a given array.
Explain heapify operation with example.
Write heap sort algorithm and analyze it.
Trace heap sort for given data.
Define order statistics problem.
Explain selection in expected linear time.
Explain selection in worst-case linear time.
🔹 UNIT 4: Greedy Algorithms
What is an optimization problem?
What is greedy strategy?
When does greedy algorithm give optimal solution?
Explain elements of greedy strategy.
Solve fractional knapsack problem using greedy method.
Write job sequencing with deadlines algorithm and analyze it.
Find minimum spanning tree using Kruskal’s algorithm.
Find minimum spanning tree using Prim’s algorithm.
Explain Dijkstra’s algorithm with complexity.
What is Huffman coding?
Construct Huffman codes for given frequencies.
Find total number of bits using Huffman coding.
🔹 UNIT 5: Dynamic Programming
What is dynamic programming?
Explain elements of dynamic programming.
Compare greedy algorithm and dynamic programming.
Compare recursion and dynamic programming.
Explain memoization strategy.
Compare dynamic programming and memoization.
Solve matrix chain multiplication using DP.
Find optimal parenthesization of matrix chain.
Find edit distance between two strings using DP.
Solve 0/1 knapsack problem using DP.
Explain Floyd-Warshall algorithm.
Find all-pairs shortest path using Floyd-Warshall algorithm.
Explain travelling salesman problem using DP.
🔹 UNIT 6: Backtracking
What is backtracking?
How does backtracking differ from recursion?
Solve subset sum problem using backtracking.
Trace subset sum algorithm for given set and sum.
Solve 0/1 knapsack problem using backtracking.
Explain N-queen problem using backtracking.
Analyze time complexity of backtracking algorithms.
Does backtracking give multiple solutions? Explain.
🔹 UNIT 7: Number Theoretic Algorithms
1. Using the Extended Euclidean Algorithm, find the GCD of 12 and 16.
Solution
Apply gcd(a, b) = gcd(b, a mod b):
16 = 1 × 12 + 4
12 = 3 × 4 + 0, so GCD(12, 16) = 4.
Back-substitution gives 4 = 16 − 1 × 12, i.e. 12 × (−1) + 16 × 1 = 4, so the coefficients are x = −1, y = 1.
2. Solve the following linear equation using the Chinese Remainder Theorem.
x ≡ 1 (mod 3)
x ≡ 2 (mod 5)
x ≡ 0 (mod 7)
Solution
M = 3 × 5 × 7 = 105; M₁ = 35, M₂ = 21, M₃ = 15.
Inverses: 35 ≡ 2 (mod 3) and 2 × 2 ≡ 1 (mod 3), so y₁ = 2; 21 ≡ 1 (mod 5), so y₂ = 1; the third term vanishes since its remainder is 0.
x = 1 × 35 × 2 + 2 × 21 × 1 + 0 = 70 + 42 = 112 ≡ 7 (mod 105).
Check: 7 mod 3 = 1, 7 mod 5 = 2, 7 mod 7 = 0. So x ≡ 7 (mod 105).
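A general CRT solver for pairwise-coprime moduli can be sketched in Python (pow(x, -1, m) for the modular inverse needs Python 3.8+):

```python
from math import prod

def crt(remainders, moduli):
    """Chinese Remainder Theorem for pairwise-coprime moduli.
    Returns the unique solution x modulo the product of the moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        Mi = M // m                # product of the other moduli
        yi = pow(Mi, -1, m)        # modular inverse of Mi mod m
        x += r * Mi * yi
    return x % M

# crt([1, 2, 0], [3, 5, 7]) solves the system
# x ≡ 1 (mod 3), x ≡ 2 (mod 5), x ≡ 0 (mod 7).
```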
3. What is the purpose of Euclid’s algorithm? Explain with a suitable example.
Solution
Purpose of Euclid’s Algorithm:
Euclid’s algorithm is used to find the Greatest Common Divisor (GCD) of two integers.
It is an efficient method based on repeated division.
It reduces large problems into smaller subproblems.
Used to compute GCD(a, b) where a > b.
Based on rule: gcd(a, b) = gcd(b, a mod b).
Continues until the remainder becomes zero.
Last non-zero remainder is the GCD.
Works faster than naive factorization methods.
Example: Find GCD of 48 and 18
48 ÷ 18 → remainder = 12
18 ÷ 12 → remainder = 6
12 ÷ 6 → remainder = 0
GCD = 6
Conclusion: Euclid’s algorithm efficiently finds the GCD using repeated division, making it widely used in mathematics and cryptography.
4. Why extended euclidean algorithm is used? Write down its algorithm and analyze its complexity.
Solution
Purpose of Extended Euclidean Algorithm
The Extended Euclidean Algorithm is used to find GCD(a, b) along with integers x and y such that a·x + b·y = gcd(a, b).
It is important for computing modular inverse.
Widely used in cryptography (e.g., RSA).
Finds coefficients (x, y) in addition to GCD.
Helps solve linear Diophantine equations.
Used in modular arithmetic operations.
Algorithm (Extended Euclidean Algorithm)
Input: Integers a, b
Output: gcd(a, b), x, y
If b = 0:
Return gcd = a, x = 1, y = 0
Else:
Recursively compute: gcd(b, a mod b)
Update:
x = y₁
y = x₁ − ⌊a/b⌋ × y₁
Repeat until base condition is reached.
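The recursion above can be sketched in Python:

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0                      # base case: gcd(a, 0) = a
    g, x1, y1 = extended_gcd(b, a % b)      # solve the smaller subproblem
    return g, y1, x1 - (a // b) * y1        # back-substitute coefficients
```

For example, extended_gcd(48, 18) returns (6, -1, 3), and indeed 48 × (−1) + 18 × 3 = 6.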
Complexity Analysis
Time complexity is O(log min(a, b)).
Based on repeated division operations like Euclid’s algorithm.
Efficient even for large numbers.
Space complexity is O(log n) (due to recursion).
Conclusion: Extended Euclidean Algorithm efficiently computes GCD and coefficients, making it essential for number theory and cryptographic applications.
7. Solve the following linear congruences using Chinese Remainder Theorem.
x ≡ 1 (mod 2)
x ≡ 3 (mod 5)
x ≡ 6 (mod 7)
Solution
M = 2 × 5 × 7 = 70; M₁ = 35, M₂ = 14, M₃ = 10.
Inverses: 35 ≡ 1 (mod 2), so y₁ = 1; 14 ≡ 4 (mod 5) and 4 × 4 ≡ 1 (mod 5), so y₂ = 4; 10 ≡ 3 (mod 7) and 3 × 5 ≡ 1 (mod 7), so y₃ = 5.
x = 1 × 35 × 1 + 3 × 14 × 4 + 6 × 10 × 5 = 35 + 168 + 300 = 503 ≡ 13 (mod 70).
Check: 13 mod 2 = 1, 13 mod 5 = 3, 13 mod 7 = 6. So x ≡ 13 (mod 70).
8. Explain Miller-Rabin randomized primality test.
Solution
Miller–Rabin Randomized Primality Test:
Miller–Rabin test is a probabilistic algorithm used to check whether a number is prime or composite.
It is based on properties of modular exponentiation.
It gives a highly reliable result with small error probability.
Key Points:
Faster than deterministic tests for large numbers.
Used in cryptography (RSA, key generation).
Error probability decreases with more iterations.
Conclusion: Miller–Rabin is an efficient and reliable probabilistic test for checking primality of large numbers.
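A standard Python implementation of the test, for illustration (k controls the number of random trials):

```python
import random

def miller_rabin(n, k=20):
    """Return False if n is definitely composite, True if probably prime.
    A 'probably prime' answer errs with probability at most (1/4)**k."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    d, r = n - 1, 0
    while d % 2 == 0:                 # write n - 1 = 2^r * d with d odd
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)              # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # witness found: definitely composite
    return True
```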
9. Analyze complexity of Miller-Rabin test.
Solution
Complexity of Miller–Rabin Test
Miller–Rabin test is a probabilistic primality test with efficient performance.
Its complexity depends on modular exponentiation and number of iterations.
It is suitable for large integers.
For one iteration:
Main operation is modular exponentiation (a^d mod n).
Using fast exponentiation → O(log n) multiplications.
Each multiplication costs O((log n)²) (basic method).
So, one iteration ≈ O((log n)³).
If test is repeated k times:
Total time = O(k · (log n)³).
Key Points:
k is number of random trials.
More iterations → higher accuracy.
Error probability ≤ (1/4)^k.
Much faster than deterministic primality tests for large n.
Widely used in cryptographic applications.
Conclusion: Miller–Rabin runs in polynomial time O(k · (log n)³), making it efficient and practical for large numbers.


🔹 UNIT 8: NP-Completeness & Approximation Algorithms
1. Define class P and NP problems. Why do we need approximation algorithms? Justify.
Solution
Class P Problem:
Class P consists of decision problems that can be solved in polynomial time.
These problems are considered efficiently solvable.
Time complexity is typically O(nᵏ) for some constant k.
Algorithms exist with guaranteed fast solutions.
Examples include sorting, shortest path, minimum spanning tree.
Example: Finding shortest path using Dijkstra’s algorithm.
Class NP Problem:
Class NP consists of problems whose solutions can be verified in polynomial time.
Finding the solution may be hard, but checking it is easy.
Includes complex decision problems.
Not all NP problems are known to be in P.
Contains NP-complete and NP-hard problems.
No known polynomial-time solution for all NP problems.
Example: Travelling Salesman Problem (TSP).
Need for Approximation Algorithms:
Approximation algorithms are used when exact solutions are too slow or impractical.
They provide near-optimal solutions in reasonable time.
Essential for solving NP-hard problems.
Exact algorithms may take exponential time.
Approximation gives good enough solutions quickly.
Useful in real-time and large-scale systems.
Provides performance guarantees (approximation ratio).
Example: Approximate solution for TSP instead of exact optimal path.
Conclusion: Approximation algorithms are needed to efficiently handle complex problems where exact solutions are not feasible.
2. Write short notes on:
a) Class P, Class NP, and NP-Complete
Solution
Class P Problem:
Class P consists of decision problems that can be solved in polynomial time.
These problems are considered efficiently solvable.
Time complexity is typically O(nᵏ) for some constant k.
Algorithms exist with guaranteed fast solutions.
Examples include sorting, shortest path, minimum spanning tree.
Example: Finding shortest path using Dijkstra’s algorithm.
Class NP Problem:
Class NP consists of problems whose solutions can be verified in polynomial time.
Finding the solution may be hard, but checking it is easy.
Includes complex decision problems.
Not all NP problems are known to be in P.
Contains NP-complete and NP-hard problems.
No known polynomial-time solution for all NP problems.
Example: Travelling Salesman Problem (TSP).
NP-Complete Problems
NP-Complete problems are a class of problems that are both in NP and NP-hard.
A problem is NP-Complete if it is in NP (its solution can be verified in polynomial time) and every problem in NP reduces to it in polynomial time.
They are considered the hardest problems in NP.
If any NP-Complete problem is solved in polynomial time, then P = NP.
Every problem in NP can be reduced to an NP-Complete problem.
No known efficient (polynomial-time) algorithm exists for them.
Solving them exactly usually takes exponential time.
Examples:
Travelling Salesman Problem (decision version)
3-SAT problem
Vertex Cover problem
Used to identify computational complexity of problems.
Basis for designing approximation and heuristic algorithms.
Conclusion: NP-Complete problems represent the most challenging class in NP, with no known efficient exact solutions.
3. Explain the approximation algorithm for the vertex cover of a connected graph with an example.
Solution
Approximation Algorithm for Vertex Cover
A vertex cover is a set of vertices such that every edge in the graph is incident to at least one selected vertex.
Finding the minimum vertex cover is an NP-Complete problem.
An approximation algorithm gives a near-optimal solution efficiently.
Key Points:
Runs in polynomial time.
Produces a solution at most 2 times the optimal.
Simple and widely used approach.
Conclusion: The approximation algorithm provides a fast and acceptable solution for vertex cover, though it may not be optimal.
4. State Cook’s theorem. Discuss the problem reducibility.
Solution
Cook’s Theorem:
Cook’s Theorem states that the Boolean satisfiability problem (SAT) is NP-Complete.
It was the first problem proven NP-Complete.
It shows that every problem in NP can be reduced to SAT in polynomial time.
Establishes the concept of NP-Completeness.
If SAT is solvable in polynomial time, then all NP problems are solvable in polynomial time.
Forms the foundation for complexity theory.
Used to prove other problems are NP-Complete.
Example: Problems like 3-SAT, Vertex Cover are proven NP-Complete using reduction from SAT.
Conclusion: Cook’s Theorem identifies SAT as a central NP-Complete problem and basis for complexity analysis.
Problem Reducibility:
Problem reducibility is a method of converting one problem into another.
It helps compare the difficulty of problems.
Used mainly in complexity theory and NP-Completeness proofs.
If problem A reduces to B (A ≤p B), then solving B helps solve A.
Reduction must be done in polynomial time.
Used to prove a problem is NP-Complete or NP-hard.
Shows relative computational hardness.
Types:
Polynomial-time reduction (most common).
Many-one reduction.
Example:
Reducing SAT to 3-SAT or 3-SAT to Vertex Cover.
Conclusion: Reducibility is a key tool to classify and analyze computational problems.
5. Define NP-complete problems with examples. Give brief proof of the statement “SAT is NP-complete”.
Solution
NP-Complete Problems
NP-Complete problems are those that are in NP and NP-hard.
Their solutions can be verified in polynomial time.
They are the hardest problems in NP.
Every NP problem can be reduced to an NP-Complete problem.
No known polynomial-time algorithm exists for them.
If one NP-Complete problem is solved in polynomial time, then P = NP.
Examples:
SAT (Boolean Satisfiability)
3-SAT
Vertex Cover
Travelling Salesman Problem (decision version)
Conclusion: NP-Complete problems are computationally difficult and central to complexity theory.
Proof that “SAT is NP-Complete”
To prove a problem is NP-Complete, two conditions must be satisfied:
It is in NP
It is NP-hard
Step 1: SAT ∈ NP
Given a truth assignment, we can verify if it satisfies the formula in polynomial time.
Step 2: SAT is NP-hard
Any problem in NP can be transformed (reduced) to SAT in polynomial time.
This is shown using Cook’s Theorem.
Since both conditions are satisfied, SAT is NP-Complete.
Conclusion: SAT is NP-Complete because it is verifiable in polynomial time and all NP problems reduce to it.
6. Define tractable and intractable problems. Illustrate the vertex cover problem with an example.
Solution
Tractable Problems:
Tractable problems are problems that can be solved in polynomial time.
They are considered efficient and practical to solve.
Usually belong to Class P.
Time complexity is O(nᵏ) for some constant k.
Solutions can be obtained quickly even for large inputs.
Well-defined efficient algorithms exist.
Example: Sorting, shortest path problem.
Conclusion: Tractable problems are computationally manageable.
Intractable Problems:
Intractable problems are problems that cannot be solved efficiently in polynomial time.
They require exponential or very high time complexity.
Often belong to NP-Hard or NP-Complete classes.
Exact solutions take very long time for large inputs.
No known efficient algorithms exist.
Often solved using approximation or heuristics.
Example: Travelling Salesman Problem (exact solution).
Conclusion: Intractable problems are computationally difficult to solve exactly.
Vertex Cover Problem
A vertex cover is a set of vertices such that every edge in the graph is connected to at least one vertex in the set.
The goal is to find the minimum vertex cover.
It is an NP-Complete problem.
Covers all edges using minimum number of vertices.
Used in network security, resource allocation.
Can be solved using approximation algorithms.
Example:
Graph edges: {(A,B), (B,C), (C,D)}
Possible vertex cover: {B, C}
(A,B) covered by B
(B,C) covered by B or C
(C,D) covered by C
Conclusion: Vertex cover is an important optimization problem, often solved approximately due to its complexity.
7. Explain the approximation for solving vertex cover with a suitable example.
Solution
Approximation Algorithm for Vertex Cover
A vertex cover is a set of vertices such that every edge has at least one endpoint in the set.
Finding the minimum vertex cover is NP-Complete.
Approximation gives a near-optimal solution in polynomial time.
Algorithm (2-Approximation Method):
Initialize C = ∅ (empty set).
Select any edge (u, v) from the graph.
Add both u and v to C.
Remove all edges incident on u and v.
Repeat until no edges remain.
Properties:
Runs in polynomial time.
Size of solution is at most 2 × optimal.
Simple and efficient.
Example:
Graph edges: {(A,B), (B,C), (C,D), (D,E)}
Pick (A,B) → add A, B → remove (A,B), (B,C)
Remaining: (C,D), (D,E)
Pick (C,D) → add C, D → remove (C,D), (D,E)
Vertex Cover C = {A, B, C, D}
Optimal solution may be {B, D}, but approximation gives a valid cover.
Conclusion: The approximation algorithm provides a fast and acceptable solution for vertex cover, though it may not be optimal.
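The trace above follows this Python sketch of the 2-approximation:

```python
def approx_vertex_cover(edges):
    """2-approximation: scan the edges; whenever one is still uncovered,
    add both of its endpoints to the cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not yet covered
            cover.update((u, v))
    return cover

# approx_vertex_cover([("A","B"), ("B","C"), ("C","D"), ("D","E")])
# reproduces the cover {A, B, C, D} from the example.
```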
8.Write short notes on:
a) NP Hard Problems and NP Completeness
b) Problem Reduction
a) NP-Hard Problems and NP-Completeness
NP-Hard problems are problems that are at least as hard as NP problems.
They may not belong to NP (solution may not be verifiable in polynomial time).
NP-Complete problems are those that are both NP and NP-Hard.
NP-Hard:
Every NP problem can be reduced to it.
May be optimization or non-decision problems.
No guarantee of polynomial-time verification.
NP-Complete:
Must be in NP (verifiable in polynomial time).
Also NP-Hard.
Represent the hardest problems within NP.
If any NP-Complete problem is solved in polynomial time → P = NP.
Used to identify computational difficulty of problems.
Examples:
NP-Hard: Optimization version of TSP
NP-Complete: SAT, 3-SAT, Vertex Cover
Conclusion: NP-Hard shows general hardness, while NP-Complete defines the hardest verifiable problems in NP.
b) Problem Reduction
Problem reduction is the process of converting one problem into another.
It is used to compare complexity of problems.
Important in proving NP-Completeness.
If A reduces to B (A ≤p B), solving B helps solve A.
Reduction must be done in polynomial time.
Used to show a problem is NP-Hard or NP-Complete.
Preserves solution correctness during transformation.
Common types:
Polynomial-time reduction
Many-one reduction
Example: Reducing 3-SAT to Vertex Cover.
Conclusion: Problem reduction helps classify problems and prove their computational difficulty.