Unit 1: Introduction to AI
- What is AI? How can you define AI from the perspective of thought processes?
- How do philosophy, sociology, and economics influence the study of artificial intelligence?
- Describe how the Turing test is used to define AI as acting humanly.
- What is intelligence? Describe the foundations of AI.
- What is the Turing Test? What properties should an agent have to pass the Turing Test?
Unit 2: Intelligent Agents
- What is an agent? How does a utility agent work? Give an example of a utility agent.
- What are the properties of an intelligent agent? How do simple reflex agents work? Give an example.
- Differentiate between model-based and simple reflex agents with an example.
- What do you mean by a rational agent? What are the differences between utility-based and model-based agents?
- Discuss the types of environments in which an agent can operate.
- Using your own assumptions, design PEAS framework for:
- Medicine delivery drone
- Covid medicine prescriber
- Internet Shopping Assistant
- English Language Tutor
- Covid-19 prediction system
- Vaccine recommender system  
Unit 3: Problem Solving by Searching
- Define state space graph. Differentiate between A* search and greedy best first search.
- How is informed search different from uninformed search?
- How is uniform cost search used to find a goal in the state space? Illustrate with an example.
- What is game search? How is Minimax search used in game playing? Illustrate with example.
- Why is alpha-beta pruning necessary? How is it done? Illustrate.
- What is constraint satisfaction problem? Illustrate graph coloring problem.
- What is state space representation? Illustrate with one example.
- How do depth-limited search and iterative deepening search work? Illustrate with an example.
- Construct state space and apply:
- A*
- Greedy Best First Search
- Hill Climbing (and discuss its incompleteness)
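The A* search listed above can be sketched in a few lines of Python. The graph, edge costs, and heuristic values below are invented for illustration (they are not from the question set); the heuristic is chosen to be consistent, so the first goal popped is optimal.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the smallest f(n) = g(n) + h(n).
    graph: dict mapping node -> list of (neighbor, edge_cost)
    h: dict of heuristic estimates of cost-to-goal."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)
        for nbr, cost in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(
                    frontier, (g + cost + h[nbr], g + cost, nbr, path + [nbr])
                )
    return None, float("inf")

# Illustrative state space (assumed, not from the course material)
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 12)],
    "B": [("G", 5)],
}
h = {"S": 5, "A": 4, "B": 2, "G": 0}
path, cost = a_star(graph, h, "S", "G")
print(path, cost)  # ['S', 'A', 'B', 'G'] 8
```

Greedy best first search differs only in ordering the frontier by h(n) alone, which is why it can return the suboptimal path S-B-G here.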
Unit 4: Knowledge Representation
- What is a semantic network? Construct a semantic network from given facts.
- What is a frame? How is knowledge encoded in a frame? Illustrate with an example.
- How is knowledge represented using scripts? Create a knowledge base.
- Convert statements into FOPL and CNF.
- What is forward chaining? Explain with an example.
- How is uncertain knowledge represented? (Bayesian/joint distribution)
- What is fuzzy logic? Construct a fuzzy rule-based expert system.
- What do you mean by unification and lifting?
- Write the rules to convert statements into CNF form.
- How is the resolution algorithm used in FOPL to infer a conclusion?
- Construct a belief network from given probability conditions.
- Discuss a predicate logic resolution example.
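Unification, one of the topics above, can be sketched as a short recursive procedure. The term representation is an assumed convention (tuples for compound terms, capitalised strings for variables, lowercase strings for constants), and the occurs check is omitted for brevity, so this is a teaching sketch rather than a complete implementation.

```python
def is_var(t):
    # Assumed Prolog-style convention: variables are capitalised strings
    return isinstance(t, str) and t[:1].isupper()

def substitute(t, theta):
    """Apply substitution theta to term t."""
    if is_var(t):
        return substitute(theta[t], theta) if t in theta else t
    if isinstance(t, tuple):
        return tuple(substitute(a, theta) for a in t)
    return t

def unify(x, y, theta=None):
    """Return a most general unifier (dict) of x and y, or None on failure.
    Note: no occurs check, so e.g. unify('X', ('f', 'X')) would loop later."""
    if theta is None:
        theta = {}
    x, y = substitute(x, theta), substitute(y, theta)
    if x == y:
        return theta
    if is_var(x):
        return {**theta, x: y}
    if is_var(y):
        return {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None

# Unify Knows(john, X) with Knows(Y, mother(Y)) -- a classic textbook pair
theta = unify(("knows", "john", "X"), ("knows", "Y", ("mother", "Y")))
print(theta)  # {'Y': 'john', 'X': ('mother', 'john')}
```

The resulting substitution {Y/john, X/mother(john)} is exactly the kind of binding the lifted resolution rule relies on.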
Unit 5: Machine Learning
- Differentiate supervised learning from unsupervised learning. How does the Naive Bayes model work?
- What is supervised learning? Discuss the Naive Bayes model with an example.
- What is reinforcement learning? Give an example.
- Define genetic algorithm. Explain selection, crossover, and mutation with examples.
- Write the algorithm for learning by the genetic approach.
- Define artificial neural network. Explain with a model.
- Discuss Perceptron learning.
- Simulate an OR gate using an ANN.
- Explain Hebbian learning with an example.
- What is the role of an activation function? How does the sigmoid function work?
- Write the algorithm for backpropagation learning and show one iteration.
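The OR-gate simulation asked for above can be sketched with a single perceptron trained by the perceptron learning rule. The learning rate, epoch count, and initial weights are illustrative assumptions; any choice works here because OR is linearly separable.

```python
def step(x):
    """Step activation function: fires (1) when the weighted sum is >= 0."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    """Perceptron learning rule: w += lr * (target - output) * input."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# OR gate truth table
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
for (x1, x2), target in or_data:
    print(x1, x2, "->", step(w[0] * x1 + w[1] * x2 + b))
```

After training, the perceptron reproduces the OR truth table; the same loop fails to converge on XOR, which motivates multi-layer networks and backpropagation.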
Unit 6: Applications of AI
Q1: What is an expert system? Define it with an example and describe its stages of development.
Solution 
An Expert System is a computer-based application that imitates the decision-making ability of a human expert.
It uses artificial intelligence (AI) techniques to solve complex problems in a specific domain.
Expert systems are designed to provide advice, diagnosis, or recommendations like a human specialist.
They consist mainly of a knowledge base and an inference engine.
Example:
MYCIN: A medical expert system used to diagnose bacterial infections and recommend antibiotics.
DENDRAL: Used in chemistry to identify molecular structures.
Stages of Development of an Expert System:
Knowledge Acquisition:
Collecting knowledge from human experts, books, or databases. 
Knowledge is analyzed and structured for use in the system.
Tools: Interviews, observations, or questionnaires.
Knowledge Representation:
Storing the acquired knowledge in a formal format understandable by the computer.
Techniques include rules, frames, or semantic networks.
Example: IF–THEN rules like “IF fever AND cough THEN flu.”
Knowledge Base Creation:
Building the knowledge base that contains all facts, rules, and relationships.
This serves as the memory of the expert system.
Inference Engine Design:
The inference engine applies logical reasoning to the knowledge base.
It draws conclusions or solutions based on given inputs (facts).
User Interface Development:
Provides a way for the user to interact with the system.
Users input data, and the system provides reasoning or advice in understandable form.
Testing and Refinement:
The system is tested using real-world problems.
Errors or weaknesses are corrected to improve performance.
Q2: Explain major components of expert system.
Solution
Major Components:
Knowledge Base:
The core part of the expert system.
Stores facts, rules, and relationships about the problem domain.
Contains two types of knowledge:
Factual knowledge – basic information about the domain.
Heuristic knowledge – experience-based rules of thumb.
Example: “IF patient has high fever AND rash THEN possible measles.”
Inference Engine:
Acts as the brain of the system.
Applies logical reasoning to the knowledge base to reach conclusions.
Uses methods like:
Forward chaining: Reasoning from facts to conclusions.
Backward chaining: Reasoning from goals to supporting facts.
Example: Diagnosing a disease by matching symptoms with rules.
Knowledge Acquisition Subsystem:
Responsible for collecting and updating knowledge from human experts or other sources.
Converts human knowledge into a machine-understandable format.
Ensures the knowledge base stays accurate and current.
User Interface:
Provides a way for the user to communicate with the expert system.
Users input data (facts or questions) and receive explanations, advice, or decisions.
Should be simple and interactive for non-technical users.
Explanation Facility:
Explains the reasoning process of the expert system to the user.
Answers questions like “Why?” and “How?” a conclusion was reached.
Builds trust and understanding in the system’s recommendations.
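The forward-chaining strategy described above can be sketched as a simple rule loop. The symptom facts and IF-THEN rules below are invented for illustration, in the spirit of the MYCIN-style example given earlier.

```python
def forward_chain(facts, rules):
    """Forward chaining: repeatedly fire rules whose premises are all
    known facts, adding their conclusions, until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative rule base (assumed): each rule is (premises, conclusion)
rules = [
    ({"fever", "cough"}, "flu"),
    ({"flu", "fatigue"}, "rest_recommended"),
]
derived = forward_chain({"fever", "cough", "fatigue"}, rules)
print(derived)  # includes 'flu' and 'rest_recommended'
```

Backward chaining would run the same rules in reverse: starting from the goal "rest_recommended" and working back to the supporting symptom facts.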
Q3: How is machine vision used in robotics?
Solution
Machine vision refers to the ability of a computer or robot to see, analyze, and interpret visual information from the surrounding environment.
In robotics, it enables robots to perceive objects, make decisions, and perform actions based on visual inputs.
It uses cameras, sensors, and image processing algorithms to simulate human vision.
Machine vision is a key part of intelligent and autonomous robotics.
Uses of Machine Vision in Robotics
Object Detection and Recognition:
Robots use vision systems to identify and classify objects for tasks like sorting, picking, or detecting defective products on conveyor belts.
Guidance and Navigation:
Vision helps robots understand their environment and move safely, as seen in autonomous vehicles and warehouse robots avoiding obstacles.
Inspection and Quality Control:
Machine vision ensures product accuracy by detecting defects, measuring dimensions, and maintaining consistency in manufacturing.
Positioning and Alignment:
Vision systems guide precise placement and alignment of tools or parts in assembly, ensuring accurate joining or welding.
Human–Robot Interaction (HRI):
Robots use vision to recognize human faces, gestures, or movements, allowing safe and effective collaboration with humans.
3D Vision and Mapping:
Stereo or depth cameras create 3D maps of environments, helping robots handle complex tasks like object manipulation or terrain mapping.
Q4: How does NLP work? Explain all steps: Morphological, Syntactic, Semantic, Pragmatic.
Solution
Natural Language Processing (NLP) is a branch of Artificial Intelligence (AI) that enables computers to understand, interpret, and generate human language.
It helps machines communicate with humans in a natural and meaningful way.
NLP works through several processing stages that convert human language into a machine-understandable form.
These stages include Morphological, Syntactic, Semantic, and Pragmatic analysis.
Steps in NLP Processing:
Morphological Analysis:
Deals with the structure and formation of words.
Breaks sentences into morphemes (smallest units of meaning).
Identifies prefixes, roots, suffixes, and grammatical forms.
Example:
Word: “Unhappiness” → “un-” (prefix), “happy” (root), “-ness” (suffix).
Helps recognize word variations (e.g., run, running, ran).
Syntactic Analysis (Parsing):
Examines the grammatical structure of a sentence.
Checks whether the arrangement of words follows language rules.
Builds a parse tree to represent the sentence structure.
Example:
Sentence: “The boy eats an apple.”
Structure: Subject (boy) + Verb (eats) + Object (apple).
Ensures syntactically correct sentences for further processing.
Semantic Analysis:
Focuses on the meaning of words and sentences.
Maps syntactic structures to their logical or real-world meaning.
Resolves ambiguities (e.g., “bank” as river bank or financial bank).
Example:
Sentence: “John ate an apple.”
Semantic meaning: John is the eater, apple is the thing eaten.
Pragmatic Analysis:
Considers the context and intention behind the sentence.
Understands meaning beyond literal words (tone, situation, relationship).
Deals with speaker’s intent, sarcasm, or hidden meaning.
Example:
“Can you open the window?” → It’s not a question about ability, but a polite request.
Summary:
NLP processes human language in multiple stages — morphological, syntactic, semantic, and pragmatic.
Each step adds deeper understanding, allowing computers to interpret language meaningfully and respond intelligently like humans.
Q5: What is pragmatic analysis? How is it done?
Solution
Pragmatic analysis is the final stage of Natural Language Processing (NLP).
It focuses on understanding the intended meaning of a sentence based on its context, situation, and speaker’s intention.
It goes beyond literal meaning to interpret what the speaker actually means.
Pragmatics helps machines handle real-world language use, including sarcasm, politeness, and indirect speech.
How Pragmatic Analysis is Done:
Context Understanding:
The system analyzes the context in which a sentence is used (time, place, participants).
Example:
“It’s cold here.” → May mean a request to close the window, not just a statement.
Speaker’s Intention Identification:
Determines why the sentence was spoken — to request, order, ask, or suggest.
Example:
“Can you pass the salt?” → The intention is a request, not a question about ability.
Discourse Analysis:
Considers the relationship between sentences in a conversation.
Helps maintain coherence and understand references (like pronouns).
Example:
“John dropped the glass. It broke.” → “It” refers to the glass.
Reference and Ambiguity Resolution:
Identifies what words like ‘he’, ‘she’, or ‘it’ refer to in context.
Resolves ambiguous meanings based on previous sentences or world knowledge.
Use of World Knowledge:
The system uses common sense or real-world facts to interpret meaning correctly.
Example:
“The teacher is late because the bus broke down.” → The bus refers to the teacher’s bus, not any random bus.
Example:
Sentence: “Do you know what time it is?”
Literal meaning: Asking about someone’s knowledge.
Pragmatic meaning: A polite request for the current time.
Summary:
Pragmatic analysis interprets intended meaning using context, situation, and common sense.
It enables NLP systems to understand human-like communication, making responses more natural and meaningful.
Q6: What is morphological analysis in NLP?
Solution
Morphological analysis is the first step in Natural Language Processing (NLP).
It deals with the structure and formation of words.
The process identifies morphemes, which are the smallest units of meaning in a language.
It helps computers understand how words are formed and related to each other.
In short, morphological analysis tells the system how words are constructed and modified, forming the foundation for higher-level language processing.
Purpose:
To break a word into its base (root) and affixes (prefixes and suffixes).
Helps in recognizing different forms of the same word (e.g., play, playing, played).
Used in spell checking, machine translation, text analysis, and speech recognition.
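The affix-stripping idea described above can be sketched with a naive analyser. The prefix and suffix lists are illustrative assumptions; real morphological analysers use full lexicons and orthographic rules (note that naive stripping yields the stem "happi" rather than the dictionary root "happy").

```python
# Illustrative affix inventories (assumed, not exhaustive)
PREFIXES = ["un", "re", "dis"]
SUFFIXES = ["ness", "ing", "ed", "s"]

def analyse(word):
    """Split a word into (prefix, root, suffix) morphemes where possible.
    Orthographic adjustments (e.g. y -> i) are intentionally omitted."""
    prefix = suffix = ""
    for p in PREFIXES:
        if word.startswith(p) and len(word) > len(p) + 2:
            prefix, word = p, word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) > len(s) + 2:
            suffix, word = s, word[:-len(s)]
            break
    return prefix, word, suffix

print(analyse("unhappiness"))  # ('un', 'happi', 'ness')
```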
Q7: Construct a fuzzy rule-based expert system.
Solution
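A minimal fuzzy rule-based controller can be sketched as follows. The temperature-to-fan-speed domain, the triangular membership functions, the three rules, and the Sugeno-style weighted-average defuzzification are all assumptions chosen for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp):
    """Assumed fuzzy rule base:
      IF temp is cold THEN speed is low    (20%)
      IF temp is warm THEN speed is medium (50%)
      IF temp is hot  THEN speed is high   (90%)
    Defuzzified with a Sugeno-style weighted average of rule outputs."""
    cold = tri(temp, -10, 0, 20)   # fuzzification: degree of "cold"
    warm = tri(temp, 10, 25, 35)   # degree of "warm"
    hot = tri(temp, 30, 40, 60)    # degree of "hot"
    weights = [(cold, 20), (warm, 50), (hot, 90)]
    total = sum(w for w, _ in weights)
    if total == 0:
        return 0.0
    return sum(w * s for w, s in weights) / total

print(fan_speed(0))   # fully "cold" -> 20.0
print(fan_speed(32))  # partly warm, partly hot -> 66.0
```

The three stages visible here, fuzzification, rule evaluation, and defuzzification, are the standard skeleton of any fuzzy rule-based expert system; a Mamdani system would defuzzify by taking the centroid of clipped output membership functions instead.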
Q8: Describe the components of a machine vision system.
Solution
A Machine Vision System is a technology that enables machines to see, analyze, and interpret visual information from the environment.
It uses cameras, sensors, and software to capture and process images for inspection, identification, and control.
Machine vision is widely used in robotics, manufacturing, quality control, and automation.
Main Components of a Machine Vision System:
Image Acquisition Device (Camera/Sensor):
Captures the visual image of the object or scene.
Can be digital cameras, CCD (Charge-Coupled Device), or CMOS sensors.
The quality of the image depends on resolution, lighting, and lens used.
Example: An industrial camera capturing product images on a conveyor belt.
Lighting System:
Provides appropriate illumination for clear image capture.
Helps highlight features like edges, texture, or color of the object.
Types: LED lights, backlights, ring lights, laser lighting.
Example: Backlighting used to detect cracks or edges on glass products.
Optics (Lens):
Focuses the image of the object onto the camera sensor.
Determines the field of view, magnification, and depth of focus.
The correct lens ensures a clear and distortion-free image.
Image Processing Hardware:
Converts the captured optical image into digital form.
Includes frame grabbers, processors, and computers for data handling.
Performs real-time image enhancement and processing operations.
Image Processing Software:
The core of the vision system that analyzes digital images.
Performs operations such as:
Filtering (to remove noise)
Edge detection
Object recognition
Measurement and comparison
Helps make decisions based on visual data (e.g., pass/fail inspection).
Vision Controller / Computer:
Coordinates all components and runs vision algorithms.
Controls the sequence of image capture, processing, and result output.
Often integrated with robotic or automation systems.
Output / Actuator Interface:
Sends the processed result to external devices for further action.
Can trigger alarms, display outputs, or control robotic movement.
Example: Rejecting defective products automatically from a production line.
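The edge-detection operation mentioned under image processing software can be sketched in pure Python with a Sobel operator. The 4x6 test image is an invented example; production systems use libraries such as OpenCV rather than hand-rolled loops.

```python
def sobel_edges(img):
    """Apply the Sobel operator to a grayscale image (list of lists of
    intensities), returning the gradient magnitude at each interior pixel."""
    gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sx = sum(gx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            sy = sum(gy[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (sx * sx + sy * sy) ** 0.5
    return out

# Invented test image: dark left half, bright right half
img = [[0, 0, 0, 255, 255, 255] for _ in range(4)]
edges = sobel_edges(img)
print(edges[1])  # large magnitudes at the dark/bright boundary
```

A pass/fail inspection step would then simply threshold these magnitudes, which is the kind of decision the vision controller forwards to the actuator interface.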