I. General Graph Theory Concepts
-
Q: What is a graph? What are its basic components? A: A graph is a mathematical structure defined as a pair G = (V, E), where V is a finite set of vertices (or nodes) and E is a set of edges (or arcs). Each edge connects a pair of vertices. Edges can be directed (arcs) or undirected.
-
Q: What’s the difference between a directed and an undirected graph? A: In an undirected graph, edges represent symmetric relationships; an edge {u, v} means there is a connection between u and v. In a directed graph (digraph), edges (arcs) have a direction; an arc (u, v) represents a connection from u to v.
-
Q: What does it mean for a graph to be weighted? A: A weighted graph is a graph where each edge (u, v) is assigned a numerical value, w(u, v), called a weight or cost.
-
Q: What is the degree of a vertex? Differentiate between in-degree and out-degree. A: In an undirected graph, the degree of a vertex v, denoted deg(v), is the number of edges incident to it. In a directed graph, the in-degree of v is the number of incoming edges, and the out-degree is the number of outgoing edges.
-
Q: What does it mean for a graph to be connected? What about strongly connected (for digraphs)? A: An undirected graph G is connected if there is a path between every pair of distinct vertices in V. A directed graph is strongly connected if, for every pair of distinct vertices u and v, there is a directed path from u to v and a directed path from v to u.
II. Graph Representations (Experiment 1, 2, 3)
-
Q: Describe the two main ways to represent a graph in a computer program. A: The two main ways are:
- Adjacency Matrix: A |V| × |V| matrix A where entry A[i][j] is 1 (or w(i, j) for weighted graphs) if there is an edge from vertex i to vertex j, and 0 (or ∞ / a special value) otherwise.
- Adjacency List: An array (or map) of lists. The list adj[u] contains all vertices v such that (u, v) ∈ E. For weighted graphs, the list stores pairs (v, w(u, v)).
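A minimal sketch of both representations for one small undirected weighted graph (the vertices, edges, and weights below are made-up example data):

```python
# Both representations of the same small undirected weighted graph (vertices 0..3).
INF = float("inf")
n = 4
edges = [(0, 1, 4), (0, 2, 1), (2, 3, 7)]        # (u, v, weight) example data

# Adjacency matrix: matrix[u][v] holds w(u, v); INF means "no edge".
matrix = [[INF] * n for _ in range(n)]
for u, v, w in edges:
    matrix[u][v] = matrix[v][u] = w              # undirected: store both directions

# Adjacency list: adj[u] is a list of (v, w) pairs.
adj = [[] for _ in range(n)]
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))

print(matrix[0][2])    # O(1) edge check       -> 1
print(adj[0])          # O(deg(0)) neighbors   -> [(1, 4), (2, 1)]
```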
-
Q: Compare the Adjacency Matrix and Adjacency List representations in terms of space complexity and efficiency for common operations (checking edge existence, finding neighbors of u). A:
- Space Complexity: Matrix is O(V²). List is O(V + E). The list is better for sparse graphs (E ≪ V²).
- Edge Check (u, v): Matrix is O(1). List is O(deg(u)), up to O(V) in the worst case.
- Finding Neighbors of u: Matrix is O(V). List is O(deg(u)).
- Adding/Removing Edge: Matrix is O(1). List is O(1) to add and O(deg(u)) to remove.
- Adding/Removing Vertex: Matrix requires rebuilding the matrix, O(V²). List is more flexible, often O(1) to add and O(V + E) to remove (all incident edges must be deleted).
-
Q: Your Experiment 1 used a linked representation. Explain how that structure works. How does it relate to the standard Adjacency List? A: The linked representation uses header nodes for vertices, each with info, nextnode, and arcptr fields. arcptr points to the adjacency list for that vertex, implemented as a linked list of arc nodes, each containing ndptr (a pointer to the destination vertex) and nextarc. This is a concrete linked-list implementation of the Adjacency List concept.
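A minimal sketch of how those header and arc nodes could be modelled (the field names follow the description above; everything else, including the add_arc helper, is assumed for illustration):

```python
# Sketch of the linked (header node / arc node) representation described above.
class ArcNode:
    def __init__(self, ndptr, nextarc=None):
        self.ndptr = ndptr        # reference to the destination vertex's header node
        self.nextarc = nextarc    # next arc in this vertex's adjacency list

class HeaderNode:
    def __init__(self, info):
        self.info = info          # vertex label / payload
        self.nextnode = None      # next header node in the vertex list
        self.arcptr = None        # head of this vertex's linked adjacency list

def add_arc(src, dst):
    # Prepend a new arc node to src's adjacency list (O(1)).
    src.arcptr = ArcNode(dst, src.arcptr)

# Usage: arcs a -> b and a -> c
a, b, c = HeaderNode("a"), HeaderNode("b"), HeaderNode("c")
a.nextnode, b.nextnode = b, c
add_arc(a, b)
add_arc(a, c)
```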
III. Graph Traversal (Experiment 1)
-
Q: Explain Breadth-First Search (BFS). What kind of data structure does it use? What is it typically used for? A: BFS explores the graph layer by layer from a source vertex s. It uses a Queue. It finds shortest paths in terms of the number of edges in unweighted graphs, checks connectivity, and is used inside algorithms like Edmonds-Karp.
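A minimal BFS sketch computing unweighted shortest-path distances (not the Experiment 1 code; the example graph is made up):

```python
from collections import deque

def bfs_distances(adj, s):
    """Shortest-path distances (edge counts) from s; -1 means unreachable."""
    dist = [-1] * len(adj)
    dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == -1:          # first time we reach v
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Path graph 0-1-2 plus an isolated vertex 3
print(bfs_distances([[1], [0, 2], [1], []], 0))   # [0, 1, 2, -1]
```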
-
Q: Explain Depth-First Search (DFS). What kind of data structure does it use (implicitly or explicitly)? What are some applications? A: DFS explores as deeply as possible before backtracking. It uses a Stack (often the call stack via recursion). Applications include cycle detection, topological sorting (for DAGs), and finding connected components.
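A minimal iterative DFS sketch using an explicit stack (again illustrative, not the experiment's code):

```python
def dfs_order(adj, s):
    """Iterative DFS from s with an explicit stack; returns the visit order."""
    visited = [False] * len(adj)
    order, stack = [], [s]
    while stack:
        u = stack.pop()
        if visited[u]:
            continue
        visited[u] = True
        order.append(u)
        # reversed() makes the order match a recursive DFS scanning adj[u] left to right
        for v in reversed(adj[u]):
            if not visited[v]:
                stack.append(v)
    return order

print(dfs_order([[1, 2], [0, 3], [0], [1]], 0))   # [0, 1, 3, 2]
```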
-
Q: Compare BFS and DFS. When would you prefer one over the other? A:
- Structure: BFS uses Queue (level-order). DFS uses Stack (pre-order style).
- Pathfinding: BFS finds shortest path (edge count). DFS doesn’t guarantee shortest.
- Space: BFS can use O(V) memory, since a single layer can hold a large fraction of the vertices. DFS may use less in practice, but its stack is also O(V) in the worst case.
- Use Cases: BFS for shortest unweighted path. DFS for cycles, topological sort.
IV. Minimum Spanning Tree (MST) (Experiment 2, 3)
-
Q: What is a Minimum Spanning Tree (MST) of a connected, undirected, weighted graph G = (V, E)? A: An MST is a subgraph T = (V, E′) with E′ ⊆ E that is a tree, connects all vertices in V, and minimizes the total edge weight Σ w(u, v) over (u, v) ∈ E′.
-
Q: Explain Prim’s algorithm. How does it build the MST? What is its core greedy strategy? A: Prim’s starts at a root vertex r and grows a single tree T. It maintains the set S of vertices already in T. In each step, it finds the minimum-weight edge (u, v) with u ∈ S and v ∉ S, and adds v to S and (u, v) to T. Greedy choice: the cheapest edge crossing the cut (S, V \ S).
-
Q: What data structure makes Prim’s algorithm efficient? How does it affect the complexity? A: A Priority Queue storing vertices v ∉ S, prioritized by key[v], the minimum weight of an edge connecting v to S (a heap-based sketch follows the list below).
- Matrix / linear scan: O(V²).
- Adj. List + Binary Heap: O(E log V), using O(log V) EXTRACT-MIN and DECREASE-KEY operations.
- Adj. List + Fibonacci Heap: O(E + V log V).
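A sketch of the binary-heap variant using Python's heapq. Since heapq has no DECREASE-KEY, stale entries are simply skipped (lazy deletion), which keeps the O(E log V) bound; the example graph is made up:

```python
import heapq

def prim_mst(adj, r=0):
    """Prim's with a binary heap. adj[u] = list of (v, w) pairs for a connected graph.
    Returns (total weight, list of tree edges)."""
    n = len(adj)
    in_tree = [False] * n
    pq = [(0, r, -1)]                  # (key, vertex, parent)
    total, tree_edges = 0, []
    while pq:
        w, u, parent = heapq.heappop(pq)
        if in_tree[u]:
            continue                   # stale entry: u was already added via a cheaper edge
        in_tree[u] = True
        total += w
        if parent != -1:
            tree_edges.append((parent, u, w))
        for v, wv in adj[u]:
            if not in_tree[v]:
                heapq.heappush(pq, (wv, v, u))
    return total, tree_edges

# 4-cycle 0-1-2-3 with an extra chord 1-3
adj = [[(1, 1), (3, 4)], [(0, 1), (2, 2), (3, 5)], [(1, 2), (3, 3)], [(0, 4), (1, 5), (2, 3)]]
print(prim_mst(adj))                   # (6, [(0, 1, 1), (1, 2, 2), (2, 3, 3)])
```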
-
Q: Explain Kruskal’s algorithm. What is its core greedy strategy? A: Kruskal’s sorts all edges by non-decreasing weight. It iterates through the sorted edges (u, v), adding each to the MST edge set A if it does not form a cycle with the edges already in A. Greedy choice: consider the globally cheapest remaining edge first.
-
Q: How does Kruskal’s algorithm typically detect cycles? A: It uses a Disjoint Set Union (DSU) data structure. Each vertex starts in its own set via MAKE-SET(v). For an edge (u, v), if FIND(u) ≠ FIND(v), the edge connects different components and is added to A, followed by UNION(u, v). Otherwise it would form a cycle and is skipped.
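A minimal Kruskal's sketch with a DSU using path compression and union by size (illustrative; the edge list is made up):

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm. edges: list of (w, u, v). Returns (total, chosen edges)."""
    parent = list(range(n))
    size = [1] * n

    def find(x):                        # FIND with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                    # UNION by size; False if a, b already connected
        ra, rb = find(a), find(b)
        if ra == rb:
            return False                # same component -> the edge would form a cycle
        if size[ra] < size[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        size[ra] += size[rb]
        return True

    total, chosen = 0, []
    for w, u, v in sorted(edges):       # cheapest edges first
        if union(u, v):
            total += w
            chosen.append((u, v, w))
    return total, chosen

edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 1, 3)]
print(kruskal_mst(4, edges))            # (6, [(0, 1, 1), (1, 2, 2), (2, 3, 3)])
```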
-
Q: Compare Prim’s and Kruskal’s algorithms. When might one be preferred over the other? A:
- Approach: Prim’s grows one tree. Kruskal’s merges components (forest).
- Complexity: Prim’s O(E + V log V) (Fibonacci heap) or O(E log V) (binary heap). Kruskal’s O(E log E) = O(E log V) (sorting + near-linear DSU).
- Preference: Prim’s for dense graphs (E ≈ V²). Kruskal’s for sparse graphs (E ≪ V²), when the edges are already sorted, or when a Minimum Spanning Forest of a disconnected graph is wanted.
V. Euler Tours (Experiment 5)
-
Q: What is an Euler Path? What is an Euler Circuit (or Tour)? A: An Euler Path is a trail visiting every edge exactly once. An Euler Circuit is an Euler path starting and ending at the same vertex.
-
Q: What are the necessary and sufficient conditions for an undirected graph to have an Euler Circuit? An Euler Path? A: For a connected graph G (ignoring isolated vertices):
- Euler Circuit: deg(v) is even for all v ∈ V.
- Euler Path: exactly 0 or 2 vertices have odd degree. If 0, it is a circuit. If 2, the path starts and ends at the two odd-degree vertices.
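A tiny sketch of the degree test (connectivity is assumed to have been checked separately):

```python
def euler_classification(adj):
    """'circuit', 'path', or 'neither' for a connected undirected graph,
    based only on the degree conditions above."""
    odd = sum(1 for neighbors in adj if len(neighbors) % 2 == 1)
    if odd == 0:
        return "circuit"
    if odd == 2:
        return "path"
    return "neither"

print(euler_classification([[1, 2], [0, 2], [0, 1]]))   # triangle   -> 'circuit'
print(euler_classification([[1], [0, 2], [1]]))         # path graph -> 'path'
```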
-
Q: Explain Hierholzer’s algorithm for finding an Euler Tour. Why does it work? A: Start at a vertex v0 and follow unused edges until returning to v0, forming a cycle C. If unused edges remain, find a vertex u on C with unused incident edges, start a new sub-tour from u, and splice it into C. Repeat until every edge is used. The iterative version uses a stack: push vertices during traversal and pop them to the circuit list when stuck. It works because even degrees (or exactly 2 odd) guarantee the walk can always continue until all edges are consumed.
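A minimal iterative (stack-based) Hierholzer sketch for an undirected graph that is assumed to have an Euler circuit:

```python
def hierholzer(adj, start=0):
    """Euler circuit as a vertex list; adj[u] is a list of neighbors of u.
    Assumes the graph is connected and every degree is even."""
    rem = [list(neigh) for neigh in adj]   # copies we can consume edge by edge
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        if rem[v]:                         # unused edge out of v: walk deeper
            u = rem[v].pop()
            rem[u].remove(v)               # remove the undirected edge on both sides
            stack.append(u)
        else:                              # stuck at v: back it out onto the circuit
            circuit.append(stack.pop())
    return circuit[::-1]

# Two triangles sharing vertex 0: 0-1-2-0 and 0-3-4-0
adj = [[1, 2, 3, 4], [0, 2], [0, 1], [0, 4], [0, 3]]
print(hierholzer(adj))                     # e.g. [0, 4, 3, 0, 2, 1, 0]
```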
VI. Hamiltonian Cycles (Experiment 6)
-
Q: What is a Hamiltonian Path? What is a Hamiltonian Cycle? A: A Hamiltonian Path visits every vertex exactly once. A Hamiltonian Cycle is a Hamiltonian path v1, …, vn where (vn, v1) is also an edge in E.
-
Q: Is finding Hamiltonian cycles easy or hard compared to Euler tours? Why? A: Much harder. Hamiltonian Cycle is NP-complete. No simple local conditions (like the even-degree test for Euler circuits) guarantee existence.
-
Q: Describe the backtracking approach used in Experiment 6 to find Hamiltonian cycles. A: Build a path v1, …, vk starting from a fixed v1. Try extending it with a vertex v adjacent to vk that is not already on the path, and recurse. If k = |V| and (vk, v1) ∈ E, a cycle is found. If stuck, backtrack: remove vk and try a different neighbor of v(k−1).
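A minimal backtracking sketch along those lines (the start is fixed at vertex 0; the example graph is a 4-cycle):

```python
def hamiltonian_cycle(adj):
    """Backtracking search; returns a closed vertex list or None. adj[v] = neighbor set."""
    n = len(adj)
    path = [0]                      # fix the starting vertex
    on_path = [False] * n
    on_path[0] = True

    def extend():
        if len(path) == n:
            return 0 in adj[path[-1]]       # can we close the cycle back to the start?
        for v in adj[path[-1]]:
            if not on_path[v]:
                path.append(v)
                on_path[v] = True
                if extend():
                    return True
                path.pop()                  # backtrack
                on_path[v] = False
        return False

    return path + [0] if extend() else None

print(hamiltonian_cycle([{1, 3}, {0, 2}, {1, 3}, {0, 2}]))   # e.g. [0, 1, 2, 3, 0]
```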
-
Q: Can you name any theorems that give sufficient (but not necessary) conditions for a Hamiltonian cycle? A:
- Dirac’s Theorem: If G has n ≥ 3 vertices and minimum degree δ(G) ≥ n/2, then G is Hamiltonian.
- Ore’s Theorem: If G has n ≥ 3 vertices and deg(u) + deg(v) ≥ n for every pair of non-adjacent vertices u, v, then G is Hamiltonian.
VII. Maximum Flow (Experiment 7, 8)
-
Q: Define: Flow Network, Source (s), Sink (t), Capacity (c), Flow (f). A:
- Flow Network: A directed graph G = (V, E) with edge capacities c(u, v) ≥ 0.
- Source (s): Vertex where flow originates, typically with in-degree 0 (or simply designated).
- Sink (t): Vertex where flow terminates, typically with out-degree 0 (or simply designated).
- Capacity c(u, v): Maximum flow allowed on edge (u, v).
- Flow f: A function on the edges satisfying:
- Capacity Constraint: 0 ≤ f(u, v) ≤ c(u, v) for all (u, v) ∈ E.
- Flow Conservation: for every v ∈ V \ {s, t}, the total flow into v equals the total flow out of v (Σ_u f(u, v) = Σ_w f(v, w)).
-
Q: What is the Maximum Flow problem? A: Find a feasible flow f maximizing the flow value |f|, the net flow out of s (equivalently, the net flow into t).
-
Q: Explain the concept of a Residual Graph (G_f). How is it used in Ford-Fulkerson? A: G_f represents the remaining capacity given the current flow f. For each edge (u, v) ∈ E it contains:
- A forward edge (u, v) with residual capacity c(u, v) − f(u, v) if f(u, v) < c(u, v).
- A backward edge (v, u) with residual capacity f(u, v) if f(u, v) > 0. Ford-Fulkerson repeatedly finds augmenting paths in G_f.
-
Q: What is an Augmenting Path in the context of max flow? A: A simple path p from s to t in the residual graph G_f. It indicates that the flow can be increased along p by the bottleneck (minimum residual) capacity c_f(p).
-
Q: Describe the Ford-Fulkerson method. Is it an algorithm itself? A: Method:
- Initialize f(u, v) = 0 for every edge.
- While an augmenting path p exists in G_f: a. compute the bottleneck c_f(p); b. augment the flow along p by c_f(p); c. update G_f.
- Return f. It is a method (a framework) rather than a single algorithm; the specific algorithm depends on how the augmenting path is chosen (e.g., BFS, DFS). A DFS-based sketch follows.
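A minimal sketch of the method with a DFS path search on a residual-capacity matrix (integer capacities assumed; the example network is made up):

```python
def ford_fulkerson(cap, s, t):
    """Ford-Fulkerson on a residual-capacity matrix cap (mutated in place).
    Returns the max-flow value; assumes integer capacities."""
    n = len(cap)

    def dfs_push(u, bottleneck, visited):
        if u == t:
            return bottleneck
        visited[u] = True
        for v in range(n):
            if not visited[v] and cap[u][v] > 0:
                pushed = dfs_push(v, min(bottleneck, cap[u][v]), visited)
                if pushed > 0:
                    cap[u][v] -= pushed    # consume forward residual capacity
                    cap[v][u] += pushed    # create backward residual capacity
                    return pushed
        return 0

    flow = 0
    while True:
        pushed = dfs_push(s, float("inf"), [False] * n)
        if pushed == 0:
            return flow                    # no augmenting path remains
        flow += pushed

cap = [[0, 20, 10, 0],
       [0, 0, 5, 10],
       [0, 0, 0, 20],
       [0, 0, 0, 0]]
print(ford_fulkerson(cap, 0, 3))           # max flow from 0 to 3 is 25
```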
-
Q: What is the Edmonds-Karp algorithm? How does it differ from the general Ford-Fulkerson method? What is its advantage? A: Edmonds-Karp is Ford-Fulkerson where the augmenting path is always chosen as the shortest path (fewest edges) in G_f, found using BFS.
- Difference: Specifies BFS for finding augmenting paths (see the BFS sketch below).
- Advantage: Guaranteed polynomial time complexity, O(V E²).
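Swapping the DFS above for a BFS that returns a shortest augmenting path turns the sketch into Edmonds-Karp; a possible BFS piece:

```python
from collections import deque

def bfs_augmenting_path(cap, s, t):
    """Shortest (fewest-edge) s-t path in the residual graph encoded by cap,
    returned as a vertex list, or None if t is unreachable."""
    n = len(cap)
    parent = [-1] * n
    parent[s] = s
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if parent[v] == -1 and cap[u][v] > 0:   # unvisited and residual capacity left
                parent[v] = u
                if v == t:                           # reconstruct the s..t path
                    path = [t]
                    while path[-1] != s:
                        path.append(parent[path[-1]])
                    return path[::-1]
                q.append(v)
    return None                                      # no augmenting path: flow is maximal
```

The Edmonds-Karp main loop would repeatedly call this, push the bottleneck along the returned path, and update cap, exactly as in the Ford-Fulkerson sketch above.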
-
Q: What is the Max-Flow Min-Cut Theorem? A: The maximum value of an s–t flow equals the minimum capacity of an s–t cut (S, T). The cut capacity is the sum of c(u, v) over edges with u ∈ S and v ∈ T.
-
Q: You implemented Ford-Fulkerson using DFS (Task 4, Exp 8). How might its performance compare to Edmonds-Karp (using BFS)? Could it be worse? A: Yes, DFS-based Ford-Fulkerson can be much worse. It does not guarantee shortest augmenting paths and may need on the order of |f*| augmentations (|f*| being the maximum flow value), which can be exponential in the input size for pathological capacity choices, unlike Edmonds-Karp’s O(V E²) bound.
VIII. Graph Coloring (Experiment 9, 10)
-
Q: What is Vertex Coloring? What is the Chromatic Number χ(G)? A: Vertex coloring assigns a color c(v) to each vertex such that c(u) ≠ c(v) whenever (u, v) ∈ E. The Chromatic Number χ(G) is the minimum number of colors needed.
-
Q: What is Edge Coloring? What is the Chromatic Index χ′(G)? A: Edge coloring assigns colors to edges such that edges sharing a common vertex receive different colors. The Chromatic Index χ′(G) is the minimum number of colors needed.
-
Q: Explain the Greedy Coloring algorithm for vertices. Is it guaranteed to find χ(G)? A: Process the vertices in some order v1, …, vn. Assign each vertex the smallest color not used by its already-colored neighbors. It is not guaranteed to be optimal (the result depends on the order), but it uses at most Δ + 1 colors, where Δ is the maximum degree.
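A minimal greedy coloring sketch (the 5-cycle example is made up):

```python
def greedy_coloring(adj, order=None):
    """Greedy vertex coloring in the given order (default 0..n-1).
    Returns color[v]; uses at most max_degree + 1 colors."""
    n = len(adj)
    order = range(n) if order is None else order
    color = [-1] * n
    for u in order:
        used = {color[v] for v in adj[u] if color[v] != -1}
        c = 0
        while c in used:              # smallest color not used by colored neighbors
            c += 1
        color[u] = c
    return color

# 5-cycle: needs 3 colors; greedy in natural order happens to use 3 here
print(greedy_coloring([[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]]))   # [0, 1, 0, 1, 2]
```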
-
Q: How does the backtracking algorithm find the optimal vertex coloring χ(G)? A: For a candidate number of colors k, it assigns colors to the vertices recursively. When a color c is assigned to a vertex, it checks the constraint against every adjacent, already-colored vertex. If valid, it recurses to the next vertex; if not, it backtracks and tries the next color. Increasing k until a valid coloring exists yields the minimum χ(G), in potentially exponential time.
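A minimal backtracking sketch: test k-colorability and increase k until it succeeds (illustrative only; exponential in the worst case):

```python
def colorable(adj, k):
    """Backtracking test: can the graph be vertex-colored with k colors?"""
    n = len(adj)
    color = [-1] * n

    def place(v):
        if v == n:
            return True
        for c in range(k):
            if all(color[u] != c for u in adj[v]):   # check already-colored neighbors
                color[v] = c
                if place(v + 1):
                    return True
                color[v] = -1                         # backtrack
        return False

    return place(0)

def chromatic_number(adj):
    k = 1
    while not colorable(adj, k):
        k += 1
    return k

print(chromatic_number([[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]]))   # 5-cycle -> 3
```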
-
Q: What does Vizing’s Theorem state about edge coloring χ′(G)? A: For any simple graph G, χ′(G) is either Δ(G) or Δ(G) + 1. Graphs with χ′(G) = Δ(G) are Class 1; graphs with χ′(G) = Δ(G) + 1 are Class 2.
-
Q: Explain the concept of Face Coloring for planar graphs (Experiment 10). A: Assign colors to the faces of a planar embedding of G such that adjacent faces (faces sharing a boundary edge) get different colors.
-
Q: How can you use the Dual Graph G* to solve the face coloring problem? A: Construct G*: one vertex for each face of G. An edge exists in G* between two face-vertices if the corresponding faces share an edge in G. Face coloring G is then equivalent to vertex coloring G*.
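A sketch of that reduction, assuming the faces of the embedding have already been computed and each face is given as the set of its boundary edges (the triangle example and the greedy coloring of G* are illustrative):

```python
from itertools import combinations

def face_coloring(face_edges):
    """Greedily color faces by vertex-coloring the dual graph G*.
    face_edges[i]: set of boundary edges of face i (each edge a frozenset of endpoints)."""
    n = len(face_edges)
    dual = [[] for _ in range(n)]                 # dual[i]: faces adjacent to face i
    for i, j in combinations(range(n), 2):
        if face_edges[i] & face_edges[j]:         # the faces share a boundary edge in G
            dual[i].append(j)
            dual[j].append(i)
    color = [-1] * n
    for i in range(n):                            # greedy vertex coloring of G*
        used = {color[j] for j in dual[i]}
        color[i] = next(c for c in range(n + 1) if c not in used)
    return color

# Triangle 0-1-2: its inner and outer faces share all three boundary edges.
e = lambda u, v: frozenset((u, v))
faces = [{e(0, 1), e(1, 2), e(2, 0)}, {e(0, 1), e(1, 2), e(2, 0)}]
print(face_coloring(faces))                       # [0, 1]
```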
-
Q: What is the famous theorem related to face coloring planar graphs? Did your algorithm likely rely on it directly? A: The Four Color Theorem: the faces of any planar graph (map) can be colored with at most 4 colors. My greedy algorithm on G* colors the faces but does not guarantee using only 4 colors; it applies the principle but does not prove or rely on the theorem.
IX. Maximum Weight Matching (Experiment 11)
-
Q: What is a Matching in a graph G? What is a Maximum Cardinality Matching? A: A matching is a subset M ⊆ E such that no two edges in M share a vertex. A Maximum Cardinality Matching has the largest possible |M|.
-
Q: What is a Bipartite Graph? A: A graph G = (V, E) where V can be partitioned into two disjoint sets X and Y such that every edge connects a vertex in X to a vertex in Y.
-
Q: What problem does the Hungarian Algorithm solve? A: It finds a maximum weight matching in a weighted bipartite graph (the assignment problem), i.e., a matching M maximizing Σ w(u, v) over (u, v) ∈ M.
-
Q: Can you briefly explain the high-level idea behind the Hungarian algorithm (Kuhn-Munkres)? What are concepts like ‘labels’ and ‘equality subgraph’? A: It maintains vertex labels l(u), l(v) with l(u) + l(v) ≥ w(u, v) for all edges (a feasible labeling). The equality subgraph G_l contains exactly the edges with l(u) + l(v) = w(u, v). Find an augmenting path in G_l; if none exists, update the labels using the minimum slack l(u) + l(v) − w(u, v) to bring new edges into G_l. Repeat until a maximum (perfect) matching is found in G_l.
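Not a from-scratch Kuhn-Munkres, but SciPy's assignment solver handles the same problem and is useful for checking a hand-rolled implementation (the weight matrix below is made up):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# w[i][j]: weight of matching left vertex i with right vertex j
w = np.array([[7, 5, 11],
              [5, 4, 1],
              [9, 3, 2]])

rows, cols = linear_sum_assignment(w, maximize=True)   # maximum-weight assignment
print(cols.tolist(), int(w[rows, cols].sum()))          # [2, 1, 0] 24  (left i matched to cols[i])
```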
-
Q: What is the time complexity of the Hungarian Algorithm? A: The standard implementation is O(n³), where n is the number of vertices per partition (n = |X| = |Y| for a balanced instance). A naive version is O(n⁴); variants using priority queues (Dijkstra-style shortest augmenting paths) can do better on sparse graphs. (Here n refers to |X|, the size of one partition, consistent with the typical analysis.)
X. Implementation & Practical Aspects
-
Q: When implementing Prim’s algorithm (Exp 2), what practical differences did you see between the adjacency-matrix scan and the priority-queue version? A: The matrix scan (Task 1) is O(V²). The priority-queue version (Task 2) is O(E log V) with a binary heap, which is much faster for sparse graphs (E ≪ V²).
-
Q: In Kruskal’s algorithm (Exp 3), why is the efficiency of the Disjoint Set Union (FIND, UNION) operations important? A: Kruskal’s performs O(E) FINDs and O(V) UNIONs. Efficient DSU (path compression, union by rank/size) makes this part nearly linear, O(E α(V)), where α is the inverse Ackermann function. The overall O(E log E) is then dominated by sorting.
-
Q: For user input tasks, what were the challenges in parsing graph input? A: Defining the input format, validating the input (are indices within [0, V)? are weights non-negative where required?), mapping symbolic names (‘a’) to integer indices (0), and choosing the internal representation (matrix vs list).
-
Q: Why is visualization useful in graph algorithms (Exp 3, 6, 9, 10)? Tools? A: Understanding structure/progress, debugging logic/representation, communication. Tools: Python (NetworkX, Matplotlib), Graphviz, Gephi.