# Evolution Notes – Variation under Domestication (1 of 3)

References used: The Origin of Species (first edition) by Charles Darwin. Chapter 1.

Cultivated plants and animals generally show more variety than species bred in nature. This is probably because domestic plants and animals are raised under conditions less uniform than the natural habitat of their ancestors. Variation in domestic animals and plants is a continuous and unceasing process. For example, wheat, which is a very old cultivated plant, continues to produce new varieties.

Claim: Conditions of life are not the direct and only cause of this variation

Evidence: Seedlings of the same fruit and offspring from the same litter can be considerably different from each other even when both the offspring and the parents are raised under exactly the same conditions. If the conditions of life were the only cause of variation in organisms, there would be no variation among the offspring, or, if there were, all the offspring would vary in the same manner. This can be used as evidence that the laws of reproduction, growth and inheritance have a greater impact on the characteristics of an organism than the conditions of life do directly.

Claim: Correlation of growth. If an organism is selectively bred for a certain characteristic, other parts of the structure of that organism are also modified.

Evidence: Cats with blue eyes are always observed to be deaf. Hairless dogs have imperfect teeth. Long limbs are always accompanied by an elongated head. White sheep and pigs are differently affected by vegetable poisons as compared to their dark-coloured counterparts.

Claim: As a general rule, every characteristic in an organism is inherited.

Evidence: Consider a very rare deviation, such as albinism, that appears in only one of several millions of individuals, in an individual exposed to the same conditions of life as the rest of the population. The same deviation is often also found in that individual’s offspring. By the laws of probability, one is compelled to attribute its reappearance in the offspring to the laws of inheritance.

# The Best Course I’ve Taken Yet

CS210 – Discrete Mathematics. Hands down the best course I have ever taken. I will argue my case from three different perspectives.

Discrete Mathematics as a Mathematics course

a.k.a Why Formal Math is good too, but Discrete Math is better.

I took both Discrete Mathematics and Introduction to Formal Mathematics in the same semester, and I found Discrete Mathematics to be better. Why? It covers almost everything one learns in a formal mathematics course, plus more exciting stuff! Formal mathematics covered proof-writing, set theory, cardinality, equivalence relations, functions and (very, very) introductory group theory. Discrete Mathematics covered all of this except for the group theory part. However, it also covered introductory number theory, RSA encryption (we missed that part, though, because the last class was cancelled), principles of counting and (the best and most exciting part) graph theory.

Discrete Mathematics as a Computer Science course

a.k.a Mathematics makes code fast and pretty.

Every Computer Scientist on the planet should take a course on discrete mathematics. It gives one the mathematical maturity needed for algorithm design, automata theory, data structures, computational complexity etc. On top of that, it helps one become a better programmer. It teaches one a way of thinking that enables one to write more efficient and, indeed, more beautiful code.

Discrete Mathematics as a Liberal Arts course

a.k.a One can’t argue with a sound mathematical proof

“Liberal arts and math? Lol, who is this nerd trying to kid.” I’m serious, though. To start with, Discrete Mathematics teaches logic. What’s better? It teaches quantified logic and proof writing. This helps one make better arguments and (more importantly) accept only claims backed with rational arguments, which helps one understand the world better.

# Algorithms Notes – Chapter 10

Obligatory Computer Science Comic

Depth-First Search – Analysis (continued)

White Path Theorem

Theorem: Let $G$ be a directed or undirected graph on which a depth-first search is executed. Let $u$ and $v$ be two vertices in $G$. $v$ is a descendant of $u$ if and only if at the time $u.d$ when the algorithm discovers $u$, there is a path from $u$ to $v$ that consists entirely of white vertices.

Proof: Since this is an if-and-only-if theorem, we must prove both sides of the implication.

Proof of $\Rightarrow$: Suppose $v$ is a descendant of $u$ in the depth-first forest. By corollary 1 (from the last chapter), we know that $u.d < v.d$. So $v$ is not discovered before $u$ is discovered, hence $v$ is white at time $u.d$. Since $v$ can be any descendant of $u$, all vertices in the unique simple path from $u$ to $v$ in the depth-first forest must be white at time $u.d$. In the case that $v = u$, the path just contains the vertex $u$ which is still white when the value of $u.d$ is set by the algorithm.

Proof of $\Leftarrow$: Suppose there is a path containing only white vertices from $u$ to $v$ at time $u.d$, but $v$ does not become a descendant of $u$. Without loss of generality, let $v$ be the closest such non-descendant vertex to $u$ on the path. Let $w$ be the predecessor of $v$ on the path. Since $v$ is the closest non-descendant vertex to $u$, $w$ is a descendant of $u$. By corollary 1 from the last chapter, we know that $w.f \le u.f$ (with $w.f = u.f$ if $u$ and $w$ are the same vertex). Since $v$ is adjacent to $w$ and still white at time $u.d$, it must be discovered after $u$ is discovered and no later than the time the adjacency list scan of $w$ is finished. Hence it must be true that $u.d < v.d < w.f \le u.f$. The parenthesis theorem then implies that $[v.d, v.f]$ is contained entirely within the interval $[u.d, u.f]$. By corollary 1, this means that $v$ is a descendant of $u$ after all, which contradicts our assumption. $\square$
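The theorem above can be checked empirically. The following is a sketch (not from the notes): it runs a timestamped DFS on a small made-up directed graph, then verifies for every pair of vertices that "descendant" (via the interval nesting of corollary 1) coincides with "white path at time $u.d$" (a path through vertices $w$ with $w.d \ge u.d$).

```python
def dfs(adj):
    """Timestamped DFS over all vertices; returns discovery/finish times."""
    d, f = {}, {}
    time = 0
    def visit(u):
        nonlocal time
        time += 1
        d[u] = time
        for v in adj[u]:
            if v not in d:          # v is still white
                visit(v)
        time += 1
        f[u] = time
    for u in adj:
        if u not in d:
            visit(u)
    return d, f

adj = {1: [2, 3], 2: [4], 3: [4], 4: [], 5: [1]}
d, f = dfs(adj)

def is_descendant(u, v):
    # Corollary 1: v is a descendant of u iff [v.d, v.f] nests in [u.d, u.f].
    return d[u] <= d[v] and f[v] <= f[u]

def white_path(u, v):
    # Is there a u -> v path through vertices still white at time u.d,
    # i.e. vertices w with w.d >= u.d?
    stack, seen = [u], {u}
    while stack:
        w = stack.pop()
        if w == v:
            return True
        for x in adj[w]:
            if x not in seen and d[x] >= d[u]:
                seen.add(x)
                stack.append(x)
    return False

for u in adj:
    for v in adj:
        assert is_descendant(u, v) == white_path(u, v)
```

The vertex 5 is deliberately visited last: it has a path to 1, but 1 is no longer white when 5 is discovered, so 1 is correctly not a descendant of 5.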

Note: A depth-first forest is often denoted by $G_{\pi}$ by convention, just as we denote a graph by $G$ and vertices by $u$ and $v$. Not necessary at all, just a convention.

xkcd/410

$G_{\pi}$ Edge Classification

We can use a depth-first forest to classify edges into four types:

1. Tree Edges – Let $v$ be a vertex discovered by a DFS. If $v$ was first discovered by exploring an edge $(u, v)$, then that edge is a tree edge (i.e the edges that are part of a depth-first forest $G_{\pi}$).
2. Back Edges – Let $u$ and $v$ be vertices in a depth-first tree such that $u$ is a descendant of $v$. An edge $(u, v)$ connecting $u$ to its ancestor $v$ is a back-edge. In a directed graph, a self-loop is also a back-edge.
3. Forward Edges – Non-tree edges that connect two vertices $u$ and $v$ such that $v$ is a descendant of $u$ in a depth-first tree.
4. Cross Edges – All edges that are not tree edges, back edges or forward edges (i.e an edge that is not in any of the above three categories is a cross edge).

Let $(u, v)$ be an edge explored by the depth-first search algorithm. The color of the vertex $v$ indicates the type of edge $(u, v)$ is:

1. White – tree edge
2. Gray – back edge
3. Black – forward or cross edge
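As a sketch of this classification rule (the graph and vertex labels are made up for illustration), the DFS below records the type of each edge the moment it is explored, using exactly the color test above, with the black case split by discovery time ($v$ discovered after $u$ means forward, otherwise cross):

```python
WHITE, GRAY, BLACK = 0, 1, 2

def classify_edges(adj):
    """DFS over a directed graph; classify each edge when first explored."""
    color = {u: WHITE for u in adj}
    d = {}
    time = 0
    kinds = {}
    def visit(u):
        nonlocal time
        time += 1
        d[u] = time
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == WHITE:
                kinds[(u, v)] = "tree"
                visit(v)
            elif color[v] == GRAY:
                kinds[(u, v)] = "back"
            else:  # BLACK: forward if v is a descendant of u, else cross
                kinds[(u, v)] = "forward" if d[u] < d[v] else "cross"
        color[u] = BLACK
    for u in adj:
        if color[u] == WHITE:
            visit(u)
    return kinds

adj = {1: [2], 2: [3], 3: [1], 4: [2, 5], 5: [5]}
print(classify_edges(adj))
```

On this graph, $(3, 1)$ closes a cycle and comes out as a back edge, $(4, 2)$ reaches into an already-finished tree and comes out as a cross edge, and the self-loop $(5, 5)$ is classified as a back edge.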

Note: I do not understand the proof of this relation between vertex color and type of edge properly myself. The book doesn’t deal with it very formally, in my opinion. Anyway, onto the next theorem.

Theorem 2: Let $G$ be any undirected graph. In a depth-first search of $G$, every edge of $G$ is either a tree edge or a back edge.

Proof: Let $(u, v)$ be any edge in $G$. Suppose, without loss of generality, that $u.d < v.d$. Then, before the algorithm finishes $u$, it must discover and finish $v$ while the color of $u$ remains gray, since $v$ is in the adjacency list of $u$. If the algorithm first explores the edge $(u, v)$ in the direction from $u$ to $v$, then the vertex $v$ has not been discovered before that time. This is true because if it had been discovered previously, the algorithm would have already explored the edge $(u, v)$ in the direction from $v$ to $u$.

Since, at the time $(u, v)$ is first explored, $v$ is undiscovered, its color is white. Hence $(u, v)$ becomes a tree edge. If the algorithm explores $(u, v)$ first in the direction from $v$ to $u$ instead then, since the color of $u$ is gray, the edge $(u, v)$ becomes a back edge. $\square$
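Theorem 2 is easy to test on an example. The sketch below (my own code, not from the book) runs a DFS on a small undirected graph, classifying each undirected edge once, at its first exploration, using a `frozenset` as an orientation-free edge key; Theorem 2 predicts the "forward/cross" branch never fires:

```python
WHITE, GRAY, BLACK = range(3)

def classify_undirected(adj):
    """Classify the edges of an undirected graph during a DFS."""
    color = {u: WHITE for u in adj}
    kinds = {}
    def visit(u):
        color[u] = GRAY
        for v in adj[u]:
            e = frozenset((u, v))     # classify each edge only once
            if e in kinds:
                continue
            if color[v] == WHITE:
                kinds[e] = "tree"
                visit(v)
            elif color[v] == GRAY:
                kinds[e] = "back"
            else:
                kinds[e] = "forward/cross"   # never happens, per Theorem 2
        color[u] = BLACK
    for u in adj:
        if color[u] == WHITE:
            visit(u)
    return kinds

adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
assert set(classify_undirected(adj).values()) <= {"tree", "back"}
```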

# Algorithms Notes – Chapter 9

xkcd/292

Depth-First Search – Analysis

Time Complexity

Let $G = (V, E)$ be the graph on which the Depth-First Search algorithm gets executed. Not taking into account the time taken by $DFS-VISIT$, the time taken by $DFS$ is essentially of the same order as the time taken by both the for loops in $DFS$. Since both for loops take time $\Theta(|V|)$, $DFS$ takes time $\Theta(|V|)$ if one excludes the calls to $DFS-VISIT$. Since $DFS-VISIT$ is called exactly once for each vertex in the graph $G$, and the for loop in $DFS-VISIT$ executes $|Adj[v]|$ times for any vertex $v$ on which it is run, the total time for running $DFS-VISIT$ on all vertices in $G$ is $\sum_{v \in V} |Adj[v]| = \Theta(|E|)$ (by the handshaking lemma). Hence the time complexity of the algorithm is $\Theta(|V| + |E|)$.

Parenthesis Theorem

Theorem: Let $G = (V, E)$ be the graph on which the Depth-First Search algorithm gets executed. Let $u$ and $v$ be any two vertices in $G$. Exactly one of the following three conditions must hold:

1. The intervals $[u.d, u.f]$ and $[v.d, v.f]$ are entirely disjoint and neither $u$ nor $v$ is a descendant of the other in the depth-first forest.
2. The interval $[u.d, u.f]$ is contained entirely in the interval $[v.d, v.f]$ and $u$ is a descendant of $v$ in a depth-first tree.
3. The interval $[v.d, v.f]$ is contained entirely in the interval $[u.d, u.f]$ and $v$ is a descendant of $u$ in a depth-first tree.

Proof: Proof by case analysis. We begin with the case when $u.d < v.d$. There can be two subcases.

Subcase 1: $v.d < u.f$. This means that $v$ was discovered while $u$ was gray and, hence, $v$ is a descendant of $u$. Since $v$ was discovered after $u$, its adjacency list scan has to be completed before the search returns to scanning the adjacency list of $u$. Hence it must be true that $v.f < u.f$. In other words, case 3 of the theorem is true in this subcase.

Subcase 2: $u.f < v.d$. Since, in a depth first search, $w.d < w.f$ for any vertex $w$ searched by the algorithm, it must be true that $u.d < u.f < v.d < v.f$. This means that the intervals $[u.d, u.f]$ and $[v.d, v.f]$ are disjoint. Since the intervals are disjoint, neither of the two vertices was discovered while the other was gray. In other words, neither vertex is a descendant of the other.

The proof for the case when $v.d < u.d$ is similar. $\square$

Corollary 1: Let $v$ and $u$ be any two vertices in a depth-first forest of a graph $G$. $v$ is a proper descendant of $u$ if and only if $u.d < v.d < v.f < u.f$.

Proof: Immediate from the parenthesis theorem. (Once again, don’t blame me for lack of explanation. Blame the textbook Introduction to Algorithms 3rd Edition by Cormen et al.).
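To make the parenthesis theorem concrete, here is a small self-contained check (my own sketch, with a made-up graph): compute discovery/finish times with a plain recursive DFS and assert, for every pair of distinct vertices, that exactly one of the three conditions holds.

```python
def dfs_times(adj):
    """Return discovery (d) and finish (f) times of a DFS over adj."""
    d, f = {}, {}
    time = 0
    def visit(u):
        nonlocal time
        time += 1
        d[u] = time
        for v in adj[u]:
            if v not in d:
                visit(v)
        time += 1
        f[u] = time
    for u in adj:
        if u not in d:
            visit(u)
    return d, f

adj = {"a": ["b", "c"], "b": ["d"], "c": [], "d": [], "e": ["a"]}
d, f = dfs_times(adj)

for u in adj:
    for v in adj:
        if u == v:
            continue
        disjoint = f[u] < d[v] or f[v] < d[u]
        u_in_v = d[v] < d[u] and f[u] < f[v]
        v_in_u = d[u] < d[v] and f[v] < f[u]
        # The parenthesis theorem: exactly one condition holds.
        assert disjoint + u_in_v + v_in_u == 1
```

Partially overlapping intervals (one starting inside the other but finishing outside) never occur, which is exactly why the name "parenthesis" fits: the intervals nest like balanced parentheses.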

# Algorithms Notes – Chapter 8

xkcd/761

Depth-First Search

One thing that is important to understand is that the Breadth-First Search algorithm generates a tree as it searches a graph (for a proof of this, refer to Lemma 22.6 in Chapter 22 of Introduction to Algorithms 3rd Edition by Cormen et al.). This is called a Breadth-First Tree. A Depth-First Search, on the other hand, does not generate a single tree. Instead, it generates several trees (called depth-first trees) and creates a depth-first forest.

Check this animation out to get a general understanding of how a depth-first search works.

Here is the pseudocode for the Depth-First Search algorithm:


DFS(G)
  for each vertex u ∈ G.V
    u.color = WHITE
    u.π = NIL
  time = 0
  for each vertex u ∈ G.V
    if u.color == WHITE
      DFS-VISIT(G, u)

DFS-VISIT(G, u)
  time = time + 1           // white vertex discovered
  u.d = time
  u.color = GRAY
  for each v ∈ G.Adj[u]     // explore edge (u, v)
    if v.color == WHITE
      v.π = u
      DFS-VISIT(G, v)
  u.color = BLACK           // u is finished. Mark it black.
  time = time + 1
  u.f = time


Notation

Most of the notation is very similar to that used in the Breadth-First Search pseudocode, with one key difference. Let $v$ be a vertex in $G$. The property $v.d$ does not store the distance from the source, unlike in BFS. Instead, it and the new property $v.f$ store time-stamps.

• $v.d$ stores the time when the vertex $v$ is discovered by the algorithm
• $v.f$ stores the time when the algorithm finishes scanning the adjacency list of $v$ and sets its color to black.
• The global variable $time$ is used for the above described time-stamping.
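The pseudocode above translates almost line for line into Python. The following is a sketch under the assumption that the graph is an adjacency dict mapping each vertex to a list of neighbours:

```python
WHITE, GRAY, BLACK = "white", "gray", "black"

def dfs(adj):
    """CLRS-style DFS: returns discovery times, finish times, parents."""
    color = {u: WHITE for u in adj}
    pi = {u: None for u in adj}
    d, f = {}, {}
    time = 0

    def visit(u):
        nonlocal time
        time += 1
        d[u] = time            # white vertex discovered
        color[u] = GRAY
        for v in adj[u]:       # explore edge (u, v)
            if color[v] == WHITE:
                pi[v] = u
                visit(v)
        color[u] = BLACK       # u is finished
        time += 1
        f[u] = time

    for u in adj:
        if color[u] == WHITE:
            visit(u)
    return d, f, pi

d, f, pi = dfs({1: [2, 3], 2: [3], 3: [], 4: [3]})
```

Here vertex 4 is unreachable from 1, so the outer loop starts a second depth-first tree at 4; the result is a depth-first forest, not a single tree.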

The following two GIFs provide a good comparison between a BFS and a DFS.

# Algorithms Notes – Chapter 7

xkcd/263

Correctness – Continued

Note: An easier way to understand the following lemma is to realize that it shows that at any given time, the queue $Q$ holds at most two distinct $d$ values.

Lemma 3: Suppose during the execution of the algorithm, the queue $Q$ contains the vertices $\langle v_{1}, v_{2}, ... , v_{r} \rangle$ where $v_{1}$ is the head of the queue and $v_{r}$ is the tail. Then $v_{r}.d \le v_{1}.d + 1$ and $v_{i}.d \le v_{i+1}.d$ for $i = 1, 2, ... , r-1$.

Proof: Proof by induction on queue operations.

Basis Step: Since the queue only contains $s$ initially, the lemma is true. This completes the basis step.

Inductive Step: We need to prove that the lemma remains true after both enqueueing and dequeueing a vertex.

• Dequeueing – If $v_{1}$ was the head of the queue, now $v_{2}$ will become the head (If $v_{1}$ was the only element and hence the queue is empty now, the lemma holds vacuously). By the inductive hypothesis, $v_{1}.d \le v_{2}.d$. Also by the inductive hypothesis, $v_{r}.d \le v_{1}.d + 1$. Now since $v_{r}.d \le v_{1}.d + 1$ and $v_{1}.d \le v_{2}.d$, it is evident that $v_{r}.d \le v_{2}.d + 1$. Hence the lemma remains true.
• Enqueueing – When a vertex $v$ is enqueued in line 18 of the algorithm, it becomes $v_{r+1}$. Since we have already dequeued the vertex $u$ whose adjacency list is being scanned, it must be true that $v_{1}.d \ge u.d$, where $v_{1}$ is the current head of the queue. Hence $v_{r+1}.d = u.d + 1 \le v_{1}.d + 1$. Since, by the inductive hypothesis, $v_{r}.d \le u.d + 1$ and we know that $v_{r+1}.d = u.d + 1$, it is true that $v_{r}.d \le v_{r+1}.d$. Hence the lemma remains true.

This completes the inductive step. $\square$
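Lemma 3 can be observed directly by instrumenting a BFS. The sketch below (my own code; the graph is made up) re-checks the invariant after every queue operation: the $d$ values in $Q$ are non-decreasing from head to tail and span at most two consecutive values.

```python
from collections import deque

def bfs_check(adj, s):
    """BFS that asserts Lemma 3's queue invariant at every step."""
    d = {s: 0}
    Q = deque([s])

    def check():
        vals = [d[v] for v in Q]
        assert vals == sorted(vals)           # non-decreasing in the queue
        assert not Q or vals[-1] <= vals[0] + 1   # at most two distinct values

    check()
    while Q:
        u = Q.popleft()
        check()
        for v in adj[u]:
            if v not in d:                    # v is white
                d[v] = d[u] + 1
                Q.append(v)
                check()
    return d

adj = {1: [2, 3], 2: [4], 3: [4, 5], 4: [6], 5: [6], 6: []}
bfs_check(adj, 1)
```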

Corollary 1: Let $v_{i}$ and $v_{j}$ be two vertices that are enqueued during the execution of the algorithm such that $v_{i}$ is enqueued before $v_{j}$. Then $v_{i}.d \le v_{j}.d$ at the time when $v_{j}$ is enqueued.

Proof: It is evident from lemma 3 (this looks more like an assertion than a proof, I know. Blame the textbook Introduction to Algorithms by Cormen et al. not me).

xkcd/257

We are, at last, ready to prove the correctness of the Breadth-First Search algorithm. *drumroll*

Theorem (Correctness of BFS):

Let $G = (V, E)$ be a directed or undirected graph. Suppose the algorithm is run on $G$ with the vertex $s$ as the source. Then

1. During its execution, the algorithm discovers every vertex $v$ in $G$ that can be reached from the source $s$ and, when the algorithm terminates, $v.d$ equals the shortest-path distance from $s$ to $v$, i.e $\delta(s, v)$, and
2. For any vertex $v$ other than the source vertex $s$ that is reachable from $s$, one of the shortest paths from $s$ to $v$ is a shortest path from $s$ to the parent of $v$: $v.\pi$, plus the edge that connects $v.\pi$ to $v$: $(v.\pi, v)$.

Proof of 1: Proof by contradiction. Suppose some vertex receives a $d$ value that does not equal its shortest-path distance, and among such vertices let $v$ be one with minimum $\delta(s, v)$, so that $v.d \not= \delta(s, v)$. Since, from line 7, $s.d = 0 = \delta(s, s)$, it can be seen that $v \not= s$. By lemma 2, we know that $v.d \ge \delta(s, v)$. Since $v.d \not= \delta(s, v)$, it must be true that $v.d > \delta(s, v)$. Moreover, $v$ must be reachable from $s$: if it were not, then $\delta(s, v) = \infty \ge v.d$, which contradicts the above statement.

Let the vertex $u$ be the predecessor of $v$ on a shortest path from $s$ to $v$. Then $\delta(s, v) = \delta(s, u) + 1 = u.d + 1$ (the last equality holds because of how $v$ was chosen). Since $v.d > \delta(s, v)$, we can infer the following inequality: $v.d > u.d + 1$. We shall call this inequality $P$.

When the vertex $u$ is dequeued by the algorithm in line 12, $v$ can have one of three colors: white, gray or black. We need to show that, in all three cases, we arrive at a contradiction to inequality $P$.

1. If $v$ is white, then line 16 sets the value of $v.d$ to $u.d + 1$, which is the shortest-path distance from $s$ to $v$. Hence $v.d = \delta(s, v)$, which contradicts $P$.
2. If $v$ is black, then it has already been dequeued, which means it was enqueued before $u$ was enqueued. By corollary 1, this implies that $v.d \le u.d$, which contradicts $P$.
3. If $v$ is gray, then it was discovered while scanning the adjacency list of some vertex $w$ which was dequeued before $u$. By corollary 1, this means that $w.d \le u.d$ and, since $w$ is the predecessor of $v$, that $v.d = w.d + 1$. Since $w.d \le u.d$, it is also true that $w.d + 1 \le u.d + 1$.
Since $v.d = w.d + 1$ and $\delta(s, v) = u.d + 1$, this means that $v.d \le \delta(s, v)$, which contradicts $P$.

Hence, at the termination of the algorithm, $v.d = \delta(s, v)$ for every vertex $v$ in $G$. Now suppose there is a vertex $v$ in $G$ that is reachable from $s$ but never discovered by the algorithm. Then $v.d = \infty$ and hence $v.d > \delta(s, v)$. But we already established that $v.d = \delta(s, v)$, which is a contradiction. Therefore all vertices that are reachable from $s$ are discovered by the algorithm. $\square$

Proof of 2: If $v.\pi = u$, then $v.d = u.d + 1$. Therefore we can obtain a shortest path from $s$ to $v$ by taking a shortest path from $s$ to $v.\pi$ and then moving along the edge $(v.\pi, v)$. $\square$

And voila, we have proved that the Breadth-First Search algorithm is correct. (whew, I’m exhausted. You’re welcome.)

# Algorithms Notes – Chapter 6

Analysis of Breadth-First Search

(Keep the pseudocode of the algorithm from the previous chapter somewhere handy)

Complexity

Let $G = (V, E)$ be the input graph for the algorithm.

Time Complexity: We know that the algorithm never sets the color of a vertex to white after initialization, hence the $if$ condition in line 14 ensures that every vertex is enqueued at most once. If a vertex is enqueued at most once, then it is dequeued at most once. Since the enqueue and dequeue operations take $\mathcal{O}(1)$ time, the total time required for queueing all the vertices is $\mathcal{O}(|V|)$. We also know that the algorithm scans the adjacency list of each vertex only after it is dequeued. Since a vertex is dequeued at most once, every adjacency list is scanned at most once. The sum of the lengths of all the adjacency lists is $\Theta(|E|)$ (see handshaking lemma). Hence the time for initialization is $\mathcal{O}(|V|)$ and the time for running the algorithm is $\mathcal{O}(|E|)$. Therefore the time complexity of the algorithm can be expressed as $\mathcal{O}(|V| + |E|)$.
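The counting argument behind $\mathcal{O}(|V| + |E|)$ can be confirmed by instrumenting a BFS. This sketch (my own code, on a made-up graph) counts enqueues and adjacency-list steps:

```python
from collections import deque

def bfs_counted(adj, s):
    """BFS that counts enqueues and adjacency-list entries scanned."""
    d = {s: 0}
    Q = deque([s])
    enqueues = 1
    edges_scanned = 0
    while Q:
        u = Q.popleft()
        for v in adj[u]:
            edges_scanned += 1
            if v not in d:           # v is white
                d[v] = d[u] + 1
                Q.append(v)
                enqueues += 1
    return enqueues, edges_scanned

adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
enq, scanned = bfs_counted(adj, 1)
assert enq == len(adj)                               # each vertex enqueued once
assert scanned == sum(len(a) for a in adj.values())  # every list scanned once
```

On this connected graph every vertex is enqueued exactly once and every adjacency list is scanned exactly once, giving the $\mathcal{O}(|V|)$ and $\mathcal{O}(|E|)$ terms respectively.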
Space Complexity: Since we are using adjacency lists to represent the graph, the space complexity of the algorithm is $\mathcal{O}(|V| + |E|)$.

Correctness

We shall prove the correctness of the Breadth-First Search algorithm by proving that it correctly calculates shortest-path distances.

Definition: The shortest-path distance $\delta(u, v)$ between any two vertices $u$ and $v$ in a graph is the minimum number of edges in any path from $u$ to $v$ in the graph. If no path from $u$ to $v$ exists, then we say that $\delta(u, v) = \infty$.

Lemma 1: Let $G = (V, E)$ be the input graph (it can be either directed or undirected, doesn’t matter) and let $s$ be any vertex in $G$. Then for any edge $(u, v)$ in $G$, it is true that $\delta(s, v) \le \delta(s, u) + 1$.

Proof: If a path exists from $s$ to $u$ then, since there is the edge $(u, v)$, there is also a path from $s$ to $v$. Since there is an edge between $u$ and $v$, the shortest path between $s$ and $v$ cannot be longer than one edge more than the shortest path between $s$ and $u$. If there is no path between $s$ and $u$, then $\delta(s, u) = \infty$, in which case the inequality obviously holds. $\square$

Obligatory Math Terminology Comic

Lemma 2: Let $G = (V, E)$ be the input graph (once again, it can be either directed or undirected) and let $s$ be any vertex in $G$. Let BFS be run on $G$ with $s$ as the source. Upon termination, for each vertex $v$ in $G$, it is true that $v.d \ge \delta(s, v)$.

Proof: Proof by induction on the number of enqueue operations, with the inductive hypothesis being: $v.d \ge \delta(s, v)$ for all $v \in V$.

Basis Step: When $Q$ contains only $s$, the inductive hypothesis is true since $s.d = 0 = \delta(s, s)$ and for any other vertex $v$, $v.d = \infty \ge \delta(s, v)$. This completes the basis step.

Inductive Step: Let $v$ be a white vertex that is discovered while scanning the adjacency list of a vertex $u$. From line 16, we know that $v.d$ is set to $u.d + 1$.
From the inductive hypothesis, we know that $u.d \ge \delta(s, u)$ and hence $u.d + 1 \ge \delta(s, u) + 1$. Since $v.d = u.d + 1$, this implies that $v.d \ge \delta(s, u) + 1$. By lemma 1, we know that $\delta(s, u) + 1 \ge \delta(s, v)$ and since $v.d \ge \delta(s, u) + 1$, it is evident that $v.d \ge \delta(s, v)$. Vertex $v$ is then enqueued and is never enqueued again, so $v.d$ never changes again and the hypothesis is maintained. This completes the inductive step. $\square$

I’ll continue the proof for correctness in the next chapter. Happy New Year, by the way.

# Algorithms Notes – Chapter 5

Obligatory Algorithm Comic

Graph Algorithms – Breadth First Search

The following GIF I got off of Wikipedia will give you a general idea of how a Breadth-First Search works:

Also, take a look at this animation.

Here is the pseudocode for the Breadth-First Search (BFS) algorithm:

BFS(G, s)
  for each vertex u ∈ G.V - {s}
    u.color = WHITE
    u.d = ∞
    u.π = NIL
  s.color = GRAY
  s.d = 0
  s.π = NIL
  Q = Ø
  ENQUEUE(Q, s)
  while Q ≠ Ø
    u = DEQUEUE(Q)
    for each v ∈ G.Adj[u]
      if v.color == WHITE
        v.color = GRAY
        v.d = u.d + 1
        v.π = u
        ENQUEUE(Q, v)
    u.color = BLACK

Notation

• $G$ – the graph
• $s$ – the source vertex
• $v.\pi$ – the predecessor (or parent) of vertex $v$
• $v.d$ – the distance between the vertex $v$ and the source $s$
• $v.color$ – the color of the vertex $v$. It is white if $v$ is undiscovered and gray if it has been discovered. A vertex is black once all vertices adjacent to it have also been discovered.
• $Q$ – a queue that contains all the gray vertices

A breadth-first search can correctly calculate the shortest path between two given vertices in a graph (assuming a path exists). We will prove this in the next chapter when we analyse the Breadth-First Search algorithm.

# Algorithms Notes – Chapter 4

xkcd/835

Graph Algorithms

Over the next few chapters, we will learn and analyse the following two graph algorithms:

• Breadth First Search (BFS)
• Depth First Search (DFS)

Representation of Graphs

In computer science, it is common to represent graphs in one of two ways:

1. Adjacency List
2. Adjacency Matrix

An adjacency list representation of a graph $G = (V, E)$ is written as $G.Adj$. It is an array in which each index holds the list of vertices adjacent to the vertex at that index. In other words, let $u$ be a vertex of $G$. $G.Adj[u]$ then consists of all the vertices that are adjacent to $u$ in $G$.

An adjacency matrix is a matrix defined in the following way. Let $1, 2, ... , |V|$ be any arbitrary ordering of the vertices in $G$. The element $a_{ij}$ of the adjacency matrix that represents $G$ has a value of $1$ if there exists an edge between the vertices $i$ and $j$ (i.e $(i, j) \in E$) and a value of $0$ otherwise.

The following figure shows both the adjacency list and adjacency matrix for a graph:

# Algorithms Notes – Chapter 3

xkcd/399

References used for this section: Elementary Data Structures

Sets form the basis of both Mathematics and Computer Science. In Mathematics, we usually deal with static (or frozen) sets. In other words, sets that don’t change much after their definition. In computer science, it is also common to deal with dynamic sets. A dynamic set is a set which allows the insertion and deletion of elements.

Just like static sets have operations like union, intersection etc., dynamic sets have additional operations. The operations on a dynamic set are divided into two categories:

1. Queries – only return information about the set.
2. Modifying Operations – change the set.

Here are some common dynamic set operations ($S$ = the set, $k$ = a key value, $x$ = a pointer to an element in the set):

Queries

Search(S, k): returns a pointer $x$ to an element in $S$ such that $x.key = k$, or $NIL$ if $S$ contains no such element.

Minimum(S): Let $S$ be a totally ordered set. This returns a pointer to the element in $S$ with the smallest key.

Maximum(S): Let $S$ be a totally ordered set. This returns a pointer to the element in $S$ with the largest key.

Successor(S, x): Let $S$ be a totally ordered set. This returns a pointer to the element that is the next largest to the one pointed to by $x$, or $NIL$ if $x$ points to the largest element.

Predecessor(S, x): Let $S$ be a totally ordered set. This returns a pointer to the element that is the next smallest to the one pointed to by $x$, or $NIL$ if $x$ points to the smallest element.

Dynamic Operations

Insert(S, x): Adds the element pointed to by $x$ to the set $S$, assuming the element already has the attributes necessary to be a part of $S$.

Delete(S, x): Removes the element pointed to by $x$ from the set $S$.
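As a sketch of these operations together (not CLRS code; the sorted-list representation is my own choice for brevity), here is a toy totally ordered dynamic set supporting all of the queries and dynamic operations above:

```python
import bisect

class DynamicSet:
    """A toy dynamic set over a sorted list of keys."""
    def __init__(self):
        self.keys = []                 # kept sorted at all times

    def insert(self, k):
        bisect.insort(self.keys, k)

    def delete(self, k):
        self.keys.remove(k)

    def search(self, k):
        i = bisect.bisect_left(self.keys, k)
        return k if i < len(self.keys) and self.keys[i] == k else None

    def minimum(self):
        return self.keys[0] if self.keys else None

    def maximum(self):
        return self.keys[-1] if self.keys else None

    def successor(self, k):
        i = bisect.bisect_right(self.keys, k)
        return self.keys[i] if i < len(self.keys) else None

    def predecessor(self, k):
        i = bisect.bisect_left(self.keys, k)
        return self.keys[i - 1] if i > 0 else None

S = DynamicSet()
for k in [5, 1, 9, 3]:
    S.insert(k)
assert (S.minimum(), S.maximum()) == (1, 9)
assert S.successor(3) == 5 and S.predecessor(3) == 1
S.delete(5)
assert S.successor(3) == 9
```

A sorted list makes the queries cheap but insertion and deletion linear; balanced search trees (covered later in CLRS) support all of these operations in logarithmic time.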

Two Important Dynamic Sets

Stack: A stack has a prespecified delete operation. It implements what is called the last-in first-out (or LIFO for short) policy. The first element that gets deleted is the last element that was inserted into the stack.

Queue: A queue also has a prespecified delete operation. It implements the first-in first-out (or FIFO for short) policy. The first element that gets deleted is the first element that was inserted into a queue.

Implementing a Stack

Here’s a figure I got from wikipedia that makes it easy to understand how a stack works:

A stack of at most $n$ elements can be implemented using the following:

• An array $S[1 ... n]$.
• An attribute of the array, $S.top$ which stores the index of the most recently inserted element.

Stack Operations:
To implement a stack, you need the following three operations:

A query to check if the stack is empty.

STACK-EMPTY(S)
  if S.top == 0
    return TRUE
  else return FALSE


The insert operation, which in this case is called push. A dynamic operation.

PUSH(S, x)
  S.top = S.top + 1
  S[S.top] = x


The delete operation, which in this case is called pop. A dynamic operation.

POP(S)
  if STACK-EMPTY(S)
    error "underflow"
  else
    S.top = S.top - 1
    return S[S.top + 1]


Notes:

• When $S.top = 0$, the stack is empty.
• The stack underflows when we try to pop an empty stack
• The stack overflows if $S.top$ exceeds the value of $n$.
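The three pseudocode operations and the notes above translate into a short Python class. This is a sketch: a fixed-capacity list stands in for the array $S[1 ... n]$, using 0-based indices, so `top` counts elements rather than indexing the last one.

```python
class Stack:
    """Array-based stack with explicit overflow/underflow checks."""
    def __init__(self, n):
        self.S = [None] * n
        self.top = 0                    # top == 0 means the stack is empty

    def stack_empty(self):
        return self.top == 0

    def push(self, x):
        if self.top == len(self.S):
            raise OverflowError("overflow")
        self.S[self.top] = x            # 0-based: S[top] is the next free slot
        self.top += 1

    def pop(self):
        if self.stack_empty():
            raise IndexError("underflow")
        self.top -= 1
        return self.S[self.top]

st = Stack(3)
st.push(1)
st.push(2)
assert st.pop() == 2                    # LIFO: last in, first out
assert not st.stack_empty()
```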

Implementing a Queue

Here’s a figure I got from wikipedia that makes it easy to understand how a queue works:

A queue of at most $n - 1$ elements can be implemented using the following:

• An array $Q[1 ... n]$.
• An attribute of the array $Q.head$ that indexes the head of the queue.
• An attribute of the array $Q.tail$ that indexes the tail of the queue.

Queue Operations:

To implement a queue, you need the following two dynamic operations.

The insert operation, called enqueue in this case.


ENQUEUE(Q, x)
  Q[Q.tail] = x
  if Q.tail == Q.length
    Q.tail = 1
  else Q.tail = Q.tail + 1


and the delete operation, called dequeue.


DEQUEUE(Q)
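The ENQUEUE pseudocode above wraps the tail index around the end of the array, which is why the queue holds at most $n - 1$ elements. As a sketch, here is a Python circular-buffer queue using the same idea (an assumption on my part: the matching DEQUEUE wraps the head index in the same way):

```python
class Queue:
    """Circular-buffer queue of capacity n - 1 over an array of size n."""
    def __init__(self, n):
        self.Q = [None] * n
        self.head = 0
        self.tail = 0                   # head == tail means the queue is empty

    def enqueue(self, x):
        if (self.tail + 1) % len(self.Q) == self.head:
            raise OverflowError("overflow")
        self.Q[self.tail] = x
        self.tail = (self.tail + 1) % len(self.Q)   # wrap around

    def dequeue(self):
        if self.head == self.tail:
            raise IndexError("underflow")
        x = self.Q[self.head]
        self.head = (self.head + 1) % len(self.Q)   # wrap around
        return x

q = Queue(4)                            # holds at most 3 elements
for i in range(3):
    q.enqueue(i)
assert q.dequeue() == 0                 # FIFO: first in, first out
q.enqueue(3)                            # tail wraps past the end of the array
assert [q.dequeue() for _ in range(3)] == [1, 2, 3]
```

Keeping one slot unused (hence capacity $n - 1$) is what lets `head == tail` unambiguously mean "empty" rather than "full".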