# Networks Notes: RFC 768 – User Datagram Protocol

User Datagram Protocol

Characteristics:

• Uses IP as its underlying protocol.
• Does not guarantee reliable delivery of datagrams.
• Protocol mechanism is minimal.

Header fields:

• Source port (optional).
  • Contains the port of the sending process.
  • Any replies may be addressed to this port.
  • Contains a zero value if unused.
• Destination port.
  • Contains the port of the destination process.
• Length.
  • Contains the length (in octets, i.e. bytes) of the datagram being sent, including the header and the data.
  • Minimum value is eight because the header is eight octets long.
• Checksum.
  • The 16-bit one’s complement of the one’s complement sum of the information being sent.
  • Sums up the information in the IP header*, the UDP header and the data.
  • Pads the data with a zero octet (if necessary) to make a multiple of two octets (16 bits, remember?).
  • Has a value of all zeroes if not generated (e.g. for debugging purposes).
  • The same checksum procedure is also used in TCP.

* Or, to be more precise, the pseudo header which is assumed to be prefixed to the datagram.
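The checksum procedure above can be sketched in Python. This is only an illustrative implementation of the RFC 768 arithmetic, assuming IPv4 addresses and that the checksum field inside `udp_header` is zeroed before the call; the function and variable names are my own.

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """Sum 16-bit words with end-around carry (one's complement arithmetic)."""
    if len(data) % 2:
        data += b"\x00"  # pad with a zero octet to a multiple of two octets
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes,
                 udp_header: bytes, payload: bytes) -> int:
    """RFC 768 checksum over the pseudo header, the UDP header and the data.

    The IPv4 pseudo header is: source address, destination address,
    a zero octet, the protocol number (17 for UDP) and the UDP length.
    The checksum field inside udp_header must be zero when this is called.
    """
    udp_length = len(udp_header) + len(payload)
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, udp_length)
    total = ones_complement_sum(pseudo + udp_header + payload)
    checksum = ~total & 0xFFFF
    return checksum or 0xFFFF  # an all-zero result is transmitted as all ones
```

A receiver can verify a datagram by summing the pseudo header, header (checksum included this time) and data: a valid datagram sums to `0xFFFF`.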

A user interface designed for this protocol should allow:

• The creation of new receive ports.
• Operations on the receive ports that, for example, return the data octets and indicate the source port and address.
• An operation that allows a datagram to be sent, specifying the data to send and the destination port and address.
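The BSD socket API exposes essentially these operations. A minimal sketch in Python (the loopback address and OS-assigned port are my own choices for illustration):

```python
import socket

# Create a new receive port: bind a UDP socket to an OS-assigned port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(5)
port = rx.getsockname()[1]

# Send operation: specify the data and the destination address and port.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.settimeout(5)
tx.sendto(b"hello", ("127.0.0.1", port))

# Receive operation: returns the data octets and indicates the source
# address and port, so a reply can be addressed back to the sender.
data, source = rx.recvfrom(4096)
rx.sendto(b"reply: " + data, source)
reply, _ = tx.recvfrom(4096)
```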

IP Interface guidelines:

• UDP module should get the source and destination addresses and the protocol field from the IP header.
• One possible UDP/IP interface would:
  • Return the whole internet datagram (including the internet header) on a receive operation.
  • Pass the whole internet datagram (again, including the header) to IP on a send operation.
  • Let IP verify certain fields for consistency and compute the internet header checksum.

Possible applications of this protocol include:

• Internet Name Server
• Trivial File Transfer

When used in the Internet Protocol, the protocol number of UDP is 17 (21 in octal).

# The Best Course I’ve Taken Yet

CS210 – Discrete Mathematics. Hands down the best course I have ever taken. I will argue my case from three different perspectives.

Discrete Mathematics as a Mathematics course

a.k.a. Why Formal Math is good too, but Discrete Math is better.

I took both Discrete Mathematics and Introduction to Formal Mathematics in the same semester, and I found Discrete Mathematics to be better. Why? It covers almost everything one learns in a formal mathematics course, plus more exciting stuff! Formal Mathematics covered proof-writing, set theory, cardinality, equivalence relations, functions and (very, very) introductory group theory. Discrete Mathematics covered all of this except for the group theory part. However, it also covered introductory number theory, RSA encryption (we missed that part, though, because the last class was cancelled), principles of counting and (the best and most exciting part) graph theory.

Discrete Mathematics as a Computer Science course

a.k.a. Mathematics makes code fast and pretty.

Every Computer Scientist on the planet should take a course on discrete mathematics. It gives one the mathematical maturity needed for algorithm design, automata theory, data structures, computational complexity etc. On top of that, it helps one become a better programmer. It teaches one a way of thinking that enables one to write more efficient and, indeed, more beautiful code.

Discrete Mathematics as a Liberal Arts course

a.k.a. One can’t argue with a sound mathematical proof.

“Liberal arts and math? Lol, who is this nerd trying to kid?” I’m serious, though. To start with, Discrete Mathematics teaches logic. What’s better? It teaches quantified logic and proof-writing. This helps one make better arguments and (more importantly) accept only claims backed by rational arguments, which helps one understand the world better.

# Algorithms Notes – Chapter 10

Obligatory Computer Science Comic

Depth-First Search – Analysis (continued)

White Path Theorem

Theorem: Let $G$ be a directed or undirected graph on which a depth-first search is executed. Let $u$ and $v$ be two vertices in $G$. $v$ is a descendant of $u$ if and only if at the time $u.d$ when the algorithm discovers $u$, there is a path from $u$ to $v$ that consists entirely of white vertices.

Proof: Since this is an if-and-only-if theorem, we must prove both sides of the implication.

Proof of $\Rightarrow$: Suppose $v$ is a descendant of $u$ in the depth-first forest. By corollary 1 (from the last chapter), we know that $u.d < v.d$. So $v$ is not discovered before $u$ is discovered, hence $v$ is white at time $u.d$. Since $v$ can be any descendant of $u$, all vertices in the unique simple path from $u$ to $v$ in the depth-first forest must be white at time $u.d$. In the case that $v = u$, the path just contains the vertex $u$ which is still white when the value of $u.d$ is set by the algorithm.

Proof of $\Leftarrow$: Suppose, for contradiction, that there is a path containing only white vertices from $u$ to $v$ at time $u.d$ but $v$ does not become a descendant of $u$. Without loss of generality, let $v$ be the non-descendant vertex on the path closest to $u$, and let $w$ be its predecessor on the path. Since $v$ is the closest non-descendant to $u$, $w$ is a descendant of $u$ (possibly $u$ itself). By corollary 1 from the last chapter, we know that $w.f \le u.f$ (and $w.f = u.f$ if $u$ and $w$ are the same vertex). Since $v$ is in the adjacency list of $w$, it must be discovered after $u$ is discovered and before the adjacency list scan of $w$ is finished. Hence, it must be true that $u.d < v.d < w.f \le u.f$. The parenthesis theorem then implies that $[v.d, v.f]$ is contained entirely within the interval $[u.d, u.f]$. By corollary 1, this means that $v$ is a descendant of $u$ after all, a contradiction. $\square$

Note: A depth-first forest is often denoted by $G_{\pi}$ by convention, just like we denote a graph by $G$ and vertices by $u$ and $v$. Not necessary at all, just a convention.

(Comic: xkcd/410)

$G_{\pi}$ Edge Classification

We can use a depth-first forest to classify edges into four types:

1. Tree Edges – Let $v$ be a vertex discovered by a DFS. If $v$ was first discovered by exploring an edge $(u, v)$, then that edge is a tree edge (i.e. the tree edges are exactly the edges that are part of a depth-first forest $G_{\pi}$).
2. Back Edges – Let $u$ and $v$ be vertices in a depth-first tree such that $u$ is a descendant of $v$. An edge $(u, v)$ connecting $u$ to its ancestor $v$ is a back-edge. In a directed graph, a self-loop is also a back-edge.
3. Forward Edges – Non-tree edges that connect two vertices $u$ and $v$ such that $v$ is a descendant of $u$ in a depth-first tree.
4. Cross Edges – All edges that are not tree edges, back edges or forward edges (i.e. an edge that falls in none of the above three categories is a cross edge).

Let $(u, v)$ be an edge explored by the depth-first search algorithm. The color of the vertex $v$ indicates what type of edge $(u, v)$ is:

1. White – tree edge
2. Gray – back edge
3. Black – forward or cross edge
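This classification can be sketched in Python with a recursive DFS over an adjacency-dict representation (the function and variable names are my own; black edges are separated into forward and cross by comparing discovery times):

```python
WHITE, GRAY, BLACK = "white", "gray", "black"

def classify_edges(adj):
    """DFS over a directed graph given as {vertex: [neighbours]}.

    When the search explores an edge (u, v), the colour of v at that
    moment determines the edge type: white -> tree, gray -> back,
    black -> forward (if v was discovered after u) or cross.
    """
    color = {u: WHITE for u in adj}
    d = {}          # discovery times
    time = 0
    types = {}

    def visit(u):
        nonlocal time
        time += 1
        d[u] = time
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == WHITE:
                types[(u, v)] = "tree"
                visit(v)
            elif color[v] == GRAY:
                types[(u, v)] = "back"
            else:  # BLACK
                types[(u, v)] = "forward" if d[u] < d[v] else "cross"
        color[u] = BLACK

    for u in adj:
        if color[u] == WHITE:
            visit(u)
    return types
```

For example, on the graph with edges u→v, u→x, x→v, v→y, y→x, w→y, w→z and the self-loop z→z, a DFS starting from u classifies (u, v) as a tree edge, (x, v) and (z, z) as back edges, (u, x) as a forward edge and (w, y) as a cross edge.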

Note: I do not understand the proof of this relation between vertex color and type of edge properly myself. The book doesn’t deal with it very formally, in my opinion. Anyway, onto the next theorem.

Theorem 2: Let $G$ be any undirected graph. In a depth-first search of $G$, every edge of $G$ is either a tree edge or a back edge.

Proof: Let $(u, v)$ be any edge in $G$. Suppose, without loss of generality, that $u.d < v.d$. Then before the algorithm finishes $u$, it must discover and finish $v$ while the color of $u$ remains gray, since $v$ is in the adjacency list of $u$. If the algorithm explores the edge $(u, v)$ for the first time in the direction from $u$ to $v$, then $v$ has not been discovered before that time. This is true because if it had been discovered previously, the algorithm would already have explored the edge $(u, v)$ in the direction from $v$ to $u$.

Since, at the time $(u, v)$ is first explored, $v$ is undiscovered, its color is white. Hence $(u, v)$ becomes a tree edge. If the algorithm explores $(u, v)$ first in the direction from $v$ to $u$ instead then, since the color of $u$ is gray, the edge $(u, v)$ becomes a back edge. $\square$

# Algorithms Notes – Chapter 1

What is an algorithm? The textbook Introduction to Algorithms, 3rd Edition by Cormen et al. defines an algorithm thus:

> an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.

Wikipedia defines algorithms in the following way:

> In mathematics and computer science, an algorithm is a self-contained step-by-step set of operations to be performed. Algorithms exist that perform calculation, data processing, and automated reasoning.

I personally adhere to the definition on a couple of Ask.com billboards that was temporarily endorsed by XKCD. Repeat after me:

We did not invent the algorithm. The algorithm consistently finds Jesus. The algorithm killed Jeeves. The algorithm is banned in China. The algorithm is from Jersey. The algorithm constantly finds Jesus. This is not the algorithm. This is close.

Onto the actual stuff.

Loop Invariance

A property that satisfies the following conditions:

1. Initialization. It is true prior to the first iteration of the loop.
2. Maintenance. If it is true before an iteration of the loop, it must remain true before the next iteration.
3. Termination. When the loop terminates, it must give a useful property that helps prove the correctness of the algorithm.

Note: Loop invariance is kind of like mathematical induction. Initialization is similar to the basis step in a proof by induction, and maintenance is similar to the inductive step. One key difference is that in a proof by induction the inductive step holds for all values out to infinity, i.e. it does not terminate. In a loop invariant argument, on the other hand, there is termination after a finite number of steps.

Insertion Sort

Input: A sequence of $n$ numbers $\langle a_{1}, a_{2}, \ldots, a_{n} \rangle$.

Output: A permutation (reordering) $\langle a'_{1}, a'_{2}, \ldots, a'_{n} \rangle$ of the input sequence such that $a'_{1} \leq a'_{2} \leq \ldots \leq a'_{n}$.

Algorithm:


```
Insertion-Sort(A)
  for j = 2 to A.length
    key = A[j]
    i = j - 1
    while i > 0 and A[i] > key
      A[i + 1] = A[i]
      i = i - 1
    A[i + 1] = key
```
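The pseudocode translates almost line for line into Python; the only real change is 0-based indexing, so the outer loop starts at index 1. A sketch, not the book’s code:

```python
def insertion_sort(a):
    """Sort the list a in place, mirroring the pseudocode (0-indexed)."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        # Shift the elements of the sorted subarray a[0..j-1] that are
        # greater than key one position to the right.
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a
```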


Here’s a nice little GIF demonstrating how insertion sort works, courtesy of Wikipedia:

Loop Invariance for Insertion Sort:

At the start of each iteration of the for loop of lines 2-8, the subarray A[1 … j-1] consists of the elements originally in A[1 … j-1], but in sorted order.

Let’s see how this statement holds for the above mentioned conditions required for loop invariance.

Initialization – At the start of the first iteration, the variable $j = 2$; hence the subarray A[1 … j-1] evaluates to A[1]. Since this contains only one element (the first element), it is sorted (the trivial case: a single element is always sorted). It is also the original element. Hence the statement holds.

Maintenance – We won’t tackle this formally yet. Wait a while. You can see from the GIF that it is intuitively true, though: at every iteration, the subarray grows one element to the right, and since that element is sorted into its correct place without changing the value of any other element, the loop invariant statement about the subarray consisting of its original elements, but in sorted order, remains true.

Termination – Notice that the loop terminates when $j > A.length$. Since $A$ has $n$ elements, the value of $j$ will be $n+1$ at the time of termination of the loop. It follows that the subarray $A[1 \ldots j-1]$ will be $A[1 \ldots n]$, i.e. the entire array. By the loop invariant statement, $A$ contains its original elements, but in sorted order. Since this subarray at termination is the entire array, the entire array contains its original elements and is sorted at the end of an insertion sort. This proves the correctness of the insertion sort algorithm.

# Curiosity Rover’s Diary – Entry 2

SOL 3 – August 9, 2012

I took my first color pictures of Mars today using my two high-resolution color MASTCAMS. Look at them, aren’t they cool?

Let me tell you a bit about the cameras I have. I have two MASTCAMS, which are high-resolution and shoot in color. I have four NAVCAMS, which are grey-scale (two of them are backups), along with a bunch of other cameras used by scientists to analyse the structure and composition of the Martian surface. You can find more info about my cameras here.

SOL 16 – August 22, 2012

Today was a big day. I carried out my first drive. It was really fun to ride around on the rocky Martian surface. It’s kind of like driving in a desert. Pretty fun. Best part: no traffic! This is the path I took on my first trip. Pretty arbitrary, I know… but I had lots of fun. 🙂

Here are a few pictures of me during my trip. I took them using my NAVCAMS.

This is getting exciting. I have begun to feel a little lonely, but I try to keep myself busy. Science is fun!