# Networks Notes: RFC 768 – User Datagram Protocol

User Datagram Protocol

Characteristics:

• Uses IP as its underlying protocol.
• Does not guarantee reliable delivery of data streams.
• Protocol mechanism is minimal.

Header fields:

• Source port. (optional)
  • Contains the port of the sending process.
  • Any replies may be addressed to this port.
  • Contains a zero value if unused.
• Destination port.
  • Contains the port at the destination address.
• Length.
  • Contains the length (in octets, i.e. bytes) of the datagram being sent, including both the header and the data.
  • Minimum value is eight because the header is eight bytes.
• Checksum.
  • 16-bit one’s complement of the one’s complement sum of the information being sent.
  • Sums the information in the IP header*, the UDP header and the data.
  • Pads the data with a zero octet, if needed, to make its length a multiple of 2 octets (16 bits, remember?).
  • Has a value of all zeroes if not generated (e.g. for debugging purposes).
  • The same checksum procedure is also used in TCP.

* or, to be more precise, the pseudo-header which is assumed to be prefixed to the datagram.
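The checksum procedure described above can be sketched in code. This is my own illustrative sketch, not code from the RFC; the function name is made up. It takes the bytes of the pseudo-header, UDP header (with the checksum field zeroed), and data, and computes the 16-bit one’s complement of their one’s complement sum:

```cpp
#include <cstdint>
#include <vector>

// One's-complement checksum over 16-bit big-endian words, as used by UDP
// (and TCP). An odd-length input is implicitly padded with a zero octet.
uint16_t ones_complement_checksum(const std::vector<uint8_t>& bytes)
{
    uint32_t sum = 0;
    for (std::size_t i = 0; i < bytes.size(); i += 2) {
        uint16_t word = static_cast<uint16_t>(bytes[i]) << 8;
        if (i + 1 < bytes.size())
            word |= bytes[i + 1];           // low octet; zero pad if missing
        sum += word;
        sum = (sum & 0xFFFF) + (sum >> 16); // fold the carry back in
    }
    return static_cast<uint16_t>(~sum);     // one's complement of the sum
}
```

Note the carry folding: in one’s complement arithmetic, any overflow out of the top bit is added back into the low bits, which is what makes the sum byte-order independent in practice.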

A user-interface designed for this protocol should allow:

• The creation of new receive ports
• Receive operations on those ports that return the data octets and indicate the source port and address.
• An operation allowing a datagram to be sent, specifying the data and the source and destination ports and addresses.

IP Interface guidelines:

• UDP module should get the source and destination addresses and the protocol field from the IP header.
• One possible UDP/IP interface can:
• return the whole internet datagram (including the internet header) on a receive operation.
• pass the whole internet datagram (again, including the header) to IP on a send operation.
• let IP verify certain fields for consistency and compute the internet header checksum.

Possible applications of this protocol include:

• Internet Name Server
• Trivial File Transfer

When used in the Internet Protocol, the protocol number of UDP is 17 (21 in octal).

# Project Twain

To hype-up my vocabulary for the GRE and gain lots of information on the way, I am going to read articles regularly and list them here with my comments.

#1 “Waiting for Gödel” by Siobhan Roberts in The New Yorker. June 29, 2016. ( link )

My comments: A very informative article about Gödel’s theorems and mathematics in general, centered on Roberts’s experience attending a crash course on Gödel’s Incompleteness Theorems at the Brooklyn Institute for Social Research, where she encountered people as diverse as “a computer scientist obsessed with recursion…, a public-health nutritionist with a fondness for her “Philoslothical” T-shirt; a philosopher in the tradition of American pragmatism; an ad man versed in the classics; and a private-school teacher who’d spent a lonely, life-changing winter reading Douglas Hofstadter’s “Gödel, Escher, Bach.”” A very pleasant read.

#2 “Killing the Computer to Save It” by John Markoff in The New York Times. October 29, 2012. ( link )

My comments: This article discusses the attempt of a famous computer scientist and network-security specialist, Peter G. Neumann, to redesign computer systems in order to prevent the security flaws the modern-day internet is plagued with. With nostalgic descriptions of SRI, the early days of computers and the Internet, and a meeting between Dr. Neumann and Albert Einstein that was to influence Dr. Neumann’s life and work greatly, the article is very vivid and engaging.

#3 “Soviet Computer Push” in The New York Times. January 5, 1988. ( link )

My comments: Not a particularly good article. Monotonously reports a 1985 Soviet attempt to accelerate the development of its computer industry in order to compete with the West.

#4 “Ada Lovelace, The First Tech Visionary” by Betsy Morais in The New Yorker. October 15, 2013. ( link )

My comments: This article provides a glimpse into the life of Ada Lovelace using various sources. It discusses her contributions to the “Analytical Engine” introduced to her by its inventor, Charles Babbage, which was later to be recognized as the world’s first designed computer. It describes how Ada was the first person to recognize the true potential of the Analytical Engine and write the world’s first computer program, which wove a sequence of Bernoulli numbers. Moreover, it informs us that Ada was the first person to predict the rise of a new science emergent from mathematics based on this design, which she called the “science of operations” and which is now called computer science. In parallel with all this, the article also discusses the discrimination faced by women in mathematics and computing and even the attempts at discrediting Ada in the twentieth century in order to establish male domination in the field. ““As people realized how important computer programming was, there was a greater backlash and an attempt to reclaim it as a male activity,” Aurora told me. “In order to keep that wealth and power in a man’s hands, there’s a backlash to try to redefine it as something a woman didn’t do, and shouldn’t do, and couldn’t do.”” The author concludes with a mention of the Ada programming language, named after the Countess of Lovelace, which “brought together a number of different programming languages. It’s fitting for Lovelace—a woman who rode horses and played the harp and studied poetry—to tie seemingly disparate elements together.”

#5 “Beautiful Code” by Zeke Turner in The New Yorker. March 30, 2015. ( link )

My comments: Some people might find this article beautiful. I see it as a narration of how three people tried to sell other people’s code and algorithms as art on fancy sandstone tablets. Wasn’t particularly informative or inspiring.

#6 “Tesla Slept Here” by Mark Singer in The New Yorker. January 14, 2008. ( link )

My comments: This article uses the setting of the hotel Nikola Tesla lived in for the last decade of his life as a starting point to discuss his life. It is unique because, unlike most Tesla articles, it has almost no details about his scientific contributions. Instead it focuses more on Tesla the person, and Tesla the occupant of rooms 3327 and 3328.

#7 “Slavoj Žižek on Brexit, the crisis of the Left, and the future of Europe” by Slavoj Zizek and Benjamin Ramm in Open Democracy. July 1, 2016.  ( link )  [ entry by Ali Mehdi Zaidi ]

Mehdi’s comments: This article has an interesting discussion on the feasibility of popular democracy and the various forms of governance which can be instituted to bring about radical change.

My comments: Very illuminating. Zizek is not afraid to make assertions he can back up with solid reasoning, even if it is criticism of the Left and democracy. ““Direct democracy is the last Leftist myth”, Žižek tells me. “When there is a genuine democratic moment – when you really have to decide – it’s because there is a crisis”.” I am not sure what his exact opinion on the EU is, but he seems somewhat supportive, with a few suggested changes in policy. Analyzes the Brexit fuck-up really well.

#8 “Claude Shannon, the Father of the Information Age, Turns 1100100” by Siobhan Roberts in The New Yorker. April 30, 2016. ( link )

My comments: This article gives a heartening account of the mathematician Claude Shannon, who is aptly called the father of the Information Age. It discusses some of Shannon’s greatest contributions: laying the foundation of Information Theory, and establishing the connection between Boolean Algebra and circuit design in his Master’s thesis, which completely changed the field of circuit design. As the article quotes, there will probably be an entry on Shannon in the 166th edition of Encyclopedia Galactica, something like “Claude Shannon: Born on the planet Earth (Sol III) in the year 1916 A.D. Generally regarded as the father of the information age, he formulated the notion of channel capacity in 1948 A.D. Within several decades, mathematicians and engineers had devised practical ways to communicate reliably at data rates within one per cent of the Shannon limit.”

#9 “The Saint and the Skyscraper” by Mohammed Hanif in The New York Times. June 15, 2016. ( link )

My comments: A very well-written article informing us about the shrine of Abdullah Shah Ghazi in Karachi and what it means to the poor people there. Having been threatened by religious extremists and having suffered a suicide blast in 2010, the shrine now faces a new enemy: corporate real-estate investors. Bahria Icon Tower has completely surrounded the shrine with concrete walls, making it all but impossible for its visitors, the poor, to reach Ghazi’s shrine, shielding it from the very people who need it. The article moves on to criticize the general trend in Karachi of development by the rich, exclusively for the rich, and how it harms and sidelines the poor. “This is the development model Karachi has followed. There are signal-free corridors for car owners, but hardly any footpaths for the millions who walk to work. There are air-conditioned shopping malls for affluent consumers, but the police hound street vendors claiming they’re a threat to public order…But then who needs the sea, or a saint to protect us against it, when we can have infinity pools in the sky?”

#10 “I Worry About Muslims” by Mohammed Hanif in The New York Times. December 17, 2015. ( link )

My comments: This article discusses the death of Sabeen Mahmud. The writer, Mohammed Hanif, discusses the hypocrisy of both her Muslim killers and the moderates who take it upon themselves to defend the reputation of Islam. Very insightful read. It ends with: “The most poetic bit Muslim pundits tell the world is that Islam says if you murder one human being you murder the whole human race. So how come Sabeen Mahmud is gone and the whole bloody human race, including her killers, is still alive?”

#11 “Leopold Weiss, The Jew Who Helped Invent the Modern Islamic State” by Shalom Goldman in Tablet. July 1, 2016. ( link ) [entry by Ali Mehdi Zaidi]

My comments: Very interesting story, especially considering his diverse background and very globalized life.

#12 “How Nikola Tesla Predicted the Smartphone” by Celena Chong in TIME magazine. November 10, 2015. ( link )

My comments: This article depicts the genius of the Serbian scientist and inventor Nikola Tesla using yet another example: his accurate, almost prophetic prediction of the smartphone. Today, Tesla is widely regarded as the father of the electric age and a man far ahead of his time.

#13 “There is no difference between computer art and human art” by Oliver Roeder in Aeon. July 20, 2016. ( link )

My comments: This article challenges the distinction some commentators make between human and computer art, arguing that computer art is as human as “paint art or piano art”.

# FlowBender: Flow-level Adaptive Routing for Improved Latency and Throughput in Datacenter Networks by Kabbani, Vamanan, Hasan et al.

Problem: Load-balancing schemes hash a flow to a random path.

• Long flows can collide on the same path, causing high congestion.
• Other paths remain underutilized.

Proposed Solution: Flowbender.

• No excessive packet reordering.
• End-host driven rehashing. Assigns flows to paths dynamically.
• Recovers from link failures within Retransmit Timeout (RTO)
• Readily deployable in data-centers. (< 50 lines of kernel code)
• Robust. Easy to tune.

Top-Level Design Decisions of Flowbender:

• Uses flow-level load-balancing instead of packet-level load-balancing. A flow is shifted to a different path only once it is congested or disconnected. This avoids:
• Excessive out-of-order packet delivery
• Hardware changes to existing datacenter infrastructure
• ECN etc. already demonstrated to promptly propagate congestion information.
• Using ECN => targeting long flows taking several roundtrips to finish but these flows carry most of the traffic anyway.
• Transport-layer algorithms (like TCP) can quickly detect end-to-end link-failures. Flowbender can hence promptly avoid broken paths.

Gist of the Algorithm: Transmit a flow on a single path. Reroute that individual flow when it is congested.

• How do we know a flow is congested?
• How is rerouting done?

Sensing Congestion: (uses the same technique as DCTCP)

• The switch marks every packet that arrives when its queue size exceeds a threshold.
• Keep track of the fraction of ECN-marked acks in every RTT.

If congestion is larger than a threshold, we reroute the flow.

Rerouting Flows:

• ECMP engines in switches compute a hash-function based on packet-header fields.
• “Hacks” a packet-header field, the TTL, based on congestion to change the generated hash, which maps the flow to a different path. (Why TTL? Long story. Feel free to email me if you want to know.)
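The gist of the algorithm can be sketched as per-flow state at the end host. This is my own illustrative sketch, not the paper’s actual kernel patch; the names and the constant 8 are made up:

```cpp
// Hypothetical per-flow FlowBender-style state: track the fraction of
// ECN-marked ACKs in each RTT, and perturb the TTL value used for the
// flow's packets when congestion crosses a threshold, so the switches'
// ECMP hash maps the flow to a different path.
struct FlowBenderState {
    int acked = 0;       // ACKs seen this RTT
    int marked = 0;      // of those, how many carried an ECN congestion mark
    int ttl_offset = 0;  // perturbs the TTL field, and thus the ECMP hash

    void on_ack(bool ecn_marked) {
        ++acked;
        if (ecn_marked) ++marked;
    }

    // Called once per RTT: reroute if the marked fraction is too high.
    void end_of_rtt(double threshold) {
        double fraction = acked ? static_cast<double>(marked) / acked : 0.0;
        if (fraction > threshold)
            ttl_offset = (ttl_offset + 1) % 8;  // new hash input, new path
        acked = marked = 0;
    }
};
```

The point of keeping the decision per-RTT (rather than per-packet) is that the flow stays on one path between decisions, which is what avoids the excessive reordering mentioned above.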

Evaluation Details:

• Evaluated against existing schemes: ECMP, RPS and DeTail.
• Evaluation methods used:
• Simulation (simulated in ns-3, not lame old ns-2 with a stupid Tcl binding and incredibly fucked up C++ API from the prehistoric period)
• Test-bed implementation

# George Carlin on Issues

I may not completely agree with everything he says, but I love George Carlin. He’s spot-on, and he has a knack for making excellent arguments while being drop-dead hilarious at the same time, which is why he has had a lot of influence on my own opinions. For my own benefit as much as anyone else’s, here is a compilation of his comic take on various issues.

George Carlin on chickens (yay!):

George Carlin on what to do with the male chauvinists:

George Carlin on voting:

George Carlin on war:

George Carlin on questioning things:

George Carlin on abortion:

# Evolution Notes – Variation under Domestication (1 of 3)

References used: The Origin of Species (first edition) by Charles Darwin. Chapter 1.

Cultivated plants and animals generally show more variety as compared to species bred naturally. This is probably because domestic plants and animals are raised in conditions that are not as uniform as the natural habitat of their ancestors. Variation in domestic animals and plants is observed to be a continuous and unceasing process. For example, wheat, which is a very old cultivated plant, continues to produce new varieties.

Claim: Conditions of life are not the direct and only cause of this variation

Evidence: Seedlings of the same fruit and offspring from the same litter can be considerably different from each other even if both the offspring and the parents are raised under exactly the same conditions. If the conditions of life were the only cause of variation in organisms, there would have been no variation among the offspring, or, if there was variation, all the offspring should have varied in the same manner. This can be used as evidence that the laws of reproduction, growth and inheritance have a greater impact on the characteristics of an organism than do the conditions of life directly.

Claim: Correlation of growth. If an organism is selectively bred for a certain characteristic, other parts of the structure of that organism are also modified.

Evidence: Cats with blue eyes are always observed to be deaf. Hairless dogs have imperfect teeth. Long limbs are always accompanied by an elongated head. White sheep and pigs are differently affected by vegetable poisons as compared to their dark-coloured counterparts.

Claim: As a general rule, every characteristic in an organism is inherited.

Evidence: When a very rare deviation, such as albinism, which appears in only one of several millions of individuals, arises in an individual exposed to the same conditions of life as the rest of the population, the same deviation is often also found in that individual’s offspring. By the laws of probability, one is compelled to attribute its reappearance in the offspring to the laws of inheritance.

# Spring Break Project: Biology

Since I have this week off from university as spring break, I have been given an epic opportunity to learn something new. So I have decided to learn something about a field that has always interested me: Biology. Especially things like evolutionary biology, ecology and genetics. My own field of Computer Science has a lot of things that are based on ideas in Biology including genetic algorithms, neural networks and even the botany-like terminology used for trees, their sub- and super-structures. Not to mention the fact that the father of computer science, Alan Turing, was also a part-time Biologist.

So here is my plan. I plan to read Charles Darwin’s The Origin of Species, two chapters every day for seven days (since the edition I was able to find at my university’s library contains fourteen chapters and I have seven days off) starting today and post notes related to them here on my blog. Let’s hope I manage to complete this project.

# On Loneliness

An indigenous American farming.

I used to be mortally terrified of being alone. I have now learnt to embrace it like a long-lost brother. I understand the true power of this state. Being alone also means being independent. Independence leads to power and productivity. When you are not attached to anyone by symbolic strings of social convention, you are not held back by their mediocrity.

Now I want to be lonely, since only when I am alone am I truly free. A wanderer ardently experiencing the act of being alive and conscious on this planet and interestedly observing how others use up their lifetimes. I can understand the recurring patterns of nature and society without being encumbered by an unasked for and unsought membership of the latter.

I can get lost in the timeless world of mathematics and the arts and sciences emergent from it. I can express my true opinions on every issue and every matter without having to fake, censor or alter them for fear of ostracization. I can live a life that is not haunted by the ghosts of unnecessary emotional investment and unaffordable pseudo-familial or cultural expectation. I am free to explore the cascading branches borne by my ideas and traverse the deepest and truest thoughts produced in my mind.

It is in this state that I am most likely to reminisce about my childhood. It is in this condition that I am closest to comprehending the vast services my parents have granted me and the emotional foundation and mountains of love and affection my siblings have unreservedly made me the recipient of.

# Algorithm Notes – Classification of Algorithms

References used for this section: C

Algorithms usually have a primary parameter which we will call $N$. This is basically the input size (i.e. the number of data items to be processed) and can be anything like:

• The degree of a polynomial
• The size of a file on which sorting or searching needs to be done
• The number of nodes in a graph etc.

Algorithms can be classified according to the functions their running times are proportional to. Most common algorithms have running times proportional to one of the following functions (in descending order of efficiency):

1. Constant. All the instructions in the algorithm are executed once or at most a few times. (define ‘a few’ precisely?) These algorithms are the most efficient.
2. Logarithmic ( $\log N$ ). When $N$ doubles, the running time of these algorithms increases by a constant. When $N$ increases to $N^2$, the running time doubles.
3. Linear ( $N$ ). When $N$ doubles, the running time doubles also.
4. Log-Linear ( $N \log N$ ). When $N$ doubles, the running time more than doubles (but only slightly).
5. Quadratic ( $N^2$ ). When $N$ doubles, the running time quadruples.
6. Cubic ( $N^3$ ). When $N$ doubles, the running time increases eight times.
7. Exponential ( $2^N$ ). When $N$ doubles, the running time squares. These are hopelessly slow algorithms.

To summarise, the running time of common algorithms is usually an element of:

• $\mathcal{O}(c)$
• $\mathcal{O}(\log N)$
• $\mathcal{O}(N)$
• $\mathcal{O}(N \log N)$
• $\mathcal{O}(N^2)$
• $\mathcal{O}(N^3)$
• $\mathcal{O}(2^N)$.

# Algorithms Notes – Limitations of Simple Sorting

Def: Let $(i, j)$ be an ordered pair of indices of an array $A$ such that $i < j$ and $A[i] > A[j]$. $(i, j)$ is called an inversion in the array.

• Swapping two adjacent elements that are out of order removes exactly one inversion.
• A sorted array has zero inversions.

Note: Both the above assertions probably have good proofs. I can prove the second one by contradiction very prettily.

Technically, the running time of insertion sort is $\mathcal{O}(I + n)$ where $I$ is the number of inversions in the array. If the number of inversions is $\mathcal{O}(n)$, insertion sort will run in linear time.

Thm: The average number of inversions in an array of $n$ distinct numbers is $\frac{n(n-1)}{4}$.

Proof: Let $L$ be any list of numbers and $L_{r}$ be the list in reverse order. Consider any pair of two numbers $(x, y)$ with (without loss of generality) $y > x$ in $L$. It must be true that $(x, y)$ is an inversion in exactly one of $L$ and $L_{r}$. The total number of such pairs in any list $L$ and its reverse $L_{r}$ is $\frac{n(n-1)}{2}$. Hence, an average list has half this amount of inversions, i.e. $\frac{n(n-1)}{4}$ inversions. $\square$

By this theorem, it is evident that insertion sort runs in quadratic time on average.
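The definition of an inversion can be made concrete with a brute-force counter; a sketch of my own (the function name is illustrative):

```cpp
#include <cstddef>
#include <vector>

// Counts the pairs (i, j) with i < j and A[i] > A[j], i.e. the inversions.
std::size_t count_inversions(const std::vector<int>& A) {
    std::size_t inv = 0;
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = i + 1; j < A.size(); ++j)
            if (A[i] > A[j])
                ++inv;
    return inv;
}
```

For a list of distinct numbers and its reverse, the two counts always sum to $\frac{n(n-1)}{2}$, which is exactly the pairing argument used in the proof above.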

Thm: Any algorithm that sorts by exchanging adjacent elements requires $\Omega(n^2)$ time on average.

Proof: The average number of inversions is initially $\frac{n(n-1)}{4} = \Omega(n^2)$ as was established by the previous theorem. Since each swap removes at most one inversion, $\Omega(n^2)$ swaps are required to sort the array. $\square$

This lower-bound proof is valid for an entire class of sorting algorithms that perform only adjacent exchanges. This includes, for example,

• Insertion Sort
• Bubble Sort
• Selection Sort

This represents a limitation in ‘simple’ sorting algorithms and makes it clear that to achieve more efficiency we need to consider algorithms that do comparisons and exchanges between elements that are far apart. To run faster than quadratic time, an algorithm must eliminate more than one inversion per exchange.

# Algorithms Notes – Insertion Sort

Animation depicting insertion sort

The following is pseudocode for insertion sort:

for i = 1 to length(A) - 1
    x = A[i]
    j = i - 1
    while j >= 0 and A[j] > x
        A[j+1] = A[j]
        j = j - 1
    end while
    A[j+1] = x
end for

Here is an implementation in C++ from Weiss’s book:

template <class Etype>
void Insertion_Sort(Etype A[], const unsigned int N)
{
    // Elements are stored in A[1..N]; A[0] holds a sentinel
    // (Min_Val() is assumed to return a value smaller than any element).
    Etype Tmp;
    int j;

    A[0] = Min_Val();
    for (int P = 2; P <= N; P++)
    {
        Tmp = A[P];
        for (j = P; Tmp < A[j-1]; j--)
            A[j] = A[j-1];
        A[j] = Tmp;
    }
}
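Weiss’s version stores the data in A[1..N] and relies on a sentinel in A[0]. For comparison, here is a sentinel-free sketch of the same algorithm on a std::vector (my own adaptation, not from the book):

```cpp
#include <cstddef>
#include <vector>

// Insertion sort without a sentinel: the extra j > 0 test in the inner
// loop replaces the sentinel comparison.
void insertion_sort(std::vector<int>& a) {
    for (std::size_t p = 1; p < a.size(); ++p) {
        int tmp = a[p];
        std::size_t j = p;
        while (j > 0 && a[j - 1] > tmp) {  // shift larger elements right
            a[j] = a[j - 1];
            --j;
        }
        a[j] = tmp;                        // drop tmp into its position
    }
}
```

The trade-off is one extra comparison per inner-loop iteration in exchange for not reserving A[0] or needing a Min_Val().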

Analysis:

Since it has nested loops, each of which takes at most $n$ iterations, insertion sort is $\mathcal{O}(n^2)$. This is a tight bound (a reverse-ordered array as input achieves it): inserting the element at position $P$ costs at most $P$ steps, so the worst-case total is

$\sum_{P = 2}^{n} P = 2 + 3 + \cdots + n = \frac{n(n+1)}{2} - 1 = \Theta(n^2)$.