FlowBender: Flow-level Adaptive Routing for Improved Latency and Throughput in Datacenter Networks by Kabbani, Vamanan, Hasan et al.

Problem: Load-balancing schemes hash a flow to a random path.

  • Long flows collide on the same path, causing high congestion.
  • Other paths remain underutilized.

Proposed Solution: Flowbender.

  • No excessive packet reordering.
  • End-host driven rehashing. Assigns flows to paths dynamically.
  • Recovers from link failures within a Retransmission Timeout (RTO)
  • Readily deployable in data-centers. (< 50 lines of kernel code)
  • Robust. Easy to tune.


Top-Level Design Decisions of Flowbender:

  • Uses flow-level load-balancing instead of packet-level load-balancing. A flow is shifted to a different path only once it is congested or disconnected. This avoids:
    • Excessive out-of-order packet delivery
    • Hardware changes to existing datacenter infrastructure
  • Uses RTT-based (end-to-end) load-balancing instead of link-level load-balancing. This:
    • Avoids congestion spreading trees that link-level schemes can create.
    • ECN has already been demonstrated to promptly propagate congestion information.
    • Using ECN means targeting long flows that take several round-trips to finish, but these flows carry most of the traffic anyway.
    • Transport-layer algorithms (like TCP) can quickly detect end-to-end link-failures. Flowbender can hence promptly avoid broken paths.

Gist of the Algorithm: Transmit a flow on a single path. Reroute that individual flow when it is congested.

  • How do we know a flow is congested?
  • How is rerouting done?

Sensing Congestion: (uses the same technique as DCTCP)

  • The switch marks every packet that arrives while the queue size exceeds a threshold.
  • The sender keeps track of the fraction of ECN-marked ACKs in every RTT.

If this fraction is larger than a threshold, we reroute the flow.
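Below is a minimal sketch (not the paper's code) of how an end host might track the per-RTT fraction of ECN-marked ACKs and decide to reroute; the threshold value and all names are illustrative assumptions.

#include <cstdint>

// Per-flow congestion state, updated once per round-trip time.
struct FlowCongestionState {
    uint32_t acked_packets = 0;   // ACKs seen in the current RTT
    uint32_t marked_packets = 0;  // of those, how many carried ECN-Echo
};

// Called for every incoming ACK of the flow.
void on_ack(FlowCongestionState& s, bool ecn_echo) {
    s.acked_packets++;
    if (ecn_echo)
        s.marked_packets++;
}

// Called once per RTT; returns true if the flow should be rerouted.
// THRESHOLD is a hypothetical value, not the paper's tuned parameter.
bool end_of_rtt_should_reroute(FlowCongestionState& s) {
    constexpr double THRESHOLD = 0.05;  // fraction of marked ACKs
    double fraction = (s.acked_packets == 0)
                          ? 0.0
                          : static_cast<double>(s.marked_packets) / s.acked_packets;
    s.acked_packets = 0;   // reset counters for the next RTT
    s.marked_packets = 0;
    return fraction > THRESHOLD;
}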

Rerouting Flows:

  • ECMP engines in switches compute a hash-function based on packet-header fields.
  • FlowBender “hacks” a packet-header field, the TTL, based on congestion so that the generated hash, and hence the selected path, changes (a sketch follows below). (Why TTL? Long story. Feel free to email me if you want to know.)
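A correspondingly small sketch of the rerouting side: the end host cycles the value it writes into the hashed header field (TTL here), which changes the switches' ECMP hash and hence the path. The value range, names, and the idea of cycling through a fixed set are my own illustrative assumptions, not the paper's exact mechanism.

#include <cstdint>

// Hypothetical range of TTL values the host cycles through. All values must
// be large enough that packets never expire inside the datacenter fabric.
constexpr uint8_t TTL_BASE = 96;
constexpr uint8_t TTL_CHOICES = 8;   // number of distinct hash inputs

struct FlowPathState {
    uint8_t path_index = 0;          // which of the TTL_CHOICES values is in use
};

// Pick the next TTL value, changing the ECMP hash input and hence the path.
uint8_t reroute(FlowPathState& s) {
    s.path_index = (s.path_index + 1) % TTL_CHOICES;
    return TTL_BASE + s.path_index;  // value to stamp into outgoing packets' TTL
}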

Evaluation Details:

  • Evaluated against existing schemes: ECMP, RPS and DeTail.
  • Evaluation methods used:
    • Simulation (simulated in ns-3, not the dated ns-2 with its clunky Tcl bindings and painful C++ API from the prehistoric period)
    • Test-bed implementation

 

George Carlin on Issues


I may not completely agree with everything he says, but I love George Carlin. He’s hilarious and spot-on. He also has a knack for making excellent arguments brilliantly while being drop-dead hilarious at the same time, which is why he has had a lot of influence on my own opinions. For my own benefit as much as anyone else’s, here is a compilation of his comic take on various issues.

George Carlin on chickens (yay!):

George Carlin on what to do with the male chauvinists:

George Carlin on voting:

George Carlin on war:

George Carlin on questioning things:

George Carlin on abortion:

 

 

Evolution Notes – Variation under Domestication (1 of 3)

References used: The Origin of Species (first edition) by Charles Darwin. Chapter 1.


Cultivated plants and animals generally show more variety as compared to species bred naturally. This is probably because of the fact that domestic plants and animals are raised in conditions that are not as uniform as the natural habitat of their ancestors. Variation in domestic animals and plants is observed to be a continuous and unceasing process. For example, wheat, which is a very old cultivated plant, continues to produce new varieties.

Claim: Conditions of life are not the direct and only cause of this variation

Evidence: Seedlings of the same fruit and offspring from the same litter can be considerably different from each other even if both the offspring and parents are raised under exactly the same conditions. If the conditions of life were the only cause of variation in organisms, there would have been no variation among the offspring, or if there was variation, all the offspring should have varied in the same manner. This can be used as evidence of the fact that the laws of reproduction, growth and inheritance have a greater impact on the characteristics of an organism than do the conditions of life directly.

Claim: Correlation of growth. If an organism is selectively bred for a certain characteristic, other parts of the structure of that organism are also modified.

Evidence: Cats with blue eyes are always observed to be deaf. Hairless dogs have imperfect teeth. Long limbs are always accompanied by an elongated head. White sheep and pigs are differently affected by vegetable poisons as compared to their coloured counterparts.

Claim: As a general rule, every characteristic in an organism is inherited.

Evidence: When a very rare deviation, such as albinism, which appears in only one of several millions of individuals, shows up in an individual exposed to the same conditions of life as the rest of the population, the same deviation is often also found in that individual’s offspring. By the laws of probability, one is compelled to attribute its reappearance in the offspring to the laws of inheritance.

 

Spring Break Project: Biology


Since I have this week off from university as spring break, I have been given an epic opportunity to learn something new. So I have decided to learn something about a field that has always interested me: Biology. Especially things like evolutionary biology, ecology and genetics. My own field of Computer Science has a lot of things that are based on ideas in Biology including genetic algorithms, neural networks and even the botany-like terminology used for trees, their sub- and super-structures. Not to mention the fact that the father of computer science, Alan Turing, was also a part-time Biologist.

So here is my plan. I plan to read Charles Darwin’s The Origin of Species, two chapters every day for seven days (since the edition I was able to find at my university’s library contains fourteen chapters and I have seven days off) starting today and post notes related to them here on my blog. Let’s hope I manage to complete this project.

On Loneliness

An indigenous American farming. Source

I used to be mortally terrified of being alone. I have now learnt to embrace it like a long-lost brother. I understand the true power of this state. Being alone also means being independent. Independence leads to power and productivity. When you are not attached to anyone by symbolic strings of social convention, you are not held back by their mediocrity.

Now I want to be lonely, since only when I am alone am I truly free. A wanderer ardently experiencing the act of being alive and conscious on this planet and interestedly observing how others use up their lifetimes. I can understand the recurring patterns of nature and society without being encumbered by an unasked for and unsought membership of the latter.

I can get lost in the timeless world of mathematics and the arts and sciences emergent from it. I can express my true opinions on every issue and every matter without having to fake, censor or alter them for fear of ostracization. I can live a life that is not haunted by the ghosts of unnecessary emotional investment and unaffordable pseudo-familial or cultural expectation. I am free to explore the cascading branches born by my ideas and traverse the deepest and truest thoughts produced in my mind.

It is in this state that I am most likely to reminisce the memories of my childhood. It is in this condition that I am closest to comprehending the vast services my parents have granted me and the emotional foundation and mountains of love and affection my siblings have unreservedly made me the recipient of.

Algorithm Notes – Classification of Algorithms

References used for this section: C

Algorithms usually have a primary parameter which we will call N. This is basically the input size (i.e. the number of data items to be processed) and can be anything like:

  • The degree of a polynomial
  • The size of a file on which sorting or searching needs to be done
  • The number of nodes in a graph etc.

Algorithms can be classified according to the functions their running times are proportional to. Most common algorithms have running times proportional to one of the following functions (in descending order of efficiency):

  1. Constant. All the instructions in the algorithm are executed once or at most a few times. (define ‘a few’ precisely?) These algorithms are the most efficient.
  2. Logarithmic ( log N ). When N doubles, the running time of these algorithms increases by a constant. When N increases to N^2, the running time doubles.
  3. Linear ( N ). When N doubles, the running time doubles also.
  4. Log-Linear ( N log N ). When N doubles, the running time more than doubles (but only slightly).
  5. Quadratic ( N^2 ). When N doubles, the running time quadruples.
  6. Cubic ( N^3 ). When N doubles, the running time increases eight times.
  7. Exponential ( 2^N ). When N doubles, the running time squares. These are hopelessly slow algorithms.

To summarise, the running time of common algorithms is usually an element of:

  • \mathcal{O}(c)
  • \mathcal{O}(log N)
  • \mathcal{O}(N)
  • \mathcal{O}(N log N)
  • \mathcal{O}(N^2)
  • \mathcal{O}(N^3)
  • \mathcal{O}(2^N).
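As a rough illustration of the doubling behaviour described above, here is a small sketch (my own, not from the referenced book) that counts the basic operations of a linear and a quadratic loop and prints how the counts grow when N doubles.

#include <cstdint>
#include <iostream>

// Count of "basic operations" for a linear-time loop.
uint64_t linear_ops(uint64_t n) {
    uint64_t ops = 0;
    for (uint64_t i = 0; i < n; i++) ops++;
    return ops;
}

// Count of "basic operations" for a quadratic-time double loop.
uint64_t quadratic_ops(uint64_t n) {
    uint64_t ops = 0;
    for (uint64_t i = 0; i < n; i++)
        for (uint64_t j = 0; j < n; j++) ops++;
    return ops;
}

int main() {
    for (uint64_t n = 1000; n <= 8000; n *= 2) {
        std::cout << "N = " << n
                  << "  linear: " << linear_ops(n)        // doubles when N doubles
                  << "  quadratic: " << quadratic_ops(n)  // quadruples when N doubles
                  << '\n';
    }
}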

 

Algorithms Notes -Limitations of Simple Sorting

Def: Let (i, j) be an ordered pair of indices of an array A such that i < j and A[i] > A[j]. (i, j) is called an inversion in the array.

  • Swapping two adjacent elements that are out of place removes exactly one inversion.
  • A sorted array has zero inversions.

Note: Both the above assertions probably have good proofs. I can prove the second one by contradiction very prettily.

Technically, the running time of insertion sort is \mathcal{O}(I + n) where I is the number of inversions in the array. If the number of inversions is \mathcal{O}(n), insertion sort will run in linear time.
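As a small illustration of the definition, here is a brute-force inversion counter (quadratic, and the function name is my own):

#include <cstddef>
#include <vector>

// Count pairs (i, j) with i < j and A[i] > A[j], i.e. the inversions in A.
std::size_t count_inversions(const std::vector<int>& A) {
    std::size_t inversions = 0;
    for (std::size_t i = 0; i < A.size(); i++)
        for (std::size_t j = i + 1; j < A.size(); j++)
            if (A[i] > A[j])
                inversions++;
    return inversions;
}
// Example: count_inversions({3, 1, 2}) == 2, and a sorted array gives 0.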

Thm: The average number of inversions in an array of n distinct numbers is \frac{n(n-1)}{4}.

Proof: Let L be any list of numbers and L_{r} be the list in reverse order. Consider any pair of numbers (x, y) with (without loss of generality) y > x in L. It must be true that (x, y) is an inversion in exactly one of L and L_{r}. The total number of such pairs in any list L and its reverse L_{r} is \frac{n(n-1)}{2}. Hence, an average list has half this amount of inversions, i.e. \frac{n(n-1)}{4} inversions. \square

By this theorem, it is evident that insertion sort runs in quadratic time on average.

Thm: Any algorithm that sorts by exchanging adjacent elements requires \Omega(n^2) time on average.

Proof: The average number of inversions is initially \frac{n(n-1)}{4} = \Omega(n^2), as was established by the previous theorem. Since each adjacent exchange removes at most one inversion, \Omega(n^2) exchanges are required to sort the array. \square

This lower-bound proof is valid for an entire class of sorting algorithms that perform only adjacent exchanges. This includes, for example,

  • Insertion Sort
  • Bubble Sort
  • Selection Sort

This represents a limitation in ‘simple’ sorting algorithms and makes it clear that to achieve more efficiency we need to consider algorithms that do comparisons and exchanges between elements that are far apart. To run faster than quadratic time, an algorithm must eliminate more than one inversion per exchange.

Algorithms Notes – Insertion Sort


Animation depicting insertion sort

The following is pseudocode for insertion sort:

for i = 1 to length(A) - 1
    x = A[i]
    j = i - 1
    while j >= 0 and A[j] > x
        A[j+1] = A[j]
        j = j - 1
    end while
    A[j+1] = x
end for

Here is an implementation in C++ from Weiss’s book:

template <class Etype>
void Insertion_Sort(Etype A[], const unsigned int N)
{
    Etype Tmp;

    A[0] = Min_Val();    // sentinel: the book's helper returning a value smaller than any element
    for (unsigned int P = 1; P <= N; P++)
    {
        Tmp = A[P];
        unsigned int j;
        for (j = P; Tmp < A[j - 1]; j--)
            A[j] = A[j - 1];
        A[j] = Tmp;
    }
}

Analysis:

Since it has nested loops, each taking at most n iterations, insertion sort is \mathcal{O}(n^2). This bound is tight: a reverse-ordered input array achieves it, since the inner loop then performs P iterations for every value of P, giving

\sum_{P = 2}^{n} P = 2 + 3 + ... + n = \Theta(n^2).
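For completeness, the closed form of that sum is \sum_{P = 2}^{n} P = \frac{n(n+1)}{2} - 1, which is indeed \Theta(n^2).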

Algorithms Notes – Binary Trees


A binary tree.

Def: Let T be a tree such that no node in T has more than two children. T is referred to as a binary tree.

A binary tree effectively consists of a root r and two subtrees T_{L} and T_{R}, either (or both) of which could be empty.

Since we already know the maximum number of children each node in a binary tree can have, we can implement it in a way that is very simple in comparison to the implementation of a general tree. Just store pointers for the two children in every node.

 

struct Tree_Node
{
    Element_Type Element;
    Tree_Node* Left_Child;
    Tree_Node* Right_Child;
};
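As a small illustration of how this representation is used (assuming Element_Type is int, and repeating the struct so the sketch is self-contained), here is an in-order traversal of a three-node tree:

#include <iostream>

typedef int Element_Type;   // assumption for this sketch

struct Tree_Node
{
    Element_Type Element;
    Tree_Node* Left_Child;
    Tree_Node* Right_Child;
};

// Visit the left subtree, then the root, then the right subtree.
void in_order(const Tree_Node* t)
{
    if (t == nullptr) return;
    in_order(t->Left_Child);
    std::cout << t->Element << ' ';
    in_order(t->Right_Child);
}

int main()
{
    Tree_Node left  = {1, nullptr, nullptr};
    Tree_Node right = {3, nullptr, nullptr};
    Tree_Node root  = {2, &left, &right};
    in_order(&root);            // prints: 1 2 3
    std::cout << '\n';
}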

Def: Let T be a binary tree such that each node has either zero or exactly two children. T is called a full binary tree.

Full Binary Tree

Def: Let T be a binary tree such that every level of T except possibly the last is completely filled and all nodes in the last level are as far left as possible. T is called a complete binary tree.

Complete Binary Tree

 

Algorithms Notes – Implementing Trees

The simplest way to implement a tree would be something like

struct Tree_Node
{
    Element_Type Element;
    Tree_Node* Child_One;
    Tree_Node* Child_Two;
    ...
};

This approach has problems, though. To begin with, the number of children per node can vary greatly and is not known in advance, so fixing the number of child pointers in the node wastes a lot of space. A more efficient way to implement a tree is to store the children of each node in a linked list of nodes.

struct Tree_Node
{
    Element_Type Element;
    Tree_Node* Child_One;
    Tree_Node* Next_Sibling;
};
Pictorial representation of a tree implementation using the above-described method.
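As a small illustration of this first-child/next-sibling style representation (assuming Element_Type is int and mirroring the struct above), here is a sketch that prints a node followed by all of its descendants:

#include <iostream>

typedef int Element_Type;   // assumption for this sketch

// Mirrors the struct above: Child_One points to the first child,
// Next_Sibling links the children of the same parent together.
struct Tree_Node
{
    Element_Type Element;
    Tree_Node* Child_One;
    Tree_Node* Next_Sibling;
};

// Print a node and then, recursively, all of its children.
void print_subtree(const Tree_Node* t)
{
    if (t == nullptr) return;
    std::cout << t->Element << ' ';
    for (const Tree_Node* c = t->Child_One; c != nullptr; c = c->Next_Sibling)
        print_subtree(c);
}

int main()
{
    Tree_Node c2 = {3, nullptr, nullptr};
    Tree_Node c1 = {2, nullptr, &c2};    // c1's next sibling is c2
    Tree_Node root = {1, &c1, nullptr};  // root's first child is c1
    print_subtree(&root);                // prints: 1 2 3
    std::cout << '\n';
}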