George Carlin on Issues

I may not completely agree with everything he says, but I love George Carlin. He has a knack for making excellent arguments brilliantly while being drop-dead hilarious at the same time, which is why he has had a lot of influence on my own opinions. For my own benefit as much as anyone else’s, here is a compilation of his comic takes on various issues.

George Carlin on chickens (yay!):

George Carlin on what to do with the male chauvinists:

George Carlin on voting:

George Carlin on war:

George Carlin on questioning things:

George Carlin on abortion:

Evolution Notes – Variation under Domestication (1 of 3)

References used: The Origin of Species (first edition) by Charles Darwin. Chapter 1.

Cultivated plants and animals generally show more variation than species bred naturally. This is probably because domestic plants and animals are raised in conditions that are less uniform than the natural habitat of their ancestors. Variation in domestic animals and plants is a continuous and unceasing process. For example, wheat, which is a very old cultivated plant, continues to produce new varieties.

Claim: Conditions of life are not the direct and only cause of this variation

Evidence: Seedlings from the same fruit and offspring from the same litter can be considerably different from each other even when both the offspring and parents are raised under exactly the same conditions. If the conditions of life were the only cause of variation in organisms, there would be no variation among the offspring, or, if there were, all the offspring would vary in the same manner. This is evidence that the laws of reproduction, growth and inheritance have a greater impact on the characteristics of an organism than the conditions of life do directly.

Claim: Correlation of growth. If an organism is selectively bred for a certain characteristic, other parts of the structure of that organism are also modified.

Evidence: Cats with blue eyes are always observed to be deaf. Hairless dogs have imperfect teeth. Long limbs are always accompanied by an elongated head. White sheep and pigs are affected differently by certain vegetable poisons than their coloured counterparts.

Claim: As a general rule, every characteristic in an organism is inherited.

Evidence: When a very rare deviation such as albinism, found in only one of several million individuals, appears in an individual exposed to the same conditions of life as the rest of the population, the same deviation is often also found in that individual’s offspring. By the laws of probability, one is compelled to attribute its reappearance in the offspring to the laws of inheritance.

Spring Break Project: Biology

Since I have this week off from university for spring break, I have an epic opportunity to learn something new. So I have decided to learn about a field that has always interested me: Biology, especially evolutionary biology, ecology and genetics. My own field of Computer Science has a lot of ideas based on Biology, including genetic algorithms, neural networks and even the botany-like terminology used for trees and their sub- and super-structures. Not to mention that the father of computer science, Alan Turing, was also a part-time Biologist.

So here is my plan. I plan to read Charles Darwin’s The Origin of Species, two chapters every day for seven days (since the edition I was able to find at my university’s library contains fourteen chapters and I have seven days off) starting today and post notes related to them here on my blog. Let’s hope I manage to complete this project.

As the award ceremony begins, the crowd can be heard whispering: “Where is the champion? Where is Ryu?” Where is Ryu as his admirers chant his name? Already seeking the next challenge. Ceremony means nothing to him. The fight is everything. ~ Street Fighter 2.

Before the midterm exam of the Digital Logic Circuits course I am taking, a teaching assistant emailed us to say that an unofficial review session had been arranged for us by one Bilal Ahmad, who had been the head TA in the last offering of the course. I wanted a good grade in the course, and a review session would help, I figured, so I attended it. I was expecting a typical last-minute tutorial that basically summarises all the stuff we learnt in class and somewhat mentally prepares us for the exam.

Things turned out to be a lot different from what I expected, however. This… this person (for lack of a better term)… managed to teach me the pre-midterm course content better in that two-hour tutorial than half a semester’s worth of classes, book-reading and assignments combined. Moreover, he seemed passionate about what he was teaching. After the tutorial ended, he remained in the library teaching us more and helping us with problems until well after midnight, and then gave us his contact number in case we needed more help before the midterm. He was not a TA for the current offering of the course. He was not required to do any of this; he was doing it completely for free. The best part of it all was that when he taught something, it seemed so simple and so exciting.

“Who was this genius?” I found myself asking. It was Bilal Ahmad, an Electrical Engineering graduate. A person who had been head TA in the last offering of Digital Logic Circuits and had refused to stop helping students with the course. The only reason I am writing this post is to show my gratitude to this gem of a person, both for finally making me understand how a binary adder-subtractor works and for restoring my faith in humanity. This is a reminder that there are heroes in our midst. Heroes to whom the ceremony means nothing… and the fight is everything.

On Loneliness

An indigenous American farming.

I used to be mortally terrified of being alone. I have now learnt to embrace it like a long lost brother. I understand the true power of this state. Being alone also means being independent. Independence leads to power and productivity. When you are not attached to anyone by the symbolic strings of social convention, you are not held back by their mediocrity.

Now I want to be lonely, since only when I am alone am I truly free. A wanderer ardently experiencing the act of being alive and conscious on this planet, observing with interest how others use up their lifetimes. I can understand the recurring patterns of nature and society without being encumbered by an unasked-for and unsought membership of the latter.

I can get lost in the timeless world of mathematics and the arts and sciences emergent from it. I can express my true opinions on every issue and every matter without having to fake, censor or alter them for fear of ostracization. I can live a life that is not haunted by the ghosts of unnecessary emotional investment and unaffordable pseudo-familial or cultural expectation. I am free to explore the cascading branches borne by my ideas and traverse the deepest and truest thoughts produced in my mind.

It is in this state that I am most likely to reminisce about the memories of my childhood. It is in this condition that I am closest to comprehending all that my parents have done for me, and the emotional foundation and mountains of love and affection my siblings have unreservedly given me.

Algorithm Notes – Classification of Algorithms

References used for this section: C

Algorithms usually have a primary parameter, which we will call $N$. This is basically the input size (i.e. the number of data items to be processed) and can be, for example:

• The degree of a polynomial
• The size of a file on which sorting or searching needs to be done
• The number of nodes in a graph etc.

Algorithms can be classified according to the functions to which their running times are proportional. Most common algorithms have running times proportional to one of the following functions (in descending order of efficiency):

1. Constant. All the instructions in the algorithm are executed once or at most a few times. (define ‘a few’ precisely?) These algorithms are the most efficient.
2. Logarithmic ( $log N$ ). When $N$ doubles, the running time of these algorithms increases by a constant. When $N$ increases to $N^2$, the running time doubles.
3. Linear ( $N$ ). When $N$ doubles, the running time doubles also.
4. Log-Linear ( $N log N$ ). When $N$ doubles, the running time more than doubles (but only slightly).
5. Quadratic ( $N^2$ ). When $N$ doubles, the running time quadruples.
6. Cubic ( $N^3$ ). When $N$ doubles, the running time increases eight times.
7. Exponential ( $2^N$ ). When $N$ doubles, the running time squares. These are hopelessly slow algorithms.

To summarise, the running time of common algorithms is usually an element of one of the following (again in descending order of efficiency):

• $\mathcal{O}(c)$
• $\mathcal{O}(log N)$
• $\mathcal{O}(N)$
• $\mathcal{O}(N log N)$
• $\mathcal{O}(N^2)$
• $\mathcal{O}(N^3)$
• $\mathcal{O}(2^N)$
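To make the difference between linear and logarithmic growth concrete, here is a sketch comparing linear search with binary search on a sorted array. The function names and code are my own illustration, not taken from any particular reference:

```cpp
#include <cassert>
#include <vector>

// Linear search: O(N). Examines each element in turn, so when N
// doubles, the running time doubles.
int linear_search(const std::vector<int>& a, int key) {
    for (std::size_t i = 0; i < a.size(); ++i)
        if (a[i] == key) return static_cast<int>(i);
    return -1;
}

// Binary search on a sorted array: O(log N). Each probe halves the
// remaining range, so when N doubles, only one extra probe is needed.
int binary_search(const std::vector<int>& a, int key) {
    int lo = 0, hi = static_cast<int>(a.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}
```

Both return the index of the key (or -1 if absent), but for a million elements, binary search needs about 20 probes where linear search may need a million.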

Algorithms Notes – Limitations of Simple Sorting

Def: Let $(i, j)$ be an ordered pair of indices of an array $A$ such that $i < j$ and $A[i] > A[j]$. $(i, j)$ is called an inversion in the array.

• Swapping two adjacent elements that are out of place removes exactly one inversion.
• A sorted array has zero inversions.

Note: Both the above assertions probably have good proofs. I can prove the second one by contradiction very prettily.

Technically, the running time of insertion sort is $\mathcal{O}(I + n)$ where $I$ is the number of inversions in the array. If the number of inversions is $\mathcal{O}(n)$, insertion sort will run in linear time.
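To make the inversion count concrete, here is a small brute-force counter. This is my own illustration: it applies the definition directly with a quadratic pair scan, which is fine for small arrays:

```cpp
#include <vector>

// Count inversions: pairs (i, j) with i < j and A[i] > A[j].
// Brute force, O(n^2), but it is a direct transcription of the
// definition and useful for checking small examples.
long long count_inversions(const std::vector<int>& a) {
    long long inv = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = i + 1; j < a.size(); ++j)
            if (a[i] > a[j]) ++inv;
    return inv;
}
```

A sorted array gives 0, a fully reversed array of length $n$ gives $\frac{n(n-1)}{2}$, and a nearly sorted array gives a count close to $n$, which is exactly the case where insertion sort runs in linear time.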

Thm: The average number of inversions in an array of $n$ distinct numbers is $\frac{n(n-1)}{4}$.

Proof: Let $L$ be any list of numbers and $L_{r}$ be the list in reverse order. Consider any pair of numbers $(x, y)$ with (without loss of generality) $y > x$ in $L$. It must be true that $(x, y)$ is an inversion in exactly one of $L$ and $L_{r}$. The total number of such pairs in any list $L$ and its reverse $L_{r}$ is $\frac{n(n-1)}{2}$. Hence, an average list has half this number of inversions, i.e. $\frac{n(n-1)}{4}$ inversions. $\square$

By this theorem, it is evident that insertion sort runs in quadratic time on average.

Thm: Any algorithm that sorts by exchanging adjacent elements requires $\Omega(n^2)$ time on average.

Proof: The average number of inversions is initially $\frac{n(n-1)}{4} = \Omega(n^2)$ as was established by the previous theorem. Since each swap removes at most one inversion, $\Omega(n^2)$ swaps are required to sort the array. $\square$

This lower-bound proof is valid for an entire class of sorting algorithms that perform only adjacent exchanges. This includes, for example,

• Insertion Sort
• Bubble Sort
• Selection Sort

This represents a limitation in ‘simple’ sorting algorithms and makes it clear that to achieve more efficiency we need to consider algorithms that do comparisons and exchanges between elements that are far apart. To run faster than quadratic time, an algorithm must eliminate more than one inversion per exchange.
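Shellsort is the classic example of such an algorithm: by comparing and exchanging elements a gap apart, a single exchange can remove many inversions at once. A minimal sketch, assuming the simple halving gap sequence $N/2, N/4, \dots, 1$ (better gap sequences exist):

```cpp
#include <vector>

// Shellsort: gapped insertion sort. An exchange moves an element
// 'gap' positions at a time, so it can eliminate up to 'gap'
// inversions, breaking the quadratic lower bound for adjacent swaps.
void shellsort(std::vector<int>& a) {
    for (std::size_t gap = a.size() / 2; gap > 0; gap /= 2)
        for (std::size_t i = gap; i < a.size(); ++i) {
            int tmp = a[i];
            std::size_t j = i;
            // Shift gap-separated elements greater than tmp rightward.
            for (; j >= gap && a[j - gap] > tmp; j -= gap)
                a[j] = a[j - gap];
            a[j] = tmp;
        }
}
```

The final pass with gap 1 is plain insertion sort, but by then the earlier passes have removed most inversions, so it runs close to linear time.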

Algorithms Notes – Insertion Sort

Animation depicting insertion sort

The following is pseudocode for insertion sort:

for i = 1 to length(A) - 1
    x = A[i]
    j = i - 1
    while j >= 0 and A[j] > x
        A[j+1] = A[j]
        j = j - 1
    end while
    A[j+1] = x
end for

Here is an implementation in C++ from Weiss’s book (with transcription errors fixed: the template parameter list was missing, `j` was scoped inside the inner loop but used after it, the outer loop must start at 2 to preserve the sentinel, and a dash had replaced the decrement operator):

template <class Etype>
void Insertion_Sort(Etype A[], const unsigned int N)
{
    Etype Tmp;
    int j;

    A[0] = Min_Val();    // sentinel; the data occupies A[1..N]
    for (int P = 2; P <= N; P++)
    {
        Tmp = A[P];
        for (j = P; Tmp < A[j-1]; j--)
            A[j] = A[j-1];
        A[j] = Tmp;
    }
}

Analysis:

Since it has nested loops which each take at most $n$ iterations, insertion sort is $\mathcal{O}(n^2)$. This is a tight bound (an array in reverse order as input achieves it). To calculate the worst case precisely, sum the maximum iterations of the inner loop over all values of $P$:

$\sum_{P = 2}^{n} P = 2 + 3 + ... + n = \Theta(n^2)$.

Algorithms Notes – Binary Trees

A binary tree.

Def: Let $T$ be a tree such that no node in $T$ has more than two children. $T$ is referred to as a binary tree.

A binary tree effectively consists of a root $r$ and two subtrees $T_{L}$ and $T_{R}$, either (or both) of which could be empty.

Since we already know the maximum number of children each node in a binary tree can have, we can implement it in a way that is very simple in comparison to the implementation of a general tree. Just store pointers for the two children in every node.

struct Tree_Node
{
    Element_Type Element;
    Tree_Node* Left_Child;
    Tree_Node* Right_Child;
};
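As a quick sanity check of this representation, counting the nodes of a binary tree is a short recursion over the two child pointers. This helper is my own illustration (not from the notes), using `int` as a stand-in for `Element_Type`:

```cpp
#include <cstddef>

// int stands in for Element_Type in this sketch.
struct Tree_Node
{
    int Element;
    Tree_Node* Left_Child;
    Tree_Node* Right_Child;
};

// Number of nodes: 1 for the root plus the sizes of both subtrees.
// An empty tree (null pointer) has size 0.
int Tree_Size(const Tree_Node* t) {
    if (t == nullptr) return 0;
    return 1 + Tree_Size(t->Left_Child) + Tree_Size(t->Right_Child);
}
```

The same two-branch recursion pattern underlies most binary tree operations (height, traversals, searching).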


Def: Let $T$ be a binary tree such that each node has either zero or exactly two children. $T$ is called a full binary tree.

Full Binary Tree

Def: Let $T$ be a binary tree such that every level of $T$ except for possibly the last is completely filled and all nodes are as far-left as possible. $T$ is called a complete binary tree.

Complete Binary Tree

Algorithms Notes – Implementing Trees

The simplest way to implement a tree would be something like

struct Tree_Node
{
    Element_Type Element;
    Tree_Node* Child_One;
    Tree_Node* Child_Two;
    ...
};


This approach has problems, though. To begin with, the number of children per node can vary greatly and is not known in advance, so this wastes a lot of space. A more efficient way to implement a tree is to store the children of each node in a linked list of nodes.

struct Tree_Node
{
    Element_Type Element;
    Tree_Node* Child_One;
    Tree_Node* Next_Sibling;
};
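With this first-child/next-sibling representation, a preorder traversal visits a node and then walks its children by following the first-child pointer and then the sibling chain. A sketch of my own, with `int` standing in for `Element_Type`:

```cpp
#include <vector>

// int stands in for Element_Type in this sketch.
struct Tree_Node
{
    int Element;
    Tree_Node* Child_One;      // leftmost child
    Tree_Node* Next_Sibling;   // next child of this node's parent
};

// Preorder traversal: visit the node, then each child in turn.
// The children of t form a linked list starting at t->Child_One
// and chained through Next_Sibling.
void preorder(const Tree_Node* t, std::vector<int>& out) {
    if (t == nullptr) return;
    out.push_back(t->Element);
    for (const Tree_Node* c = t->Child_One; c != nullptr; c = c->Next_Sibling)
        preorder(c, out);
}
```

Note that each node stores only two pointers regardless of how many children it has, which is exactly the space saving this representation was chosen for.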


Pictorial Representation of a tree implementation using the above-described method.