How Carpal Tunnel improved my Code


“Waterfall”. M.C. Escher, 1961.

When I started waking up every morning with annoying, persistent tingling around a year and a half ago, I didn’t take it very seriously, which was probably the wrong thing to do. By the middle of 2016, however, the pain had become distracting enough to affect my concentration, and it was evident that I had carpal tunnel. With help from my father, who is luckily a medical doctor and told me about wrist braces and various lifestyle changes, the pain became somewhat manageable. I was still left with an engineering constraint, though.

Here is what usually happens when I sit and type: after the first ten minutes, the area between the thumb and index finger of my left hand begins tingling (I am left-handed, so it starts with the left hand). Around the seventeen-minute mark, my right hand starts feeling a similar tingling. After around half an hour, my hands are pretty much in pain and I also begin feeling pain around my shoulders. After forty minutes, I simply have to stand up and either walk around or lie down for a bit before resuming work. The pain subsides more or less completely after ten minutes of rest, but my breaks typically last twenty minutes as I am lazy.

To summarize, every forty minutes of typing incurs a cost of twenty minutes. Initially, I was pretty depressed, as I made the mistaken and naive assumption that time spent writing code was directly proportional to my productivity as a programmer. Under that assumption, carpal tunnel meant a ~33.3% loss in productivity. I have been roughly measuring my productivity by assigning difficulty levels to tasks I store on Google Tasks. This includes research projects, course projects, assignments, reading textbooks/documentation, etc. Surprisingly, after eight months of this, it seems my initial assumption of a ~33.3% loss in productivity could not have been more inaccurate. In fact, it seems to me that my productivity has actually increased over this period.

Yes, I am aware that correlation does not imply causation. Understandably, there are a lot of factors involved in this increase in productivity, from the fact that I gain more knowledge and experience with time to the fact that my caffeine consumption has also increased significantly. However, I am still convinced that carpal tunnel is playing a significant role in this trend. Here are a few reasons why I think that is the case.

1- The 20 minute “break” isn’t time wasted

In fact, during the 20-minute break I think about the code I have written and what I plan to do in my next 40-minute work session. In other words, this has turned into a kind of planning session that precedes every work session. The key advantage is this: earlier, if I had, say, n ways to approach subproblem X, I would first mentally sort them in descending order using an order principle like (w_{1} * ease of implementation + w_{2} * probability of working – w_{3} * computational resources required), where the w_{i}s are weights. Then, I would mentally execute the following algorithm:

approach_list = []

while not tired:
    approach_list.append(think_of_approach())

# sort approach_list by the order principle so that pop() returns
# the most promising approach first

current_approach = approach_list.pop()
while not working_fine(current_approach):
    current_approach = approach_list.pop()

until I managed to come up with one that worked fine.

Now what the planning session allows me to do is prune approach_list before the implementation and testing steps, by thinking through all the approaches and removing the ones I can deduce (or, when I’m really unsure, mathematically prove) will not work. In other words, I now execute the following in between the two while loops (after sorting, although the order doesn’t matter):

approach_list = [a for a in approach_list if deduce_correct(a)]

Since the deduce_correct function takes less time in the average case than implement-and-test, this method of pruning the list more than offsets the 20 minutes “wasted” in the planning session.
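Putting the pieces together, here is a self-contained sketch of the pruned workflow. The functions think_of_approach, deduce_correct and implement_and_test, and the hand-assigned scores, are placeholders for the mental steps described above, not real code I run:

```python
def pick_approach(candidates, score, deduce_correct, implement_and_test):
    """Sort candidate approaches by the order principle, prune the ones
    deduction rules out during the planning session, then implement and
    test the survivors until one works."""
    ordered = sorted(candidates, key=score, reverse=True)  # best first
    plausible = [a for a in ordered if deduce_correct(a)]  # planning-session prune
    for approach in plausible:
        if implement_and_test(approach):  # the expensive step
            return approach
    return None  # tired, and out of ideas

# Toy usage: three candidate approaches with hand-assigned scores.
scores = {"brute_force": 1, "clever_but_wrong": 3, "dynamic_programming": 2}
best = pick_approach(
    candidates=list(scores),
    score=scores.get,
    deduce_correct=lambda a: a != "clever_but_wrong",  # ruled out by deduction
    implement_and_test=lambda a: a == "dynamic_programming",
)
```

The highest-scoring approach never reaches the expensive implement-and-test step, which is exactly the saving the planning session buys.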

2- Pain incentivizes cleaner code

When every key-press hurts your hands, it is not difficult to motivate yourself to write code that is more:

  • Clear: having to alter, say, variable values later just to figure out what a piece of code does (in case I forget) will result in more agony.
  • Concise: the less code I write to address every problem, the less it hurts.
  • Reliant on the standard library and third party APIs: reinventing the wheel hurts too much.
  • Better commented: The last thing I want to do is rewrite a routine because I’ve forgotten how it works. The clearer (and more concise) my comments, the less painful my future.

3- Picking the right tool

This is in some ways a corollary of reason 2. I have become more inclined to begin projects using programming languages and technologies that help me get them done with the least boilerplate. While admittedly this can add performance and scalability challenges as things get more complex, I realize I often over-complicated my life by thinking that far ahead, resulting in even bare-bones functionality taking a long time to implement, and in me often resorting to hacky solutions. For example, it is almost redundant to mention that a simple web-crawler written quickly and beautifully with python + urllib or node-js + https/http libs (or even bash + wget) can take over twice the time (and twice the code) if implemented in certain other languages (Captain Obvious: “He means Java”).

Statistics: Writing this took me 1.575 writing sessions and 2 planning sessions. 🙂

What Next – A Research Game Plan


Evariste Galois

The French mathematician Évariste Galois solved a three-and-a-half-century-old problem on the solvability of polynomials. He also laid the foundations of group theory and Galois theory before his untimely death at twenty. [More on Galois]

On the other hand, I spent the night of my twentieth birthday studying (and struggling to understand) very simple undergraduate linear algebra miles away from any of Galois’ accomplishments. Not all of us are Galois or anything close to him. That doesn’t mean we can’t strive to reach his level. If not at twenty, then maybe by the time we are fifty.

Luckily, there are plenty of exciting problems to be worked on in my research area and solving them will require a solid knowledge-base and understanding. Here is how I would rate my knowledge of some of the sub-fields I need to know at this moment in time:

  • Internet Architecture: Working Knowledge
  • Datacenters: Some Knowledge
  • Internet Measurements: Good Working Knowledge
  • Distributed Systems: Little Knowledge
  • Network Security: No Knowledge
  • Wireless Networks: No Knowledge
  • Internet Censorship: Working Knowledge

I am currently in the third year of my undergraduate programme. In the ideal case, before the start of the first year of my graduate programme (if any grad school miraculously agrees to take me in as a PhD student), I should have a “Good Working Knowledge” of all the above categories.


Needless to say this requires developing adequate background in Distributed Systems, Network Security and Wireless Networks (which hopefully will pave the way for perhaps some interesting Internet of Things research) along with brushing up more on Datacenters. Let’s see how things work out. Lots of exciting work ahead. As Carl Sagan would say, “Somewhere, something incredible is waiting to be known”. 🙂

Side note: The frequency of my posts has undoubtedly decreased as I focus more on exciting research projects and graduate-level courses. In fact, the only reason I had the time to post this today was because I have a mid-semester break. Sorry about that.

PageRank: A Case Study for Discrete-time Markov Chains and Graph Theory

Def: Let a webpage be a vertex in a directed graph such that

  1. the weight of an outgoing edge connected to the vertex can only take values in the interval (0, 1]
  2. the sum of the weights of all outgoing edge connected to the vertex equals 1
  3. the weight of an outgoing edge connected to the vertex is \frac{1}{k} where k is the total number of outgoing edges connected to the vertex.

Def: Let a hyperlink be any outgoing edge connected to a webpage.

Def: Let an internet be a graph I = (V, E) such that all vertices v in V are webpages and all edges e in E are hyperlinks. The following definition is taken from [1]:

R(u) = c \sum_{v \in B_u} \frac{R(v)}{N_v}

where B_u is the set of webpages with hyperlinks pointing to webpage u, N_v is the number of hyperlinks going out of webpage v, and c is a normalization factor.

Seem familiar? If not, I will write it in an equivalent but more recognizable form [3]:

\pi_j = \sum_{i} \pi_i p_{ij}

These are the global balance equations of a Discrete-time Markov Chain!

Let I be an internet of four webpages, represented by a diagram in [2]. The above equations can be used to calculate the ranks of the four webpages in internet I: the ranks of webpages 1, 2, 3 and 4 are 0.38, 0.12, 0.29 and 0.19 respectively. [2]
Algorithm [3] (slightly modified):
  1. For an internet, determine all webpages and links between them.
  2. Create a directed graph.
  3. If page i has k > 0 outgoing links, then the probability of each outgoing link is 1/k.
  4. The vertices of the graph are states and its links are transitions of a Markov Chain. Solve the Markov Chain to get its limiting probabilities. The limiting probabilities are the pageranks of the webpages.
It is important to note that not every internet is conveniently ergodic (i.e. aperiodic and irreducible; all internets are finite, so positive recurrence is not a problem). See pages 4 and 5 of [1] for how Brin and Page solve this problem by creating artificial rank sources in rank sinks. A better explanation is provided in [2] under the title “The solution of Page and Brin”. Of course, to test the PageRank algorithm, Brin and Page created the Google Search Engine, initially hosted at Stanford and later self-hosted. You might have heard of it.
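The algorithm above can be sketched in a few lines of power iteration. The four-page internet below is a hypothetical example of my own (not the graph from [2]), and the damping factor d is the "artificial rank source" fix from [1]/[2] folded into the iteration:

```python
# Hypothetical four-page internet: page -> pages it links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = len(links)
d = 0.85  # damping factor: the artificial rank source that fixes non-ergodic internets

ranks = [1.0 / n] * n  # start from the uniform distribution
for _ in range(100):  # iterate the global balance equations to convergence
    new_ranks = [(1 - d) / n] * n
    for page, outgoing in links.items():
        share = ranks[page] / len(outgoing)  # each outgoing link carries weight 1/k
        for target in outgoing:
            new_ranks[target] += d * share
    ranks = new_ranks
```

The limiting probabilities in ranks are the pageranks; they sum to 1, and a page with no incoming links (page 3 here) gets only the artificial-source share (1 − d)/n.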
[1] Paper: “The PageRank Citation Ranking: Bringing Order to the Web” by Brin, Page.
[2] “The Mathematics of Web Search” lecture notes (Winter, 2009) Cornell University
[3] Notes from Dr. Zartash’s lectures on DTMCs and Queueing theory and Notes from Dr. Ihsan’s lecture on the PageRank Algorithm.

Interesting Academic Genealogy

Def: Let a → b be a directed edge in which the predecessor a is the advisor and the successor b is the advisee.
Jacob Bernoulli (Doctorate in Theology, 1676, Universität Basel – Switzerland)
→ Johann Bernoulli (Medicinae Dr. Universität Basel 1690 – Switzerland)
→ Leonhard Euler (Ph.D. Universität Basel 1726 – Switzerland)
→ Joseph Louis Lagrange (B.A. Università di Torino 1754 Italy)*
+ Pierre-Simon Laplace (unknown)
→ Simeon Denis Poisson (Ph.D. École Polytechnique 1800 France)^
→ Michel Chasles (Ph.D. École Polytechnique 1814 France)
→ Hubert Anson Newton (B.A. Yale University 1850 United States)
→ Eliakim Hastings Moore (Ph.D. Yale University 1885 United States)
→ Oswald Veblen (Ph.D. The University of Chicago 1903 United States)
→ Alonzo Church (Ph.D. Princeton University 1927 United States)
→ Alan Mathison Turing (Ph.D. Princeton University 1938 United States)
* Not officially. He didn’t get a doctorate degree.
^ co-advised by both.

Networks Notes: RFC 1149 – IP over Avian Carriers

This is a series containing notes I made while reading RFCs.

Link to this RFC.


Avian carrier

A Standard for the Transmission of IP Datagrams on Avian Carriers.

Note: this was the April Fool’s RFC for the year 1990. 🙂

  • Describes an experimental method for sending IP datagrams over “avian carriers” (pigeons!)
  • Primarily useful in Metropolitan Area Networks
  • Experimental, not recommended standard.

Overview and Rational

  • Avian carriers provide:
    • high delay
    • low throughput
    • low altitude service
  • Connection path limited to a single point-to-point path for each carrier.
  • Many carriers can be used without significant interference with each other, outside of early spring.

Frame Format

  • Scroll of paper wrapped around avian carrier’s leg.
  • Bandwidth depends on leg length.
  • MTU typically 256 mg. Paradoxically, generally increases as carrier ages.


Discussion

  • Prioritized pecking order can be used for multiple types of service.
  • Built-in worm detection and eradication
  • Storms can cause dataloss.
  • Persistent delivery retry, until the carrier drops.

Security Considerations

  • Not generally a problem in normal operation
  • Data encryption needed in tactical environments.

Networks Notes: RFC 792 – Internet Control Message Protocol

This is a series containing notes I made while reading RFCs.

Link to this RFC.

Internet Control Message Protocol (ICMP)

  • Used by a gateway or destination to communicate with a source host. To report an error, for example.
  • Uses the basic support of IP as if it were a higher-level protocol. Actually a part of IP; implemented in every IP module.
  • Control messages provide feedback about problems in the communication environment.
  • Not designed to make IP reliable; IP is not designed to be completely reliable. For its main purpose, see the previous point.
  • No guarantees that a control message will be returned. Datagrams can still be undelivered without being reported by a control message.
  • Reliability is implemented in higher-level protocols that use IP (e.g TCP).
  • ICMP messages typically report errors in the processing of datagrams.
  • No ICMP messages are sent about ICMP messages (to avoid infinite loop).
  • ICMP messages are only sent about errors in handling fragment zero of fragmented datagrams (the fragment with the fragment offset equal to zero).

Message Formats

  • Sent using the basic IP header.
  • First octet of the data portion of the datagram is an ICMP type field. Its value determines the format of the remaining data.
  • Protocol number of ICMP is 1.

For values of an ICMP message’s IP header, see the “Message Formats” section of the linked RFC.

ICMP Fields

  • Type
  • Code
  • Checksum

For details on ICMP message types see pages 4 to 19 of the linked RFC.
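As a small illustration of the field layout above, here is a sketch that unpacks the three leading ICMP fields from the data portion of an IP datagram. The message bytes are hand-built for the example (an echo request with a placeholder checksum), not taken from a real capture:

```python
import struct

def parse_icmp_fields(payload: bytes) -> dict:
    """Unpack the leading ICMP fields: type (1 octet), code (1 octet) and
    checksum (2 octets), all in network byte order."""
    icmp_type, code, checksum = struct.unpack("!BBH", payload[:4])
    return {"type": icmp_type, "code": code, "checksum": checksum}

# A hand-built Echo Request (type 8, code 0) with a zero placeholder checksum.
message = struct.pack("!BBH", 8, 0, 0) + b"ping"
fields = parse_icmp_fields(message)
```

Everything after the first four octets depends on the type field, which is why the RFC documents each message type separately.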


Networks Notes: RFC 791 – Internet Protocol

This is a series containing notes I made while reading RFCs.

Link to this RFC.

Internet Protocol

  • Implements two basic functions:
    • Addressing
    • Fragmentation
  • Addresses carried in the internet header are used to transmit internet datagrams to their destinations.
  • The selection of a path for transmission is called routing.
  • Internet modules use fields in the internet header to fragment and reassemble internet datagrams when necessary in “small packet” networks.
  • An internet module resides in:
    • each host engaged in communication
    • each gateway interconnecting networks
  • Modules share common rules for interpreting the address fields and for fragmenting and assembling internet datagrams.
  • Modules have procedures for making routing decisions and other functions.
  • The protocol treats each internet datagram as an independent entity unrelated to any other internet datagram (i.e no connections or logical circuits).
  • The protocol uses four key mechanisms to provide service:
    • Type of Service
    • Time to Live
    • Options
    • Header Checksum


  • Type of Service
    • Indicates the quality of the service desired.
    • Used by gateways for:
      • selecting transmission parameters for a particular network
      • selecting the network to be used for the next hop
      • selecting the next gateway when routing the internet datagram.
  • Time to Live
    • Indication of an upper bound on the lifetime of an internet datagram.
    • Set by sender of the datagram.
    • Decremented at points in the route where it is processed.
    • The internet datagram is destroyed if this field reaches zero before reaching the destination.
    • A “self-destruct” time limit.
  • Options
    • provide for control functions that are needed or useful in some situations.
    • unnecessary in most common communications.
    • can include provisions for timestamps, security, routing etc.
  • Header Checksum
    • provides a verification that the internet datagram has been transmitted correctly.
    • the internet datagram is discarded if the header checksum fails.
  • The internet protocol does not provide a reliable communication facility. There are no:
    • acknowledgements
    • error control for data, aside from the header checksum
    • retransmissions
    • flow control
  • Errors detected can be reported using the Internet Control Message Protocol (ICMP), implemented in the IP module.
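The header-checksum mechanism listed above can be made concrete. This is a sketch of the 16-bit one's complement checksum procedure the RFC specifies for the internet header (the byte values in the test of the idea are arbitrary, not a real header):

```python
def header_checksum(header: bytes) -> int:
    """16-bit one's complement of the one's complement sum of the header,
    taken over 16-bit words (the checksum field is assumed zeroed first)."""
    if len(header) % 2:
        header += b"\x00"  # pad odd-length input to whole 16-bit words
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# Verification property: recomputing the checksum over a header that already
# carries a correct checksum yields zero, which is how a receiver decides
# whether to discard the datagram.
```

The same one's complement procedure reappears in ICMP, UDP and TCP, which is part of why it was chosen: it is cheap to compute incrementally in software.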


On names, addresses and routes…

  • Name – what we seek
  • Address – where it is
  • Route – how to get there

On addresses…

  • Addresses have a fixed length of four octets (32 bits).
  • An address:
    • begins with a network number
    • followed by the local address (“rest” field)
  • Addresses have three formats or classes.
    • Class A: high order bit is zero. Next 7 bits are the network. Last 24 bits are the local address.
    • Class B: high order two bits are one-zero. Next 14 bits are the network. Last 16 bits are the local address.
    • Class C: high order three bits are one-one-zero. Next 21 bits are the network. Last 8 bits are the local address.
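A minimal sketch of classifying an address by its high-order bits, as listed above (the dotted-quad string parsing is my addition for readability):

```python
def address_class(address: str) -> str:
    """Classify an IPv4 address into class A, B or C by the high-order
    bits of its first octet (RFC 791 classful addressing)."""
    first_octet = int(address.split(".")[0])
    if first_octet >> 7 == 0b0:    # high-order bit is zero
        return "A"
    if first_octet >> 6 == 0b10:   # high-order bits are one-zero
        return "B"
    if first_octet >> 5 == 0b110:  # high-order bits are one-one-zero
        return "C"
    return "other"  # classes D and E were defined later
```

So class A gives 7 network bits and 24 local bits, B gives 14 and 16, and C gives 21 and 8, exactly as in the list above.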


On fragmentation…

  • Necessary when large packets have to pass through a local network that limits packets to a smaller size.
  • An internet datagram can be marked “don’t fragment”.
    • It will not be fragmented under any circumstances.
    • If it cannot be delivered without fragmentation, it will be discarded.
  • Needs to be able to break a datagram into an almost arbitrary number of pieces that can later be reassembled.
  • Receiver uses identification field to ensure fragments of different datagrams are not mixed.
  • Fragment offset field tells receiver the position of a fragment in the original datagram.
  • Fragment offset and length determine portion of the original datagram covered by the fragment.
  • More-fragments flag indicates (by being reset) the last fragment
  • Fields that provide sufficient information to reassemble datagrams:
    • identification field
    • fragment offset field
    • length
    • more-fragments flag
  • See RFC page 8-9 for a very well-written description of how fragmentation works.
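The fields listed above are enough to write a toy reassembler. This is a sketch only: it assumes all fragments already share one identification field (i.e. belong to the same datagram) and ignores the reassembly timer:

```python
def reassemble(fragments) -> bytes:
    """Rebuild a datagram's data from fragments of one datagram.

    Each fragment is (fragment_offset, more_fragments, data); the offset
    is measured in 8-octet units, and the fragment with the more-fragments
    flag reset marks the end of the original datagram."""
    buffer = bytearray()
    total_length = None
    for offset, more_fragments, data in fragments:
        start = offset * 8  # fragment offset is in units of 8 octets
        end = start + len(data)
        if end > len(buffer):
            buffer.extend(b"\x00" * (end - len(buffer)))
        buffer[start:end] = data
        if not more_fragments:
            total_length = end  # offset + length of the last fragment
    if total_length is None:
        raise ValueError("last fragment (more_fragments reset) not received")
    return bytes(buffer[:total_length])
```

Note that fragments can arrive in any order; the offset field alone places each one, which is why the RFC can allow "an almost arbitrary number of pieces".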


On gateways…

  • Forward datagrams between networks
  • Also implement the Gateway to Gateway Protocol (GGP) to coordinate routing and other internet control information.
  • Higher level protocols need not be implemented in a gateway. GGP functions are added to the IP module.

The details on specification are very well written and I don’t think notes for them are needed. To read up the specification, refer to section 3 of the above linked RFC.

Networks Notes: RFC 768 – User Datagram Protocol

This is a series containing notes I made while reading RFCs.

Link to this RFC.

User Datagram Protocol


  • Uses IP as its underlying protocol.
  • Does not guarantee reliable delivery of data streams
  • Protocol mechanism is minimal

Header contains:

  • Source port. (optional)
    • Contains the port of the sending process.
    • Any replies may be addressed to this port.
    • Contains a zero value if unused.
  • Destination port.
    • Contains the port of the destination address.
  • Length.
    • Contains the length (in octets, i.e. bytes) of the datagram being sent, including the header and the data.
    • Minimum value is eight because the header is eight bytes.
  • Checksum.
    • 16-bit one’s complement of the one’s complement sum of the information being sent.
    • Sums up the information in the IP header*, the UDP header and the data.
    • Pads the data with zero octets to make it a multiple of two octets (16 bits, remember?)
    • Has a value of all zeroes if not generated (e.g for debugging purposes)
    • Same checksum procedure is also used in TCP.

* or, to be more precise, the pseudo-header that is assumed to be prefixed to the datagram.
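The header layout above is compact enough to build by hand. A sketch that packs the four 16-bit fields in network byte order, leaving the checksum as the all-zeroes "not generated" value (the ports in the usage line are arbitrary examples):

```python
import struct

def udp_header(source_port: int, dest_port: int, data: bytes) -> bytes:
    """Pack the 8-octet UDP header: source port, destination port, length
    and checksum, each 16 bits in network byte order. Length counts the
    header plus the data, so its minimum value is eight."""
    length = 8 + len(data)
    checksum = 0  # all zeroes: checksum not generated
    return struct.pack("!HHHH", source_port, dest_port, length, checksum)

# Arbitrary example: a 5-byte payload gives a length field of 8 + 5 = 13.
datagram = udp_header(5353, 53, b"hello") + b"hello"
```

A real sender would compute the checksum over the pseudo-header, header and data as described above; zero is only acceptable when the checksum is deliberately not generated.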

A user-interface designed for this protocol should allow:

  • The creation of new receive ports
  • Functionality on the receive ports that does operations like returning the data octets and indicating the source port and address.
  • An operation allowing a datagram to be sent, specifying the data and the destination port and address.

IP Interface guidelines:

  • UDP module should get the source and destination addresses and the protocol field from the IP header.
  • One possible UDP/IP interface can:
    • return whole internet datagram (including internet header) on a receive operation.
    • pass whole internet datagram (again, including header) to the IP on a send operation.
    • Let the IP verify certain fields for consistency and compute the internet header checksum.

Possible applications of this protocol include:

  • Internet Name Server
  • Trivial File Transfer

When used in the Internet Protocol, the protocol number of UDP is 17 (21 in octal).


Project Twain

To hype-up my vocabulary for the GRE and gain lots of information on the way, I am going to read articles regularly and list them here with my comments.

#1 “Waiting for Gödel” by Siobhan Roberts in The New Yorker. June 29, 2016. ( link )

My comments: A very informative article about Gödel’s theorem and mathematics in general. Centered around the experience of Siobhan Roberts while attending a crash-course on Gödel’s Incompleteness Theorem at the Brooklyn Institute for Social Research, where she encountered people as diverse as “a computer scientist obsessed with recursion…, a public-health nutritionist with a fondness for her “Philoslothical” T-shirt; a philosopher in the tradition of American pragmatism; an ad man versed in the classics; and a private-school teacher who’d spent a lonely, life-changing winter reading Douglas Hofstadter’s “Gödel, Escher, Bach.”” A very pleasant read.

#2 “Killing the Computer to Save It” by John Markoff in The New York Times. October 29, 2012. ( link )

My comments: This article discusses the attempt of a famous computer scientist and network security specialist, Peter G. Neumann, to redesign computer systems to prevent the security flaws that plague the modern-day internet. With nostalgic descriptions of SRI, the early days of computers and the Internet, and a meeting between Dr. Neumann and Albert Einstein that was to greatly influence Dr. Neumann’s life and work, the article is very vivid and engaging.

#3 “Soviet Computer Push” in The New York Times. January 5, 1988. ( link )

My comments: Not a particularly good article. Monotonously reports a 1985 Soviet attempt to accelerate the development of its computer industry in order to compete with the West.

#4 “Ada Lovelace, The First Tech Visionary” by Betsy Morais in The New Yorker. October 15, 2013. ( link )

My comments: This article provides a glimpse into the life of Ada Lovelace using various sources. It discusses her contributions to the “Analytical Engine”, introduced to her by its inventor, Charles Babbage, which was later recognized as the world’s first designed computer. It describes how Ada was the first person to recognize the true potential of the Analytical Engine and to write the world’s first computer program, which wove a sequence of Bernoulli numbers. Moreover, it informs us that Ada was the first person to predict the rise of a new science emerging from mathematics based on this design, which she called the “science of operations” and which is now called computer science. In parallel, the article discusses the discrimination faced by women in mathematics and computing, and even the twentieth-century attempts at discrediting Ada in order to establish male domination in the field. ““As people realized how important computer programming was, there was a greater backlash and an attempt to reclaim it as a male activity,” Aurora told me. “In order to keep that wealth and power in a man’s hands, there’s a backlash to try to redefine it as something a woman didn’t do, and shouldn’t do, and couldn’t do.”” The author concludes with a mention of the Ada programming language, named after the Countess of Lovelace, which “brought together a number of different programming languages. It’s fitting for Lovelace—a woman who rode horses and played the harp and studied poetry—to tie seemingly disparate elements together.”

#5 “Beautiful Code” by Zeke Turner in The New Yorker. March 30, 2015. ( link )

My comments: Some people might find this article beautiful. I see it as a narration of how three people tried to sell other people’s code and algorithms as art on fancy sandstone tablets. Wasn’t particularly informative or inspiring.

#6 “Tesla Slept Here” by Mark Singer in The New Yorker. January 14, 2008. ( link )

My comments: This article uses the setting of the hotel Nikola Tesla lived in for the last decade of his life as a starting point to discuss his life. It is unique because, unlike most Tesla articles, it has almost no details about his scientific contributions. Instead it focuses more on Tesla the person, and Tesla the occupant of rooms 3327 and 3328.

#7 “Slavoj Žižek on Brexit, the crisis of the Left, and the future of Europe” by Slavoj Zizek and Benjamin Ramm in Open Democracy. July 1, 2016.  ( link )  [ entry by Ali Mehdi Zaidi ]

Mehdi’s comments: This article has an interesting discussion on the feasibility of popular democracy and the various forms of governance which can be instituted to bring about radical change.

My comments: Very illuminating. Zizek is not afraid to make assertions he can back up with solid reasoning, even if it is criticism of the Left and democracy. ““Direct democracy is the last Leftist myth”, Žižek tells me. “When there is a genuine democratic moment – when you really have to decide – it’s because there is a crisis”.” I am not sure what his exact opinion on the EU is, but seems somewhat supportive with a bit of suggested changes in policy. Analyzes the Brexit fuck-up really well.

#8 “Claude Shannon, the Father of the Information Age, Turns 1100100” by Siobhan Roberts in The New Yorker. April 30, 2016. ( link )

My comments: This article gives a heartening account of the mathematician Claude Shannon, who is aptly called the father of the Information Age. It discusses some of Shannon’s greatest contributions: laying the foundation of Information Theory, and establishing the connection between Boolean Algebra and circuit design in his Master’s thesis, which completely changed the field of circuit design. As the article quotes, there will probably be an entry on Shannon in the 166th edition of the Encyclopedia Galactica, something like “Claude Shannon: Born on the planet Earth (Sol III) in the year 1916 A.D. Generally regarded as the father of the information age, he formulated the notion of channel capacity in 1948 A.D. Within several decades, mathematicians and engineers had devised practical ways to communicate reliably at data rates within one per cent of the Shannon limit.”

#9 “The Saint and the Skyscraper” by Mohammed Hanif in The New York Times. June 15, 2016. ( link )

My comments: A very well-written article informing us about the shrine of Abdullah Shah Ghazi in Karachi and what it means to the poor people there. Having been threatened by religious extremists and having suffered a suicide blast in 2010, the shrine now has a new enemy: corporate real-estate investors. Bahria Icon Tower has completely surrounded the shrine with concrete walls, making it impossible for its visitors, the poor, to reach Ghazi’s shrine and shielding it from the very people who need it. The article moves on to criticize the general trend in Karachi of development by the rich, exclusively for the rich, and how it harms and sidelines the poor. “This is the development model Karachi has followed. There are signal-free corridors for car owners, but hardly any footpaths for the millions who walk to work. There are air-conditioned shopping malls for affluent consumers, but the police hound street vendors claiming they’re a threat to public order…But then who needs the sea, or a saint to protect us against it, when we can have infinity pools in the sky?”

#10 “I Worry About Muslims” by Mohammed Hanif in The New York Times. December 17, 2015. ( link )

My comments: This article discusses the death of Sabeen Mahmud. The writer, Mohammed Hanif, discusses the hypocrisy of both her Muslim killers and the moderates who take it upon themselves to defend the reputation of Islam. Very insightful read. It ends with: “The most poetic bit Muslim pundits tell the world is that Islam says if you murder one human being you murder the whole human race. So how come Sabeen Mahmud is gone and the whole bloody human race, including her killers, is still alive?”

#11 “Leopold Weiss, The Jew Who Helped Invent the Modern Islamic State” by Shalom Goldman in Tablet. July 1, 2016. ( link ) [entry by Ali Mehdi Zaidi]

Mehdi’s comments: This is about Muhammad Asad, father of the famous anthropologist Talal Asad and a renowned scholar of the Quran. Apparently, the man had a very interesting back story and had profound ideas on Islamic governance. Definitely worth reading.

My comments: Very interesting story, especially considering his diverse background and very globalized life.

#12 “How Nikola Tesla Predicted the Smartphone” by Celena Chong in TIME magazine. November 10, 2015. ( link )

My comments: This article depicts the genius of the Serbian scientist and inventor Nikola Tesla using yet another example: his accurate, almost prophetic prediction of the smartphone. Today, Tesla is widely regarded as the father of the electric age and a man far ahead of his time.

#13 “There is no difference between computer art and human art” by Oliver Roeder in Aeon. July 20, 2016. ( link )

My comments: This article challenges the distinction some commentators make between human and computer art, arguing that computer art is as human as “paint art or piano art”.