Let $latex I$ be an internet represented by the following diagram. 
The above equations can be used to calculate the ranks of the four webpages in internet $latex I$. The ranks of webpages 1, 2, 3 and 4 are 0.38, 0.12, 0.29 and 0.19 respectively.
Algorithm (slightly modified):
For an internet, determine all webpages and links between them.
Create a directed graph.
If page i has k > 0 outgoing links, then the probability of each outgoing link is 1/k.
The vertices of the graph are the states, and its links are the transitions, of a Markov chain.
Solve the Markov chain for its limiting probabilities.
The limiting probabilities are the PageRanks of the webpages.
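The steps above can be sketched in a few lines of NumPy. The link structure below is a hypothetical 4-page internet chosen to be ergodic, not the diagram from this post; the limiting probabilities are found by power iteration, i.e. repeatedly applying the transition matrix until the distribution stops changing.

```python
import numpy as np

# A hypothetical 4-page internet (not the diagram above):
# links[i] lists the pages that page i links to.
links = {0: [1, 3], 1: [0, 2], 2: [0], 3: [2]}
n = 4

# Step: if page i has k > 0 outgoing links, each link has probability 1/k.
P = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        P[i, j] = 1.0 / len(outs)

# Step: solve the Markov chain for its limiting probabilities by
# power iteration (valid because this example chain is ergodic).
pi = np.full(n, 1.0 / n)
for _ in range(1000):
    new = pi @ P
    if np.abs(new - pi).sum() < 1e-12:
        pi = new
        break
    pi = new

# The limiting probabilities are the PageRanks of the webpages.
print(np.round(pi, 3))  # ≈ [0.364, 0.182, 0.273, 0.182]
```

For this graph the stationary equations solve exactly to $latex \pi = (4/11, 2/11, 3/11, 2/11)$, which the iteration reproduces.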
It is important to note that not every internet is conveniently ergodic (i.e., aperiodic and irreducible; all internets are finite, so positive recurrence is not a problem). See pages 4 and 5 of  to see how Brin and Page solve this problem by creating artificial rank sources in rank sinks. A better explanation is provided in  under the title “The solution of Page and Brin”. Of course, to test the PageRank algorithm, Brin and Page created the Google Search Engine, initially hosted at google.stanford.edu and later self-hosted at google.com. You might have heard of it.
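The rank-source fix is commonly written as a damping factor: with probability $latex d$ the surfer follows a link, and with probability $latex 1-d$ jumps to a uniformly random page. A minimal sketch, assuming a tiny 3-page example with a rank sink (the link structure and the value $latex d = 0.85$ are illustrative):

```python
import numpy as np

d = 0.85  # damping factor, commonly quoted for PageRank
n = 3

# A rank sink: page 2 has no outgoing links at all.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0]])

# Treat dangling pages as linking to every page uniformly.
dangling = P.sum(axis=1) == 0
P[dangling] = 1.0 / n

# The modified matrix is irreducible and aperiodic by construction,
# so the chain is ergodic and the limiting probabilities exist.
G = d * P + (1 - d) / n * np.ones((n, n))

pi = np.full(n, 1.0 / n)
for _ in range(1000):
    pi = pi @ G
print(np.round(pi, 3))
```

Because every entry of the modified matrix is strictly positive, power iteration converges regardless of the original link graph's structure.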
 Paper: “The PageRank Citation Ranking: Bringing Order to the Web” by Brin, Page.
 “The Mathematics of Web Search” lecture notes, Cornell University, Winter 2009.
 Notes from Dr. Zartash’s lectures on DTMCs and queueing theory, and notes from Dr. Ihsan’s lecture on the PageRank algorithm.