Random-Surfer Web-Graph Model
A Random-Surfer Web-Graph Model is a Web-based Random-Graph Model that describes the probability of a random user visiting a web page.
- Context:
- It can be artificially simulated by a Random Surfing Algorithm.
- …
- Example(s):
- Counter-Example(s):
- See: Markov Chain, Graph Theory, Web Page, Hyperlink, URL.
References
2021
- (Wikipedia, 2021) ⇒ https://en.wikipedia.org/wiki/Random_surfing_model Retrieved:2021-8-14.
- The random surfing model is a graph model which describes the probability of a random user visiting a web page. The model attempts to predict the chance that a random internet surfer will arrive at a page either by clicking a link or by accessing the site directly, for example by entering the website's URL in the address bar. Accordingly, the model assumes that every user surfing the internet eventually stops following links and switches to another site entirely. The model is similar to a Markov chain, where the chain's states are the web pages the user lands on and the transitions between them (links) are equally probable.
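As an illustration, the following minimal Python sketch simulates such a surfer on a hypothetical three-page link graph and estimates visit frequencies. The page names, the 0.85 follow probability, and the function name are assumptions made for illustration, not part of the model's definition:

```python
import random

# Toy link graph (hypothetical pages): page -> outgoing hyperlinks.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

def simulate_surfer(links, steps=100_000, follow_prob=0.85, seed=0):
    """Estimate how often a random surfer visits each page.

    At each step the surfer follows a uniformly random outgoing link
    with probability follow_prob; otherwise it "switches sites" by
    jumping to a uniformly random page (e.g. typing a URL directly).
    """
    rng = random.Random(seed)
    pages = list(links)
    visits = dict.fromkeys(pages, 0)
    page = rng.choice(pages)
    for _ in range(steps):
        visits[page] += 1
        if links[page] and rng.random() < follow_prob:
            page = rng.choice(links[page])  # click a random link
        else:
            page = rng.choice(pages)        # jump to a random page
    return {p: visits[p] / steps for p in pages}

print(simulate_surfer(links))
```

The long-run visit frequencies this produces approximate the stationary distribution of the corresponding Markov chain.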
2008
- (Chebolu & Melsted, 2008) ⇒ Prasad Chebolu, and Páll Melsted (2008). "PageRank and The Random Surfer Model". In: Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2008).
- QUOTE: The Random Surfer Web-Graph model is a sequence of directed graphs $G_t, t = 1, 2, \ldots$. The graph $G_t$ has $t$ vertices and $t$ edges. The process is parameterized with a probability $p$, and we let $q = 1 - p$. $G_1$ consists of a single vertex $v_1$ and a directed self-loop. Given $G_t$ we create $G_{t+1}$ in the following manner:
- 1. add $v_{t+1}$ to the graph
- 2. pick $u$ uniformly at random from $v_1, \ldots, v_t$
- 3. with probability $p$ add the edge $\left(v_{t+1}, u\right)$ to the graph and stop
- 4. otherwise with probability $q$ replace $u$ by the unique vertex $u$ points to and repeat step 3
- We see that $G_t$ will be a directed tree rooted at the first vertex, $v_1$, which we will refer to as the root.
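This process is straightforward to simulate. The following is a minimal Python sketch under stated assumptions: vertices are 0-indexed (so the root $v_1$ is index 0), the tree $G_n$ is represented by each vertex's unique out-neighbour, and the function name is this page's, not the paper's:

```python
import random

def random_surfer_graph(n, p, seed=0):
    """Generate G_n by the process above (0-indexed sketch).

    parent[i] is the unique vertex that v_{i+1} points to;
    parent[0] == 0 encodes the directed self-loop at the root v_1.
    Requires p > 0 so each surfing phase terminates.
    """
    rng = random.Random(seed)
    parent = [0]                      # G_1: a single vertex with a self-loop
    for t in range(1, n):
        u = rng.randrange(t)          # step 2: pick u uniformly from v_1..v_t
        while rng.random() >= p:      # step 4: with probability q = 1 - p ...
            u = parent[u]             # ... replace u by the vertex it points to
        parent.append(u)              # step 3: add the edge (v_{t+1}, u)
    return parent

print(random_surfer_graph(n=10, p=0.3))
```

Since every vertex has exactly one outgoing edge and all surfing paths lead toward $v_1$, the result is indeed a directed tree rooted at $v_1$.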
For a directed graph the PageRank is defined as the stationary probability distribution of a random walk on the graph. The random walk follows the outgoing edges of the graph, but at each step it restarts with probability $1-c$ at a new vertex chosen uniformly at random. The probability $c$ is called the decaying factor and influences the convergence of the computation of PageRank; (...)
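A minimal power-iteration sketch of this PageRank definition, specialized to the out-degree-one graphs produced above. The `parent` encoding, the value $c = 0.85$, and the iteration count are illustrative assumptions, not taken from the paper:

```python
def pagerank(parent, c=0.85, iters=100):
    """Power-iteration PageRank for a graph where every vertex has
    out-degree one, given as parent[v] = the vertex that v points to.

    With probability c the walk follows a vertex's single outgoing
    edge; with probability 1 - c it restarts at a uniformly random
    vertex. A sketch, not the paper's analysis.
    """
    n = len(parent)
    rank = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - c) / n] * n           # restart mass, spread uniformly
        for v in range(n):
            nxt[parent[v]] += c * rank[v]   # mass flowing along v's out-edge
        rank = nxt
    return rank

# A small hand-written tree: vertex 0 is the root with a self-loop.
print(pagerank([0, 0, 1, 0, 2]))
```

Each iteration preserves total probability mass (the $(1-c)$ restart mass plus the $c$ fraction pushed along edges sums to 1), so the vector converges to the stationary distribution.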
2006
- (Blum et al., 2006) ⇒ Avrim Blum, T. H. Hubert Chan, and Mugizi Robert Rwebangira (2006). "A Random-Surfer Web-Graph Model". In: Proceedings of the Third Workshop on Analytic Algorithmics and Combinatorics (ANALCO 2006).
- QUOTE: The starting point for this paper is the observation that a simple “random surfer” model provides a natural explanation for preferential attachment. In particular, imagine that each new node (a person setting up their web page) puts $k$ links into the existing graph by picking a random start node and then randomly surfing the web until it finds $k$ interesting pages to connect to. Imagine also that each page is equally likely to be interesting to the surfer and each link is bidirectional (so we have an undirected graph). Then, if the probability $p$ of a page being “interesting” is sufficiently small, these connections will be made (approximately) according to the stationary distribution of the walk, which is exactly the preferential attachment distribution.
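The following minimal Python sketch simulates this undirected process. The two-node seed graph, the parameter values, and the function name are illustrative assumptions; the paper analyzes the model rather than prescribing an implementation:

```python
import random

def random_surfer_attach(n, k=1, p=0.1, seed=0):
    """Sketch of the undirected random-surfer attachment process.

    Each new node starts at a uniformly random existing node and surfs
    the undirected graph; each page visited is independently
    "interesting" with probability p, and the new node links to the
    first k distinct interesting pages found. Assumes k is at most the
    number of existing nodes.
    """
    rng = random.Random(seed)
    adj = {0: [1], 1: [0]}            # seed graph: one undirected edge
    for new in range(2, n):
        found = set()
        v = rng.randrange(new)        # pick a random start node
        while len(found) < k:
            if v not in found and rng.random() < p:
                found.add(v)          # page is interesting: connect later
            v = rng.choice(adj[v])    # keep surfing along a random edge
        adj[new] = list(found)
        for u in found:
            adj[u].append(new)        # links are bidirectional
    return adj

print(random_surfer_attach(n=8, k=2, p=0.2))
```

With small $p$ the walk mixes before stopping, so each connection lands on a vertex with probability roughly proportional to its degree, which is the preferential attachment distribution the quote describes.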