Link prediction

From HandWiki
Short description: Problem in network theory

In network theory, link prediction is the problem of predicting the existence of a link between two entities in a network. Examples of link prediction include predicting friendship links among users in a social network, predicting co-authorship links in a citation network, and predicting interactions between genes and proteins in a biological network. Link prediction can also have a temporal aspect, where, given a snapshot of the set of links at time [math]\displaystyle{ t }[/math], the goal is to predict the links at time [math]\displaystyle{ t + 1 }[/math]. Link prediction is widely applicable. In e-commerce, link prediction is often a subtask for recommending items to users. In the curation of citation databases, it can be used for record deduplication. In bioinformatics, it has been used to predict protein-protein interactions (PPI). It is also used to identify hidden groups of terrorists and criminals in security-related applications.[1]

Problem definition

Consider a network [math]\displaystyle{ G = (V, E) }[/math], where [math]\displaystyle{ V }[/math] represents the entity nodes in the network and [math]\displaystyle{ E \subseteq V \times V }[/math] represents the set of "true" links between entities in the network. We are given the set of entities [math]\displaystyle{ V }[/math] and a subset of the true links, which are referred to as observed links. The goal of link prediction is to identify the unobserved true links. In the temporal formulation of link prediction, the observed links correspond to the true links at time [math]\displaystyle{ t }[/math], and the goal is to infer the set of true links at time [math]\displaystyle{ t+1 }[/math]. Usually, we are also given a subset of unobserved links called potential links [math]\displaystyle{ E' }[/math], and we need to identify the true links among these potential links.

In the binary classification formulation of the link prediction task, the potential links are classified as either true links or false links. Link prediction approaches for this setting learn a classifier [math]\displaystyle{ M_b }[/math] that maps links in [math]\displaystyle{ E' }[/math] to positive and negative labels, i.e. [math]\displaystyle{ M_b:E' \to \{0,1\} }[/math]. In the probability estimation formulation, potential links are associated with existence probabilities. Link prediction approaches for this setting learn a model [math]\displaystyle{ M_p }[/math] that maps links in [math]\displaystyle{ E' }[/math] to a probability, i.e. [math]\displaystyle{ M_p:E' \to [0,1] }[/math].
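
As an illustration of these two formulations, the following sketch trains a logistic regression classifier over hypothetical feature vectors for potential links; the feature construction, the toy data, and the use of scikit-learn are illustrative assumptions rather than part of any particular method described in this article.

```python
# Minimal sketch of the binary-classification (M_b) and probability-estimation
# (M_p) formulations. Each potential link is assumed to have already been
# converted into a feature vector; the features and data below are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training links: one row of features per link, labeled
# 1 (true link) or 0 (false link).
X_train = np.array([[3, 0.4], [0, 0.05], [5, 0.7], [1, 0.1]])
y_train = np.array([1, 0, 1, 0])

# Feature vectors for potential links E' whose status is unknown.
X_potential = np.array([[4, 0.6], [0, 0.02]])

model = LogisticRegression().fit(X_train, y_train)

labels = model.predict(X_potential)                     # M_b : E' -> {0, 1}
probabilities = model.predict_proba(X_potential)[:, 1]  # M_p : E' -> [0, 1]

print(labels, probabilities)
```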

Single-link approaches learn a model that classifies each link independently. Structured prediction approaches capture the correlation between potential links by formulating the task as a collective link prediction task. Collective link prediction approaches learn a model that jointly identifies all the true links among the set of potential links.

The link prediction task can also be formulated as an instance of a missing-value estimation task. Here, the graph is represented as an adjacency matrix with missing values. The task is to complete the matrix by identifying the missing values. Matrix factorization-based methods commonly use this formulation.
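
The following is a minimal sketch of the matrix completion view, using a truncated singular value decomposition as a simple stand-in for a matrix factorization method; the toy adjacency matrix and the choice of rank are illustrative assumptions.

```python
# Minimal sketch of link prediction as matrix completion: approximate the
# observed adjacency matrix with a low-rank reconstruction and score
# unobserved entries by their reconstructed values. Toy data only.
import numpy as np

# Observed adjacency matrix of a small undirected graph (1 = observed link,
# 0 = unobserved / possibly missing link).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

# Rank-k truncated SVD as a simple low-rank factorization A ~= U_k S_k V_k^T.
k = 2
U, s, Vt = np.linalg.svd(A)
scores = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Unobserved pairs with the highest reconstructed scores are the most
# promising candidate links.
n = len(A)
candidates = [(i, j, scores[i, j])
              for i in range(n) for j in range(i + 1, n) if A[i, j] == 0]
print(sorted(candidates, key=lambda t: -t[2])[:3])
```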

History

The task of link prediction has attracted attention from several research communities, ranging from statistics and network science to machine learning and data mining. In statistics, generative random graph models such as stochastic block models propose an approach to generate links between nodes in a random graph. For social networks, Liben-Nowell and Kleinberg proposed link prediction models based on different graph proximity measures.[2] Several statistical models have been proposed for link prediction by the machine learning and data mining community. For example, Popescul et al. proposed a structured logistic regression model that can make use of relational features.[3] Local conditional probability models based on attribute and structural features were proposed by O’Madadhain et al.[4] Several models based on directed graphical models for collective link prediction have been proposed by Getoor.[5] Other approaches based on random walks[6] and matrix factorization[7] have also been proposed. With the advent of deep learning, several graph embedding based approaches for link prediction have also been proposed.[8] For more information on link prediction, refer to the surveys by Getoor et al.[9] and Yu et al.[10]

Approaches and methods

Several link prediction approaches have been proposed, including unsupervised approaches such as similarity measures computed on the entity attributes, random walk and matrix factorization based approaches, and supervised approaches based on graphical models and deep learning. Link prediction approaches can be divided into two broad categories based on the type of the underlying network: (1) link prediction approaches for homogeneous networks and (2) link prediction approaches for heterogeneous networks. Based on the type of information used to predict links, approaches can be categorized as topology-based approaches, content-based approaches, and mixed methods.[11]

Topology-based methods

Topology-based methods broadly make the assumption that nodes with similar network structure are more likely to form a link.

Common neighbors

This is a common approach to link prediction that counts the number of neighbors two nodes have in common. Nodes with more neighbors in common are more likely to have a link. For nodes [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] with neighbor sets [math]\displaystyle{ N(x) }[/math] and [math]\displaystyle{ N(y) }[/math], it is computed as follows:

[math]\displaystyle{ CN(x,y) = {|N(x) \cap N(y)|} }[/math]

A weakness of this approach is that it does not take into account the relative number of common neighbors.
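
The following is a minimal sketch of the common neighbors score on a toy graph, comparing a direct computation with the corresponding NetworkX function; the example graph and node pair are illustrative only.

```python
# Minimal sketch of the common-neighbors score: the size of the intersection
# of the neighbor sets N(x) and N(y).
import networkx as nx

# Toy undirected graph used only for illustration.
G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d")])

def common_neighbors_score(G, x, y):
    """Number of neighbors shared by nodes x and y."""
    return len(set(G[x]) & set(G[y]))

print(common_neighbors_score(G, "a", "d"))          # direct computation
print(len(list(nx.common_neighbors(G, "a", "d"))))  # NetworkX equivalent
```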

Jaccard measure

The Jaccard measure addresses this weakness of the common neighbors score by normalizing the number of common neighbors by the total number of neighbors:

[math]\displaystyle{ J(x,y) = {{|N(x) \cap N(y)|}\over{|N(x) \cup N(y)|}} }[/math]
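
A minimal sketch of the Jaccard measure on the same kind of toy graph, again comparing a direct computation with the corresponding NetworkX function; the graph is illustrative only.

```python
# Minimal sketch of the Jaccard measure: common neighbors normalized by the
# size of the union of the two neighborhoods.
import networkx as nx

# Toy undirected graph used only for illustration.
G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d")])

def jaccard_score(G, x, y):
    sx, sy = set(G[x]), set(G[y])
    union = sx | sy
    return len(sx & sy) / len(union) if union else 0.0

print(jaccard_score(G, "a", "b"))                     # direct computation
print(list(nx.jaccard_coefficient(G, [("a", "b")])))  # NetworkX equivalent
```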

Adamic–Adar measure

The Adamic–Adar measure[12] sums, over the common neighbors of two nodes, the inverse logarithm of each common neighbor's degree. Rare common neighbors therefore contribute more to the score, which can yield better results than simply counting common neighbors. It is computed as follows:

[math]\displaystyle{ A(x,y) = \sum_{u \in N(x) \cap N(y)} \frac{1}{\log |N(u)| }, }[/math]

where [math]\displaystyle{ N(u) }[/math] is the set of nodes adjacent to [math]\displaystyle{ u }[/math].
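
A minimal sketch of the Adamic–Adar measure, computed both directly from the definition above and with the corresponding NetworkX function; the toy graph is illustrative only.

```python
# Minimal sketch of the Adamic-Adar measure: each common neighbor u
# contributes 1 / log(degree(u)), so rare shared neighbors count more.
import math
import networkx as nx

# Toy undirected graph used only for illustration.
G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d")])

def adamic_adar_score(G, x, y):
    common = set(G[x]) & set(G[y])
    return sum(1.0 / math.log(G.degree(u)) for u in common)

print(adamic_adar_score(G, "a", "d"))               # direct computation
print(list(nx.adamic_adar_index(G, [("a", "d")])))  # NetworkX equivalent
```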

Katz measure

Neighbor-based methods can be effective when the number of neighbors is large, but this is not the case in sparse graphs. In these situations it is appropriate to use methods that account for longer walks. The Katz measure[13] is one metric that captures this. It is computed by counting the walks of each length [math]\displaystyle{ k }[/math] between two nodes and summing these counts, with the contribution of longer walks attenuated by a user-specified weight.

Let A be the adjacency matrix of a network under consideration. Elements [math]\displaystyle{ (a_{ij}) }[/math] of A take the value 1 if node i is connected to node j and 0 otherwise. The powers of A indicate the presence (or absence) of walks between two nodes through intermediaries. For instance, in the matrix [math]\displaystyle{ A^3 }[/math], if element [math]\displaystyle{ (a_{2,12}) = 1 }[/math], then node 2 and node 12 are connected by exactly one walk of length 3. If [math]\displaystyle{ C_{\mathrm{Katz}}(i) }[/math] denotes the Katz centrality of a node i, then mathematically:

[math]\displaystyle{ C_{\mathrm{Katz}}(i) = \sum_{k=1}^\infty \sum_{j=1}^n \alpha^k (A^k)_{ji} }[/math]

Note that the above definition uses the fact that the element at location [math]\displaystyle{ (i,j) }[/math] of [math]\displaystyle{ A^k }[/math] is the total number of walks of length [math]\displaystyle{ k }[/math] between nodes [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math].
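
The following is a minimal sketch of the Katz computation via truncated powers of the adjacency matrix; the toy graph, the attenuation factor, and the truncation length are illustrative choices.

```python
# Minimal sketch of the Katz measure: S = sum over k >= 1 of alpha^k * A^k,
# truncated at a maximum walk length. Entry S[i, j] scores the node pair
# (i, j); summing column i gives the Katz centrality of node i, matching the
# formula above.
import numpy as np

# Toy adjacency matrix used only for illustration.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

alpha = 0.1    # attenuation weight; must be below 1 / largest eigenvalue of A
max_len = 10   # truncate the infinite sum at walks of this length

S = np.zeros_like(A)
Ak = np.eye(len(A))
for k in range(1, max_len + 1):
    Ak = Ak @ A               # Ak now holds A^k: counts of length-k walks
    S += (alpha ** k) * Ak

katz_centrality = S.sum(axis=0)   # column sums, as in the definition above
print(S)
print(katz_centrality)
```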

Node attribute-based methods

Node-similarity methods predict the existence of a link based on the similarity of the node attributes.

Euclidean distance

The attribute values are represented as normalized vectors, and the Euclidean distance between the vectors is used to measure similarity. Smaller distances indicate higher similarity.

Cosine similarity

After normalizing the attribute values, the cosine of the angle between the two vectors is a good measure of similarity, with higher values indicating higher similarity.
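
The following is a minimal sketch of both attribute-based similarities, computed with NumPy on two hypothetical attribute vectors.

```python
# Minimal sketch of node-attribute similarity: normalize two attribute
# vectors, then compare them with Euclidean distance (smaller = more similar)
# and cosine similarity (larger = more similar). The vectors are placeholders.
import numpy as np

x = np.array([3.0, 1.0, 0.0, 2.0])   # attribute vector of node x
y = np.array([2.0, 1.0, 1.0, 2.0])   # attribute vector of node y

# L2-normalize so that differences in scale do not dominate the comparison.
x_n = x / np.linalg.norm(x)
y_n = y / np.linalg.norm(y)

euclidean_distance = np.linalg.norm(x_n - y_n)
cosine_similarity = float(x_n @ y_n)

print(euclidean_distance, cosine_similarity)
```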

Mixed methods

Mixed methods combine attribute and topology based methods.

Graph embeddings

Graph embeddings also offer a convenient way to predict links.[8] Graph embedding algorithms, such as Node2vec, learn an embedding space in which neighboring nodes are mapped to nearby vectors, so that vector similarity measures, such as dot product similarity or Euclidean distance, reflect proximity in the original graph. These similarities are functions of both topological features and attribute-based similarity. One can then use other machine learning techniques to predict edges on the basis of vector similarity.
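
A minimal sketch of embedding-based link scoring follows, assuming node embeddings have already been computed (for example, by Node2vec); the embedding vectors below are made-up placeholders rather than the output of any particular algorithm.

```python
# Minimal sketch of link scoring with pre-computed node embeddings: rank
# candidate pairs by the dot product of their embedding vectors.
import numpy as np

# Hypothetical embeddings; in practice these would come from an embedding
# algorithm such as Node2vec.
embeddings = {
    "a": np.array([0.9, 0.1, 0.3]),
    "b": np.array([0.8, 0.2, 0.4]),
    "c": np.array([-0.1, 0.9, 0.2]),
}

def dot_product_score(u, v):
    """Higher dot product suggests the pair is more likely to be linked."""
    return float(embeddings[u] @ embeddings[v])

candidate_pairs = [("a", "b"), ("a", "c")]
ranked = sorted(candidate_pairs, key=lambda p: -dot_product_score(*p))
print(ranked)
```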

Probabilistic relationship models

A probabilistic relational model (PRM) specifies a template for a probability distribution over a database. The template describes the relational schema for the domain and the probabilistic dependencies between attributes in the domain. A PRM, together with a particular database of entities and unobserved links, defines a probability distribution over the unobserved links.[5]

Probabilistic soft logic (PSL)

Probabilistic soft logic (PSL) is a probabilistic graphical model over a hinge-loss Markov random field (HL-MRF). HL-MRFs are specified by a set of templated first-order logic-like rules, which are then grounded over the data. PSL can combine attribute, or local, information with topological, or relational, information. While PSL can incorporate local predictors, such as cosine similarity, it also supports relational rules, such as triangle completion in a network.[14]

Markov logic networks (MLNs)

A Markov logic network (MLN) is a probabilistic graphical model defined over Markov networks. These networks are defined by templated first-order logic-like rules, which are then grounded over the training data. MLNs are able to incorporate both local and relational rules for the purpose of link prediction.[15]

R-Model (RML)

The R-Model (RML) is a neural network model created to provide a deep learning approach to the link weight prediction problem. It uses a node embedding technique that extracts node embeddings (knowledge of nodes) from the known links’ weights (relations between nodes) and uses this knowledge to predict the unknown links’ weights.[16]

Applications

Link prediction has found varied uses, but any domain in which entities interact in a structured way can benefit from link prediction.[17] A common application of link prediction is improving similarity measures for collaborative filtering approaches to recommendation. Link prediction is also frequently used in social networks to suggest friends to users. It has also been used to predict criminal associations.

In biology, link prediction has been used to predict interactions between proteins in protein-protein interaction networks.[18] It has also been used to infer interactions between drugs and their targets.[19] Another application is found in collaboration prediction in scientific co-authorship networks.

Entity resolution, also known as deduplication, commonly uses link prediction to predict whether two entities in a network are references to the same physical entity. Some authors have used context information in network structured domains to improve entity resolution.[20]

Link prediction in the context of network effects has been used to analyze tendencies to spread across networks and can be used to improve marketing strategies, in particular viral marketing.[citation needed]

References

  1. Al Hasan, Mohammad; Zaki, Mohammed (2011). Link Prediction in Social Networks. http://www.cs.rpi.edu/~zaki/PaperDir/SNDA11.pdf. 
  2. Liben-Nowell, David; Kleinberg, Jon (2007). "The Link-Prediction Problem for Social Networks". Journal of the American Society for Information Science and Technology 58 (7): 1019–1031. doi:10.1002/asi.20591. 
  3. Popescul, Alexandrin; Ungar, Lyle (2002). "Statistical Relational Learning for Link Prediction". Workshop on Learning Statistical Models from Relational Data. https://www.cis.upenn.edu/~ungar/papers/popescul/statistical03link.pdf. 
  4. O’Madadhain, Joshua; Hutchins, Jon; Smyth, Padhraic (2005). "Prediction and Ranking Algorithms for Event-Based Network Data". Journal of the American Society for Information Science and Technology. https://www.ics.uci.edu/~johutchi/pubs/3-OMadadhain.pdf. 
  5. Getoor, Lise; Friedman, Nir; Koller, Daphne; Taskar, Benjamin (2002). Learning Probabilistic Models of Link Structure. https://users.umiacs.umd.edu/~getoor/Publications/jmlr02.pdf. 
  6. Backstrom, Lars; Leskovec, Jure (2011). Supervised random walks: predicting and recommending links in social networks. doi:10.1145/1935826.1935914. 
  7. Menon, Aditya; Elkan, Charles (2011). "Link prediction via matrix factorization". Machine Learning and Knowledge Discovery in Databases. Lecture Notes in Computer Science. 6912. pp. 437–452. doi:10.1007/978-3-642-23783-6_28. ISBN 978-3-642-23782-9. https://link.springer.com/content/pdf/10.1007/978-3-642-23783-6_28.pdf. 
  8. Xiao, Han; et al. (2015). "From One Point to A Manifold: Knowledge Graph Embedding For Precise Link Prediction". SIGMOD. 
  9. Getoor, Lise; Diehl, Christopher (2005). Link mining: a survey. doi:10.1145/1117454.1117456. 
  10. Yu, Philip S.; Han, Jiawei; Faloutsos, Christos (2010). Link Mining: Models, Algorithms, and Applications. 
  11. Aggarwal, Charu (2015). Data Mining. Springer. pp. 665–670. 
  12. Adamic, Lada; Adar, Eytan (2003). "Friends and neighbors on the web". Social Networks 25 (3): 211–230. doi:10.1016/S0378-8733(03)00009-1. 
  13. Katz, L. (1953). "A New Status Index Derived from Sociometric Analysis". Psychometrika 18: 39–43. doi:10.1007/BF02289026. 
  14. Bach, Stephen; Broecheler, Matthias; Huang, Bert; Getoor, Lise (2017). "Hinge-Loss Markov Random Fields and Probabilistic Soft Logic". Journal of Machine Learning Research 18: 1–67. 
  15. Domingos, Pedro; Richardson, Matthew (2006). Markov logic networks. http://nlp.jbnu.ac.kr/PGM/slides_other/mln.pdf. 
  16. Hou, Yuchen; Holder, Lawrence (2017). Introducing Model R for Link Weight Prediction. https://par.nsf.gov/servlets/purl/10033443.pdf. 
  17. Martinez, Victor (2016). "A Survey of Link Prediction in Complex Networks". ACM Computing Surveys 49 (4): 1–33. doi:10.1145/3012704. 
  18. Qi, Yanjun (2006). "Evaluation of different biological data and computational classification methods for use in protein interaction prediction". Proteins: Structure, Function, and Bioinformatics 63 (3): 490–500. doi:10.1002/prot.20865. PMID 16450363. 
  19. Sridhar, Dhanya; Fakhraei, Shobeir; Getoor, Lise (2016). "A Probabilistic Approach for Collective Similarity-based Drug-Drug Interaction Prediction". Bioinformatics 32 (20): 3175–3182. doi:10.1093/bioinformatics/btw342. PMID 27354693. https://linqs.github.io/linqs-website/assets/resources/sridhar-bio16.pdf. 
  20. Bhattacharya, Indrajit; Getoor, Lise (2007). "Collective entity resolution in relational data". ACM Transactions on Knowledge Discovery from Data 1: 5. doi:10.1145/1217299.1217304.