Message Passing Neural Network (MPNN)
A Message Passing Neural Network (MPNN) is a Spatial Graph Convolutional Network that treats graph convolutions as a message passing process in which information is passed from one node to another along edges.
- Context:
- It was first introduced by Gilmer et al. (2017).
- Example(s):
- the MPNN model described in Gilmer et al. (2017),
- …
- Counter-Example(s):
- See: Recurrent Neural Network, Feedforward Neural Network, Attention Mechanism.
References
2020a
- (Wu et al., 2020) ⇒ Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu (2020). "A Comprehensive Survey on Graph Neural Networks". In: IEEE Transactions on Neural Networks and Learning Systems, 32(1), 4-24.
- QUOTE: Message Passing Neural Network (MPNN) (Gilmer et al., 2017) outlines a general framework of spatial-based ConvGNNs. It treats graph convolutions as a message passing process in which information can be passed from one node to another along edges directly. MPNN runs K-step message passing iterations to let information propagate further. The message passing function (namely the spatial graph convolution) is defined as
$\mathbf{h}_{v}^{(k)}=U_{k}\left(\mathbf{h}_{v}^{(k-1)}, \displaystyle\sum_{u \in N(v)} M_{k}\left(\mathbf{h}_{v}^{(k-1)}, \mathbf{h}_{u}^{(k-1)}, \mathbf{x}_{vu}^{e}\right)\right)$ (21)
- where $\mathbf{h}_{v}^{(0)} = \mathbf{x}_v$, and $U_k(\cdot)$ and $M_k(\cdot)$ are functions with learnable parameters.
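- The update in Eq. (21) can be made concrete with a short sketch. Below is a minimal NumPy implementation of one message passing step, assuming sum aggregation and, purely for illustration, linear message and update functions with weights shared across steps; the names `mpnn_step`, `W_msg`, and `W_upd` are hypothetical and not from the quoted sources.

```python
import numpy as np

def mpnn_step(h, edges, edge_attr, W_msg, W_upd):
    """One message passing step in the spirit of Eq. (21).

    h         : (n, d) array of node states h_v^{(k-1)}
    edges     : list of directed pairs (u, v); each contributes to node v's sum
    edge_attr : dict mapping (u, v) -> edge feature vector x_{vu}^e
    W_msg     : (d, 2d + e) weights of an illustrative linear M_k
    W_upd     : (d, 2d) weights of an illustrative linear U_k
    """
    msg = np.zeros_like(h)
    for u, v in edges:
        # M_k(h_v^{(k-1)}, h_u^{(k-1)}, x_{vu}^e), summed over u in N(v)
        msg[v] += W_msg @ np.concatenate([h[v], h[u], edge_attr[(u, v)]])
    # U_k(h_v^{(k-1)}, m_v): a linear map over [h_v ; m_v] with a ReLU
    return np.maximum(0.0, np.concatenate([h, msg], axis=1) @ W_upd.T)

# Running K such steps lets information propagate K hops
# (weights are shared across steps here only for brevity):
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 2))                         # h_v^{(0)} = x_v
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
edge_attr = {e: rng.normal(size=1) for e in edges}
W_msg, W_upd = rng.normal(size=(2, 5)), rng.normal(size=(2, 4))
for _ in range(3):                                  # K = 3
    h = mpnn_step(h, edges, edge_attr, W_msg, W_upd)
```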
2020b
- (Zhou et al., 2020) ⇒ Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun (2020). "Graph neural networks: A review of methods and applications". AI Open, 1, 57-81.
- QUOTE: The message passing neural network (MPNN) (Gilmer et al., 2017) extracts the general characteristics among several classic models. The model contains two phases: a message passing phase and a readout phase. In the message passing phase, the model first uses the message function $M_t$ to aggregate the “message” $\mathbf{m}^t_v$ from neighbors and then uses the update function $U_t$ to update the hidden state $\mathbf{h}^t_v$:(...)
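- For reference, the message passing phase sketched above is written in Gilmer et al.'s (2017) notation as

$\mathbf{m}_{v}^{t+1}=\displaystyle\sum_{w \in N(v)} M_{t}\left(\mathbf{h}_{v}^{t}, \mathbf{h}_{w}^{t}, \mathbf{e}_{vw}\right), \qquad \mathbf{h}_{v}^{t+1}=U_{t}\left(\mathbf{h}_{v}^{t}, \mathbf{m}_{v}^{t+1}\right)$

- where $\mathbf{e}_{vw}$ denotes the features of edge $(v, w)$; this notation follows the original paper rather than the truncated quote.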
2019
- (Zhang et al., 2019) ⇒ Si Zhang, Hanghang Tong, Jiejun Xu, and Ross Maciejewski (2019). "Graph convolutional networks: a comprehensive review". In: Computational Social Networks, 6(1), 1-23.
- QUOTE: Message-Passing Neural Networks (MPNNs) proposed in Gilmer et al. (2017) generalize many variants of graph neural networks, such as graph convolutional networks (...) and gated graph neural networks (...). MPNN can be viewed as a two-phase model, including message-passing phase and readout phase. In the message-passing phase, the model runs node aggregations for $P$ steps and each step contains the following two functions:
$\mathbf{H}^{p+1}(u,:)=\displaystyle\sum_{v\in\mathcal{N}(u)}M^p\left(\mathbf{X}^p(u,:),\mathbf{X}^p(v,:),\mathbf{e}_{u,v}\right)$ (22)

$\mathbf{X}^{p+1}(u,:)=U^p\left(\mathbf{X}^p(u,:),\mathbf{H}^{p+1}(u,:)\right)$ (23)
- where $M^p$ and $U^p$ are the message function and the update function at the $p$-th step, respectively, and $\mathbf{e}_{u,v}$ denotes the attributes of edge $(u, v)$. Then, the readout phase computes the feature vector for the whole graph by:
$\mathbf{\hat{y}}=R\left(\{\mathbf{X}^P(u,:)|u\in \mathcal{V}\}\right)$ (24)
- where $R$ denotes the readout function.
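- A compact NumPy sketch of this two-phase scheme follows, assuming (purely for illustration) linear $M^p$ and $U^p$ with a tanh nonlinearity, and a readout $R$ that sums node states before a linear map, which is one common permutation-invariant choice; the function and weight names are hypothetical.

```python
import numpy as np

def mpnn_forward(X, edges, edge_attr, msg_W, upd_W, readout_W):
    """Two-phase MPNN in the spirit of Eqs. (22)-(24).

    X         : (n, d) initial node features X^0
    edges     : list of pairs (u, v) with v in N(u); each contributes to node u's sum
    edge_attr : dict mapping (u, v) -> edge attribute vector e_{u,v}
    msg_W     : list of P matrices, one illustrative linear M^p per step
    upd_W     : list of P matrices, one illustrative linear U^p per step
    readout_W : (out, d) weights of an illustrative linear readout R
    """
    for p in range(len(msg_W)):
        H = np.zeros_like(X)
        for u, v in edges:
            # Eq. (22): H^{p+1}(u,:) = sum_{v in N(u)} M^p(X^p(u,:), X^p(v,:), e_{u,v})
            H[u] += msg_W[p] @ np.concatenate([X[u], X[v], edge_attr[(u, v)]])
        # Eq. (23): X^{p+1}(u,:) = U^p(X^p(u,:), H^{p+1}(u,:))
        X = np.tanh(np.concatenate([X, H], axis=1) @ upd_W[p].T)
    # Eq. (24): y_hat = R({X^P(u,:) | u in V}); the sum makes R permutation invariant
    return readout_W @ X.sum(axis=0)
```

- Because the readout aggregates node states with a sum, the graph-level output does not depend on node ordering, which is exactly the invariance the readout function $R$ is required to provide.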
2017
- (Gilmer et al., 2017) ⇒ Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl (2017). "Neural Message Passing for Quantum Chemistry". In: Proceedings of the 34th International Conference on Machine Learning (PMLR 2017).
- QUOTE: Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework.