Graph Isomorphism Network (Xu et al., 2019).
GIN is provably the most expressive GNN architecture under the message passing framework, achieving the same discriminative power as the Weisfeiler-Lehman graph isomorphism test. Each GIN layer applies:
    h_v' = MLP((1 + eps) * h_v + SUM(h_u for u in N(v)))

where eps is a learnable parameter that weights self-features relative to neighbor aggregation.
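The aggregate-and-combine step before the MLP can be sketched numerically. This is a hedged illustration using the Nx library (which Axon builds on), not this module's internals; the tensors and eps value are made up for the example:

```elixir
# Toy 3-node graph: node 0 is connected to nodes 1 and 2.
adj = Nx.tensor([[0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
h = Nx.tensor([[1.0], [2.0], [3.0]])
eps = 0.1

# Aggregate: adjacency matmul sums each node's neighbor features.
agg = Nx.dot(adj, h)

# Combine: (1 + eps) * h_v + sum of neighbors.
combined = Nx.add(Nx.multiply(1.0 + eps, h), agg)
# `combined` would then be passed through the layer's MLP.
```

Sum aggregation (rather than mean or max) is what gives GIN its expressive power: it preserves neighbor multiplicities, so multisets of neighbor features map to distinct aggregates.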
Architecture
Node Features [batch, num_nodes, input_dim]
Adjacency [batch, num_nodes, num_nodes]
|
v
+--------------------------------------+
| GIN Layer 1: |
| 1. Aggregate: sum neighbor feats |
| 2. Combine: (1+eps)*h_v + agg |
| 3. Transform: MLP(combined) |
+--------------------------------------+
|
v
+--------------------------------------+
| GIN Layer N |
+--------------------------------------+
|
v
Node Embeddings [batch, num_nodes, hidden_dim]

Usage
model = GIN.build(
input_dim: 16,
hidden_dims: [64, 64],
num_classes: 2,
epsilon_learnable: true
)

References
- Xu et al., "How Powerful are Graph Neural Networks?" (ICLR 2019)
- https://arxiv.org/abs/1810.00826
Summary
Functions
Build a Graph Isomorphism Network.
Single GIN layer: aggregate neighbors via sum, combine with self, apply MLP.
Get the output size of a GIN model.
Types
@type build_opt() :: {:activation, atom()} | {:dropout, float()} | {:epsilon_learnable, boolean()} | {:hidden_dims, [pos_integer()]} | {:input_dim, pos_integer()} | {:num_classes, pos_integer() | nil} | {:pool, atom()}
Options for build/1.
Functions
Build a Graph Isomorphism Network.
Options
- :input_dim - Input feature dimension per node (required)
- :hidden_dims - List of hidden dimensions, one per GIN layer (default: [64, 64])
- :num_classes - If provided, adds a classification head (default: nil)
- :epsilon_learnable - Whether eps is a learnable parameter (default: true)
- :dropout - Dropout rate (default: 0.0)
- :activation - Activation for MLPs (default: :relu)
- :pool - Global pooling for graph classification: :mean, :sum, or :max (default: nil)
Returns
An Axon model with two inputs ("nodes" and "adjacency").
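Since the result is an ordinary Axon model, it can be compiled and run with the standard Axon API. The following is a hedged sketch assuming `GIN.build/1` behaves as documented above; the zero-filled input tensors are placeholders for real node features and adjacency matrices:

```elixir
model = GIN.build(input_dim: 16, hidden_dims: [64, 64])

# Axon.build/1 compiles the graph into init and predict functions.
{init_fn, predict_fn} = Axon.build(model)

# Dummy batch: 1 graph, 10 nodes, 16 features per node.
inputs = %{
  "nodes" => Nx.broadcast(0.0, {1, 10, 16}),
  "adjacency" => Nx.broadcast(0.0, {1, 10, 10})
}

params = init_fn.(inputs, %{})
embeddings = predict_fn.(params, inputs)
# embeddings should have shape {1, 10, 64} per the architecture diagram.
```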
@spec gin_layer(Axon.t(), Axon.t(), pos_integer(), keyword()) :: Axon.t()
Single GIN layer: aggregate neighbors via sum, combine with self, apply MLP.
Options
- :name - Layer name prefix (default: "gin")
- :epsilon_learnable - Whether eps is learnable (default: true)
- :dropout - Dropout rate (default: 0.0)
- :activation - Activation for MLP (default: :relu)
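Per the spec above, `gin_layer/4` takes the node and adjacency Axon nodes plus an output dimension, so layers can also be stacked by hand when `build/1` is too rigid. A hedged sketch (the input names and shapes mirror the ones `build/1` is documented to use):

```elixir
# Batched inputs: nil dimensions are inferred at run time.
nodes = Axon.input("nodes", shape: {nil, nil, 16})
adjacency = Axon.input("adjacency", shape: {nil, nil, nil})

# Two GIN layers with distinct name prefixes so their parameters don't collide.
h = GIN.gin_layer(nodes, adjacency, 64, name: "gin_1")
h = GIN.gin_layer(h, adjacency, 64, name: "gin_2", dropout: 0.1)
```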
@spec output_size(keyword()) :: pos_integer()
Get the output size of a GIN model.