Graph out-of-distribution (OOD) generalization remains a major challenge for graph neural networks (GNNs). Invariant learning, which aims to extract features that remain stable across varied distributions, has recently emerged as a promising approach to OOD generalization. However, its exploration on graph data remains constrained by the complex nature of graphs: invariant features can reside at both the attribute and structural levels, and prior knowledge of environmental factors is usually unavailable, making the invariance and sufficiency conditions of invariant learning hard to satisfy. Existing approaches based on data augmentation or causal intervention either disrupt invariance during graph manipulation or face reliability issues due to the lack of supervised signals for the causal parts.
To address these problems, a research team led by Wenyu Mao published new research on 15 January 2026 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
The team proposes a novel framework, Invariant Graph Learning based on Information bottleneck theory (InfoIGL), to extract the invariant features of graphs and enhance models' ability to generalize to unseen distributions. Specifically, InfoIGL introduces a redundancy filter to compress task-irrelevant information related to environmental factors. Working together with a designed multi-level contrastive learning scheme, InfoIGL maximizes the mutual information among graphs of the same class in downstream classification tasks, largely preserving the invariant features needed for prediction. An appealing property of InfoIGL is its strong generalization ability without relying on supervised signals for invariance.
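To make the mechanism concrete, below is a minimal sketch of how such an objective could be assembled, assuming a PyTorch setting in which graph-level embeddings from some GNN encoder are already available. This is not the authors' implementation: the names (RedundancyFilter, same_class_infonce) are hypothetical, the redundancy filter is approximated by a simple learned gate, and the paper's multi-level contrastive objective is reduced here to a single supervised InfoNCE term, one standard way to maximize a lower bound on the mutual information among same-class representations.

```python
# Illustrative sketch only; all names and design choices below are assumptions,
# not the InfoIGL authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RedundancyFilter(nn.Module):
    """Soft gating over graph embeddings, standing in for a redundancy filter
    that compresses task-irrelevant (environment-related) information."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h * self.gate(h)  # element-wise gate suppresses redundant dims

def same_class_infonce(z: torch.Tensor, y: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Supervised InfoNCE: graphs sharing a class label act as positives.
    Minimizing this maximizes a lower bound on the mutual information
    among same-class graph representations."""
    z = F.normalize(z, dim=1)
    sim = (z @ z.t()) / tau                          # pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (y.unsqueeze(0) == y.unsqueeze(1)) & ~eye  # same-class, non-self pairs
    sim = sim.masked_fill(eye, float("-inf"))        # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    per_anchor = -log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return per_anchor[pos.any(dim=1)].mean()         # anchors with >= 1 positive

# Usage: the contrastive term is added to the ordinary classification loss.
dim, n_classes = 64, 2
filt, clf = RedundancyFilter(dim), nn.Linear(dim, n_classes)
h = torch.randn(32, dim)                     # stand-in for GNN graph embeddings
y = torch.randint(0, n_classes, (32,))
z = filt(h)                                  # compress environment-related info
loss = F.cross_entropy(clf(z), y) + same_class_infonce(z, y)
loss.backward()
```

In this reduced form, the gate plays the role of the information-bottleneck compression step, while the contrastive term supplies the class-conditional mutual-information maximization; the actual InfoIGL framework applies its contrastive objective at multiple levels rather than the single level shown here.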
Experiments on both synthetic and real-world datasets demonstrate that InfoIGL achieves state-of-the-art OOD generalization performance on graph classification tasks.