Graph Neural Networks: Theory, Models, Algorithms and Applications

Graph neural networks (GNNs) are an emerging field that explores how techniques from deep neural network theory can be generalised to non-Euclidean data, such as graphs and manifolds. Geometry is regarded as one of the most promising avenues for advancing machine learning and deep learning in general. Data in biology, physics, computer graphics and social networks are usually not vectors in a Euclidean space but objects on a graph or manifold. The study of non-Euclidean data brings many challenges: the data are not only high-dimensional but also have an intricate structure of internal relations. Thus, on the one hand, one seeks methods that allow studying this complexity in the data; on the other hand, one needs efficient ways to represent the additional geometric structure. Graph neural networks, or more generally geometric deep learning, strive to solve these problems.

Graph neural networks in their various incarnations have become a topic of intense research, owing to the remarkable ability of graph representations in learning tasks such as node classification, graph classification and link prediction. The increasing number of GNN-related submissions to journals and conferences (as well as workshops and tutorials) indicates that both the academic and industrial communities have a considerable demand for more advanced techniques, algorithms and theoretical foundations for GNNs. At the core is the development of new GNN models and efficient algorithms, in spectral, spatial or hybrid form. Practical scenarios, such as large-scale, dynamic or ambiguous graphs, add to the challenge of modelling and efficient algorithmic design for GNNs. In addition, appropriate mathematical underpinnings and rigorous theoretical analysis are needed to interpret and validate the effectiveness and limitations of GNNs. In particular, the research foci of this special session include the theory of graph representation learning, GNN modelling and efficient algorithmic design, and applications of GNNs.
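To make the spectral/spatial distinction above concrete, the following is a minimal sketch (not part of the call itself) of the widely used renormalized graph-convolution propagation rule H' = ReLU(D^-1/2 (A+I) D^-1/2 H W), written with dense NumPy arrays for clarity; production GNN libraries use sparse operations and learned weights.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
    Dense arrays are used here purely for illustration."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                                   # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))             # D^-1/2 entries
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    return np.maximum(norm @ features @ weights, 0.0)         # aggregate, transform, ReLU

# Toy graph: a 3-node path 0-1-2, 2-dim node features, identity weights.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.eye(3, 2)
w = np.eye(2)
out = gcn_layer(adj, feats, w)
print(out.shape)  # (3, 2): each node now mixes its neighbours' features
```

Stacking several such layers lets information propagate over multi-hop neighbourhoods, which is the basis of the node- and graph-classification tasks listed below.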

This special session aims to provide a forum for both the academic and industrial communities to report recent results related to (advanced) graph neural networks from the perspectives of theory, models, algorithms and applications. Topics appropriate for this session include (but are not necessarily limited to):

      • Graph Neural Networks

      • Deep Learning on Graphs

      • Deep/Dynamic/Robust GNNs

      • Graph Representation Learning

      • Fast and/or Distributed Learning Algorithms for GNNs

      • Self-supervised learning/ Meta learning with GNNs

      • Spectral-based and Spatial-based Methods

      • Algorithms for Pre-Trained GNNs

      • Expressive, Generalization and Representational Power of GNNs

      • Universality of Invariant or Equivariant GNNs

      • Learning Theory on GNNs

      • Generative Models of Graphs

      • Combination of GNNs and Gaussian Processes

      • Heterogeneous GNNs, Hyper-GNNs, Multi-View GNNs

      • Graph Classification, Node Classification, Link Prediction

      • Advanced Applications of GNNs to Computer Vision, Natural Language Processing, Computer Graphics, Social Networks, Recommender Systems, etc.


Shirui Pan, Monash University, Australia.


Shirui Pan received his PhD degree in computer science from the University of Technology Sydney. He is currently a Lecturer with the Faculty of Information Technology, Monash University, Australia. Prior to that, he was a Research Fellow with the School of Software, University of Technology Sydney. He has published over 50 research papers in top-tier journals and conferences, including the IEEE Transactions on Neural Networks and Learning Systems (TNNLS), IEEE Transactions on Knowledge and Data Engineering (TKDE), IEEE Transactions on Cybernetics (TCYB), Pattern Recognition, ICDE, IJCAI, AAAI, the IEEE International Conference on Data Mining, SDM, CIKM, and PAKDD. He is a Senior PC member for IJCAI-2020, and a PC member of many top-tier conferences, including KDD, NeurIPS, and AAAI. He serves as a Guest Editor for the Complexity journal and IEEE Access. His current research interests include data mining and artificial intelligence.

Jiye Liang, Shanxi University, China


Jiye Liang received the Ph.D. degree from Xi'an Jiaotong University. He is a professor with the Key Laboratory of Computational Intelligence and Chinese Information Processing of the Ministry of Education, School of Computer and Information Technology, Shanxi University. His research interests include artificial intelligence, granular computing, data mining, and machine learning. He has published more than 120 papers in the data mining and artificial intelligence domains, including in IEEE TPAMI, JMLR, IEEE TKDE, IEEE TFS, DMKD, AI, ICML, and AAAI.
