
# Awesome Graph Neural Network Systems

A curated list of awesome systems for graph neural networks (GNNs). If you have any comments or suggestions, please create an issue or pull request.

## Contents

- Open Source Libraries
- Papers

## Survey Papers

| Venue | Title | Affiliation | Links |
| --- | --- | --- | --- |
| CSUR 2024 | Distributed Graph Neural Network Training: A Survey | BUPT | [paper] |
| Proceedings of the IEEE 2023 | A Comprehensive Survey on Distributed Training of Graph Neural Networks | Chinese Academy of Sciences | [paper] |
| arXiv 2023 | A Survey on Graph Neural Network Acceleration: Algorithms, Systems, and Customized Hardware | UCLA | [paper] |
| arXiv 2022 | Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis | ETHZ | [paper] |
| CSUR 2022 | Computing Graph Neural Networks: A Survey from Algorithms to Accelerators | UPC | [paper] |

## GNN Libraries

| Venue | Title | Affiliation | Links |
| --- | --- | --- | --- |
| JMLR 2021 | DIG: A Turnkey Library for Diving into Graph Deep Learning Research | TAMU | [paper] [code] |
| arXiv 2021 | CogDL: A Toolkit for Deep Learning on Graphs | THU | [paper] [code] |
| CIM 2021 | Graph Neural Networks in TensorFlow and Keras with Spektral | Università della Svizzera italiana | [paper] [code] |
| arXiv 2019 | Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks | AWS | [paper] [code] |
| VLDB 2019 | AliGraph: A Comprehensive Graph Neural Network Platform | Alibaba | [paper] [code] |
| arXiv 2019 | Fast Graph Representation Learning with PyTorch Geometric | TU Dortmund University | [paper] [code] |
| arXiv 2018 | Relational Inductive Biases, Deep Learning, and Graph Networks | DeepMind | [paper] [code] |

## GNN Kernels

| Venue | Title | Affiliation | Links |
| --- | --- | --- | --- |
| MLSys 2022 | Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective | THU | [paper] [code] |
| HPDC 2022 | TLPGNN: A Lightweight Two-Level Parallelism Paradigm for Graph Neural Network Computation on GPU | GW | [paper] |
| IPDPS 2021 | FusedMM: A Unified SDDMM-SpMM Kernel for Graph Embedding and Graph Neural Networks | Indiana University Bloomington | [paper] [code] |
| SC 2020 | GE-SpMM: General-purpose Sparse Matrix-Matrix Multiplication on GPUs for Graph Neural Networks | THU | [paper] [code] |
| ICCAD 2020 | fuseGNN: Accelerating Graph Convolutional Neural Network Training on GPGPU | UCSB | [paper] [code] |
| IPDPS 2020 | PCGCN: Partition-Centric Processing for Accelerating Graph Convolutional Network | PKU | [paper] |

## GNN Compilers

| Venue | Title | Affiliation | Links |
| --- | --- | --- | --- |
| MLSys 2022 | Graphiler: Optimizing Graph Neural Networks with Message Passing Data Flow Graph | ShanghaiTech | [paper] [code] |
| EuroSys 2021 | Seastar: Vertex-Centric Programming for Graph Neural Networks | CUHK | [paper] |
| SC 2020 | FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems | Cornell | [paper] [code] |

## Distributed GNN Training Systems

| Venue | Title | Affiliation | Links |
| --- | --- | --- | --- |
| arXiv 2023 | Communication-Free Distributed GNN Training with Vertex Cut | Stanford | [paper] |
| arXiv 2023 | GNNPipe: Accelerating Distributed Full-Graph GNN Training with Pipelined Model Parallelism | Purdue | [paper] |
| OSDI 2023 | MGG: Accelerating Graph Neural Networks with Fine-Grained Intra-Kernel Communication-Computation Pipelining on Multi-GPU Platforms | UCSB | [paper] [code] |
| VLDB 2022 | Sancus: Staleness-Aware Communication-Avoiding Full-Graph Decentralized Training in Large-Scale Graph Neural Networks | HKUST | [paper] [code] |
| MLSys 2022 | BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling | Rice, UIUC | [paper] [code] |
| MLSys 2022 | Sequential Aggregation and Rematerialization: Distributed Full-batch Training of Graph Neural Networks on Large Graphs | Intel | [paper] [code] |
| WWW 2022 | PaSca: A Graph Neural Architecture Search System under the Scalable Paradigm | PKU | [paper] |
| ICLR 2022 | PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication | Rice | [paper] [code] |
| ICLR 2022 | Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks | PSU | [paper] [code] |
| arXiv 2021 | Distributed Hybrid CPU and GPU training for Graph Neural Networks on Billion-Scale Graphs | AWS | [paper] |
| SC 2021 | DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks | Intel | [paper] [code] |
| SC 2021 | Efficient Scaling of Dynamic Graph Neural Networks | IBM | [paper] |
| CLUSTER 2021 | 2PGraph: Accelerating GNN Training over Large Graphs on GPU Clusters | NUDT | [paper] |
| OSDI 2021 | $P^3$: Distributed Deep Graph Learning at Scale | MSR | [paper] |
| OSDI 2021 | Dorylus: Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads | UCLA | [paper] [code] |
| arXiv 2021 | GIST: Distributed Training for Large-Scale Graph Convolutional Networks | Rice | [paper] |
| EuroSys 2021 | FlexGraph: A Flexible and Efficient Distributed Framework for GNN Training | Alibaba | [paper] |
| EuroSys 2021 | DGCL: An Efficient Communication Library for Distributed GNN Training | CUHK | [paper] [code] |
| SC 2020 | Reducing Communication in Graph Neural Network Training | UC Berkeley | [paper] [code] |
| VLDB 2020 | G$^3$: When Graph Neural Networks Meet Parallel Graph Processing Systems on GPUs | NUS | [paper] [code] |
| IA3 2020 | DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs | AWS | [paper] [code] |
| MLSys 2020 | Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc | Stanford | [paper] [code] |
| arXiv 2020 | AGL: A Scalable System for Industrial-purpose Graph Machine Learning | Ant Financial Services Group | [paper] |
| ATC 2019 | NeuGraph: Parallel Deep Neural Network Computation on Large Graphs | PKU | [paper] |

## Training Systems for Scaling Graphs

| Venue | Title | Affiliation | Links |
| --- | --- | --- | --- |
| DaMoN 2024 | In situ neighborhood sampling for large-scale GNN training | Boston University | [paper] [code] |
| HPCA 2024 | BeaconGNN: Large-Scale GNN Acceleration with Out-of-Order Streaming In-Storage Computing | UCLA | [paper] |
| EuroSys 2023 | MariusGNN: Resource-Efficient Out-of-Core Training of Graph Neural Networks | UW–Madison | [paper] [code] |
| VLDB 2022 | ByteGNN: Efficient Graph Neural Network Training at Large Scale | ByteDance | [paper] |
| VLDB 2022 | Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching | Seoul National University | [paper] [code] |
| ISCA 2022 | SmartSAGE: Training Large-scale Graph Neural Networks using In-Storage Processing Architectures | KAIST | [paper] |
| ICML 2022 | GraphFM: Improving Large-Scale GNN Training via Feature Momentum | TAMU | [paper] [code] |
| ICML 2021 | GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings | TU Dortmund University | [paper] [code] |
| OSDI 2021 | GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs | UCSB | [paper] [code] |

## Quantized GNNs

| Venue | Title | Affiliation | Links |
| --- | --- | --- | --- |
| Neurocomputing 2022 | EPQuant: A Graph Neural Network Compression Approach Based on Product Quantization | ZJU | [paper] [code] |
| ICLR 2022 | EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression | Rice | [paper] [code] |
| PPoPP 2022 | QGTC: Accelerating Quantized Graph Neural Networks via GPU Tensor Core | UCSB | [paper] [code] |
| CVPR 2021 | Binary Graph Neural Networks | ICL | [paper] [code] |
| CVPR 2021 | Bi-GCN: Binary Graph Convolutional Network | Beihang University | [paper] [code] |
| EuroMLSys 2021 | Learned Low Precision Graph Neural Networks | Cambridge | [paper] |
| World Wide Web 2021 | Binarized Graph Neural Network | UTS | [paper] |
| ICLR 2021 | Degree-Quant: Quantization-Aware Training for Graph Neural Networks | Cambridge | [paper] [code] |
| ICTAI 2020 | SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization | UCSB | [paper] [code] |

## GNN Dataloaders

| Venue | Title | Affiliation | Links |
| --- | --- | --- | --- |
| NSDI 2023 | BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing | ByteDance | [paper] |
| MLSys 2022 | Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining | MIT | [paper] [code] |
| EuroSys 2022 | GNNLab: A Factored System for Sample-based GNN Training over GPUs | SJTU | [paper] [code] |
| KDD 2021 | Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs | UCLA | [paper] |
| PPoPP 2021 | Understanding and Bridging the Gaps in Current GNN Performance Optimizations | THU | [paper] [code] |
| VLDB 2021 | Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture | UIUC | [paper] [code] |
| TPDS 2021 | Efficient Data Loader for Fast Sampling-Based GNN Training on Large Graphs | USTC | [paper] [code] |
| SoCC 2020 | PaGraph: Scaling GNN Training on Large Graphs via Computation-aware Caching | USTC | [paper] [code] |
| arXiv 2019 | TigerGraph: A Native MPP Graph Database | UCSD | [paper] |

## GNN Training Accelerators

| Venue | Title | Affiliation | Links |
| --- | --- | --- | --- |
| ISCA 2022 | Graphite: Optimizing Graph Neural Networks on CPUs Through Cooperative Software-Hardware Techniques | UIUC | [paper] |
| ISCA 2022 | Hyperscale FPGA-as-a-service architecture for large-scale distributed graph neural network | Alibaba | [paper] |
| arXiv 2021 | GCNear: A Hybrid Architecture for Efficient GCN Training with Near-Memory Processing | PKU | [paper] |
| DATE 2021 | ReGraphX: NoC-enabled 3D Heterogeneous ReRAM Architecture for Training Graph Neural Networks | WSU | [paper] |
| TCAD 2021 | Rubik: A Hierarchical Architecture for Efficient Graph Learning | Chinese Academy of Sciences | [paper] |
| FPGA 2020 | GraphACT: Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms | USC | [paper] [code] |

## GNN Inference Accelerators

| Venue | Title | Affiliation | Links |
| --- | --- | --- | --- |
| JAIHC 2022 | DRGN: a dynamically reconfigurable accelerator for graph neural networks | XJTU | [paper] |
| DAC 2022 | GNNIE: GNN Inference Engine with Load-balancing and Graph-specific Caching | UMN | [paper] |
| IPDPS 2022 | Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators | GaTech | [paper] [code] |
| arXiv 2022 | FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming | GaTech | [paper] |
| CICC 2022 | StreamGCN: Accelerating Graph Convolutional Networks with Streaming Processing | UCLA | [paper] |
| HPCA 2022 | Accelerating Graph Convolutional Networks Using Crossbar-based Processing-In-Memory Architectures | HUST | [paper] |
| HPCA 2022 | GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design | Rice, PNNL | [paper] [code] |
| arXiv 2022 | GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration | GaTech | [paper] |
| DAC 2021 | DyGNN: Algorithm and Architecture Support of Vertex Dynamic Pruning for Graph Neural Networks | Hunan University | [paper] |
| DAC 2021 | BlockGNN: Towards Efficient GNN Acceleration Using Block-Circulant Weight Matrices | PKU | [paper] |
| DAC 2021 | TARe: Task-Adaptive in-situ ReRAM Computing for Graph Learning | Chinese Academy of Sciences | [paper] |
| ICCAD 2021 | G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and Efficiency | Rice | [paper] |
| MICRO 2021 | I-GCN: A Graph Convolutional Network Accelerator with Runtime Locality Enhancement through Islandization | PNNL | [paper] |
| arXiv 2021 | ZIPPER: Exploiting Tile- and Operator-level Parallelism for General and Scalable Graph Neural Network Acceleration | SJTU | [paper] |
| TComp 2021 | EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks | Chinese Academy of Sciences | [paper] |
| HPCA 2021 | GCNAX: A Flexible and Energy-efficient Accelerator for Graph Convolutional Neural Networks | GWU | [paper] |
| APA 2020 | GNN-PIM: A Processing-in-Memory Architecture for Graph Neural Networks | PKU | [paper] |
| ASAP 2020 | Hardware Acceleration of Large Scale GCN Inference | USC | [paper] |
| DAC 2020 | Hardware Acceleration of Graph Neural Networks | UIUC | [paper] |
| MICRO 2020 | AWB-GCN: A Graph Convolutional Network Accelerator with Runtime Workload Rebalancing | PNNL | [paper] |
| arXiv 2020 | GRIP: A Graph Neural Network Accelerator Architecture | Stanford | [paper] |
| HPCA 2020 | HyGCN: A GCN Accelerator with Hybrid Architecture | UCSB | [paper] |

## Contribute

We welcome contributions to this repository. To add a new paper to the list, please update the JSON files under ./res/papers/; our bots will then regenerate the paper list in README.md automatically. Citation counts for newly added papers are updated within one day.
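For illustration only, a new entry might look like the sketch below. The field names (`venue`, `year`, `title`, `affiliation`, `paper`, `code`) and the single-object layout are assumptions about the schema, not the authoritative format; copy an existing file under ./res/papers/ and follow its actual fields. The URLs here are placeholders.

```json
{
  "venue": "OSDI",
  "year": 2023,
  "title": "MGG: Accelerating Graph Neural Networks with Fine-Grained Intra-Kernel Communication-Computation Pipelining on Multi-GPU Platforms",
  "affiliation": "UCSB",
  "paper": "https://example.com/link-to-paper",
  "code": "https://example.com/link-to-code"
}
```

Citation and star counts are filled in by the bots, so they do not need to be supplied by hand.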