[KDD 2020] Redundancy-Free Computation for Graph Neural Networks

Dec 16, 2020
Graph Neural Networks (GNNs) are based on repeated aggregations of information from nodes' neighbors in a graph. However, because nodes share many neighbors, a naive implementation leads to repeated and inefficient aggregations and represents significant computational overhead. Here we propose Hierarchically Aggregated computation Graphs (HAGs), a new GNN representation technique that explicitly avoids redundancy by managing intermediate aggregation results hierarchically, eliminating repeated computations and unnecessary data transfers in GNN training and inference. HAGs perform the same computations and give the same models/accuracy as traditional GNNs, but in a much shorter time due to optimized computations. To identify redundant computations, we introduce an accurate cost function and use a novel search algorithm to find optimized HAGs. Experiments show that the HAG representation significantly outperforms the standard GNN representation, increasing end-to-end training throughput by up to 2.8× and reducing the aggregations and data transfers in GNN training by up to 6.3× and 5.6×, respectively, with only 0.1% memory overhead. Overall, our results represent an important advancement in speeding up and scaling up GNNs without any loss in model predictive performance.
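To make the redundancy concrete, here is a minimal Python sketch of the pattern HAGs exploit. This is not the paper's implementation, and the node names, shared_group, and partial below are all illustrative: when two nodes share a subset of neighbors, a naive GNN aggregation re-sums that subset's features for every node, whereas a HAG-style plan materializes the shared partial aggregate once and reuses it.

    # Illustrative sketch (assumed names throughout), not the paper's code:
    # reuse a shared partial aggregate instead of re-summing it per node.
    import numpy as np

    features = {n: np.random.rand(4) for n in "abcdefg"}
    neighbors = {
        "u": ["a", "b", "c", "d"],   # u and v share neighbors {a, b, c}
        "v": ["a", "b", "c", "e"],
    }

    # Naive: every node re-aggregates its full neighbor list, so the
    # features of a, b, c are summed twice.
    naive = {n: sum(features[m] for m in nbrs) for n, nbrs in neighbors.items()}

    # HAG-style: hoist the shared subset into one intermediate aggregate
    # that is computed once and reused by both nodes.
    shared_group = ["a", "b", "c"]
    partial = sum(features[m] for m in shared_group)
    hag = {
        "u": partial + features["d"],
        "v": partial + features["e"],
    }

    # Same results, fewer aggregation operations.
    assert all(np.allclose(naive[n], hag[n]) for n in neighbors)

In the full technique, the cost function and search algorithm described in the abstract decide which shared neighbor subsets are worth materializing; this sketch hard-codes one such subset for clarity.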
