Graph Learning Academic Digest [2021/10/13]

Graph-related (graph learning | graph neural networks | graph optimization, etc.) (4 papers)

[ 1 ] GraPE: fast and scalable Graph Processing and Embedding
Title: GraPE: Fast and Scalable Graph Processing and Embedding
Link: https://arxiv.org/abs/2110.06196

Authors: Luca Cappelletti, Tommaso Fontana, Elena Casiraghi, Vida Ravanmehr, Tiffany J. Callahan, Marcin P. Joachimiak, Christopher J. Mungall, Peter N. Robinson, Justin Reese, Giorgio Valentini
Affiliations: AnacletoLab, Dipartimento di Informatica, Universita degli Studi di Milano, Italy; The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA; Lawrence Berkeley National Laboratory, USA; European Laboratory for Learning and Intelligent Systems (ELLIS)
Abstract: Graph representation learning methods have enabled a wide range of learning problems to be addressed for data that can be represented in graph form. Nevertheless, several real-world problems in economics, biology, medicine and other fields raise relevant scaling problems for existing methods and their software implementations, because real-world graphs are characterized by millions of nodes and billions of edges. We present GraPE, a software resource for graph processing and random-walk-based embedding that can scale to large and high-degree graphs and significantly speed up computation. GraPE comprises specialized data structures, algorithms, and a fast parallel implementation that displays several orders of magnitude improvement in empirical space and time complexity compared to state-of-the-art software resources, with a corresponding boost in the performance of machine learning methods for edge and node label prediction and for the unsupervised analysis of graphs. GraPE is designed to run on laptop and desktop computers as well as on high-performance computing clusters.
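The abstract describes GraPE as pairing compact graph data structures with fast random-walk generation for embedding. Below is a minimal, self-contained Python sketch of that general idea only: a CSR-style adjacency plus uniform random walks whose node sequences would then feed a skip-gram embedder. All names are hypothetical; this is not GraPE's actual API or data structure.

```python
# Illustrative sketch of random-walk-based embedding on a compact CSR-style
# adjacency (NOT GraPE's implementation; names and structure are assumptions).
import random
from collections import defaultdict

def build_csr(num_nodes, edges):
    """Build CSR arrays: neighbors[offsets[v]:offsets[v+1]] are v's successors."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    offsets, neighbors = [0], []
    for v in range(num_nodes):
        neighbors.extend(sorted(adj[v]))
        offsets.append(len(neighbors))
    return offsets, neighbors

def random_walk(offsets, neighbors, start, length, rng=random):
    """Uniform random walk; the resulting node sequences would feed skip-gram."""
    walk = [start]
    for _ in range(length - 1):
        lo, hi = offsets[walk[-1]], offsets[walk[-1] + 1]
        if lo == hi:  # dead end: node has no outgoing edges
            break
        walk.append(neighbors[rng.randrange(lo, hi)])
    return walk

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 1)]
    offsets, neighbors = build_csr(4, edges)
    print(random_walk(offsets, neighbors, start=0, length=6))
```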

 

[ 2 ] ConTIG: Continuous Representation Learning on Temporal Interaction Graphs
Title: ConTIG: Continuous Representation Learning on Temporal Interaction Graphs
Link: https://arxiv.org/abs/2110.06088

Authors: Xu Yan, Xiaoliang Fan, Peizhen Yang, Zonghan Wu, Shirui Pan, Longbiao Chen, Yu Zang, Cheng Wang
Comments: 12 pages; 6 figures
Abstract: Representation learning on temporal interaction graphs (TIG) models complex networks with dynamically evolving interactions, which arise in a broad spectrum of problems. Existing dynamic embedding methods on TIG update node embeddings discretely, only when an interaction occurs; they fail to capture the continuous dynamic evolution of the embedding trajectories of nodes. In this paper, we propose a two-module framework named ConTIG, a continuous representation method that captures the continuous dynamic evolution of node embedding trajectories. With its two essential modules, our model exploits three factors in dynamic networks: the latest interaction, neighbor features, and inherent characteristics. In the first (update) module, we employ a continuous inference block that learns nodes' state trajectories from time-adjacent interaction patterns between node pairs using ordinary differential equations. In the second (transform) module, we introduce a self-attention mechanism to predict future node embeddings by aggregating historical temporal interaction information. Experimental results demonstrate the superiority of ConTIG on temporal link prediction, temporal node recommendation and dynamic node classification tasks compared with a range of state-of-the-art baselines, especially for long-interval interaction prediction.
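As a rough illustration of the abstract's first module, the hypothetical sketch below evolves a node's state continuously between interactions by Euler-integrating a learned derivative, which is the general flavour of an ODE-based update. Module, parameter, and dimension names are invented; this is not ConTIG's code.

```python
# Sketch of evolving a node state continuously between interactions via a
# learned ODE-style derivative (assumed design, not the ConTIG implementation).
import torch
import torch.nn as nn

class ContinuousStateEvolver(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # f(z) approximates dz/dt for a node's embedding z
        self.derivative = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())

    def forward(self, state, dt, steps=4):
        # Fixed-step Euler integration of dz/dt = f(z) over the interval dt.
        h = dt / steps
        for _ in range(steps):
            state = state + h * self.derivative(state)
        return state

evolver = ContinuousStateEvolver(dim=16)
z = torch.zeros(1, 16)            # node state right after its last interaction
z_now = evolver(z, dt=2.5)        # state estimated 2.5 time units later
print(z_now.shape)                # torch.Size([1, 16])
```

At interaction time, such a continuously evolved state would be combined with the partner node's features before the discrete update, which is the role the transform module's self-attention plays in the paper.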

 

[ 3 ] SlideGraph+: Whole Slide Image Level Graphs to Predict HER2 Status in Breast Cancer
Title: SlideGraph+: Whole Slide Image Level Graphs to Predict HER2 Status in Breast Cancer
Link: https://arxiv.org/abs/2110.06042

Authors: Wenqi Lu, Michael Toss, Emad Rakha, Nasir Rajpoot, Fayyaz Minhas
Affiliations: Tissue Image Analytics (TIA) Centre, Department of Computer Science, University of Warwick, UK; Nottingham Breast Cancer Research Centre, Division of Cancer and Stem Cells, School of Medicine, Nottingham City Hospital, University of Nottingham, Nottingham, UK
Comments: 20 pages, 11 figures, 3 tables
Abstract: Human epidermal growth factor receptor 2 (HER2) is an important prognostic and predictive factor that is overexpressed in 15-20% of breast cancers (BCa). Determining its status is a key clinical decision-making step for the selection of a treatment regimen and for prognostication. HER2 status is evaluated using transcriptomics or immunohistochemistry (IHC) with in situ hybridisation (ISH), which entail additional cost and tissue burden, as well as analytical variability arising from manual observational biases in scoring. In this study, we propose a novel graph neural network (GNN) based model (termed SlideGraph+) to predict HER2 status directly from whole-slide images of routine Haematoxylin and Eosin (H&E) slides. The network was trained and tested on slides from The Cancer Genome Atlas (TCGA), in addition to two independent test datasets. We demonstrate that the proposed model outperforms state-of-the-art methods, with area under the ROC curve (AUC) values > 0.75 on TCGA and > 0.8 on the independent test sets. Our experiments show that the proposed approach can be utilised for case triaging as well as pre-ordering diagnostic tests in a diagnostic setting. It can also be used for other weakly supervised prediction problems in computational pathology. The SlideGraph+ code is available at https://github.com/wenqi006/SlideGraph.
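To make the slide-level idea concrete, here is a toy sketch in which tissue regions of a whole-slide image become graph nodes, one round of mean message passing mixes neighbour features, and a pooled readout yields a single HER2 score per slide. All names, shapes, and the single-layer design are assumptions for illustration; this is not the SlideGraph+ implementation.

```python
# Toy slide-level graph classifier: region nodes -> message passing -> pooled
# readout (hypothetical sketch, not the SlideGraph+ model).
import torch
import torch.nn as nn

class SimpleGraphClassifier(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.msg = nn.Linear(in_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, 1)

    def forward(self, x, adj):
        # adj: (N, N) adjacency with self-loops; row-normalise for mean aggregation.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.msg((adj / deg) @ x))           # one message-passing round
        return torch.sigmoid(self.readout(h.mean(dim=0)))   # slide-level HER2 score

N, F = 5, 8                                    # toy slide graph: 5 region nodes
x = torch.randn(N, F)                          # stand-in region/patch features
adj = torch.eye(N) + torch.rand(N, N).round()  # toy adjacency with self-loops
model = SimpleGraphClassifier(F, 16)
print(model(x, adj))                           # one probability-like score per slide
```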

 

[ 4 ] GCN-SE: Attention as Explainability for Node Classification in Dynamic Graphs
Title: GCN-SE: Attention as Explainability for Node Classification in Dynamic Graphs
Link: https://arxiv.org/abs/2110.05598

Authors: Yucai Fan, Yuhang Yao, Carlee Joe-Wong
Affiliations: Carnegie Mellon University
Comments: Accepted by ICDM 2021
Abstract: Graph Convolutional Networks (GCNs) are a popular graph representation learning method that has proved effective for tasks like node classification. Although typical GCN models focus on classifying nodes within a static graph, several recent variants propose node classification in dynamic graphs whose topologies and node attributes change over time, e.g., social networks with dynamic relationships, or literature citation networks with changing co-authorships. These works, however, do not fully address the challenge of flexibly assigning different importance to snapshots of the graph at different times, which, depending on the graph dynamics, may have more or less predictive power for the labels. We address this challenge by proposing a new method, GCN-SE, that attaches a set of learnable attention weights to graph snapshots at different times, inspired by Squeeze-and-Excitation Networks (SE-Net). We show that GCN-SE outperforms previously proposed node classification methods on a variety of graph datasets. To verify the effectiveness of the attention weights in determining the importance of different graph snapshots, we adapt perturbation-based methods from the field of explainable machine learning to the graph setting and evaluate the correlation between the attention weights learned by GCN-SE and the importance of different snapshots over time. These experiments demonstrate that GCN-SE can in fact identify different snapshots' predictive power for dynamic node classification.
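The SE-Net-inspired idea of per-snapshot attention can be sketched as one learnable weight per snapshot, softmax-normalised and used to combine per-snapshot GCN node embeddings; after training, those weights double as a per-snapshot importance signal, which is the explainability angle. The code below is an assumption-laden illustration, not the authors' implementation.

```python
# Sketch of learnable per-snapshot attention over a stack of per-snapshot GCN
# embeddings (illustrative only; not the GCN-SE code).
import torch
import torch.nn as nn

class SnapshotAttention(nn.Module):
    def __init__(self, num_snapshots):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_snapshots))

    def forward(self, snapshot_embeddings):
        # snapshot_embeddings: (T, N, D) node embeddings from each snapshot's GCN.
        w = torch.softmax(self.logits, dim=0)           # (T,) importance per snapshot
        return torch.einsum("t,tnd->nd", w, snapshot_embeddings)

T, N, D = 3, 10, 8
per_snapshot = torch.randn(T, N, D)     # stand-in for per-snapshot GCN outputs
combined = SnapshotAttention(T)(per_snapshot)
print(combined.shape)                   # torch.Size([10, 8]) -> fed to a classifier
```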

Original post by 每天卷学习: https://www.cnblogs.com/BlairGrowing/p/15401240.html