Adversarial attacks on graph-level embedding methods: a case study
Maurizio Giordano; Lucia Maddalena
2022
Abstract
As the number of graph-level embedding techniques grows at an unprecedented pace, questions arise about their behavior and performance when the training data undergo perturbations. This is the case when an external entity maliciously alters the training data to invalidate the embedding. This paper explores the effects of such attacks on several graph datasets by applying different graph-level embedding techniques. The main attack strategy involves manipulating the training data to produce an altered model. In this context, our goal is to examine in depth the methods, resources, experimental settings, and performance results, observing and studying all the aspects that derive from the attack stage.
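To illustrate the kind of training-data manipulation the abstract refers to, the following is a minimal, hypothetical sketch of a poisoning perturbation on a graph dataset: it randomly rewires a fraction of edges in each training graph before the graphs are handed to a graph-level embedding method. This is an illustrative assumption (using networkx and a made-up `poison_graph` helper), not the attack actually studied in the paper.

```python
# Hypothetical sketch of a training-data poisoning perturbation on graphs.
# Not the paper's method: it removes a fraction of edges in each training
# graph and adds an equal number of random non-edges.

import random
import networkx as nx

def poison_graph(graph: nx.Graph, flip_ratio: float = 0.1, seed: int = 0) -> nx.Graph:
    """Return a perturbed copy of `graph` (assumed perturbation model)."""
    rng = random.Random(seed)
    poisoned = graph.copy()
    n_flips = max(1, int(flip_ratio * poisoned.number_of_edges()))

    # Remove randomly chosen existing edges.
    edges = list(poisoned.edges())
    for u, v in rng.sample(edges, min(n_flips, len(edges))):
        poisoned.remove_edge(u, v)

    # Add the same number of random non-edges.
    nodes = list(poisoned.nodes())
    added = 0
    while added < n_flips and len(nodes) > 1:
        u, v = rng.sample(nodes, 2)
        if not poisoned.has_edge(u, v):
            poisoned.add_edge(u, v)
            added += 1
    return poisoned

# Usage: perturb every training graph before computing graph-level embeddings,
# then compare downstream performance against the clean training set.
# train_graphs = [...]  # list of networkx graphs
# poisoned_train = [poison_graph(g, flip_ratio=0.1, seed=i)
#                   for i, g in enumerate(train_graphs)]
```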