Lowering ONNX-MLIR to MHLO dialect
We aim to enhance the robustness of GNNs under structural poisoning attacks from both theoretical and empirical aspects. We certify the robustness of node classification with GNNs using data-dependent random noise added to edges and non-edges at training. In addition, we propose a practical, albeit non-certified, approach that achieves significantly better robust accuracy with two different GNN models (GCN and PPNP) against state-of-the-art poisoning attacks.
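The random perturbation of edges and non-edges described above can be sketched as follows. This is a minimal illustration, not the authors' data-dependent noise scheme: here every edge and non-edge is flipped independently with the same probability `p_flip`, a name and simplification introduced for this example.

```python
import numpy as np

def perturb_adjacency(adj, p_flip, rng=None):
    """Randomly flip edges and non-edges of a symmetric boolean
    adjacency matrix, each with probability p_flip (a simplified
    stand-in for the data-dependent noise in the abstract)."""
    rng = np.random.default_rng(rng)
    n = adj.shape[0]
    # Sample a symmetric flip mask over the strict upper triangle.
    upper = np.triu(rng.random((n, n)) < p_flip, k=1)
    mask = upper | upper.T
    # XOR toggles entries: edges become non-edges and vice versa.
    return adj ^ mask

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=bool)
noisy = perturb_adjacency(adj, p_flip=0.1, rng=0)
assert (noisy == noisy.T).all()    # symmetry preserved
assert not noisy.diagonal().any()  # no self-loops introduced
```

Training the GNN on such perturbed adjacency matrices is what makes smoothing-style robustness certificates possible.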
Towards Inspecting and Eliminating Trojan Backdoors in Deep Neural Networks
A trojan backdoor is a hidden pattern typically implanted in a deep neural network. We propose TABOR (Towards inspecting And eliminating trojan BackdOoRs in AI systems) and formalize trojan detection as an optimization problem. First, we design new regularization terms for our objective function, which shrink the adversarial-sample subspace. Second, we leverage ideas from explainable AI to further prune irrelevant adversarial samples and thus minimize incorrect trojan detections. Last but not least, we introduce a new anomaly detection method to eliminate adversarial samples mistakenly pinpointed as malicious triggers in a clean model.
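The regularized-optimization formulation can be illustrated on a toy model. The sketch below is an assumption-laden stand-in for TABOR's objective, not its actual implementation: it searches for an additive trigger `delta` that flips a linear classifier toward a target label, with an L1 penalty (weight `lam`, a hypothetical parameter) that shrinks the adversarial-sample subspace by keeping the trigger sparse.

```python
import numpy as np

def reverse_engineer_trigger(w, b, target, x, lam=0.1, lr=0.1, steps=200):
    """Find a small additive trigger delta pushing a toy linear
    classifier sign(w @ x + b) toward label target in {-1, +1}.
    The L1 penalty on delta is a simplified analogue of TABOR's
    regularization terms that shrink the trigger search space."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        logit = w @ (x + delta) + b
        # Hinge-style loss subgradient toward the target label.
        grad = -target * w if target * logit < 1 else np.zeros_like(w)
        # L1 subgradient encourages a sparse, localized trigger.
        grad = grad + lam * np.sign(delta)
        delta -= lr * grad
    return delta

w = np.array([1.0, -2.0, 0.5])
b = -1.0
x = np.zeros(3)
delta = reverse_engineer_trigger(w, b, target=+1, x=x)
# The recovered trigger moves the input across the decision boundary.
assert w @ (x + delta) + b > 0
```

In a real detector this optimization runs against the suspect network itself, and the recovered candidate triggers are then filtered by the pruning and anomaly-detection steps described in the abstract.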
A Generic Technique for Counters to Improve Space Efficiency
We propose a generic technique: SElf-ADaptive counters (SEAD Counter). Counters are incremented only with a predefined probability, which improves space efficiency. This technique is especially useful when counter space is limited, as in on-chip memory. We propose two versions of the SEAD Counter: a static version and a dynamic version. Our method is generic in that it can be applied to both sketches and Bloom filters.
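The core idea of probabilistic increments can be sketched in a few lines. This is an illustration of counting with a fixed increment probability (roughly the static flavor), not the authors' exact SEAD construction; the class and parameter names are made up for this example.

```python
import random

class SampledCounter:
    """Count events by incrementing only with probability p and
    scaling the stored value by 1/p when estimating. The stored
    value stays roughly p times smaller than the true count, so
    fewer bits suffice -- an illustration of the probabilistic
    increment idea, not the exact SEAD Counter design."""

    def __init__(self, p):
        self.p = p
        self.count = 0

    def increment(self):
        # Each event is recorded with probability p.
        if random.random() < self.p:
            self.count += 1

    def estimate(self):
        # Unbiased estimate of the true event count.
        return self.count / self.p

random.seed(1)
c = SampledCounter(p=0.25)
for _ in range(100_000):
    c.increment()
# The stored value is about a quarter of the true count,
# while the scaled estimate stays close to 100,000.
```

Plugging such counters into the buckets of a sketch or a counting Bloom filter is what makes the technique generic: the surrounding data structure is unchanged, only its counters are replaced.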