Abstract: Vision Transformers (ViTs), which leverage the self-attention mechanism, have shown superior performance on many classical vision tasks compared to convolutional neural networks (CNNs) and have gained increasing popularity recently. Existing work on ViTs mainly optimizes performance and accuracy, but the reliability issues of ViTs induced by soft errors in large-scale VLSI designs have generally been overlooked. In this work, we mainly study the reliability…
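To make the notion of a "soft error" concrete, below is a minimal sketch of transient-fault injection: flipping a single random bit of one float32 weight to emulate a single-event upset, then re-evaluating the model. This is an illustrative assumption on my part, not the paper's actual fault-injection framework; the `flip_random_bit_` helper and the DeiT usage example are hypothetical.

```python
import random
import struct

import torch


def flip_random_bit_(weight: torch.Tensor) -> None:
    """Emulate a soft error (single-event upset): flip one random bit
    of one randomly chosen float32 weight, in place.

    Assumes `weight` is a contiguous float32 tensor.
    """
    flat = weight.detach().view(-1)  # shares storage with `weight`
    idx = random.randrange(flat.numel())
    # Reinterpret the float32 value as a 32-bit unsigned integer.
    bits = struct.unpack("<I", struct.pack("<f", float(flat[idx])))[0]
    bits ^= 1 << random.randrange(32)  # flip one of the 32 bits
    flat[idx] = struct.unpack("<f", struct.pack("<I", bits))[0]


# Hypothetical usage: inject one fault into each attention projection of a
# pretrained ViT, then re-measure accuracy to estimate sensitivity.
# model = torch.hub.load('facebookresearch/deit:main',
#                        'deit_tiny_patch16_224', pretrained=True)
# with torch.no_grad():
#     for name, p in model.named_parameters():
#         if 'attn' in name:
#             flip_random_bit_(p)
```

Depending on which bit flips (sign, exponent, or mantissa), the corrupted weight may barely change or blow up to a huge magnitude, which is why reliability studies typically report sensitivity per bit position and per layer.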
Tags: arXiv, updated, Vision Transformers, transformers, attention mechanism, neural networks, reliability, accuracy, errors
Related Posts
- Black-box Attacks Against Neural Binary Function Detection. (arXiv:2208.11667v2 [cs.CR] UPDATED)
- TrojViT: Trojan Insertion in Vision Transformers. (arXiv:2208.13049v4 [cs.LG] UPDATED)
- Degree-Preserving Randomized Response for Graph Neural Networks under Local Differential Privacy. (arXiv:2202.10209v4 [cs.CR] UPDATED)
- Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization. (arXiv:2211.11236v2 [cs.CV] UPDATED)
- Adversarial Camouflage for Node Injection Attack on Graphs. (arXiv:2208.01819v4 [cs.LG] UPDATED)