SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer

Xuanyao Chen*¹, Zhijian Liu*², Haotian Tang², Li Yi¹, Hang Zhao¹, Song Han²
Tsinghua University¹, Massachusetts Institute of Technology²
(* indicates equal contribution)

Abstract

High-resolution images enable neural networks to learn richer visual representations. However, this improved performance comes at the cost of growing computational complexity, hindering their usage in latency-sensitive applications. As not all pixels are equal, skipping computations for less-important regions offers a simple and effective way to reduce computation. This, however, is hard to translate into actual speedup for CNNs, since it breaks the regularity of the dense convolution workload. In this paper, we introduce SparseViT, which revisits activation sparsity for recent window-based vision transformers (ViTs). As window attentions are naturally batched over blocks, actual speedup with window activation pruning becomes possible: e.g., ~50% latency reduction at 60% sparsity. Different layers should be assigned different pruning ratios due to their diverse sensitivities and computational costs. We introduce sparsity-aware adaptation and apply evolutionary search to efficiently find the optimal layerwise sparsity configuration within the vast search space. SparseViT achieves speedups of 1.5x, 1.4x, and 1.3x compared to its dense counterpart in monocular 3D object detection, 2D instance segmentation, and 2D semantic segmentation, respectively, with negligible to no loss of accuracy.
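The core idea above, scoring windows by importance and running attention only on the surviving ones, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the L2-magnitude scoring, and the shapes are assumptions chosen for clarity.

```python
import numpy as np

def prune_windows(x, keep_ratio):
    """Hedged sketch of window activation pruning.

    Scores each window by its mean L2 activation magnitude and keeps the
    top `keep_ratio` fraction. Because window attention is batched over
    windows, dropping whole windows shrinks the batch dimension directly,
    which is what makes the sparsity translate into actual speedup.

    x: (num_windows, tokens_per_window, channels) windowed features.
    Returns the kept windows and their (sorted) indices so the outputs
    can be scattered back to their original spatial locations.
    """
    scores = np.linalg.norm(x, axis=-1).mean(axis=-1)   # (num_windows,)
    num_keep = max(1, int(round(len(scores) * keep_ratio)))
    keep_idx = np.argsort(scores)[::-1][:num_keep]      # highest-scoring windows
    return x[keep_idx], np.sort(keep_idx)

# Example: 16 windows of 7x7 = 49 tokens with 96 channels,
# 60% sparsity (i.e., keep 40% of the windows).
x = np.random.randn(16, 49, 96)
kept, idx = prune_windows(x, keep_ratio=0.4)
```

In the paper, each layer gets its own keep ratio (found by evolutionary search over layerwise configurations); the sketch above shows only the per-layer pruning step.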

Citation

@inproceedings{chen2023sparsevit,
  title={SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer},
  author={Chen, Xuanyao and Liu, Zhijian and Tang, Haotian and Yi, Li and Zhao, Hang and Han, Song},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}

Acknowledgment

This work was supported by the National Science Foundation (NSF), MIT-IBM Watson AI Lab, MIT AI Hardware Program, Amazon-MIT Science Hub, NVIDIA Academic Partnership Award, and Hyundai. Zhijian Liu was partially supported by the Qualcomm Innovation Fellowship.