Improving Vision Transformer Efficiency and Accuracy by Learning to Tokenize

 
Originally published in Google AI Blog, Dec 7, 2021.

Transformer models consistently obtain state-of-the-art results in computer vision tasks, including object detection and video classification. In contrast to standard convolutional approaches that process images pixel-by-pixel, the Vision Transformer (ViT) treats an image as a sequence of patch tokens (i.e., smaller parts, or “patches”, of the image, each made up of multiple pixels). This means that at every layer, a ViT model recombines and processes patch tokens based on the relations between each pair of tokens, using multi-head self-attention. In doing so, ViT models can construct a global representation of the entire image.
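
As a rough illustration of this pipeline (a minimal NumPy sketch, not the implementation described in the article), the code below splits an image into flattened patch tokens and applies a single head of self-attention so that every token is recombined according to its relation to every other token. The patch size, the identity Q/K/V projections, and the function names are simplifying assumptions for illustration only.

# Minimal sketch: image -> patch tokens -> single-head self-attention.
# Patch size and identity Q/K/V projections are illustrative assumptions.
import numpy as np

def image_to_patch_tokens(image, patch=16):
    """Split an (H, W, C) image into flattened patch tokens of shape (N, patch*patch*C)."""
    H, W, C = image.shape
    rows, cols = H // patch, W // patch
    patches = image[:rows * patch, :cols * patch].reshape(rows, patch, cols, patch, C)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(rows * cols, patch * patch * C)
    return patches

def self_attention(tokens):
    """Single-head self-attention: every token attends to every other token."""
    d = tokens.shape[-1]
    q, k, v = tokens, tokens, tokens            # identity projections for brevity
    scores = q @ k.T / np.sqrt(d)               # (N, N) pairwise token relations
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                          # tokens recombined by attention weights

image = np.random.rand(512, 512, 3)
tokens = image_to_patch_tokens(image)           # (1024, 768): 32 x 32 patches of 16x16x3 pixels
out = self_attention(tokens)
print(tokens.shape, out.shape)
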

At the input level, the tokens are formed by uniformly splitting the image into multiple segments, e.g., splitting an image that is 512 by 512 pixels into patches that are 16 by 16 pixels. At the intermediate levels, the outputs from the previous layer become the tokens for the next layer. In the case of videos, video “tubelets” such as 16×16×2 video segments (16×16 images over 2 frames) become tokens. The quality and quantity of the visual tokens determine the overall quality of the Vision Transformer.
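
For the video case, a hedged sketch of how such “tubelets” could be extracted is shown below; the 16×16×2 tubelet size follows the example in the text, while the frame count, helper name, and implementation details are assumptions made purely for illustration.

# Illustrative tubelet tokenization for a (T, H, W, C) video.
# Tubelet size (16x16 pixels over 2 frames) follows the text; everything else is assumed.
import numpy as np

def video_to_tubelet_tokens(video, patch=16, frames_per_tubelet=2):
    """Split a (T, H, W, C) video into flattened tubelet tokens."""
    T, H, W, C = video.shape
    t, r, c = T // frames_per_tubelet, H // patch, W // patch
    v = video[:t * frames_per_tubelet, :r * patch, :c * patch]
    v = v.reshape(t, frames_per_tubelet, r, patch, c, patch, C)
    v = v.transpose(0, 2, 4, 1, 3, 5, 6).reshape(t * r * c, frames_per_tubelet * patch * patch * C)
    return v

video = np.random.rand(8, 512, 512, 3)          # 8 frames, frame count chosen arbitrarily
tokens = video_to_tubelet_tokens(video)
print(tokens.shape)                              # (4096, 1536): 4 * 32 * 32 tubelets
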

The main challenge in many Vision Transformer architectures is that they often require too many tokens to obtain reasonable results. Even with 16×16 patch tokenization, for instance, a single 512×512 image corresponds to 1024 tokens. For videos with multiple frames, that results in tens of thousands of tokens needing to be processed at every layer. Considering that the Transformer computation increases quadratically with the number of tokens, this can often make Transformers intractable for larger images and longer videos. This leads to the question: is it really necessary to process that many tokens at every layer?
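
To make the scaling argument concrete, here is a small back-of-the-envelope calculation (my own illustration, not from the article). The 32-frame video length is an arbitrary example, and the “cost” counted is simply the number of pairwise attention scores per layer, which grows quadratically with the token count.

# Token counts and quadratic attention cost; numbers mirror the 512x512 / 16x16 example in the text.
def num_tokens(height, width, patch=16, frames=1, frames_per_tubelet=1):
    return (height // patch) * (width // patch) * (frames // frames_per_tubelet)

image_tokens = num_tokens(512, 512)                                    # 1024 tokens
video_tokens = num_tokens(512, 512, frames=32, frames_per_tubelet=2)   # 16384 tokens

print(image_tokens, image_tokens ** 2)   # 1024   -> ~1.0M pairwise attention scores per layer
print(video_tokens, video_tokens ** 2)   # 16384  -> ~268M pairwise attention scores per layer
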

To continue reading this article, click here.