Originally published in Google AI Blog, Dec 7, 2021. Transformer models consistently obtain state-of-the-art results in computer vision tasks, including object detection and video classification. In contrast to standard convolutional approaches that process images pixel-by-pixel, Vision Transformers (ViT) treat an image as a sequence of patch tokens (i.e., smaller parts, or “patches”, of an image, each made up of multiple pixels). This...
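The patch-token idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not the blog's actual code: it assumes a 224×224 RGB image and 16×16 patches (the sizes used in the original ViT paper), and simply reshapes the image into a sequence of flattened patch vectors.

```python
import numpy as np

# Hypothetical example: turn a 224x224 RGB image into 16x16 patch tokens,
# the input sequence a Vision Transformer operates on.
image = np.zeros((224, 224, 3))  # dummy image, shape (H, W, C)
P = 16                           # patch size (assumed, as in the ViT paper)

H, W, C = image.shape
patches = image.reshape(H // P, P, W // P, P, C)  # split height and width into patches
patches = patches.transpose(0, 2, 1, 3, 4)        # gather the 14x14 patch grid
tokens = patches.reshape(-1, P * P * C)           # flatten each patch into one token

print(tokens.shape)  # (196, 768): 14*14 = 196 tokens, each 16*16*3 = 768 values
```

In a real model, each 768-dimensional token is then linearly projected to the transformer's embedding size before the encoder processes the sequence.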
Originally published in DeepMind, Oct 4, 2021. Based on Transformers, our new Enformer architecture advances genetic research by improving the ability to predict how DNA sequence influences gene expression. When the Human Genome Project succeeded in mapping the DNA...
Originally published in The Washington Post, Oct 26, 2021. Facebook engineers gave extra value to emoji reactions, including ‘angry,’ pushing more emotional and provocative content into users’ news feeds. Five years ago, Facebook gave its users five...
Originally published in Springboard Blog, Jan 22, 2021. Data science might just be the most buzzed-about job in tech right now, but its pop culture sheen conceals some of the harsh realities of being a fresh graduate in...
Originally published in Towards Data Science on Oct 26, 2020. This year, Twitter sponsored the RecSys 2020 Challenge, providing a large dataset of user engagements. In this post, we describe the challenge and the insights we had...
Originally posted to Wired.com, Oct 11, 2020. Researchers found they could stop a Tesla by flashing a few frames of a stop sign for less than half a second on an internet-connected billboard. Safety concerns over automated...