Originally published in Google AI Blog, April 4, 2022. In recent years, large neural networks trained for language understanding and generation have achieved impressive results across a wide range of tasks. GPT-3 first showed that large language models (LLMs) can be used for few-shot learning and can achieve impressive results without large-scale task-specific data collection or model parameter updating.
Originally published in Marca, Feb 3, 2022. For over a decade, Russia has been at the forefront of running disinformation farms whose content spreads across the world. Their main goal is to destabilize countries and meddle in election processes...
Originally published in JDSupra, March 3, 2022. The US Copyright Office Review Board (“Board”) rejected a request to register a computer-generated image of a landscape for copyright protection, explaining that a work must be created by a...
Originally published in Google AI Blog, Feb 15, 2022. Machine learning (ML) has become prominent in information technology, which has led some to raise concerns about the associated rise in the costs of computation, primarily the carbon footprint,...
Originally published in Nature Medicine, Jan 20, 2022. Abstract: Artificial intelligence (AI) is poised to broadly reshape medicine, potentially improving the experiences of both clinicians and patients. We discuss key findings from a 2-year weekly effort to track...
Originally posted on Twitter Engineering, Sept 21, 2021. In October 2020, people on Twitter raised concerns that the saliency model we used to crop images didn’t serve all people equitably. Shortly thereafter, we published our algorithmic bias assessment which...
Originally published in Amazon News/Retail, Jan 20, 2022. Our first-ever physical apparel store offers a personalized, convenient shopping experience where Amazon’s technology and operations make it easy for customers to find styles they love at great prices....