A Scalable Approach for Partially Local Federated Learning

Training Machine Learning Models More Efficiently with Dataset Distillation

Interpretable Deep Learning for Time Series Forecasting

A Fast WordPiece Tokenization System

More Efficient In-Context Learning with GLaM

General and Scalable Parallelization for Neural Networks

Improving Vision Transformer Efficiency and Accuracy by Learning to Tokenize

Google at NeurIPS 2021

Evaluating Syntactic Abilities of Language Models

RLDS: An Ecosystem to Generate, Share, and Use Datasets in Reinforcement Learning